Today I sped up a query by more than 5 times by setting _small_table_threshold=1000000.
Here is the query elapsed time and a piece of the plan showing its behavior before setting the parameter:
Elapsed: 00:28:41.67

---------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name        | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads | OMem | 1Mem | O/1/M |
---------------------------------------------------------------------------------------------------------------------------------
|  69 |  PARTITION RANGE ITERATOR |             |   9125 |      1 |   9122 |00:13:02.42 |   3071K | 3050K |      |      |       |
|* 70 |   TABLE ACCESS FULL       | SIS_INV_DTL |   9125 |      1 |   9122 |00:13:02.25 |   3071K | 3050K |      |      |       |
---------------------------------------------------------------------------------------------------------------------------------
I think this part of the plan means that the query scanned a range of partitions 9,125 times, resulting in over three million physical reads. These reads took about 13 minutes. If you do the math, it works out to between 200 and 300 microseconds per read. I have seen similar times from repeated reads against a storage server that has cached the data in memory. I have seen this with a SAN and with Delphix.
Here is my math for fun:
>>> 1000000*((60*13)+2.25)/3050000
256.4754098360656
That is the A-Time of 13 minutes 2.25 seconds (782.25 seconds), converted to microseconds and divided by the 3,050,000 reads: about 256 microseconds per read.
I ran this query again and watched the wait events in Toad's session browser to verify that the query was doing a bunch of direct path reads. Even though the query was full scanning the same partition range over 9,000 times, the database just kept on doing direct path reads for 13 minutes instead of caching the blocks.
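If you do not have Toad handy, you can watch the same thing from SQL*Plus. Here is a minimal sketch using the standard v$session_event view; the SID of 123 is a made-up placeholder for the session running the query:

-- cumulative direct path read waits for one session
-- replace 123 with the SID of the session you are watching
select event, total_waits, time_waited_micro
from v$session_event
where sid = 123
and event like 'direct path read%'
order by time_waited_micro desc;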
So, I got the idea of trying to increase _small_table_threshold, although I was not sure whether it would work with parallel queries. By the way, this is on 11.2.0.4 on the HP-UX Itanium platform. I tried:
alter session set "_small_table_threshold"=1000000;
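As an aside, _small_table_threshold is a hidden, undocumented parameter, so you will not see it in v$parameter by default. A sketch of one common way to check its current value, which requires querying the x$ tables as SYS:

-- current value of the hidden parameter (run as SYS)
select i.ksppinm name, v.ksppstvl value
from x$ksppi i, x$ksppcv v
where i.indx = v.indx
and i.ksppinm = '_small_table_threshold';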
I ran the query again and it ran in under 5 minutes. I had to add a comment to the query to get the plan to come back cleanly. Then I ran the query a second time and, I assume because of caching, it came back in under 2 minutes:
First run:  Elapsed: 00:04:28.83
Second run: Elapsed: 00:01:39.69

---------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name        | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads | OMem | 1Mem | O/1/M |
---------------------------------------------------------------------------------------------------------------------------------
|  69 |  PARTITION RANGE ITERATOR |             |   9125 |      1 |   9122 |00:00:45.33 |   3103K |     0 |      |      |       |
|* 70 |   TABLE ACCESS FULL       | SIS_INV_DTL |   9125 |      1 |   9122 |00:00:45.27 |   3103K |     0 |      |      |       |
---------------------------------------------------------------------------------------------------------------------------------
The second execution did zero physical reads on these partitions instead of the 3 million that we had without the parameter!
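For reference, plans like the ones above, with the Starts, A-Rows, A-Time, Buffers, and Reads columns filled in, come from rowsource statistics. Here is a sketch of the usual way to capture them; the count(*) query is just a simplified stand-in for the real query:

-- run the query with rowsource statistics enabled
select /*+ gather_plan_statistics */ count(*) from sis_inv_dtl;

-- then pull the actual plan for the last statement in this session
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));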
So, it seems that if you have a query that keeps doing full scans on the same partitions over and over, it can run a lot faster if you disable direct path reads by upping _small_table_threshold.
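If testing shows the change helps across the application, it can be made permanent. A sketch, assuming an spfile is in use; since this is a hidden parameter, it is the kind of thing worth clearing with Oracle Support before setting it for good:

-- set instance-wide; takes effect at the next restart
alter system set "_small_table_threshold"=1000000 scope=spfile;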
Bobby
Have you seen this thread? https://jonathanlewis.wordpress.com/2011/03/24/small-tables/
Kyle, thanks for your comment. Yes, I looked at that link and several others. I left a lot of detail out of my post to keep it short.

You might be interested to know that this work has a Delphix connection. The first database that we used Delphix on has some key queries that run a bunch of scans of partitions like this. These queries have always run much slower on our Delphix vdbs than on production because the vdbs are using direct I/O and our production db is not. I tried to explain this in some past Delphix presentations. Now I'm thinking that I should have bumped _small_table_threshold way up two or three years ago when I first saw this effect on Delphix.

The nice thing is that we are doing an in-place upgrade from 11.1.0.7 to 11.2.0.4, so I'm thinking I'll change _small_table_threshold at the same time we upgrade. That way we won't be using direct path reads during our testing on our Delphix vdbs, and we will see the same behavior on production even though it doesn't use direct I/O. Plus, we plan to eventually migrate the production db to Linux, and we use direct I/O on Linux, so it would probably be best to turn off direct path reads there as well.

Anyway, we are still testing, but I wish I had tried increasing _small_table_threshold a long time ago. I think this particular application does not benefit from direct path reads, or at least there are some key queries for which direct path reads do not make sense.