DBMS_METADATA in utilities manual

Just found out that all the good documentation for the DBMS_METADATA package is in the Utilities manual.  Until now I had only been looking at the “PL/SQL Packages and Types Reference” manual for the details of how to use DBMS_METADATA, so I’ve been handcuffing myself for years.  There are lots of good examples in the Utilities manual.  In the 11.2 manual it is Chapter 20, “Using the Metadata APIs”.

DBMS_METADATA lets you extract the SQL needed to recreate tables, packages, etc.
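
For example, to pull the DDL for a table (the schema and table names here are just placeholders):

-- returns a CLOB containing the CREATE TABLE statement for SCOTT.EMP
select dbms_metadata.get_ddl('TABLE','EMP','SCOTT') from dual;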

– Bobby


A day in the life of an Oracle DBA

I thought I would do a fun “day in the life” blog post about the work I’m doing today.  And then chaos ensues!

8:22 AM MST  After starting this blog post and reading my email I updated my time card for yesterday.  Pretty easy so far.  Then I get a chat message that an important web site is down.  One web server is flooding our database with locks – select for update.  Fun!  Web server is brought down and problem solved.

8:48 AM MST Checked on the import that is part of the PeopleSoft Financials dev database refresh from prod.  The first import failed partway through.  The second one is running, but I had to let it churn through IMP-00015 “object already exists” errors to get back to the point where the first one died after a day of importing.  Normally it takes about six days to do this import!  Hopefully we can get this into Delphix at some point if we can get our network throughput straightened out and funding for the disk space.

9:01 AM MST Reviewed weekend PeopleSoft changes.  11 small migrations.  Already prepped 6 on Friday.  Have all week to prep the rest.  Probably will do today if I don’t get interrupted.

9:18 AM MST Review possible technical architect projects – evaluate RAC, study OBIEE performance, improve DW batch job performance.  Hmm.  There are only so many hours in the day, but some of these seem interesting.  I’m really pondering OBIEE and ad hoc query performance.  But, it may be that the performance of the batch jobs on the weekend is more important to the business.  That’s not as cool, but it is something I can actually do something about.  Batch jobs are great because I can change the SQL, break it up, add hints, etc.

9:46 AM MST One PS change prepped.  Four to go. Whoo hoo!

9:59 AM MST Take survey on migration to Windows 7.  My new laptop is sweet but there were a couple of issues that I documented.

10:24 AM MST Meeting about weekend changes.  Bunch of things going in.  May be another PeopleSoft change coming my way.

10:32 AM MST Both daughters (12 and 16 years old) are up.  They are home from school.  Evidently my 16 year old doesn’t know how to use the washing machine so I start a load for her.  I telecommute two days a week and normally the critters are in school but this week is fall break for Chandler schools.

11:01 AM MST Started long running audit query on a database.  This is more of a developer function but I’m verifying row counts.  I guess being a PeopleSoft DBA you get called in to do some audit functions even though I’m not that familiar with the data.  Made myself a nice salami sandwich while waiting for the first query to return.

11:53 AM MST Chaos ensues again.  Email and chat messages interrupt the two things I’m already doing.  My stack gets to about four items.  Finally I’m back to prepping another PeopleSoft change for the weekend.  Only three more left to prep unless one gets approved late.  Also, my first audit query completed nicely and the second ran fine in test, so it’s running now in prod.  Also, finished my lunch with two apples – one green, one red.

12:28 PM MST Little break – moved the laundry to the dryer, took out the trash, did some reading.

12:47 PM MST Coordinating some downtime on our Delphix environments while working on the next PS change.  Evidently there is something about HP-UX’s virtual switch that is preventing us from getting good network throughput on our HP-UX VMs.  Alas, it appears that all other Delphix customers with HP-UX are using physical servers so we are blazing a trail with the HP-UX VMs and Delphix.

1:16 PM MST Interrupted by an issue with Payroll performance.  The database was almost idle except for a normal export, but there were high I/O latencies.  sar -d was reporting 43 millisecond reads on a certain device.  Not sure if it was part of a database file system.  The interesting note here is that from an AWR perspective db file sequential reads for the past hour averaged 8 milliseconds.  That doesn’t seem bad, but last Tuesday they were 3 milliseconds.  We don’t use direct I/O on this database, so the 8 milliseconds is an average over reads satisfied from the Unix filesystem cache and the actual disk reads, which means the real disk I/O as reported in sar could be a lot more than 8 ms.

1:29 PM MST Did some more querying.  I think the last hour was just less busy because we were at 2 ms the hour before.  Here is my query for single block read wait times:

select sn.END_INTERVAL_TIME,
trunc((after.time_waited_micro-before.time_waited_micro)/
(1000*(after.total_waits-before.total_waits))) 
"ave read milliseconds",
(after.total_waits-before.total_waits) "number of reads",
before.event_name
from DBA_HIST_SYSTEM_EVENT before, 
DBA_HIST_SYSTEM_EVENT after,
DBA_HIST_SNAPSHOT sn
where before.event_name='db file sequential read' and
after.event_name=before.event_name and
after.snap_id=before.snap_id+1 and
after.instance_number=1 and
before.instance_number=after.instance_number and
after.snap_id=sn.snap_id and
after.instance_number=sn.instance_number
order by after.snap_id;

Here is the output:

END_INTERVAL_TIME          ave read milliseconds number of reads
-------------------------- --------------------- ---------------
30-SEP-13 05.00.49.277 AM                      7            2509
30-SEP-13 06.00.03.711 AM                      3           25515
30-SEP-13 07.00.18.398 AM                      3          281640
30-SEP-13 08.00.33.128 AM                      2          293962
30-SEP-13 09.00.47.908 AM                      3          334224
30-SEP-13 10.00.02.648 AM                      5          503945
30-SEP-13 11.00.17.868 AM                      4          608978
30-SEP-13 12.00.32.760 PM                      3          892744
30-SEP-13 01.00.48.088 PM                      0         1341677
30-SEP-13 02.00.02.882 PM                      0         2467534
30-SEP-13 03.00.18.168 PM                      0         2877788
30-SEP-13 04.00.33.095 PM                      0         1582026
30-SEP-13 05.00.47.870 PM                      1          801732
30-SEP-13 06.00.02.389 PM                      0          574242
30-SEP-13 07.00.17.254 PM                      3           12142
30-SEP-13 08.00.32.313 PM                      0          110974
30-SEP-13 09.00.47.481 PM                      1           67871
30-SEP-13 10.00.03.485 PM                      1          137784
30-SEP-13 11.00.19.079 PM                      1          180278
01-OCT-13 12.00.33.957 AM                      3          326129
01-OCT-13 01.00.49.154 AM                      5          551264
01-OCT-13 02.00.03.925 AM                      5          560862
01-OCT-13 03.00.18.645 AM                      1          513999
01-OCT-13 04.00.33.720 AM                      4           19192
01-OCT-13 05.00.48.836 AM                      8            5431
01-OCT-13 06.00.03.244 AM                      9            3470
01-OCT-13 07.00.17.843 AM                      4          256345
01-OCT-13 08.00.32.633 AM                      2          667393
01-OCT-13 09.00.47.515 AM                      5          263574
01-OCT-13 10.00.02.986 AM                      2          583552
01-OCT-13 11.00.17.808 AM                      4         1514417
01-OCT-13 12.00.33.205 PM                      4         1282821
01-OCT-13 01.00.47.920 PM                      3          412715
01-OCT-13 02.00.02.482 PM                      8          317958
01-OCT-13 03.00.17.964 PM                      7          560480

4 milliseconds isn’t bad, but with Unix-level caching it sometimes is under 1.

1:50 PM MST – Coffee break.  Black since I’m on a diet.

2:12 PM MST – Finished prepping for one more weekend change.  Two more to go.

2:24 PM MST – Payroll issue revisited.  Confirmed the process is running 20 minutes longer than normal.  Ran an AWR report and checked the plan of the top query; it hasn’t changed.  It is averaging 34 milliseconds per execution today, which is in line with earlier days.

select ss.sql_id,
ss.plan_hash_value,
sn.END_INTERVAL_TIME,
ss.executions_delta,
ELAPSED_TIME_DELTA/(executions_delta*1000) "Elapsed Average ms",
CPU_TIME_DELTA/(executions_delta*1000) "CPU Average ms",
IOWAIT_DELTA/(executions_delta*1000) "IO Average ms",
CLWAIT_DELTA/(executions_delta*1000) "Cluster Average ms",
APWAIT_DELTA/(executions_delta*1000) "Application Average ms",
CCWAIT_DELTA/(executions_delta*1000) "Concurrency Average ms",
BUFFER_GETS_DELTA/executions_delta "Average buffer gets",
DISK_READS_DELTA/executions_delta "Average disk reads",
ROWS_PROCESSED_DELTA/executions_delta "Average rows processed"
from DBA_HIST_SQLSTAT ss,DBA_HIST_SNAPSHOT sn
where ss.sql_id = '3d56qsnvv3shc'
and ss.snap_id=sn.snap_id
and executions_delta > 0
and ss.INSTANCE_NUMBER=sn.INSTANCE_NUMBER
order by ss.snap_id,ss.sql_id;

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME   EXECUTI Average ms
------------- --------------- ------------------- ------- ----------
3d56qsnvv3shc      1288646537 20-AUG-13 05.00 PM   22091         14
3d56qsnvv3shc      1288646537 20-AUG-13 06.00 PM    3374         11
3d56qsnvv3shc      1288646537 27-AUG-13 02.00 PM   10666         39
3d56qsnvv3shc      1288646537 03-SEP-13 04.00 PM   21203         14
3d56qsnvv3shc      1288646537 10-SEP-13 11.00 AM    9254         39
3d56qsnvv3shc      1288646537 10-SEP-13 12.00 PM    1820         32
3d56qsnvv3shc      1288646537 17-SEP-13 05.00 PM     314         27
3d56qsnvv3shc      1288646537 17-SEP-13 06.00 PM   25256         15
3d56qsnvv3shc      1288646537 24-SEP-13 02.00 PM   10885         18
3d56qsnvv3shc      1288646537 01-OCT-13 04.00 PM   14446         34

I hacked the output to fit on the page.  Varies a bit due to caching but same plan.

3:15 PM MST Ran into a snag prepping my next to last PS change.  We don’t migrate menus and permission lists in a project.  Have to see if we can push this one out.  Yuck!

3:27 PM MST Schedule oil change and service for our van.  We may be driving some teenagers up into the mountains this weekend.

3:36 PM MST Chat about dropping a partition from a table and local index.
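
Something along these lines, with made-up table and partition names:

-- dropping a table partition drops the matching local index partitions with it;
-- global indexes need the update indexes clause to stay usable
alter table sales_hist drop partition p_2012_q1 update global indexes;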

3:57 PM MST Splitting headache.  Maybe documenting every minute on the blog isn’t a good idea.  Didn’t take much of a break for lunch.  Time for some Earl Grey tea and brief relaxation.

4:22 PM MST Prepped the last change – a simple SQL update.  Built a backup script that uses CTAS to back up 7 rows each from two tables.  Staged on production.  Also requested that the change with a policy violation be pushed out.  Done with production preparation for now.
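
The backup script was just a couple of CTAS statements along these lines (table and column names made up):

-- keep a copy of the rows before the update so they can be restored if needed
create table sls_dtl_bak_20131001 as
select * from sls_dtl
where dtl_id in (101,102,103,104,105,106,107);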

4:34 PM MST Finished up editing this blog post.  I’ll do some more work between now and 5 PM, but I wanted to finish the blog post today.  Maybe this gives you an idea of my work life.  Some cool performance stuff.  Some mundane migration stuff.  It reminds me of the way a flight instructor described his job – hours of boredom punctuated by moments of stark terror!  In my case I’m not really bored with the mundane, but it is punctuated by the interesting, and occasionally interrupted by the stressful.  Not quite stark terror since it isn’t really life threatening.

The life of a DBA – at least for one day.

– Bobby


SQLT Data Explosion

Well, I’ve been running SQLT all day.  I’ve got a case open with Oracle support on the issue described in my previous post.  Oracle support wants me to use SQLT to run one of the queries with the long parse time and so I picked one out of the AWR.  Just to be safe I decided to download the latest SQLT zip and install it on our development database first.  Then I ran the problem query knowing that it wouldn’t hit the same issue in development.

Well, when I installed SQLT the first time in development I put all the objects in the USERS tablespace which promptly filled up.  So, then I created a 10 gig tablespace just for SQLT and reinstalled it with this TS.  I ran through a full XECUTE step and it took several hours and produced a 100 megabyte zip file.  So, then I promptly deleted all the output and started over on production.
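
The dedicated tablespace was nothing fancy, something like this (file name and sizes are only an example):

-- a 10 gig tablespace just for the SQLT repository objects
create tablespace sqlt_data
datafile '/u01/oradata/dev/sqlt_data01.dbf' size 10g
autoextend on next 1g maxsize 20g;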

On production SQLT is still running hours later and it filled up my 10 gig tablespace.  So, now it is a 20 gig tablespace.  But, I’m left to wonder if I should start over and let this run all night.  But then maybe it will fill up something overnight or cause some issue with the batch processes.  Yuck!  The good thing is that the /u01 filesystem I’m using has 80 gig free so hopefully when the 20 gig of data is extracted it won’t fill my 80 gig filesystem with the resulting zip.  But, I have dinner plans tonight so do I let it run or not?  Not to mention that now it appears hung trying to extract our VPD functions.  Fun.

So, the bottom line is that SQLT seems like a pretty cool tool, but if you cut it loose on a system like ours, with tables that have thousands of subpartitions and many other complexities, you may spend all day generating gigabytes of information to upload to Oracle support that they probably can’t use anyway.

– Bobby

P.S. The query I’m testing ran for less than 15 minutes.  It is all the data capture that is killing us.

P.P.S.  Well, it blew out my 20 gig SQLT tablespace so I upped it to 80 gig.  Looks like it is up to 28 gig used.  The bad thing is on this run the query ran in seconds so it probably didn’t even get a trace of the issue.  SQLT is not my friend.

Also, Oracle support had me try these options to speed up SQLT:

EXEC sqltxplain.sqlt$a.set_param('test_case_builder', 'N');
EXEC sqltxplain.sqlt$a.set_param('sta_time_limit_secs', '30');

But, it has still been running since this morning.  Fun.

Thursday update:

Latest recommendation is to run the XPREXC version instead.  This looks like it may work.  It only used about 8 gig of my SQLT tablespace.  But, it has been running for about 16 hours.  Still, it has only used about 1 gig of my /u01 filesystem and I have 78 gig free, so it may well finish.

2:15 pm MST Thursday.  I spoke too soon.  SQLT has been stuck for about 4 hours with no new output.  Here is the last file updated:

-rw-r--r--  1 oracle oinstall 610406400 Sep 26 11:54 
sqlt_s20527_cell_state_begin_and_end.txt

The server timezone is CST.  Currently running this query:

SELECT 'INST_ID="'
|| inst_id
|| '" CELL_NAME="'
|| cell_name
|| '" OBJECT_NAME="'
|| object_name
|| '" STATISTICS_TYPE="'
|| statistics_type
|| '"'
|| CHR (10)
id,
'BEGIN:'
|| CHR (10)
|| REPLACE (statistics_value_b, '><', '>' || CHR (10) || '<')
begin_value,
'END:'
|| CHR (10)
|| REPLACE (statistics_value_e, '><', '>' || CHR (10) || '<')
end_value
FROM SQLTXADMIN.sqlt$_gv$cell_state_v
WHERE statement_id = :v_statement_id
ORDER BY inst_id,
cell_name,
object_name,
statistics_type

I’m stuck in SQLT purgatory!

3:08 pm MST:  Victory!  I guess I just had to be patient.  Here are the files left after the final run:

-rw-r--r--  1 oracle oinstall     16342 Sep 25 19:12 
65mvg950s0q7z_output_s20527.txt

-rw-r--r--  1 oracle oinstall 506419884 Sep 26 17:03 
sqlt_s20527_xprexc.zip

A mere 22 hours and a 500 meg zip, which I now have to upload to Oracle’s support site.  Sweet!

– Bobby


9b4m3fr3vf9kn and an4593dzvqr4v

You got to love Exadata and subpartitioned tables.  We have these queries on one particular table that are taking forever to parse.  Sometimes a simple explain plan on the simplest possible query against the table takes 20 or 30 seconds.  It seems that the time is spent primarily on sql_id an4593dzvqr4v which is this internal query:

select obj#, dataobj#, subpart#, hiboundlen, hiboundval, ts#, file#,
 block#, pctfree$, pctused$, initrans, maxtrans, flags, analyzetime,
 samplesize, rowcnt, blkcnt, empcnt, avgspc, chncnt, avgrln,
 length(bhiboundval), bhiboundval 
from tabsubpart$ 
where pobj# = :1 
order by subpart#

Gross.  It appears to look up information about a partition’s subpartitions, and for some reason it gets run over and over again during a parse.  A similar query is also popular, namely 9b4m3fr3vf9kn:

select obj#, dataobj#, subpart#, hiboundlen, hiboundval, flags, ts#,
 file#, block#, pctfree$, initrans, maxtrans, analyzetime, 
samplesize, rowcnt, blevel, leafcnt, distkey, lblkkey, dblkkey,
 clufac, spare2, length(bhiboundval), bhiboundval 
from indsubpart$ 
where pobj# = :1 
order by subpart#

I haven’t delved into this one, but it appears to have to do with index subpartitions.  Of course, it goes without saying that this is our second largest table.  And don’t bother telling me to regather stats, because you can’t just willy-nilly regather stats on production tables without testing, no matter what people may tell you.

But, if one of you gurus out there has any great ideas, let me know.  I’d love to delete and regather stats, but the system is in use and the table is large.  What I really want to do is set a small estimate percentage and abandon the 11g auto sample size, but I haven’t convinced anyone of this yet.
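
What I have in mind is something like this, with a made-up owner and table name, instead of the 11g default of DBMS_STATS.AUTO_SAMPLE_SIZE:

begin
  dbms_stats.gather_table_stats(
    ownname          => 'SYSADM',               -- made-up owner
    tabname          => 'PS_BIG_SUBPART_TAB',   -- made-up table name
    estimate_percent => 1,                      -- small fixed sample instead of auto sample size
    cascade          => true);
end;
/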

Sometimes we see this when the subpartitions are being compressed, and that makes total sense.  But we see it during the work week also, and that has no rhyme or reason to it, since there is no updating or DDL that corresponds to the high parse times and the large numbers of executions of 9b4m3fr3vf9kn and an4593dzvqr4v.
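
One way to see when they spike is a variation of the DBA_HIST_SQLSTAT query from the day-in-the-life post above, pointed at the two internal sql_ids:

select sn.END_INTERVAL_TIME,
ss.sql_id,
ss.executions_delta,
ss.elapsed_time_delta/1000000 "elapsed seconds"
from DBA_HIST_SQLSTAT ss, DBA_HIST_SNAPSHOT sn
where ss.sql_id in ('an4593dzvqr4v','9b4m3fr3vf9kn')
and ss.snap_id=sn.snap_id
and ss.instance_number=sn.instance_number
order by sn.snap_id, ss.sql_id;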

Lest I forget, this is Exadata 11.2.0.2 bundle 20.  Fun!

– Bobby

P.S.  The one Oracle support entry related to these sql statements is this one:

Bug 16694856 : EXADATA: SUBPARTITION WITH ‘LIBRARY CACHE LOCK’ ON RAC + INDEX OPERATION

It specifically mentions an4593dzvqr4v and 9b4m3fr3vf9kn.  But, we are seeing the same results without any index operations.  Anyway, I’ve got a case open with Oracle support.


Delphix, direct I/O, and direct path reads

I’ve been working on testing performance on a database that uses Delphix as its storage system and ran across an unexpected side effect of using Delphix to clone a production database for testing.  Because Delphix uses NFS to present the filesystems for the datafiles, you are required to use direct I/O, at least on HP-UX, which is our platform.  But our production source database doesn’t have direct I/O turned on.  So I’m seeing differences in runtimes in some queries that can be confusing if you don’t understand the reason.

If you have a query that uses direct path reads on a system that has a Unix filesystem and doesn’t use direct I/O, those reads can be cached in the Unix filesystem’s buffer cache, so if you query the same table multiple times you eventually get direct path read times far lower than is possible on a system with direct I/O.  So, in our case, a developer running a test query on a non-Delphix, non-direct I/O system can see it run many times faster than is possible on any system, Delphix or otherwise, that uses direct I/O.

This Oracle document spells out the requirements for using NFS for Oracle filesystems on various platforms:

Mount Options for Oracle files when used with NFS on NAS devices (Doc ID 359515.1)

For HP-UX datafiles you have to use:

rw,bg,vers=3,proto=tcp,noac,forcedirectio,hard,nointr,timeo=600,rsize=32768,wsize=32768,suid

Note the inclusion of “forcedirectio”.  This does just what it sounds like.  It forces the NFS filesystem to be accessed using direct I/O which bypasses the Unix filesystem buffer cache.  Generally direct I/O is the recommended option but when you implement Delphix you have a special situation.  The main point of Delphix is to be able to easily spin up multiple clones of your production database for development and testing.  This is great because it allows you to have a very realistic set of data to test against.  But, if your source system doesn’t use direct I/O you have introduced a difference from production that will affect testing.
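
For what it’s worth, on non-NFS filesystems the database-side equivalent is the filesystemio_options parameter, which you can check with a quick query; on the forcedirectio NFS mounts the mount option bypasses the filesystem cache regardless of this setting:

-- NONE, ASYNCH, DIRECTIO, or SETALL
select value from v$parameter where name = 'filesystemio_options';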

I’ve put together a simple test case to show that direct path reads can be cached at the Unix filesystem level if you don’t have direct I/O and that with direct I/O – whether on a Delphix based NFS filesystem or a SAN based filesystem that uses direct I/O – you don’t see the effect of the Unix filesystem caching.  Here is the zip of the test case: zip.

I built a table big enough that 11g Oracle would do direct path reads on a full scan:

create table test as select * from dba_objects where rownum < 1001;
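-- each append insert below doubles the table: 1,000 rows doubled 11 times is about 2 million rows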

insert /*+append */ into test select * from test;
commit;
insert /*+append */ into test select * from test;
commit;
insert /*+append */ into test select * from test;
commit;
insert /*+append */ into test select * from test;
commit;
insert /*+append */ into test select * from test;
commit;
insert /*+append */ into test select * from test;
commit;
insert /*+append */ into test select * from test;
commit;
insert /*+append */ into test select * from test;
commit;
insert /*+append */ into test select * from test;
commit;
insert /*+append */ into test select * from test;
commit;
insert /*+append */ into test select * from test;
commit;

execute dbms_stats.gather_table_stats(NULL,'TEST');

-- do an alter move to get blocks out of the buffer cache

alter table test move;

Then I ran a select against this table over and over again until I got the best results possible due to caching.  Here is what I got on a system (11.1.0.7) without direct I/O:

EVENT              AVERAGE_WAIT
------------------ ------------
direct path read              0

This is from v$session_event, so the units are centiseconds, rounded to the nearest hundredth of a centisecond.  A zero here means the direct path reads were averaging less than 100 microseconds, so they must have been cached at the database server host level.
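
The numbers came from something along these lines, run against the test session (the SID shown is just a placeholder):

select event, total_waits, time_waited_micro, average_wait
from v$session_event
where sid = 123   -- SID of the test session
and event = 'direct path read';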

Here is about the best I could get on our Delphix system:

EVENT             AVERAGE_WAIT
----------------- ------------
direct path read           .11

This is 1.1 milliseconds which means the data was probably cached within Delphix but not at the database server’s filesystem cache.

Just for comparison here is a SAN based filesystem with direct I/O and the same test:

EVENT              AVERAGE_WAIT
------------------ ------------
direct path read            .07

So, in every case with direct I/O we were stuck with around 1 millisecond or just under no matter how many times I ran the query to get things cached.  Without direct I/O it was about ten or more times faster.

Of course, not everything is going to be cached like this, especially large tables that you are doing full scans on with direct path I/O.  But there may be certain applications that run significantly differently on your source system than on your target if your source is not using direct I/O, your target is using direct I/O, and your queries are doing direct path reads.

– Bobby

 


How to break a large query into many small ones

I haven’t figured out the best way to do this yet, so this is a work in progress.  I’m trying to document the process of taking a large query that joins many tables together and breaking it into a series of small queries that only join two tables at a time.  Today I finished this process on a query that ran for four hours; the eleven queries I broke it into ran for five minutes altogether.  I did put a full and use_hash hint on one of the eleven queries.

Here is the process I followed:

  1. I started with permanent tables to save the intermediate results, with the idea of making them global temporary tables with on commit preserve rows once I’d built all the new smaller queries (a minimal sketch of one step follows this list).
  2. I worked on one join at a time.  The query I worked on used the ANSI syntax for joins and had INNER and LEFT joins, so I just worked in the order the tables were listed.
  3. The first temp table was the output of joining the first two tables.  My query had twelve tables.  Every temp table after that was the result of joining the latest temp table T1…T10 to the next real table.
  4. I included all of the columns in the select list and where clause from each real table I added to the join.  I’m sure there is a better way to do this.  Also, I took the table aliases like SLS and added them to the column names because there were columns of the same name from multiple tables.  For example, PROD_CODE became SLS_PROD_CODE because there might be a PDT_PROD_CODE from the table with the alias PDT.
  5. I inserted the final output into a table and compared it to the output from the original query to make sure they were exactly the same.
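
Here is a minimal sketch of one step, with made-up table and column names:

-- temp table holding the result of joining the first two real tables
create global temporary table t1
( sls_prod_code varchar2(30),
  sls_amount    number,
  pdt_prod_desc varchar2(100) )
on commit preserve rows;

insert into t1 (sls_prod_code, sls_amount, pdt_prod_desc)
select sls.prod_code, sls.amount, pdt.prod_desc
from sls
inner join pdt on pdt.prod_code = sls.prod_code;
commit;

-- the next query joins T1 to the third real table into T2, and so on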

The weird thing about this is that there is no guarantee that joining tables in the order listed in the query will produce good results, but in this case it was much faster than running the entire query.  It was kind of a one-legged tree:

[Figure: a one-legged join tree – each temp table is joined to the next real table to produce the next temp table]

R1 through R5 represent my real tables from the original join.  Really there were twelve of these.

T1 through T4 represent the global temporaries created with each join.  In reality I had ten of these and the final output was inserted into a permanent table.

I don’t really know why breaking the query up into these small pieces was so effective, but I do know that it would have taken a lot more work to try to make the bigger query run faster.  It took time to split it up, but it wasn’t difficult, if you know what I mean.  It was just tedious getting the list of column names edited together, but that’s a lot easier than figuring out why the query with twelve tables joined together was taking four hours to run.

– Bobby


Outline hint for query tuning

I had a situation today where I had two slightly different queries that I thought should have the same plan but they didn’t.  So, I needed a way to force the slower query to run with the same plan as the faster one.  I wanted to do this so I could see if the row counts, bytes, or costs for the second query would be different than on the first.  Something must be different or the optimizer would have chosen the same plan for both queries.  By imposing the fast query’s plan on the slow one I could see where the optimizer’s calculations are different.

In the past I’ve done this by adding hints such as an INDEX hint if the faster one used an index and the slower one didn’t.  Today the query was pretty complex, so I used an outline hint to force all aspects of the slow query’s plan to be the same as the fast one’s.  I’ve put together a simplistic example of how to do this.  The script and log are in this zip.

My fast query example is hinted to use an index just to make it different from my slow example:

SQL> -- Get plan for query with index hint including outline
SQL> 
SQL> explain plan into plan_table for
  2  select /*+index(test testi) */ * from test
  3  /

Explained.

SQL> 
SQL> select * from table(dbms_xplan.display('PLAN_TABLE',NULL,
'OUTLINE'));

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------
Plan hash value: 2400950076

--------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost  
--------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       | 31471 |  7191K|  1190 
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST  | 31471 |  7191K|  1190 
|   2 |   INDEX FULL SCAN           | TESTI | 31471 |       |    80 
--------------------------------------------------------------------

Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      INDEX(@"SEL$1" "TEST"@"SEL$1" ("TEST"."OWNER"))
      OUTLINE_LEAF(@"SEL$1")
      ALL_ROWS
      OPT_PARAM('_unnest_subquery' 'false')
      DB_VERSION('11.2.0.3')
      OPTIMIZER_FEATURES_ENABLE('11.2.0.3')
      IGNORE_OPTIM_EMBEDDED_HINTS
      END_OUTLINE_DATA
  */

It’s not a realistic example.  The index version of this plan is really slower.  But pretend you wanted to force a similar query to use the same plan as this first query.

Here is the second query with no hints:

SQL> -- Get plan for query with no hint
SQL> 
SQL> explain plan into plan_table for
  2  select * from test
  3  /

Explained.

SQL> select * from table(dbms_xplan.display('PLAN_TABLE',NULL,
'BASIC'));

PLAN_TABLE_OUTPUT
-------------------------------------
Plan hash value: 217508114

----------------------------------
| Id  | Operation         | Name |
----------------------------------
|   0 | SELECT STATEMENT  |      |
|   1 |  TABLE ACCESS FULL| TEST |
----------------------------------

Now here is the second query with the outline hint added and its resulting plan – same as the first query’s:

SQL> -- Get plan forcing query using outline
SQL> 
SQL> explain plan into plan_table for
  2  select
  3    /*+
  4        BEGIN_OUTLINE_DATA
  5        INDEX(@"SEL$1" "TEST"@"SEL$1" ("TEST"."OWNER"))
  6        OUTLINE_LEAF(@"SEL$1")
  7        ALL_ROWS
  8        OPT_PARAM('_unnest_subquery' 'false')
  9        DB_VERSION('11.2.0.3')
 10        OPTIMIZER_FEATURES_ENABLE('11.2.0.3')
 11        IGNORE_OPTIM_EMBEDDED_HINTS
 12        END_OUTLINE_DATA
 13    */
 14  * from test
 15  /

Explained.

SQL> 
SQL> select * from table(dbms_xplan.display('PLAN_TABLE',NULL,
'BASIC'));

PLAN_TABLE_OUTPUT
------------------------------------------------
Plan hash value: 2400950076

---------------------------------------------
| Id  | Operation                   | Name  |
---------------------------------------------
|   0 | SELECT STATEMENT            |       |
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST  |
|   2 |   INDEX FULL SCAN           | TESTI |
---------------------------------------------

So, imagine a scenario where the two queries were similar enough to have the same plan but complex enough that figuring out individual hints to force the second one to have the same plan as the first would take some time.  This outline hint method is an easy way to impose the first query’s plan on the second.  It won’t work if the two queries are too different but today I used this in a real tuning situation and it saved me a lot of time.

– Bobby

 


Yet another test_select package update

Most recent version here(updated).

Added these two:

display_results(test_name,compared_to_test_name) - output results of
testing in the two scenarios.  List results from all queries that
ran more than 3 times as long with one test or the other.  Also 
summarize results with total elapsed time, number of queries 
executed, average elapsed time, and percent improvement.

show_plan(test_name,sqlnumber) - extract plan from plan_table for
given test name and sql statement.

Example usage:

execute test_select.display_results('PARALLEL 128','PARALLEL 8');

select test_select.show_plan('PARALLEL 128',57) from dual;

Give it a try.

– Bobby


Update to test_select package

Quick update to package from previous post.

Here is the zip of the download: zip(updated)

I added these procs to collect select statements on a source production database and to copy them back (as clobs) to the test database:

collect_select_statements(max_number_selects,
include_pattern1,...,include_pattern10,
exclude_pattern1,...,exclude_pattern10)

This proc is run on the source database to 
collect select statements including statements 
that have the include patterns and excluding those 
who have the exclude patterns. 

Patterns use LIKE conditions %x%.

copy_select_statements(link_name)

copies select statements from remote source 
database pointed to by link_name's db link.

I intended to have this functionality in the package from the beginning but I got bogged down getting ORA-22992 errors trying to copy a clob from a remote table into a variable.  So, now I just execute collect_select_statements on the remote database over the link and run copy_select_statements to copy the populated table back to the test database:
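
The pattern that works, with made-up table and column names, is an insert … select over the link instead of fetching the clob into a variable:

-- copying the CLOB column table-to-table over the link avoids ORA-22992
insert into select_statements (sql_number, sql_text)
select sql_number, sql_text
from select_statements@myqalink;
commit;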

execute TEST_SELECT.
collect_select_statements@myqalink
(10,include_pattern1=>'%TARGETTABLE%');

execute TEST_SELECT.
copy_select_statements('myqalink');

Package to test large select statements

I’ve uploaded this package(updated) that I’m using to test some large select statements.  See the file packagedescriptionv1.txt for a description of the package.  Also, test.sql and test.log are an example of using the package.

This is an updated version of the scripts in this post.

– Bobby
