Patch 19183482 resolves ORA-01403 getting plan with baseline

I was testing SQL plan baselines on a base 11.2.0.3 release of Oracle on a 64-bit Linux virtual machine.  I ran DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE to create a SQL plan baseline for a test query after running that query once to get its plan into the cursor cache.  When I ran the test query again after creating the SQL plan baseline and called dbms_xplan.display_cursor to see its new plan, I got an ORA-01403 error:

ORCL:SYSTEM>select * from table(dbms_xplan.display_cursor(null,null,'ALL'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
SQL_ID  4mu5a860ardpz, child number 1

An uncaught error happened in prepare_sql_statement : ORA-01403: no data found

NOTE: cannot fetch plan for SQL_ID: 4mu5a860ardpz, CHILD_NUMBER: 1
      Please verify value of SQL_ID and CHILD_NUMBER;
      It could also be that the plan is no longer in cursor cache (check v$sql_plan)
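
For reference, the baseline load itself was a call along these lines.  This is a sketch rather than the exact script (the zip below has that); the sql_id and plan_hash_value are the ones shown in the output above:

DECLARE
  plans_loaded PLS_INTEGER;
BEGIN
  -- load the plan already in the cursor cache as an accepted baseline plan
  -- (set serveroutput on to see the count)
  plans_loaded := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
                    sql_id          => '4mu5a860ardpz',
                    plan_hash_value => 1950795681);
  DBMS_OUTPUT.PUT_LINE('Plans loaded: ' || plans_loaded);
END;
/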

I applied patch 19183482 to my test system and the ORA-01403 error went away:

ORCL:SYSTEM>select * from table(dbms_xplan.display_cursor(null,null,'ALL'));

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------
SQL_ID  4mu5a860ardpz, child number 1
-------------------------------------
select sum(blocks) from test

Plan hash value: 1950795681

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |    29 (100)|          |
|   1 |  SORT AGGREGATE    |      |     1 |     3 |            |          |
|   2 |   TABLE ACCESS FULL| TEST |  2844 |  8532 |    29   (0)| 00:00:01 |
---------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$1
   2 - SEL$1 / TEST@SEL$1

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - (#keys=0) SUM("BLOCKS")[22]
   2 - "BLOCKS"[NUMBER,22]

Note
-----
   - SQL plan baseline SQL_PLAN_dscatqjvyk4qy6b581ab9 used for this statement

Here is a zip of the script that I ran to create the ORA-01403 error and the output that it generated with and without the patch applied on my test database: zip
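
If you want to confirm that the baseline was created and accepted, a query along these lines against DBA_SQL_PLAN_BASELINES will show it (the column list is just a suggestion):

select sql_handle, plan_name, enabled, accepted, fixed
from dba_sql_plan_baselines
order by created;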

Here is a list of the bugs that patch 19183482’s readme says it will fix:

14512308: SPM BASELINE PLAN CANNOT BE REPRODUCED
15858022: ‘LIBRARY CACHE: MUTEX X’ AND LIBRARY CACHE LOCKS  PURGED_CURSOR
16400122: SPIKES IN LIBRARY CACHE
16625010: SPM BASELINE NOT WORKING FOR SQL CALLED FROM PL/SQL

I haven’t gotten very far into my investigation of SQL plan baselines, but I hit this bug almost as soon as I started using them, so it looks like a good idea to apply patch 19183482 before using SQL plan baselines on 11.2.0.3.

– Bobby


Query to show sql_ids related to SQL Profiles

I have a number of SQL Profiles on a database I’m working on and I wanted to know the sql_id of the SQL statements that they relate to.  Here is what I came up with:

select distinct
       p.name sql_profile_name,
       s.sql_id
from   dba_sql_profiles p,
       dba_hist_sqlstat s
where  p.name = s.sql_profile;

Here is the output on my system:

SQL_PROFILE_NAME               SQL_ID
------------------------------ -------------
coe_5up1944u1zb7r_1979920644   5up1944u1zb7r
coe_b9ad7h2rvtxwf_3569562598   b9ad7h2rvtxwf
coe_9f21udzrfcygh_2815756405   9f21udzrfcygh
coe_366brswtxxuct_10194574     366brswtxxuct
coe_2261y37rp45py_2815756405   2261y37rp45py

These are all profiles I created manually with the SQLT scripts, so the profile name includes the sql_id, but not all SQL profiles are named this way.  I have more entries in dba_sql_profiles than these five, but these are the only ones that matched a row in dba_hist_sqlstat, so this approach won’t work for queries that run too rarely or finish too quickly to be captured in the AWR tables behind dba_hist_sqlstat.
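
A similar join against v$sql, which also has a sql_profile column, should catch statements that are still in the cursor cache but have not made it into the AWR tables; something like this:

select distinct
       p.name sql_profile_name,
       s.sql_id
from   dba_sql_profiles p,
       v$sql s
where  p.name = s.sql_profile;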

– Bobby

4/27/21 update: here is the current version of the script: profiles.sql


Startup upgrade suppresses ORA-00955 on create table WRH$_SQL_PLAN

Today I was trying to see if upgrading from 11.2.0.2 to 11.2.0.4 would change the SYS.WRH$_SQL_PLAN table.  This table is large on our production system so I wanted to find out if some time-consuming update to this table would occur that would slow down our production upgrade but not be detected on our test systems.  We recently performed this upgrade on our development database and I was looking at the logs to see whether SYS.WRH$_SQL_PLAN was modified.  I found this curious entry (edited for brevity):

create table WRH$_SQL_PLAN
2  (snap_id           number        /* last snap id, used for purging */
3  ,dbid           number       not null
4  ,sql_id           varchar2(13)    not null
...
42   using index tablespace SYSAUX
43  ) tablespace SYSAUX
44  /

Table created.

The “Table created.” message makes it sound like the database created a new table without any errors.  But, looking at DBA_OBJECTS, the table was not new.  So, I guessed that when the catproc.sql script runs the create table statement for SYS.WRH$_SQL_PLAN, something must suppress the error you would normally get when you try to create a table that already exists:

ORA-00955: name is already used by an existing object

So, I opened my 11.2.0.3 test database with STARTUP RESTRICT and ran @catproc.sql as SYSDBA, and to my surprise I got the error just as you normally would:

 42   using index tablespace SYSAUX
 43  ) tablespace SYSAUX
 44  /
create table WRH$_SQL_PLAN
             *
ERROR at line 1:
ORA-00955: name is already used by an existing object

So, I restarted the database with STARTUP UPGRADE and reran catproc.sql, and this time the error disappeared:

 40  ,constraint WRH$_SQL_PLAN_PK primary key
 41      (dbid, sql_id, plan_hash_value, id)
 42   using index tablespace SYSAUX
 43  ) tablespace SYSAUX
 44  /

Table created.

Cue the mysterious Twilight Zone music…

I guess this is a “feature” of the STARTUP UPGRADE command, but the “Table created.” message is kind of confusing.  The table isn’t really created if it already exists.  The good thing, I suppose, is that it doesn’t report an error.
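
One way to verify that the table really was left alone is to compare its creation and last DDL timestamps in DBA_OBJECTS to the upgrade date, something like this:

select object_name, created, last_ddl_time
from dba_objects
where owner = 'SYS'
and object_name = 'WRH$_SQL_PLAN'
and object_type = 'TABLE';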

– Bobby


ps and top differences with HugePages

The Unix utilities ps and top report memory differently with HugePages than without.

Without HugePages ps -eF seems to include the SGA memory under the SZ column:

UID        PID  PPID  C    SZ   RSS PSR STIME TTY          TIME CMD
... 
oracle    1822     1  0 846155 16232  0 07:19 ?        00:00:00 ora_d000_orcl
oracle    1824     1  0 846155 16228  0 07:19 ?        00:00:00 ora_d001_orcl
oracle    1826     1  0 846155 16236  0 07:19 ?        00:00:00 ora_d002_orcl
oracle    1828     1  0 846155 16232  0 07:19 ?        00:00:00 ora_d003_orcl
oracle    1830     1  0 846155 16224  0 07:19 ?        00:00:00 ora_d004_orcl
oracle    1832     1  0 846155 16232  0 07:19 ?        00:00:00 ora_d005_orcl
oracle    1834     1  0 846155 16236  0 07:19 ?        00:00:00 ora_d006_orcl
oracle    1836     1  0 846155 16228  0 07:19 ?        00:00:00 ora_d007_orcl
oracle    1838     1  0 846155 16224  0 07:19 ?        00:00:00 ora_d008_orcl
oracle    1840     1  0 846155 16232  0 07:19 ?        00:00:00 ora_d009_orcl
oracle    1842     1  0 846155 16240  0 07:19 ?        00:00:00 ora_d010_orcl
oracle    1844     1  0 846155 16228  0 07:19 ?        00:00:00 ora_d011_orcl
...

Here SZ = 846155 kilobytes = 826 megabytes.  If you add up all the SZ values it comes to 81 gigabytes, which won’t fit in my 4 gig of memory and 4 gig of swap.  It seems to include the amount of the SGA actually used, not the full 3 gigabyte max SGA size; otherwise the total would have been hundreds of gigabytes.

Doing the same exercise with 3 gigabytes of huge pages ps looks like this:

UID        PID  PPID  C    SZ   RSS PSR STIME TTY          TIME CMD
...
oracle    1809     1  0 59211 15552   0 07:52 ?        00:00:00 ora_d000_orcl
oracle    1811     1  0 59211 15544   0 07:52 ?        00:00:00 ora_d001_orcl
oracle    1813     1  0 59211 15548   0 07:52 ?        00:00:00 ora_d002_orcl
oracle    1815     1  0 59211 15544   0 07:52 ?        00:00:00 ora_d003_orcl
oracle    1817     1  0 59211 15544   0 07:52 ?        00:00:00 ora_d004_orcl
oracle    1819     1  0 59211 15548   0 07:52 ?        00:00:00 ora_d005_orcl
oracle    1821     1  0 59211 15544   0 07:52 ?        00:00:00 ora_d006_orcl
oracle    1823     1  0 59211 15544   0 07:52 ?        00:00:00 ora_d007_orcl
oracle    1825     1  0 59211 15544   0 07:52 ?        00:00:00 ora_d008_orcl
oracle    1827     1  0 59211 15544   0 07:52 ?        00:00:00 ora_d009_orcl
oracle    1829     1  0 59211 15544   0 07:52 ?        00:00:00 ora_d010_orcl
oracle    1831     1  0 59211 15544   0 07:52 ?        00:00:00 ora_d011_orcl
...

SZ = 59211 k = 57 meg.  Total SZ = 5.89 gigabytes.  This is still bigger than total memory, but much closer to the 4 gig available.  It’s just a guess, but I’m pretty sure that with HugePages the SZ for each process no longer includes the SGA memory in use, as it did without HugePages.

The other weird thing is how different top looks with HugePages.  Here is top with the database having just come up without HugePages:

top - 07:20:16 up 3 min,  2 users,  load average: 1.06, 0.33, 0.13
Tasks: 187 total,   1 running, 186 sleeping,   0 stopped,   0 zombie
Cpu(s):  1.6%us,  6.3%sy,  0.0%ni, 77.8%id, 14.2%wa,  0.0%hi,  0.1%si,  0.0%st
Mem:   4050836k total,   984444k used,  3066392k free,    14460k buffers
Swap:  4095996k total,        0k used,  4095996k free,   450128k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND            
 2010 oracle    20   0 3310m  51m  44m D  7.6  1.3   0:00.21 oracle             
 1988 oracle    20   0 3307m  50m  45m D  3.8  1.3   0:00.21 oracle             
 1794 oracle    -2   0 3303m  15m  13m S  1.9  0.4   0:01.07 oracle

Notice that we have about 3 gigabytes free – 3066392k and nothing in swap.

Here is the same system  with 3 gig of HugePages:

top - 07:53:21 up 2 min,  2 users,  load average: 0.81, 0.29, 0.11
Tasks: 179 total,   1 running, 178 sleeping,   0 stopped,   0 zombie
Cpu(s):  2.0%us,  8.6%sy,  0.0%ni, 69.2%id, 20.1%wa,  0.0%hi,  0.1%si,  0.0%st
Mem:   4050836k total,  3860100k used,   190736k free,    14332k buffers
Swap:  4095996k total,        0k used,  4095996k free,   239104k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND            
 1781 oracle    -2   0 3303m  15m  13m S  3.5  0.4   0:01.02 oracle             
    1 root      20   0 19400 1520 1220 S  0.0  0.0   0:01.43 init               
    2 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd

Now only 190736k is free.  Presumably this is because the HugePages pool is allocated up front and counted as used memory whether or not the SGA has touched it yet.  But note that in both cases top lists the oracle processes with 3300 meg of virtual memory, which is consistent with the 3 gig max SGA size.
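
To double-check what the 3300 meg corresponds to, you can look at the maximum SGA size from inside the database with something like this:

select name, round(bytes/1024/1024) mb
from v$sgainfo
where name = 'Maximum SGA Size';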

I’ve still got a lot to learn about HugePages but I thought I would pass along these observations.  This article on Oracle’s support site helped me learn about HugePages:

HugePages on Oracle Linux 64-bit (Doc ID 361468.1)

I ended up sizing the HugePages down to 2 gig on my 4 gig test system and reducing sga max size to 2 gig as well.  My system was sluggish with so little free memory when I was using a 3 gig SGA and HugePages.  It was much snappier with only 2 gig tied up in HugePages and dedicated to the SGA, leaving 2 gig for everything else.

This was all done with Oracle’s 64-bit version of Linux and an 11.2.0.3 database.

– Bobby


Virtual Circuit Wait

On Monday we had some performance problems on a system that includes a database which uses shared servers.  The top wait was “virtual circuit wait”.  Here are the top 5 events for a 52 minute time frame:

Top 5 Timed Foreground Events

Event                           Waits      Time(s)  Avg wait (ms)  % DB time  Wait Class
virtual circuit wait            1,388,199   17,917             13      50.98  Network
db file sequential read         1,186,933    9,252              8      26.33  User I/O
log file sync                   1,185,620    6,429              5      18.29  Commit
DB CPU                                       5,964                    16.97
enq: TX - row lock contention         391      586           1499       1.67  Application

From other monitoring tools there was no sign of poor performance from the database end but virtual circuit wait is not normally the top wait during peak times.  Overall for the time period of this AWR report the shared servers didn’t seem busy:

Shared Servers Utilization

Total Server Time (s)  %Busy  %Idle  Incoming Net %  Outgoing Net %
              111,963  38.49  61.51           15.99            0.01

We have seen virtual circuit waits ever since we upgraded to 11g on this system so I wanted to learn more about what causes it.  These two Oracle support documents were the most helpful:

Troubleshooting: Virtual Circuit Waits (Doc ID 1415999.1)

Bug 5689608: INACTIVE SESSION IS NOT RELEASING SHARED SERVER PROCESS (closed as not bug)

Evidently, when you return a cursor from a package and the cursor includes a sort step, a shared server will sit in a virtual circuit wait from the time the cursor is first fetched until the application closes the cursor.  Our application uses cursors in this way, so it stands to reason that the virtual circuit wait times in our AWR report represent the time it took our web servers to fetch from and close the cursors, at least for cursors that included sort steps.  So, if our web servers slow down due to some other issue, they could take longer to fetch from and close the affected cursors, and that would show up as higher virtual circuit wait times.

Here is a zip of a test script I ran and its output: zip

I took the test case documented in bug 5689608 and added queries against v$session_event to show the current session’s virtual circuit waits.

Here are the first steps of the test case:

CREATE TABLE TEST AS SELECT * FROM DBA_OBJECTS;

create or replace package cursor_package as
  cursor mycursor is select * from test order by object_name;
end;
/

begin
  open cursor_package.mycursor;
end;
/

create or replace procedure test_case is
  l_row TEST%rowtype;
begin
  if cursor_package.mycursor%isopen then
    fetch cursor_package.mycursor into l_row;
  end if;
end;
/

These steps do the following:

  1. Create a test table
  2. Create a package with a cursor that includes an order by to force a sort
  3. Open the cursor
  4. Create a procedure to fetch the first row from the cursor

At this point I queried v$session_event and found no waits:

SQL> select * from v$session_event
  2  where sid=
  3  (SELECT sid from v$session where audsid=USERENV('SESSIONID')) 
     and
  4  event='virtual circuit wait';

no rows selected

The next step of the test case fetched the first row and then I queried and found the first wait:

SQL> exec test_case;

SQL> select * from v$session_event
  2  where sid=
  3  (SELECT sid from v$session where audsid=USERENV('SESSIONID')) 
     and
  4  event='virtual circuit wait';

       SID EVENT                          TIME_WAITED
---------- --------------------------------------------------------
       783 virtual circuit wait           0

Note that time_waited is 0 which means the time was less than one hundredth of a second.  Next I made my sqlplus client sleep for five seconds using a host command and looked at the wait again:

SQL> host sleep 5

SQL> select * from v$session_event
  2  where sid=
  3  (SELECT sid from v$session where audsid=USERENV('SESSIONID')) 
     and
  4  event='virtual circuit wait';

       SID EVENT                             TIME_WAITED
---------- --------------------------------------------------------
       783 virtual circuit wait              507

Total time is now 507 centiseconds = 5 seconds, same as the sleep time.  So, the time for the virtual circuit wait includes the time after the client does the first fetch, even if the client is idle.  Next I closed the cursor and slept another 5 seconds:

SQL> begin
  2   close cursor_package.mycursor;
  3  end;
  4  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL> 
SQL> host sleep 5

SQL> 
SQL> select * from v$session_event
  2  where sid=
  3  (SELECT sid from v$session where audsid=USERENV('SESSIONID')) 
     and
  4  event='virtual circuit wait';

       SID EVENT                                 TIME_WAITED
---------- --------------------------------------------------------
       783 virtual circuit wait                  509

The time waited is still just about 5 seconds so the clock stops on the virtual circuit wait after the sqlplus script closes the cursor.  If the session was still waiting on virtual circuit wait after the close of the cursor the time would have been 10 seconds.

This was all new to me.  Even though we have plenty of shared servers to handle the active sessions, we still see virtual circuit waits.  These waits correspond to time the clients spend fetching from and closing cursors returned from packages.  As a result, these wait times represent time outside the database, not time spent within it.  They do tie up shared servers, but as long as they are short enough and there are shared servers free, they aren’t a problem.
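
If you want to see which sessions are sitting in this wait right now, a quick check against v$session along these lines should show them:

select sid, state, seconds_in_wait
from v$session
where event = 'virtual circuit wait';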

– Bobby

p.s. This is on HP-UX 11.31 ia64, Oracle 11.2.0.3


First AZORA user group meeting October 23

Just got the invitation to the first AZORA (Arizona Oracle user group) meeting on October 23.  Here is the link: url

It’s 2 pm at Oracle’s office, 2355 E Camelback Rd Ste 950, Phoenix, AZ.

I’m looking forward to it!

– Bobby


OS Watcher Paging Example

I have an example of paging in some Exadata OS Watcher log files.  We got an error in the alert log about high load, but reviewing AWR reports we did not see an unusual number of active sessions.  Also, the CPU use seemed low even though the system load was high.  Reviewing the OS Watcher logs and finding a similar case on Oracle’s support site convinced me that our issue was due to a large amount of memory being paged out to the swap file.

Here is the top report from the problematic time:

top - 11:07:58 up 239 days, 14:57,  0 users,  load average: 204.84, 183.48, 106.74
Tasks: 1097 total,  71 running, 1022 sleeping,   0 stopped,   4 zombie
Cpu(s):  1.3%us, 98.0%sy,  0.0%ni,  0.2%id,  0.5%wa,  0.0%hi,  0.1%si,  0.0%st
Mem:  74026796k total, 72456380k used,  1570416k free,     2436k buffers
Swap: 16771852k total,  7884656k used,  8887196k free,   597336k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                                                   
  894 oracle    15   0  324m  26m  10m S 309.1  0.0 129:46.34 /u01/app/11.2.0.2/grid/bin/diskmon.bin -d -f                                             
 1053 root      20  -5     0    0    0 R 100.0  0.0  32:10.62 [kswapd0]                                                                                
 3366 oracle    16   0 1440m  54m  18m S 146.6  0.1   3:03.54 /u01/app/11.2.0.2/grid/bin/oraagent.bin                                                  
23809 oracle    18   0  992m 179m  16m S 128.1  0.2 158:45.67 /u01/app/oracle/Middleware/agent11g/bin/emagent                                          
28878 orarom    18   0  982m 360m  11m S 101.5  0.5  94:58.36 /opt/OracleHomes/agent_home/core/12.1.0.3.0/jdk/bin/java -Xmx154M -XX:MaxPermSize=96M -se
 4501 oracle    19   0 4423m  96m  24m R 89.6  0.1   2435:54 ora_dia0_ourprdfs1                                                                        
18318 oracle    20   0 26.3g 610m  70m R 86.3  0.8   2025:17 ora_p001_ourprd1                                                                          
18226 oracle    18   0 26.3g 595m  61m D 82.3  0.8 212:20.62 ora_p013_ourprd1                                                                          
 7619 oracle    19   0 1727m  16m  14m R 80.5  0.0   0:50.29 oracle+ASM1 (LOCAL=NO)                                                                    
 4389 oracle    25   0 25.9g 158m  27m R 78.5  0.2   2557:06 ora_dia0_ourprd1                                                                          
11008 oracle    18   0 26.3g 561m 133m D 77.9  0.8 373:34.37 ora_p010_ourprd1                                                                          
 8484 root      25   0  238m 6276 2600 R 77.3  0.0   0:46.60 python /usr/local/bin/dcli -s -q -l root -g cell_group exec 2>&1; cellcli -e list quaranti
 7937 oracle    18   0 26.3g 603m  86m R 76.5  0.8   1656:21 ora_p021_ourprd1                                                                          
24118 oracle    25   0 26.3g 602m 103m R 72.7  0.8   2997:08 ora_p007_ourprd1                                                                          
 4491 oracle    16   0 5912m 121m  25m D 72.6  0.2   2407:59 ora_dia0_obrprd1                                                                          
 7919 oracle    25   0 26.4g 680m 247m R 72.2  0.9   4256:57 ora_p011_ourprd1                                                                          
 4384 oracle    18   0 4407m  95m  25m D 70.8  0.1   2344:36 ora_dia0_otrprd1                                                                          
 4397 oracle    23   0 25.8g  44m  23m D 69.5  0.1   1695:21 ora_lmd0_ourprd1                                                                          
 2548 oracle    20   0 1809m 116m  44m R 69.3  0.2   2393:41 asm_dia0_+ASM1

Note the load of 204 and the system CPU at 98%.  By 11:07 CPU utilization had gone back up, so I assume the swapping I/O had completed and then a burst of system CPU activity kicked off.  Also, I assume that [kswapd0], the kernel swap daemon, has something to do with the swapping.  Here is the mpstat report:

             CPU   %user   %nice    %sys %iowait    %irq   %soft  %steal   %idle    intr/s
10:59:00     all    3.49    0.00    2.05   93.81    0.01    0.16    0.00    0.48   4144.00
10:59:05     all    4.94    0.00    2.45   92.21    0.01    0.11    0.00    0.28   2548.00
10:59:16     all    1.42    0.00    1.82   95.64    0.02    0.13    0.00    0.97   3108.66
10:59:21     all    3.84    0.00    3.58   90.68    0.04    0.49    0.00    1.38   3868.40
10:59:26     all    1.99    0.00    1.85   93.56    0.01    0.13    0.00    2.46   2942.48
10:59:31     all    1.38    0.00    1.85   91.37    0.01    0.10    0.00    5.29   2479.04
10:59:36     all    1.75    0.00    2.81   94.19    0.01    0.16    0.00    1.08   2762.60
10:59:41     all    0.89    0.00    1.09   97.41    0.00    0.05    0.00    0.56   2116.80
10:59:46     all    1.19    0.00    1.21   93.91    0.03    0.11    0.00    3.55   2992.40
10:59:51     all    1.79    0.00    1.75   95.88    0.01    0.28    0.00    0.29   3820.00
10:59:56     all    2.09    0.00    1.55   92.85    0.03    0.21    0.00    3.28   4410.80
11:00:01     all    1.85    0.00    1.91   96.00    0.01    0.11    0.00    0.11   3433.47
11:00:06     all    1.50    0.00    1.45   89.07    0.01    0.11    0.00    7.85   3746.00
11:00:11     all    6.41    0.00   10.68   80.29    0.09    0.80    0.00    1.73   9269.46
11:00:16     all   17.14    0.00   19.42   57.12    0.14    0.83    0.00    5.35  17018.64
11:00:21     all    8.58    0.00   80.85    9.01    0.05    0.23    0.00    1.29   6951.70
11:00:27     all    6.21    0.00   91.77    1.12    0.04    0.09    0.00    0.78   5417.11
11:00:32     all    5.65    0.00   93.75    0.53    0.01    0.04    0.00    0.02   3289.82
11:00:38     all    2.72    0.00   96.81    0.35    0.01    0.02    0.00    0.09   2480.87
11:00:44     all    1.73    0.00   97.89    0.28    0.00    0.02    0.00    0.08   1745.89
11:01:00     all    1.85    0.00   97.72    0.16    0.01    0.04    0.00    0.22   2710.93
11:01:05     all    0.79    0.00   98.84    0.25    0.00    0.04    0.00    0.09   2543.39
11:02:23     all    1.00    0.00   98.72    0.15    0.00    0.02    0.00    0.10   1558.00
11:02:33     all    0.95    0.00   98.74    0.09    0.00    0.09    0.00    0.13   1488.49
11:02:39     all    1.64    0.00   98.13    0.08    0.01    0.05    0.00    0.08   1441.73
11:02:44     all    0.46    0.00   99.29    0.10    0.00    0.09    0.00    0.06   1465.73
11:03:20     all    1.54    0.00   97.96    0.25    0.01    0.07    0.00    0.17   2757.95
11:03:25     all    0.40    0.00   99.24    0.21    0.00    0.02    0.00    0.12   1550.60
11:03:30     all    0.39    0.00   99.22    0.23    0.00    0.01    0.00    0.15   1421.20
11:03:35     all    3.29    0.00   96.32    0.25    0.01    0.03    0.00    0.10   1952.51
11:03:40     all    5.90    0.00   93.76    0.28    0.00    0.01    0.00    0.05   1407.00
11:03:45     all    1.14    0.00   98.56    0.20    0.00    0.01    0.00    0.09   1418.00
11:03:50     all    0.19    0.00   99.55    0.21    0.00    0.03    0.00    0.03   1356.69
11:04:23     all    0.36    0.00   99.31    0.18    0.00    0.04    0.00    0.10   1621.67
11:06:13     all    0.58    0.00   99.13    0.14    0.00    0.04    0.00    0.11   1550.73
11:06:18     all    3.32    0.00   96.36    0.23    0.00    0.04    0.00    0.06   2055.30
11:06:24     all    4.39    0.00   95.32    0.24    0.00    0.02    0.00    0.03   1991.53
11:06:31     all    4.14    0.00   95.48    0.28    0.01    0.02    0.00    0.07   1598.53
11:07:21     all    1.38    0.00   98.32    0.19    0.00    0.01    0.00    0.09   1444.49

Notice how it is almost all I/O wait until around 11 am and then it switches to almost all system CPU.  I assume the I/O was swapping memory out.  The meminfo report has these numbers:

zzz ***Mon Sep 1 11:00:11 CDT 2014
MemTotal:     74026796 kB
MemFree:        330888 kB
...
SwapTotal:    16771852 kB
SwapFree:      8153352 kB

I take this to mean that we had about 330 meg free out of 74 gig total, but still about 8 gig of free swap.  Presumably when free memory gets low, idle processes get swapped out.  A member of our Unix team pointed out the swapping in the vmstat output.  Here is a section that shows a lot of pages getting swapped out (the so column).  Each line represents 5 seconds and I think the section starts at 10:57:56.

zzz ***Mon Sep 1 10:05:31 CDT 2014 Sample interval 5 seconds
-- deleted 629 lines = 629*5 seconds = 3145 seconds
-- 10:57:56 am
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
86 16 7567956 328800    800 610940   56 5350   157  5381 1626 10013  4 96  0  0  0
44 32 7602804 327956    996 613940  222 10464   554 10510 2177 13818  0 99  0  0  0
 1 109 7632732 330412   2284 619780  290 7986  1738  8130 3382 14002  1 21  7 70  0
35 41 7702076 353744   2504 621392  623 16054  1290 16206 8776 19234  2  3  2 92  0
 1 103 7749296 327480   2692 622004  422 11307   789 11574 3839 16138  3  2  1 94  0
 4 109 7849812 330972   3036 624808 1974 24691  2844 25110 11705 36080  4  3  1 92  0
 0 66 7869628 346228   3108 626444  859 6309  1055  6309 5538 16922  3  1  1 94  0
23 104 7895176 1998796   3300 624864 1032 7582  1283  7784 4032 24443 56  3  1 40  0
 3 90 7918016 328212   4144 632880 4910 5939  6866  6406 13908 47820 40  8  3 49  0
 1 101 7983136 333008   4480 627416 1180 15655  1523 15960 3977 16945  5  3  1 91  0
 1 125 8033148 327376   4712 628312  922 12213  1302 12295 3545 16946  4  3  0 93  0
 1 141 8072048 329060   4856 626976  764 10862   986 11082 3684 21305  2  1  1 96  0
 2 76 8106056 327556   5056 627124  797 8855  1048  8946 3968 15044  2  2  0 96  0
 0 103 8181216 328348   5284 611140 2415 16583  3076 16938 4374 17445  4  5  2 90  0
 4 92 8222500 330044   5408 610708 1470 9698  1874  9898 2842 15110  2  2  2 94  0
 4 94 8285204 328932   5936 614128 1015 14754  1678 15213 3324 18240  2  2  5 91  0
 1 111 8326908 327348   6184 614600 1034 9759  1465 10012 2577 14809  2  2  1 95  0
 1 119 8338520 328352   6332 616076  610 4896   834  4952 2505 15373  1  1  1 97  0
 0 98 8378304 329308   6536 615944  975 9795  1334  9931 3802 15897  2  2  3 93  0
 0 89 8404456 327204   6800 618872 1232 8327  1662  8413 4557 16856  2  2  2 94  0
 0 123 8430092 327272   7092 619016 1004 7379  1393  7672 2852 14689  1  1  2 96  0
 1 92 8453420 327212   7284 622700 1738 7250  2259  7494 4241 17781  2  2  2 94  0
 7 76 8562308 329132   7860 628932 2616 23847  3409 25162 6833 20938  4  7  7 82  0
 5 60 8691628 334124   7012 635948 6066 28059  8119 32201 13584 32154 15 23  1 61  0
63 37 8683600 327404   2812 582200 8138  661 12947  2870 12507 26754 12 44  6 38  0
98 22 8696516 328056   1260 560572 10494 7070 15550  9394 28854 128944  3 96  0  0  0
72 14 8694008 327148   1960 557616 1693    3  3222   161 5014 29030  5 95  0  0  0
125 19 8693816 327172   1020 558488  226    1   569   227 1991 13841  0 99  0  0  0
97 15 8703600 327044   1032 554792 5083 3217 10447  3722 26436 202860  1 99  0  0  0
102 20 8733556 327272   1432 556224 2573 6713  5399  7194 16118 66262  1 98  0  0  0
84  9 8732984 327164   1100 555640  470    2  1090   117 1585 11922  0 99  0  0  0
85 13 8732940 327304    880 554024   60    0   317    50 1470 11247  0 99  0  0  0
149 24 8732808 327428    988 553988  133    0   814   334 1962 12802  4 95  0  0  0
153 22 8732640 327284    592 553576  187    0   875   548 4619 36573  2 98  0  0  0
159 15 8732584 327104    604 552956   50    0    96   118 2294 11239  0 99  0  0  0
142 14 8732432 328644   1524 554196  174    0  1706   260 6035 45843  0 99  0  0  0
158 12 8755436 327460   2704 555568 1130 4927  2564  5486 14539 103001  0 99  0  0  0
125 14 8754920 327980    748 554368  406    0   662     6 1583 10952  0 100  0  0  0
115 16 8752204 327472   1724 553740 2453    5  6014   532 11366 82957  1 99  0  0  0
124 19 8752276 327076   1092 554200   78   28   113    81 1767 11712  1 99  0  0  0
82  8 8757568 346356   2384 556132 1316 3202  4373  3837 6633 42986  1 99  0  0  0
151 57 8756348 329524   3632 559164  828    0  3718   429 2076 11613  3 96  0  0  0
159 49 8755120 328488   3248 557720 1033    9  4298   212 2248 12981  4 95  0  0  0
107 36 8771748 332068   1016 557272 2765 3977  9481  4370 14773 103588  2 98  0  0  0
117 16 8771592 327180   1168 558788  156    0   979    58 1466 9988  1 99  0  0  0
 3 101 7875688 1448712   5300 599300 29898 73754 52723 83528 62497 360936  2 94  0  4  0
 3 96 7874004 1260800   8856 624552 7470    0 14911   569 3814 17649  5  1 33 61  0
 1 92 7859256 558968  11228 649948 5590    0 11118  2798 10027 41964 20  5 12 63  0
 0 96 7915604 327600  14060 652636 4674 11822  7818 12173 9053 22067  8  7 20 65  0
 8 98 7963060 327240  11800 635164 1994 11097  3294 12031 6762 16549  3  7  1 90  0
 1 104 7969976 340320   8812 579564 9600 1558 12002  2998 8588 22984 17 34 10 39  0
 4 89 7642788 666372  10244 603740 7154    0 12148  2375 9681 23420  7  5 22 67  0
 1 94 7642548 327912  12556 625672 7011    3 12173  1704 7865 21465  6  5 18 71  0
 7 95 7630564 327572   3396 557488 5970   33 10358  1827 4756 15284  5 67  3 25  0
113 45 7630124 330244   1900 545524 3819    5  7540  1314 3781 13454  2 88  0 10  0
 0 88 6897380 1321012   3560 573532 3635    1  9076  2802 3414 11193  3 64  7 26  0
 1 94 6889436 981560   6188 624532 8463    0 16870  4122 4949 17243  2  1 31 65  0
 0 79 6889308 890800   8444 666940 7332    0 13390  3077 3337 15930  5  1 33 60  0
 1 74 6878180 797212  10720 715612 6776    0 14430  4653 3424 13069 11  6 37 47  0
 0 141 6876812 406704  12856 739628 7202    0 11406  2342 4737 23064  6  5 24 64  0
 2 66 6874848 336480  13428 656312 6296    1 10126  1848 4375 16755  7  9 32 52  0
 2 69 6874192 338016   8288 577196 7297    0 14230  1208 5771 16410 10  5 38 47  0
 1 72 6873696 342908   3796 576568 6161    1 12562  1149 5506 13421 11  5 34 50  0
 2 78 6873524 336716   3992 589684 5718    0 12693   280 5268 11889  9  2 31 58  0
97 48 6873180 331280   2884 559732 5110    4  7616   934 6994 14502  7 38 12 43  0
120 41 6872636 327192   1504 549972 4309   32  7965  1497 7029 14159  5 92  0  3  0
101 26 6870204 327620   1992 548548 4122   19  8110  1191 9559 20993  4 96  0  0  0
82 16 6886048 327184    984 547140 2050 3186  3662  3385 4978 16982  2 98  0  0  0
83  6 6886384 327096   1376 545516  563   69  1284   184 1946 7812  1 99  0  0  0
93 11 6886364 328100    836 545120  798    0   911    18 5495 23655  0 100  0  0  0
27 42 7008372 328376   3216 550820 7050 26176  9220 26390 9919 42056  0 100  0  0  0
 3 112 7044188 328628   4552 554876 2424 8493  3526  8570 2402 7432  2 78  1 19  0
 3 113 7108916 348172   5532 560420 3186 14382  4325 14512 9715 15529  4  6  4 87  0
 0 125 7136504 366528   7040 567140 4077 10618  5413 10906 15231 26777  4  1 12 83  0
 1 111 7149172 387580   7320 568044  391 4344   719  4394 2460 7519  1  1  0 99  0
 1 94 7178000 328136   9984 580152 5768 7927  8617  8990 20026 30064  8  3  9 80  0
 0 86 7204696 327100  10268 583912 1972 6991  2825  7515 5943 12124  3  2  8 87  0
 0 103 7218656 347300  10564 585360  457 4985   733  5144 1936 8120  0  1  6 93  0
 5 98 7236196 345460  10784 585224 1298 5318  1467  5477 7973 13715  3  2  6 89  0
 1 111 7243488 328920  10932 586500  488 3097   839  3136 1843 7555  1  1  6 92  0
 0 103 7265100 328016  11132 591452 1439 6088  2170  6241 3984 9707  2  1  2 95  0
 2 91 7274576 335116  11272 591240  709 3526   789  3750 2162 7734  3  1  2 95  0
 1 124 7303804 329200  11408 593516 1170 7348  1587  7614 4230 9955  3  1  6 90  0
 4 61 7403224 343940  11724 601684 3143 20878  4191 22518 7000 14053  4  4  3 89  0
11 62 7589208 349448  12504 607764 4350 38374  5784 38847 13038 21696  6  6  9 79  0
 3 56 7777608 338348  13328 614952 6506 38412  7919 38711 11993 18575  6  6 16 71  0
 2 49 8025300 331604  13568 618408 5642 50410  6727 50490 14740 19779  7  7 35 51  0
 1 69 8159548 977560  14304 624024 4725 27423  6727 28257 13122 22037 10  7 12 72  0
 5 50 8159156 869324  18188 647964 9421    0 13542  1034 14117 30070 16  4 24 55  0
 2 46 8158644 768600  20324 663440 10323    0 13395  2023 18356 27259 15  3 52 30  0
 3 43 8158308 551160  22344 678548 12241    0 16135   623 14080 23349 19  3 42 36  0

We got this message in the alert log:

Mon Sep 01 11:09:28 2014
 GES: System Load is HIGH.
 GES: Current load is 204.89 and high load threshold is 80.00

After this the instance crashed:

Instance terminated by LMS0, pid = 4532

Bug 17316830, “ROOT CAUSE ALL DATABASE INSTANCES CRASHED ON ONE OF OUR NODES”, relates high paging and low free memory to the same GES load messages and to instances crashing as ours did.  So, it seems that when you have a lot of paging it can cause an instance to crash.  Is this a bug or a feature? 🙂

It was interesting to look at the OS Watcher files to see what the system looked like when our issues occurred.  It looks like a lot of paging.  The remaining question is what caused the paging.  We are trying to figure out whether we just had some abnormally high memory use from our application or whether some bug caused system processes to eat up the normally free memory.  But I thought I would share the investigation we did using the OS Watcher logs because it was interesting to see an example with paging, even though we have not gotten to the root of the problem.

– Bobby


force_match => TRUE option of DBMS_SQLTUNE.IMPORT_SQL_PROFILE

Yesterday and today I’ve read or heard two people mention the force_match => TRUE parameter of DBMS_SQLTUNE.IMPORT_SQL_PROFILE and how it makes a profile apply to all SQL statements that are identical except for their literal values.  So, I ran a quick test using the coe_xfr_sql_profile.sql utility that comes with the SQLT scripts, which are available for download on Oracle’s support site.

I’ve mentioned in earlier posts how we use coe_xfr_sql_profile.sql to force plans on particular SQL statements using the sql_id of the SQL statement and the plan_hash_value of the plan:

July 2013 post

October 2013 post

March 2014 post

May 2014 post

Yesterday I read this post by David Kurtz where he mentions force_match: post

Today I heard Karen Morton mention force_match in her webinar which should soon be posted here: url

So, after the webinar ended I built a test case to see how the force_match => TRUE option works.  I created a test table, ran a query with a literal in the where clause, and got its plan showing its sql_id and plan_hash_value:

ORCL:SYSTEM>create table test as select * from dba_tables;
ORCL:SYSTEM>SELECT sum(blocks) from test
  2  where owner='SYS';

SUM(BLOCKS)
-----------
      34633

ORCL:SYSTEM>select * from
  2  table(dbms_xplan.display_cursor(null,null,'ALL'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------
SQL_ID  10g08ytt2m5mu, child number 0
-------------------------------------
SELECT sum(blocks) from test where owner='SYS'

Plan hash value: 1950795681

---------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |    29 (100)| 
|   1 |  SORT AGGREGATE    |      |     1 |    30 |            |
|*  2 |   TABLE ACCESS FULL| TEST |   992 | 29760 |    29   (0)|
----------------------------------------------------------------

Then I ran coe_xfr_sql_profile.sql to create a profile that forces the plan on the given sql_id:

SQL> @coe_xfr_sql_profile.sql 10g08ytt2m5mu 1950795681

Then, using vi I edited the output of coe_xfr_sql_profile.sql:

vi coe_xfr_sql_profile_10g08ytt2m5mu_1950795681.sql

I searched for force_match and changed the line to read like this:

force_match => TRUE

instead of

force_match => FALSE

There are comments in the script explaining the meaning of these two values but I don’t want to plagiarize the script by including them here.  Next I ran the edited script:

sqlplus system/password < coe_xfr_sql_profile_10g08ytt2m5mu_1950795681.sql

Then I ran a test showing that not only does the original query with the where clause literal ‘SYS’ use the profile, but the same query with a different literal, ‘SYSTEM’, uses it as well.

ORCL:SYSTEM>SELECT sum(blocks) from test
  2  where owner='SYS';

SUM(BLOCKS)
-----------
      34633

ORCL:SYSTEM>select * from
  2  table(dbms_xplan.display_cursor(null,null,'ALL'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------
SQL_ID  10g08ytt2m5mu, child number 0
-------------------------------------
SELECT sum(blocks) from test where owner='SYS'

Plan hash value: 1950795681

----------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |    29 (100)|
|   1 |  SORT AGGREGATE    |      |     1 |    30 |            |
|*  2 |   TABLE ACCESS FULL| TEST |    81 |  2430 |    29   (0)|
----------------------------------------------------------------

Note
-----
  - SQL profile coe_10g08ytt2m5mu_1950795681 used for this statement

ORCL:SYSTEM>SELECT sum(blocks) from test
  2  where owner='SYSTEM';

SUM(BLOCKS)
-----------
        520

ORCL:SYSTEM>
ORCL:SYSTEM>select * from
  2  table(dbms_xplan.display_cursor(null,null,'ALL'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------
SQL_ID  806ncj0a5fgus, child number 0
-------------------------------------
SELECT sum(blocks) from test where owner='SYSTEM'

Plan hash value: 1950795681

----------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |    29 (100)|
|   1 |  SORT AGGREGATE    |      |     1 |    30 |            |
|*  2 |   TABLE ACCESS FULL| TEST |    81 |  2430 |    29   (0)|
----------------------------------------------------------------

Note
-----
  - SQL profile coe_10g08ytt2m5mu_1950795681 used for this statement

Note that a different sql_id = 806ncj0a5fgus represents the second statement but the same plan_hash_value = 1950795681.  Also note that the SQL profile has the same name in both plans = coe_10g08ytt2m5mu_1950795681.

Now that I’m aware of the force_match => TRUE option of DBMS_SQLTUNE.IMPORT_SQL_PROFILE, I can use SQL profiles to force plans on queries that differ only in their literal values but are otherwise identical.  This opens up a whole new set of problems that can be resolved without modifying the existing code, which can really help in a performance firefight.
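
For reference, the call that the edited coe_xfr_sql_profile.sql script ends up making is roughly like the sketch below.  The hint text is cut down to a single hint for illustration; the real script builds the full hint list from the plan's outline data:

DECLARE
  sql_txt CLOB;
  h       SYS.SQLPROF_ATTR;
BEGIN
  sql_txt := q'[SELECT sum(blocks) from test where owner='SYS']';
  -- the real script includes the complete outline hints for plan 1950795681
  h := SYS.SQLPROF_ATTR(q'[FULL(@"SEL$1" "TEST"@"SEL$1")]');
  DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
    sql_text    => sql_txt,
    profile     => h,
    name        => 'coe_10g08ytt2m5mu_1950795681',
    replace     => TRUE,
    force_match => TRUE); -- apply to statements differing only in literals
END;
/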

– Bobby


Arizona Oracle User Group (AZORA)

I saw this fun blog post about the Arizona Oracle User Group getting organized: blog post(updated with meetup page).

I’m definitely interested in being involved.  Please pass this on to any Oracle DBAs, developers, etc. that you know in the Phoenix area.

– Bobby

Posted in Uncategorized | Leave a comment

Commit Puzzle

This graph compares commit time to CPU utilization and redo log write time.  I’ve included only the hourly intervals with more than 1,000,000 commits.  At these peaks the number of commits ranges from 1 to 1.6 million per hour, so each point on the graph represents roughly the same commit rate.  I’m puzzled by why the commit time bounces around, peaking above 5 milliseconds, when I can’t see any peaks in I/O or CPU that correspond to the commit time peaks.

(Chart commitvscpuio: commit time versus CPU utilization and redo log write time)

I derived CPU% from DBA_HIST_OSSTAT.  I got the other values by getting wait events from DBA_HIST_SYSTEM_EVENT.  Commit time is log file sync wait time.  Redo write time is log file parallel write wait time.  I converted the wait times to milliseconds so they fit nicely on the chart with CPU%.
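
The zip below has the actual script, but the basic idea for the commit time numbers is to diff the cumulative log file sync counters between consecutive AWR snapshots, roughly like this sketch (single instance assumed; counter resets at instance restarts would need extra handling):

select sn.end_interval_time,
       (e.time_waited_micro -
        lag(e.time_waited_micro) over (order by e.snap_id)) /
       nullif(e.total_waits -
              lag(e.total_waits) over (order by e.snap_id), 0) / 1000
         avg_commit_ms
from dba_hist_system_event e,
     dba_hist_snapshot sn
where e.snap_id = sn.snap_id
and e.dbid = sn.dbid
and e.instance_number = sn.instance_number
and e.event_name = 'log file sync'
order by sn.end_interval_time;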

I thought I would pass this along as a puzzle that I haven’t figured out.

Here is a zip of the script I used to get the data, its raw output, and the spreadsheet I used to make the chart: zip

– Bobby

P.S.  This is on HP-UX 11.31, Itanium, Oracle 11.2.0.3

P.P.S.  Did some more work on this today.  It looks like the high commit time periods have short spikes of long redo log writes even though the average over the hour is still low.  I’m looking at the AWR wait event histogram data in DBA_HIST_EVENT_HISTOGRAM to get a histogram of the log file parallel write waits, and there are a number of waits in the 1024 millisecond bucket when the log file sync time is high on average.

END_INTERVAL_TIME   LFPW_MILLI LFPW_COUNT AVG_COMMIT_MS AVG_WRITE_MS
------------------- ---------- ---------- ------------- ------------
21-AUG-14 11.00 AM           1     268136    9.14914833   2.45438987
21-AUG-14 11.00 AM           2     453913    9.14914833   2.45438987
21-AUG-14 11.00 AM           4     168370    9.14914833   2.45438987
21-AUG-14 11.00 AM           8      24436    9.14914833   2.45438987
21-AUG-14 11.00 AM          16       5675    9.14914833   2.45438987
21-AUG-14 11.00 AM          32       6122    9.14914833   2.45438987
21-AUG-14 11.00 AM          64       3369    9.14914833   2.45438987
21-AUG-14 11.00 AM         128       2198    9.14914833   2.45438987
21-AUG-14 11.00 AM         256       1009    9.14914833   2.45438987
21-AUG-14 11.00 AM         512        236    9.14914833   2.45438987
21-AUG-14 11.00 AM        1024         19    9.14914833   2.45438987
21-AUG-14 11.00 AM        2048          0    9.14914833   2.45438987
21-AUG-14 02.00 PM           1     522165    2.97787777   1.64840599
21-AUG-14 02.00 PM           2     462917    2.97787777   1.64840599
21-AUG-14 02.00 PM           4     142612    2.97787777   1.64840599
21-AUG-14 02.00 PM           8      17014    2.97787777   1.64840599
21-AUG-14 02.00 PM          16       4656    2.97787777   1.64840599
21-AUG-14 02.00 PM          32       5241    2.97787777   1.64840599
21-AUG-14 02.00 PM          64       1882    2.97787777   1.64840599
21-AUG-14 02.00 PM         128        820    2.97787777   1.64840599
21-AUG-14 02.00 PM         256        149    2.97787777   1.64840599
21-AUG-14 02.00 PM         512         10    2.97787777   1.64840599
21-AUG-14 02.00 PM        1024          2    2.97787777   1.64840599
21-AUG-14 02.00 PM        2048          0    2.97787777   1.64840599

There were 19 waits over half a second in the first hour and only 2 in the second hour.  Maybe all the log file sync waits pile up waiting for those long writes.  Here is a graph that compares the number of waits over half a second – the 1024 ms bucket – to the average log file sync and log file parallel write times for the hour:

(Chart longwrites: waits in the 1024 ms bucket versus average log file sync and log file parallel write times)

You can see that the average redo write time goes up a little but the commit time goes up more.  Maybe commit time is more affected by a few long spikes than by a lot of slightly longer write times.
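
If you want to pull the bucket counts yourself, something along these lines against DBA_HIST_EVENT_HISTOGRAM works.  The counts are cumulative since instance startup, so this sketch diffs consecutive snapshots (single instance assumed; this is not the exact script I used):

select sn.end_interval_time,
       h.wait_time_milli,
       h.wait_count -
         lag(h.wait_count) over
           (partition by h.wait_time_milli order by h.snap_id) bucket_waits
from dba_hist_event_histogram h,
     dba_hist_snapshot sn
where h.snap_id = sn.snap_id
and h.dbid = sn.dbid
and h.instance_number = sn.instance_number
and h.event_name = 'log file parallel write'
order by sn.end_interval_time, h.wait_time_milli;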

Found a cool blog post that seems to explain exactly what we are seeing: blog post (the link no longer exists)
