v$sql_cs_statistics bug in 12c?

I think I’ve run across a bug in 12c where v$sql_cs_statistics is not populated as it was in 11.2.0.3.

I’ve messed with adaptive cursor sharing using an example on Tim Hall’s web site: web example

I’ve tried to break it and change it in various ways to understand adaptive cursor sharing.  I was using some 11g databases, and on those systems the example produces output from v$sql_cs_statistics.  But on 12c the view returns nothing.  Maybe this is normal but it seems bug-like to me.

Here is a zip of my version of Tim’s example and its output on an 11.2.0.3 and a 12.1 database: zip
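
The rough shape of the test, condensed here as a sketch rather than the exact scripts in the zip (the table and column names below are my own for illustration), is to build a table with a skewed column and a histogram, run the same bind-variable query with popular and unpopular values, and then look at the cursor sharing views:

create table acs_test as
select level id,
       case when level <= 50000 then 1 else level end skewed_col,
       rpad('x',100) padding
from dual connect by level <= 100000;

create index acs_test_i on acs_test (skewed_col);

exec dbms_stats.gather_table_stats(user, 'ACS_TEST', method_opt=>'for all columns size 254');

variable b number
exec :b := 1;      -- popular value, many rows
select count(padding) from acs_test where skewed_col = :b;
exec :b := 99999;  -- rare value, few rows
select count(padding) from acs_test where skewed_col = :b;

-- after repeating the executions so the cursor can become bind aware:
select child_number, bind_set_hash_value, peeked, executions, rows_processed
from   v$sql_cs_statistics
where  sql_id = '&sql_id';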

In the working 11.2.0.3 example you have this output from v$sql_cs_statistics:

ORCL::SYSTEM>SELECT * FROM v$sql_cs_statistics WHERE sql_id = '9bmm6cmwa8saf';

ADDRESS          HASH_VALUE SQL_ID        CHILD_NUMBER BIND_SET_HASH_VALUE P EXECUTIONS ROWS_PROCESSED BUFFER_GETS   CPU_TIME
---------------- ---------- ------------- ------------ ------------------- - ---------- -------------- ----------- ----------
0000000089035390 4171522382 9bmm6cmwa8saf            1          2064090006 Y          1          50001         501          0
0000000089035390 4171522382 9bmm6cmwa8saf            0          2342552567 Y          1              3           3          0

On 12c you get nothing:

ORCL:CDB$ROOT:SYSTEM>SELECT * FROM v$sql_cs_statistics WHERE sql_id = '9bmm6cmwa8saf';

no rows selected

The other queries in the example all return results, so I guess it is a bug.
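
For reference, these are the kinds of queries from the example that did still return rows on the 12c database (a sketch; the exact scripts are in the zip above):

select child_number, is_bind_sensitive, is_bind_aware
from   v$sql
where  sql_id = '9bmm6cmwa8saf';

select * from v$sql_cs_histogram   where sql_id = '9bmm6cmwa8saf';
select * from v$sql_cs_selectivity where sql_id = '9bmm6cmwa8saf';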

– Bobby

Updated 12c OCP upgrade Self Test software has more questions

I guess I should have waited around a bit longer before taking my 12c OCP upgrade exam.  I purchased Kaplan’s Self Test software to prepare for the 12c OCP upgrade exam but it only included 50 questions and they only covered the first part of the OCP exam – the part I passed!

I noticed on the Self Test website that the test now includes 105 questions.  So, I got the bright idea of seeing if there was some way to refresh my software with an update and voilà! Now I have 105 questions and 128 flash cards.  I probably should have waited for this practice software to mature before attempting the 12c OCP upgrade test.  Of particular interest to me is that it now includes practice questions related to the second part of the exam, which I failed.

So, hopefully I’ll be in good shape for the retest.  I am doing a lot of hacking around with different features, but I need to figure out a more methodical way to study.  Still, having taken the test once I know some of the features that I need to study.  Combine that with the new questions in the Self Test software and that will get me closer.  Lastly, I pre-ordered Sam Alapati’s book on Amazon, so that should put me over the top.  I was really only two questions away on my first attempt, but I definitely should have waited and done more thorough preparation.

– Bobby

First draft Sudden SQL Slowness Toastmasters talk

I’m doing Toastmasters to try to improve my speaking, and my third talk will be about why a function on a web site can suddenly slow down due to a change in a query’s execution plan.  It has to be 5 to 7 minutes long and the audience is non-technical.

So, just for fun, here is my first draft.  If you feel like giving me any suggestions that would be great.

First draft of Toastmasters Sudden SQL Slowness talk

– Bobby

SQL Profile appears to not work but really does

Over the weekend a coworker applied a SQL Profile to a long-running SQL statement, but it looked like the profile was not applied.  For one thing, the SQL statement ran just as long after applying the profile.  Also, the hash value of the plan the optimizer chose after applying the SQL Profile was not the same as that of the desired plan.  But, after looking at it today I’ve realized that the SQL Profile did apply the desired plan.  It was just that the desired plan wasn’t really any more efficient than the one the optimizer picked on its own.

Here are earlier posts related to our use of SQLT to force a given SQL statement to run with a good plan:

Using coe_xfr_sql_profile.sql to force a good plan

SQLT Emergency Plan Change
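
For anyone who hasn’t used it, the basic flow with coe_xfr_sql_profile.sql (it ships with SQLT and is also available separately from Oracle support) is roughly this, run as a DBA in SQL*Plus:

-- generate a script that captures plan 1394068771 for this sql_id
@coe_xfr_sql_profile.sql fpa2sb9pt9d6s 1394068771

-- then run the generated script to create the profile; the generated file and
-- profile name follow the coe_<sql_id>_<plan_hash_value> pattern seen below
@coe_xfr_sql_profile_fpa2sb9pt9d6s_1394068771.sql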

In this weekend’s case we were looking at sql_id fpa2sb9pt9d6s.  It looked like plan 1394068771 was faster than the most recently used plan because it had a lower average elapsed time:

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
fpa2sb9pt9d6s      1615990741 11-MAY-14 11.00.06.474 AM                1         21550951.2        9720630    467066.276                  0                  27.24              82825.793           126238536            1913606                      0
fpa2sb9pt9d6s      1394068771 18-MAY-14 12.00.52.752 PM                1         1687493.09         657760    402607.781                  0                 39.814               36561.61             4701985             566205                      0
fpa2sb9pt9d6s      1611789738 25-MAY-14 10.00.59.900 AM                1         11390629.8        6835390    462073.968                  0                 32.818              39010.945            86448768             993957                      0

So we used SQLT to force fpa2sb9pt9d6s to use plan 1394068771 but instead it appeared to use plan 420390762 and it didn’t run any faster:

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
fpa2sb9pt9d6s      1615990741 11-MAY-14 11.00.06.474 AM                1         21550951.2        9720630    467066.276                  0                  27.24              82825.793           126238536            1913606                      0
fpa2sb9pt9d6s      1394068771 18-MAY-14 12.00.52.752 PM                1         1687493.09         657760    402607.781                  0                 39.814               36561.61             4701985             566205                      0
fpa2sb9pt9d6s      1611789738 25-MAY-14 10.00.59.900 AM                1         11390629.8        6835390    462073.968                  0                 32.818              39010.945            86448768             993957                      0
fpa2sb9pt9d6s       420390762 25-MAY-14 11.00.27.175 PM                1         19055402.5        8975130    492406.429                  0                  24.93              64279.331           120027479            1943256                      0

But if you look at the plan for fpa2sb9pt9d6s that has hash value 420390762, it says it is using the SQL Profile we created, which should result in a plan with hash value 1394068771:

Note
-----
   - SQL profile "coe_fpa2sb9pt9d6s_1394068771" used for this statement

So, I dumped out both plans, 420390762 and 1394068771, edited out the irrelevant columns from the plan output, and diffed the two.  Here is the result:

45c45
< |  41 |                          TABLE ACCESS FULL      | SYS_TEMP_0FD9D67B9_9DB13F0D |       |       |  Q2,01 | PCWP |            |
---
> |  41 |                          TABLE ACCESS FULL      | SYS_TEMP_0FD9D6897_9DB13F0D |       |       |  Q2,01 | PCWP |            |

420390762 and 1394068771 were identical except that the system-generated temporary segment names were slightly different.  The temp segment name affects the hash value for the plan, so this is a gotcha when using plan hash values to compare plans.  Two distinct plan hash values can really represent the same plan if the plan includes system-generated temporary segments.
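
For reference, this is roughly how you can pull both plans out of AWR to do that kind of diff (the exact format options are a matter of taste):

select * from table(dbms_xplan.display_awr('fpa2sb9pt9d6s',  420390762, null, 'ADVANCED'));
select * from table(dbms_xplan.display_awr('fpa2sb9pt9d6s', 1394068771, null, 'ADVANCED'));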

So, SQLT did successfully apply a given plan to a given sql_id using a SQL Profile but it didn’t improve performance and the resulting plan hash value was not what we expected.

– Bobby

Standalone 12c grid install example

Here are my rough screenshots from an install of 12c grid on a standalone Linux vm: pdf

I’m running version 6 of Oracle’s Linux in a 4 gigabyte VMware Player virtual machine on my Windows 7 laptop.  I ran into a couple of key points that I wanted to document.

(screenshot: newdisks)

I added two new disks to my VM, but after rebooting the machine they disappeared.  I ended up adding them back in and rebooting again, and this time they stayed.  Not sure what to say except that after you add them, reboot and make sure they are still there.

(screenshot: diskdevices)

The new disk devices showed up as /dev/sdb and /dev/sdc.  Then I had to run fdisk on each one to create a partition that took up the entire disk.  That created the partition devices /dev/sdb1 and /dev/sdc1.
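
For what it’s worth, the partitioning step was nothing fancy; something like this, answering the fdisk prompts to create one primary partition per disk:

# keystrokes inside fdisk: n (new), p (primary), 1, accept the default
# first and last sectors, then w (write and exit)
fdisk /dev/sdb
fdisk /dev/sdc

# confirm the new partition devices exist
ls -l /dev/sdb1 /dev/sdc1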

Next I had to do some steps related to the oracleasm utility, but I found out later that the step I should have done first was disabling SELinux:

(screenshot: disableselinux)

I had to edit the file /etc/selinux/config so that SELINUX=disabled and reboot.  Then I ran these commands to configure oracleasm:

yum install oracleasm-support
/usr/sbin/oracleasm configure -i
/usr/sbin/oracleasm init
/usr/sbin/oracleasm createdisk DISK1 /dev/sdb1
/usr/sbin/oracleasm createdisk DISK2 /dev/sdc1
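
A few sanity checks I’d suggest after the createdisk commands (these are standard oracleasm and Linux tools, though the exact output is from memory, so treat it as a sketch):

/usr/sbin/oracleasm listdisks              # should list DISK1 and DISK2
/usr/sbin/oracleasm querydisk -p DISK1     # shows which device backs the disk
ls /dev/oracleasm/disks/                   # device files oracleasm creates
getenforce                                 # confirm SELinux really is disabled after the reboot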

The next tricky thing was figuring out the path to these two disks during the creation of the disk group as part of the grid install.  Here is what the install screen looked like:

(screenshot: noasmdisks)

It didn’t have any disks to choose from.  I tried various discovery paths such as /dev/sd* and ORCL:* but finally found one that worked:

(screenshot: oracleasmdiscoverypath)

Now the disks showed up and I was able to continue:

(screenshot: asmdisksvisible)

So, that’s all I wanted to point out, just a few key things I had to understand.  You can check out my link for all the screenshots I took.

– Bobby

12c Online Table Redefinition Examples

I’ve been learning about online table redefinition in Oracle 12c.  Here is a zip of some test scripts I built for my own understanding: zip

I spent some time trying out DBMS_REDEFINITION on an 11.2 Exadata test system and then started messing with it on 12c in a generic Linux VM.  The 12c version of DBMS_REDEFINITION includes a new procedure called REDEF_TABLE, which lets you do in one step certain things that required multiple calls to the package in 11.2.  Here is an online table compression on 12c:

BEGIN
  DBMS_REDEFINITION.REDEF_TABLE(
    uname                      => user,
    tname                      => 'TEST',
    table_compression_type     => 'COMPRESS');
END;
/

Things that can’t be done in one step like this require calls to procedures such as CAN_REDEF_TABLE, START_REDEF_TABLE, REGISTER_DEPENDENT_OBJECT, COPY_TABLE_DEPENDENTS, and FINISH_REDEF_TABLE.  Example online12c5.sql uses all of these; a rough sketch of the multi-step flow appears after the list below.  Here is a summary of each included file’s test:

online12c1.sql – compress table in one step

online12c2.sql – compress table in multiple steps and show that it creates a hidden column without a primary key

online12c3.sql – same as previous test but with primary key

online12c4.sql – copy constraints and indexes

online12c5.sql – change the columns for the non-unique index

online12c6.sql – change order and type of columns
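
And here is the promised sketch of the multi-step flow.  The names are made up for illustration and it leaves out the structural changes you would make to the interim table; see online12c5.sql in the zip for a real working example:

-- interim table with the structure you want the table to end up with
CREATE TABLE TEST_INT AS SELECT * FROM TEST WHERE 1=2;

DECLARE
  num_errors PLS_INTEGER;
BEGIN
  -- raises an error if the table cannot be redefined online
  DBMS_REDEFINITION.CAN_REDEF_TABLE(user, 'TEST');

  DBMS_REDEFINITION.START_REDEF_TABLE(user, 'TEST', 'TEST_INT');

  -- copy indexes, triggers, constraints, etc.; REGISTER_DEPENDENT_OBJECT is the
  -- alternative when you want to supply your own version of a dependent object
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
    uname      => user,
    orig_table => 'TEST',
    int_table  => 'TEST_INT',
    num_errors => num_errors);

  DBMS_REDEFINITION.FINISH_REDEF_TABLE(user, 'TEST', 'TEST_INT');
END;
/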

– Bobby

Create new cdb with two pdbs

Quick post.  I’m messing with 12c again.  I finished recreating my cdb, but this time I found the option to create multiple pdbs during the initial creation.  I’m using DBCA.

PDF with screenshots of me creating a cdb with two pdbs
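
For anyone who prefers the command line, I believe the equivalent silent-mode DBCA call looks roughly like this.  I used the GUI, so treat this as a sketch: the template name and passwords are placeholders and the flags are from my reading of the 12c dbca help.

dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbName cdb1 -sid cdb1 \
  -createAsContainerDatabase true \
  -numberOfPDBs 2 -pdbName pdb \
  -pdbAdminPassword ChangeMe_1 \
  -sysPassword ChangeMe_1 -systemPassword ChangeMe_1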

– Bobby

Simple tuning example but uncertain test results

Last week I came across what seemed like a simple query tuning problem.  A PeopleSoft batch job ran for many hours, and when I did an AWR report I found that the top query was doing a full scan where an index should help.

Here is the query and its bad plan:

SQL_ID 1jx5w9ybgb51g
--------------------
UPDATE PS_JGEN_ACCT_ENTRY 
SET 
JOURNAL_ID = :1, 
JOURNAL_DATE = TO_DATE(:2,'YYYY-MM-DD'), 
FISCAL_YEAR = :3,
ACCOUNTING_PERIOD = :4,
GL_DISTRIB_STATUS = 'D', 
JOURNAL_LINE = :5 
WHERE 
PROCESS_INSTANCE = 6692638 AND
GL_DISTRIB_STATUS = 'J'  AND 
ACCOUNT=:6 AND 
DEPTID=:7 AND 
CURRENCY_CD=:8 AND
FOREIGN_CURRENCY=:9

Plan hash value: 1919611120

-----------------------------------------------------------------------------------------
| Id  | Operation          | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT   |                    |       |       | 21649 (100)|          |
|   1 |  UPDATE            | PS_JGEN_ACCT_ENTRY |       |       |            |          |
|   2 |   TABLE ACCESS FULL| PS_JGEN_ACCT_ENTRY |     1 |    58 | 21649   (5)| 00:01:27 |
-----------------------------------------------------------------------------------------

The problematic batch job ran variations of this query with different literal values of PROCESS_INSTANCE corresponding to each flat file being loaded.  Three updates of this type were in the AWR report for the 16-hour period that covered the run of the batch job:

  Elapsed      CPU                  Elap per  % Total
  Time (s)   Time (s)  Executions   Exec (s)  DB Time    SQL Id
---------- ---------- ------------ ---------- ------- -------------
    16,899      5,836        3,811        4.4     3.5 4h54qqmbkynaj

UPDATE PS_JGEN_ACCT_ENTRY SET JOURNAL_ID = :1, JOURNAL_DATE = TO_DATE(:2,'YYYY-M
M-DD'), FISCAL_YEAR = :3, ACCOUNTING_PERIOD = :4, GL_DISTRIB_STATUS = 'D', JOURN
AL_LINE = :5 WHERE PROCESS_INSTANCE = 6692549 AND GL_DISTRIB_STATUS = 'J' AND A
CCOUNT=:6 AND DEPTID=:7 AND CURRENCY_CD=:8 AND FOREIGN_CURRENCY=:9

     6,170      2,190        1,480        4.2     1.3 a5rd6vx6sm8p3

UPDATE PS_JGEN_ACCT_ENTRY SET JOURNAL_ID = :1, JOURNAL_DATE = TO_DATE(:2,'YYYY-M
M-DD'), FISCAL_YEAR = :3, ACCOUNTING_PERIOD = :4, GL_DISTRIB_STATUS = 'D', JOURN
AL_LINE = :5 WHERE PROCESS_INSTANCE = 6692572 AND GL_DISTRIB_STATUS = 'J' AND A
CCOUNT=:6 AND DEPTID=:7 AND CURRENCY_CD=:8 AND FOREIGN_CURRENCY=:9

     6,141      1,983        1,288        4.8     1.3 1jx5w9ybgb51g

UPDATE PS_JGEN_ACCT_ENTRY SET JOURNAL_ID = :1, JOURNAL_DATE = TO_DATE(:2,'YYYY-M
M-DD'), FISCAL_YEAR = :3, ACCOUNTING_PERIOD = :4, GL_DISTRIB_STATUS = 'D', JOURN
AL_LINE = :5 WHERE PROCESS_INSTANCE = 6692638 AND GL_DISTRIB_STATUS = 'J' AND A
CCOUNT=:6 AND DEPTID=:7 AND CURRENCY_CD=:8 AND FOREIGN_CURRENCY=:9

The batch job ran about 15 and a half hours, so these three plus others like them surely combined to make up the bulk of the run time.

It made sense to me to just add an index on all the columns in the where clause: PROCESS_INSTANCE, GL_DISTRIB_STATUS, ACCOUNT, DEPTID, CURRENCY_CD, FOREIGN_CURRENCY.
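
A sketch of the index I had in mind is below.  The index name is made up, and on a busy system I would also consider building it ONLINE:

CREATE INDEX SYSADM.PS_JGEN_ACCT_ENTRY_IX1
  ON SYSADM.PS_JGEN_ACCT_ENTRY
  (PROCESS_INSTANCE, GL_DISTRIB_STATUS, ACCOUNT, DEPTID,
   CURRENCY_CD, FOREIGN_CURRENCY);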

Just to check how selective this combination of columns might be I did a count on each grouping of these columns and came up with about 50 rows per combination:

>select max(cnt),avg(cnt),min(cnt)
  2  from
  3  (select
  4  PROCESS_INSTANCE,
  5  GL_DISTRIB_STATUS,
  6  ACCOUNT,
  7  DEPTID,
  8  CURRENCY_CD,
  9  FOREIGN_CURRENCY,
 10  count(*) cnt
 11  from sysadm.PS_JGEN_ACCT_ENTRY
 12  group by
 13  PROCESS_INSTANCE,
 14  GL_DISTRIB_STATUS,
 15  ACCOUNT,
 16  DEPTID,
 17  CURRENCY_CD,
 18  FOREIGN_CURRENCY);

  MAX(CNT)   AVG(CNT)   MIN(CNT)
---------- ---------- ----------
      9404  50.167041          1

The table itself has 3 million rows so this is pretty selective:

OWNER                TABLE_NAME                       NUM_ROWS     BLOCKS AVG_ROW_LEN SAMPLE_SIZE LAST_ANALYZED       DEGREE     INSTANCES
-------------------- ------------------------------ ---------- ---------- ----------- ----------- ------------------- ---------- ----------
SYSADM               PS_JGEN_ACCT_ENTRY                3145253      82857         204     3145253 2014-04-21 21:07:02          1          1
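
The table figures above came from something like this query against the dictionary:

select owner, table_name, num_rows, blocks, avg_row_len,
       sample_size, last_analyzed, degree, instances
from   dba_tables
where  owner = 'SYSADM'
and    table_name = 'PS_JGEN_ACCT_ENTRY';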

But the strange thing was that when we added the index on our test system we didn’t see any performance improvement!  We ran the largest production file on test and it ran in ten minutes with or without the index.  Yack!

So, I tried my own test in sqlplus with the select equivalent of the update and hard-coded values instead of bind variables – quick and dirty.  I thought I had extracted some valid values, although I later realized they weren’t.  Here is what I ran; notice the full scan ran just as fast as the indexed version:

>select * from
  2  sysadm.PS_JGEN_ACCT_ENTRY
  3  WHERE PROCESS_INSTANCE = 6138803 AND
  4  GL_DISTRIB_STATUS = 'J'  AND ACCOUNT=1234567 AND DEPTID=567 AND CURRENCY_CD='USD' AND
  5  FOREIGN_CURRENCY = NULL;

no rows selected

Elapsed: 00:00:00.30

Execution Plan
----------------------------------------------------------
Plan hash value: 1762298626

---------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                    |     1 |   203 |     0   (0)|          |
|*  1 |  FILTER                      |                    |       |       |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| PS_JGEN_ACCT_ENTRY |     1 |   203 |     5   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | PSAJGEN_ACCT_ENTRY |     1 |       |     4   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(NULL IS NOT NULL)
   3 - access("PROCESS_INSTANCE"=6138803 AND "GL_DISTRIB_STATUS"='J' AND
              "CURRENCY_CD"='USD')
       filter(TO_NUMBER("ACCOUNT")=1234567 AND TO_NUMBER("DEPTID")=567 AND
              "CURRENCY_CD"='USD')


Statistics
----------------------------------------------------------
       1761  recursive calls
          0  db block gets
        557  consistent gets
         14  physical reads
          0  redo size
       1866  bytes sent via SQL*Net to client
        239  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          5  sorts (memory)
          0  sorts (disk)
          0  rows processed

>
>select /*+full(PS_JGEN_ACCT_ENTRY) */ * from
  2  sysadm.PS_JGEN_ACCT_ENTRY
  3  WHERE PROCESS_INSTANCE = 6138803 AND
  4  GL_DISTRIB_STATUS = 'J'  AND ACCOUNT=1234567 AND DEPTID=567 AND CURRENCY_CD='USD' AND
  5  FOREIGN_CURRENCY = NULL;

no rows selected

Elapsed: 00:00:00.17

Execution Plan
----------------------------------------------------------
Plan hash value: 3728573827

-----------------------------------------------------------------------------------------
| Id  | Operation          | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                    |     1 |   203 |     0   (0)|          |
|*  1 |  FILTER            |                    |       |       |            |          |
|*  2 |   TABLE ACCESS FULL| PS_JGEN_ACCT_ENTRY |     1 |   203 | 12185   (2)| 00:02:27 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(NULL IS NOT NULL)
   2 - filter("PROCESS_INSTANCE"=6138803 AND "GL_DISTRIB_STATUS"='J' AND
              TO_NUMBER("ACCOUNT")=1234567 AND TO_NUMBER("DEPTID")=567 AND "CURRENCY_CD"='USD')


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
       1873  bytes sent via SQL*Net to client
        239  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed

It looks like I passed in a NULL for a column with a NOT NULL constraint, and that is what made the full scan version just as fast as the indexed one.  A predicate like FOREIGN_CURRENCY = NULL can never be true, since an equality comparison with NULL never evaluates to true, so the optimizer added the always-false startup filter NULL IS NOT NULL.  With both plans the database realized immediately that no rows could match this bogus collection of constants, without ever touching the table.  So, I then replaced the NULL with a zero and finally we had proof of the performance improvement from the index:

>select * from
  2  sysadm.PS_JGEN_ACCT_ENTRY
  3  WHERE PROCESS_INSTANCE = 6138803 AND
  4  GL_DISTRIB_STATUS = 'J'  AND ACCOUNT=1234567 AND DEPTID=567 AND CURRENCY_CD='USD' AND
  5  FOREIGN_CURRENCY = 0;

no rows selected

Elapsed: 00:00:00.02

Execution Plan
----------------------------------------------------------
Plan hash value: 2047014499

--------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                    |     1 |   203 |     5   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| PS_JGEN_ACCT_ENTRY |     1 |   203 |     5   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | PSAJGEN_ACCT_ENTRY |     1 |       |     4   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("PROCESS_INSTANCE"=6138803 AND "GL_DISTRIB_STATUS"='J' AND
              "CURRENCY_CD"='USD')
       filter(TO_NUMBER("ACCOUNT")=1234567 AND TO_NUMBER("DEPTID")=567 AND
              TO_NUMBER("FOREIGN_CURRENCY")=0 AND "CURRENCY_CD"='USD')


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          3  consistent gets
          0  physical reads
          0  redo size
       1866  bytes sent via SQL*Net to client
        239  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed

>
>select /*+full(PS_JGEN_ACCT_ENTRY) */ * from
  2  sysadm.PS_JGEN_ACCT_ENTRY
  3  WHERE PROCESS_INSTANCE = 6138803 AND
  4  GL_DISTRIB_STATUS = 'J'  AND ACCOUNT=1234567 AND DEPTID=567 AND CURRENCY_CD='USD' AND
  5  FOREIGN_CURRENCY = 0;

no rows selected

Elapsed: 00:00:37.11

Execution Plan
----------------------------------------------------------
Plan hash value: 1758291200

----------------------------------------------------------------------------------------
| Id  | Operation         | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                    |     1 |   203 | 12185   (2)| 00:02:27 |
|*  1 |  TABLE ACCESS FULL| PS_JGEN_ACCT_ENTRY |     1 |   203 | 12185   (2)| 00:02:27 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("PROCESS_INSTANCE"=6138803 AND "GL_DISTRIB_STATUS"='J' AND
              TO_NUMBER("ACCOUNT")=1234567 AND TO_NUMBER("DEPTID")=567 AND
              TO_NUMBER("FOREIGN_CURRENCY")=0 AND "CURRENCY_CD"='USD')


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
      56110  consistent gets
      55409  physical reads
          0  redo size
       1873  bytes sent via SQL*Net to client
        239  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed

So, I can’t tell you exactly what happened in test, but I suspect that we were passing a null into one of the bind variables (perhaps because some data was missing when we ran a production file on an out-of-date test system) and got a similarly cheap filter.  But once I forced the equivalent of the production full scan by supplying non-null values for all the constants, the value of the index became clear.  It went into production last week and this weekend’s run took a few minutes instead of 15 hours.

– Bobby

Epic fail: Flunked my 12c OCP upgrade test

Well, I took the Oracle 12c OCP upgrade exam this morning and didn’t pass it.  I’ve spent many hours over the past weeks and months reading up on new 12c features and it wasn’t enough.  I also installed 12c databases and tested out a variety of features to no avail.

Ultimately my downfall was that I tried to prepare for the test from the manuals and the descriptions of what the test covered.  Also, I purchased the Kaplan TestPrep software which only covered the first part of the 12c upgrade exam and only included 50 questions.

I should have waited until an upgrade exam book came out and I should have gotten the more comprehensive Transcender software so I would have a more complete set of questions and a better idea what would be covered.  I can’t study everything.

If you don’t know, the 12c OCP upgrade exam has a new “Key DBA Skills” section which wasn’t present in earlier exams.  You have to pass both sections.  The first section covers the new 12c features and corresponds to the previous exams.  I passed this section, although barely, even though I felt confident going in that I would get a high percentage.  The breadth of the topics covered by the second section worried me because I can’t study everything, and it definitely included things I didn’t study, including some features I’ve never used.

Both parts of the exam were challenging.  It seems like a pretty tough test.  I’ve got my Oracle 7, 9, 10, and 11 certifications and I passed all of those tests on the first try, so this is my first failure.  So, I’m trying to regroup and think about where to go from here.

Ideally, I’d like to get Sam Alapati’s book after it comes out on Amazon and get the Transcender software as well, but that costs some money.  Also, I’m thinking I need to take some time and write up some study sheets for myself instead of trying to commit everything to memory and hoping I remember it during the test.

Anyway, I thought I would share my failure with the community and hope it helps someone else prepare.  The truth is that even though it is embarrassing to fail the test I learned things in the process that I can use at my job.  It would be great to get the piece of paper and I hope to do so by the end of the year, but I’ve already learned a ton through what I’ve done so far.

– Bobby

Useful Oracle 12c OCP exam blog post

I found this blog post about the Oracle 12c OCP exam useful: NO LONGER EXISTS

In particular, it explained why my Kaplan SelfTest software only covers the new 12c features section and not the general DBA skills section of the OCP exam.

The Kaplan software I purchased has 50 questions, and they are only about new features.  The software showed me the gaps in my 12c new features knowledge and gave me practice taking a multiple-choice computerized test, and I believe the value of these benefits exceeds the $99 cost of the software.  But the software surprised me when I discovered that it didn’t cover all the areas that will be on the OCP 12c upgrade exam.  The blog post I’ve referenced explains that in the near future Transcender will produce software that covers both sections of the OCP 12c upgrade exam.

– Bobby
