Tuesday, April 30, 2019

Using RULE Based Optimizer When Querying GV$ Views

I received a complaint from one of the readers that the dbalarm database-monitoring script was taking a very long time to run against one RAC DB [11.2.0.3], especially in the part that reports locked sessions on the database. When I dug deeper, I found that most of the time was being consumed by this statement:

select
substr(s.INST_ID||'|'||s.OSUSER||'/'||s.USERNAME||'| '||s.sid||','||s.serial#||' |'||substr(s.MACHINE,1,22)||'|'||substr(s.MODULE,1,18),1,75)"I|OS/DB USER|SID,SER|MACHN|MOD"
,substr(s.status||'|'||round(w.WAIT_TIME_MICRO/1000000)||'|'||LAST_CALL_ET||'|'||to_char(LOGON_TIME,'ddMon HH24:MI'),1,34) "ST|WAITD|ACT_SINC|LOGIN"
,substr(w.event,1,24) "EVENT"
,s.PREV_SQL_ID||'|'||s.SQL_ID||'|'||round(w.TIME_REMAINING_MICRO/1000000) "PREV|CURRENT_SQL|REMAIN_SEC"
from    gv$session s, gv$session_wait w
where   s.sid in (select distinct FINAL_BLOCKING_SESSION from gv$session where FINAL_BLOCKING_SESSION is not null)
and     s.USERNAME is not null
and     s.sid=w.sid
and     s.FINAL_BLOCKING_SESSION is null
/

Elapsed: 00:00:11.34

The execution plan showed this:
-------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name            | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |                 |     1 |   271 |     1 (100)| 00:00:01 |        |      |            |
|*  1 |  FILTER                         |                 |       |       |            |          |        |      |            |
|*  2 |   HASH JOIN                     |                 |   100 | 27100 |     1 (100)| 00:00:01 |        |      |            |
|   3 |    PX COORDINATOR               |                 |     1 |   198 |     0   (0)| 00:00:01 |        |      |            |
|   4 |     PX SEND QC (RANDOM)         | :TQ20000        |     1 |   270 |     0   (0)| 00:00:01 |  Q2,00 | P->S | QC (RAND)  |
|*  5 |      VIEW                       | GV$SESSION      |       |       |            |          |  Q2,00 | PCWP |            |
|   6 |       NESTED LOOPS              |                 |     1 |   270 |     0   (0)| 00:00:01 |  Q2,00 | PCWP |            |
|   7 |        NESTED LOOPS             |                 |     1 |   257 |     0   (0)| 00:00:01 |  Q2,00 | PCWP |            |
|*  8 |         FIXED TABLE FULL        | X$KSUSE         |     1 |   231 |     0   (0)| 00:00:01 |  Q2,00 | PCWP |            |
|*  9 |         FIXED TABLE FIXED INDEX | X$KSLWT (ind:1) |     1 |    26 |     0   (0)| 00:00:01 |  Q2,00 | PCWP |            |
|* 10 |        FIXED TABLE FIXED INDEX  | X$KSLED (ind:2) |     1 |    13 |     0   (0)| 00:00:01 |  Q2,00 | PCWP |            |
|  11 |    PX COORDINATOR               |                 |   100 |  7300 |     0   (0)| 00:00:01 |        |      |            |
|  12 |     PX SEND QC (RANDOM)         | :TQ30000        |   100 | 12500 |     0   (0)| 00:00:01 |  Q3,00 | P->S | QC (RAND)  |
|  13 |      VIEW                       | GV$SESSION_WAIT |       |       |            |          |  Q3,00 | PCWP |            |
|  14 |       NESTED LOOPS              |                 |   100 | 12500 |     0   (0)| 00:00:01 |  Q3,00 | PCWP |            |
|  15 |        FIXED TABLE FULL         | X$KSLWT         |   100 |  7800 |     0   (0)| 00:00:01 |  Q3,00 | PCWP |            |
|* 16 |        FIXED TABLE FIXED INDEX  | X$KSLED (ind:2) |     1 |    47 |     0   (0)| 00:00:01 |  Q3,00 | PCWP |            |
|  17 |   PX COORDINATOR                |                 |     1 |    13 |     0   (0)| 00:00:01 |        |      |            |
|  18 |    PX SEND QC (RANDOM)          | :TQ10000        |     1 |    91 |     0   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
|* 19 |     VIEW                        | GV$SESSION      |       |       |            |          |  Q1,00 | PCWP |            |
|  20 |      NESTED LOOPS               |                 |     1 |    91 |     0   (0)| 00:00:01 |  Q1,00 | PCWP |            |
|  21 |       NESTED LOOPS              |                 |     1 |    78 |     0   (0)| 00:00:01 |  Q1,00 | PCWP |            |
|* 22 |        FIXED TABLE FULL         | X$KSUSE         |     1 |    52 |     0   (0)| 00:00:01 |  Q1,00 | PCWP |            |
|* 23 |        FIXED TABLE FIXED INDEX  | X$KSLWT (ind:1) |     1 |    26 |     0   (0)| 00:00:01 |  Q1,00 | PCWP |            |
|* 24 |       FIXED TABLE FIXED INDEX   | X$KSLED (ind:2) |     1 |    13 |     0   (0)| 00:00:01 |  Q1,00 | PCWP |            |
-------------------------------------------------------------------------------------------------------------------------------

When I traced the session, I found the following in the trace file:

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  FILTER  (cr=0 pr=0 pw=0 time=11647894 us)
      2818       2818       2818   HASH JOIN  (cr=0 pr=0 pw=0 time=74814 us cost=1 size=27100 card=100)
      1875       1875       1875    PX COORDINATOR  (cr=0 pr=0 pw=0 time=35972 us cost=0 size=198 card=1)
         0          0          0     PX SEND QC (RANDOM) :TQ20000 (cr=0 pr=0 pw=0 time=0 us cost=0 size=270 card=1)
         0          0          0      VIEW  GV$SESSION (cr=0 pr=0 pw=0 time=0 us)
         0          0          0       NESTED LOOPS  (cr=0 pr=0 pw=0 time=0 us cost=0 size=270 card=1)
         0          0          0        NESTED LOOPS  (cr=0 pr=0 pw=0 time=0 us cost=0 size=257 card=1)
         0          0          0         FIXED TABLE FULL X$KSUSE (cr=0 pr=0 pw=0 time=0 us cost=0 size=231 card=1)
         0          0          0         FIXED TABLE FIXED INDEX X$KSLWT (ind:1) (cr=0 pr=0 pw=0 time=0 us cost=0 size=26 card=1)
         0          0          0        FIXED TABLE FIXED INDEX X$KSLED (ind:2) (cr=0 pr=0 pw=0 time=0 us cost=0 size=13 card=1)
      2018       2018       2018    PX COORDINATOR  (cr=0 pr=0 pw=0 time=5021 us cost=0 size=7300 card=100)
         0          0          0     PX SEND QC (RANDOM) :TQ30000 (cr=0 pr=0 pw=0 time=0 us cost=0 size=12500 card=100)
         0          0          0      VIEW  GV$SESSION_WAIT (cr=0 pr=0 pw=0 time=0 us)
         0          0          0       NESTED LOOPS  (cr=0 pr=0 pw=0 time=0 us cost=0 size=12500 card=100)
         0          0          0        FIXED TABLE FULL X$KSLWT (cr=0 pr=0 pw=0 time=0 us cost=0 size=7800 card=100)
         0          0          0        FIXED TABLE FIXED INDEX X$KSLED (ind:2) (cr=0 pr=0 pw=0 time=0 us cost=0 size=47 card=1)
         0          0          0   PX COORDINATOR  (cr=0 pr=0 pw=0 time=11562142 us cost=0 size=13 card=1)
         0          0          0    PX SEND QC (RANDOM) :TQ10000 (cr=0 pr=0 pw=0 time=7157212 us cost=0 size=91 card=1)
         0          0          0     VIEW  GV$SESSION (cr=0 pr=0 pw=0 time=7156047 us)
   1488896    1488896    1488896      NESTED LOOPS  (cr=0 pr=0 pw=0 time=6553126 us cost=0 size=91 card=1)
   1488896    1488896    1488896       NESTED LOOPS  (cr=0 pr=0 pw=0 time=5569809 us cost=0 size=78 card=1)
   1488896    1488896    1488896        FIXED TABLE FULL X$KSUSE (cr=0 pr=0 pw=0 time=3967204 us cost=0 size=52 card=1)
   1488896    1488896    1488896        FIXED TABLE FIXED INDEX X$KSLWT (ind:1) (cr=0 pr=0 pw=0 time=1142831 us cost=0 size=26 card=1)
   1488896    1488896    1488896       FIXED TABLE FIXED INDEX X$KSLED (ind:2) (cr=0 pr=0 pw=0 time=567563 us cost=0 size=13 card=1)

OK. Now I can see that most of the time is being consumed by the fixed tables in the last GV$SESSION branch of the plan (X$KSUSE, X$KSLWT, and X$KSLED, each returning 1,488,896 rows).

Now let's execute the same statement, but this time using the RULE-based optimizer (RBO):

select  /*+RULE*/
substr(s.INST_ID||'|'||s.OSUSER||'/'||s.USERNAME||'| '||s.sid||','||s.serial#||' |'||substr(s.MACHINE,1,22)||'|'||substr(s.MODULE,1,18),1,75)"I|OS/DB USER|SID,SER|MACHN|MOD"
,substr(s.status||'|'||round(w.WAIT_TIME_MICRO/1000000)||'|'||LAST_CALL_ET||'|'||to_char(LOGON_TIME,'ddMon HH24:MI'),1,34) "ST|WAITD|ACT_SINC|LOGIN"
,substr(w.event,1,24) "EVENT"
,s.PREV_SQL_ID||'|'||s.SQL_ID||'|'||round(w.TIME_REMAINING_MICRO/1000000) "PREV|CURRENT_SQL|REMAIN_SEC"
from    gv$session s, gv$session_wait w
where   s.sid in (select distinct FINAL_BLOCKING_SESSION from gv$session where FINAL_BLOCKING_SESSION is not null)
and     s.USERNAME is not null
and     s.sid=w.sid
and     s.FINAL_BLOCKING_SESSION is null
/


Elapsed: 00:00:00.09

Wow, the RULE-based optimizer (RBO) ran the statement in 9 centiseconds, much faster than the CBO, which ran it in 11 seconds. How did this happen?

The following was the execution plan when using the RBO:
------------------------------------------------------------------------------------
| Id  | Operation                    | Name            |    TQ  |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                 |        |      |            |
|   1 |  MERGE JOIN                  |                 |        |      |            |
|   2 |   MERGE JOIN                 |                 |        |      |            |
|   3 |    SORT JOIN                 |                 |        |      |            |
|   4 |     PX COORDINATOR           |                 |        |      |            |
|   5 |      PX SEND QC (RANDOM)     | :TQ10000        |  Q1,00 | P->S | QC (RAND)  |
|   6 |       VIEW                   | GV$SESSION_WAIT |  Q1,00 | PCWP |            |
|   7 |        MERGE JOIN            |                 |  Q1,00 | PCWP |            |
|   8 |         FIXED TABLE FULL     | X$KSLED         |  Q1,00 | PCWP |            |
|*  9 |         SORT JOIN            |                 |  Q1,00 | PCWP |            |
|  10 |          FIXED TABLE FULL    | X$KSLWT         |  Q1,00 | PCWP |            |
|* 11 |    SORT JOIN                 |                 |        |      |            |
|  12 |     PX COORDINATOR           |                 |        |      |            |
|  13 |      PX SEND QC (RANDOM)     | :TQ20000        |  Q2,00 | P->S | QC (RAND)  |
|* 14 |       VIEW                   | GV$SESSION      |  Q2,00 | PCWP |            |
|  15 |        MERGE JOIN            |                 |  Q2,00 | PCWP |            |
|  16 |         SORT JOIN            |                 |  Q2,00 | PCWP |            |
|  17 |          MERGE JOIN          |                 |  Q2,00 | PCWP |            |
|  18 |           SORT JOIN          |                 |  Q2,00 | PCWP |            |
|  19 |            FIXED TABLE FULL  | X$KSLWT         |  Q2,00 | PCWP |            |
|* 20 |           SORT JOIN          |                 |  Q2,00 | PCWP |            |
|  21 |            FIXED TABLE FULL  | X$KSLED         |  Q2,00 | PCWP |            |
|* 22 |         SORT JOIN            |                 |  Q2,00 | PCWP |            |
|* 23 |          FIXED TABLE FULL    | X$KSUSE         |  Q2,00 | PCWP |            |
|* 24 |   SORT JOIN                  |                 |        |      |            |
|  25 |    VIEW                      | VW_NSO_1        |        |      |            |
|  26 |     SORT UNIQUE              |                 |        |      |            |
|  27 |      PX COORDINATOR          |                 |        |      |            |
|  28 |       PX SEND QC (RANDOM)    | :TQ30000        |  Q3,00 | P->S | QC (RAND)  |
|* 29 |        VIEW                  | GV$SESSION      |  Q3,00 | PCWP |            |
|  30 |         MERGE JOIN           |                 |  Q3,00 | PCWP |            |
|  31 |          SORT JOIN           |                 |  Q3,00 | PCWP |            |
|  32 |           MERGE JOIN         |                 |  Q3,00 | PCWP |            |
|  33 |            SORT JOIN         |                 |  Q3,00 | PCWP |            |
|  34 |             FIXED TABLE FULL | X$KSLWT         |  Q3,00 | PCWP |            |
|* 35 |            SORT JOIN         |                 |  Q3,00 | PCWP |            |
|  36 |             FIXED TABLE FULL | X$KSLED         |  Q3,00 | PCWP |            |
|* 37 |          SORT JOIN           |                 |  Q3,00 | PCWP |            |
|* 38 |           FIXED TABLE FULL   | X$KSUSE         |  Q3,00 | PCWP |            |
------------------------------------------------------------------------------------

Checking the trace for the execution time when using RBO:

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  MERGE JOIN  (cr=0 pr=0 pw=0 time=87623 us)
         1          1          1   MERGE JOIN  (cr=0 pr=0 pw=0 time=65256 us)
         7          7          7    SORT JOIN (cr=0 pr=0 pw=0 time=23492 us)
      2024       2024       2024     PX COORDINATOR  (cr=0 pr=0 pw=0 time=14913 us)
         0          0          0      PX SEND QC (RANDOM) :TQ10000 (cr=0 pr=0 pw=0 time=0 us)
         0          0          0       VIEW  GV$SESSION_WAIT (cr=0 pr=0 pw=0 time=0 us)
         0          0          0        MERGE JOIN  (cr=0 pr=0 pw=0 time=0 us)
         0          0          0         FIXED TABLE FULL X$KSLED (cr=0 pr=0 pw=0 time=0 us)
         0          0          0         SORT JOIN (cr=0 pr=0 pw=0 time=0 us)
         0          0          0          FIXED TABLE FULL X$KSLWT (cr=0 pr=0 pw=0 time=0 us)
         1          1          1    SORT JOIN (cr=0 pr=0 pw=0 time=41766 us)
      1877       1877       1877     PX COORDINATOR  (cr=0 pr=0 pw=0 time=29246 us)
         0          0          0      PX SEND QC (RANDOM) :TQ20000 (cr=0 pr=0 pw=0 time=0 us)
         0          0          0       VIEW  GV$SESSION (cr=0 pr=0 pw=0 time=0 us)
         0          0          0        MERGE JOIN  (cr=0 pr=0 pw=0 time=0 us)
         0          0          0         SORT JOIN (cr=0 pr=0 pw=0 time=0 us)
         0          0          0          MERGE JOIN  (cr=0 pr=0 pw=0 time=0 us)
         0          0          0           SORT JOIN (cr=0 pr=0 pw=0 time=0 us)
         0          0          0            FIXED TABLE FULL X$KSLWT (cr=0 pr=0 pw=0 time=0 us)
         0          0          0           SORT JOIN (cr=0 pr=0 pw=0 time=0 us)
         0          0          0            FIXED TABLE FULL X$KSLED (cr=0 pr=0 pw=0 time=0 us)
         0          0          0         SORT JOIN (cr=0 pr=0 pw=0 time=0 us)
         0          0          0          FIXED TABLE FULL X$KSUSE (cr=0 pr=0 pw=0 time=0 us)
         0          0          0   SORT JOIN (cr=0 pr=0 pw=0 time=22335 us)
         0          0          0    VIEW  VW_NSO_1 (cr=0 pr=0 pw=0 time=22327 us)
         0          0          0     SORT UNIQUE (cr=0 pr=0 pw=0 time=22325 us)
         0          0          0      PX COORDINATOR  (cr=0 pr=0 pw=0 time=22293 us)
         0          0          0       PX SEND QC (RANDOM) :TQ30000 (cr=0 pr=0 pw=0 time=0 us)
         0          0          0        VIEW  GV$SESSION (cr=0 pr=0 pw=0 time=0 us)
         0          0          0         MERGE JOIN  (cr=0 pr=0 pw=0 time=0 us)
         0          0          0          SORT JOIN (cr=0 pr=0 pw=0 time=0 us)
         0          0          0           MERGE JOIN  (cr=0 pr=0 pw=0 time=0 us)
         0          0          0            SORT JOIN (cr=0 pr=0 pw=0 time=0 us)
         0          0          0             FIXED TABLE FULL X$KSLWT (cr=0 pr=0 pw=0 time=0 us)
         0          0          0            SORT JOIN (cr=0 pr=0 pw=0 time=0 us)
         0          0          0             FIXED TABLE FULL X$KSLED (cr=0 pr=0 pw=0 time=0 us)
         0          0          0          SORT JOIN (cr=0 pr=0 pw=0 time=0 us)
         0          0          0           FIXED TABLE FULL X$KSUSE (cr=0 pr=0 pw=0 time=0 us)

Querying the fixed objects (X$KSLWT, X$KSLED, X$KSUSE) with the RBO took less than a second.

When I tried to check the statistics date on the referenced fixed objects, I found no statistics:

SQL> select OWNER, TABLE_NAME, LAST_ANALYZED from DBA_TAB_STATISTICS where TABLE_NAME in ('X$KSUSE','X$KSLWT','X$KSLED');

OWNER                TABLE_NAME                     LAST_ANALYZED
-------------------- ------------------------------ --------------------
SYS                  X$KSLED
SYS                  X$KSLWT
SYS                  X$KSUSE

OK, this may explain why the CBO was producing a bad plan when querying those fixed objects. Statistics are the eyes of the CBO; if they don't exist, the CBO will generate plans based on dynamic sampling, which may produce inaccurate plans.
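As a side note, you can check which dynamic sampling level the CBO falls back to by showing the optimizer_dynamic_sampling parameter (on 11.2 it defaults to level 2):

SQL> show parameter optimizer_dynamic_sampling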

Now let's prove this by gathering fixed-object statistics while the DB is under heavy use, in order to get accurate statistics on the X$ tables representing the actual load:

Check this link to learn more about fixed-object statistics and how to gather them:
http://dba-tips.blogspot.com/2012/11/all-about-statistics-in-oracle.html

SQL> exec dbms_stats.gather_fixed_objects_stats; 
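As a quick sanity check (not part of the original test), re-running the earlier dictionary query should now show LAST_ANALYZED populated for those X$ tables:

SQL> select OWNER, TABLE_NAME, LAST_ANALYZED from DBA_TAB_STATISTICS where TABLE_NAME in ('X$KSUSE','X$KSLWT','X$KSLED');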

Let's check whether the new fixed-object statistics make a difference:

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  FILTER  (cr=0 pr=0 pw=0 time=14075373 us)
      3882       3882       3882   HASH JOIN  (cr=0 pr=0 pw=0 time=104766 us cost=1 size=262 card=1)
      2376       2376       2376    PX COORDINATOR  (cr=0 pr=0 pw=0 time=54948 us cost=0 size=198 card=1)
         0          0          0     PX SEND QC (RANDOM) :TQ20000 (cr=0 pr=0 pw=0 time=0 us cost=0 size=243 card=1)
         0          0          0      VIEW  GV$SESSION (cr=0 pr=0 pw=0 time=0 us)
         0          0          0       NESTED LOOPS  (cr=0 pr=0 pw=0 time=0 us cost=0 size=243 card=1)
         0          0          0        NESTED LOOPS  (cr=0 pr=0 pw=0 time=0 us cost=0 size=239 card=1)
         0          0          0         FIXED TABLE FULL X$KSUSE (cr=0 pr=0 pw=0 time=0 us cost=0 size=231 card=1)
         0          0          0         FIXED TABLE FIXED INDEX X$KSLWT (ind:1) (cr=0 pr=0 pw=0 time=0 us cost=0 size=8 card=1)
         0          0          0        FIXED TABLE FIXED INDEX X$KSLED (ind:2) (cr=0 pr=0 pw=0 time=0 us cost=0 size=4 card=1)
      2525       2525       2525    PX COORDINATOR  (cr=0 pr=0 pw=0 time=8645 us cost=0 size=85376 card=1334)
         0          0          0     PX SEND QC (RANDOM) :TQ30000 (cr=0 pr=0 pw=0 time=0 us cost=0 size=77372 card=1334)
         0          0          0      VIEW  GV$SESSION_WAIT (cr=0 pr=0 pw=0 time=0 us)
         0          0          0       NESTED LOOPS  (cr=0 pr=0 pw=0 time=0 us cost=0 size=77372 card=1334)
         0          0          0        FIXED TABLE FULL X$KSLWT (cr=0 pr=0 pw=0 time=0 us cost=0 size=37352 card=1334)
         0          0          0        FIXED TABLE FIXED INDEX X$KSLED (ind:2) (cr=0 pr=0 pw=0 time=0 us cost=0 size=30 card=1)
         0          0          0   PX COORDINATOR  (cr=0 pr=0 pw=0 time=13959372 us cost=0 size=13 card=1)
         0          0          0    PX SEND QC (RANDOM) :TQ10000 (cr=0 pr=0 pw=0 time=10443626 us cost=0 size=64 card=1)
         0          0          0     VIEW  GV$SESSION (cr=0 pr=0 pw=0 time=10442090 us)
   2339876    2339876    2339876      NESTED LOOPS  (cr=0 pr=0 pw=0 time=10767975 us cost=0 size=64 card=1)
   2339876    2339876    2339876       NESTED LOOPS  (cr=0 pr=0 pw=0 time=9109717 us cost=0 size=60 card=1)
   2339876    2339876    2339876        FIXED TABLE FULL X$KSUSE (cr=0 pr=0 pw=0 time=6389468 us cost=0 size=52 card=1)
   2339876    2339876    2339876        FIXED TABLE FIXED INDEX X$KSLWT (ind:1) (cr=0 pr=0 pw=0 time=1872304 us cost=0 size=8 card=1)
   2339876    2339876    2339876       FIXED TABLE FIXED INDEX X$KSLED (ind:2) (cr=0 pr=0 pw=0 time=962703 us cost=0 size=4 card=1)

Frankly, this is embarrassing :-) No big improvement after gathering statistics on the fixed objects.
The RULE-based optimizer managed to beat the CBO when querying GV$ views, at least in this scenario.

Conclusion:
- The RULE-based optimizer can produce better plans than the CBO when querying V$/GV$ views.
- It's always recommended to have good statistics on fixed objects, representing the real system load, to help the CBO produce good execution plans when V$ views are queried. This can also improve the performance of your monitoring tools, RMAN, and Enterprise Manager, as they depend heavily on querying V$/GV$ views.

The above experiment was done on 11.2.0.3, but I've seen the same behavior on 11.2.0.4 and 12c as well.

Update: I opened an SR with Oracle Support to get an explanation for this behavior, and after going through many action plans they finally suggested using the RULE-based optimizer.

Sunday, April 7, 2019

ORA-04088: error during execution of trigger 'SYS.XDB_PI_TRIG' PLS-00302: component 'IS_VPD_ENABLED' must be declared

While I was testing something on a 12.1 test database, I got the error below whenever I tried to execute specific admin commands:

SQL> drop user xx;
drop user xx
*
ERROR at line 1:
ORA-04088: error during execution of trigger 'SYS.XDB_PI_TRIG'
ORA-00604: error occurred at recursive SQL level 1
ORA-06550: line 3, column 13:
PLS-00302: component 'IS_VPD_ENABLED' must be declared
ORA-06550: line 3, column 5:
PL/SQL: Statement ignored


SQL> alter table bb move online compress;  
alter table bb move online compress
            *
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-04088: error during execution of trigger 'SYS.XDB_PI_TRIG'
ORA-00604: error occurred at recursive SQL level 2
ORA-06550: line 3, column 13:
PLS-00302: component 'IS_VPD_ENABLED' must be declared
ORA-06550: line 3, column 5:
PL/SQL: Statement ignored

The above was just a sample; the error was showing up with lots of admin commands!

I checked the trigger SYS.XDB_PI_TRIG that was causing the error and it was already valid, so I decided to DISABLE it, after which the admin commands ran as usual:

SQL> alter trigger SYS.XDB_PI_TRIG disable;

Trigger altered.


The admin commands that were failing above now ran smoothly:

SQL> alter table bb move online compress; 

Table altered.

Frankly speaking, I tried to google the error without any success. I didn't dig deeper, so I took the shortest/laziest route and disabled the offending trigger as a dirty fix. The database where I disabled the trigger was a test DB; most probably one of my fancy test scenarios caused this issue.

In case you get the same error on a production database, I strongly recommend contacting Oracle Support before disabling the above-mentioned trigger.

Update: I've dug further and found that the root cause was that someone had created a table named "sys" under the SYS user (Bug 17431402). Yes, it's a bug, because the engine should throw an error when someone tries to create an object named after a reserved word.
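If you suspect the same root cause on your side, a simple helper query (my own addition, not from the SR) can reveal such a rogue object:

SQL> select owner, object_name, object_type, created from dba_objects where object_name = 'SYS';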

I've dropped that object "sys.sys" and the error disappeared:

SQL> alter trigger SYS.XDB_PI_TRIG disable;

Trigger altered.

SQL> drop table sys.sys;

Table dropped.

SQL> alter trigger SYS.XDB_PI_TRIG enable;

Trigger altered.

SQL> alter table bb move online compress; 

Table altered.

phew!

Thursday, April 4, 2019

How To Import A Schema on Amazon RDS

As you know, AWS provides two types of cloud database services: EC2 and RDS. While EC2 gives you full control over the operating system (OS), including root access, RDS doesn't give you any kind of OS access. Because an RDS instance is managed by AWS, they provide you with a master admin user; this user has limited admin privileges (neither SYSDBA nor DBA), making regular DBA tasks, such as importing a schema, a bit challenging.
Without OS access you won't be able to use commands like exp, expdp, imp, impdp, and rman.

In this post I'll explain step by step how to import a schema into RDS using Oracle built-in packages. Luckily, Oracle provides many built-in packages that enable you to perform lots of tasks without needing OS access.

Amazon has already documented importing a schema into RDS quite well, but I thought I'd explain it in the fashion of a real-world scenario:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Oracle.Procedural.Importing.html

Task Summary:
Export a schema named "EPS_OWNER" from an 11.2.0.3 database residing on an AWS EC2 Linux instance and upload the export dump file to an S3 bucket, then import the dump file into a 12.2.0.1 AWS RDS database, changing the schema name to "EPS".

Prerequisites:
- An AWS S3 bucket must be created, and both the source EC2 and the target RDS instances must have read/write access to it through a role (see the sketch below for creating one). [An S3 bucket is a kind of shared storage between AWS cloud systems where you can upload/download files to/from it; it will be used during this demo to transfer the export dump file between the EC2 source instance and the RDS target instance.]
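In case the bucket doesn't exist yet, it can be created from the AWS CLI; the one-liner below is just a sketch (the bucket name and region are assumptions for this demo):

  # aws s3 mb s3://eps-bucket --region us-east-1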

Step1: Export the schema on the source [EC2 instance]:
I already have OS access to the oracle user on the source EC2 instance, so I used the exportdata script to export the EPS_OWNER schema.
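In case you don't have that script, a plain Data Pump export along the following lines would do the same job (a sketch only; the password placeholder and log file name are assumptions, adjust to your environment):

  $ expdp system/<password> schemas=EPS_OWNER directory=DATA_PUMP_DIR dumpfile=EXPORT_eps_owner_STG_04-03-19.dmp logfile=EXPORT_eps_owner_STG_04-03-19_exp.log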

Note: In case you are importing from an Enterprise Edition DB into a Standard Edition DB, make sure to reset all tables that have the COMPRESSION option enabled to NOCOMPRESS before exporting the data:
i.e.
alter table eps_owner.compressed_table NOCOMPRESS;

This is because Standard Edition doesn't have the COMPRESSION feature; otherwise, table creation will fail with an ORA-39083 error during the import on the Standard Edition DB.
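To find which tables are compressed before the export, DBA_TABLES can be checked first (a small helper query, not part of the original walkthrough):

SQL> select owner, table_name from dba_tables where owner = 'EPS_OWNER' and compression = 'ENABLED';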


Step2: Upload the export file to the S3 bucket from the source [EC2 instance]:
In case the bucket is not yet configured on the source machine, you can use the following AWS CLI command to configure it, providing the bucket's "Access Key" and "Secret Access Key":

  # aws configure
  AWS Access Key ID [None]: AFFFTFWNINVCRTWLBWII5Q
  AWS Secret Access Key [None]: EdfeergzA1+kErse7kg53nudSv/Il/xx2x2wvwiuy
  Default region name [None]: 
  Default output format [None]: 

Note: The keys above are dummy ones; you have to put your own bucket keys.

 Upload the export dump files to the S3 bucket:
  # cd /backup
  # aws s3 cp EXPORT_eps_owner_STG_04-03-19.dmp  s3://eps-bucket

In case you are using S3 Browser from a Windows machine, configure the bucket using this flow:
Open S3 Browser -> Accounts -> Add New Account:
<you will use your bucket details here I'm just giving an example>
Account Name: eps-bucket
Account Type: Amazon S3 Storage
Access Key ID: AFFFTFWNINVCRTWLBWII5Q
Secret Access Key: EdfeergzA1+kErss2kg53nudSv/Il/xx2x2wvwiuy
Click "Add New Account"
Accounts -> click "eps-bucket" -> Click "Yes" to add 'External bucket' -> Bucket Name: "eps-bucket"

Note: S3 Browser is a Windows GUI tool that helps you upload/download files to/from an S3 bucket. You can download it from here:
https://s3browser.com/download.aspx

Step3: Download the export file from the S3 bucket to the target [RDS instance]:
Remember, there is no OS access on RDS, so we will connect to the database with a tool such as SQL Developer, using the RDS master user credentials.
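For example, from any client machine with SQL*Plus installed, a connection as the master user would look like the line below (the endpoint and service name are placeholders for this demo):

  $ sqlplus master_user@'//<rds-endpoint>:1521/<service_name>'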

Use the AWS built-in package "rdsadmin.rdsadmin_s3_tasks" to download the dump file from S3 bucket to DATA_PUMP_DIR:

Warning: The following command will download all the files in the bucket, so make sure to remove all the files except the export dump files before running it.

SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
      p_bucket_name    =>  'eps-bucket',       
      p_directory_name =>  'DATA_PUMP_DIR') 
   AS TASK_ID FROM DUAL; 

In case you have the export files stored under a specific directory, you can tell the download procedure to download all the files under that directory by using the p_s3_prefix parameter like this: [don't forget the slash / after the directory name]

SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
      p_bucket_name    =>  'eps-bucket',      
      p_s3_prefix          =>  'export_files/', 
      p_directory_name =>  'DATA_PUMP_DIR') 
   AS TASK_ID FROM DUAL;

Or, in case you only want to download a single named file, i.e. EXPORT_eps_owner_STG_04-03-19.dmp, located under a specific directory, i.e. export_files, just execute the command like this:

SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
      p_bucket_name    => 'eps-bucket',
      p_s3_prefix      => 'export_files/EXPORT_eps_owner_STG_04-03-19.dmp',
      p_directory_name => 'DATA_PUMP_DIR')
   AS TASK_ID FROM DUAL;


The above command will return a task ID:

TASK_ID                                                                        
--------------------------
1554286165468-636   

Use that TASK_ID to monitor the download progress by running this statement:
SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-1554286165468-636.log'));

In case you get this error:
ORA-00904: "RDSADMIN"."RDSADMIN_S3_TASKS"."DOWNLOAD_FROM_S3": invalid identifier

This means S3 integration is not configured for your RDS instance.
To configure S3 integration: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/oracle-s3-integration.html
Open the IAM Management Console: https://console.aws.amazon.com/iam/home?#/home
In the navigation pane, choose Policies -> Create policy. On the Visual editor tab, choose "Choose a service", then choose S3 -> check "All S3 actions".
Choose Resources, and choose Add ARN for the bucket -> enter the Bucket name: eps-bucket
Click Review Policy -> give it a name, "eps-s3-integration" -> Create Policy.
Then associate the IAM role with your RDS DB:
Sign in to the AWS Management Console: https://console.aws.amazon.com/rds/
Choose the Oracle DB instance name -> On the Connectivity & security tab -> Manage IAM roles section:
IAM roles to this instance: -> "eps-s3-integration"
Feature -> S3_INTEGRATION
Click "Add role".
Make sure that your database is using an option group that includes the S3_INTEGRATION option.

Once the download is complete, query the downloaded files under DATA_PUMP_DIR using this query:
select * from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by mtime;

Any file having the "incomplete" keyword is still being downloaded.

Now that the AWS-related tasks are done, let's jump to the import part, which is purely Oracle's.

Step4: Create the tablespace and the target schema user on the target [RDS instance]:
In case the target user does not yet exist on the target RDS database, you can go ahead and create it along with its tablespace.

-- Create a tablespace: [Using Oracle Managed Files OMF]
CREATE SMALLFILE TABLESPACE "TBS_EPS" DATAFILE SIZE 100M AUTOEXTEND ON NEXT 100M LOGGING EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO;

-- In case you need to create a Password Verify Function on RDS:
Note: As you cannot create objects under SYS in RDS, you have to use the following ready-made AWS procedure to create the verify function:
Note: The verify function name should contain one of these keywords: "PASSWORD", "VERIFY", "COMPLEXITY", "ENFORCE", or "STRENGTH"


begin
    rdsadmin.rdsadmin_password_verify.create_verify_function(
        p_verify_function_name    => 'CUSTOM_PASSWORD_VFY_FUNCTION',
        p_min_length              => 8,
        p_max_length              => 256,
        p_min_letters             => 1,
        p_min_lowercase           => 1,
        p_min_uppercase           => 1,
        p_min_digits              => 3,
        p_min_special             => 2,
        p_disallow_simple_strings => true,
        p_disallow_whitespace     => true,
        p_disallow_username       => true,
        p_disallow_reverse        => true,
        p_disallow_db_name        => true,
        p_disallow_at_sign        => false);
end;
/

-- In case you want to create a new profile:
create profile APP_USERS limit
  LOGICAL_READS_PER_SESSION DEFAULT
  PRIVATE_SGA               DEFAULT
  CPU_PER_SESSION           DEFAULT
  PASSWORD_REUSE_TIME       DEFAULT
  COMPOSITE_LIMIT           DEFAULT
  PASSWORD_VERIFY_FUNCTION  CUSTOM_PASSWORD_VFY_FUNCTION
  PASSWORD_GRACE_TIME       DEFAULT
  PASSWORD_LIFE_TIME        90
  SESSIONS_PER_USER         DEFAULT
  CONNECT_TIME              DEFAULT
  CPU_PER_CALL              DEFAULT
  FAILED_LOGIN_ATTEMPTS     6
  PASSWORD_LOCK_TIME        DEFAULT
  PASSWORD_REUSE_MAX        12
  LOGICAL_READS_PER_CALL    DEFAULT
  IDLE_TIME                 DEFAULT;

-- Create the user: [Here, per my business requirements, the user will be different from the original user on the source DB]
CREATE USER EPS IDENTIFIED  BY "test123" DEFAULT TABLESPACE TBS_EPS TEMPORARY TABLESPACE TEMP QUOTA UNLIMITED ON TBS_EPS PROFILE APP_USERS;
GRANT CREATE SESSION TO EPS;
GRANT CREATE JOB TO EPS;
GRANT CREATE PROCEDURE TO EPS;
GRANT CREATE SEQUENCE TO EPS;
GRANT CREATE TABLE TO EPS;

Step5: Import the dump file on the target [RDS instance]:
Open a session from SQL Developer and make sure it will not disconnect while the import is running. As the RDS master user, execute the following block of code; it keeps running in the foreground, allowing you to monitor the import job on the fly and see any incoming errors:

DECLARE
  ind NUMBER;              -- Loop index
  h1 NUMBER;               -- Data Pump job handle
  percent_done NUMBER;     -- Percentage of job complete
  job_state VARCHAR2(30);  -- To keep track of job state
  le ku$_LogEntry;         -- For WIP and error messages
  js ku$_JobStatus;        -- The job status from get_status
  jd ku$_JobDesc;          -- The job description from get_status
  sts ku$_Status;          -- The status object returned by get_status
BEGIN

  h1 := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', job_mode => 'SCHEMA', job_name => null);

-- Specify the single dump file and its directory:
  DBMS_DATAPUMP.ADD_FILE(handle => h1, directory => 'DATA_PUMP_DIR', filename => 'EXPORT_eps_owner_STG_04-03-19.dmp');

-- Specify the logfile for the import process: [Very important, to read it after the import completes]
  DBMS_DATAPUMP.ADD_FILE(handle => h1, directory => 'DATA_PUMP_DIR', filename => 'import_eps_owner_STG_04-03-19.LOG', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

-- Disable Archivelog for the import: [12c new feature]
  DBMS_DATAPUMP.METADATA_TRANSFORM(handle => h1, name => 'DISABLE_ARCHIVE_LOGGING', value => 1);

-- REMAP SCHEMA: [required here, since the target schema EPS differs from the source schema EPS_OWNER]
  DBMS_DATAPUMP.METADATA_REMAP(h1, 'REMAP_SCHEMA', 'EPS_OWNER', 'EPS');

-- If a table already exists: [SKIP, REPLACE, TRUNCATE]
  DBMS_DATAPUMP.SET_PARAMETER(h1, 'TABLE_EXISTS_ACTION', 'SKIP');

-- REMAP TABLESPACE: [from EPS on the source to TBS_EPS on the target]
  DBMS_DATAPUMP.METADATA_REMAP(h1, 'REMAP_TABLESPACE', 'EPS', 'TBS_EPS');

-- Start the job. An exception is returned if something is not set up properly.
  DBMS_DATAPUMP.START_JOB(h1);

-- The following loop monitors the job until it completes; meanwhile, progress information is displayed:
  percent_done := 0;
  job_state := 'UNDEFINED';
  while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
    dbms_datapump.get_status(h1,
           dbms_datapump.ku$_status_job_error +
           dbms_datapump.ku$_status_job_status +
           dbms_datapump.ku$_status_wip, -1, job_state, sts);
    js := sts.job_status;

-- If the percentage done changed, display the new value.
    if js.percent_done != percent_done
    then
      dbms_output.put_line('*** Job percent done = ' ||
                           to_char(js.percent_done));
      percent_done := js.percent_done;
    end if;

-- If any work-in-progress (WIP) or error messages were received for the job, display them.
    if (bitand(sts.mask, dbms_datapump.ku$_status_wip) != 0)
    then
      le := sts.wip;
    else
      if (bitand(sts.mask, dbms_datapump.ku$_status_job_error) != 0)
      then
        le := sts.error;
      else
        le := null;
      end if;
    end if;
    if le is not null
    then
      ind := le.FIRST;
      while ind is not null loop
        dbms_output.put_line(le(ind).LogText);
        ind := le.NEXT(ind);
      end loop;
    end if;
  end loop;

-- Indicate that the job finished and gracefully detach from it.
  dbms_output.put_line('Job has completed');
  dbms_output.put_line('Final job state = ' || job_state);
  dbms_datapump.detach(h1);
END;
/

In case you used wrong parameters or a bad combination, e.g. METADATA_FILTER instead of METADATA_REMAP when importing into a schema with a different name, you will get a bunch of errors similar to the cute, vague ones below:

ORA-31627: API call succeeded but more information is available
ORA-06512: at "SYS.DBMS_DATAPUMP", line 7143
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4932
ORA-06512: at "SYS.DBMS_DATAPUMP", line 7137

ORA-06512: at line 7

You can also monitor the execution of the import job using this query:
SQL> SELECT owner_name, job_name, operation, job_mode,DEGREE, state FROM dba_datapump_jobs where state='EXECUTING';

In case you want to kill the job: <Provide the '<JOB_NAME>','<OWNER>'>
SQL> DECLARE
       h1 NUMBER;
     BEGIN
       h1 := DBMS_DATAPUMP.ATTACH('SYS_IMPORT_SCHEMA_01','EPS');
       DBMS_DATAPUMP.STOP_JOB(h1, 1, 0);
     END;
     /


Once the job is complete, compare the number of objects between the source and target DBs:
SQL> select object_type,count(*) from dba_objects where owner='EPS' group by object_type;

Also, you can view the import log on RDS using this query:
SQL> set lines 10000 pages 0
SQL> SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('DATA_PUMP_DIR','import_eps_owner_STG_04-03-19.LOG'));

Or you can upload the log to the S3 bucket and get it from there:
SQL> select * from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by mtime;

SQL> SELECT rdsadmin.rdsadmin_s3_tasks.upload_to_s3(
       p_bucket_name    => '<bucket_name>',
       p_prefix         => '<file_name>',
       p_s3_prefix      => '',
       p_directory_name => 'DATA_PUMP_DIR') AS TASK_ID FROM DUAL;
   

Run the "After Import" script generated by the exportdata script in Step 1, after replacing the original exported schema name EPS_OWNER with the target imported schema name EPS.

Check the invalid objects:
SQL> col object_name for a45
select object_name,object_type,status from dba_objects where owner='EPS' and status<>'VALID';

Compile invalid objects: [if found]
SQL> EXEC SYS.UTL_RECOMP.recomp_parallel(4, 'EPS');

Step6: [Optional] Delete the dump file from the target [RDS instance]:
Check the existing files under the DATA_PUMP_DIR directory:
SQL> select * from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by mtime;
Generate delete script for all files:
SQL> select 'exec utl_file.fremove(''DATA_PUMP_DIR'','''||filename||''');' from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by mtime;
Run the output script:
e.g. exec utl_file.fremove('DATA_PUMP_DIR','EXPORT_eps_owner_STG_04-03-19.dmp');

For more reading on a similar common DBA tasks on RDS:
http://dba-tips.blogspot.com/2020/02/the-dba-guide-for-managing-oracle.html

References:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Oracle.Procedural.Importing.html
S3 Bucket creation:
https://docs.aws.amazon.com/AmazonS3/latest/gsg/CreatingABucket.html
DBMS_DATAPUMP:
https://docs.oracle.com/database/121/ARPLS/d_datpmp.htm#ARPLS356
RDS Master Admin User:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.MasterAccounts.html