Sunday, September 22, 2013

Upgrade RAC 11.2.0.1 to 11.2.0.3 (Part IV: RAC Database Upgrade from 11.2.0.1 to 11.2.0.3)

In Part I I've installed the Linux OS and prepared the shared filesystem (iSCSI configuration & OCFS2).
In Part II I've prepared the Linux OS for the RAC installation and installed the 11.2.0.3 Grid Infrastructure and Database software.
In Part III I've created a physical standby database under the 11.2.0.3 Oracle Home on the new RAC servers, refreshed from the 11.2.0.1 primary database, to minimize the downtime of copying the database from the old SAN to the new NAS during the upgrade phase.

In this part I'll upgrade the 11.2.0.1 standby database that resides on the new 11.2.0.3 servers.

In brief:
  > Convert the 11.2.0.1 standby database to a primary and shut down the original primary DB.
  > Upgrade the new primary DB from 11.2.0.1 to 11.2.0.3.
  > Perform the post-upgrade steps plus the RAC configurations.

Recommended Metalink notes:
========================
Metalink [ID 730365.1]  Includes all patchset downloads + How to upgrade from any Oracle DB version to another one.
Metalink [ID 1276368.1] Out-of-place manual upgrade from previous 11.2.0.N version to the latest 11.2.0.N patchset.

If you're not following the approach used in this implementation (an out-of-place upgrade), or you're not using a standby DB for the purpose of the upgrade, you can simply skip the Failover section below.

Let's get started...

########  
Failover:  
########  
On Primary:
-----------
Shut down the primary DB to ensure there are no further updates.

On Standby:
-------------
Terminate managed recovery mode:
----------------------------------------
Make sure that all archivelogs have been copied from the primary to the standby DB, and have also been applied.
SQL> SELECT THREAD#,SEQUENCE#, FIRST_TIME, NEXT_TIME,applied FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#,applied;
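To double-check that no archived log is missing before the failover, you can also query the gap view (a quick sketch; on a RAC primary check the result for both threads, and "no rows selected" means no gap):

SQL> SELECT thread#, low_sequence#, high_sequence# FROM v$archive_gap;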
SQL> recover managed standby database nodelay;
SQL> alter database recover managed standby database finish;

Convert the standby database to primary role and open it in normal mode:
-----------------------------------------------------------------------
SQL> alter database commit to switchover to primary;
SQL> shu immediate;
SQL> startup upgrade;

In case you have a problem with the tempfiles, drop and re-add them:
SQL> alter database tempfile '/ora_temp1/pefms/temp_01.dbf' drop;
SQL> alter tablespace temp add tempfile '/ora_temp1/pefms/temp_01.dbf' reuse;
and so on for the rest of tempfiles...

=> Proceed with DB upgrade steps.


Before the upgrade it's good to do the following:

Purge DBA_RECYCLEBIN:
---------------------------
Purging DBA_RECYCLEBIN will speed up the upgrade process:
SQL> PURGE DBA_RECYCLEBIN;

Keep a list of all DB initialization parameters (normal & hidden):
--------------------------------------------------------------
The following statement lists the non-default parameters:

SQL> Spool All_parameters.txt
     set linesize 170
     col Parameter for a50
     col SESSION FOR a28
     col Instance FOR a55
     col S FOR a1
     col I FOR a1
     col D FOR a1
SELECT * FROM (SELECT  
 a.ksppinm  "Parameter", 
 decode(p.isses_modifiable,'FALSE',NULL,NULL,NULL,b.ksppstvl) "Session", 
 c.ksppstvl "Instance",
 decode(p.isses_modifiable,'FALSE','F','TRUE','T') "S",
 decode(p.issys_modifiable,'FALSE','F','TRUE','T','IMMEDIATE','I','DEFERRED','D') "I",
 decode(p.isdefault,'FALSE','F','TRUE','T') "D"
 FROM x$ksppi a, x$ksppcv b, x$ksppsv c, v$parameter p
 WHERE a.indx = b.indx AND a.indx = c.indx
 AND p.name(+) = a.ksppinm
 ORDER BY a.ksppinm) WHERE d='F';
     Spool off

Oracle strongly recommends resetting hidden parameters to their default values before starting the upgrade:
e.g.
SQL> alter system reset "_hidden_parameter_name" scope=spfile sid='*';
(replace "_hidden_parameter_name" with each hidden parameter you've explicitly set)
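If you have many hidden parameters set, a sketch like the following can generate the reset commands from the spfile contents (review the generated commands before running them):

SQL> SELECT 'alter system reset "'||name||'" scope=spfile sid='''||sid||''';'
     FROM v$spparameter
     WHERE isspecified = 'TRUE' AND name LIKE '\_%' ESCAPE '\';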

##################
PRE-UPGRADE STEPS:
##################

Step 1: Software Installation
######
Install 11.2.0.3 RDBMS Software in a new ORACLE_HOME. 
Apply the latest CPU/SPU patch on the 11.2.0.3 ORACLE_HOME before upgrading the database.

The 11.2.0.3 Grid Infrastructure & Database software were already installed in Part II.

Create the default directory that holds sqlnet log files:
-------------------------------------------------
# mkdir -p /u01/oracle/11.2.0.3/db/log/diag/clients
# chmod 700 /u01/oracle/11.2.0.3/db/log/diag/clients

Pre-upgrade Diagnosis:
================
The following script checks the database and gives recommendations before the upgrade.
Download the latest version of the utlu112i_5.sql script: Note 884522.1
Note: This script already exists under the new $ORACLE_HOME/rdbms/admin, but the one on Metalink is usually updated with the most recent upgrade checks.

SQL> SPOOL upgrade_info.log
SQL> @?/rdbms/admin/utlu112i.sql
SQL> SPOOL OFF

Note: Here I ran the script that ships under $ORACLE_HOME, but I strongly recommend downloading the latest version of the script from Metalink and running it against the database.

Step 2: Dictionary Check
#####
Verify the validity of data dictionary objects by running dbupgdiag.sql script:

If the dbupgdiag.sql script reports any invalid objects, run utlrp (multiple times) to validate the invalid objects in the database, until there is no change in the number of invalid objects: 

SQL> @/home/oracle/dbupgdiag.sql
SQL> @?/rdbms/admin/utlrp.sql 

Note: In my case, the 11.2.0.1 primary DB had only 32 invalid objects, but after switching over the standby DB to a primary for the upgrade I got more than 200 invalid objects (views, packages, synonyms), most of them under the SYS schema. No need to worry; all of them will become valid after running step 9.

Gather Dictionary Statistics:
--------------------------
SQL> EXECUTE dbms_stats.gather_dictionary_stats;
=> In my case it failed due to invalid objects under the SYS schema:
             ORA-04063: package body "SYS.DBMS_SQLTUNE" has errors


Step 3: TIMEZONE Version
#####
In case you have the TIMESTAMP WITH TIME ZONE datatype in your database, you need to upgrade the Time Zone file version to version 14.

The possibilities are:
 > If the current version < 14, you need to upgrade to version 14 after you're done with the upgrade to 11.2.0.3.
 > If the current version = 14, no need to upgrade; skip this whole step (skip STEP 10).
 > If the current version > 14, you must patch the new 11.2.0.3 home to at least the same Time Zone file version before upgrading, or data stored in TIMESTAMP WITH TIME ZONE columns can become corrupted during the upgrade.

Check your current Time Zone version:
SQL> SELECT version FROM v$timezone_file;

     VERSION
     ----------
    4

In my case scenario the version is older, so we'll do this step in post upgrade steps (STEP 10).

Check National Characterset is UTF8 or AL16UTF16:
============================================
SQL> select value from NLS_DATABASE_PARAMETERS where parameter = 'NLS_NCHAR_CHARACTERSET';


Step 4: Disable Cluster option
#####
Set parameter cluster_database to FALSE for a RAC DB.

SQL> ALTER SYSTEM SET cluster_database = false scope=spfile;


Step 5: Disable Vault | Adjust parameters for JVM
#####
>Disable Database Vault if it's enabled.

>If JVM is installed, java_pool_size and shared_pool_size must each be set to at least 250MB prior to the upgrade.
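A quick sketch to check whether JVM is installed and to raise the pools if needed (the 250M here is only the minimum mentioned above; adjust to your environment — when sga_target is in use these act as minimum values for the pools):

SQL> SELECT comp_name, status FROM dba_registry WHERE comp_id = 'JAVAVM';
SQL> ALTER SYSTEM SET java_pool_size = 250M SCOPE=spfile;
SQL> ALTER SYSTEM SET shared_pool_size = 250M SCOPE=spfile;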


Step 6: Last checks:
#####
SQL> SELECT * FROM v$recover_file;
no rows selected

SQL> SELECT * FROM v$backup WHERE status != 'NOT ACTIVE';
no rows selected

SQL> SELECT * FROM dba_2pc_pending; --outstanding distributed transactions 
no rows selected

SQL> SELECT name FROM sys.user$ WHERE ext_username IS NOT NULL AND password = 'GLOBAL'; --list globally authenticated users


Step 7: Disable all batch and cron jobs:
#####
Disable crontab scripts:
---------------------
As root, back up the crontab then empty it:
# crontab -l > /root/crontab_root
# crontab /dev/null
# crontab -l

As oracle, do the same:
# crontab -l > /home/oracle/oracle_crontab
# crontab /dev/null
# crontab -l

Disable Database scheduler:
------------------------
SQL> EXEC dbms_scheduler.set_scheduler_attribute('SCHEDULER_DISABLED','TRUE');
SQL> alter system set job_queue_processes=0 scope=both;
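Before proceeding, it's worth a quick check that no job is still running (a sketch; both queries should return no rows):

SQL> SELECT job_name, session_id FROM dba_scheduler_running_jobs;
SQL> SELECT job, this_date FROM dba_jobs_running;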
It's strongly recommended to take a cold backup of the database before going through the next steps.
 > Although I heavily rely on RMAN in many cases, in such scenarios I usually prefer the cold backup technique; feel free to take an RMAN backup or any other kind of backup you can rely on.


Step 8: Set the Database in the Noarchivelog mode.
#####
>Stop listeners: lsnrctl stop
>Stop DBCONSOLE: emctl stop dbconsole

Putting the database in noarchivelog mode speeds up the upgrade and minimizes the downtime window.

$ sqlplus "/as sysdba"
SQL> shutdown immediate;
SQL> startup mount
SQL> alter database flashback off;
SQL> alter database noarchivelog;
SQL> archive log stop;
SQL> shutdown immediate;


##############
UPGRADE STEPS:
##############

Step 9: Execute the upgrade script
####
> If you encounter a message listing obsolete initialization parameters during "startup upgrade", remove the obsolete parameters from the PFILE.

# sqlplus / as sysdba

SQL> startup upgrade
SQL> spool upgrade.log
SQL> @?/rdbms/admin/catupgrd.sql 

Note: In case you experience errors during the upgrade, fix them first, then re-run the catupgrd.sql script as many times as necessary in this order:
1) shutdown immediate
2) startup upgrade
3) @?/rdbms/admin/catupgrd.sql

Once upgrade script is done:
> Check the spool file for errors.
> Restart the database in normal mode and run the following scripts:

SQL> startup
        -- Check the validity of the Oracle installed options:
SQL> @?/rdbms/admin/utlu112s.sql
SQL> @?/rdbms/admin/catuppst.sql
        -- Compile invalid objects:
SQL> @?/rdbms/admin/utlrp.sql
        -- List the objects that are still invalid after the upgrade but were valid before it:
SQL> @?/rdbms/admin/utluiobj.sql
        -- Compile any remaining invalid objects manually, e.g.:
SQL> alter trigger SYSADM.TR_AFTER_LOGON compile;

Run the dbupgdiag.sql script (see Note 556610.1); it verifies that all dba_registry components are valid:
SQL> @/home/oracle/dbupgdiag.sql
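In addition to dbupgdiag.sql, a direct look at the registry confirms that all components are now VALID and at 11.2.0.3 (a quick check):

SQL> SELECT comp_name, version, status FROM dba_registry;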


###############
Post Upgrade Steps:
###############

Step 10: Upgrade the TimeZone version
######
PREPARE Stage:
=============

SQL> SHU IMMEDIATE
SQL> startup upgrade
SQL> SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value  
     FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME LIKE 'DST_%' ORDER BY PROPERTY_NAME;
     
     PROPERTY_NAME                  VALUE
     ------------------------------ ------------------------------
     DST_PRIMARY_TT_VERSION         4
     DST_SECONDARY_TT_VERSION       0
     DST_UPGRADE_STATE              NONE

SQL> alter session set "_with_subquery"=materialize;
SQL> alter session set "_simple_view_merging"=TRUE;
SQL> exec DBMS_DST.BEGIN_PREPARE(14)

SQL> SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value  
     FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME LIKE 'DST_%' ORDER BY PROPERTY_NAME;
     
     PROPERTY_NAME                  VALUE
     ------------------------------ ------------------------------
     DST_PRIMARY_TT_VERSION         4
     DST_SECONDARY_TT_VERSION       14
     DST_UPGRADE_STATE              PREPARE

SQL> TRUNCATE TABLE SYS.DST$TRIGGER_TABLE;
SQL> TRUNCATE TABLE sys.dst$affected_tables;
SQL> TRUNCATE TABLE sys.dst$error_table;

SQL> set serveroutput on
     BEGIN  
     DBMS_DST.FIND_AFFECTED_TABLES  
     (affected_tables => 'sys.dst$affected_tables',  
     log_errors => TRUE,  
     log_errors_table => 'sys.dst$error_table');  
     END;  
     /
SQL> SELECT * FROM sys.dst$affected_tables;
no rows selected

SQL> SELECT * FROM sys.dst$error_table;
no rows selected

SQL> EXEC DBMS_DST.END_PREPARE;
A prepare window has been successfully ended.

PL/SQL procedure successfully completed.

Upgrade Stage:
==============

SQL> purge dba_recyclebin;
SQL> alter session set "_with_subquery"=materialize;
SQL> alter session set "_simple_view_merging"=TRUE;
SQL> EXEC DBMS_DST.BEGIN_UPGRADE(14);

SQL> SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value  
     FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME LIKE 'DST_%' ORDER BY PROPERTY_NAME;
     
     PROPERTY_NAME                  VALUE
     ------------------------------ ------------------------------
     DST_PRIMARY_TT_VERSION         14
     DST_SECONDARY_TT_VERSION       4
     DST_UPGRADE_STATE              UPGRADE

SQL> SELECT OWNER, TABLE_NAME, UPGRADE_IN_PROGRESS FROM ALL_TSTZ_TABLES where UPGRADE_IN_PROGRESS='YES';
no rows selected

SQL> shutdown immediate
SQL> startup
SQL> alter session set "_with_subquery"=materialize;
SQL> alter session set "_simple_view_merging"=TRUE;

SQL> set serveroutput on
     VAR numfail number
     BEGIN
     DBMS_DST.UPGRADE_DATABASE(:numfail,
     parallel => TRUE,
     log_errors => TRUE,
     log_errors_table => 'SYS.DST$ERROR_TABLE',
     log_triggers_table => 'SYS.DST$TRIGGER_TABLE',
     error_on_overlap_time => FALSE,
     error_on_nonexisting_time => FALSE);
     DBMS_OUTPUT.PUT_LINE('Failures:'|| :numfail);
     END;
     /
...
Number of failures: 0
Failures:0

SQL> VAR fail number
     BEGIN
     DBMS_DST.END_UPGRADE(:fail);
     DBMS_OUTPUT.PUT_LINE('Failures:'|| :fail);
     END;
     /

An upgrade window has been successfully ended.
Failures:0

PL/SQL procedure successfully completed.

SQL> SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value  
     FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME LIKE 'DST_%' ORDER BY PROPERTY_NAME;
     
     PROPERTY_NAME                  VALUE
     ------------------------------ ------------------------------
     DST_PRIMARY_TT_VERSION         14
     DST_SECONDARY_TT_VERSION       0
     DST_UPGRADE_STATE              NONE

SQL> SELECT * FROM v$timezone_file;

     FILENAME             VERSION
     -------------------- ----------
     timezlrg_14.dat      14

SQL> select TZ_VERSION from registry$database;

     TZ_VERSION
     ----------
              4

SQL> update registry$database set TZ_VERSION = (select version FROM v$timezone_file);
SQL> commit;
SQL> select TZ_VERSION from registry$database;

     TZ_VERSION
     ----------
             14

SQL> SELECT value$ FROM sys.props$ WHERE NAME = 'DST_PRIMARY_TT_VERSION';

     VALUE$
     --------
           14

SQL> exit;


STEP 11: Set CLUSTER_DATABASE=TRUE in case of a RAC DB
######
Create a pfile that holds the RAC configuration:

# cd $ORACLE_HOME/dbs
# mv initpefms1.ora initpefms1.ora.old
# mv spfilepefms1.ora spfilepefms1.ora.old
# vi initpefms1.ora

######################################################
#Memory Parameters:
######################################################
*.sga_max_size=19327352832
*.sga_target=19327352832
*.db_cache_size=8589934592
*.java_pool_size=104857600
*.large_pool_size=314572800
*.shared_pool_reserved_size=52428800
*.shared_pool_size=2618767104
*.sort_area_size=83886080
*.log_buffer=31457280
*.pga_aggregate_target=10737418240
######################################################
#Parameters with destinations:
######################################################
*.control_files='/ora_control1/pefms/control01.ctl','/ora_control2/pefms/control02.ctl'
*.db_recovery_file_dest='/ora_backupdisk/flash_recovery_area'
*.diagnostic_dest='/u01/oracle/11.2.0.3/db/diagnostics/pefms'
*.log_archive_dest_1='LOCATION=/ora_archive1/pefms'
*.log_archive_dest_2='LOCATION=/ora_archive2/pefms'
######################################################
#Important Parameters:
######################################################
*.db_flashback_retention_target=21600
*.db_recovery_file_dest_size=536870912000
*.fast_start_mttr_target=300
*.undo_management='AUTO'
*.undo_retention=172800
*.archive_lag_target=1800
*.cluster_database_instances=2
*.CLUSTER_DATABASE=true
*.compatible='11.2.0.3'
*.control_file_record_keep_time=30
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.service_names='PEFMS'
######################################################
#Instance Specific Parameters:
######################################################
pefms1.instance_name='pefms1'
pefms2.instance_name='pefms2'
pefms1.instance_number=1
pefms2.instance_number=2
pefms1.thread=1
pefms2.thread=2
pefms2.undo_retention=172800
pefms1.undo_tablespace='UNDOTBS1'
pefms2.undo_tablespace='UNDOTBS2'
######################################################
#Features Enable|Disable Parameters:
######################################################
*._mmv_query_rewrite_enabled=TRUE
*._optim_peek_user_binds=FALSE
*.control_management_pack_access='DIAGNOSTIC+TUNING'
*.query_rewrite_enabled='TRUE'
*.resource_limit=TRUE
*.result_cache_max_size=0
*.result_cache_mode='MANUAL'
*.star_transformation_enabled='FALSE'
######################################################
#Performance Parameters:
######################################################
*.cursor_sharing='EXACT'
*.session_cached_cursors=100
*.open_cursors=500
*.processes=1000
*.db_file_multiblock_read_count=16
*.log_archive_max_processes=3
*.optimizer_mode='ALL_ROWS'
*.parallel_degree_limit='4'
*.parallel_max_servers=2
*.timed_statistics=TRUE
*.transactions=2000
*.transactions_per_rollback_segment=10
######################################################
#Security Parameters:
######################################################
*.remote_login_passwordfile='EXCLUSIVE'
*.sql92_security=TRUE
*.sec_case_sensitive_logon=FALSE
######################################################
#Other Parameters:
######################################################
*.db_block_size=8192
*.db_name='pefms'
*.job_queue_processes=10
*.log_archive_format='%d_%t_%r_%s.arc'


SQL> shu immediate
SQL> startup mount pfile='/u01/oracle/11.2.0.3/db/dbs/initpefms1.ora'
SQL> ALTER DATABASE ADD LOGFILE THREAD 2 
     GROUP 4 (
    '/ora_redo1/pefms/pefms2_redo04_a.log',
    '/ora_redo2/pefms/pefms2_redo04_b.log'
  ) SIZE 100M BLOCKSIZE 512 REUSE ,
  GROUP 5 (
    '/ora_redo1/pefms/pefms2_redo05_a.log',
    '/ora_redo2/pefms/pefms2_redo05_b.log'
  ) SIZE 100M BLOCKSIZE 512 REUSE ,
  GROUP 6 (
    '/ora_redo1/pefms/pefms2_redo06_a.log',
    '/ora_redo2/pefms/pefms2_redo06_b.log'
  ) SIZE 100M BLOCKSIZE 512 REUSE ;

SQL> alter database archivelog;
SQL> alter database flashback on;
SQL> alter database open;
         

Copy the pfile to the other RAC node:
# scp $ORACLE_HOME/dbs/initpefms1.ora oracle@node2:$ORACLE_HOME/dbs/initpefms2.ora


STEP 12: Clusterware Configurations 
######
Start RAC Services on Node2:
---------------------------- 
# crsctl start crs

Add Database/Instances/listeners services:
---------------------------------------
Note: Don't include any UPPERCASE characters => [Metalink Note ID:  372145.1]
Note: The steps below will be done automatically if DBUA is used to upgrade the DB.

# srvctl add database -d pefms -o /u01/oracle/11.2.0.3/db
# srvctl add instance -d pefms -i pefms1 -n vla-ora-node1
# srvctl add instance -d pefms -i pefms2 -n vla-ora-node2
# srvctl add listener -o /u01/oracle/11.2.0.3/db -l LISTENER_PEFMS1 -s
# srvctl add listener -o /u01/oracle/11.2.0.3/db -l LISTENER_PEFMS2 -s 

-d  database name
-i  instance name
-n  node name
-o  ORACLE_HOME path
-l  listener name

In case you did an out-of-place upgrade on the same server, this command will update the RAC configuration with the new Oracle Home:
# srvctl upgrade database -d  pefms -o /u01/oracle/11.2.0.3/db

Check RAC configuration:
-------------------------
# srvctl config database -d pefms

Database unique name: pefms
Database name: 
Oracle home: /u01/oracle/11.2.0.3/db
Oracle user: oracle
Spfile: 
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: pefms
Database instances: pefms1,pefms2
Disk Groups: 
Mount point paths: 
Services: pefms11g
Type: RAC
Database is administrator managed

Create SPFILE:
---------------
On Node1:
SQL> create spfile='/ora_data1/pefms/spfilepefms.ora' from pfile;

Note: The spfile will be on the shared storage to be readable from both RAC nodes.

# cd $ORACLE_HOME/dbs
# mv initpefms1.ora initpefms1.sav
# vi initpefms1.ora
# Add this line only:
spfile=/ora_data1/pefms/spfilepefms.ora

On Node2:
# cd $ORACLE_HOME/dbs
# mv initpefms2.ora initpefms2.sav
# vi initpefms2.ora
# Add this line only:
spfile=/ora_data1/pefms/spfilepefms.ora

At the next startup, each RAC instance will use the new spfile.

Test the RAC configurations: 
# srvctl stop database -d pefms
# srvctl start database -d pefms
# srvctl stop instance -d pefms -i pefms1 
# srvctl stop instance -d pefms -i pefms2 

 
Troubleshooting:
---------------
=> If an instance doesn't start up, it may be marked DISABLED in the configuration; enable it with:
For a specific instance:
# srvctl enable instance -d pefms -i pefms2
For the whole database:
# srvctl enable database -d pefms

Create TAF services:  
=================
The following is just an example:

# srvctl add service -d pefms -s pefms11g -r pefms1,pefms2 -P basic
# srvctl start service -d pefms -s pefms11g 

Troubleshooting: to stop or remove the service:
-----------------------------------------------
# srvctl stop service -d pefms -s pefms11g
# srvctl remove service -d pefms -s pefms11g
Then delete the service inside the DB:
SQL> EXEC dbms_service.delete_service('pefms11g');

Then you can use this service PEFMS11G as the service name in the connection string when the application connects to the database.


STEP 13: Upgrade the RMAN Recovery Catalog
######
If you're using a Recovery Catalog to back up your database, you have to upgrade it using the "UPGRADE CATALOG;" command.
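A quick sketch of the catalog upgrade (the catalog owner rco and the connection string catdb are just placeholders for your own; RMAN asks you to enter UPGRADE CATALOG twice to confirm):

$ rman CATALOG rco/rco_pass@catdb
RMAN> UPGRADE CATALOG;
RMAN> UPGRADE CATALOG;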


STEP 14: Upgrade Statistics Tables
######
If you previously created statistics tables using DBMS_STATS.CREATE_STAT_TABLE, upgrade each table by running:
e.g.
SQL> EXECUTE DBMS_STATS.UPGRADE_STAT_TABLE('scott', 'stat_table');


STEP 15: Enable Database Vault
######
Enable Oracle Database Vault and Revoke the DV_PATCH_ADMIN Role [Note ID: 453903.1]


STEP 16: RMAN Configuration
######
RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/ora_backupdisk/rmanbkps/controlfiles/snapcf_pefms.f';
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/ora_backupdisk/rmanbkps/controlfiles/%F';
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> CONFIGURE BACKUP OPTIMIZATION ON;
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 4;

Note: Starting with 11.2.0.2 onward, the snapshot controlfile must be located in a location accessible by all nodes in the cluster [ID 1472171.1].
In 11.2.0.3, controlfile backups (auto & manual) must be created on a shared device [Bug 13780443].

STEP 17: Enable Cron & DB Jobs:
######
Enable crontab jobs: (which we stopped in step 7)
----------------------
As root:
# crontab /root/crontab_root

As oracle:
# crontab /home/oracle/oracle_crontab

Enable DB jobs:
----------------
SQL> EXEC dbms_scheduler.set_scheduler_attribute('SCHEDULER_DISABLED','FALSE');


STEP 18: Configure the Enterprise Manager
######

Configure the SCAN listener:
----------------------------------
EM depends on the SCAN listener to run.

Ensure that the SCAN listener entry exists:
# vi $GRID_HOME/network/admin/listener.ora

ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
    )
  )

Ensure that SCAN listener is up:
# crs_start ora.LISTENER_SCAN1.lsnr

Add the SCAN listener tns entry to tnsnames.ora on all nodes:
# vi $ORACLE_HOME/network/admin/tnsnames.ora

LISTENER_SCAN1 =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = cluster-scan)(PORT = 1523))
  )

Set the parameter remote_listener to point to the SCAN listener in all RAC instances:
SQL> alter system set remote_listener=LISTENER_SCAN1 sid='*';
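To verify that the parameter took effect on all instances, a quick check:

SQL> SELECT inst_id, value FROM gv$parameter WHERE name = 'remote_listener';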

Using the command-line way:
====================
First change these limits in the DEFAULT profile:
----------------------------------------------
Note down the DEFAULT profile settings first:
SQL> spool default_profile_settings.log
SQL> select * from dba_profiles where profile='DEFAULT';  
SQL> spool off
SQL> alter profile default limit PASSWORD_REUSE_MAX unlimited;
SQL> alter user dbsnmp profile default;
SQL> alter profile default limit PASSWORD_VERIFY_FUNCTION null;
SQL> alter profile default limit PASSWORD_LOCK_TIME unlimited;

# emca -config dbcontrol db -repos create -cluster

STARTED EMCA at Sep 9, 2013 4:51:28 PM
EM Configuration Assistant, Version 11.2.0.3.0 Production
Copyright (c) 2003, 2011, Oracle.  All rights reserved.

Enter the following information:
Database unique name: pefms
Service name: pefms11g
Listener port number: 1521
Listener ORACLE_HOME [ /u01/grid/11.2.0.3/grid ]: /u01/oracle/11.2.0.3/db
Password for SYS user:  
Password for DBSNMP user:  
Password for SYSMAN user:  
Cluster name: cluster
Email address for notifications (optional): 
Outgoing Mail (SMTP) server for notifications (optional): 

Once the EM setup is done, restore the original DEFAULT profile settings (use the values you spooled earlier; x below stands for your original PASSWORD_REUSE_MAX value):
SQL> alter profile default limit PASSWORD_REUSE_MAX x;
SQL> alter profile default limit PASSWORD_VERIFY_FUNCTION VERIFY_FUNCTION;
SQL> alter profile default limit PASSWORD_LOCK_TIME 1;

UPGRADE IS DONE


############
Optional STEPS: Good To Do (Not mandatory for the upgrade)
############


STEP 19: Rebuild Indexes & Gather Statistics
######

Rebuild unusable indexes:
======================
SQL> select 'ALTER INDEX '||OWNER||'.'||INDEX_NAME||' REBUILD ONLINE;' from dba_indexes where status ='UNUSABLE';
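The statement above only prints the commands; a small sketch to spool them into a script and run it (review rebuild_unusable.sql before executing it):

SQL> set heading off feedback off pagesize 0
SQL> spool rebuild_unusable.sql
SQL> select 'ALTER INDEX '||owner||'.'||index_name||' REBUILD ONLINE;' from dba_indexes where status = 'UNUSABLE';
SQL> spool off
SQL> @rebuild_unusable.sql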

Gather FIXED OBJECTS statistics:
=============================
Fixed objects are the x$ tables (loaded into the SGA during startup) on which the V$ views are built (V$SQL etc.).
If statistics are not gathered on fixed objects, the optimizer will use predefined default values for the statistics. These defaults may lead to inaccurate execution plans.
Statistics on fixed objects are not gathered automatically, nor as part of gathering database stats.

Backup current fixed objects statistics:
SQL> exec DBMS_STATS.CREATE_STAT_TABLE('SYS','STATS_TABLE','USERS');
SQL> exec dbms_stats.export_fixed_objects_stats(stattab=>'STATS_TABLE',statown=>'SYS');

During peak hours run:
SQL> Exec DBMS_STATS.GATHER_FIXED_OBJECTS_STATS (no_invalidate => FALSE );

In case you encounter bad performance after gathering the new fixed object stats, restore the old stats:
SQL> exec dbms_stats.delete_fixed_objects_stats(); 
SQL> exec dbms_stats.import_fixed_objects_stats(stattab =>'STATS_TABLE',statown=>'SYS');

In general, fixed object statistics should be gathered after:
-A major database or application upgrade.
-Implementing a new module.
-Changing the database configuration. e.g. changing the size of memory pools (sga,pga,..).
-Poor performance/Hang encountered while querying dynamic views e.g. V$ views, RMAN repository.
Note: 
-It's recommended to gather the fixed object stats during peak hours (while the system is busy), or right after the peak while sessions are still connected (even if idle), to guarantee that the fixed object tables are populated and the statistics represent the real DB activity. Also note that performance degradation may be experienced while the statistics are being gathered.
-Having no statistics is better than having non-representative statistics.

Gather SYSTEM Statistics:    
======================
System statistics are statistics about CPU speed and I/O performance; they enable the CBO to effectively cost each operation in an execution plan. Introduced in Oracle 9i.

Oracle highly recommends gathering system statistics during a representative workload, ideally at peak workload time, in order to provide more accurate CPU/IO cost estimates to the optimizer. You only need to gather system statistics once.

There are two types of system statistics (NOWORKLOAD statistics & WORKLOAD statistics):
NOWORKLOAD statistics:
This simulates a workload -not the real one, but a simulation- and does not collect full statistics; it's less accurate than WORKLOAD statistics, but if you can't capture statistics during a typical workload, you can use it.
To gather NOWORKLOAD statistics:
SQL> execute dbms_stats.gather_system_stats();

WORKLOAD statistics: the one I recommend
This gathers statistics during the current workload -which is supposed to be representative of the actual I/O and CPU workload on the DB-.
To gather WORKLOAD statistics:
e.g. on Thursday at 11:00am run the following:
SQL> execute dbms_stats.gather_system_stats('start');

Once the workload window ends after a few hours, e.g. at 2:00pm, stop the system statistics gathering:
SQL> execute dbms_stats.gather_system_stats('stop');

OR, you can use a time interval (in minutes) instead of issuing the start/stop commands manually:
SQL> execute dbms_stats.gather_system_stats('interval',60);
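Whichever method you use, you can verify what was collected by checking the system stats table (cpuspeed, sreadtim, mreadtim, etc. should be populated):

SQL> SELECT pname, pval1 FROM sys.aux_stats$ WHERE sname = 'SYSSTATS_MAIN';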

Gather DICTIONARY Statistics:
==========================
SQL> EXEC DBMS_STATS.GATHER_DICTIONARY_STATS; -> gathers stats on the dictionary tables (~20% of SYS schema tables).
or...
SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS ('SYS'); -> gathers stats on 100% of SYS schema tables, including the dictionary tables.
or...
SQL> EXEC DBMS_STATS.GATHER_DATABASE_STATS (gather_sys=>TRUE); -> gathers stats on the whole DB, including all SYS & dictionary tables.


Gather Database Statistics:
======================
BEWARE: You have to test intensively before gathering full database statistics, as this will change the execution plans of SQL statements on the database, for better or worse; your tests will determine which.

SQL> EXEC DBMS_STATS.GATHER_DATABASE_STATS(ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE,degree => 8);

Or...
In case you want to gather Database Stats + Histograms on all columns :
SQL> EXEC DBMS_STATS.GATHER_DATABASE_STATS(ESTIMATE_PERCENT=>100,METHOD_OPT=>'FOR ALL COLUMNS SIZE SKEWONLY',cascade => TRUE,degree => 8);

Using ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE lets Oracle estimate skewed values itself; it generally gives excellent results and is faster than using 100.
If you omit "cascade => TRUE", Oracle will determine whether index statistics need to be collected or not.


Step 20: Check Oracle Recommended Patches:
#######
Note ID 756671.1 includes the recommended patches for releases starting from 10.2.0.3.
Note ID 742060.1 represents the most accurate information Oracle can provide for coming releases.

PSU patch and CPU|SPU patch: [ID 854428.1]
==========================
*You can check the latest applied patches on the database by running this query:
SQL> select * from DBA_REGISTRY_HISTORY;

--> The July SPU patch I had applied on the ORACLE_HOME in Part II was not showing a record in DBA_REGISTRY_HISTORY after the upgrade, so I ran:
 SQL> @?/rdbms/admin/catbundle.sql cpu apply
 SQL> select * from DBA_REGISTRY_HISTORY;

RAC Configuration Audit Tool 

This tool is a script provided by Oracle to help you perform a health check on your database environment (especially RAC environments); it also checks the implementation of the Maximum Availability Architecture in your environment.

For all information about this tool + download visit METALINK [Doc ID 1268927.1]
For a brief explanation check this link: http://dba-tips.blogspot.ae/2013/12/introduction-to-great-tool-rac.html

After downloading the zip file available in METALINK [Doc ID 1268927.1], extract it under a local mount point e.g. /home/oracle, then run the script:

cd /home/oracle
./raccheck

=> Provide the root user password, or select to use the sudo privilege. To use the SUDO privilege without a password, do the following in a new session:
# visudo
#Allow oracle to run the raccheck root script:
oracle ALL=(root) NOPASSWD:/tmp/root_raccheck.sh

=> /tmp/root_raccheck.sh will be created while raccheck is running; after the script is done it will be deleted.

-> Go back to the raccheck session and enter 2 to run the script with the SUDO privilege.

* Once the script is done it will create an HTML report; copy it to your PC and open it.
* The script may take from 15 to 30 minutes to finish, with a minimal performance overhead of 3% to 20%.
* Feel free to remove/comment out the entry you added to the sudoers file after the script is done.

Note that from time to time Oracle releases a new version of this script; it's recommended to download the latest version every three months.


Environment-Specific Configuration:
############################
Enable supplemental logging (LogMiner):
====================================
Enable supplemental logging to make it easy to examine transactions that happened in the past (e.g. with LogMiner):
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

     =>This statement may take a long time, as it waits for all active transactions on the DB to finish.
=>Check that minimal supplemental logging is enabled:

SQL> SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;

SUPPLEME
--------
YES


Mission Accomplished.


////////////////////////////////////////////////////////
Post Upgrade Performance Issues:

Most problems raised after upgrading from one release to another are performance related. If some queries are performing badly, you can tell the optimizer to behave as it did in the older release by setting this parameter to the older version:
SQL> Alter session set optimizer_features_enable='11.2.0.1';

=>Check and see if it makes a difference; if so, dig deeper (check the execution plans, gather statistics/histograms, create SQL profiles, ...).

It's not recommended to set the "optimizer_features_enable" parameter at the instance level.
You can also set this parameter at the query level by using this hint:
/*+ optimizer_features_enable('11.2.0.1') */  
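For example, with a hypothetical query (the table and column names are illustrative only):

SQL> SELECT /*+ optimizer_features_enable('11.2.0.1') */ *
     FROM orders
     WHERE order_date > SYSDATE-1;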

Make sure that the Oracle client version installed on the application servers (if applicable) matches the version of the database it connects to.
e.g. if your database version is 11.2.0.3, the client version should be 11.2.0.3 as well; otherwise you may end up with a scenario where queries & reports perform badly when run from the application side, but run faster when executed directly on the database server or from a machine whose client version matches the database version.
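To spot mismatched clients from the database side, you can query V$SESSION_CONNECT_INFO (a sketch; the CLIENT_VERSION column is populated for recent OCI clients):

SQL> SELECT DISTINCT s.machine, s.program, ci.client_version
     FROM v$session s, v$session_connect_info ci
     WHERE s.sid = ci.sid;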
////////////////////////////////////////////////////////



Friday, September 20, 2013

Upgrade from RAC 11.2.0.1 to 11.2.0.3 (Part III Create Standby database being synchronized from the 11.2.0.1 primary DB)


In Part I I've installed the Linux OS and prepared the shared filesystem (ISCSI configuration & OCFS2) 
In Part II I've prepared the Linux OS for the RAC installation, installed 11.2.0.3 Grid Infrastructure and Database software.

In this part I'll create a physical standby database under the 11.2.0.3 Oracle Home on the new RAC servers, refreshed from the 11.2.0.1 primary database, for the purpose of minimizing downtime by eliminating the time wasted on copying the 11.2.0.1 datafiles from the old SAN to the new NAS where the new 11.2.0.3 database resides. There are many technologies/means that can do the same job and save you from creating a standby database for this purpose: as simple as dismounting the filesystem holding the datafiles from the old 11.2.0.1 servers and mounting it on the new servers if they are connected to the same SAN/NAS (done from the NAS/SAN console), or using a technology like SAN-to-SAN replication if the new 11.2.0.3 servers are connected to a different SAN/NAS storage. These are just examples; there are many other solutions in the market that can get this job done without creating a standby DB.

Anyway, I'll take the hard but cheapest route and create a standby database on the new 11.2.0.3 servers, located on the new NAS storage; again, I'm doing so to minimize the datafile copying time during the upgrade phase.

My primary database is located on SAN storage connected to the 11.2.0.1 RAC servers through fiber cables; the new 11.2.0.3 database will be located on NAS storage connected to the 11.2.0.3 servers through Ethernet using the ISCSI protocol (ISCSI configuration and filesystem preparation were already done in Part I).

This post can also be used to create a standby database for disaster recovery purposes.

It's good to know the following:

License:
-------
-Data Guard license comes free with Enterprise Edition license.

Docs:
----
Standby Creation: -Single Node- Oracle Doc.

Standby Creation: -RAC 2 nodes- MAA.

Also Check this OBE link:

Creating a Standby DB on a different OS/Endian than primary: [Metalink Note ID 413484.1]
----------------------------------------------------------
If the primary OS is:          Standby DB can be created on the following OS:
--------------------           ----------------------------------------------
Linux (32-bit)                 Linux (32-bit)
                               Microsoft Windows (32-bit)   =>Oracle 11g onward
                               Linux (64-bit)               =>Oracle 10g onward

Linux (64-bit)                 Linux (64-bit)
                               Linux (32-bit)               =>Oracle 10g onward
                               Microsoft Windows (64-bit)   =>Oracle 11g onward
                               Microsoft Windows (32-bit)   =>Oracle 11g onward
                               Solaris (64-bit) -Non SPARC- =>Oracle 11g onward

Microsoft Windows (32-bit)     Microsoft Windows (32-bit)
                               Microsoft Windows (64-bit)   =>Oracle 10g onward
                               Linux (32-bit)               =>Oracle 11g onward
                               Linux (64-bit)               =>Oracle 11g onward

Microsoft Windows (64-bit)     Microsoft Windows (64-bit)
                               Microsoft Windows (32-bit)   =>Oracle 10g onward
                               Linux (32-bit)               =>Oracle 11g onward
                               Linux (64-bit)               =>Oracle 11g onward

Solaris (64-bit) -Non SPARC-   Solaris (64-bit) -Non SPARC-
                               Solaris (32-bit) -Non SPARC- =>Oracle 10g onward
                               Linux (64-bit)               =>Oracle 11g onward


Note: to see all operating systems endians run the following:
SQL> SELECT * FROM V$TRANSPORTABLE_PLATFORM;

Note: to see your OS endian run the following:
SQL> SELECT PLATFORM_ID,PLATFORM_NAME FROM V$DATABASE;

In brief, the main steps of creating a standby database are the following:
1. Perform an RMAN backup of the primary database.
2. Create the standby controlfile of the primary database.
3. Copy the backup of primary database/standby controlfile/SPFILE to the standby DB server.
4. Copy the password file orapw to the standby DB server.
5. Restore the SPFILE and standby controlfile on the standby DB.
6. Restore the database from the RMAN backup.
7. Configure both primary and standby database with Data Guard initialization parameters.
8. Start Managed Recovery Process to automate recovering the standby DB.

Note: the first three steps can be done in one step if the RMAN command “duplicate target database for standby from active database” is used to create the standby database.
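As a sketch, active duplication would look something like the following (assuming the pefms_pri/pefms_dr tnsnames entries and the password file are already in place on both sides, and the auxiliary instance is started in NOMOUNT; adjust the credentials to your environment):

# rman target sys/xxxxx@pefms_pri auxiliary sys/xxxxx@pefms_dr
RMAN> duplicate target database for standby from active database nofilenamecheck;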

Extra steps are related to Oracle Maximum Availability Architecture (MAA)
To get a deep knowledge of Data Guard Maximum Availability Architecture technical practices I strongly recommend this paper:

Now let's get started...

########################
Operating System Preparation:
########################
I'll refer to the 11.2.0.1 database | server as primary. (ora1121-node1 & ora1121-node2)
I'll refer to the 11.2.0.3 database | server as standby. (ora1123-node1 & ora1123-node2)

Host equivalence between the primary servers and the standby servers:
====================================================
On Primary Node1:
cd /home/oracle/.ssh
scp authorized_keys oracle@ora1123-node1:/home/oracle/.ssh/authorized_keys.primary

On Standby Node1:
cd /home/oracle/.ssh
cat authorized_keys.primary >> authorized_keys
Now the authorized_keys file on standby node1 has the keys of both the primary & standby servers; next we will overwrite this file on all the remaining nodes to complete the host equivalence between all primary and standby nodes.
scp authorized_keys oracle@ora1123-node2:/home/oracle/.ssh/authorized_keys
scp authorized_keys oracle@ora1121-node1:/home/oracle/.ssh/authorized_keys
scp authorized_keys oracle@ora1121-node2:/home/oracle/.ssh/authorized_keys

On Primary node1: (by oracle user) Answer all questions with YES
ssh ora1121-node1 date
ssh ora1121-node2 date
ssh ora1123-node1 date
ssh ora1123-node2 date

On Primary node2: (by oracle user) Answer all questions with YES
ssh ora1121-node1 date
ssh ora1121-node2 date
ssh ora1123-node1 date
ssh ora1123-node2 date

On Standby node1: (by oracle user) Answer all questions with YES
ssh ora1121-node1 date
ssh ora1121-node2 date
ssh ora1123-node1 date
ssh ora1123-node2 date

On Standby node2: (by oracle user) Answer all questions with YES
ssh ora1121-node1 date
ssh ora1121-node2 date
ssh ora1123-node1 date
ssh ora1123-node2 date


#######################
Create The Standby Database:
#######################

Create Directories Tree:
=====================
On Standby node1:
mkdir -p /u01/oracle/11.2.0.3/db/diagnostics/pefms
mkdir -p /u01/oracle/diag/tnslsnr/ora1123-node1/listener_pefms1

mkdir -p /ora_control1/pefms
mkdir -p /ora_control2/pefms
mkdir -p /ora_redo1/pefms
mkdir -p /ora_redo2/pefms
mkdir -p /ora_archive1/pefms
mkdir -p /ora_archive2/pefms
mkdir -p /ora_temp1/pefms
mkdir -p /ora_undo2/pefms
mkdir -p /ora_undo1/pefms
mkdir -p /ora_index1/pefms
mkdir -p /ora_data1/pefms
mkdir -p /ora_backupdisk/flash_recovery_area/PEFMS/flashback

chown -R oracle:oinstall /ora_control1
chown -R oracle:oinstall /ora_control2
chown -R oracle:oinstall /ora_redo1
chown -R oracle:oinstall /ora_redo2
chown -R oracle:oinstall /ora_archive1
chown -R oracle:oinstall /ora_archive2
chown -R oracle:oinstall /ora_temp1
chown -R oracle:oinstall /ora_undo1
chown -R oracle:oinstall /ora_undo2
chown -R oracle:oinstall /ora_index1
chown -R oracle:oinstall /ora_data1
chown -R oracle:oinstall /ora_backupdisk

chmod -R 750 /ora_control1
chmod -R 750 /ora_control2
chmod -R 750 /ora_redo1
chmod -R 750 /ora_redo2
chmod -R 750 /ora_archive1
chmod -R 750 /ora_archive2
chmod -R 750 /ora_temp1
chmod -R 750 /ora_undo1
chmod -R 750 /ora_undo2
chmod -R 750 /ora_index1
chmod -R 750 /ora_data1
chmod -R 750 /ora_backupdisk


Create the listener.ora and tnsnames.ora files:
====================================
vi $ORACLE_HOME/network/admin/listener.ora
# Add the following lines

LISTENER=
  (DESCRIPTION=
    (ADDRESS_LIST=
      (ADDRESS=(PROTOCOL=tcp)(HOST=ora1123-node1)(PORT=1521))))

SID_LIST_LISTENER=
  (SID_LIST=
    (SID_DESC=
     (SDU=32767)
      (ORACLE_HOME=/u01/oracle/11.2.0.3/db)
      (SID_NAME=pefms1)))
Note: this line (SDU=32767) is part of MAA.

vi $ORACLE_HOME/network/admin/tnsnames.ora
# Add the following lines
pefms1=(DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP) (Host = ora1123-node1) (Port = 1521))) (CONNECT_DATA = (SID = pefms1)))
pefms_pri=(DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP) (Host = idub-ora-node1) (Port = 1521))) (CONNECT_DATA = (SID = pefms1)))
pefms_dr=(DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP) (Host = ora1123-node1) (Port = 1521))) (CONNECT_DATA = (SID = pefms1)))
The last two entries will be used later in the standby configuration to help ship redo data and resolve gaps between the primary and the standby DB.

Copy the Password File from primary to standby:
=========================================
On Primary node1:
# cd $ORACLE_HOME/dbs
# scp orapwpefms1 oracle@ora1123-node1:/u01/oracle/11.2.0.3/db/dbs

 In case there is no password file created yet on the primary server:
 ======================================================
On Primary node1:
 Disable password case sensitivity:
 SQL> ALTER SYSTEM SET SEC_CASE_SENSITIVE_LOGON=FALSE SCOPE=BOTH;

 Create the password file:
 # cd $ORACLE_HOME/dbs
 # orapwd file=orapwpefms1 password=xxxxxxxxx ignorecase=y

 Copy the password file to the standby node1:
 # scp orapwpefms1 oracle@ora1123-node1:/u01/oracle/11.2.0.3/db/dbs/orapwpefms1

 Restart the primary DB.
 Set the case sensitivity parameter back:
 SQL> ALTER SYSTEM SET SEC_CASE_SENSITIVE_LOGON=TRUE SCOPE=BOTH;

On Standby node1:
Create SQLNET.ora file:

# vi $ORACLE_HOME/network/admin/sqlnet.ora
#Add this parameter to sqlnet.ora file:
NAMES.DIRECTORY_PATH= (TNSNAMES, ONAMES, HOSTNAME)

Create the pfile:
==============
Total Memory:  33G
Memory reserved for OS: 5G
Memory reserved for DB: 28G
 -SGA  18G
  -Minimum DB_CACHE 8G
  -Minimum SHARED         2G
  -Minimum LARGE         300M
  -Minimum JAVA         100M
  -Log Buffer                 30M
 -PGA  10G
  -Minimum  SORTAREA   80M

# vi $ORACLE_HOME/dbs/initpefms1.ora

#Memory Parameters:
##################
sga_max_size=19327352832
sga_target=19327352832
*.db_cache_size=8589934592
*.java_pool_size=104857600
*.large_pool_size=314572800
*.shared_pool_reserved_size=52428800
*.shared_pool_size=2618767104
*.sort_area_size=83886080
*.log_buffer=31457280
*.pga_aggregate_target=10737418240
#Destination Parameters:
#######################
*.control_files='/ora_control1/pefms/control01.ctl','/ora_control2/pefms/control02.ctl'
*.db_recovery_file_dest='/ora_backupdisk/flash_recovery_area'
*.diagnostic_dest='/u01/oracle/11.2.0.3/db/diagnostics/pefms'
*.log_archive_dest_1='LOCATION=/ora_archive1/pefms'
#Other Parameters:
#################
*.compatible='11.2.0.1'
*.db_flashback_retention_target=21600
*.db_name='pefms'
*.db_recovery_file_dest_size=536870912000
*.fast_start_mttr_target=300
instance_name='pefms1'
log_archive_config='dg_config=(pefms_pri,pefms_dr)'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_format='%d_%t_%r_%s.arc'
*.log_archive_max_processes=3
*.open_cursors=500
*.processes=1000
*.remote_login_passwordfile='EXCLUSIVE'
*.resource_limit=TRUE
*.standby_file_management='AUTO'
thread=1
*.undo_management='AUTO'
*.undo_retention=172800
undo_tablespace='UNDOTBS1'
*.fal_server='PEFMS_PRI'
*.fal_client='PEFMS_DR'
*.db_unique_name='pefmsdr'

Note: Parameter db_unique_name must be set to "pefmsdr"
Note: Parameter log_archive_config must be set to "'dg_config=(pefms_pri,pefms_dr)'".

On the Primary Node1:

Backup the primary database:
=========================
# $ORACLE_HOME/bin/rman target /
RMAN> run {
allocate channel c1 type disk;
allocate channel c2 type disk;
allocate channel c3 type disk;
allocate channel c4 type disk;
change archivelog all crosscheck;
backup as compressed backupset incremental level=0 format '/backupdisk/rmanbkps/%d_%t_%s_%p' tag='fullprodbk'
filesperset 100 database plus archivelog;
}

copy the backup to the DR server:
=============================
# cd /backupdisk/rmanbkps/
# scp * oracle@ora1123-node1:/ora_backupdisk/rmanbkps

create standby controlfile: -On primary-
-------------------------
SQL> alter database create standby controlfile as '/home/oracle/standby.ctl' reuse;

move it to the standby server:
# scp /home/oracle/standby.ctl oracle@ora1123-node1:/ora_control1/pefms/control01.ctl

On the standby Node1:

Start the Standby database creation:
==============================
Multiplex the standby controlfiles:
---------------------------------
# cp /ora_control1/pefms/control01.ctl /ora_control2/pefms/control02.ctl
# chmod 640 /ora_control1/pefms/control01.ctl
# chmod 640 /ora_control2/pefms/control02.ctl

Mount the standby DB:
---------------------
# sqlplus '/ as sysdba' 

SQL> STARTUP NOMOUNT;
SQL> create spfile='/u01/oracle/11.2.0.3/db/dbs/spfilepefms1.ora' from pfile='/u01/oracle/11.2.0.3/db/dbs/initpefms1.ora';
SQL> alter database mount standby database;
SQL> exit

Catalog the RMAN backup that was copied from the primary site, then start the DB restoration:
--------------------------------------------------------------
# $ORACLE_HOME/bin/rman target /
RMAN> catalog start with '/ora_backupdisk/rmanbkps/';
RMAN> restore database;

When it's done, check the archivelogs inside the backup:
-----------------------------------------------
RMAN> list backup of archivelog all; 
RMAN> list backup of archivelog from time 'sysdate-10/24';

Recover the database to the latest scn you have in the backup:
-------------------------------------------------------------
RMAN> recover database until scn xxx;
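If you're not sure which SCN to use, RMAN can derive it from the cataloged backups; as a quick check, the following reports the SCN range the backups can recover to (look at the SCN figures at the end of the output):

RMAN> restore database preview summary;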

No need to worry about the following error, just move to the next step:

ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01152: file 1 was not restored from a sufficiently old backup 
ORA-01110: data file 1: '/ora_data1/pefms/system01.dbf'

Put the standby database in recover managed mode:
-------------------------------------------------
SQL> alter database recover managed standby database disconnect from session;  

set log_archive_config:
SQL> alter system set log_archive_config='dg_config=(pefms_pri,pefms_dr)' sid='*' scope=both;

Note: pefms_pri & pefms_dr are two service entries already added to the tnsnames.ora file on standby node1; pefms_pri points to the primary DB on the primary server, and pefms_dr points to the standby database on the standby server.

Now you have a standby database............................
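You can verify that the managed recovery process (MRP) is running and applying redo on the standby:

SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE# FROM V$MANAGED_STANDBY;
     --> the MRP0 process should show a status like APPLYING_LOG or WAIT_FOR_LOG.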

Start the listener:
---------------
# lsnrctl start


Configure archive logs shipping:
########################
On Primary Node1:
Set the primary database in FORCE LOGGING mode, to ensure that all transactions are being written to the redologs:
SQL> ALTER DATABASE FORCE LOGGING;

=> Insert the following lines, representing the PEFMS_DR service, into the tnsnames.ora on primary node1:

PEFMS_DR =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = ora1123-node1)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = pefms1)
    )
  )

Note: Make sure that the DR is in the mount mode.

On Primary Node1:
SQL> alter system set log_archive_config='dg_config=(PEFMS_DR,PEFMS)' sid='*' scope=both;
SQL> alter system set log_archive_dest_3='service="PEFMS_DR" valid_for=(online_logfiles,primary_role) db_unique_name=PEFMS_DR' sid='*' scope=both;
SQL> alter system set log_archive_dest_state_3='enable' scope=both sid='*';
SQL> alter system set standby_file_management=auto sid='*' scope=both;
SQL> alter system set fal_server='PEFMS_DR' sid='*' scope=both;
SQL> alter system set fal_client='PEFMS1' sid='pefms1' scope=both;
SQL> alter system set fal_client='PEFMS2' sid='pefms2' scope=both;
SQL> alter system set service_names='PEFMS' sid='*' scope=both;
The following are MAA recommendations:
SQL> ALTER SYSTEM SET ARCHIVE_LAG_TARGET=1800 sid='*' scope=both;
     -->REDOLOG switch will be forced every 30min.
SQL> ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=4 sid='*' scope=both;
     -->To quickly resolve gaps in the redo stream to a standby database.

Create standby redo logs on the primary database: (MAA)
========================================
- Even though standby redo logs are only required for the maximum protection and maximum availability modes and with the LGWR ASYNC transport mode, and are not required for maximum performance mode (which I'm using), it's recommended to create them as they speed up redo transport, data recovery, and switchover.
- As the primary database may become the standby database as a result of a switchover or failover, standby redologs should be created on the primary database as well.

The minimum number of standby redolog groups is = the number of online redo logs.
The best practice: Number of standby redologs = (Number of redologs on production) +1
Standby redologs size = Primary redologs size
Standby redologs should not be multiplexed.
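To apply the formulas above, first check the current number and size of the online redologs on the primary:

SQL> SELECT THREAD#, COUNT(*) GROUP_COUNT, MAX(BYTES)/1024/1024 SIZE_MB
     FROM V$LOG GROUP BY THREAD#;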

SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 
 GROUP 7 ('/ora_redo1/pefms/pefms1_redo07_a.log')SIZE 100M reuse,
 GROUP 8 ('/ora_redo1/pefms/pefms1_redo08_a.log')SIZE 100M reuse,
 GROUP 9 ('/ora_redo1/pefms/pefms1_redo09_a.log')SIZE 100M reuse,
 GROUP 10('/ora_redo1/pefms/pefms1_redo10_a.log')SIZE 100M reuse;

SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2
 GROUP 11 ('/ora_redo1/pefms/pefms2_redo11_a.log')SIZE 100M reuse,
 GROUP 12 ('/ora_redo1/pefms/pefms2_redo12_a.log')SIZE 100M reuse,
 GROUP 13 ('/ora_redo1/pefms/pefms2_redo13_a.log')SIZE 100M reuse,
 GROUP 14 ('/ora_redo1/pefms/pefms2_redo14_a.log')SIZE 100M reuse;

SQL> SELECT * FROM V$LOG;
SQL> SELECT * FROM V$STANDBY_LOG;


On the Standby Node1:
##################
create standby redo logs on the standby database:
=========================================
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1
 GROUP 7 ('/ora_redo1/pefms/pefms1_redo07_a.log')SIZE 100M reuse,
 GROUP 8 ('/ora_redo1/pefms/pefms1_redo08_a.log')SIZE 100M reuse,
 GROUP 9 ('/ora_redo1/pefms/pefms1_redo09_a.log')SIZE 100M reuse,
 GROUP 10('/ora_redo1/pefms/pefms1_redo10_a.log')SIZE 100M reuse;

SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2
 GROUP 11 ('/ora_redo1/pefms/pefms2_redo11_a.log')SIZE 100M reuse,
 GROUP 12 ('/ora_redo1/pefms/pefms2_redo12_a.log')SIZE 100M reuse,
 GROUP 13 ('/ora_redo1/pefms/pefms2_redo13_a.log')SIZE 100M reuse,
 GROUP 14 ('/ora_redo1/pefms/pefms2_redo14_a.log')SIZE 100M reuse;

SQL> alter system set standby_file_management='AUTO';

SQL> SELECT * FROM V$LOG;
     SELECT * FROM V$STANDBY_LOG;

Maximum Availability Architecture (MAA) recommendations:
===================================================
> Speed up the parallel recovery:
     SQL> ALTER SYSTEM SET parallel_execution_message_size=16384 scope=spfile;
     -->16384 is the 11gR2 default; the larger the value, the faster the parallel recovery (the parameter is static, so it takes effect after a restart).

> On the standby DB you can shrink the SHARED_POOL to increase the DB_CACHE_SIZE, since the recovery process does not require much shared pool memory.


Enable Flashback on the Standby DB:
-------------------------------
This helps in fixing logical corruption scenarios and makes it easy to re-instate the primary database after failing over to the standby.
SQL> ALTER database flashback on;

Enable the real-time apply on the standby database:
---------------------------------------------
Apply the changes on the DR as soon as the redo data is received:
SQL> RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
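As a quick sanity check, you can confirm that real-time apply is active on the standby by querying the recovery mode:

SQL> SELECT RECOVERY_MODE FROM V$ARCHIVE_DEST_STATUS WHERE DEST_ID=1;
     --> it should report MANAGED REAL TIME APPLY when real-time apply is in effect.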


VERIFY DATA GUARD Archivelog shipping:
=======================================
ON Primary:
------------
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

ON DR:
--------
SQL> SELECT DATABASE_ROLE,OPEN_MODE,PROTECTION_MODE from v$database;
     SELECT THREAD#,SEQUENCE#, FIRST_TIME, NEXT_TIME,applied FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#,applied;
     SELECT thread#,max(SEQUENCE#) from v$archived_log group by thread#;

Parallelize the recovery process:
--------------------------
SQL> ALTER system set recovery_parallelism=16 scope=spfile;

Check the time lag between Primary & Standby:
-----------------------------------------
SQL> col NAME for a15
SQL> col VALUE for a15
SQL> SELECT NAME,VALUE FROM V$DATAGUARD_STATS WHERE NAME='apply lag';

NAME  VALUE
--------------- -------------
apply lag +00 00:04:43

The apply lag is 4 minutes and 43 seconds.

RMAN configuration:
==================
# rman target /
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/ora_backupdisk/rmanbkps/controlfiles/%F';

Add the database services to the clusterware:
====================================
If you want the database and the listener to be managed by clusterware:
# srvctl add database -d pefms -o /u01/oracle/11.2.0.3/db
# srvctl add instance -d pefms -i pefms1 -n ora1123-node1
# srvctl add listener -o /u01/oracle/11.2.0.3/db -l LISTENER_PEFMS1 -s

Now the standby DB is done.

Next:
In Part IV I'll cover the steps of upgrading this standby DB from 11.2.0.1 to 11.2.0.3, configuring it with RAC, and making it ready for production use.


The following information is good to know about Data Guard:

Using the compression option (ASYNC only) -one of the 11g new features- (compression=enable)
-------------------------------------------
Redo compression can:
-Improve data protection by reducing redo transport lag.
-Reduce network utilization.
-Provide faster redo gap resolution.
-Reduce redo transfer time.
Note: it requires the Advanced Compression license.
To implement:
             --Use compression=enable in the log_archive_dest_* parameter.
             --alter system set "_REDO_TRANSPORT_COMPRESS_ALL"=TRUE scope=both;

>This option is recommended when the link speed between the primary and the standby is not fast enough.

Discover ARCHIVE_DEST_N options:
--------------------------------------
reopen: The time after which the primary database tries to reconnect to the standby DB when the connection between them is cut.
compression: Requires the Oracle Advanced Compression license; it compresses redo data when transferring it to the DR. [ID 729551.1]
             If you're using Maximum Performance, consider setting _REDO_TRANSPORT_COMPRESS_ALL=TRUE.
DELAY: Delays applying the redo data, to mitigate potential human-induced errors and logical corruption. I don't recommend setting it;
       the best approach is to delay applying the redo data on the standby DB, or to use the FLASHBACK DATABASE feature on both sites.

Examples:
--SQL> alter system set log_archive_dest_3='service="PEFMS_DR" valid_for=(online_logfiles,primary_role) db_unique_name=PEFMS_DR DELAY=240' sid='*' scope=both;
--SQL> alter system set log_archive_dest_3='service="PEFMS_DR" valid_for=(online_logfiles,primary_role) db_unique_name=PEFMS_DR delay=60' sid='*' scope=both;
--SQL> alter system set log_archive_dest_3='service="PEFMS_DR" ARCH NOAFFIRM delay=0 optional reopen=15 register max_failure=10 db_unique_name=PEFMS_DR compression=enable';
Note: In case of using the compression option, also set the parameter below:
--SQL> alter system set "_REDO_TRANSPORT_COMPRESS_ALL"=TRUE scope=both;

Using the maximum availability mode with LGWR SYNC:
--SQL> alter system set log_archive_dest_3='service=PEFMS_DR LGWR SYNC AFFIRM db_unique_name=PEFMS_DR VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE) compression=enable';
--SQL> Alter database set standby database to maximize availability;
Then restart the production and the standby databases.