In Part I I installed the Linux OS and prepared the shared filesystem (iSCSI configuration & OCFS2).
In Part II I prepared the Linux OS for the RAC installation and installed the 11.2.0.3 Grid Infrastructure and Database software.
In this part I'll create a physical standby database under the 11.2.0.3 Oracle Home on the new RAC servers, refreshed from the 11.2.0.1 primary database. The goal is to minimize downtime by eliminating the time wasted on copying the 11.2.0.1 datafiles from the old SAN to the new NAS where the new 11.2.0.3 home resides.
There are many technologies/means that can do the same job and spare you from creating a standby database just to minimize the datafile copy time to the new 11.2.0.3 shared storage. It can be as simple as dismounting the filesystem where the datafiles are located from the old 11.2.0.1 servers and mounting it on the new servers, if the new servers are connected to the same SAN/NAS (done from the NAS/SAN console), or utilizing a newer technology like SAN-to-SAN replication if the new 11.2.0.3 servers are connected to a different SAN/NAS storage. Those are just examples; there are many other solutions on the market that can get this job done without creating a standby DB.
Anyway, I'll take the hard & cheapest route and create a standby database on the new 11.2.0.3 servers on the new NAS storage, again for the purpose of minimizing the datafile copy time during the upgrade phase.
My primary database is located on SAN storage connected to the 11.2.0.1 RAC servers through fiber cables; the new 11.2.0.3 database will be located on NAS storage connected to the 11.2.0.3 servers through Ethernet using the iSCSI protocol (the iSCSI configuration and filesystem preparation were already done in Part I).
This post can also be used to create a standby database for disaster recovery purposes.
It's good to know the following:
License:
-------
-The Data Guard license comes free with the Enterprise Edition license.
Docs:
----
Standby Creation: -Single Node- Oracle Doc.
Standby Creation: -RAC 2 nodes- MAA.
Also check this OBE link:
Creating a Standby DB on a different OS/Endian than primary: [Metalink Note ID 413484.1]
----------------------------------------------------------
If the primary OS is:            Standby DB can be created on the following OS:
---------------------            -----------------------------------------------
Linux (32-bit)                   Linux (32-bit)
                                 Microsoft Windows (32-bit)   => Oracle 11g onward
                                 Linux (64-bit)               => Oracle 10g onward
Linux (64-bit)                   Linux (64-bit)
                                 Linux (32-bit)               => Oracle 10g onward
                                 Microsoft Windows (64-bit)   => Oracle 11g onward
                                 Microsoft Windows (32-bit)   => Oracle 11g onward
                                 Solaris (64-bit) -Non SPARC- => Oracle 11g onward
Microsoft Windows (32-bit)       Microsoft Windows (32-bit)
                                 Microsoft Windows (64-bit)   => Oracle 10g onward
                                 Linux (32-bit)               => Oracle 11g onward
                                 Linux (64-bit)               => Oracle 11g onward
Microsoft Windows (64-bit)       Microsoft Windows (64-bit)
                                 Microsoft Windows (32-bit)   => Oracle 10g onward
                                 Linux (32-bit)               => Oracle 11g onward
                                 Linux (64-bit)               => Oracle 11g onward
Solaris (64-bit) -Non SPARC-     Solaris (64-bit) -Non SPARC-
                                 Solaris (32-bit) -Non SPARC- => Oracle 10g onward
                                 Linux (64-bit)               => Oracle 11g onward
Note: to see the endian format of all operating systems, run the following:
SQL> SELECT * FROM V$TRANSPORTABLE_PLATFORM;
Note: to see your own OS endian, run the following:
SQL> SELECT PLATFORM_ID, PLATFORM_NAME FROM V$DATABASE;
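For example, to narrow the first listing down to the Linux rows only (the ENDIAN_FORMAT column shows the byte order of each platform):
SQL> SELECT PLATFORM_ID, PLATFORM_NAME, ENDIAN_FORMAT FROM V$TRANSPORTABLE_PLATFORM WHERE PLATFORM_NAME LIKE '%Linux%';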
In brief, the main steps of creating a standby database are the following:
1. Perform an RMAN backup of the primary database.
2. Create a standby controlfile from the primary database.
3. Copy the primary database backup, the standby controlfile, and the SPFILE to the standby DB server.
4. Copy the password file orapw to the standby DB server.
5. Restore the SPFILE and the standby controlfile on the standby DB.
6. Restore the database from the RMAN backup.
7. Configure both the primary and the standby database with Data Guard initialization parameters.
8. Start the Managed Recovery Process to automate recovering the standby DB.
Note: the first three steps can be done in one step if the RMAN command “duplicate target database for standby from active database” is used to create the standby database, as sketched below.
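As a rough sketch of that one-step alternative (assuming the pefms_pri and pefms_dr TNS entries created later in this post resolve from the standby server, the password file is already in place, and the auxiliary standby instance was started in NOMOUNT mode):
# rman target sys/xxxxxxxxx@pefms_pri auxiliary sys/xxxxxxxxx@pefms_dr
RMAN> duplicate target database for standby from active database nofilenamecheck;
I'm not taking this route here; the backup-based steps are shown below instead.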
Some extra steps below are related to the Oracle Maximum Availability Architecture (MAA).
To get a deep knowledge of the Data Guard Maximum Availability Architecture technical practices, I strongly recommend this paper:
Now let's get started...
########################
Operating System Preparation:
########################
I'll refer to the 11.2.0.1 database | servers as primary. (ora1121-node1 & ora1121-node2)
I'll refer to the 11.2.0.3 database | servers as standby. (ora1123-node1 & ora1123-node2)
Host equivalence between primary servers and standby servers:
====================================================
On Primary Node1:
cd /home/oracle/.ssh
scp authorized_keys oracle@ora1123-node1:/home/oracle/.ssh/authorized_keys.primary
On Standby Node1:
cd /home/oracle/.ssh
cat authorized_keys.primary >> authorized_keys
Now the authorized_keys file on standby node1 has the keys of both the primary & standby servers; next we overwrite this file on the remaining primary and standby nodes to complete the host equivalence between all of them.
scp authorized_keys oracle@ora1123-node2:/home/oracle/.ssh/authorized_keys
scp authorized_keys oracle@ora1121-node1:/home/oracle/.ssh/authorized_keys
scp authorized_keys oracle@ora1121-node2:/home/oracle/.ssh/authorized_keys
On Primary node1: (by oracle user) Answer all questions with YES
ssh ora1121-node1 date
ssh ora1121-node2 date
ssh ora1123-node1 date
ssh ora1123-node2 date
On Primary node2: (by oracle user) Answer all questions with YES
ssh ora1121-node1 date
ssh ora1121-node2 date
ssh ora1123-node1 date
ssh ora1123-node2 date
On Standby node1: (by oracle user) Answer all questions with YES
ssh ora1121-node1 date
ssh ora1121-node2 date
ssh ora1123-node1 date
ssh ora1123-node2 date
On Standby node2: (by oracle user) Answer all questions with YES
ssh ora1121-node1 date
ssh ora1121-node2 date
ssh ora1123-node1 date
ssh ora1123-node2 date
#######################
Create The Standby Database:
#######################
Create Directories Tree:
=====================
On Standby node1:
mkdir -p /u01/oracle/11.2.0.3/db/diagnostics/pefms
mkdir -p /u01/oracle/diag/tnslsnr/ora1123-node1/listener_pefms1
mkdir -p /ora_control1/pefms
mkdir -p /ora_control2/pefms
mkdir -p /ora_redo1/pefms
mkdir -p /ora_redo2/pefms
mkdir -p /ora_archive1/pefms
mkdir -p /ora_archive2/pefms
mkdir -p /ora_temp1/pefms
mkdir -p /ora_undo2/pefms
mkdir -p /ora_undo1/pefms
mkdir -p /ora_index1/pefms
mkdir -p /ora_data1/pefms
mkdir -p /ora_backupdisk/flash_recovery_area/PEFMS/flashback
chown -R oracle:oinstall /ora_control1
chown -R oracle:oinstall /ora_control2
chown -R oracle:oinstall /ora_redo1
chown -R oracle:oinstall /ora_redo2
chown -R oracle:oinstall /ora_archive1
chown -R oracle:oinstall /ora_archive2
chown -R oracle:oinstall /ora_temp1
chown -R oracle:oinstall /ora_undo1
chown -R oracle:oinstall /ora_undo2
chown -R oracle:oinstall /ora_index1
chown -R oracle:oinstall /ora_data1
chown -R oracle:oinstall /ora_backupdisk
chmod -R 750 /ora_control1
chmod -R 750 /ora_control2
chmod -R 750 /ora_redo1
chmod -R 750 /ora_redo2
chmod -R 750 /ora_archive1
chmod -R 750 /ora_archive2
chmod -R 750 /ora_temp1
chmod -R 750 /ora_undo1
chmod -R 750 /ora_undo2
chmod -R 750 /ora_index1
chmod -R 750 /ora_data1
chmod -R 750 /ora_backupdisk
Create the listener.ora and tnsnames.ora files:
====================================
vi $ORACLE_HOME/network/admin/listener.ora
# Add the following lines
LISTENER=
  (DESCRIPTION=
    (ADDRESS_LIST=
      (ADDRESS=(PROTOCOL=tcp)(HOST=ora1123-node1)(PORT=1521))))
SID_LIST_LISTENER=
  (SID_LIST=
    (SID_DESC=
      (SDU=32767)
      (ORACLE_HOME=/u01/oracle/11.2.0.3/db)
      (SID_NAME=pefms1)))
Note: the (SDU=32767) line is part of MAA.
vi $ORACLE_HOME/network/admin/tnsnames.ora
# Add the following lines
pefms1=(DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP) (Host = ora1123-node1) (Port = 1521))) (CONNECT_DATA = (SID = pefms1)))
pefms_pri=(DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP) (Host = idub-ora-node1) (Port = 1521))) (CONNECT_DATA = (SID = pefms1)))
pefms_dr=(DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP) (Host = ora1123-node1) (Port = 1521))) (CONNECT_DATA = (SID = pefms1)))
The last two entries will be used later in the standby configuration to help ship the redo data and resolve gaps between the primary and the standby DB.
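A quick sanity check that both entries resolve and that the listeners behind them are reachable (tnsping only tests the listener endpoint, not the database itself):
# tnsping pefms_pri
# tnsping pefms_dr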
Copy the Password File from primary to standby:
=========================================
On Primary node1:
# cd $ORACLE_HOME/dbs
# scp orapwpefms1 oracle@ora1123-node1:/u01/oracle/11.2.0.3/db/dbs
In case there is no password file created yet on the primary server:
======================================================
On Primary node1:
Stop the password case sensitivity:
SQL> ALTER SYSTEM SET SEC_CASE_SENSITIVE_LOGON=FALSE SCOPE=BOTH;
Create the password file:
# cd $ORACLE_HOME/dbs
# orapwd file=orapwpefms1 password=xxxxxxxxx ignorecase=y
Copy the password file to standby node1:
# scp orapwpefms1 oracle@ora1123-node1:/u01/oracle/11.2.0.3/db/dbs/orapwpefms1
Restart the primary DB.
Set the case sensitivity parameter back:
SQL> ALTER SYSTEM SET SEC_CASE_SENSITIVE_LOGON=TRUE SCOPE=BOTH;
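To verify the password file is picked up on the primary, V$PWFILE_USERS should list the users granted SYSDBA through it:
SQL> SELECT * FROM V$PWFILE_USERS;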
On Standby node1:
Create the SQLNET.ora file:
# vi $ORACLE_HOME/network/admin/sqlnet.ora
# Add this parameter to the sqlnet.ora file:
NAMES.DIRECTORY_PATH= (TNSNAMES, ONAMES, HOSTNAME)
Create the pfile:
==============
Total memory: 33G
Memory reserved for the OS: 5G
Memory reserved for the DB: 28G
 -SGA 18G
   -Minimum DB_CACHE 8G
   -Minimum SHARED   2G
   -Minimum LARGE    300M
   -Minimum JAVA     100M
   -Log Buffer       30M
 -PGA 10G
   -Minimum SORTAREA 80M
# vi $ORACLE_HOME/dbs/initpefms1.ora
#Memory Parameters:
##################
sga_max_size=19327352832
sga_target=19327352832
*.db_cache_size=8589934592
*.java_pool_size=104857600
*.large_pool_size=314572800
*.shared_pool_reserved_size=52428800
*.shared_pool_size=2618767104
*.sort_area_size=83886080
*.log_buffer=31457280
*.pga_aggregate_target=10737418240
#Destination Parameters:
#######################
*.control_files='/ora_control1/pefms/control01.ctl','/ora_control2/pefms/control02.ctl'
*.db_recovery_file_dest='/ora_backupdisk/flash_recovery_area'
*.diagnostic_dest='/u01/oracle/11.2.0.3/db/diagnostics/pefms'
*.log_archive_dest_1='LOCATION=/ora_archive1/pefms'
#Other Parameters:
#################
*.compatible='11.2.0.1'
*.db_flashback_retention_target=21600
*.db_name='pefms'
*.db_recovery_file_dest_size=536870912000
*.fast_start_mttr_target=300
instance_name='pefms1'
log_archive_config='dg_config=(pefms_pri,pefms_dr)'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_format='%d_%t_%r_%s.arc'
*.log_archive_max_processes=3
*.open_cursors=500
*.processes=1000
*.remote_login_passwordfile='EXCLUSIVE'
*.resource_limit=TRUE
*.standby_file_management='AUTO'
thread=1
*.undo_management='AUTO'
*.undo_retention=172800
undo_tablespace='UNDOTBS1'
*.fal_server='PEFMS_PRI'
*.fal_client='PEFMS_DR'
*.db_unique_name='pefmsdr'
Note: Parameter db_unique_name must be set to "pefmsdr".
Note: Parameter log_archive_config must be set to 'dg_config=(pefms_pri,pefms_dr)'.
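The matching names on the primary side can be confirmed before going further (db_unique_name defaults to db_name when it was never set explicitly):
SQL> show parameter db_unique_name
SQL> show parameter log_archive_config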
On the Primary Node1:
Backup the primary database:
=========================
# $ORACLE_HOME/bin/rman target /
RMAN> run {
allocate channel c1 type disk;
allocate channel c2 type disk;
allocate channel c3 type disk;
allocate channel c4 type disk;
change archivelog all crosscheck;
backup as compressed backupset incremental level=0 format '/backupdisk/rmanbkps/%d_%t_%s_%p' tag='fullprodbk' filesperset 100 database plus archivelog;
}
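Before copying anything to the DR server, it's worth confirming the backup pieces were created (a standard RMAN check):
RMAN> list backup summary;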
Copy the backup to the DR server:
=============================
# cd /backupdisk/rmanbkps/
# scp * oracle@ora1123-node1:/ora_backupdisk/rmanbkps
Create the standby controlfile: -On primary-
-------------------------
SQL> alter database create standby controlfile as '/home/oracle/standby.ctl' reuse;
Move it to the standby server:
# scp /home/oracle/standby.ctl oracle@ora1123-node1:/ora_control1/pefms/control01.ctl
On the standby Node1:
Start the standby database creation:
==============================
Multiplex the standby controlfiles:
---------------------------------
# cp /ora_control1/pefms/control01.ctl /ora_control2/pefms/control02.ctl
# chmod 640 /ora_control1/pefms/control01.ctl
# chmod 640 /ora_control2/pefms/control02.ctl
Mount the standby DB:
---------------------
# sqlplus '/ as sysdba'
SQL> STARTUP NOMOUNT;
SQL> create spfile='/u01/oracle/11.2.0.3/db/dbs/spfilepefms1.ora' from pfile='/u01/oracle/11.2.0.3/db/dbs/initpefms1.ora';
SQL> alter database mount standby database;
SQL> exit
Catalog the RMAN backup that was copied from the primary site, then start the database restore:
--------------------------------------------------------------
# $ORACLE_HOME/bin/rman target /
RMAN> catalog start with '/ora_backupdisk/rmanbkps/';
RMAN> restore database;
When it's done, check the archivelogs inside the backup:
-----------------------------------------------
RMAN> list backup of archivelog all;
RMAN> list backup of archivelog from time 'sysdate-10/24';
Recover the database to the latest SCN you have in the backup (take it from the "Next SCN" column of the LIST output above):
-------------------------------------------------------------
RMAN> recover database until scn xxx;
No need to worry about the following error; just move to the next step:
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01152: file 1 was not restored from a sufficiently old backup
ORA-01110: data file 1: '/ora_data1/pefms/system01.dbf'
Put the standby database in managed recovery mode:
-------------------------------------------------
SQL> alter database recover managed standby database disconnect from session;
Set log_archive_config:
SQL> alter system set log_archive_config='dg_config=(pefms_pri,pefms_dr)' sid='*' scope=both;
Note: pefms_pri & pefms_dr are the two entries already added to the tnsnames.ora file on standby node1; pefms_pri points to the primary DB on the primary server and pefms_dr points to the standby database on the standby server.
Now you have a standby database.
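To confirm the managed recovery process (MRP0) is actually running on the standby:
SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE# FROM V$MANAGED_STANDBY;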
Start the listener:
---------------
# lsnrctl start
Configure archivelog shipping:
########################
On Primary Node1:
Set the primary database in FORCE LOGGING mode, to ensure that all transactions are written to the redologs:
SQL> ALTER DATABASE FORCE LOGGING;
Insert the following lines, representing the PEFMS_DR service, into the tnsnames.ora on primary node1:
PEFMS_DR =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = ora1123-node1)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = pefms1)
)
)
Note: Make sure that the DR database is in mount mode.
On Primary Node1:
SQL> alter system set log_archive_config='dg_config=(PEFMS_DR,PEFMS)' sid='*' scope=both;
SQL> alter system set log_archive_dest_3='service="PEFMS_DR" valid_for=(online_logfiles,primary_role) db_unique_name=PEFMS_DR' sid='*' scope=both;
SQL> alter system set log_archive_dest_state_3='enable' scope=both sid='*';
SQL> alter system set standby_file_management=auto sid='*' scope=both;
SQL> alter system set fal_server='PEFMS_DR' sid='*' scope=both;
SQL> alter system set fal_client='PEFMS1' sid='pefms1' scope=both;
SQL> alter system set fal_client='PEFMS2' sid='pefms2' scope=both;
SQL> alter system set service_names='PEFMS' sid='*' scope=both;
The following are MAA recommendations:
SQL> ALTER SYSTEM SET ARCHIVE_LAG_TARGET=1800 sid='*' scope=both;
--> A redolog switch will be forced every 30 min.
SQL> ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=4 sid='*' scope=both;
--> To quickly resolve gaps in the redo stream to a standby database.
Create standby redo logs on the primary database: (MAA)
========================================
- Even though standby redo logs are only required for the maximum protection and maximum availability modes and for the LGWR ASYNC transport mode, and are not required for maximum performance mode (which I'm using), it's recommended to create them as they speed up redo transport, data recovery, and switchover.
- As the primary database may become the standby database as a result of a switchover or failover, standby redologs should be created on the primary database as well.
The minimum number of standby redolog groups = the number of online redolog groups.
The best practice: number of standby redologs = (number of redologs on production) + 1.
Standby redolog size = primary redolog size.
Standby redologs should not be multiplexed.
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1
GROUP 7  ('/ora_redo1/pefms/pefms1_redo07_a.log') SIZE 100M reuse,
GROUP 8  ('/ora_redo1/pefms/pefms1_redo08_a.log') SIZE 100M reuse,
GROUP 9  ('/ora_redo1/pefms/pefms1_redo09_a.log') SIZE 100M reuse,
GROUP 10 ('/ora_redo1/pefms/pefms1_redo10_a.log') SIZE 100M reuse;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2
GROUP 11 ('/ora_redo1/pefms/pefms2_redo11_a.log') SIZE 100M reuse,
GROUP 12 ('/ora_redo1/pefms/pefms2_redo12_a.log') SIZE 100M reuse,
GROUP 13 ('/ora_redo1/pefms/pefms2_redo13_a.log') SIZE 100M reuse,
GROUP 14 ('/ora_redo1/pefms/pefms2_redo14_a.log') SIZE 100M reuse;
SQL> SELECT * FROM V$LOG;
SQL> SELECT * FROM V$STANDBY_LOG;
On the Standby Node1:
##################
Create standby redo logs on the standby database:
=========================================
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1
GROUP 7  ('/ora_redo1/pefms/pefms1_redo07_a.log') SIZE 100M reuse,
GROUP 8  ('/ora_redo1/pefms/pefms1_redo08_a.log') SIZE 100M reuse,
GROUP 9  ('/ora_redo1/pefms/pefms1_redo09_a.log') SIZE 100M reuse,
GROUP 10 ('/ora_redo1/pefms/pefms1_redo10_a.log') SIZE 100M reuse;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2
GROUP 11 ('/ora_redo1/pefms/pefms2_redo11_a.log') SIZE 100M reuse,
GROUP 12 ('/ora_redo1/pefms/pefms2_redo12_a.log') SIZE 100M reuse,
GROUP 13 ('/ora_redo1/pefms/pefms2_redo13_a.log') SIZE 100M reuse,
GROUP 14 ('/ora_redo1/pefms/pefms2_redo14_a.log') SIZE 100M reuse;
SQL> alter system set standby_file_management='AUTO';
SQL> SELECT * FROM V$LOG;
SQL> SELECT * FROM V$STANDBY_LOG;
Maximum Availability Architecture (MAA) recommendations:
===================================================
> Speed up the parallel recovery:
SQL> ALTER SYSTEM SET parallel_execution_message_size=16384;
--> 16384 is the 11gR2 default; the larger the value, the faster the parallel recovery.
> On the standby DB you can shrink the SHARED_POOL to increase the DB_CACHE_SIZE, since the recovery process does not require much shared pool memory.
Enable Flashback on the Standby DB:
-------------------------------
This helps in fixing logical corruption scenarios and in easily re-instating the primary database after failing over to the standby.
SQL> ALTER DATABASE FLASHBACK ON;
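Then verify it took effect:
SQL> SELECT FLASHBACK_ON FROM V$DATABASE;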
Enable real-time apply on the standby database:
---------------------------------------------
This applies the changes on the DR as soon as the redo data is received. Cancel the managed recovery started earlier, then restart it with real-time apply:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
Verify Data Guard archivelog shipping:
=======================================
On Primary:
------------
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
On DR:
--------
SQL> SELECT DATABASE_ROLE, OPEN_MODE, PROTECTION_MODE FROM v$database;
SQL> SELECT THREAD#, SEQUENCE#, FIRST_TIME, NEXT_TIME, applied FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#, applied;
SQL> SELECT thread#, max(SEQUENCE#) FROM v$archived_log GROUP BY thread#;
Parallelize the recovery process:
--------------------------
SQL> ALTER system set recovery_parallelism=16 scope=spfile;
Check the time lag between Primary & Standby:
-----------------------------------------
SQL> col NAME for a15
SQL> col VALUE for a15
SQL> SELECT NAME, VALUE FROM V$DATAGUARD_STATS WHERE NAME='apply lag';
NAME            VALUE
--------------- ---------------
apply lag       +00 00:04:43
The apply lag here is 4 minutes and 43 seconds.
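The same view also exposes the transport lag; checking both values in one shot gives a fuller picture:
SQL> SELECT NAME, VALUE FROM V$DATAGUARD_STATS WHERE NAME IN ('apply lag','transport lag');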
RMAN configuration:
==================
# rman target /
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/ora_backupdisk/rmanbkps/controlfiles/%F';
Add the database services to the clusterware:
====================================
If you want the database and the listener to be managed by the clusterware:
# srvctl add database -d pefms -o /u01/oracle/11.2.0.3/db
# srvctl add instance -d pefms -i pefms1 -n ora1123-node1
# srvctl add listener -o /u01/oracle/11.2.0.3/db -l LISTENER_PEFMS1 -s
Now the standby DB is done.
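The registration can then be verified with the standard srvctl checks:
# srvctl config database -d pefms
# srvctl status database -d pefms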
Next:
In the next part (Part IV) I'll cover the steps of upgrading this standby DB from 11.2.0.1 to 11.2.0.3, configuring it with RAC, and making it ready for production use.
The following information is good to know about Data Guard:
Using the compression option (ASYNC only) -one of the 11g new features-. (compression=enable)
-------------------------------------------
Redo compression can:
-Improve data protection by reducing redo transport lag.
-Reduce network utilization.
-Provide faster redo gap resolution.
-Reduce redo transfer time.
It does, however, require the Advanced Compression license.
To implement:
--Use compression=enable in the log_archive_dest_n parameter.
--alter system set "_REDO_TRANSPORT_COMPRESS_ALL"=TRUE scope=both;
> This option is recommended when the network link speed between the primary and the standby is not fast enough.
Discover ARCHIVE_DEST_N options:
--------------------------------------
reopen: the time (in seconds) the primary database waits before trying to reconnect to the standby DB after the connection between them is cut.
compression: requires the Oracle Advanced Compression license; it compresses the redo data when transferring it to the DR. [ID 729551.1]
If you're using Maximum Performance mode, consider setting _REDO_TRANSPORT_COMPRESS_ALL=TRUE.
DELAY: delays applying the received redo data on the standby, to mitigate potential human-induced errors and logical corruption. I don't recommend setting it; the better approach is to use the FLASHBACK DATABASE feature on both sites instead.
Examples:
--SQL> alter system set log_archive_dest_3='service="PEFMS_DR" valid_for=(online_logfiles,primary_role) db_unique_name=PEFMS_DR DELAY=240' sid='*' scope=both;
--SQL> alter system set log_archive_dest_3='service="PEFMS_DR" valid_for=(online_logfiles,primary_role) db_unique_name=PEFMS_DR delay=60' sid='*' scope=both;
--SQL> alter system set log_archive_dest_3='service="PEFMS_DR" ARCH NOAFFIRM delay=0 optional reopen=15 register max_failure=10 db_unique_name=PEFMS_DR compression=enable';
Note: in case of using the compression option, alter the parameter below:
--SQL> alter system set "_REDO_TRANSPORT_COMPRESS_ALL"=TRUE scope=both;
Using the maximum availability mode with LGWR SYNC:
--SQL> alter system set log_archive_dest_3='service=PEFMS_DR LGWR SYNC AFFIRM db_unique_name=PEFMS_DR VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE) compression=enable';
--SQL> alter database set standby database to maximize availability;
Then restart the production and the standby databases.
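After the restart, the resulting mode can be verified on either site:
SQL> SELECT PROTECTION_MODE, PROTECTION_LEVEL FROM V$DATABASE;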