MOST USED RAC COMMANDS [10g]
srvctl: For Databases and Instances:
Start / Stop a RAC database:
----------------------------
srvctl start database -d database_name
srvctl stop database -d database_name
Start / Stop a RAC instance:
----------------------------
srvctl start instance -d database_name -i instance_name -o open
srvctl stop instance -d database_name -i instance_name -o immediate
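For example, with a database named ORCL and an instance ORCL1 (illustrative names):
srvctl start instance -d ORCL -i ORCL1 -o open
srvctl stop instance -d ORCL -i ORCL1 -o immediate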
Start / Stop node applications on a node:
----------------------------------------
srvctl start nodeapps -n node_name
srvctl stop nodeapps -n node_name
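Their status can be checked the same way (a standard srvctl option, node_name is a placeholder):
srvctl status nodeapps -n node_name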
To add a database/instance to RAC cluster:
--------------------------------------
srvctl add database -d database_name
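A fuller 10g example registering a database and one of its instances (the ORACLE_HOME path, database, instance, and node names below are illustrative):
srvctl add database -d ORCL -o /u01/app/oracle/product/10.2.0/db_1
srvctl add instance -d ORCL -i ORCL1 -n node1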
Start / Stop the Listener:
-------------------------
srvctl start listener -l listener_name
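The header mentions stop as well; the matching command (on 10g you typically also pass the node, all names are placeholders) would be:
srvctl stop listener -n node_name -l listener_name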
To add a node:
-------------
srvctl add nodeapps -n node_name
Start/stop asm:
--------------
srvctl start asm -n node_name
srvctl stop asm -n node_name
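To check ASM status on a node afterwards (a standard srvctl option, node_name is a placeholder):
srvctl status asm -n node_name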
To prevent a database from starting at boot time:
------------------------------------------------
srvctl disable database -d database_name
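The counterpart, to let it start automatically again later:
srvctl enable database -d database_name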
CRS RESOURCE STATUS:
-------------------
srvctl status service -d database_name
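For example, to check one specific service or the whole database (ORCL and the service name are illustrative):
srvctl status service -d ORCL -s oltp_srv
srvctl status database -d ORCL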
CRSCTL: Resources and Nodes:
To stop all RAC resources on the node you are currently logged on to: (By root user)
-----------------------------------------------------
crsctl stop crs
To start all RAC resources: (By root user)
--------------------------
crsctl start crs
Check RAC status:
----------------
crs_stat -t
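Note that crs_stat is deprecated from 11gR2 onward; the equivalent resource listing there is:
crsctl stat res -t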
CRS health check:
--------------
crsctl check crs
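The individual clusterware daemons can also be checked with the 10g-era subcommands (deprecated in 11gR2):
crsctl check cssd
crsctl check crsd
crsctl check evmd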
Clusterware version:
-------------------
crsctl query crs softwareversion
crsctl query crs activeversion
Prevent the CRS from starting at boot time:
=============================
# crsctl stop crs --> (On the failing node) will stop CRS.
# crsctl disable crs -->(On the failing node) will prevent CRS from starting on the next reboot.
After you fix the problem, re-enable CRS on the node so it starts again after the OS reboots:
# crsctl enable crs
Voting disks:
Voting disks are used for the disk heartbeat, which is essential for detecting and resolving cluster "split brain" situations.
Backing up Vote disks:
------------------------
In 10g this can be done while the CRS is running:
================================
# dd if=voting_disk_name of=backup_file_name
In 11g you must shut down the CRS first:
========================
# crsctl stop crs (On all nodes)
# dd if=voting_disk_name of=backup_file_name
Note: Don't use the copy command "cp"; use the "dd" command only.
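For example, assuming the voting disk path shown in the sample output further below and an illustrative backup destination:
# dd if=/oracle/ORCL/voting/voting_01.dbf of=/backupdisk/voting_01.bak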
When to back up vote disks:
=================
You do not have to back up the voting disk every day. Back up only in the following cases:
-After RAC installation.
-After adding or deleting a node on the cluster.
-After adding or removing a votedisk using the CRSCTL command.
Note: In 11gR2 the voting disk contents are backed up automatically into the OCR, so you are not required to manually back up the voting disks.
Check Voting Disk:
------------------
# crsctl query css votedisk
Restore votedisks: (By root user)
---------------------
Case of losing all of the votedisks:
====================
1- Shut down CRS: (On all Nodes)
---------------
# crsctl stop crs
2-Locate the current location of the Votedisks:
-----------------------------------------
# crsctl query css votedisk
3- Restore all votedisks from a previous good backup taken by the "dd" command: (On One node only; see the example after this procedure)
-----------------------------------------------------------------------
# dd if=Votedisk_backup_file of=Votedisk_file --> do this for all the votedisks.
4-Start CRS: (On all Nodes)
------------
# crsctl start crs
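A concrete version of the restore in step 3, using the illustrative paths from the backup example earlier (repeat for each votedisk):
# dd if=/backupdisk/voting_01.bak of=/oracle/ORCL/voting/voting_01.dbf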
Case of losing ONE voting disk:
==================
1-Start the clusterware in exclusive mode: (On One node only)
--------------------------------------------------------
# crsctl start crs -excl
2- Retrieve the list of voting disks currently defined (if any are found):
---------------------------------------------------
# crsctl query css votedisk
##  STATE  File Universal Id                 File Name                            Disk group
--  -----  -----------------                 ---------                            ----------
 1. ON     938e3a4fd41a4f5bbf8d0d5c676aca9a  (/oracle/ORCL/voting/voting_01.dbf)  []
 2. ON     99f4b4f7d65f4f05bf237ad06c52b65a  (/oracle/ORCL/voting/voting_02.dbf)  []
 3. OFF    0578603d0f6b4f21bfb2eb22ae82f00d  (/oracle/ORCL/voting/voting_03.dbf)  []
This list may be empty if all voting disks were corrupted; otherwise the corrupted disk will show a "STATE" of "3" or "OFF".
3-Delete the corrupted voting disks:
------------------------------
# crsctl delete css votedisk /oracle/ORCL/voting/voting_03.dbf
Note: You can also use the "File Universal Id" instead of the full path:
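For example, using the File Universal Id of the OFF disk from the sample output above:
# crsctl delete css votedisk 0578603d0f6b4f21bfb2eb22ae82f00d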
Note:
= It is not recommended to use the "-force" attribute to add or delete a voting disk while the Clusterware is running. This is known to corrupt the OCR (no errors will appear, but it will lead to node eviction).
= The "-force" attribute can be safely used ONLY if the Clusterware is stopped on all the nodes of the cluster.
4- Add the voting disks again:
------------------------
First: re-create the corrupted file as an empty file:
# touch /oracle/ORCL/voting/voting_03.dbf
Second: Add the re-created file to the votedisk list:
# crsctl add css votedisk /oracle/ORCL/voting/voting_03.dbf
Note: You can also copy a good votedisk over the corrupted one, or use links to backup locations to save time.
Restart the clusterware:
-----------------------
# crsctl stop crs -f --> -f because we started it in exclusive mode.
# crsctl start crs --> On all nodes.
OCR disks:
OCR disks hold the clusterware configuration information (node info, registered resources, databases, instances, listeners, services, etc.). It is somewhat similar to the "Registry" in Windows.
Checking OCR disks:
-----------------------
# ocrcheck
Restore OCR from automatic backups being taken every 4 hours:
------------------------------------------------------------
# crsctl stop crs -> On all RAC nodes.
# ocrconfig -restore /CRS_HOME/cdata/CLUSTER_NAME/xxxx.ocr -> From one node only.
# crsctl start crs -> On all RAC nodes.
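To see which automatic backups are available (and where) before running the restore above, ocrconfig has a standard listing option:
# ocrconfig -showbackup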
Restore OCR from an export file taken manually using the "ocrconfig -export" command:
----------------------------------------------------------------------------------
# ocrconfig -import /backupdisk/xxxx.dmp -> On one RAC node only.
# crsctl start crs -> On all RAC nodes.
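Taking such a manual export in the first place looks like this (the destination path is illustrative):
# ocrconfig -export /backupdisk/ocr_backup.dmp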
Miscellaneous:
Check if a database is RAC or not:
========================
SQL> show parameter CLUSTER_DATABASE;
OR:
--
SQL> set serveroutput on;
SQL> BEGIN
  IF dbms_utility.is_cluster_database THEN
    dbms_output.put_line('Running in SHARED/RAC mode.');
  ELSE
    dbms_output.put_line('Running in EXCLUSIVE mode.');
  END IF;
END;
/
Check the active instances and their hosts:
==========================
SQL> SELECT * FROM SYS.V_$ACTIVE_INSTANCES;
SQL> SELECT * FROM SYS.V_$THREAD;
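Another handy check (a standard RAC dictionary view, not part of the original list) shows every running instance and its host across the cluster:
SQL> SELECT inst_id, instance_name, host_name, status FROM gv$instance;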