Thursday, July 28, 2011

View Hardware & OS Statistics Using V$OSSTAT View

While I was checking the audit logs for a database, I found a log tracking EM activities. Taking a look inside, I saw that it gathers some information on HW & OS using a view called v$osstat. After examining that view, I decided it would be helpful to include it in the database health check script.

In this view you can see the main server hardware info plus Operating System statistics (the time values are cumulative, in hundredths of a second):

SQL> select stat_name,value from v$osstat;

STAT_NAME VALUE
----------- ------
NUM_CPUS 8 ->Available CPUs in the machine.
IDLE_TIME 1708663701 ->Idle time for ALL CPUs combined [Since the instance startup].
BUSY_TIME 11490541 ->Busy time for the CPUs combined [Since the instance startup].
USER_TIME 10565287 ->Time spent in User activities [Since the instance startup].
SYS_TIME 733707 ->Time spent in kernel activities [Since the instance startup].
IOWAIT_TIME 8517935 ->Time spent waiting for I/O to complete [Since the instance startup].
NICE_TIME 20455 ->Time spent in "low priority user" activities [Since the instance startup].
RSRC_MGR_CPU_WAIT_TIME 0 ->Time sessions spent waiting for CPU because Resource Manager throttled them.
LOAD .009765625 ->Number of processes waiting in the run queue (similar to the load average reported by the Linux uptime command) [updated continuously, not cumulative].
NUM_CPU_CORES 8 ->Number of CPU cores.
NUM_CPU_SOCKETS 2 ->Number of Physical CPUs.
PHYSICAL_MEMORY_BYTES 8372563968 ->Physical memory size -in Bytes-.
TCP_SEND_SIZE_MIN 4096 ->The rest are TCP/network (NIC) buffer size settings...
TCP_SEND_SIZE_DEFAULT 16384
TCP_SEND_SIZE_MAX 4194304
TCP_RECEIVE_SIZE_MIN 4096
TCP_RECEIVE_SIZE_DEFAULT 87380
TCP_RECEIVE_SIZE_MAX 4194304
GLOBAL_SEND_SIZE_MAX 1048576
GLOBAL_RECEIVE_SIZE_MAX 4194304
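
Since the time values are cumulative since instance startup, here is a minimal sketch a health check script could use to compute an average CPU busy percentage from this view (a rough cumulative figure, not an instantaneous reading):

SQL> SELECT ROUND(busy.value / (busy.value + idle.value) * 100, 2) AS cpu_busy_pct
       FROM (SELECT value FROM v$osstat WHERE stat_name = 'BUSY_TIME') busy,
            (SELECT value FROM v$osstat WHERE stat_name = 'IDLE_TIME') idle;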


Monday, July 25, 2011

Fixing Migrated/Chained Rows

Definitions:

Row Chaining: A row is too large to fit into a single database block. For example, if you use 8 KB block size for your database, and you need to insert a row of 16 KB into it, Oracle will use 2 blocks and store the row in pieces.
Row Chaining is often unavoidable with tables that have LONG, large CHAR, or VARCHAR2 columns.

Row Migration: A row migrates when an update causes it to no longer fit in its block; the row moves to a new block, leaving a link (forwarding address) in its original block that points to the new block location.

The Harm of Migrated/Chained Rows:
Migrated/chained rows can degrade database performance: fetching them requires extra I/O to follow the row pieces, which hurts index-based reads and full table scans.
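
As a quick sketch for spotting affected tables, the DBA_TABLES.CHAIN_CNT column reports the number of chained/migrated rows per table; note that it is populated by the ANALYZE command (not DBMS_STATS), so it may be zero or stale if the table has not been analyzed. The schema SCOTT is just an example:

SQL> SELECT owner, table_name, chain_cnt, num_rows
       FROM dba_tables
      WHERE owner = 'SCOTT'
        AND chain_cnt > 0
      ORDER BY chain_cnt DESC;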

How to fix:

You can use one of the following solutions; the first takes 2 steps and the other takes 10 steps :-)
Which one to choose depends on the number of chained rows and the downtime you can take for applying the fix:

Solution #1: 2 steps
========


Note: This solution takes a longer downtime on the table. (preferred when there are many chained rows)

1- Rebuild the table having chained rows:
SQL> alter table TEST move;
         -->This will mark all indexes associated with the table TEST as UNUSABLE.

Note: You can increase PCTFREE within the move command to leave more free space in each block for future updates, which reduces the chance of row migration:

      SQL> alter table TEST move pctfree 30;
               -->Note: PCTFREE default value is 10

2- Rebuild ALL Indexes associated with the table:
SQL> SELECT 'alter index '||OWNER||'.'||INDEX_NAME||' rebuild online;' FROM DBA_INDEXES WHERE TABLE_NAME='TEST';
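
If you prefer not to copy and run the generated statements by hand, the following PL/SQL sketch rebuilds them in a loop (it assumes you have the privileges to alter these indexes; TEST is the example table name used above):

SQL> BEGIN
       FOR i IN (SELECT owner, index_name
                   FROM dba_indexes
                  WHERE table_name = 'TEST'
                    AND index_type <> 'LOB')  -- LOB indexes cannot be rebuilt directly
       LOOP
         EXECUTE IMMEDIATE 'alter index "' || i.owner || '"."' ||
                           i.index_name || '" rebuild online';
       END LOOP;
     END;
     /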

Done.


Solution #2: 10 Steps
========

1- Create a table to hold the chained-rows output by running this script:

$ORACLE_HOME/rdbms/admin/utlchain.sql
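
For reference, utlchain.sql creates an output table named CHAINED_ROWS; its definition is roughly the following (check the script shipped with your release for the exact column sizes):

CREATE TABLE CHAINED_ROWS (
  owner_name         VARCHAR2(30),
  table_name         VARCHAR2(30),
  cluster_name       VARCHAR2(30),
  partition_name     VARCHAR2(30),
  subpartition_name  VARCHAR2(30),
  head_rowid         ROWID,
  analyze_timestamp  DATE
);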

2- Collect information about migrated and chained rows:

SQL> ANALYZE TABLE TEST LIST CHAINED ROWS;
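
Note: ANALYZE writes its results into a table named CHAINED_ROWS by default. If you created the output table under a different name, you can point ANALYZE at it with the INTO clause (MY_CHAINED_ROWS is just an example name):

SQL> ANALYZE TABLE TEST LIST CHAINED ROWS INTO MY_CHAINED_ROWS;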

3- Query the output table:

SELECT * FROM CHAINED_ROWS WHERE TABLE_NAME = 'TEST';

OWNER_NAME TABLE_NAME CLUST... HEAD_ROWID TIMESTAMP
---------- ---------- -----... ------------------ ---------
SCOTT TEST ... AAAVdkAATAAAfpfAA1 25-JUL-11
SCOTT TEST ... AAAVdkAATAAAfpfAA9 25-JUL-11

If there are many migrated/chained rows, you can proceed with the following steps:

4- Create an intermediate table holding the chained rows, with the same structure as the original table:

SQL> CREATE TABLE int_TEST AS SELECT * FROM TEST WHERE ROWID IN
(SELECT HEAD_ROWID FROM CHAINED_ROWS WHERE TABLE_NAME = 'TEST');

Note: "Create Table As" statement will fail if the original table includes LONG datatype

5- Delete the migrated/chained rows from the original table:

SQL> DELETE FROM TEST
WHERE ROWID IN (SELECT HEAD_ROWID FROM CHAINED_ROWS WHERE TABLE_NAME = 'TEST');

If this returns ORA-02292: integrity constraint (xxx) violated, disable that referential constraint and try again.
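
Here is a sketch for locating the referential (foreign key) constraints you may need to disable; it generates DISABLE statements for all foreign keys referencing the table's primary/unique keys (SCOTT and TEST are example values). Remember to re-enable them after step 6:

SQL> SELECT 'alter table '||owner||'.'||table_name||
            ' disable constraint '||constraint_name||';'
       FROM dba_constraints
      WHERE constraint_type = 'R'
        AND r_owner = 'SCOTT'
        AND r_constraint_name IN (SELECT constraint_name
                                    FROM dba_constraints
                                   WHERE owner = 'SCOTT'
                                     AND table_name = 'TEST'
                                     AND constraint_type IN ('P','U'));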

6-Insert the rows back from the intermediate table into the original table:

SQL> INSERT INTO TEST SELECT * FROM int_TEST;

7-Drop the intermediate table:

SQL> DROP TABLE int_TEST;

8-Delete the information collected in step 2 from the output table:

SQL> DELETE FROM CHAINED_ROWS WHERE TABLE_NAME = 'TEST';

9- Use the ANALYZE statement again:

SQL> ANALYZE TABLE TEST LIST CHAINED ROWS;

10- Check for chained rows: -Should be 0-

SQL> SELECT * FROM CHAINED_ROWS WHERE TABLE_NAME = 'TEST';

no rows selected

Done.

Which is better for gathering statistics: DBMS_STATS or the ANALYZE command?

Use DBMS_STATS for gathering statistics related to the optimizer, execution plans, and so forth.
Oracle recommends DBMS_STATS for Cost-Based Optimizer (CBO) statistics; the ANALYZE command is deprecated for that purpose.

Use the ANALYZE command to collect statistics NOT related to the Cost-Based Optimizer (CBO).
Such statistics can be used by the VALIDATE or LIST CHAINED ROWS clauses, and can provide accurate data such as empty blocks, average space, freelist blocks, and so forth.
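
A minimal example of gathering optimizer statistics with DBMS_STATS (SCOTT and TEST are placeholder names; cascade => TRUE also gathers statistics for the table's indexes):

SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCOTT', tabname => 'TEST', cascade => TRUE);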

For more information please check this link:
http://download.oracle.com/docs/cd/E11882_01/server.112/e17120/general002.htm#ADMIN11525
Database Administrator's Guide(11.2)

Thursday, July 21, 2011

Sharing the Same Read Only Tablespace between Multiple Databases

The Theory:
Separate databases can share the same read only datafiles on disk. The datafiles must be accessible by all databases.

The Method:
This will be implemented using transportable tablespaces.

Note: The Shared tablespace must remain read-only in all databases mounting it.

The Way:
-The shared read-only tablespace already belongs to a database (ORCL).

-Make the tablespace read only:
SQL> ALTER TABLESPACE USERS READ ONLY;
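
You can verify the new status before exporting (the STATUS column should show READ ONLY):

SQL> SELECT tablespace_name, status FROM dba_tablespaces WHERE tablespace_name = 'USERS';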

-Export tablespace metadata using expdp utility:

export ORACLE_SID=ORCL
expdp system dumpfile=expdat.dmp directory=data_pump_dir
transport_tablespaces=USERS logfile=ts_export.log

-Import tablespace metadata to the destination database using impdp utility:

export ORACLE_SID=FOO
impdp system dumpfile=expdat.dmp directory=data_pump_dir
transport_datafiles=/ORA11G/ORADATA/ORCL/USERS01.DBF logfile=ts_import.log

*I assume that the directory "data_pump_dir" has the same path on both databases (ORCL & FOO).
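
A quick check after the import (run it on FOO) to confirm the tablespace was plugged in as read only and to see its datafiles:

SQL> SELECT t.tablespace_name, t.status, f.file_name
       FROM dba_tablespaces t, dba_data_files f
      WHERE t.tablespace_name = f.tablespace_name
        AND t.tablespace_name = 'USERS';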

Note: Dropping the read-only tablespace from one of the databases will not modify its datafiles, so the drop operation does not corrupt the tablespace.
Just be sure NOT to use the -including contents and datafiles- clause when dropping the shared tablespace from one database, or the tablespace datafiles will be deleted from the filesystem.
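
If you later need to remove the shared tablespace from one database only, here is a minimal example that keeps the datafiles on disk for the other databases:

SQL> DROP TABLESPACE USERS INCLUDING CONTENTS;
         -->Without the AND DATAFILES clause, the datafiles remain on the filesystem.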


Warning:
Switching the read-only shared tablespace to read-write mode will lead to database corruption if it's mounted in more than one database.