I intended to post about 19c new features, but most DBAs still run their production on 11g, so it will be more informative to cover the 12c [12.1 & 12.2] new features first, with implementation steps where possible, before jumping to the 18c & 19c features, which I'll cover in the next posts; just to keep things in a good order.
Architectural New Features:
======================
12c introduced a new database architecture called Multitenant (similar in concept to the SQL Server architecture), where one instance can serve more than one database on the same server. The old architecture, now called a Non-Container database, is still supported as well.
Container [multitenant] DB: a DB that holds the Root (CDB$ROOT) + Seed (PDB$SEED) + one or more pluggable DBs.
> Only the CDB has an SPFILE; only a few parameters can be changed at the PDB level.
> Every container DB will have SYSTEM, SYSAUX, UNDO, and temporary tablespaces, plus REDO logs and controlfiles.
> Every pluggable DB will have its own SYSTEM & SYSAUX tablespaces, an [optional] temporary tablespace, and its default tablespaces, and it shares the UNDO and TEMPORARY tablespaces of the container DB with the other pluggable DBs.
> You can log in directly from the OS to the container DB using OS authentication; to connect directly to a pluggable DB from the OS you have to use EZConnect (or a TNS alias).
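For illustration, assuming a PDB named PDB1 whose service is registered on a listener at port 1521 (the host name here is made up), you can connect via EZConnect, or switch containers from inside the CDB:
$ sqlplus system@//dbhost:1521/pdb1
SQL> ALTER SESSION SET CONTAINER=PDB1;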
> Starting up a container DB does not automatically open its pluggable DBs; you have to do this manually:
SQL> ALTER PLUGGABLE DATABASE PDB1 OPEN;
SQL> ALTER PLUGGABLE DATABASE ALL OPEN;
SQL> ALTER PLUGGABLE DATABASE ALL EXCEPT <ONE_PDB> OPEN/CLOSE;
> Shutting down the container DB automatically closes all of its pluggable DBs.
> You can close one or more pluggable DBs without affecting the other pluggable DBs in the container:
SQL> ALTER PLUGGABLE DATABASE PDB1 CLOSE IMMEDIATE;
> You can restore one or more pluggable DBs with full or point-in-time recovery, or open them in restricted/read-only mode, without affecting the other pluggable DBs in the container.
> A user can be a COMMON user, which can access all databases in the container; its name must start with C##.
> A user can be a LOCAL user, created in a specific pluggable DB, which can access ONLY the pluggable DB it was created in.
> Pluggable DB can be cloned on the same container DB using one command:
SQL> CREATE PLUGGABLE DATABASE PDBTEST FROM PDB1 FILE_NAME_CONVERT=('/oradata/orcl/pdb1.dbf','/oradata/orcl/pdbtest.dbf');
> A pluggable DB can be cloned to another container DB on the same server, or remotely over a DB link, using a single command:
SQL> CREATE PLUGGABLE DATABASE PDBTEST FROM PDB1@PDB1DBLINK FILE_NAME_CONVERT=('/oradata/orcl/pdb1.dbf','/oradata/neworcl/pdbtest.dbf');
> Pluggable DB can be unplugged and plugged [Moved] to a different container DB.
The aim of the container DB is to consolidate multiple DBs into one for ease of administration and better use of resources. But what is the difference between using a container DB and merging all databases into one DB?
> Container DB can segregate duties between DBAs. (Can be done on a Non-Container DB using Database Vault.)
> Easy cloning of a pluggable DB within the same container DB / same server, or to a remote server. (Can be done on a non-container DB using export/import or transportable tablespaces.)
> CDB_XXX views show information about CDB$ROOT and ALL the PDBs, while DBA_XXX views show information for the current container only (PDB or CDB).
I'm still searching for more pros for using container DB over older releases style...
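To illustrate the CDB_XXX vs DBA_XXX scope difference, a quick sketch (run while connected to CDB$ROOT; the SYSTEM user exists in every container):
SQL> SELECT con_id, username FROM cdb_users WHERE username='SYSTEM';  -- one row per container
SQL> SELECT username FROM dba_users WHERE username='SYSTEM';          -- current container only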
Performance New Features:
=======================
[12.1] Adaptive Query Optimization: enabled by default through the parameter "optimizer_adaptive_features".
If the optimizer chooses a bad execution plan and, during execution, figures out a better one, it will switch to the better plan on the fly.
Starting from 12.2 this parameter has been removed and replaced with two parameters:
OPTIMIZER_ADAPTIVE_PLANS: Default (TRUE). Enables/disables adaptive plans.
OPTIMIZER_ADAPTIVE_STATISTICS: Default (FALSE). Enables/disables SQL plan directives, statistics feedback.
Because adaptive statistics adds time to the parsing phase, it is recommended to keep it FALSE for OLTP databases.
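For example, setting the 12.2 default values explicitly (a sketch; SCOPE=BOTH assumed):
SQL> ALTER SYSTEM SET optimizer_adaptive_plans=TRUE SCOPE=BOTH;
SQL> ALTER SYSTEM SET optimizer_adaptive_statistics=FALSE SCOPE=BOTH;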
[12.2] Real-Time SQL Monitoring:
SQL> VARIABLE my_rept CLOB
BEGIN
:my_rept :=DBMS_SQLTUNE.REPORT_SQL_MONITOR();
END;
/
PRINT :my_rept
Memory New Features:
===================
In-memory Full database caching: [12.1.0.2]
This feature buffers objects into the buffer cache as they get accessed. Starting from 12.1.0.2, if Oracle determines that the buffer cache is big enough to hold the entire database, it caches all blocks automatically without admin intervention; this is called the default caching mode.
In older releases caching happened too, but not for all data: e.g. when a user queried a large table, Oracle might not cache the whole table, as that could age useful data out of the buffer cache.
Force entire DB caching in the buffer cache:
Implementation: [Downtime required]
SQL> shutdown immediate;
startup mount;
--Enable Force caching
ALTER DATABASE FORCE FULL DATABASE CACHING;
--Disable Force caching:
--ALTER DATABASE NO FORCE FULL DATABASE CACHING;
alter database open;
SELECT force_full_db_caching FROM v$database;
Note: if the buffer cache turns out to be smaller than the database blocks being cached, the feature is automatically turned off, with this message in the alert log:
Buffer Cache Force Full DB Caching mode on when DB does not fit in the cache. Turning off Force Full DB Caching advisable
[12.1] The PGA_AGGREGATE_LIMIT parameter was introduced to cap the total PGA size; if the limit is reached, the sessions consuming the most PGA get terminated. [Risky option!]
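A sketch of setting the limit (the 4G value is an arbitrary example; size it to your workload):
SQL> ALTER SYSTEM SET pga_aggregate_limit=4G SCOPE=BOTH;
SQL> SHOW PARAMETER pga_aggregate_limit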
TABLES New Features:
==================
Online table REBUILD (MOVE): [12.2]
An ONLINE table move is now available; it does NOT interrupt DMLs against the table and keeps the indexes USABLE:
SQL> ALTER TABLE XX MOVE ONLINE;
-- DMLS will work fine and not be interrupted.
-- INDEXES will be USABLE during and after the MOVE operation.
Note [I could not fully verify the following]: "UPDATE INDEXES" alone (without ONLINE) keeps the indexes USABLE after the MOVE, but NO DMLs are allowed during the MOVE:
SQL> ALTER TABLE XX MOVE UPDATE INDEXES;
-- DMLs will NOT work during the move.
-- INDEXES will be UNUSABLE during the operation but USABLE after the MOVE completes.
-- No index REBUILD is required after the MOVE operation.
Invisible Columns: [12.1]
You can make individual table columns invisible. Generic access such as SELECT * or DESC does not show invisible columns, but if you explicitly name the invisible column in the query its data will be shown.
SQL> ALTER TABLE mytable MODIFY (b INVISIBLE);
SQL> set colinvisible on
desc mytable
SQL> ALTER TABLE mytable MODIFY (b VISIBLE);
Invisible Indexes: [12c also allows multiple indexes on the same column, each of a different type]
SQL> create bitmap index indx1 on t(empno) invisible;
[12.2] Advanced index compression [High level] provides significant space savings while also improving performance.
SQLPLUS New Features:
====================
- Selecting the top n rows is now available: [similar to SQL Server]
SQL> SELECT sal FROM t ORDER BY sal DESC FETCH FIRST 5 ROWS ONLY;
- Columns defined with DEFAULT ON NULL will store the default value instead of NULL when the user inserts NULL:
SQL> create table tt(empno number,ename varchar2(10) default on null '10');
SQL> insert into tt(empno,ename)values(101,null);
SQL> select * from tt;
EMPNO ENAME
---------- ----------
101 10
Enable Extended Data Types:
VARCHAR2 – 32767 bytes. [Was 4000]
NVARCHAR2 – 32767 bytes. [Was 4000]
RAW – 32767 bytes. [Was 2000]
[Not enabled by default and requires downtime to get it enabled using below steps:]
SQL> PURGE DBA_RECYCLEBIN;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP UPGRADE;
SQL> ALTER SYSTEM SET max_string_size=extended;
SQL> @?/rdbms/admin/utl32k.sql
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;
COMPRESSION New Features:
=========================
You can compress inactive rows that have not been accessed or modified in the last n days (this uses ADO/ILM policies and relies on Heat Map being enabled):
Ex: compress inactive rows not accessed/modified in the last 31 days:
Tablespace Level: SQL> ALTER TABLESPACE tbs1 DEFAULT ILM ADD POLICY ROW STORE COMPRESS ADVANCED SEGMENT AFTER 31 DAYS OF NO ACCESS;
Table Group Level: SQL> ALTER TABLE tab1 ILM ADD POLICY ROW STORE COMPRESS ADVANCED GROUP AFTER 31 DAYS OF NO ACCESS;
Segment Level: SQL> ALTER TABLE tab1 ILM ADD POLICY ROW STORE COMPRESS ADVANCED ROW AFTER 31 DAYS OF NO MODIFICATION;
RMAN New Features:
==================
RMAN can now run SQL and SQL*Plus-style commands directly, without the sql '<sql statement...>' wrapper.
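For example, the following now run directly at the RMAN prompt (the table name is hypothetical):
RMAN> SELECT sysdate FROM dual;
RMAN> DESCRIBE emp;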
Recovery of a dropped/purged table:
Before 12c this required a full database flashback, a tablespace restore, or a point-in-time recovery of the DB in a different location; now RMAN can recover a single table:
SQL> Drop table emp purge;
RMAN> recover table emp until scn XXXX auxiliary destination '/tmp/oracle';
Duplicate Enhancements:
[12.1] Active duplication now supports SECTION SIZE for faster copying and COMPRESSION for less network overhead:
RMAN> DUPLICATE TARGET DATABASE TO orcl2 FROM ACTIVE DATABASE
[USING BACKUPSET]
[SECTION SIZE …]
[USING COMPRESSED BACKUPSET] …;
SECTION SIZE: can improve speed; it breaks each datafile into chunks of the specified size to be copied in parallel.
USING COMPRESSED BACKUPSET: sends the backup over the network in compressed format.
NOOPEN: skips the final step of opening the DB with RESETLOGS and leaves it in MOUNT mode.
RECOVER DATABASE UNTIL AVAILABLE REDO;: automatically recovers the database up to the last available archived/redo log.
CloneDB New Features:
===================
CloneDB enables you to clone a database in a non-multitenant environment multiple times without copying the data files into several different locations. Instead, CloneDB uses copy-on-write technology, so that only the blocks that are modified require additional storage on disk.
Requirements:
- The storage holding the Production DB backup should be NFS.
- A full RMAN image-copy backup should be stored on the NFS location; the CloneDB instances read from it and write only the changed blocks locally.
- Create a PFILE for the Production DB:
SQL> create pfile='/backup/initorcl.ora' from spfile;
- From OS:
# export MASTER_COPY_DIR=<The Production RMAN backup full path>
# export CLONE_FILE_CREATE_DEST=<The location where CloneDB datafiles,logfiles, controlfiles will be stored>
# export CLONEDB_NAME=<CloneDB database Name>
- Run the clonedb.pl Perl script, passing the production PFILE:
# $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/install/clonedb.pl prod_db_pfile
# $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/install/clonedb.pl /backup/initorcl.ora
- Modify the following generated files [if necessary]:
PFILE for the CloneDB (SGA, PGA, CLONEDB=TRUE)
crtdb.sql
dbren.sql
- Connect to the newly created instance AS SYSDBA and run the generated scripts in order:
SQL> @crtdb.sql
SQL> @dbren.sql
RAC New Features:
=================
[12.1] Online Resource Attribute Modification. e.g. srvctl modify service -db orcl -s app ...
[12.2] Application Continuity: read-write transactions (DMLs) can fail over to a surviving node if the service and client prerequisites are met: https://www.oracle.com/technetwork/database/options/clustering/applicationcontinuity/adb-continuousavailability-5169724.pdf
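A sketch of the service-side attributes typically set for Application Continuity (the service name and values here are examples; see the paper above for the full prerequisites):
# srvctl modify service -db orcl -service app -failovertype TRANSACTION -commit_outcome TRUE -retention 86400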
Dataguard New Features:
=====================
[12.1] Real-Time cascade: a cascaded standby used to lag one redo log behind the primary; starting from 12.1 a cascaded standby can be synced in real time.
[12.2] Recovery of NOLOGGING Blocks on Standby is possible: https://www.doag.org/formes/pubfiles/11053935/2_DataGuard_122.pdf
Before 12.2, if a NOLOGGING operation happened on the primary it was not replicated to the standby, and to fix the affected blocks you had to restore the complete datafiles impacted by the NOLOGGING operation. Starting from 12.2, Oracle provides commands to scan the primary for nonlogged blocks and recover them on the standby:
RMAN> VALIDATE ... NONLOGGED BLOCK;
RMAN> RECOVER ... NONLOGGED BLOCK;
ASM New Features:
================
[12.1] ASM disk scrubbing checks for, and automatically repairs, logical data corruption in NORMAL and HIGH redundancy disk groups with minimal I/O overhead.
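Scrubbing can also be triggered manually; a sketch, assuming a disk group named DATA:
SQL> ALTER DISKGROUP data SCRUB POWER LOW;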
Security New Features:
===================
[12.1] UNLIMITED TABLESPACE privilege removed from RESOURCE role.
[12.1] SYSBACKUP role introduced for backup admins.
[12.1] The SELECT ANY DICTIONARY privilege no longer permits access to the sensitive dictionary tables DEFAULT_PWD$, ENC$, LINK$, USER$, USER_HISTORY$, and XS$VERIFIERS.
[12.2] INACTIVE_ACCOUNT_TIME is a new profile parameter that locks an account once it has been inactive for the specified number of days.
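For example, to lock accounts under the DEFAULT profile after 90 days of inactivity (the value is an example):
SQL> ALTER PROFILE default LIMIT INACTIVE_ACCOUNT_TIME 90;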
[12.2] TDE tablespace encryption now covers SYSTEM, SYSAUX, and UNDO, i.e. the whole DB.
[12.2] TDE can encrypt, decrypt, and rekey existing tablespaces online, without application downtime:
SQL> alter tablespace users encryption encrypt;
[12.1] Data Redaction:
- It masks (redacts) sensitive data returned by application queries for specific users; users with the EXEMPT REDACTION POLICY privilege can still see the real data.
- Oracle Data Redaction doesn't change the data on disk; the sensitive data is redacted on the fly before it is returned to the application.
EX: redact all data in the SALARY column of HR.EMPLOYEES.
EX1: the SALARY column will show the value 0:
BEGIN
DBMS_REDACT.ADD_POLICY
(object_schema => 'HR',
object_name => 'EMPLOYEES',
policy_name => 'redact_EMP_SAL_0',
column_name => 'SALARY',
function_type => DBMS_REDACT.FULL,
expression => '1=1');
END;
/
EX2: SALARY column will show RANDOM values:
BEGIN
DBMS_REDACT.ADD_POLICY(
object_schema => 'HR',
object_name => 'employees',
column_name => 'salary',
policy_name => 'redact_EMP_SAL_random',
function_type => DBMS_REDACT.RANDOM,
expression => 'SYS_CONTEXT(''USERENV'',''SESSION_USER'') = ''HR''');
END;
/
Auditing New Features:
===================
[12.1] Auditing is enabled by default. 😞
[12.1] DBA_USERS records the last_login time for each user: SQL> select username,last_login from dba_users where username like 'P_FZ%';
[12.2] Role auditing. For example, auditing for new users with the DBA role would begin automatically when they are granted the role.
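A minimal sketch of a role-based unified audit policy (the policy name is made up):
SQL> CREATE AUDIT POLICY dba_role_audit_pol ROLES DBA;
SQL> AUDIT POLICY dba_role_audit_pol;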
Privilege Analysis: [Extra license]
If the Database Vault option is licensed, you can analyze whether granted privileges are actually being used, and which privileges a specific user/module uses.
--Create a database privilege analysis policy
BEGIN
DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(
name => 'all_priv_analysis_policy',
description => 'database-wide policy to analyze all privileges',
type => DBMS_PRIVILEGE_CAPTURE.G_DATABASE);
END;
/
--Create a privilege analysis policy to analyze privileges from the role e.g. PUBLIC
BEGIN
DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(
name => 'pub_analysis_pol',
description => 'Policy to record privilege use by PUBLIC',
type => DBMS_PRIVILEGE_CAPTURE.G_ROLE,
roles => role_name_list('PUBLIC'));
END;
/
-- Create a policy to analyze privileges from the application module, "Account Payable"
BEGIN
DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(
name => 'acc_pay_analysis_pol',
type => DBMS_PRIVILEGE_CAPTURE.G_CONTEXT,
condition => 'SYS_CONTEXT(''USERENV'', ''MODULE'') = ''Account Payable''');
END;
/
-- Create a policy that records privileges for session user APPS when running module "Account Payable"
BEGIN
DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(
name => 'acc_pay_analysis_pol',
type => DBMS_PRIVILEGE_CAPTURE.G_CONTEXT,
condition => 'SYS_CONTEXT(''USERENV'', ''MODULE'') = ''Account Payable'' AND
SYS_CONTEXT(''USERENV'', ''SESSION_USER'') = ''APPS''');
END;
/
--Enable the Capture:
SQL> EXEC DBMS_PRIVILEGE_CAPTURE.ENABLE_CAPTURE('MY_CREATED_CAPTURE');
-- Generate the report:
SQL> EXEC DBMS_PRIVILEGE_CAPTURE.GENERATE_RESULT('MY_CREATED_CAPTURE');
Note: you can consult the views DBA_USED_SYSPRIVS and DBA_USED_OBJPRIVS for USED privileges, and DBA_UNUSED_PRIVS for UNUSED privileges.
-- Disable the Capture:
SQL> EXEC DBMS_PRIVILEGE_CAPTURE.DISABLE_CAPTURE('MY_CREATED_CAPTURE');
-- Drop the Capture:
SQL> EXEC DBMS_PRIVILEGE_CAPTURE.DROP_CAPTURE('MY_CREATED_CAPTURE');
Auditing a Data Pump expdp operation:
Drawback: it will not show which tables were actually exported. 😕
-- Create Audit Policy:
SQL> CREATE AUDIT POLICY audit_datapump ACTIONS COMPONENT=DATAPUMP ALL;
SQL> AUDIT POLICY audit_datapump BY sys,system;
# expdp \'/ as sysdba \' tables=p_fz.test directory=test dumpfile=test.dmp REUSE_DUMPFILES=Y
SQL> SELECT event_timestamp, dp_text_parameters1, dp_boolean_parameters1 FROM unified_audit_trail WHERE audit_type = 'Datapump';
-- Drop Audit Policy:
SQL> select ENTITY_NAME from AUDIT_UNIFIED_ENABLED_POLICIES where POLICY_NAME=upper('audit_datapump');
SQL> NOAUDIT policy audit_datapump by sys,system;
SQL> drop AUDIT POLICY audit_datapump;
Export/Import New Features:
========================
New parameters introduced:
include=NETWORK_ACL: starting from 12c you can export ACLs using this parameter.
TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y: disables logging while importing data. [It sounds cool, but I still see no difference when using it; redo log switches keep happening at the same pace. The situation is the same in 19c as well; sounds like a bug! 😒]
logtime=all: logs the timestamp of each export/import step.
COMPRESSION_ALGORITHM={BASIC | LOW | MEDIUM | HIGH}: the higher, the slower, but with a better compression ratio; can be changed during the operation. [Advanced Compression license required]
MISC New Features:
================
- When ENABLE_DDL_LOGGING=TRUE, all DDL operations are written to a dedicated DDL log instead of the alert log.
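For example:
SQL> ALTER SYSTEM SET enable_ddl_logging=TRUE SCOPE=BOTH;
The DDL log is then written under the ADR home (the log/ddl directory, as I understand it), rather than cluttering the alert log.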
- Network compression can be used to reduce the data volume transferred between the DB server and clients: [Advanced Compression license]
SQLNET.COMPRESSION = (off | on): ON activates compression. [Default: off]
SQLNET.COMPRESSION_LEVELS = (low | high): determines the compression level; the higher, the more CPU is used. [Default: low]
SQLNET.COMPRESSION_THRESHOLD: the minimum data size, in bytes, needed to trigger compression. [Default: 1024 bytes]
- [12.1] Datafiles (including the SYSTEM tablespace) can be moved ONLINE while being accessed, and Oracle automatically moves the datafile at the OS level to the new location as well.
This operation does not get replicated on the standby DB.
SQL> ALTER DATABASE MOVE DATAFILE '/u01/oracle/rbdb1/user1.dbf' TO '+DATA1/orcl/datafile/user1.dbf' <keep> <reuse>;
keep -> Will keep the original datafile in its location and will not delete it by the end of the operation.
reuse -> In case the datafile already exists in the target location it will overwrite it. [Not recommended]
Note: if you are moving OFFLINE datafiles, you have to copy them at the OS level first, before issuing the ALTER DATABASE MOVE command.
- [12.1] You can create multiple indexes on the same set of columns to perform application migrations without dropping an existing index.
e.g. A B-tree index and a bitmap index can be created on the same set of columns.
Limitation: only one of these indexes can be VISIBLE at a time.
The following operations can run ONLINE without locking the table: [the ONLINE keyword must be specified to get this behavior]
DROP INDEX e.g. DROP INDEX schema.index ONLINE;
DROP CONSTRAINT e.g. ALTER TABLE emp DROP CONSTRAINT emp_email_uk ONLINE;
ALTER INDEX UNUSABLE e.g. ALTER INDEX emp_ix UNUSABLE ONLINE;
SET COLUMN UNUSED: e.g. ALTER TABLE emp SET UNUSED (ename) ONLINE;
ALTER TABLE MOVE e.g. ALTER TABLE emp MOVE TABLESPACE tbs2 ONLINE UPDATE INDEXES;
ALTER TABLE MODIFY PARTITION e.g. ALTER TABLE sales MODIFY PARTITION BY RANGE (c1) INTERVAL (100)(PARTITION p1 …, PARTITION p2 …) ONLINE UPDATE INDEXES;
ALTER TABLE SPLIT PARTITION e.g. ALTER TABLE orders SPLIT PARTITION ord_p1 AT ('01-JUN-2015') INTO (PARTITION ord_p1_1 COMPRESS, PARTITION ord_p1_2) ONLINE UPDATE INDEXES;
DATAFILE MOVE ONLINE: e.g. ALTER DATABASE MOVE datafile '/disk1/myexample01.dbf' TO '+DATA' REUSE KEEP;
REUSE: overwrite the datafile if it already exists in the destination.
KEEP: keep the original datafile and don't delete it after the completion of the move operation.
Allow rename/move datafiles to the same or different storage (e.g. from non-ASM to ASM).
- [12.1] A partition or subpartition can be moved ONLINE without interrupting DMLs (except DMLs that run in parallel/direct-path mode, as they require an exclusive lock on the table).
- [12.1] Online redefinition of a table can happen in one step.
- [12.1] You can make individual table columns invisible.
- [12.1] Undo data for temporary tables can be placed in the TEMPORARY tablespace instead of the UNDO tablespace. [Not the default; needs to be enabled]
To enable temporary undo, set the parameter TEMP_UNDO_ENABLED=TRUE; this boosts performance because such undo is not written to the redo logs.
It is ENABLED by default on an ADG standby DB to allow DML on temp tables. Once enabled, you can monitor temp undo generation using V$TEMPUNDOSTAT.
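For example, enabling it for the current session only and checking the statistics:
SQL> ALTER SESSION SET temp_undo_enabled=TRUE;
SQL> SELECT * FROM v$tempundostat;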
- [12.2] SYS authentication happens against the password file, not through the DB dictionary.
- Whenever you change the password of a user granted SYSDBA, SYSOPER, SYSBACKUP, SYSKM, or SYSDG, you have to sync it into the password file by revoking the privilege and granting it back, so the user is removed and re-added to the password file with its new password: [all Oracle versions]
SQL> SELECT USERNAME FROM V$PWFILE_USERS WHERE USERNAME != 'SYS' AND SYSDBA='TRUE';
SQL> REVOKE SYSDBA from DBA1;
SQL> GRANT SYSDBA to DBA1;
- [12.2] New view called DBA_DB_LINK_SOURCES to show the information of the source databases that opened database links to the local database.
- [12.2] Objects that cause ORA-00600 and ORA-07445 and can cause the whole instance to crash can be quarantined and isolated to avoid having DB to crash until the database gets restarted. V$QUARANTINE stores information about isolated objects. [Frankly, I cannot understand the purpose of this feature from Oracle documentation; so I won't comment!]
- [12.2] INSTANCE_ABORT_DELAY_TIME initialization parameter specifies a delay time in seconds when an error causes an instance to abort to help the DBA gather information. [I don't see this helpful, as it will increase the outage time and most probably the DBA will not be able to intervene within that time; most probably the DBA will not be staring at the alertlog at that time!]
- [12.2] Password file will be automatically synced on the standby when it gets changed on the primary.
Deprecated Features:
==================
- All SRVCTL commands are now using full-word options instead of the single-letter options which will be unavailable in future releases.
e.g. instead of using # srvctl start database -d orcl
Use: # srvctl start database -database orcl
References:
===========
https://apex.oracle.com/database-features
Oracle Database 12c Release 1 (12.1) Upgrade New Features [ID 1515747.1]
NOTE:1520299.1 - Master Note For Oracle Database 12c Release 1 (12.1) Database/Client Installation/Upgrade/Migration Standalone Environment (Non-RAC)
Oracle Database 12c Release 1 (12.1) Upgrade New Features [ID 1515747.1]
RMAN RECOVER TABLE Feature New to Oracle Database 12c [ID 1521524.1]
[12.1] Online Resource Attribute Modification: resource attributes can be changed online. e.g. srvctl modify service -db orcl -service app ...
[12.2] Application Continuity: read-write transactions (DMLs) can fail over to a surviving node if the service and client prerequisites are met: https://www.oracle.com/technetwork/database/options/clustering/applicationcontinuity/adb-continuousavailability-5169724.pdf
Data Guard New Features:
=====================
[12.1] Real-Time Cascade: a cascaded standby used to lag at least one redo log behind the primary; starting from 12.1, the cascaded standby can be synced in real time.
[12.2] Recovery of NOLOGGING Blocks on Standby is possible: https://www.doag.org/formes/pubfiles/11053935/2_DataGuard_122.pdf
Before 12.2, if a NOLOGGING operation happened on the primary, its changes were not replicated to the standby, and in order to fix the resulting nonlogged blocks you had to restore the complete datafiles impacted by the NOLOGGING operations. Starting from 12.2, Oracle provides RMAN commands to scan the database for nonlogged blocks and recover them on the standby:
RMAN> VALIDATE DATABASE NONLOGGED BLOCK;
RMAN> RECOVER DATABASE NONLOGGED BLOCK;
ASM New Features:
================
[12.1] ASM disk scrubbing checks for and automatically repairs logical data corruption in NORMAL and HIGH redundancy disk groups with minimal I/O impact.
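Scrubbing can also be triggered manually; a minimal sketch (the disk group name DATA is illustrative):
SQL> ALTER DISKGROUP data SCRUB POWER LOW;
SQL> ALTER DISKGROUP data SCRUB REPAIR POWER HIGH;
Without the REPAIR keyword, the scrub only reports the corruption instead of fixing it.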
Security New Features:
===================
[12.1] UNLIMITED TABLESPACE privilege removed from RESOURCE role.
[12.1] SYSBACKUP administrative privilege introduced for backup admins.
[12.1] SELECT ANY DICTIONARY privilege no longer permits access to the sensitive dictionary tables DEFAULT_PWD$, ENC$, LINK$, USER$, USER_HISTORY$, and XS$VERIFIERS.
[12.2] INACTIVE_ACCOUNT_TIME is a new profile parameter introduced to lock an account after a specified number of days of inactivity.
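A minimal sketch of using it (the profile name app_users is illustrative):
SQL> ALTER PROFILE app_users LIMIT INACTIVE_ACCOUNT_TIME 30;
Any account using that profile gets locked after 30 consecutive days without a login.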
[12.2] TDE tablespace encryption now covers SYSTEM, SYSAUX, and UNDO tablespaces, so the whole DB can be encrypted.
[12.2] TDE can encrypt, decrypt, and rekey existing tablespaces online.
[12.2] TDE tablespace encryption can happen online without application downtime:
SQL> ALTER TABLESPACE users ENCRYPTION ONLINE ENCRYPT;
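Online decryption and rekey follow the same pattern; a hedged sketch (tablespace name and algorithm are illustrative, and this assumes OMF — otherwise a FILE_NAME_CONVERT clause is needed for the online conversion):
SQL> ALTER TABLESPACE users ENCRYPTION ONLINE DECRYPT;
SQL> ALTER TABLESPACE users ENCRYPTION ONLINE USING 'AES256' REKEY;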
[12.1] Data Redaction:
- It masks (redacts) sensitive data returned by application queries for specific users; users granted the EXEMPT REDACTION POLICY privilege still see the real data.
- Oracle Data Redaction doesn't change the data on disk; the sensitive data is redacted on the fly before it gets returned to the application.
EX: Redact all data in SALARY column in HR.EMPLOYEES
EX1: SALARY column will show 0 value:
BEGIN
DBMS_REDACT.ADD_POLICY
(object_schema => 'HR',
object_name => 'EMPLOYEES',
policy_name => 'redact_EMP_SAL_0',
column_name => 'SALARY',
function_type => DBMS_REDACT.FULL,
expression => '1=1');
END;
/
EX2: SALARY column will show RANDOM values:
BEGIN
DBMS_REDACT.ADD_POLICY(
object_schema => 'HR',
object_name => 'employees',
column_name => 'salary',
policy_name => 'redact_EMP_SAL_random',
function_type => DBMS_REDACT.RANDOM,
expression => 'SYS_CONTEXT(''USERENV'',''SESSION_USER'') = ''HR''');
END;
/
Auditing New Features:
===================
[12.1] Auditing is enabled by default. 😞
[12.1] DBA_USERS records the last_login time for each user: SQL> select username,last_login from dba_users where username like 'P_FZ%';
[12.2] Role auditing. For example, auditing of new users with the DBA role begins automatically when they are granted the role.
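A hedged sketch of enabling role-based auditing for the DBA role (the policy name is illustrative):
SQL> CREATE AUDIT POLICY dba_role_audit_pol ROLES DBA;
SQL> AUDIT POLICY dba_role_audit_pol;
Every user granted DBA, now or later, will have the use of the role's privileges audited under this policy.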
Privilege Analysis: [Extra License]
If the Database Vault option is enabled, you can capture whether the granted privileges are actually being used, and which privileges are used by a specific user/module.
--Create a database privilege analysis policy
BEGIN
DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(
name => 'all_priv_analysis_policy',
description => 'database-wide policy to analyze all privileges',
type => DBMS_PRIVILEGE_CAPTURE.G_DATABASE);
END;
/
--Create a privilege analysis policy to analyze privileges from the role e.g. PUBLIC
BEGIN
DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(
name => 'pub_analysis_pol',
description => 'Policy to record privilege use by PUBLIC',
type => DBMS_PRIVILEGE_CAPTURE.G_ROLE,
roles => role_name_list('PUBLIC'));
END;
/
-- Create a policy to analyze privileges from the application module, "Account Payable"
BEGIN
DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(
name => 'acc_pay_analysis_pol',
type => DBMS_PRIVILEGE_CAPTURE.G_CONTEXT,
condition => 'SYS_CONTEXT(''USERENV'', ''MODULE'') = ''Account Payable''');
END;
/
-- Create a policy that records privileges for session user APPS when running module "Account Payable"
BEGIN
DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(
name => 'acc_pay_apps_analysis_pol',
type => DBMS_PRIVILEGE_CAPTURE.G_CONTEXT,
condition => 'SYS_CONTEXT(''USERENV'', ''MODULE'') = ''Account Payable'' AND
SYS_CONTEXT(''USERENV'', ''SESSION_USER'') = ''APPS''');
END;
/
--Enable the Capture:
SQL> EXEC DBMS_PRIVILEGE_CAPTURE.ENABLE_CAPTURE('MY_CREATED_CAPTURE');
-- Generate the report:
SQL> EXEC DBMS_PRIVILEGE_CAPTURE.GENERATE_RESULT('MY_CREATED_CAPTURE');
Note: You can consult views DBA_USED_SYSPRIVS and DBA_USED_OBJPRIVS for USED privileges and DBA_UNUSED_PRIVS for UNUSED privileges.
-- Disable the Capture:
SQL> EXEC DBMS_PRIVILEGE_CAPTURE.DISABLE_CAPTURE('MY_CREATED_CAPTURE');
-- Drop the Capture:
SQL> EXEC DBMS_PRIVILEGE_CAPTURE.DROP_CAPTURE('MY_CREATED_CAPTURE');
Auditing Data PUMP expdp operation:
Drawbacks: it will not show what tables were actually exported. 😕
-- Create Audit Policy:
SQL> CREATE AUDIT POLICY audit_datapump ACTIONS COMPONENT=DATAPUMP ALL;
SQL> AUDIT POLICY audit_datapump BY sys,system;
# expdp \'/ as sysdba \' tables=p_fz.test directory=test dumpfile=test.dmp REUSE_DUMPFILES=Y
SQL> SELECT event_timestamp, dp_text_parameters1, dp_boolean_parameters1 FROM unified_audit_trail WHERE audit_type = 'Datapump';
-- Drop Audit Policy:
SQL> select ENTITY_NAME from AUDIT_UNIFIED_ENABLED_POLICIES where POLICY_NAME=upper('audit_datapump');
SQL> NOAUDIT policy audit_datapump by sys,system;
SQL> drop AUDIT POLICY audit_datapump;
Export/Import New Features:
========================
New parameters introduced:
include=NETWORK_ACL Starting from 12c you can export ACLs using this parameter.
TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y Disables logging while importing data. [It sounds cool but I still see no difference when using it, redo log switches keep happening at the same pace, situation is the same in 19c as well, sounds like a bug! 😒].
logtime=all Logs the time for each export/import step.
COMPRESSION_ALGORITHM = {BASIC | LOW | MEDIUM | HIGH} The higher the level, the slower the operation but the better the compression ratio; it can be changed during the operation. [Advanced Compression license required]
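A hedged example combining the new parameters (directory and dump file names are illustrative):
# expdp \'/ as sysdba \' full=y directory=test dumpfile=full.dmp logtime=all compression=all compression_algorithm=medium
# impdp \'/ as sysdba \' directory=test dumpfile=full.dmp transform=disable_archive_logging:y logtime=all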
MISC New Features:
================
- When setting ENABLE_DDL_LOGGING=TRUE, all DDL operations are written to a dedicated DDL log instead of the alert log.
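A quick sketch of enabling it (the log typically lands under the ADR home, e.g. .../diag/rdbms/<db>/<sid>/log/ddl):
SQL> ALTER SYSTEM SET ENABLE_DDL_LOGGING=TRUE;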
- Network Compression can be used to reduce the data volume transferred between DB server and clients: [Advanced Compression License]
SQLNET.COMPRESSION = (off | on) Set it to ON to activate compression. [Default: off]
SQLNET.COMPRESSION_LEVELS = (low | high) Determines the compression level; the higher the level, the more CPU is used. [Default: low]
SQLNET.COMPRESSION_THRESHOLD = <bytes> Determines the minimum data size needed to trigger compression. [Default: 1024 bytes]
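A minimal sqlnet.ora sketch; compression is negotiated between the two ends, so the parameters should be set on both the client and the server side:
SQLNET.COMPRESSION = on
SQLNET.COMPRESSION_LEVELS = (high)
SQLNET.COMPRESSION_THRESHOLD = 1024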
- [12.1] Datafiles (including SYSTEM tablespace datafiles) can be moved ONLINE while being accessed, and Oracle will automatically move the datafile to the new location on the OS side as well.
This operation doesn't get replicated on the standby DB.
SQL> ALTER DATABASE MOVE DATAFILE '/u01/oracle/rbdb1/user1.dbf' TO '+DATA1/orcl/datafile/user1.dbf' <keep> <reuse>;
keep -> Will keep the original datafile in its location and will not delete it by the end of the operation.
reuse -> In case the datafile already exists in the target location it will overwrite it. [Not recommended]
Note: In case you are moving OFFLINE datafiles, you have to copy them first on the OS side before issuing the ALTER DATABASE MOVE command.
- [12.1] You can create multiple indexes on the same set of columns to perform application migrations without dropping an existing index.
e.g. A B-tree index and a bitmap index can be created on the same set of columns.
Limitations: Only one index should be VISIBLE at a time.
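A hedged sketch of the two-index migration trick (table and index names are illustrative):
SQL> CREATE INDEX emp_deptno_ix ON emp(deptno);
SQL> CREATE BITMAP INDEX emp_deptno_bix ON emp(deptno) INVISIBLE;
SQL> ALTER INDEX emp_deptno_ix INVISIBLE;
SQL> ALTER INDEX emp_deptno_bix VISIBLE;
Creating the second index INVISIBLE satisfies the one-visible-index-at-a-time restriction; the visibility is then swapped to cut over without ever dropping the original index first.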
The following operations can run ONLINE without locking the table: [the ONLINE keyword must be specified to get this behavior]
DROP INDEX e.g. DROP INDEX schema.index ONLINE;
DROP CONSTRAINT e.g. ALTER TABLE emp DROP CONSTRAINT emp_email_uk ONLINE;
ALTER INDEX UNUSABLE e.g. ALTER INDEX emp_ix UNUSABLE ONLINE;
SET COLUMN UNUSED: e.g. ALTER TABLE emp SET UNUSED (ename) ONLINE;
ALTER TABLE MOVE e.g. ALTER TABLE emp MOVE TABLESPACE tbs2 ONLINE UPDATE INDEXES;
ALTER TABLE MODIFY (convert to partitioned) e.g. ALTER TABLE sales MODIFY PARTITION BY RANGE (c1) INTERVAL (100)(PARTITION p1 …, PARTITION p2 …) ONLINE UPDATE INDEXES;
ALTER TABLE SPLIT PARTITION e.g. ALTER TABLE orders SPLIT PARTITION ord_p1 AT ('01-JUN-2015') INTO (PARTITION ord_p1_1 COMPRESS, PARTITION ord_p1_2) ONLINE UPDATE INDEXES;
DATAFILE MOVE ONLINE: e.g. ALTER DATABASE MOVE datafile '/disk1/myexample01.dbf' TO '+DATA' REUSE KEEP;
REUSE: overwrite the datafile if it already exists in the destination.
KEEP: keep the original datafile and don't delete it after the completion of the move operation.
Allow rename/move datafiles to the same or different storage (e.g. from non-ASM to ASM).
- [12.1] A partition or subpartition can be moved ONLINE without interrupting DMLs (but not DMLs that run in Parallel/Direct Path, as they require an exclusive lock on the table).
- [12.1] Online redefinition of a table can happen in one step.
- [12.1] You can make individual table columns invisible.
- [12.1] Undo data for temporary tables can be placed in the TEMPORARY tablespace instead of the UNDO tablespace. [Not the default; needs to be enabled]
To enable temporary undo, set the parameter TEMP_UNDO_ENABLED=TRUE; this can boost performance because such undo is no longer written to the redo logs.
It's ENABLED by default on an ADG standby DB to allow DML on temporary tables. Once enabled, you can monitor temporary undo generation using V$TEMPUNDOSTAT.
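A quick sketch of enabling and checking it at the session level:
SQL> ALTER SESSION SET TEMP_UNDO_ENABLED = TRUE;
SQL> SELECT * FROM V$TEMPUNDOSTAT;
Note that the setting must be in place before the session's first DML on a temporary table; changing it afterwards has no effect for that session.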
- [12.2] SYS authentication happens in the Password File, not through the DB Dictionary.
- Whenever you change the password of a user granted SYSDBA, SYSOPER, SYSBACKUP, SYSKM, or SYSDG, you have to sync it into the Password File by revoking the privilege and granting it back, so the user gets removed and re-added to the Password File with the new password: [All Oracle versions]
SQL> SELECT USERNAME FROM V$PWFILE_USERS WHERE USERNAME != 'SYS' AND SYSDBA='TRUE';
SQL> REVOKE SYSDBA from DBA1;
SQL> GRANT SYSDBA to DBA1;
- [12.2] New view DBA_DB_LINK_SOURCES shows information about the source databases that opened database links to the local database.
- [12.2] Objects that raise ORA-00600 or ORA-07445 errors and can crash the whole instance can be quarantined and isolated, avoiding a DB crash, until the database gets restarted. V$QUARANTINE stores information about the isolated objects. [Frankly, I cannot understand the purpose of this feature from the Oracle documentation; so I won't comment!]
- [12.2] The INSTANCE_ABORT_DELAY_TIME initialization parameter specifies a delay, in seconds, before an instance aborts after a fatal error, to give the DBA time to gather information. [I don't see this as helpful: it will increase the outage time, and most probably the DBA will not be able to intervene within that window; most probably the DBA will not be staring at the alertlog at that time!]
- [12.2] The password file is automatically synced to the standby when it gets changed on the primary.
Deprecated Features:
==================
- All SRVCTL commands now use full-word options instead of single-letter options, which will be unavailable in future releases.
e.g. instead of using # srvctl start database -d orcl
Use: # srvctl start database -database orcl
References:
===========
https://apex.oracle.com/database-features
Oracle Database 12c Release 1 (12.1) Upgrade New Features [ID 1515747.1]
Master Note For Oracle Database 12c Release 1 (12.1) Database/Client Installation/Upgrade/Migration Standalone Environment (Non-RAC) [ID 1520299.1]
RMAN RECOVER TABLE Feature New to Oracle Database 12c [ID 1521524.1]