Thursday, July 9, 2020

Ready RMAN Backup Script To Schedule from Crontab

This shell script can help you schedule RMAN full backup from the Crontab:
https://www.dropbox.com/home?preview=schedule_rman_full_bkp.sh

In case you are interested in an interactive script to run it manually then use this script:
https://dba-tips.blogspot.com/2017/03/rman-backup-script.html

For the first script, before you schedule it to run from the crontab you have to do the following:

Modify the following mandatory parameters at the top of the script to match the environment of the database you want to back up:

ORACLE_SID=orcl              The instance name you want to back up. e.g. orcl
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1    The Oracle Home location. e.g. /u01/app/oracle/product/11.2.0/dbhome_1
BACKUPLOC=/backup            The backup location (full path). e.g. /backup

If you have time, you can also adjust the following optional parameters:

COMPRESSION=Y             Set to Y to enable backup compression.
BKP_RETENTION=7           Backups older than 7 days will be deleted. [MAINTENANCEFLAG must =Y for this parameter to take effect]
ARCH_RETENTION=7          Archivelogs older than 7 days will be deleted. [MAINTENANCEFLAG must =Y for this parameter to take effect]
CTRL_AUTOBKP_RETENTION=7  Controlfile autobackup files older than 7 days will be deleted. [MAINTENANCEFLAG must =Y for this parameter to take effect]
MAINTENANCEFLAG=N         If enabled, it will perform the following tasks: [Default Disabled]
                                                - Enable CONTROLFILE AUTOBACKUP mode.
                                                - Enable crashed backups to be resumed.
                                                - Maintain a meaningful name for the CONTROLFILE backup piece.
                                                - Crosscheck old backups.
                                                - Crosscheck Archivelog files.
                                                - Delete expired old backups from the disk. [Based on the backup retention you defined]
                                                - Delete obsolete backup records, which no longer exist on disk, from the catalog.
                                                - Delete old Archivelogs. [Based on the Archivelog retention you defined]
                                                - Delete old Controlfile backups. [Based on the Controlfile retention you defined]
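Once the parameters are set, the script can be scheduled from the oracle user's crontab. As a sketch (the script path and log destination below are assumptions; adjust them to your environment), a daily run at 1:00 AM would look like:

```shell
# Hypothetical crontab entry (add it with "crontab -e" as the oracle user);
# the script path and log destination are assumptions:
00 01 * * * /home/oracle/scripts/schedule_rman_full_bkp.sh > /tmp/schedule_rman_full_bkp.log 2>&1
```

Redirecting stdout and stderr to a log file keeps cron from mailing the output and leaves a trace of the last run for troubleshooting.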

That's all!

This script is part of the DBA BUNDLE, to read more about it please visit this link:
http://dba-tips.blogspot.ae/2014/02/oracle-database-administration-scripts.html

DISCLAIMER: THIS SCRIPT IS DISTRIBUTED IN THE HOPE THAT IT WILL BE USEFUL, BUT WITHOUT ANY WARRANTY. IT IS PROVIDED "AS IS".

GitHub Version: 

Oracle Resource Manager | Decoding the Enigma Machine

Introduction:
Oracle Resource Manager (available since 8i) is one of the underestimated Oracle features, because it's not commonly used among DBAs.
In this article I'll try to explain it in a simple way to make you change your mind and start making use of it.
 
What is Resource Manager:  
In short, it gives the DBA the power to control system resources (i.e. CPU) by specifying who has the priority to use them at times of resource starvation. This means that if you don't have CPU bottlenecks throughout the day, you don't need it!

Licensing:
Resource Manager comes with Enterprise Edition without extra cost.

Why a DBA should use Resource Manager:
The following scenarios illustrate the benefit of using the Resource Manager:

- You want to control the resources consumed by users of tools like SQL Developer/TOAD (i.e. limit their CPU consumption, cancel the query, kill the session, or limit the degree of parallelism they can use).
- Resource contention occurs at night, and you have a very important ETL job that must complete on time, so you want Oracle to give this job the highest priority.
- An important DB service is used by finance applications at the end of the month, and you don't want other departments connecting via different services to compete for CPU with that finance service.
- You want the nightly RMAN full backup job to release CPU to the production applications if CPU contention happens at night.

How to use Resource Manager:
Configuring Resource Manager is a bit complicated, but if you follow the right sequence it will be a piece of cake.

Now it's time to set up a Plan with its Consumer Groups and map those Consumer Groups to Consumers. Before you get confused by these terms, let me start with an example that simplifies them:

Imagine you have opened a new fried chicken chain called Zeko Fried Chicken (ZFC):
You start by putting together a business plan that will boost food production and get the customers served as quickly as possible. [Let's refer to this plan as the Resource Plan] The plan is as follows:
- You categorize the food production into 3 main meals (Spicy, Non-Spicy and Kids meal). [Let's refer to the meals as Consumer Groups]
- You categorize your customers into three categories (adults who like spicy food, adults who don't, and kids). [Let's refer to the customers as Consumers]
- When customers come in, they order one of the above meals, and each meal is ready within 2 minutes. [Let's refer to the process of delivering the meals to the customers as the Plan Directive]

Congrats! If you understood the above example, you have understood the Resource Manager components:
Resource Plan is the top-level container: it specifies how the CPU is shared among the consumer groups and how any unused CPU is redistributed.
Consumer Group is a set of consumers with similar resource requirements, sharing one resource allocation.
Consumers are the sessions that utilize the resources allocated to their consumer group.
Plan Directive associates a consumer group with a plan and defines the resource limits that apply to that group.
 
Next, let's move to the creation of the Resource Manager components ...

- First, create something called a Pending Area. It is a kind of temporary area Oracle uses to track and validate changes to the Resource Manager, maintaining the integrity of its configuration and avoiding any mix-up. The Pending Area is associated with your session only; at the end of setting up the Resource Manager you have to close/finish this Pending Area in order to validate and submit the changes. It's similar to BEGIN and END in PL/SQL, but specific to the Resource Manager.
 
EXEC  DBMS_RESOURCE_MANAGER.clear_pending_area();
EXEC  DBMS_RESOURCE_MANAGER.create_pending_area(); 



Create a Plan:
EXEC  DBMS_RESOURCE_MANAGER.create_plan('prod_plan', 'Plan for high and low priority tasks.');

    Rollback: Delete Plan:
    -- EXEC dbms_resource_manager.DELETE_PLAN_CASCADE ('prod_plan');

Create consumer groups:
EXEC  DBMS_RESOURCE_MANAGER.create_consumer_group('high_cg',       'high priority free to utilize the cpu');
EXEC  DBMS_RESOURCE_MANAGER.create_consumer_group('low_cg',        'low priority');
EXEC  DBMS_RESOURCE_MANAGER.create_consumer_group('conditioned_cg','low priority when met some conditions');


    Rollback: Delete Consumer Groups:
    -- EXEC DBMS_RESOURCE_MANAGER.delete_consumer_group('high_cg');
    -- EXEC DBMS_RESOURCE_MANAGER.delete_consumer_group('low_cg');
    -- EXEC DBMS_RESOURCE_MANAGER.delete_consumer_group('conditioned_cg');


Create Plan Directive:
Plan Directive defines the resource limits for each Consumer group: [Here is the core idea of the Resource Manager, if you understand it, you understand the whole topic]

Note: MGMT_Px represents the CPU level. MGMT_P1 is the capacity of 100% of all CPUs (cores); whatever remains unused (leftover) from MGMT_P1 can be utilized by MGMT_P2 (level 2), whatever is left over from level 2 can be used by MGMT_P3, and so on up to 8 levels. This means that if MGMT_P1 is utilizing 100%, no CPU will be left for MGMT_P2.

Ideally, all apps/services should have a share in level 1 (MGMT_P1), where each share represents a % of the total CPU and all shares combined should not exceed 100%. This share doesn't restrict an app/service from using 100% of the CPU when the CPU is not utilized by other services (as long as the number of active running sessions is less than the number of CPUs), but it guarantees that at any time each app/service will get its defined % share when other sessions are competing for CPU resources (CPU is busy).
 
OK, if all applications are under level 1, then each application has a defined share of the CPU, but what is the difference between the plan directive levels (MGMT_P1, MGMT_P2, ...)?
The difference between plan directive levels is similar to a lion family: when they are not hungry and meat is plentiful, any member of the family can eat peacefully from the meat (no CPU contention), but when they all become hungry and the meat is not enough for everyone (CPU contention), the rule (Plan Directive) applies: the lion eats first until he is full, then the lionesses, and last the cubs.
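To make the leftover cascade concrete, here is a toy arithmetic sketch (plain shell, not an Oracle tool), using the same level-1 shares this article defines further down (80% for OTHER_GROUPS, 20% for low_cg):

```shell
#!/bin/sh
# Toy illustration of the MGMT_Px leftover cascade (not Oracle code).
# Level 1 allocates OTHER_GROUPS=80% and LOW_CG=20%:
P1_TOTAL=$((80 + 20))
# Whatever level 1 leaves unallocated is what level-2 (MGMT_P2) groups share under contention:
LEFTOVER_FOR_P2=$((100 - P1_TOTAL))
echo "Leftover CPU for MGMT_P2 groups: ${LEFTOVER_FOR_P2}%"
# Had level 1 allocated only 60%, level-2 groups would share the remaining 40%.
```

With 80 + 20 allocated at level 1 there is no leftover, which is why a level-2 group only gets CPU when the level-1 groups are idle.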

 
 
Note: There is a consumer group that exists by default called "OTHER_GROUPS"; it MUST be part of any defined resource plan.

Note: To drop a consumer group, you must start by dropping the consumer group at the lowest MGMT_Px level.
Note: A plan directive can be modified (dropped/recreated) on the fly, and it will take effect immediately on the currently connected sessions once the pending area is submitted.

Define a separate plan directive for the default "OTHER_GROUPS": [80% of CPU in Level 1 - highest]
BEGIN
  DBMS_RESOURCE_MANAGER.create_plan_directive (
    plan             => 'prod_plan',
    group_or_subplan => 'OTHER_GROUPS',
    comment          => 'high priority by default',
    MGMT_P1          => 80,
    MGMT_P2          => 0);
End;
/
 
Note: When CPU resources are not enough for everyone, the sessions assigned to this group are guaranteed 80% of the CPU.
 
    Rollback: Delete Plan Directive:
    -- EXEC DBMS_RESOURCE_MANAGER.DELETE_PLAN_DIRECTIVE (plan=>'prod_plan', group_or_subplan=>'OTHER_GROUPS');
 
Define the LOW Priority group: [20% of CPU in level 1]
BEGIN
  DBMS_RESOURCE_MANAGER.create_plan_directive (
    plan             => 'prod_plan',
    group_or_subplan => 'low_cg',
    comment          => 'Low Priority - level 1',
    -- Limit the number of concurrent active sessions for this consumer group:
    -- ACTIVE_SESS_POOL_P1      => 32,
    -- Limit the degree of parallelism for sessions in this consumer group:
    -- PARALLEL_DEGREE_LIMIT_P1 => 2,
    MGMT_P1          => 20
   );
END;
/
 
Note: When CPU resources are not enough for everyone, the sessions assigned to this group are guaranteed 20% of the CPU.
 
    Rollback: Delete Plan Directive:
    -- EXEC DBMS_RESOURCE_MANAGER.DELETE_PLAN_DIRECTIVE (plan=>'prod_plan',group_or_subplan=>'low_cg');

 
Define the Conditioned group: [the least priority but 100% of level 2]
BEGIN
  DBMS_RESOURCE_MANAGER.create_plan_directive(
    plan             => 'prod_plan',
    group_or_subplan => 'conditioned_cg',
    comment          => 'if the utilization met specific parameter lower the priority',
    MGMT_P1          => 0,
    MGMT_P2          => 100,
    MGMT_P3          => 0,
    -- Switch to another group (with lower CPU) / Cancel Call / Kill Session when the switch conditions are met:
    -- switch_group     => 'low_cg',
    -- switch_group     => 'CANCEL_SQL',  -- CANCEL_SQL is available only in 12c onwards.
    switch_group        => 'KILL_SESSION',
    -- If the elapsed time exceeds 30 minutes (1800 seconds):
    switch_time         => 1800,
    -- If the number of I/O requests exceeds 500000: (11g+)
    switch_io_reqs      => 500000,
    -- If the amount of I/O requested exceeds 1024 MB: (11g+)
    switch_io_megabytes => 1024,
    -- Apply the switch criteria per call instead of per session: (12c+)
    switch_for_call     => TRUE
    );
END;
/
 
Note: When CPU resources are not enough for everyone, the sessions assigned to this group will not utilize any CPU until the sessions assigned to the level 1 groups (OTHER_GROUPS & LOW_CG) are no longer utilizing it. (Remember the lionesses that have to wait for the lion to finish his meal in the earlier example.)
 
    Rollback: Delete Plan Directive:
    -- EXEC DBMS_RESOURCE_MANAGER.DELETE_PLAN_DIRECTIVE (plan=>'prod_plan',group_or_subplan=>'conditioned_cg');
 
 

Validate/Apply the resource plan:
EXEC DBMS_RESOURCE_MANAGER.validate_pending_area;
EXEC DBMS_RESOURCE_MANAGER.submit_pending_area();


Map/Assign Users/Services/Modules/Machines to Consumer Groups:

Let's suppose we don't want the reporting applications to compete with the other production apps for CPU resources. In this case we will assign the sessions connecting through the reporting application service (let's say its name is reporting_svc) to the conditioned_cg consumer group:

Create a pending area: [a temporary work area for the Resource Manager configuration until it gets submitted]

EXEC  DBMS_RESOURCE_MANAGER.clear_pending_area();
EXEC  DBMS_RESOURCE_MANAGER.create_pending_area();


BEGIN
DBMS_RESOURCE_MANAGER.set_consumer_group_mapping (
  -- Other available mapping attributes:
  -- attribute    => DBMS_RESOURCE_MANAGER.oracle_user,
  -- attribute    => DBMS_RESOURCE_MANAGER.module_name,
  -- attribute    => DBMS_RESOURCE_MANAGER.client_os_user,
  -- attribute    => DBMS_RESOURCE_MANAGER.client_program,
  -- attribute    => DBMS_RESOURCE_MANAGER.client_machine,
  -- attribute    => DBMS_RESOURCE_MANAGER.service_module,
  attribute       => DBMS_RESOURCE_MANAGER.service_name,
  value           => 'reporting_svc',
  consumer_group  => 'conditioned_cg');
END;
/

    Rollback: To remove a mapping, run the same command with the same attribute & value, but omit the "consumer_group" parameter:
    BEGIN
    DBMS_RESOURCE_MANAGER.set_consumer_group_mapping (
      attribute       => DBMS_RESOURCE_MANAGER.service_name,
      value           => 'reporting_svc');
    END;
    /

Here we are mapping by service_name, but you can assign the sessions to the consumer group based on any of the other available attributes (the ones commented out above).
 
Now, what will happen?
As the sessions connected through reporting_svc became part of the conditioned_cg consumer group, the following will apply to them:

- Sessions connected to the reporting_svc service will not compete with the other applications when CPU utilization reaches 100%, and will wait for it to drop below 100% before starting to work.
- If any session connected to the reporting_svc service runs a query for more than 30 minutes, the session will be killed.
- If any session connected to the reporting_svc service exceeds 500,000 I/O requests, the session will be killed.
- If any session connected to the reporting_svc service exceeds 1GB of I/O volume, the session will be killed.

You may ask: we haven't assigned the rest of the application sessions to the other consumer groups yet, so how will they get a high priority?

Any session that has not been explicitly mapped to a consumer group becomes part of the OTHER_GROUPS consumer group by default. This explains why OTHER_GROUPS must be part of any plan (so that no session is left unmapped), and also why we configured it with MGMT_P1, which has the highest priority.

Example of mapping SQL Developer sessions to conditioned_cg group:
 
           BEGIN
           DBMS_RESOURCE_MANAGER.set_consumer_group_mapping (
             attribute       => DBMS_RESOURCE_MANAGER.module_name,
             value           => 'SQL Developer',
             consumer_group  => 'conditioned_cg');
           END;
           /


Validate/Apply the resource plan:
EXEC DBMS_RESOURCE_MANAGER.validate_pending_area;
EXEC DBMS_RESOURCE_MANAGER.submit_pending_area();


Grant permission to All users to use/switch to the created Consumer Groups:

EXEC dbms_resource_manager_privs.grant_switch_consumer_group('public', 'high_cg', FALSE);
EXEC dbms_resource_manager_privs.grant_switch_consumer_group('public', 'low_cg',  FALSE);
EXEC dbms_resource_manager_privs.grant_switch_consumer_group('public', 'conditioned_cg', FALSE);


Activate the Resource Plan:
SQL> ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = prod_plan SCOPE=BOTH SID='*';
Or:
SQL> EXEC DBMS_RESOURCE_MANAGER.SWITCH_PLAN ('prod_plan');
 
   Rollback: Reset the plan in RAC setup:
   ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = DEFAULT_PLAN;

   Rollback: Reset the plan in a Standalone setup:
   ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = '';

To prevent the resource plan from being changed automatically (e.g. by the scheduler maintenance windows), use the FORCE keyword:

SQL> ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'FORCE:prod_plan' SCOPE=BOTH SID='*';

Keeping the default resource plan DEFAULT_PLAN on RAC has its advantages: it protects critical background processes such as PMON and LMS from CPU starvation, SYS & SYSTEM sessions get the highest priority, and the maintenance AUTO TASKS (i.e. Gather Statistics, SQL Tuning, Segment Advisor) get the lowest priority.

Implementation is complete, you can tailor the above steps to cater to your business needs.
 
Monitoring The Resource Manager:
 
View which Resource Plan is currently Active: [CPU should be ON]
SQL> select name, cpu_managed from v$rsrc_plan where is_top_plan = 'TRUE';  

View a Resource Plan and its Consumer Groups:
SQL> select group_or_subplan from dba_rsrc_plan_directives where plan = 'prod_plan';

View the History of activated Resource Plans:
SQL> select name, to_char(start_time, 'MON DD HH24:MI') start_time, to_char(end_time, 'MON DD HH24:MI') end_time from v$rsrc_plan_history order by start_time;

View the Consumer Groups Mapping Attributes:
SQL> select * from DBA_RSRC_GROUP_MAPPINGS order by consumer_group;
 
View the mapped Consumer Group to the connected sessions:
SQL> SELECT username,initial_rsrc_consumer_group FROM dba_users;
 
SQL> select name,active_sessions,io_service_time,small_read_megabytes from V$RSRC_CONSUMER_GROUP;
 
SQL> SELECT username,module,sid,serial#,resource_consumer_group,count(*) from v$session
     where username is not null group by username,module,sid,serial#,resource_consumer_group order by 2,1;
 
SQL> SELECT username,sid, resource_consumer_group FROM  v$session WHERE  SERVICE_NAME='reporting_svc';

     -- Display the Sessions and their current consumer group:
     col ORIGINAL_CONSUMER_GROUP for a30
     select s.username,r.sid, r.mapped_consumer_group, r.mapping_attribute,  c.consumer_group original_consumer_group
     from v$rsrc_session_info r, dba_rsrc_consumer_groups c, v$session s where r.orig_consumer_group_id = c.consumer_group_id and s.sid=r.sid;


     -- Display the Sessions that moved between consumer groups:
     select r.sid, c1.consumer_group original_consumer_group, c2.consumer_group current_consumer_group
     from v$rsrc_session_info r, dba_rsrc_consumer_groups c1, dba_rsrc_consumer_groups c2  where r.orig_consumer_group_id = c1.consumer_group_id
     and r.current_consumer_group_id = c2.consumer_group_id and r.orig_consumer_group_id != r.current_consumer_group_id;


View the Consumer Groups:
col CONSUMER_GROUP     for a30
col CATEGORY        for a20
col COMMENTS        for a30
col CPU_METHOD        for a20
col MGMT_METHOD        for a20
col status        for a10
col MANDATORY        for a10
select CONSUMER_GROUP,status,mandatory,CPU_METHOD,MGMT_METHOD,CATEGORY,substr(COMMENTS,1,30) "COMMENTS" from DBA_RSRC_CONSUMER_GROUPS
where  consumer_group not like '%ORA$%'
and    CATEGORY='OTHER';


View the Consumer Group attributes:
SQL> select ATTRIBUTE,value,status from DBA_RSRC_GROUP_MAPPINGS where CONSUMER_GROUP='CONDITIONED_CG';

Consumer Groups CPU Consumption:
SQL> select to_char(begin_time, 'HH:MI') time, consumer_group_name,
            60 * (select value from v$osstat where stat_name = 'NUM_CPUS') total_cpu,
            60 * (select value from v$parameter where name = 'cpu_count') db_total,
            cpu_consumed_time / 1000 consumed,
            cpu_consumed_time / (select value from v$parameter where name = 'cpu_count') / 600 "%CPU_UTILIZATION",
            cpu_wait_time / 1000 throttled
     from   v$rsrcmgrmetric_history
     order by begin_time;

Note: The existence of the wait event "resmgr:cpu quantum" indicates CPU throttling by the Resource Manager when the database workload exceeds the CPU capacity.

References:
https://www.oracle.com/technetwork/database/performance/resource-manager-twp-133705.pdf
https://www.oracle.com/technetwork/articles/servers-storage-admin/o11-056-oracledb-rm-419380.pdf
https://oracle-base.com/articles/8i/resource-manager-8i
http://www.dba-oracle.com/t_consumer_groups.htm

Saturday, July 4, 2020

DBA Bundle V5.7

The July release of DBA Bundle is now available:
https://www.dropbox.com/s/k96rl0f4g39ukih/DBA_BUNDLE5.tar?dl=0


The new features include:

- Added execution plan history and enhanced the output formatting in the sql_id_details.sh script, which shows the details/execution plan/history of a SQL statement.
- Enhanced locating the listener log and bdump logs in oracle_cleanup.sh for cleaning DB logs.
- Enhanced the backup location validity check in the RMAN_full.sh script for taking an on-demand DB backup.
- Added multipath settings to the configuration baseline log in the configuration_baseline.sh script, which saves the details of DB and server configurations.
- Enhanced the 12c compatibility of dbdailychk.sh, which creates a health check report for the database.
- Excluded Golden Gate processes from the long-running queries reporting in dbdailychk.sh and active_sessions.sh scripts.
- Added options for modifying the backup tag and overwriting the last backup in schedule_rman_full_bkp.sh, which can be configured to perform a regular database full backup.
- Minor functionality enhancements and bug fixes in the following scripts:
export_data.sh   For exporting data.
rebuild_table.sh For rebuilding tables online.
oradebug.sh      For gathering an oradebug report.

For more reading on how to use the DBA Bundle:
http://dba-tips.blogspot.com/2014/02/oracle-database-administration-scripts.html

Friday, July 3, 2020

19c Remarkable New Features

Oracle 19c is packed with lots of cool features; I've summarized the significant ones in the categories below.

Support:

[Supported until 2027, as per Note 742060.1 --which keeps changing]

Performance Features:

- SQL Plan Management automatically evolves the plans and accepts the best ones.

- Gather Statistics Auto task can run more frequently:
    - Enable high-frequency task:    
EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_STATUS','ON');
    - Set Maximum job run time duration: [e.g. 10min = 600 sec]
EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_MAX_RUN_TIME','600');
    - Job execution frequency:  [e.g. 6hours = 21600 seconds]
EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_INTERVAL','21600');

- Caching Big tables:
Specify how much % of the db_buffer_cache will be allocated for caching big tables: [40%]
SQL> alter system set db_big_table_cache_percent_target = 40;

Note: Oracle will automatically cache any big table in memory if its "temperature" exceeds 1000.

- Automatic Indexing:
Automatic Indexing analyzes the application workload every 15 minutes and automatically creates, drops, and rebuilds unusable B-tree indexes in the database based on the changes in the application workload.

Auto Index Creation:
Indexes are initially created as INVISIBLE indexes. If the analysis shows a performance improvement on the candidate SQL statements, the index is converted to VISIBLE and will be used by the application; if not, the created index is marked as UNUSABLE, will be dropped later, and the candidate SQL statements are blacklisted from using Auto Indexes in the future.

Auto Index Deletion:
If Automatic Indexing finds that an index created by the Auto Indexing feature has been unused for 373 days, it will be dropped automatically.
This rule doesn't apply to manually created indexes (those not created by the Auto Indexing feature).

Technical Details:
------------------
Enable/Disable:
..............
Enable Auto Indexing and Allow creation of VISIBLE auto indexes to be used immediately by the Application:
SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','IMPLEMENT');

Enable Auto Indexing but Allow only the creation of INVISIBLE auto indexes, so it will NOT be used by the Application:
SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','REPORT ONLY');

Disable the Auto Indexing feature:
SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','OFF');

Scope of Analysis:

ADD ALL schemas to the Auto Indexing scope:
SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_SCHEMA', NULL, TRUE);

REMOVE a schema from Auto Indexing list:
SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_SCHEMA', 'HR', FALSE);

REMOVE ALL schemas from the Auto Indexing scope:
SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_SCHEMA', NULL, FALSE);

Add a schema to the Auto Indexing scope:
SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_SCHEMA', 'HR', NULL);

Control the Auto Deletion RETENTION of the UNUSED indexes created by AUTO INDEXING feature: [e.g. 90 days | The default is 373 days]
SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_RETENTION_FOR_AUTO', '90');

Control the Auto Deletion RETENTION of the UNUSED indexes created manually: [Default is never to be deleted]
[Strongly recommended to NOT set this parameter]
SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_RETENTION_FOR_MANUAL', '90');
    To Reset it back to Never Delete Manually created indexes:
    SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_RETENTION_FOR_MANUAL', NULL);

Specify the Default Tablespace to store Auto indexes: [e.g. IDX_TBS tablespace | The default is to store them on the DEFAULT TABLESPACE for the database]
SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_DEFAULT_TABLESPACE', 'IDX_TBS');

Specify percentage of tablespace to allocate for new Auto indexes: [e.g. 5%]
SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_SPACE_BUDGET', '5');

Show automatic Indexing information for last 24 hours:
declare
 report clob := null;
begin
 report := DBMS_AUTO_INDEX.REPORT_ACTIVITY();
 dbms_output.put_line(report); -- SET SERVEROUTPUT ON first to display the report
end;
/

Show the last activity for Automatic Indexing operation:
declare
 report clob := null;
begin
 report := DBMS_AUTO_INDEX.REPORT_LAST_ACTIVITY(
 type => 'HTML',
 section => 'SUMMARY +INDEX_DETAILS +ERRORS',
 level => 'BASIC');
 dbms_output.put_line(report); -- SET SERVEROUTPUT ON first to display the report
end;
/

Show automatic Indexing information for a specific time: [HTML]
declare
 report clob := null;
begin
 report := DBMS_AUTO_INDEX.REPORT_ACTIVITY(
 activity_start => TO_TIMESTAMP('2018-11-01', 'YYYY-MM-DD'),
 activity_end => TO_TIMESTAMP('2018-12-01', 'YYYY-MM-DD'),
 type => 'HTML',
 section => 'SUMMARY',
 level => 'BASIC');
 dbms_output.put_line(report); -- SET SERVEROUTPUT ON first to display the report
end;
/

Configurations:
...............
SQL> SELECT * FROM DBA_AUTO_INDEX_CONFIG;
SQL> SELECT OWNER,INDEX_NAME,AUTO FROM DBA_INDEXES where AUTO='YES';


New Non-blocking DDL statements added in 19c:

Just to remind you of the non-blocking DDL statements that were added in the previous releases:

Non-blocking DDLs added as of 11.2:

CREATE INDEX online
ALTER INDEX rebuild online
ALTER TABLE add column not null with default value
ALTER TABLE add constraint enable no validate
ALTER TABLE modify constraint validate
ALTER TABLE add column (without any default)
ALTER INDEX visible / invisible
ALTER INDEX parallel / no parallel

Non-blocking DDLs added to the list in 12.1:

DROP INDEX online (backported to 11.2)
ALTER TABLE set unused column online
ALTER TABLE drop constraint online
ALTER INDEX unusable online
ALTER TABLE modify column visible / invisible
ALTER TABLE move partition / sub-partition online
ALTER TABLE add nullable column with default value

Non-blocking DDLs added to the list in 12.2:

ALTER TABLE split partition [sub-partition] online
ALTER TABLE move online (move of a non-partitioned table)
ALTER TABLE modify partition by .. online (to convert a non-partitioned table to partitioned state)

Non-blocking DDLs added to the list in 18.1:

ALTER TABLE merge partition online
ALTER TABLE modify partition by .. online (to change the partitioning schema of a table)


Security Features:

- 19c cluster interconnect traffic is automatically secured by Transport Layer Security (TLS); no manual configuration is needed by the DBA.

- Schema-only accounts have no password, so no one can log in to them directly:
SQL> CREATE USER TEST NO AUTHENTICATION;

- Privilege analysis is now available as part of Oracle Database Enterprise Edition to show exactly which privileges are used or unused by each account.
  You create a policy and enable it for some time to do the analysis, then you disable the policy.


RAC/Clusterware Features:

- Starting from 19.3, Oracle again supports placing the OCR/VOTEDISKs on a non-ASM shared filesystem.


Import/Export Features:


New parameters introduced to impdp:
TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y     Disable logging for the imported objects during the import, and set them back to LOGGING afterwards.
TRANSFORM=TABLE_COMPRESSION_CLAUSE:COMPRESS     Import tables with the BASIC compression option.
TRANSFORM=TABLE_COMPRESSION_CLAUSE:'ROW STORE COMPRESS ADVANCED'    Import tables with the OLTP compression option.
LOGTIME=ALL     Enable timestamping of the output log messages for each step.
COMPRESSION_ALGORITHM=[BASIC | LOW | MEDIUM | HIGH]     Compress the dump file with different compression algorithms (an expdp-side parameter).
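As an illustration (not from the original post), a hypothetical impdp invocation combining two of these parameters might look like the following; the connect string, directory object, and dump file name are assumptions:

```shell
# Illustrative sketch only - requires an Oracle client and a reachable instance.
# system@orcl, DATA_PUMP_DIR, and hr.dmp are assumed names:
impdp system@orcl directory=DATA_PUMP_DIR dumpfile=hr.dmp logfile=hr_imp.log \
      logtime=all \
      transform=disable_archive_logging:y
```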

A new audit policy can be created to audit EXPORT/IMPORT operations:
CREATE AUDIT POLICY audit_dp_all_policy ACTIONS COMPONENT=DATAPUMP [EXPORT | IMPORT | ALL];
AUDIT POLICY audit_dp_all_policy;


Backup & Recovery Features:

- Clone & Standby DB can be created using dbca -createDuplicateDB command:
# dbca -createDuplicateDB
    -gdbName             global_database_name
    -primaryDBConnectionString     easy_db_connection_string
    -sid             database_system_identifier
    [-initParams         initialization_parameters
        [-initParamsEscapeChar initialization_parameters_escape_character]]
    [-sysPassword         SYS_user_password]
    [-adminManaged]
    [-nodelist             database_nodes_list]
    [-datafileDestination     data_files_directory]
    [-recoveryAreaDestination     recovery_files_directory
        [-recoveryAreaSize     fast_recovery_area_size]]
    [-databaseConfigType     {SINGLE | RAC | RACONENODE}
        [-RACOneNodeServiceName service_name_for_RAC_One_Node_database]]
    [-useOMF {true | false}]
    [-storageType {FS | ASM}
        [-asmsnmpPassword     ASMSNMP_password]
        -datafileDestination     database_files_directory]
    [-createListener         new_database_listener]
    [-createAsStandby
        [-dbUniqueName db_unique_name_for_standby_database]]


Availability Features:


- The restore points created on Primary are automatically propagated to the Standby.

- Easy Connect Plus can initiate a connection with a simple connect string:
sales-server//inst1

This Translates to:
(DESCRIPTION=
   (CONNECT_DATA=
      (SERVICE_NAME=)
      (INSTANCE_NAME=inst1))
   (ADDRESS=
      (PROTOCOL=TCP)
      (HOST=sales-server)
      (PORT=1521)))

e.g.
conn aa/aa@1.1.1.1/orcl


Data Guard Features:

- Duplicate a standby DB from an active primary database using compression:
In 19c you can duplicate a standby database from an active primary DB using the COMPRESSION feature, which sends the database blocks over the network in a compressed format, saving bandwidth as well as time.

Command example:

run {
  allocate channel disk1 type disk;
  allocate auxiliary channel aux1 type disk;
  duplicate target database for standby from active database USING COMPRESSED BACKUPSET
  spfile
  parameter_value_convert 'orcl','orcls'
  set db_name='orcl'
  set db_unique_name='orcls';
}

- Automatic flashback of the standby:
When a FLASHBACK operation happens on the primary DB, the standby DB will automatically be flashed back as well.

- Restore points propagation:
  After a switchover between primary and standby, the restore points created on the primary before the switchover will be available on the standby.

- DML Operations on Active Data Guard: [an insignificant feature, but worth mentioning]

If enabled, DML transactions can run on the standby DB. The DML is not applied directly on the standby; it is first shipped to the primary and applied there, the changes are then replicated back to the standby, and only then does the DML complete (a long path that can slow down the primary).

 To Enable DMLs on Standby:
 SQL> ALTER SYSTEM SET ADG_REDIRECT_DML=TRUE;
     Or: Enable it on session level:
     SQL> ALTER SESSION ENABLE ADG_REDIRECT_DML;
 To Enable PL/SQL operations on Standby:
 SQL> ALTER SESSION ENABLE ADG_REDIRECT_PLSQL;

- New Parameters to restart ADG process in case of hung network/IO:

  - DATA_GUARD_MAX_IO_TIME sets the maximum number of seconds that can elapse before a process is considered hung while performing a regular I/O operation in an Oracle Data Guard environment. [Default 240 sec]
  - DATA_GUARD_MAX_LONGIO_TIME sets the maximum number of seconds that can elapse before a process is considered hung while performing a long I/O operation in an Oracle Data Guard environment. [Default 240 sec]

- Sync the gap between the primary and the standby using one RMAN command: [From Standby side]
RMAN> recover standby database from service orclpr;

Note: orclpr is the service which connects to the Primary DB from the standby.

- Desupported ADG parameters:
  - MAX_CONNECTIONS attribute of the LOG_ARCHIVE_DEST_n initialization parameter.

Miscellaneous Features:

- DBA_REGISTRY_BACKPORTS view introduced to show the bugs that have been fixed by the patches applied to the database.
- Oracle Multimedia is desupported. Oracle recommends storing multimedia content in SecureFiles LOBs and using third-party products for image processing and conversion.
- The Oracle Streams feature is desupported. Oracle recommends using Oracle GoldenGate to replace all replication features of Oracle Streams.
- DBMS_JOB jobs are converted to DBMS_SCHEDULER jobs in Oracle Database 19c. DBMS_JOB has been deprecated since 12.2.

References:
https://docs.oracle.com/en/database/oracle/oracle-database/19/admin/database-administrators-guide.pdf
https://apex.oracle.com/database-features/