AWS gives you two main ways to run an Oracle database in the cloud: EC2 and RDS. While EC2 gives you full control over the operating system (OS), including root access, RDS doesn't give you any kind of OS access. Because an RDS instance is managed by AWS, you only get a master admin user with limited privileges (neither SYSDBA nor DBA), which makes regular DBA tasks such as importing a schema a bit challenging.
Without OS access you won't be able to use command-line utilities like exp, expdp, imp, impdp, and rman.
In this post I'll explain step by step how to import a schema into RDS using Oracle built-in packages. Luckily, Oracle provides many built-in packages that enable you to perform lots of tasks without needing OS access.
Amazon has already documented importing a schema into RDS quite well, but I thought I'd explain it in the fashion of a real-world scenario:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Oracle.Procedural.Importing.html
Task Summary:
Export the "EPS_OWNER" schema from an 11.2.0.3 database residing on an AWS EC2 Linux instance, upload the export dump file to an S3 bucket, then import the dump file into a 12.2.0.1 AWS RDS database, changing the schema name to "EPS" along the way.
Prerequisites:
- An AWS S3 bucket must be created, and both the source EC2 instance and the target RDS instance must have read/write access to it through a role. [An S3 bucket is a kind of shared storage between AWS services where you can upload/download files; it will be used during this demo to transfer the export dump file between the EC2 source instance and the RDS target instance.]
Step 1: Export the schema on the source [EC2 instance]:
I already have OS access to the oracle user on the source EC2 instance, so I used my exportdata script to export the EPS_OWNER schema.
Note: In case you are importing from an Enterprise Edition DB into a Standard Edition DB, make sure to reset all tables that have the COMPRESSION option enabled to NOCOMPRESS before exporting the data, e.g.:
alter table eps_owner.compressed_table NOCOMPRESS;
This is because Standard Edition doesn't have the compression feature; otherwise, the table creation will fail with an ORA-39083 error during the import on the Standard Edition DB.
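For reference, this is roughly what an equivalent schema-level Data Pump export looks like when run directly on the EC2 host (the exportdata script itself is not shown here; the connect string, directory object, and log file name below are illustrative assumptions):
# expdp system@STG schemas=EPS_OWNER directory=DATA_PUMP_DIR dumpfile=EXPORT_eps_owner_STG_04-03-19.dmp logfile=EXPORT_eps_owner_STG_04-03-19_exp.log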
Step 2: Upload the export file to the S3 bucket from the source [EC2 instance]:
In case the AWS CLI is not yet configured on the source machine, you can configure it with the following command, providing an "Access Key ID" and "Secret Access Key" that have access to the bucket:
# aws configure
AWS Access Key ID [None]: AFFFTFWNINVCRTWLBWII5Q
AWS Secret Access Key [None]: EdfeergzA1+kErse7kg53nudSv/Il/xx2x2wvwiuy
Default region name [None]:
Default output format [None]:
Note: The keys above are dummy ones; you have to put your own keys.
Upload the export dump files to the S3 bucket:
# cd /backup
# aws s3 cp EXPORT_eps_owner_STG_04-03-19.dmp s3://eps-bucket
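Optionally, you can confirm the dump file actually landed in the bucket before switching to the RDS side (same bucket name as above):
# aws s3 ls s3://eps-bucket/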
In case you are using S3 Browser from a Windows machine, configure the bucket using this flow:
Open S3 Browser -> Accounts -> Add New Account:
<use your own bucket details here; I'm just giving an example>
Account Name: eps-bucket
Account Type: Amazon S3 Storage
Access Key ID: AFFFTFWNINVCRTWLBWII5Q
Secret Access Key: EdfeergzA1+kErss2kg53nudSv/Il/xx2x2wvwiuy
Click "Add New Account"
Accounts -> click "eps-bucket" -> Click "Yes" to add 'External bucket' -> Bucket Name: "eps-bucket"
Note: S3 Browser is a third-party Windows GUI tool that helps you upload/download files to/from an S3 bucket. You can download it from here:
https://s3browser.com/download.aspx
Step 3: Download the export file from the S3 bucket to the target [RDS instance]:
Remember, there is no OS access on RDS, so we will connect to the database with a tool such as SQL Developer using the RDS master user credentials.
Use the AWS built-in package "rdsadmin.rdsadmin_s3_tasks" to download the dump file from the S3 bucket to DATA_PUMP_DIR:
Warning: The following command will download all the files in the bucket, so before running it make sure the bucket contains nothing but the export dump files.
SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
p_bucket_name => 'eps-bucket',
p_directory_name => 'DATA_PUMP_DIR')
AS TASK_ID FROM DUAL;
In case you have the export files stored under a specific directory, you can tell the download procedure to fetch all the files under that directory by using the p_s3_prefix parameter like this: [don't forget the slash / after the directory name]
SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
p_bucket_name => 'eps-bucket',
p_s3_prefix => 'export_files/',
p_directory_name => 'DATA_PUMP_DIR')
AS TASK_ID FROM DUAL;
Or, in case you only want to download a single named file, e.g. EXPORT_eps_owner_STG_04-03-19.dmp located under a specific directory such as export_files, execute the command like this:
SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
p_bucket_name => 'eps-bucket',
p_s3_prefix => 'export_files/EXPORT_eps_owner_STG_04-03-19.dmp',
p_directory_name => 'DATA_PUMP_DIR')
AS TASK_ID FROM DUAL;
The above command will return a TASK_ID:
TASK_ID
--------------------------
1554286165468-636
Use that TASK_ID to monitor the download progress by running this statement:
SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-1554286165468-636.log'));
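The log file name follows the pattern dbtask-<TASK_ID>.log under the BDUMP directory, so for any other transfer task you can plug the returned TASK_ID into the same query (the <TASK_ID> below is a placeholder):
SQL> SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-<TASK_ID>.log'));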
In case you get this error:
ORA-00904: "RDSADMIN"."RDSADMIN_S3_TASKS"."DOWNLOAD_FROM_S3": invalid identifier
This means S3 integration is not configured for your RDS instance.
To configure S3 integration: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/oracle-s3-integration.html
Open the IAM Management Console: https://console.aws.amazon.com/iam/home?#/home
In the navigation pane, choose Policies -> Create policy. On the Visual editor tab, choose "Choose a service", then choose S3 -> check "All S3 actions".
Choose Resources, and choose Add ARN for the bucket -> Enter the Bucket name: eps-bucket
Click Review Policy -> Give it a name "eps-s3-integration" -> Create Policy
Associate your IAM role with your RDS DB:
Sign in to the AWS Management Console: https://console.aws.amazon.com/rds/
Choose the Oracle DB instance name -> On the Connectivity & security tab -> Manage IAM roles section:
IAM roles to this instance: -> "eps-s3-integration"
Feature -> S3_INTEGRATION
Click "Add role"
Make sure that your database instance is using an option group that includes the S3_INTEGRATION option.
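If you prefer to double-check from the command line which option group the instance is actually attached to (the instance identifier "eps-rds" below is just a placeholder), a describe call like this should show it:
# aws rds describe-db-instances --db-instance-identifier eps-rds --query 'DBInstances[0].OptionGroupMemberships'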
Once the download is complete, query the downloaded files under DATA_PUMP_DIR using this query:
select * from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by mtime;
Any file having the "incomplete" keyword in its name is still being downloaded.
Now that the AWS-related tasks are done, let's jump to the import part, which is purely Oracle's.
Step 4: Create the tablespace and the target schema user on the target [RDS instance]:
In case the target user does not yet exist on the target RDS database, you can go ahead and create it along with its tablespace.
-- Create a tablespace: [Using Oracle Managed Files (OMF)]
CREATE SMALLFILE TABLESPACE "TBS_EPS" DATAFILE SIZE 100M AUTOEXTEND ON NEXT 100M LOGGING EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO;
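If you want to confirm the tablespace was created and is online before creating the user, a quick check against dba_tablespaces does the job:
SQL> select tablespace_name, status, contents from dba_tablespaces where tablespace_name='TBS_EPS';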
-- In case you need to create a Password Verify Function on RDS:
Note: As you cannot create objects under SYS in RDS, you have to use the following ready-made procedure from AWS to create the verify function:
Note: The verify function name should contain one of these keywords: "PASSWORD", "VERIFY", "COMPLEXITY", "ENFORCE", or "STRENGTH"
begin
rdsadmin.rdsadmin_password_verify.create_verify_function(
p_verify_function_name => 'CUSTOM_PASSWORD_VFY_FUNCTION',
p_min_length => 8,
p_max_length => 256,
p_min_letters => 1,
p_min_lowercase => 1,
p_min_uppercase => 1,
p_min_digits => 3,
p_min_special => 2,
p_disallow_simple_strings => true,
p_disallow_whitespace => true,
p_disallow_username => true,
p_disallow_reverse => true,
p_disallow_db_name => true,
p_disallow_at_sign => false);
end;
/
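Once the block above completes, the function is created under SYS for you; you can verify it exists and is valid with a quick dictionary query:
SQL> select owner, object_name, object_type, status from dba_objects where object_name='CUSTOM_PASSWORD_VFY_FUNCTION';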
-- In case you want to create a new profile:
create profile APP_USERS limit
LOGICAL_READS_PER_SESSION DEFAULT
PRIVATE_SGA DEFAULT
CPU_PER_SESSION DEFAULT
PASSWORD_REUSE_TIME DEFAULT
COMPOSITE_LIMIT DEFAULT
PASSWORD_VERIFY_FUNCTION CUSTOM_PASSWORD_VFY_FUNCTION
PASSWORD_GRACE_TIME DEFAULT
PASSWORD_LIFE_TIME 90
SESSIONS_PER_USER DEFAULT
CONNECT_TIME DEFAULT
CPU_PER_CALL DEFAULT
FAILED_LOGIN_ATTEMPTS 6
PASSWORD_LOCK_TIME DEFAULT
PASSWORD_REUSE_MAX 12
LOGICAL_READS_PER_CALL DEFAULT
IDLE_TIME DEFAULT;
-- Create the user: [Here, per my business requirements, the user will be different from the original user on the source DB]
CREATE USER EPS IDENTIFIED BY "test123" DEFAULT TABLESPACE TBS_EPS TEMPORARY TABLESPACE TEMP QUOTA UNLIMITED ON TBS_EPS PROFILE APP_USERS;
GRANT CREATE SESSION TO EPS;
GRANT CREATE JOB TO EPS;
GRANT CREATE PROCEDURE TO EPS;
GRANT CREATE SEQUENCE TO EPS;
GRANT CREATE TABLE TO EPS;
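The grants above are just the minimum my schema needed. If you'd rather mirror the source schema's system privileges instead of guessing, you can generate them on the source database and replay the output against RDS (EPS_OWNER is the source schema name; review the list before running it):
SQL> select 'GRANT '||privilege||' TO EPS;' from dba_sys_privs where grantee='EPS_OWNER';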
Step 5: Import the dump file on the target [RDS instance]:
Open a session from SQL Developer and make sure this session will not disconnect while the import is running. As the RDS master user, execute the following block of code, which will keep running in the foreground, allowing you to monitor the import job on the fly and see any incoming errors:
DECLARE
ind NUMBER; -- Loop index
h1 NUMBER; -- Data Pump job handle
percent_done NUMBER; -- Percentage of job complete
job_state VARCHAR2(30); -- To keep track of job state
le ku$_LogEntry; -- For WIP and error messages
js ku$_JobStatus; -- The job status from get_status
jd ku$_JobDesc; -- The job description from get_status
sts ku$_Status; -- The status object returned by get_status
BEGIN
h1 := DBMS_DATAPUMP.OPEN( operation => 'IMPORT', job_mode => 'SCHEMA', job_name=>null);
-- Specify the single dump file and its directory:
DBMS_DATAPUMP.ADD_FILE(handle => h1, directory => 'DATA_PUMP_DIR', filename => 'EXPORT_eps_owner_STG_04-03-19.dmp');
-- Specify the logfile for the import process: [Very important to read it later after the completion of the import]
DBMS_DATAPUMP.ADD_FILE(handle => h1, directory => 'DATA_PUMP_DIR', filename => 'import_eps_owner_STG_04-03-19.LOG', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
-- Disable Archivelog for the import: [12c new feature]
DBMS_DATAPUMP.METADATA_TRANSFORM(handle => h1, name => 'DISABLE_ARCHIVE_LOGGING', value => 1);
-- REMAP SCHEMA: [the source schema EPS_OWNER is imported into the target schema EPS]
DBMS_DATAPUMP.METADATA_REMAP(h1,'REMAP_SCHEMA','EPS_OWNER','EPS');
-- If a table already exists: [SKIP, REPLACE, TRUNCATE]
DBMS_DATAPUMP.SET_PARAMETER(h1,'TABLE_EXISTS_ACTION','SKIP');
-- REMAP TABLESPACE: [the source tablespace EPS is remapped to the target tablespace TBS_EPS]
DBMS_DATAPUMP.METADATA_REMAP(h1,'REMAP_TABLESPACE','EPS','TBS_EPS');
-- Start the job. An exception is returned if something is not set up properly.
DBMS_DATAPUMP.START_JOB(h1);
-- The following loop will monitor the job until it completes; meanwhile, the progress information will be displayed:
percent_done := 0;
job_state := 'UNDEFINED';
while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
dbms_datapump.get_status(h1,
dbms_datapump.ku$_status_job_error +
dbms_datapump.ku$_status_job_status +
dbms_datapump.ku$_status_wip,-1,job_state,sts);
js := sts.job_status;
-- If the percentage done changed, display the new value.
if js.percent_done != percent_done
then
dbms_output.put_line('*** Job percent done = ' ||
to_char(js.percent_done));
percent_done := js.percent_done;
end if;
-- If any work-in-progress (WIP) or error messages were received for the job, display them.
if (bitand(sts.mask,dbms_datapump.ku$_status_wip) != 0)
then
le := sts.wip;
else
if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
then
le := sts.error;
else
le := null;
end if;
end if;
if le is not null
then
ind := le.FIRST;
while ind is not null loop
dbms_output.put_line(le(ind).LogText);
ind := le.NEXT(ind);
end loop;
end if;
end loop;
-- Indicate that the job finished and gracefully detach from it.
dbms_output.put_line('Job has completed');
dbms_output.put_line('Final job state = ' || job_state);
dbms_datapump.detach(h1);
END;
/
In case you have used wrong parameters or a bad combination, e.g. using METADATA_FILTER instead of METADATA_REMAP when importing into a schema with a different name, you will get a bunch of vague errors similar to the ones below:
ORA-31627: API call succeeded but more information is available
ORA-06512: at "SYS.DBMS_DATAPUMP", line 7143
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4932
ORA-06512: at "SYS.DBMS_DATAPUMP", line 7137
ORA-06512: at line 7
You can also monitor the execution of the import job using this query:
SQL> SELECT owner_name, job_name, operation, job_mode, DEGREE, state FROM dba_datapump_jobs where state='EXECUTING';
In case you want to kill the job: [provide the '<JOB_NAME>' and '<OWNER>']
SQL> DECLARE
h1 NUMBER;
BEGIN
h1:=DBMS_DATAPUMP.ATTACH('SYS_IMPORT_SCHEMA_01','EPS');
DBMS_DATAPUMP.STOP_JOB (h1, 1, 0);
END;
/
Once the job is complete, compare the number of objects between the source and target DBs:
SQL> select object_type,count(*) from dba_objects where owner='EPS' group by object_type;
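On the source database, run the equivalent count against the original schema name so the two results can be compared side by side:
SQL> select object_type,count(*) from dba_objects where owner='EPS_OWNER' group by object_type;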
You can also view the import log on RDS using this query:
SQL> set lines 10000 pages 0
SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('DATA_PUMP_DIR','import_eps_owner_STG_04-03-19.LOG'));
Or you can upload the log to the S3 bucket and get it from there:
SQL> select * from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by mtime;
SQL> SELECT rdsadmin.rdsadmin_s3_tasks.upload_to_s3( p_bucket_name => '<bucket_name>', p_prefix => '<file_name>', p_s3_prefix => '', p_directory_name => 'DATA_PUMP_DIR') AS TASK_ID FROM DUAL;
Run the After Import script that was generated by the exportdata script in Step 1, after replacing the original exported schema name EPS_OWNER with the target imported schema name EPS.
Check the invalid objects:
SQL> col object_name for a45
select object_name,object_type,status from dba_objects where owner='EPS' and status<>'VALID';
Compile the invalid objects: [if found]
SQL> EXEC SYS.UTL_RECOMP.recomp_parallel(4, 'EPS');
Step 6: [Optional] Delete the dump file from the target [RDS instance]:
Check the existing files under the DATA_PUMP_DIR directory:
SQL> select * from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by mtime;
Generate a delete script for all files:
SQL> select 'exec utl_file.fremove(''DATA_PUMP_DIR'','''||filename||''');' from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by mtime;
Run the output script:
e.g. exec utl_file.fremove('DATA_PUMP_DIR','EXPORT_eps_owner_STG_04-03-19.dmp');
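After running the generated fremove statements, re-run the directory listing to confirm that the dump file is gone from DATA_PUMP_DIR:
SQL> select * from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by mtime;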
For more reading on similar common DBA tasks on RDS:
http://dba-tips.blogspot.com/2020/02/the-dba-guide-for-managing-oracle.html
References:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Oracle.Procedural.Importing.html
S3 Bucket creation:
https://docs.aws.amazon.com/AmazonS3/latest/gsg/CreatingABucket.html
DBMS_DATAPUMP:
https://docs.oracle.com/database/121/ARPLS/d_datpmp.htm#ARPLS356
RDS Master Admin User:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.MasterAccounts.html
Comments:
I have the export files stored under the following tree:
oracle/files/exports
Any idea how to download the files under that tree without downloading the whole bucket? My bucket has TBs of files, but my export files are just 12GB.
This format will work for you:
SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
p_bucket_name => '',
p_s3_prefix => 'oracle/files/exports/',
p_directory_name => 'DATA_PUMP_DIR')
AS TASK_ID FROM DUAL;
Don't forget the "/" after the last directory in the path :-)
For one table it worked for me and exported successfully. For another table, while I'm using exactly the same import block, I keep getting this error:
ORA-06550: line 22, column 46:
PLS-00312: a positional parameter association may not follow a named association
ORA-06550: line 22, column 3:
PL/SQL: Statement ignored
ORA-06550: line 25, column 3:
PLS-00306: wrong number or types of arguments in call to 'SET_PARAMETER'
ORA-06550: line 25, column 3:
PL/SQL: Statement ignored
ORA-06550: line 28, column 46:
PLS-00312: a positional parameter association may not follow a named association
ORA-06550: line 28, column 3:
PL/SQL: Statement ignored
06550. 00000 - "line %s, column %s:\n%s"
I'm familiar with this error as I've faced it many times; just remove "handle =>" from all the lines of the import command so it will look like this:
DECLARE
ind NUMBER; -- Loop index
h1 NUMBER; -- Data Pump job handle
percent_done NUMBER; -- Percentage of job complete
job_state VARCHAR2(30); -- To keep track of job state
le ku$_LogEntry; -- For WIP and error messages
js ku$_JobStatus; -- The job status from get_status
jd ku$_JobDesc; -- The job description from get_status
sts ku$_Status; -- The status object returned by get_status
BEGIN
h1 := DBMS_DATAPUMP.OPEN( operation => 'IMPORT', job_mode => 'SCHEMA', job_name=>null);
-- Specify the single dump file and its directory:
DBMS_DATAPUMP.ADD_FILE(h1, directory => 'DATA_PUMP_DIR', filename => 'EXPORT_eps_owner_STG_04-03-19.dmp');
-- Specify the logfile for the import process: [Very important to read it later after the completion of the import]
DBMS_DATAPUMP.ADD_FILE(h1, directory => 'DATA_PUMP_DIR', filename => 'import_eps_owner_STG_04-03-19.LOG', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
-- Disable Archivelog for the import: [12c new feature]
DBMS_DATAPUMP.metadata_transform(h1, name => 'DISABLE_ARCHIVE_LOGGING', value => 1);
-- REMAP SCHEMA:
-- DBMS_DATAPUMP.METADATA_REMAP(h1,'REMAP_SCHEMA','EPS_OWNER','EPS');
-- If a table already exists: [SKIP, REPLACE, TRUNCATE]
DBMS_DATAPUMP.SET_PARAMETER(h1,'TABLE_EXISTS_ACTION','SKIP');
-- REMAP TABLESPACE:
DBMS_DATAPUMP.METADATA_REMAP(h1,'REMAP_TABLESPACE','EPS','TBS_EPS');
-- Start the job. An exception is returned if something is not set up properly.
DBMS_DATAPUMP.START_JOB(h1);
-- The following loop will monitor the job until it completes; meanwhile, the progress information will be displayed:
percent_done := 0;
job_state := 'UNDEFINED';
while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
dbms_datapump.get_status(h1,
dbms_datapump.ku$_status_job_error +
dbms_datapump.ku$_status_job_status +
dbms_datapump.ku$_status_wip,-1,job_state,sts);
js := sts.job_status;
-- If the percentage done changed, display the new value.
if js.percent_done != percent_done
then
dbms_output.put_line('*** Job percent done = ' ||
to_char(js.percent_done));
percent_done := js.percent_done;
end if;
-- If any work-in-progress (WIP) or error messages were received for the job, display them.
if (bitand(sts.mask,dbms_datapump.ku$_status_wip) != 0)
then
le := sts.wip;
else
if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
then
le := sts.error;
else
le := null;
end if;
end if;
if le is not null
then
ind := le.FIRST;
while ind is not null loop
dbms_output.put_line(le(ind).LogText);
ind := le.NEXT(ind);
end loop;
end if;
end loop;
-- Indicate that the job finished and gracefully detach from it.
dbms_output.put_line('Job has completed');
dbms_output.put_line('Final job state = ' || job_state);
dbms_datapump.detach(h1);
END;
/
Getting the below error. Any idea?
Error starting at line : 1 in command -
DECLARE
ind NUMBER; -- Loop index
h1 NUMBER; -- Data Pump job handle
percent_done NUMBER; -- Percentage of job complete
job_state VARCHAR2(30); -- To keep track of job state
le ku$_LogEntry; -- For WIP and error messages
js ku$_JobStatus; -- The job status from get_status
jd ku$_JobDesc; -- The job description from get_status
sts ku$_Status; -- The status object returned by get_status
BEGIN
h1 := DBMS_DATAPUMP.OPEN( operation => 'IMPORT', job_mode => 'SCHEMA', job_name=>null);
-- Specify the single dump file and its directory
DBMS_DATAPUMP.ADD_FILE(handle => h1, directory => 'DATA_PUMP_DIR', filename => 'RMS_TEST.dmp');
-- Specify the logfile for the import process: [Very important to read it later after the completion of the import]
DBMS_DATAPUMP.ADD_FILE(handle => h1, directory => 'DATA_PUMP_DIR', filename => 'RMS_TEST.LOG', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
-- Disable Archivelog for the import: [12c new feature]
DBMS_DATAPUMP.metadata_transform ( handle => h1, name => 'DISABLE_ARCHIVE_LOGGING', value => 1);
-- REMAP SCHEMA:
-- DBMS_DATAPUMP.METADATA_REMAP(h1,'REMAP_SCHEMA','EPS_OWNER','EPS');
-- If a table already exists: [SKIP, REPLACE, TRUNCATE]
DBMS_DATAPUMP.SET_PARAMETER(h1,'TABLE_EXISTS_ACTION','REPLACE');
-- REMAP TABLESPACE:
DBMS_DATAPUMP.METADATA_REMAP(h1,'REMAP_TABLESPACE','EPS','TBS_EPS');
-- Start the job. An exception is returned if something is not set up properly.
DBMS_DATAPUMP.START_JOB(h1);
-- The following loop will monitor the job until it get complete.meantime the progress information will be displayed:
percent_done := 0;
job_state := 'UNDEFINED';
while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
dbms_datapump.get_status(h1,
dbms_datapump.ku$_status_job_error +
dbms_datapump.ku$_status_job_status +
dbms_datapump.ku$_status_wip,-1,job_state,sts);
js := sts.job_status;
-- If the percentage done changed, display the new value.
if js.percent_done != percent_done
then
dbms_output.put_line('*** Job percent done = ' ||
to_char(js.percent_done));
percent_done := js.percent_done;
end if;
-- If any work-in-progress (WIP) or Error messages were received for the job, display them.
if (bitand(sts.mask,dbms_datapump.ku$_status_wip) != 0)
then
le := sts.wip;
else
if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
then
le := sts.error;
else
le := null;
end if;
end if;
if le is not null
then
ind := le.FIRST;
while ind is not null loop
dbms_output.put_line(le(ind).LogText);
ind := le.NEXT(ind);
end loop;
end if;
end loop;
-- Indicate that the job finished and gracefully detach from it.
dbms_output.put_line('Job has completed');
dbms_output.put_line('Final job state = ' || job_state);
dbms_datapump.detach(h1);
END;
Error report -
ORA-39001: invalid argument value
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3507
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3756
ORA-06512: at line 15
39001. 00000 - "invalid argument value"
*Cause: The user specified API parameters were of the wrong type or
value range. Subsequent messages supplied by
DBMS_DATAPUMP.GET_STATUS will further describe the error.
*Action: Correct the bad argument and retry the API.
Hi Saud,
The error is not clear enough for me, but can you remove this piece "handle =>" from all the lines and try again?
Hello, I had a quick question: if I run this from SQL Developer it runs OK -
SELECT rdsadmin.rdsadmin_s3_tasks.upload_to_s3( p_bucket_name => '', p_prefix => '', prefix => '', p_directory_name => 'DATA_PUMP_DIR') AS TASK_ID FROM DUAL;
But if I put that statement as part of a stored procedure I get the below error -
"rdsadmin"."rdsadmin_s3_tasks"."upload_to_s3": invalid identifier. Would you know why?
DBMS_DATAPUMP.METADATA_TRANSFORM(handle => hdnl, name => 'OID', 0, null);
*
ERROR at line 7:
ORA-06550: line 7, column 1:
PLS-00306: wrong number or types of arguments in call to 'METADATA_TRANSFORM'
ORA-06550: line 7, column 1:
PL/SQL: Statement ignored
I need a hand, how to do this with more than 1 dump file?
Simply add more lines for dbms_datapump.add_file; each line represents a dump file, like this:
ReplyDelete....
dbms_datapump.add_file(hdnl,'EXPORT_01.dmp','DATA_PUMP_DIR');
dbms_datapump.add_file(hdnl,'EXPORT_02.dmp','DATA_PUMP_DIR');
....
Adding new files as you mentioned would not work.
Would you mind sharing the complete command you are using along with the error message?
DeleteGreat, thanks for sharing this post.Much thanks again. Awesome.
ReplyDeleteaws online training
Genexbd is capable of storing a large number of records efficiently. New data can be calculated easily and older data can be updated or deleted. With a database, one can search and retrieve information quickly. The role of data in marketing, business, government, etc., has become increasingly important.
ReplyDeletehttps://genexdbs.com/
Thanks for this blog keep sharing your thoughts like this...
ReplyDeleteIntroduction of CC++
What is C++ Programming Language
What would I need to do if I have to remap multiple tablespaces into one tablespace?
ReplyDeletehi
ReplyDeleteWhile importing data in AWS RDS we are getting below error..Please help us
ReplyDeleteError starting at line : 1 in command -
DECLARE
v_hdnl NUMBER;
BEGIN
v_hdnl := DBMS_DATAPUMP.OPEN(
operation => 'IMPORT',
job_mode => 'TABLE',
version => 'COMPATIBLE',
job_name => null);
DBMS_DATAPUMP.ADD_FILE(
handle => v_hdnl,
filename => 'OBIEEDP_MIG_TABLES_090922.dmp',
directory => 'DATA_PUMP_DIR',
filetype => dbms_datapump.ku$_file_type_dump_file);
DBMS_DATAPUMP.SET_PARALLEL(
handle => v_hdnl,
degree => 3);
DBMS_DATAPUMP.ADD_FILE(
handle => v_hdnl,
filename => 'import_OBIEEDP_MIG_TABLES_090922.log',
directory => 'DATA_PUMP_DIR',
filetype => dbms_datapump.ku$_file_type_log_file);
DBMS_DATAPUMP.METADATA_FILTER(v_hdnl,'NAME_LIST','''PARTY_BASIC_INFO''');
DBMS_DATAPUMP.METADATA_REMAP(v_hdnl,'REMAP_SCHEMA','NEW_CORE_DM','DATAMIG');
DBMS_DATAPUMP.METADATA_REMAP(v_hdnl,'REMAP_TABLESPACE','NEW_CORE_DM_TBS','TS_DATAMIG_01');
DBMS_DATAPUMP.SET_PARAMETER(v_hdnl,'TABLE_EXISTS_ACTION','REPLACE');
DBMS_DATAPUMP.START_JOB(v_hdnl);
END;
Error report -
ORA-31627: API call succeeded but more information is available
ORA-06512: at "SYS.DBMS_DATAPUMP", line 7333
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4929
ORA-06512: at "SYS.DBMS_DATAPUMP", line 7327
ORA-06512: at line 26
31627. 00000 - "API call succeeded but more information is available"
*Cause: The user specified job parameters that yielded informational
messages.
*Action: Call DBMS_DATAPUMP.GET_STATUS to retrieve additional information.
Look, I'm having the same issue as you. AWS support wasn't able to help; they just suggested running impdp from another EC2 instance with the NETWORK_LINK parameter to pump the data to the remote RDS instance over a database link. Oracle support said that if you use DBMS_DATAPUMP.METADATA_FILTER then you cannot use the REMAP_SCHEMA parameter of "DBMS_DATAPUMP.METADATA_REMAP" along with it.
It's obvious that this package goes mad when the METADATA_FILTER parameter gets called. My SR with Oracle support has been open for weeks without a solution, and I guess you may have gone through the same process as me ;-)