[LINUX]
My Scenario:
I have an Oracle database (SID 'has') and a schema named 'has'. I would like to use Oracle's Data Pump technology: 'expdp' to export a dump file from the 'has' schema and 'impdp' to import that dump file back into the 'has' schema.
{Export}
Step - 1
Log in as the oracle user and create a folder where all dump files will be saved as backups.
[oracle@localhost ~]$ mkdir /u01/dumps
[oracle@localhost ~]$ ls /u01/
app database dumps
[oracle@localhost ~]$
Step - 2
To take an export of the has schema, first connect as the has schema.
[oracle@localhost ~]$ sqlplus has/password
Now create a logical directory:
SQL> create or replace directory export_dp as '/u01/dumps/';
Directory created.
To check the directory just created:
SQL> select directory_name from all_directories;

DIRECTORY_NAME
------------------------------
EXPORT_DP
Step - 3
To grant read/write permissions on the /u01/dumps/ folder, exit from the SQL prompt and switch from the oracle user to the root user.
[oracle@localhost ~]$ su - root
Password:
[root@localhost ~]# chmod 777 /u01/dumps/
[root@localhost ~]#
Step - 4
Switch back from root to the oracle user and run the command below to take the backup.
[root@localhost ~]# su - oracle
[oracle@localhost ~]$ expdp has/password@has directory=export_dp dumpfile=dump_has.dmp logfile=log_has.log schemas=has
Note: 'password' is your schema password, and the 'has' after '@' is the connect identifier of your database (the SID).
{Import}
Step - 1
Log in to the database as sysdba to create a new schema. If the dump file was exported from a schema that used a non-default tablespace, create that tablespace as well. In my case the username is finance and the tablespace name is has_tablespace.
[oracle@localhost ~]$ sqlplus
SQL*Plus: Release 11.2.0.1.0 Production on Sun Sep 19 19:39:15 2021
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Enter user-name: / as sysdba
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> create tablespace has_tablespace datafile 'has.dat' size 20M autoextend on;
To determine tablespace sizes in Oracle 11g, use the query below:
SQL> select df.tablespace_name "Tablespace",
            tu.totalusedspace "Used MB",
            (df.totalspace - tu.totalusedspace) "Free MB",
            df.totalspace "Total MB",
            round(100 * ((df.totalspace - tu.totalusedspace) / df.totalspace)) "Pct. Free"
     from (select tablespace_name, round(sum(bytes) / 1048576) totalspace
           from dba_data_files
           group by tablespace_name) df,
          (select tablespace_name, round(sum(bytes) / 1048576) totalusedspace
           from dba_segments
           group by tablespace_name) tu
     where df.tablespace_name = tu.tablespace_name;
To drop a tablespace:
The following statement drops the has_tablespace tablespace together with all referential integrity constraints that refer to primary and unique keys inside it:
SQL> DROP TABLESPACE has_tablespace INCLUDING CONTENTS CASCADE CONSTRAINTS;
Deleting operating system files:
The following statement drops the has_tablespace tablespace and deletes all associated operating system datafiles:
SQL> DROP TABLESPACE has_tablespace INCLUDING CONTENTS AND DATAFILES;
Step - 2
Create the finance schema and assign has_tablespace to it.
SQL> create user finance identified by finance default tablespace has_tablespace temporary tablespace temp quota unlimited on has_tablespace;
User created.
Step - 3
SQL> grant dba to finance;
Grant succeeded.
SQL>
Step - 4
SQL> connect finance/finance
Connected.
SQL> create or replace directory import_dp as '/u01/dumps/';
Directory created.
Exit from SQL*Plus.
Step - 5
[oracle@localhost ~]$ impdp finance/finance@has directory=import_dp dumpfile=dump_has.dmp logfile=log_has.log REMAP_SCHEMA=has:finance
[oracle@localhost ~]$ sqlplus finance/finance
SQL> select * from tab;
Note: if you are importing into the same user/schema in a different database, you don't need the remap_schema clause in the impdp command. It's entirely up to you how you want to perform the import.
If you created the directory in the wrong location, drop it as follows; otherwise skip this step.
SQL> drop directory directoryname;
Directory dropped.
[Windows]
{Export}
C:\>mkdir d:\data_pump
C:\>sqlplus has/password
SQL>create or replace directory export_dp as 'd:\data_pump';
SQL> exit
C:\>expdp has/password@has directory=export_dp dumpfile=has_exp.dmp logfile=has_exp.log schemas=has
{Import}
C:\>sqlplus / as sysdba
SQL*Plus: Release 11.2.0.2.0 Production on Mon Jan 21 12:15:52 2019
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production
Create the tablespace if the schema the data was exported from used a non-default tablespace.
SQL>create tablespace has_tablespace datafile 'has1.dat' size 20M autoextend on;
Tablespace created.
SQL>create user finance identified by finance default tablespace has_tablespace temporary tablespace temp quota unlimited on has_tablespace;
User created.
SQL>grant dba to finance;
Grant succeeded.
SQL> connect finance/finance;
Connected.
SQL>create or replace directory import_dp as 'd:\data_pump';
Directory created.
SQL> exit
Disconnected from Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production
C:\>impdp finance/finance@has directory=import_dp dumpfile=has_exp.dmp REMAP_SCHEMA=has:finance
Auto Backup of Dump File
[LINUX]
Step - 1
Create a directory
[oracle@localhost ~]$ mkdir /home/oracle/scripts
Step - 2
Write the following shell script using vi or any other editor.
[oracle@localhost ~]$ vi /home/oracle/scripts/expdp.sh
#!/bin/bash
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
export ORACLE_SID=has
export PATH=$PATH:$ORACLE_HOME/bin
# Capture one timestamp so the dump and log file names always match
STAMP=$(date +%d%m%Y_%H%M%S)
expdp has/password@has directory=export_dp dumpfile=has-$STAMP.dmp logfile=has-$STAMP.log schemas=has
Save and quit: in vi, press Esc, then type :x and press Enter.
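Because each run of the script produces a new timestamped dump, old backups accumulate in the dump directory. As an optional housekeeping sketch (the directory path and 7-day retention below are assumptions, adjust them to your setup), a small function can prune files older than a given number of days:

```shell
#!/bin/bash
# Sketch: delete timestamped dump/log files older than a retention window.
# The /u01/dumps path and 7-day retention used in the example call are assumptions.
prune_dumps() {
  local dir=$1 days=$2
  # -mtime +N matches files last modified more than N*24 hours ago
  find "$dir" -name 'has-*.dmp' -mtime +"$days" -delete
  find "$dir" -name 'has-*.log' -mtime +"$days" -delete
}

# Example call, e.g. appended to expdp.sh after the export finishes:
# prune_dumps /u01/dumps 7
```

If you append the call to expdp.sh, each nightly cron run both takes the new backup and removes expired ones.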
Step - 3
Switch from the oracle user to root to make the expdp.sh script executable and owned by oracle:
[root@localhost ~]# chmod u+x /home/oracle/scripts/expdp.sh
[root@localhost ~]# chown oracle.oinstall /home/oracle/scripts/expdp.sh
Step - 4
Switch back from root to the oracle user and edit the crontab:
[oracle@localhost ~]$ crontab -e
and add following lines
# daily logical export
01 23 * * * /home/oracle/scripts/expdp.sh
Save and quit.
Note: in a crontab entry the first field is the minute and the second is the hour, so '01 23' runs the script daily at 23:01.
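Since the script timestamps its output files, the same script can safely be scheduled more than once a day without name collisions. For example, a twice-daily export might look like this (the times are illustrative):

```
# logical export at 11:01 and 23:01 every day
01 11 * * * /home/oracle/scripts/expdp.sh
01 23 * * * /home/oracle/scripts/expdp.sh
```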
[Windows]
C:\>expdp has/password@has directory=export_dp dumpfile=has_exp.dmp logfile=has_exp.log schemas=has
Type "task" in the search bar and click Task Scheduler.
Select Create Task.
Enter a name for the task, such as export_data, and press OK.
Click the Triggers tab, then the New button, to add a new trigger.
In the Settings pane, select the repetition (one time, daily, etc.) and enter the time at which the task should run.
Note: Sometimes the same task needs to be scheduled twice a day. With a fixed dump file name, this would normally require two batch files scheduled separately, because the second run would try to overwrite the first dump file and fail. To avoid this, schedule a single batch file that appends the date and time to the dump file name, so each run of the logical backup of a schema (say has) produces a uniquely named file.
Appending Date & Time to the Dump File Name
[LINUX]
On Linux/UNIX, use command substitution as in the expdp.sh script above, e.g. dumpfile=has-$(date +%d%m%Y_%H%M%S).dmp
[Windows]
Date: %date:~4,2%-%date:~7,2%-%date:~12,2%
Time: %time:~0,2%-%time:~3,2%-%time:~6,2%
These expressions can be used to generate a unique dump file name. Note that the substring offsets depend on the Windows regional date/time format, so adjust them for your locale.
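As a sketch, a batch file combining these expressions into a unique file name might look like the following (the substring offsets assume a US-style date format and may need adjusting):

```
set STAMP=%date:~4,2%-%date:~7,2%-%date:~12,2%_%time:~0,2%-%time:~3,2%-%time:~6,2%
expdp has/password@has directory=export_dp dumpfile=has_exp_%STAMP%.dmp logfile=has_exp_%STAMP%.log schemas=has
```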
Note: if empty tables are missing from the export, the problem is related to the Oracle 11gR2 feature called deferred segment creation: empty tables have no segment and therefore are not listed in dba_segments.
The feature is controlled by the database parameter deferred_segment_creation, which defaults to TRUE. If you set it to FALSE, new empty tables created after the change will get segments and will be exported.
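To change the parameter, it can be set at the system level (this requires ALTER SYSTEM privilege and affects only tables created afterwards, not existing empty ones):

```sql
-- Disable deferred segment creation so new empty tables get a segment
-- immediately and are therefore picked up by export.
ALTER SYSTEM SET deferred_segment_creation = FALSE;
```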
To export existing empty tables, allocate an extent for each one:
SQL> ALTER TABLE <empty table> ALLOCATE EXTENT;
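Running ALTER TABLE ... ALLOCATE EXTENT by hand for every empty table is tedious. As a sketch (run it as the schema owner, then spool the output and execute it), a query can generate the statements for you:

```sql
-- Generate one ALLOCATE EXTENT statement per empty table in the
-- current schema (SEGMENT_CREATED = 'NO' marks deferred segments).
SELECT 'ALTER TABLE "' || table_name || '" ALLOCATE EXTENT;' AS fix_stmt
FROM user_tables
WHERE segment_created = 'NO';
```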