Export/Import: Posted by Narashim Reddy .R On August 15, 2009
How does one use the import/export utilities?
The following example demonstrates how the imp/exp utilities are used:
exp scott/tiger file=emp.dmp log=emp.log tables=emp rows=yes indexes=no
imp scott/tiger file=emp.dmp full=yes
NOTE: If you do not like command line utilities, you can import and export data with the Schema
Manager GUI that ships with Oracle Enterprise Manager (OEM).
Can one export a subset of a table?
From Oracle8i one can use the QUERY= export parameter to selectively unload a subset of the data from
a table. Look at this example:
exp scott/tiger tables=emp query=\"where deptno=10\"
Can one monitor how fast a table is imported?
If you need to monitor how fast rows are imported from a running import job, query sys.v_$sqlarea for
the active INSERT statement:
select substr(sql_text, instr(sql_text, 'INTO '), 30) table_name,
       rows_processed,
       round((sysdate - to_date(first_load_time, 'yyyy-mm-dd hh24:mi:ss')) * 24 * 60, 1) minutes,
       trunc(rows_processed / ((sysdate - to_date(first_load_time, 'yyyy-mm-dd hh24:mi:ss')) * 24 * 60)) rows_per_min
from   sys.v_$sqlarea
where  sql_text like 'INSERT %INTO "%'
and    command_type = 2
and    open_versions > 0;
Can one import tables to a different tablespace?
Import the dump file with the INDEXFILE= option to write the table and index DDL to a script file.
Edit this indexfile: remove the remarks and change the tablespace names to the required tablespaces.
Run this indexfile against your database, this will create the required tables in the appropriate
tablespaces
Import the table(s) with the IGNORE=Y option (a sketch of the whole sequence follows).
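A minimal sketch of this sequence, assuming a dump file emp.dmp and the SCOTT schema (file and
script names are illustrative):
# write the DDL to emp_ddl.sql without importing any rows
imp scott/tiger file=emp.dmp indexfile=emp_ddl.sql
# edit emp_ddl.sql: uncomment the CREATE statements and change the TABLESPACE clauses,
# then run it and import the rows on top of the pre-created tables
sqlplus scott/tiger @emp_ddl.sql
imp scott/tiger file=emp.dmp ignore=y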
Alternatively, change the default tablespace for the user:
Revoke the UNLIMITED TABLESPACE privilege from the user
Revoke the user's quota on the tablespace from which the object was exported. This forces the import
utility to create tables in the user's default tablespace.
Make the tablespace to which you want to import the default tablespace for the user
Import the table (see the SQL sketch below)
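In SQL, assuming user SCOTT and hypothetical tablespaces OLD_TS and NEW_TS, this boils down to:
REVOKE UNLIMITED TABLESPACE FROM scott;
ALTER USER scott QUOTA 0 ON old_ts;
ALTER USER scott QUOTA UNLIMITED ON new_ts;
ALTER USER scott DEFAULT TABLESPACE new_ts;
-- then import as usual, e.g. imp scott/tiger file=emp.dmp ignore=y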
Does one need to drop/ truncate objects before importing?
Before one imports rows into already populated tables, one needs to truncate or drop these tables to get
rid of the old data. If not, the new data will be appended to the existing tables. One must always DROP
existing sequences before re-importing. If the sequences are not dropped, they will generate numbers
inconsistent with the rest of the database.
Note: It is also advisable to drop indexes before importing to speed up the import process. Indexes can
easily be recreated after the data has been successfully imported.
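For example, to clear out the EMP table, its sequence and an index before a re-import (object names
are illustrative):
TRUNCATE TABLE emp;
DROP SEQUENCE emp_seq;
DROP INDEX emp_idx1;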
Can one import/export between different versions of Oracle?
Different versions of the import utility are upwards compatible. This means that one can take an export
file created from an old export version, and import it using a later version of the import utility. This is
quite an effective way of upgrading a database from one release of Oracle to the next.
Oracle also ships some previous catexpX.sql scripts that can be executed as user SYS enabling older
imp/exp versions to work (for backwards compatibility). For example, one can run
$ORACLE_HOME/rdbms/admin/catexp7.sql on an Oracle 8 database to allow the Oracle 7.3 exp/imp
utilities to run against an Oracle 8 database.
Can one export to multiple files?/ Can one beat the Unix 2 Gig limit?
From Oracle8i, the export utility supports multiple output files. This feature enables large exports to be
divided into files whose sizes will not exceed any operating system limits (FILESIZE= parameter). When
importing from a multi-file export, you must provide the same filenames in the same sequence in the
FILE= parameter. Look at this example:
exp SCOTT/TIGER FILE=D:\F1.dmp,E:\F2.dmp FILESIZE=10m LOG=scott.log
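The corresponding import must list the same files in the same order (the log file name here is
illustrative):
imp SCOTT/TIGER FILE=D:\F1.dmp,E:\F2.dmp FULL=Y LOG=scott_imp.log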
Use the following technique if you use an Oracle version prior to 8i:
Create a compressed export on the fly. Depending on the type of data, you probably can export up to 10
gigabytes to a single file. This example uses gzip. It offers the best compression I know of, but you can
also substitute it with zip, compress or whatever.
# create a named pipe
mknod exp.pipe p
# read the pipe output to zip file in the background
gzip < exp.pipe > scott.exp.gz &
# feed the pipe
exp userid=scott/tiger file=exp.pipe
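To import from such a compressed export, one can reverse the process (a sketch using the same
approach):
# create a named pipe
mknod imp.pipe p
# decompress the dump file into the pipe in the background
gunzip < scott.exp.gz > imp.pipe &
# feed imp from the pipe
imp userid=scott/tiger file=imp.pipe full=y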
How can one improve Import/ Export performance?
EXPORT:
Set the BUFFER parameter to a high value (e.g. 2800000)
Make sure all applications on the site have been closed
Make sure the export file is on a physical drive not used by Oracle
DO NOT export to an NFS mounted filesystem. It will take forever.
IMPORT:
Create an indexfile so that you can create indexes AFTER you have imported data. Do this by setting
INDEXFILE to a filename and then import. No data will be imported but a file containing index definitions
will be created. You must edit this file afterwards and supply the passwords for the schemas on all
CONNECT statements.
Place the file to be imported on a separate physical disk from the Oracle data files
Increase DB_BLOCK_BUFFERS considerably in the init$SID.ora file
Set the LOG_BUFFER to a big value and restart Oracle.
Stop redo log archiving if it is running (ALTER DATABASE NOARCHIVELOG;)
Create a BIG tablespace with a BIG rollback segment inside. Set all other rollback segments offline
(except the SYSTEM rollback segment, of course). The rollback segment should be roughly as big as
your biggest table.
Use COMMIT=N in the import parameter file if you can afford it
Use ANALYZE=N in the import parameter file to avoid time consuming ANALYZE statements
Remember to run the indexfile previously created
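A hypothetical import parameter file (imp_fast.par) pulling several of these settings together; the dump
and log file names are illustrative:
FILE=emp.dmp
LOG=emp_imp.log
BUFFER=2800000
COMMIT=N
ANALYZE=N
INDEXES=N
IGNORE=Y
Run it with: imp scott/tiger parfile=imp_fast.par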
What are the common Import/ Export problems?
ORA-00001: unique constraint (...) violated - You are importing duplicate rows. Use IGNORE=NO to skip
tables that already exist (imp will give an error if the object is re-created).
ORA-01555: snapshot too old - Ask your users to STOP working while you are exporting, or use the
parameter CONSISTENT=NO.
ORA-01562: failed to extend rollback segment - Set COMMIT=Y while importing.
IMP-00015: statement failed, object already exists - Use the IGNORE=Y import parameter to ignore
these errors, but be careful as you might end up with duplicate rows.
SQL*Loader
Is there a SQL*Unloader to download data to a flat file?
Oracle does not supply a data unload utility. However, you can use SQL*Plus to select and format your
data and then spool it to a flat file (see the sketch below).
You might also want to investigate third party tools like TOAD or ManageIT Fast Unloader from CA to
help you unload data from Oracle.
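A minimal SQL*Plus sketch (the table tab1 and its columns are illustrative):
set echo off newpage 0 space 0 pagesize 0 feed off head off trimspool on
spool oradata.txt
select col1 || ',' || col2 || ',' || col3
from   tab1
where  col2 = 'XYZ';
spool off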
Can one load variable and fixed length data records?
Yes, look at the following control file examples. In the first we will load delimited data (variable length):
LOAD DATA
INFILE *
INTO TABLE load_delimited_data
FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
( data1,
data2
)
BEGINDATA
11111,AAAAAAAAAA
22222,"A,B,C,D,"
If you need to load positional data (fixed length), look at the following control file example:
LOAD DATA
INFILE *
INTO TABLE load_positional_data
( data1 POSITION(1:5),
data2 POSITION(6:15)
)
BEGINDATA
11111AAAAAAAAAA
22222BBBBBBBBBB
Can one skip header records while loading?
Use the SKIP=n keyword (on the command line or in an OPTIONS clause), where n = number of logical
records to skip. Look at this example:
OPTIONS (SKIP=5)
LOAD DATA
INFILE *
INTO TABLE load_positional_data
( data1 POSITION(1:5),
  data2 POSITION(6:15)
)
BEGINDATA
11111AAAAAAAAAA
22222BBBBBBBBBB
Can one modify data as it loads into the database?
Yes, look at the following examples:
LOAD DATA
INFILE *
INTO TABLE modified_data
( rec_no      "my_db_sequence.nextval",
  region      CONSTANT '31',
  time_loaded "to_char(SYSDATE, 'HH24:MI')",
  data1       POSITION(1:5)   ":data1/100",
  data2       POSITION(6:15)  "upper(:data2)",
  data3       POSITION(16:22) "to_date(:data3, 'YYMMDD')"
)
BEGINDATA
11111AAAAAAAAAA991201
22222BBBBBBBBBB990112
LOAD DATA
INFILE 'mail_orders.txt'
BADFILE 'bad_orders.txt'
APPEND
INTO TABLE mailing_list
FIELDS TERMINATED BY ","
( addr,
  city,
  state,
  zipcode,
  mailing_addr  "decode(:mailing_addr, null, :addr, :mailing_addr)",
  mailing_city  "decode(:mailing_city, null, :city, :mailing_city)",
  mailing_state
)
Can one load data into multiple tables at once?
Look at the following control file:
LOAD DATA
INFILE *
REPLACE
INTO TABLE emp
WHEN empno != ' '
( empno POSITION(1:4)  INTEGER EXTERNAL,
  ename POSITION(6:15) CHAR
)
INTO TABLE proj
WHEN projno != ' '
( projno POSITION(25:27) INTEGER EXTERNAL,
  empno  POSITION(1:4)   INTEGER EXTERNAL
)
Can one selectively load only the records that one needs?
Look at this example, (01) is the first character, (30:37) are characters 30 to 37:
LOAD DATA
INFILE 'mydata.dat' BADFILE 'mydata.bad' DISCARDFILE 'mydata.dis'
APPEND
INTO TABLE my_selective_table
WHEN (01) <> 'H' and (01) <> 'T' and (30:37) = '19991217'
( region      CONSTANT '31',
  service_key POSITION(01:11) INTEGER EXTERNAL,
  call_b_no   POSITION(12:29) CHAR
)
Can one skip certain columns while loading data?
One cannot use POSITION(x:y) with delimited data. Luckily, from Oracle 8i one can specify FILLER
columns. FILLER columns are used to skip columns/fields in the load file, ignoring fields that one does
not want. Look at this example:
LOAD DATA
TRUNCATE INTO TABLE T1
FIELDS TERMINATED BY ','
( field1,
field2 FILLER,
field3
)
How does one load multi-line records?
One can create one logical record from multiple physical records using one of the following two clauses:
CONCATENATE: use when SQL*Loader should combine the same number of physical records together
to form one logical record.
CONTINUEIF: use if a condition indicates that multiple records should be treated as one, e.g. by having
a '#' character in column 1 (see the sketch below).
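A minimal CONTINUEIF sketch, assuming a hypothetical table multi_line_data; a '#' in column 1 marks a
record that continues onto the next physical record:
LOAD DATA
INFILE *
CONTINUEIF THIS (1:1) = '#'
INTO TABLE multi_line_data
FIELDS TERMINATED BY ','
( data1,
  data2
)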
How can one get SQL*Loader to commit only at the end of the load file?
You cannot, but by setting the ROWS= parameter to a large value, committing can be reduced. Make
sure you have big rollback segments ready when you use a high value for ROWS=.
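For example, to commit only every 50000 rows (the control file name is illustrative):
sqlldr userid=scott/tiger control=emp.ctl rows=50000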
Can one improve the performance of SQL*Loader?
A very simple but easily overlooked hint: do not have any indexes and/or constraints (primary key) on
your load tables during the load process. Leaving them in place will significantly slow down load times,
even with ROWS= set to a high value.
Add the following option to the command line: DIRECT=TRUE. This will effectively bypass most of the
RDBMS processing. However, there are cases when you can't use direct load. Refer to Chapter 8 of the
Oracle Server Utilities manual.
Turn off database logging by specifying the UNRECOVERABLE option. This option can only be used with
direct data loads.
Run multiple load jobs concurrently.
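Putting these together, a direct, unlogged load might look like this (file and table names are
illustrative). On the command line:
sqlldr userid=scott/tiger control=emp.ctl direct=true
And at the top of emp.ctl:
UNRECOVERABLE
LOAD DATA
INFILE 'emp.dat'
APPEND
INTO TABLE emp
FIELDS TERMINATED BY ','
( empno, ename )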
What is the difference between the conventional and direct path loader?
The conventional path loader essentially loads the data by using standard INSERT statements. The direct
path loader bypasses much of the logic involved with that and loads directly into the Oracle data files.