DOAG 2021

Deep Dive into Oracle Data Pump Internals

Roy F. Swonger
Vice President, Database Upgrade, Utilities & Patching, Oracle
@RoyFSwonger | royfswonger

William (Bill) Beauregard
Senior Principal Product Manager, Data Pump and Utilities, Oracle
william-beauregard-3053791
Data Pump Internals: Topics
• How Data Pump works
• Parallel Operations
• Data Pump Data Movement Methods
• Oracle Data Pump CONTROL Table
• Marker Objects and Excluding Statistics
• Oracle Data Pump Export & Import Restart
• Include and Exclude Objects in the Same Job
• Helpful Interactive Commands
• Troubleshooting Oracle Data Pump Jobs Using Logfile Diagnostics and the TRACE Parameter
How Data Pump Works

Copyright © 2021, Oracle and/or its affiliates


How Data Pump Works
3 main User Components
• The Command-line clients, expdp and impdp

• The DBMS_DATAPUMP PL/SQL package, the Data Pump API

• The DBMS_METADATA PL/SQL package, the Metadata API



How Data Pump Works
The Clients
• expdp and impdp invoke the Export and Import utilities, respectively
• A shadow process is created when the client logs in to the database
• The shadow process calls the DBMS_DATAPUMP.OPEN API, which creates the control table and the control process
• The clients use the Data Pump API to execute export and import commands
• DBMS_METADATA is used for extracting, manipulating, & re-creating dictionary metadata

Note: DBMS_DATAPUMP & DBMS_METADATA can be used independently of the clients



Object Paths
How is metadata referred to?
An “Object Path” describes a particular category of metadata, such as
• TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
• SCHEMA_EXPORT/SYSTEM_GRANT
• DATABASE_EXPORT/AUDIT_UNIFIED/AUDIT_POLICY

Used in INCLUDE, EXCLUDE, and internally when distributing work to processes

See these views for the full list of available paths:
• <TABLE|SCHEMA|DATABASE>_EXPORT_OBJECTS



Data Pump: Component Block Diagram



Data Pump Parallel Operations



Data Pump: 2 kinds of Parallelism for Dumpfiles
Overview
• Inter-partition parallelism
  • For tables with partitions below a particular size threshold
  • One connection and worker per partition or subpartition
• Parallel query (PQ) intra-partition parallelism
  • Invoked for larger tables or partitions
  • The worker process acts as the query coordinator and is not included in the DOP limit



Data Pump: Parallel Operations
Overview
• A job starts w/ a minimum of 2 processes: a Control Process (CP) + 1 Worker

• The user can specify a Degree of Parallelism (DOP) for the Data Pump job
• DOP = maximum number of ACTIVE parallel processes (Active workers + PQ processes)
• DOP does not include the CP or shadow process
• Any additional workers are idle and wait for work
• Data Pump will only start the number of threads needed to complete the task

• Control Process: Verifies parameters & job description, controls & assigns work items to
workers



Data Pump 19c | Set Max Data Pump Jobs & Parallelism
Control the maximum number of jobs and degree of parallelism to limit resource consumption
• MAX_DATAPUMP_JOBS_PER_PDB database parameter (changed in 19c)
  • Default: 100; Range: 0 to 250, or AUTO = 50% of SESSIONS
  • Must be the same for each RAC instance; dynamic and modifiable per PDB
• MAX_DATAPUMP_PARALLEL_PER_JOB database parameter (new in 19c)
  • Default: 50; Range: 1 to 250, or AUTO = 25% of SESSIONS
  • Can be different for each RAC instance; dynamic and modifiable per PDB
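As a quick sanity check, the AUTO settings resolve to fixed percentages of the SESSIONS initialization parameter. A minimal shell sketch, assuming a hypothetical SESSIONS value of 472:

```shell
# Hedged sketch: what AUTO resolves to for a hypothetical SESSIONS=472
# (AUTO is 50% of SESSIONS for jobs per PDB, 25% for parallel per job).
SESSIONS=472
echo "MAX_DATAPUMP_JOBS_PER_PDB=$(( SESSIONS / 2 ))"      # prints ...=236
echo "MAX_DATAPUMP_PARALLEL_PER_JOB=$(( SESSIONS / 4 ))"  # prints ...=118
```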

Documentation: Database Utilities Guide



Data Pump: Parallel Worker Processes
What do the parallel worker processes do?
            Export                                    Import
Metadata    Collects object metadata w/ the           Converts metadata from XML to DDL &
            DBMS_METADATA API; writes metadata        PL/SQL w/ the DBMS_METADATA API
            to a dumpfile as XML
Data        Uses the Data Layer to unload data        Uses the Data Layer to load data

• Network-mode import uses the DBMS_METADATA API to convert source database metadata for execution, and moves data to the target



Data Pump: TABLE_DATA & Metadata Units of Work
How is work assigned to parallel Worker processes?
• TABLE_DATA work unit is:
• Subpartition for subpartitioned tables
• Partition for partitioned tables
• Table for non-partitioned tables
• Specify all data for the table as one work unit, regardless of partitioning w/
DATA_OPTIONS=GROUP_PARTITION_TABLE_DATA

• The metadata work unit is a single object path



Data Pump Parallel Export



Data Pump: How Export works
Overview
• Export (and network mode import) work in multiple phases

• Metadata
• Each object path is a metadata work unit
• Workers begin collecting metadata for export
• Metadata unloaded in any order with parallel worker processes

• Data
• Metadata for TABLE_DATA items contains a size estimate that drives the order of export
• Data work items are scheduled as worker processes become available

Pro Tip: current dictionary and object statistics are crucial to Data Pump export performance!



Data Pump | Parallel Metadata Export Comparison
Metadata export happens concurrent w/ estimate phase for table data
Most metadata & data objects are exported in parallel when PARALLEL=2 or greater

Pre-12.2:
• Starts with an ESTIMATE phase
• Gathers table data objects
• Other workers remain idle until data objects are gathered
• Metadata exported serially
• Data exported in parallel

Since 12.2:
• Export collects information as it progresses
• Determines the number of workers for metadata export
• Reserves workers for data export & begins metadata export
• Simultaneously begins estimating the data export



Data Pump 12.2 | Parallel Metadata Export
Logfile Differences

(Side-by-side logfile screenshots, 12.1.0.2 vs. 12.2.0.1, not reproduced here)



Data Pump | Parallel Metadata Export
Details
• Source database must be 12.2 or higher
• ESTIMATE phase uses STATISTICS only (pre-12.2 it counted blocks)
• Restart works as always
• Database metadata objects that have dependencies are exported serially
  • Types (due to inheritance), schemas, and procedural actions




Data Pump | Export Best Practices
Details
1. Collect statistics using the dbms_stats package: gather_table_stats,
gather_schema_stats, gather_database_stats, gather_dictionary_stats
2. Convert LOBs to SecureFiles LOBs
3. Include wildcard characters in the dumpfile filename specification
• %U for 1 to 99 dumpfiles (2-digit, fixed-width, incrementing integer)
• %L for 1 to more than 99 dumpfiles (3-digit to 10-digit, variable-width integer)
• Always use wildcards when exporting with PARALLEL
4. Specify maximum dumpfile size using FILESIZE
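As a quick illustration (shell only, no Oracle involved) of the 2-digit, fixed-width sequence that a hypothetical DUMPFILE=exp_%U.dmp specification cycles through:

```shell
# Illustration only: the fixed-width names %U substitutes, from
# exp_01.dmp up to exp_99.dmp (use %L when more files are needed).
for i in 1 2 99; do
  printf 'exp_%02u.dmp\n' "$i"   # prints exp_01.dmp, exp_02.dmp, exp_99.dmp
done
```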




Data Pump | Export Example
Processes started for a large data export using PARALLEL = 6

PROCESS   ACTIVITY                                    PARALLELISM
CP        Control process is not parallelized         NA
Worker1   Unload metadata                             1 parallel process
Worker2   Unload subpartition user1:tab1:subpart1     2 degrees of parallelism
          • Worker2_PX1: Unload data                  • 1 parallel PX process
          • Worker2_PX2: Unload data                  • 1 parallel PX process
Worker3   Unload subpartition user1:tab1:subpart2     1 degree of parallelism
Worker4   Unload subpartition user1:tab1:subpart3     2 degrees of parallelism
          • Worker4_PX1: Unload data                  • 1 parallel PX process
          • Worker4_PX2: Unload data                  • 1 parallel PX process
Data Pump Parallel Import



Data Pump: How Import works

• No estimate phase
• First loads the metadata, mostly in parallel
• Builds multiple indexes simultaneously w/ data pump workers
• Individual indexes are built serially.
• Can be a performance issue for indexes on very large tables
• Loads package bodies in parallel using multiple workers
• Loads data in parallel using multiple workers and multiple PQ processes.



Data Pump | Parallel Metadata Import
Pre-12.2:
• One worker per partition/subpartition
• PQ used if partitions are large enough
• Package bodies loaded in parallel (with patch for bug 22273229)
• Indexes built in parallel
• Constraints created in parallel
• Backport to 12.1.0.2, 11.2.0.4

Since 12.2:
• Added parallel import of most other metadata objects
• Some exceptions:
  • Types (due to inheritance)
  • Schemas
  • Procedural actions



Data Pump | Parallel Metadata Import
Internals
• Metadata is exported in XML documents into the dumpfile
• Each XML document contains N objects of a given type
• One XML document is allocated to a worker at a time
• Example: 161 USER objects to import
  • 2 workers each import 80 users per XML document
  • 1 worker imports 1 user; other workers may be idle
• Notes:
  • Works for conventional (dumpfile) jobs (& TTS since 21c)
  • Restart works the same as always
  • The STATUS command will show multiple workers on metadata

(Diagram: XML Doc 1 = User1..User80 for Worker 1; XML Doc 2 = User81..User160 for Worker 2; a third document = User161 for Worker 3)

Pro Tip: export and import parallelism do not have to match
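The 161-user example is plain ceiling division; a quick shell sketch:

```shell
# Back-of-envelope check of the example above: 161 USER objects packed
# into XML documents holding 80 objects each need 3 documents.
OBJECTS=161
PER_DOC=80
DOCS=$(( (OBJECTS + PER_DOC - 1) / PER_DOC ))   # ceiling division
echo "$DOCS documents"   # prints: 3 documents
```

Two documents are full (80 users each) and the third holds the single remaining user, which is why one worker can sit nearly idle.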
Data Pump | Parallel Metadata Import
Logfile
• Comparison with PARALLEL=8 for 27,586 object grants and METRICS=Y
  • 12.1.0.2 and 12.2.0.1 logfile excerpts (screenshots not reproduced here)



Data Pump | Parallel Metadata Import
Performance
• Examples from an E-Business Suite test database (import times in seconds; chart not reproduced here)



Data Pump Data Movement Methods



Data Pump | Data Movement Methods
Overview
• 3 most common approaches:
  • Direct Path
  • External Table
  • Insert as Select over a network link
• Conventional access method used in some import edge cases
• Syntax (defaults to AUTOMATIC):
  ACCESS_METHOD=[AUTOMATIC | DIRECT_PATH | EXTERNAL_TABLE]




Data Pump | Data Movement Methods
Internals
• The external table method does not support moving LONG or LONG RAW columns
• Direct path is not allowed when importing into pre-existing partitioned tables
• The CP chooses the optimal method based on table size, job size & DOP
  • One large table is faster w/ external tables, PX processes & the maximum parallel value
  • Many smaller tables are usually faster with Direct Path
• Desirable to export the larger tables first, especially if the parallel value > 1

Pro Tip: current table stats will help Data Pump make the right choice!



Data Pump | Data Movement Methods
What if you choose the wrong method?
Example: table with a LONG RAW column
impdp system/manager tables=scott.foo_long directory=dpump_dir \
dumpfile=s.dmp access_method=external_table


Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31696: unable to export/import TABLE_DATA:"SCOTT"."FOO_LONG" using client
specified EXTERNAL_TABLE method



Data Pump CONTROL Table



Data Pump | Control Table
Used to keep track of Data Pump job progress
• Record of job parameters, current status, object information, and an index into the dump files
• Can be accessed from SQL*Plus
• Upon successful job completion:
  • The control table is deleted by default; use KEEP_MASTER=Y to retain it
• The table is written to the last dump file(s) in the set (depends on size of table & file size)
• It is the first object imported, to determine what was exported & what to import
• The table includes rows describing objects, type completions, and job info



Data Pump | Control Table
Benefits
• Can be queried for job status
• Information in the table accelerates importing a subset of the objects
• Allows job restart if stopped or encounters fatal error, such as insufficient disk space



Data Pump Marker Objects and Excluding Statistics



Data Pump | Marker Objects
Purpose
• Used by Export & Import to order (sequence) actions for other object types
• Indicate a point during the job where some action is required
• Cannot be an excluded object type because they are used for several internal purposes
• Examples:
  • Mark the dependency that package bodies have on package specifications
  • If statistics are included in the export, a marker is set to trigger a call to the DBMS_STATS package to import statistics on the imported objects
  • If EXCLUDE=STATISTICS is specified on export, all objects in an object path for statistics are excluded, including the STATISTICS marker



Data Pump Export Restart



Data Pump | Export Restart
Internals
• An export job can usually be restarted, provided the estimate phase has completed
• Requires the CONTROL TABLE
• Workers create & update “type completion” rows for each object type w/ start &
completion time stamps
• During a restart, a worker process looks for rows with no completion time.
• In the table below, "USER" is incomplete, so the Worker:
• Removes user objects from dumpfile
• Configures the Metadata API to start after TABLESPACE

OBJECT_TYPE   START_TIME            COMPLETION_TIME
TABLESPACE    12-SEP-2021:9:04.01   12-SEP-2021:9:05.23
USER          12-SEP-2021:9:05.27
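A toy sketch of that lookup (illustrative only, not Data Pump's actual SQL against the control table): filter the type-completion rows for an empty completion-time field:

```shell
# Illustrative only: select type-completion rows whose COMPLETION_TIME
# field (the third pipe-delimited field) is empty, i.e. the types to redo.
printf '%s\n' \
  'TABLESPACE|12-SEP-2021:9:04.01|12-SEP-2021:9:05.23' \
  'USER|12-SEP-2021:9:05.27|' |
awk -F'|' '$3 == "" { print $1 }'   # prints: USER
```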



Data Pump Import Restart



Data Pump | Import Restart
Internals
• Import job can be restarted using the CONTROL TABLE
• Workers check object rows’ state & status fields to detect what’s been imported
OBJECT   OBJECT_SCHEMA   OBJECT_NAME   PROCESSING_STATE   PROCESSING_STATUS
TABLE    SCOTT           FOO           W                  C
TABLE    SCOTT           BAR           U                  C
INDEX    SCOTT           FOO_IND       R                  C
INDEX    SCOTT           BAR_IND       R                  C

R = objects were Retrieved (exported)
W = objects are Written (imported)
U = objects are Unknown (import started but did not finish)
C = objects are Current (successfully imported)
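As an illustration only (this helper is hypothetical, not part of Data Pump), the state codes can be spelled out like this:

```shell
# Hypothetical helper that spells out the PROCESSING_STATE codes from
# the table above; 'W' rows are done, the rest still need work on restart.
state_name() {
  case "$1" in
    R) echo 'Retrieved (exported)' ;;
    W) echo 'Written (imported)' ;;
    U) echo 'Unknown (import started but did not finish)' ;;
    *) echo 'unrecognized state' ;;
  esac
}
state_name U   # prints: Unknown (import started but did not finish)
```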



Data Pump Include and Exclude Objects



Data Pump 21c | Include & Exclude objects in Same Job
Be more specific about job contents
• Include and exclude objects within the same export or import job
• Example:
expdp hr DIRECTORY=dpump_dir1 DUMPFILE=exp%u.dmp SCHEMAS=hr,oe
include=table exclude=statistics
• Details:
  • INCLUDE parameter(s) are processed first, selecting all objects identified by the parameter
  • EXCLUDE parameter(s) are processed next, removing any matching objects from the include list
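A toy simulation of that ordering (the object paths and grep filters are illustrative, not Data Pump's implementation):

```shell
# INCLUDE=table runs first (keep only table object paths), then
# EXCLUDE=statistics prunes statistics paths from that result.
printf '%s\n' \
  'SCHEMA_EXPORT/TABLE/TABLE' \
  'SCHEMA_EXPORT/TABLE/INDEX/INDEX' \
  'SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS' \
  'SCHEMA_EXPORT/VIEW/VIEW' |
  grep '/TABLE/' | grep -v 'STATISTICS'
```

The VIEW path never enters the include set, and the statistics path is removed from it, leaving only the TABLE and INDEX paths.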




Data Pump 21c | Include & Exclude objects in Same Job
Example
include=table
exclude=statistics

Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 448 KB
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/PRE_TABLE_ACTION
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/POST_TABLE_ACTION



Data Pump Helpful Interactive Commands



Data Pump | Helpful interactive commands
Type ^c while the job is running to interact with the job via the interactive CLI
• Get the job and worker status
• Includes Operation, Mode, State, Percent Done, and Current Parallelism
Import> STATUS
• Change the parallelism for current job
• Increases almost immediately. Decreases after the current assignment is complete
Import> PARALLEL=n
• Add tracing to a running job
  • Useful if a job seems to be hung or performing slowly
Import> trace=XXXXXX




Troubleshooting Jobs Using Logfile Diagnostics



Data Pump | Logfile Diagnostics
Log the number of objects exported and elapsed time
• Add diagnostics to your logfile:
expdp system/manager full=y directory=dpump_dir dumpfile=full.dmp \
  metrics=yes logtime=all
• Without METRICS=Y:
Processing object type DATABASE_EXPORT/FINAL_POST_INSTANCE_IMPCALLOUT/MARKER
Processing object type DATABASE_EXPORT/AUDIT_UNIFIED/AUDIT_POLICY_ENABLE
Processing object type DATABASE_EXPORT/POST_SYSTEM_IMPCALLOUT/MARKER
. . exported "SYS"."KU$_USER_MAPPING_VIEW" 5.890 KB 25 rows
. . exported "SYSTEM"."REDO_DB" 25.59 KB 1 rows

• With METRICS=Y:
DATABASE_EXPORT/FINAL_POST_INSTANCE_IMPCALLOUT/MARKER
02-NOV-21 19:43:56.064: W-1 Completed 1 MARKER objects in 0 seconds
02-NOV-21 19:43:59.171: W-1 Processing object type DATABASE_EXPORT/AUDIT_UNIFIED/AUDIT_POLICY_ENABLE
02-NOV-21 19:43:59.195: W-1 Completed 2 AUDIT_POLICY_ENABLE objects in 0 seconds
02-NOV-21 19:43:59.380: W-1 Processing object type DATABASE_EXPORT/POST_SYSTEM_IMPCALLOUT/MARKER
02-NOV-21 19:43:59.387: W-1 Completed 1 MARKER objects in 0 seconds
02-NOV-21 19:43:59.830: W-1 . . exported "SYS"."KU$_USER_MAPPING_VIEW" 5.890 KB 25 rows in 0 seconds using external_table
02-NOV-21 19:43:59.923: W-1 . . exported "SYSTEM"."REDO_DB" 25.59 KB 1 rows in 0 seconds using direct_path



Troubleshooting Jobs Using the TRACE Parameter



Data Pump | TRACE
How to create a Data Pump trace file
• Requires a privileged user: DBA role or EXP_FULL_DATABASE / IMP_FULL_DATABASE role
• Ensure MAX_DUMP_FILE_SIZE is large enough to capture the trace (default: unlimited)
• Use the parameter: TRACE=<8-digit hexadecimal mask>
  • First three digits specify a Data Pump component (case insensitive)
  • Last four digits are usually 0300
  • The leading zero can be omitted

• MOS Note:
Export/Import DataPump Parameter TRACE How to Diagnose Oracle Data Pump [ID
286496.1]



Data Pump | TRACE Bit Masks
Bit Mask for tracing each component
Trace DM DW ORA Lines
level trc trc trc in
(hex) file file file trace Purpose
------- ---- ---- ---- ------ -----------------------------------------------
10300 x x x SHDW: To trace the Shadow process (API) (expdp/impdp)
20300 x x x KUPV: To trace Fixed table
40300 x x x 'div' To trace Process services
80300 x KUPM: To trace Master Control Process (MCP) (DM)
100300 x x KUPF: To trace File Manager
200300 x x x KUPC: To trace Queue services
400300 x KUPW: To trace Worker process(es) (DW)
800300 x KUPD: To trace Data Package
1000300 x META: To trace Metadata Package
2000300 x PLTS: To trace Pluggable Tablespaces
4000300 x DPEX: To trace Callouts
8000300 x META: To trace Utilities
10000300 x DPEX: To trace extensibility (procedural actions)
------- 'Bit AND'
1FFF0300 x x x 'all' To trace all components (full tracing)
---------
See: Export/Import DataPump Parameter TRACE - How to Diagnose Oracle Data Pump (Doc ID 286496.1)



Data Pump | TRACE
Start by tracing the Control Process and the Worker Process(es)
expdp <LOGIN>/<PASSWORD> DIRECTORY=<DIRECTORY_NAME> \
  DUMPFILE=<DUMP_NAME>.dmp LOGFILE=<LOG_NAME>.log SCHEMAS=<SCHEMA_NAME> \
  TRACE=480300
• Combination ('Bit AND' in the MOS note) of the Control Process (80300) & Worker (400300) component masks

Creates 2 trace files:


• <SID>_dm<number>_<process_id>.trc - Control process trace
• <SID>_dw<number>_<process_id>.trc - Worker trace file (one for each worker)

Files are written to the directories specified by the init.ora/spfile initialization parameters
BACKGROUND_DUMP_DEST and USER_DUMP_DEST or in <ADR_HOME>/trace
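The mask arithmetic can be reproduced in the shell; combining component masks is a bitwise OR of the hex values (the MOS note labels the operation 'Bit AND'):

```shell
# Combining trace masks is a bitwise OR of the hex component values.
CP=0x80300     # KUPM: Control (Master) process
DW=0x400300    # KUPW: Worker process(es)
printf 'TRACE=%X\n' $(( CP | DW ))   # prints: TRACE=480300

# OR-ing every component mask from the table reproduces the 'all' value:
ALL=0
for m in 10300 20300 40300 80300 100300 200300 400300 800300 \
         1000300 2000300 4000300 8000300 10000300; do
  ALL=$(( ALL | 0x$m ))
done
printf 'ALL=%X\n' "$ALL"             # prints: ALL=1FFF0300
```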



Data Pump | TRACE
Tracing ALL components (if Oracle Support requests this)

impdp <LOGIN>/<PASSWORD> DIRECTORY=<DIRECTORY_NAME> \
  DUMPFILE=<DUMP_NAME>.dmp LOGFILE=<LOG_NAME>.log FULL=y \
  TRACE=1fff0300

Creates 2 trace files


• <SID>_dm<number>_<process_id>.trc - Control process trace
• <SID>_dw<number>_<process_id>.trc - Worker trace file

Also creates 1 trace file in USER_DUMP_DEST:


• <SID>_ora_<process_id>.trc - Shadow Process trace file



Data Pump | How to start tracing the Data Pump job
Use the TRACE parameter in the job command

expdp <LOGIN>/<PASSWORD> DIRECTORY=<DIRECTORY_NAME> \
  DUMPFILE=<DUMP_NAME>.dmp LOGFILE=<LOG_NAME>.log \
  TABLES=<SCHEMA_NAME>.<TABLE_NAME> \
  TRACE=480300



Data Pump | How to start tracing the Data Pump job
Add tracing with an interactive command
Type ^c while the job is running to interact with the job via the interactive CLI
• Restart the job and add tracing
  • Useful if a job seems to be hung or performing slowly
  • Can be changed in the middle of a job

impdp user/password attach=user.imp_job_1 trace=xxxxxx


Import> start_job [=SKIP_CURRENT=YES]



Data Pump | How to start tracing the Data Pump job
Use a database init.ora/spfile event
• Add the following line to init.ora parameter file

EVENT="39089 trace name context forever, level 0x300"


• Restart the database
• Start the Export Data Pump or Import Data Pump job

Event 39089 can also be enabled and disabled at system level

ALTER SYSTEM SET EVENTS = '39089 trace name context forever, level 0x300' ;
ALTER SYSTEM SET EVENTS = '39089 trace name context off' ;



Data Pump | SQL Tracing
Create a SQL trace file for an Export Data Pump or Import Data Pump job
• A SQL trace file is created for a specific process (usually the Worker process)
• Ensure the directory set by the init.ora/spfile parameter BACKGROUND_DUMP_DEST has enough free space
• Event Levels
• event 10046, level 1 = enable standard SQL_TRACE functionality
• event 10046, level 4 = as level 1, plus trace the BIND values
• event 10046, level 8 = as level 1, plus trace the WAITs
• event 10046, level 12 = as level 1, plus trace the BIND values and the WAITs
A level 8 or 12 SQL trace file requires init.ora/spfile parameter TIMED_STATISTICS set to TRUE before the
event is set and before the Data Pump job is started




Data Pump | SQL Tracing
Create a SQL trace file for an Export Data Pump or Import Data Pump job
Example:
• Trace Worker process (400300) with standard SQL_TRACE functionality (1)
expdp <LOGIN>/<PASSWORD> DIRECTORY=<DIRECTORY_NAME> DUMPFILE=<DUMP_NAME>.dmp \
  LOGFILE=<LOG_NAME>.log TABLES=<SCHEMA_NAME>.<TABLE_NAME> \
  TRACE=400301



Recorded Web Seminars

https://MikeDietrichDE.com/videos/

https://dohdatabase.com/webinars/



YouTube | Oracle Database Upgrades and Migrations

https://MikeDietrichDE.com
https://DOHdatabase.com

Oracle LiveLabs hands-on workshops
https://livelabs.oracle.com



Thank you!
