GoldenGate POC
Introduction
Architecture and Supported technologies
Server Architecture
Installation of GoldenGate
Configuring GoldenGate for Uni-Directional Setup
Starting the Server Process and Verifying Replication
Introduction
Oracle GoldenGate enables the exchange and manipulation of data at the transaction
level among multiple, heterogeneous platforms across the enterprise.
Its modular architecture gives you the flexibility to extract and replicate selected data
records, transactional changes, and changes to DDL (data definition language) across a
variety of topologies.
With this flexibility, and the filtering, transformation, and custom processing features of
Oracle GoldenGate, you can support numerous business requirements:
Business continuance and high availability.
Initial load and database migration.
Data integration.
Decision support and data warehousing.
Oracle GoldenGate Architecture and Supported
Technologies
Modular Architecture:
MANAGER: The Manager is the process that starts the other GoldenGate processes.
It must be running on both the source and target systems before any of the other
GoldenGate processes can be configured or started. The Manager process also manages
disk space by purging old trail files. Only one Manager process is required per
GoldenGate installation.
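As a sketch, a minimal Manager parameter file might look like the following. The port matches the mgrport used later in this POC; the purge rule is an assumption added for illustration:

```text
-- mgr.prm: Manager parameter file (purge settings are illustrative)
PORT 7809
-- Let Manager delete trail files once all readers have checkpointed past them
PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS, MINKEEPDAYS 3
```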
EXTRACT: The Extract process is responsible for capturing committed DML
transactions and DDL from the Oracle redo logs. Extract then writes these data changes
into trail or extract files.
DATA PUMP: The Data Pump, which is also an Extract process, is optional in a
GoldenGate setup. This process copies the trail files containing the data changes to the
target system.
REPLICAT: The Replicat process is the apply process in the GoldenGate configuration.
It runs at the end of the data delivery chain, on the target database, where it reads the
destination trail files and applies the data changes to the target system.
TRAIL OR EXTRACT FILES: The Extract process on the source database creates trail
files for consumption by the Data Pump process (for transfer to the remote database) or
by a local Replicat running against the source database.
CHECKPOINT: The Extract, Data Pump, and Replicat processes use checkpoints to track
their progress. A checkpoint marks the position in the trail files up to which data
changes have been retrieved or applied. This allows a process to recover without data
loss and to know its starting point after a failure.
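As a sketch, the checkpoint positions of the groups configured later in this POC can be inspected from GGSCI with the SHOWCH option:

```text
GGSCI> info extract ext2, showch
GGSCI> info replicat rep2, showch
```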
COLLECTOR: The Collector process runs on the target system and writes the data
changes received from the source database into the target trail files, known as
RMTTRAIL. Before writing to the RMTTRAIL, it reassembles the transmitted data.
Supported Technologies:
Uni-Directional: In this setup the data is replicated in one direction from source to target.
Bi-Directional: The data flows in both directions and stays synced up between site A and site B.
Peer-To-Peer: Similar to Bi-Directional, but involves more than two databases which stay
synced up.
Consolidation: Here data from multiple sources is delivered to one destination database.
Source Node:
Target Node:
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
alias ggsci=/oracle/ggsource/ggsci
Confirming the Database Ping:
Source to Target:
Target to Source:
Installation screenshots:
Preparing the Database for GoldenGate (Uni-Directional
with DML sync)
Configuration steps:
On the source database:

CREATE USER sadm1 IDENTIFIED BY sadm1;
EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE (grantee=>'SADM1', privilege_type=>'capture', grant_select_privileges=>true, do_grants=>TRUE);
COMMIT;

On the target database:

CREATE USER tadm1 IDENTIFIED BY tadm1;
EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE (grantee=>'TADM1', privilege_type=>'capture', grant_select_privileges=>true, do_grants=>TRUE);
COMMIT;
Create Checkpoint Table
Edit Param ./GLOBALS (or) vi GLOBALS (note: Case Sensitive)
Configure Primary Extract
extract ext2
userid sadm1, password sadm1
exttrail ./dirdat/sr
ddl include mapped objname suser1.*;
TRANLOGOPTIONS DBLOGREADER
table suser1.*;
sequence suser1.*;
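The parameter file above describes the Extract, but the group and its trail still need to be registered with GGSCI before they can be started. A sketch, assuming the group name ext2 and trail prefix ./dirdat/sr from the parameter file:

```text
GGSCI> dblogin userid sadm1, password sadm1
GGSCI> add extract ext2, tranlog, begin now
GGSCI> add exttrail ./dirdat/sr, extract ext2
```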
Configure Secondary Extract (Data Pump)
GGSCI (amsods0004) 4> view param pump2
extract pump2
userid sadm1, password sadm1
rmthost amsods0005, mgrport 7809, compress
rmttrail ./dirdat/sr
passthru
table suser1.*;
sequence suser1.*;
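As with the primary Extract, the pump and its remote trail would typically be registered in GGSCI before starting; a sketch using the names from the parameter file above:

```text
GGSCI> add extract pump2, exttrailsource ./dirdat/sr
GGSCI> add rmttrail ./dirdat/sr, extract pump2
```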
Configure Replicat
GGSCI (amsods0005) 1> edit param rep2
REPLICAT rep2
userid tadm1 ,password tadm1
discardfile ./discard/rep1_discard.txt, append, megabytes 5
DBOPTIONS DEFERREFCONST
handlecollisions
DDL include all
DDLERROR DEFAULT IGNORE
map suser1.*, target tuser1.*;
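On the target, the Replicat group is registered against the remote trail; a sketch (the checkpoint table name tadm1.ggs_checkpoint is an assumption and should match the CHECKPOINTTABLE entry in GLOBALS):

```text
GGSCI> dblogin userid tadm1, password tadm1
GGSCI> add checkpointtable tadm1.ggs_checkpoint
GGSCI> add replicat rep2, exttrail ./dirdat/sr, checkpointtable tadm1.ggs_checkpoint
```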
After starting the Extract and Replicat processes, "info all" on each node confirms that they are running:

Source:

GGSCI (amsods0004) 5> info all

Program     Status      Group       Lag at Chkpt    Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     EXT2        00:00:00        00:00:01
EXTRACT     RUNNING     PUMP2       00:00:00        00:00:01

Target:

GGSCI (amsods0005) 2> info all

Program     Status      Group       Lag at Chkpt    Time Since Chkpt

MANAGER     RUNNING
REPLICAT    RUNNING     REP2        00:00:00        00:00:08
Start Extract:
Start Pump:
Start Replicat:
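The labels above correspond to the GGSCI start commands; a sketch using the group names from this POC:

```text
On the source node:
GGSCI> start manager
GGSCI> start extract ext2
GGSCI> start extract pump2

On the target node:
GGSCI> start manager
GGSCI> start replicat rep2
```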
Worked out examples:
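A minimal way to exercise the DDL and DML replication configured above; the table and data here are hypothetical, and the row should appear on the target once Replicat applies it:

```text
-- On the source (SQL*Plus as suser1)
SQL> CREATE TABLE t1 (id NUMBER PRIMARY KEY, name VARCHAR2(30));
SQL> INSERT INTO t1 VALUES (1, 'first row');
SQL> COMMIT;

-- On the target (SQL*Plus as tuser1), after a short lag
SQL> SELECT * FROM t1;
```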