Amazon RDS Migration Tool
Version 1.2.0
June 2015
Amazon RDS Migration Tool User Guide and Reference, Version 1.2.0
Copyright © 2015, Amazon Web Services, Inc. or its affiliates. All rights reserved.
The Programs (which include both the software and documentation) contain proprietary information; they
are provided under a license agreement containing restrictions on use and disclosure and are also protected
by copyright, patent, and other intellectual and industrial property laws. Reverse engineering, disassembly,
or decompilation of the Programs, except to the extent required to obtain interoperability with other
independently created software or as specified by law, is prohibited.
The information contained in this document is subject to change without notice. If you find any problems in
the documentation, please report them to us in writing. This document is not warranted to be error-free.
Except as may be expressly permitted in your license agreement for these Programs, no part of these
Programs may be reproduced or transmitted in any form or by any means, electronic or mechanical, for any
purpose.
Your use of the Amazon RDS Migration Tool is subject to the terms of the existing AWS Customer
Agreement or other agreement with AWS governing your use of AWS services, and the Additional Terms
and Conditions for the Amazon RDS Migration Tool located at
https://s3-us-west-2.amazonaws.com/rdmt/RDSMigrationToolTermsConditions.pdf. If you do not agree to
the foregoing terms, you may not use the Amazon RDS Migration Tool.
Contents
Add a Replication Task....................................................................................................................... 2-5
Add a Replication Task to the Amazon RDS Migration Console ........................................... 2-6
Add the Source and Target Databases to the Task ..................................................................... 2-6
Select Tables for the Replication Task........................................................................................... 2-7
Run and Monitor the Replication Task ........................................................................................... 2-9
View the Replicated Tables in Microsoft SQL Server ............................................................... 2-10
Oracle Target Data Types ............................................................................................................ 5-24
Configuring an Oracle Database as an Amazon RDS Migration Tool Target ..................... 5-26
Provide Oracle Account Access ........................................................................................ 5-26
Setting up an Oracle Database as a Target in Amazon RDS Migration Tool....................... 5-26
Using Advanced Properties for an Oracle Target .......................................................... 5-29
Setting up a SAP Sybase ASE Database as a Target in Amazon RDS Migration Tool .......... 7-9
Using Advanced Properties for a SAP Sybase ASE Target........................................... 7-12
Amazon Redshift Data Types......................................................................................................... 10-5
Setting up Amazon Redshift as a Target in Amazon RDS Migration Tool........................... 10-7
Using Advanced Properties for an Amazon Redshift Target ................................................ 10-9
Viewing Specific Tasks.................................................................................................................... 13-4
Design Mode.................................................................................................................................. 13-4
Monitor Mode ............................................................................................................................... 13-6
The Server View................................................................................................................................ 13-8
Security Roles.................................................................................................................................... 13-9
Creating Active Directory Groups ............................................................................................. 13-9
Associating Multiple Groups with the Same Role ....................................................... 13-10
Security Roles and Console Elements...................................................................................... 13-10
14 Designing Tasks
Setting up Tasks................................................................................................................................ 14-1
Working with Databases ................................................................................................................. 14-2
Adding a Database ....................................................................................................................... 14-3
Editing Database Configuration Information........................................................................... 14-3
Viewing Database Configuration Information......................................................................... 14-3
Testing a Database Connection................................................................................................... 14-4
Adding a Source and Target Database to a Task......................................................................... 14-4
Adding Tables to a Task .................................................................................................................. 14-5
Search for Tables to use in a Replication Task.......................................................................... 14-6
Selecting Specific Tables for Replication ................................................................................... 14-7
Removing Specific Tables from a Replication Task................................................................. 14-7
Create Table Selection Patterns................................................................................................... 14-7
Selecting Schemas in a Disaster Recovery Task ......................................................................... 14-9
Editing a Replication Task ............................................................................................................ 14-10
Deleting a Replication Task.......................................................................................................... 14-10
Importing and Exporting Tasks ................................................................................................... 14-10
How to: Export and Import a Task .......................................................................................... 14-11
Editing an Exported (json) File ................................................................................................. 14-12
Making Changes to the Database Connection Information........................................ 14-12
15 Customizing Tasks
Table Settings .................................................................................................................................... 15-1
Carry out General Tasks for a Single Table .............................................................................. 15-2
Define Transformations on a Single Table ................................................................................ 15-3
Using the Transform Tab ................................................................................................... 15-4
Creating an Expression for Transforms ........................................................................... 15-6
Using SQLite Syntax with Transforms............................................................................. 15-6
Using Filters................................................................................................................................... 15-7
Limitations ........................................................................................................................... 15-7
Opening the Filter Table Window .................................................................................... 15-8
Creating a Filter Condition for a Specified Column ...................................................... 15-9
Creating a Record Selection Condition for One or More Columns ........................... 15-10
Using the Range Builder .................................................................................................. 15-11
Using SQLite Syntax with Filtering................................................................................ 15-12
Defining Global Transformations............................................................................................... 15-13
Starting the New Transformation Rule Wizard ..................................................................... 15-13
Selecting the Transformation Type .......................................................................................... 15-14
Defining What to Transform ..................................................................................................... 15-15
Defining the Transformation Rule ........................................................................................... 15-17
Rename Schema................................................................................................................. 15-18
Rename Table..................................................................................................................... 15-21
Rename Column................................................................................................................ 15-24
Add Column ...................................................................................................................... 15-26
Drop Column ..................................................................................................................... 15-27
Convert Data Type............................................................................................................ 15-27
Using the Global Transformation Rules List .......................................................................... 15-27
Edit a Global Transformation Rule................................................................................. 15-28
Delete a Global Transformation Rule............................................................................. 15-28
Editing Global Transformation Rules ...................................................................................... 15-28
Using the Expression Builder (for Filters, Transforms, and Global Transformations)..... 15-29
Overview of the Expression Builder ........................................................................................ 15-29
Build an Expression.................................................................................................................... 15-31
Evaluate an Expression .............................................................................................................. 15-32
Test an Expression ...................................................................................................................... 15-32
Using Elements in the Expression Builder.............................................................................. 15-33
Input Columns (Transforms and Filters only) .............................................................. 15-33
Metadata (Global Transformations only) ...................................................................... 15-34
Operators............................................................................................................................ 15-34
Functions ............................................................................................................................ 15-36
Header Columns ............................................................................................................... 15-45
Task Settings.................................................................................................................................... 15-45
Metadata....................................................................................................................................... 15-46
Target Metadata ................................................................................................................ 15-46
Control Tables.................................................................................................................... 15-46
Bidirectional................................................................................................................................. 15-47
Full Load ...................................................................................................................................... 15-47
Full Load Settings.............................................................................................................. 15-47
Full Load Tuning............................................................................................................... 15-50
Change Processing...................................................................................................................... 15-50
Apply Changes Settings ................................................................................................... 15-50
Changes Processing Tuning ............................................................................................ 15-51
Error Handling ............................................................................................................................ 15-52
Error Handling Settings ................................................................................................... 15-52
Environmental Errors ....................................................................................................... 15-53
Data Error........................................................................................................................... 15-53
Table Error.......................................................................................................................... 15-54
Apply Conflicts.................................................................................................................. 15-54
Logging ........................................................................................................................................ 15-56
Logging Level .................................................................................................................... 15-56
Log File ............................................................................................................................... 15-56
Using the Run Button Options.................................................................................................... 16-2
Start Replication .................................................................................................................. 16-2
Reload Target....................................................................................................................... 16-3
Use Advanced Run Options .............................................................................................. 16-3
Viewing the Task Status .................................................................................................................. 16-4
Reading Messages about a Task .................................................................................................... 16-5
Viewing Notifications .................................................................................................................. 16-5
Using the Notifications List ............................................................................................... 16-5
View Log Messages for a Task.................................................................................................... 16-6
Using the Log Messages List ............................................................................................. 16-6
Viewing the Log file in the Log Viewer ........................................................................... 16-7
Creating a Default Recipient List ............................................................................................. 18-13
License Settings .............................................................................................................................. 18-14
Requesting a License .................................................................................................................. 18-14
Using the Advanced License Request Option .............................................................. 18-15
Registering a License.................................................................................................................. 18-15
Viewing a License ....................................................................................................................... 18-16
Error Handling ................................................................................................................................ 18-16
Logging Settings (Server).............................................................................................................. 18-17
Set the Server Logging Level..................................................................................................... 18-17
Configure Log Files .................................................................................................................... 18-18
Schedule Maintenance Operations for Log Files .......................................................... 18-18
Carry out Log File Actions............................................................................................... 18-19
View a Server Log ............................................................................................................. 18-19
File Transfer Service....................................................................................................................... 18-20
Defining a File Transfer Service................................................................................................ 18-20
Editing a File Transfer Service .................................................................................................. 18-21
Deleting a File Transfer Service ................................................................................................ 18-21
Scheduling Jobs .............................................................................................................................. 18-23
A Error Handling
Error Types ......................................................................................................................................... 19-1
Error Handling Properties............................................................................................................... 19-1
Environmental Errors................................................................................................................... 19-2
Data Errors..................................................................................................................................... 19-2
Table Errors ................................................................................................................................... 19-4
Apply Errors .................................................................................................................................. 19-4
Fatal Errors..................................................................................................................................... 19-6
Abnormal Termination ................................................................................................................ 19-7
B Using Encryption
Setting up HTTPS with IIS............................................................................................................. 20-1
Setting up HTTPS without IIS....................................................................................................... 20-3
Creating and Installing a Self-Signed Server Authentication Certificate ............................. 20-3
Setting up the Self-Signed Certificate on the Server ...................................................... 20-3
Removing the Self-Signed Certificate from the Server .................................................. 20-4
Setting Up the Self-Signed Certificate on a Client.......................................................... 20-4
Changing the Server Password ...................................................................................................... 20-5
Define the Dependencies for Each Service................................................................................ 21-4
Enabling Different Console Configurations in a High Availability Environment.............. 21-4
Setting up Amazon RDS Migration Tool in a Linux Cluster .................................................. 21-5
D Control Tables
Apply Exceptions ............................................................................................................................. 22-1
Replication Status ............................................................................................................................ 22-1
Suspended Tables ............................................................................................................................ 22-2
Replication History .......................................................................................................................... 22-3
Glossary
1 Installing Amazon RDS Migration Tool
This chapter describes how to prepare your system for Amazon RDS
Migration Tool, how to install Amazon RDS Migration Tool, and how to access the
Amazon RDS Migration Console.
n Installation Prerequisites
n Installing Amazon RDS Migration Tool
n Accessing the Amazon RDS Migration Console
Installation Prerequisites
This section describes how to prepare your system to use Amazon RDS Migration
Tool.
n Software Requirements
n Additional Required Software
n Supported Databases
n Installing Amazon RDS Migration Tool
n Accessing the Amazon RDS Migration Console
Software Requirements
This section describes what software is required to work with Amazon RDS Migration
Tool.
n Windows Software Requirements
n Linux Software Requirements
All of the data that is created when you use Amazon RDS Migration Tool is stored in a
folder called data. By default, this folder is located in the installation folder where you
install Amazon RDS Migration Tool. If you want to create the data folder in a different
location, select this option in the installation wizard.
If you select this option, all command line actions must be prefixed with repctl -d
<path to the data folder>.
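For example, the service start command shown later in this guide would then be entered
as follows (service start is used here only as an illustration; the -d prefix applies to any
repctl command):
repctl -d <path to the data folder> service start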
2. To change the default installation directory, edit the first szDir value as necessary.
3. To change the default data directory, edit the second szDir value as necessary.
4. Save the file as <name>.iss, e.g. silent_inst_64.iss.
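For reference, the two szDir entries in the generated response file will look similar to the
following sketch. The dialog section names, GUID, and paths shown here are illustrative
and will differ in the file created on your system:
[{XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}-SdAskDestPath-0]
szDir=C:\Program Files\Amazon\RDS Migration Tool
Result=1
[{XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}-SdAskDestPath2-0]
szDir=D:\RDSM\data
Result=1
The silent installation is then run as shown in the following example.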
Example:
C:\>AmazonRDSMigrationTool_1.1.x.xx_X64.exe /s /f1C:\temp\1\rdsmigrationtool_x64_
ins.iss /f2C:\temp\1\silent_x64_ins.log
If the installation was successful, the log file should contain the following rows:
[ResponseResult]
ResultCode=0
Example:
C:\>AmazonRDSMigrationTool_1.1.x.xx_X64.exe /s /f1C:\temp\1\rep_x64_up.iss
If the upgrade was successful, the log file should contain the following rows:
[ResponseResult]
ResultCode=0
Example:
C:\>AmazonRDSMigrationTool_1.1.x.xx_X64.exe /s /f1C:\temp\1\rep_x64_un.iss
/f2C:\temp\1\silent_x64_un.log
If the uninstall was successful, the log file should contain the following rows:
[ResponseResult]
ResultCode=0
Notes:
When using the Amazon RDS Migration Server on Linux, you must
use the Amazon RDS Migration Console on a Windows computer. See
Windows Software Requirements for a list of the supported Windows
versions.
See Linux Software Requirements for a list of the Linux versions
supported by Amazon RDS Migration Tool.
Pre-Installation Tasks
Before you begin the installation, you must carry out the following tasks:
n Carry out the Windows installation on any Windows computer. For information
on installing Amazon RDS Migration Tool on a Windows computer, see Installing
Amazon RDS Migration Tool on Windows.
n Sign in to the Linux account. The account must be an administrator account to be
able to install Amazon RDS Migration Tool.
n Copy the RPM file to the Linux computer. You can do this in one of two ways:
– On the Linux computer where you want to install the Amazon RDS Migration
Server, open either the Firefox or Chrome Web browser and enter the
following:
http://<host-or-ip>/AmazonRDSMigrationConsole/LinuxInstall.html
A "Read Me" file opens. Click the here link and then save the RPM file to any
location on your Linux computer.
– From the Linux command prompt on the Linux computer where you want to
install the Amazon RDS Migration Server, type the following:
$ wget --user <user> --ask-password
http://<host-or-ip>/AmazonRDSMigrationConsole/rdsmigrationtool-1.1.0-7.x86_64.rpm
The user name that you enter must be an authorized Amazon RDS Migration
Console user.
Save the file to any location on your Linux computer.
Installation Procedures
You can install the Amazon RDS Migration Server using the
ardsmigrationtool-<n>.rpm installation kit.
The following procedure describes how to get the RPM file from the Windows
computer and install Amazon RDS Migration Tool on Linux. Before you carry out this
procedure, you must complete the Pre-Installation Tasks.
Before starting the server process, make sure that you are working in the directory
where Amazon RDS Migration Tool is installed.
To change to the correct directory, type the following at the Linux command prompt:
cd /opt/amazon/rdsmigrationtool/bin/
This changes the directory to the default Amazon RDS Migration Tool installation
directory. If you installed Amazon RDS Migration Tool in a different directory, enter
the correct path.
Type the following commands as shown, each at its own Linux command prompt:
source arep_login.sh
./repctl service start
Where:
source arep_login.sh sets the environment variables required by the Amazon
RDS Migration Server.
./repctl service start starts the Amazon RDS Migration Server service.
You will receive a confirmation similar to:
Amazon RDS Migration server was started as PID 5100
For an explanation of how to verify that the Amazon RDS Migration Server service is
running, see Verifying that the Amazon RDS Migration Server is Running.
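As a quick, generic check (not a substitute for the verification procedure referenced
above), you can confirm that the process ID reported in the confirmation message is still
running. Using the PID from the example above:
ps -p 5100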
To ensure that Amazon RDS Migration Tool was removed from the computer, type the
following to list the sub-directories of the Amazon RDS Migration Tool directory:
ls /opt/amazon/rdsmigrationtool
Notes:
n You can also access Amazon RDS Migration Tool from any
computer in your system.
To access the Console from a remote computer, type the following
address in the address bar in your Web browser:
http://<computer name>/AmazonRDSMigrationConsole
Where <computer name> is the name of the computer,
including the Windows domain name, where you installed
Amazon RDS Migration Tool.
n The person logged in to the computer where you are accessing the
Console must be authorized as an Amazon RDS Migration Tool
user. For more information, see Security Roles.
To work with Amazon RDS Migration Tool over a secure connection, you need to set it
up for HTTPS and replace http with https in the URL above. In this case, you must
change the default URL address. For more information on setting up HTTPS
connections, see Appendix B, "Using Encryption".
The default URL address is defined in a file called ServiceConfiguration.xml.
This file is located on the computer where you installed Amazon RDS Migration Tool
in the following folder:
C:\Program Files\Amazon\RDS Migration Tool\data
If your organization requires a connection to the Amazon RDS Migration Server using
HTTPS, a member of your IT staff should set this up. For instructions on how to set up
your browser for HTTPS, see Appendix B, "Using Encryption".
3. Open the Windows Services console and restart the Amazon RDS Migration
Console service.
4. Connect as described in Accessing the Amazon RDS Migration Console above.
This chapter provides a tutorial on how to set up a basic replication task. The tutorial
shows how to set up a task that copies the data from an Oracle source database and
replicates it to Microsoft SQL Server.
Before you begin to work with this tutorial, see What You Need.
Then do the following:
n Open the Amazon RDS Migration Console
n Add an Oracle Database as a Source
n Add a Microsoft SQL Server Database as a Target
n Add a Replication Task
n Run and Monitor the Replication Task
n View the Replicated Tables in Microsoft SQL Server
Notes:
n You can access Amazon RDS Migration Tool from any computer in
your system.
To access the Console from a remote computer, type the following
address in the address bar in your Web browser:
http://<computer name>/AmazonRDSMigrationConsole
Where <computer name> is the name or IP address of the computer
(including the Windows domain name) on which the Amazon RDS
Migration Server is installed.
n The person logged in to the computer where you are accessing the
Console must be authorized as an Amazon RDS Migration Tool user.
For more information, see Security Roles.
To add a database
1. In Tasks view, click Manage Databases.
The Manage Databases dialog box opens.
2. Click the Add Database toolbar button.
The following figure shows the data that you need to enter in the Add Database
dialog box when you add an Oracle source database in this tutorial.
n Name: oracle_source.
n Description: Leave this blank.
n Role: Select Source.
n Type: Select Oracle from the list.
n Connection string: Type the connect string to the Oracle database you are
working with. Type the connect string in any Oracle format.
For example, if you are connecting to an Oracle database on a computer called
tomato, using the default Oracle port and default service name, the connect
string looks like this:
tomato:1521/orcl
n User Name: Type SYSTEM. This is the Oracle default administrator user
name. If you know that the Oracle database you are working with has a
different user name, then type that user name.
n Password: Type manager. This is the Oracle default administrator password.
If you know that the Oracle database you are working with has a different
password, then type that password.
n Click Test to see if you entered the information correctly and whether the
Oracle database is available. Then click OK to add the database.
Click New Task to open the New Task dialog box. In the Name field for this dialog
box, type Task_1. Leave the Description field blank and click OK.
The Amazon RDS Migration Console displays a diagram in the middle where you add
the source and target databases used in the task. In addition, a pane on the right side
lets you select the tables you want to work with and carry out transform and filtering
operations. For additional information, see The Tasks View, Viewing Specific Tasks,
and Designing Tasks.
You can also make changes to the default task settings if you need to. For information
on changing the task settings, see Task Settings.
2. Drag the sqlserver_target database to the Target area as shown in the figure
in the previous step.
When you have added the source and target databases to the task, you can then select
the tables from the source database to use in the replication task.
For more information, see Designing Tasks.
5. In the toolbar at the top of the console you will see the run buttons. Click Run.
6. Note the gauges and bars as the task runs. You will see the loading and error
information in the bar graphs and the throughput gauge.
The left pane has two tabs. In addition to the Full-Load tab, which we just saw,
the Change Processing tab shows information about real-time changes that occur after
the full load is complete. For information on reading the data presented in these
sections, see Viewing Information in the Monitor. The following figure shows the
Full-Load monitoring while a task is running.
10. Click on the other bar graphs, such as the Completed graph and the Loading
graph to see additional information.
For information about the data supplied in these tables, see Monitoring the
Full-Load Operation.
This section describes the main concepts of data replication and shows the main
components of Amazon RDS Migration Tool. It has the following topics:
n Introducing Amazon RDS Migration Tool
n Amazon RDS Migration Tool Features
n What is Replication
n System Architecture
n What is a Replication Task
n Full Load and CDC Processes
n Replication Topologies
What is Replication
Replication is a procedure whereby two or more collections of computerized
information are kept identically synchronized. Possible reasons for synchronization
include:
n Load reduction: Keeping a copy of all or of a subset of a collection on a different
server reduces the load on the main server.
n Improved service: Users of the copy of the information may get better access to the
copy of the data than to the original.
n Security considerations: Some users might be allowed access to a subset of the
data and only this subset is made available as a replicated copy to those users.
n Geographic distribution: The enterprise (for example, a chain of retail stores or
warehouses) may be widely distributed and each node uses primarily its own
subset of the data (in addition to all of the data being available at a central location
for less common use).
n Disaster Recovery: A copy of the main data is required for rapid failover (the
capability to switch over to a redundant or standby computer server, in case of
failure of the main system).
n Supporting the need for implementing "cloud" computing.
The information used is usually stored as files or in a database. In the case of files, the
structure and contents of a file are known only to the specialized programs that use the
file. Databases are managed by database management systems (DBMS), which make use
of standardized descriptions of the structure of the information (such as tables,
columns, rows, and data types). These descriptions are known collectively as
metadata, and they allow a general-purpose replicator to carry out relevant operations (for
example, filtering and data transformations) without needing to know anything about the
contents or "meaning" of the data. Because file systems do not contain metadata, the
operations available for replication are more limited.
During replication, a collection of data is copied from system A to system B. A is
known as the source (for this collection), B is known as the target. A system can be
either a source or a target or even both (within certain restrictions). When a number of
sources and targets and data collections are defined, the replication topology can be
quite complex.
Since copies cannot be made or maintained instantaneously, and since it is assumed
that the source computer cannot be stopped and the information cannot be kept
"frozen," replication must take into account that the source data may be changing
while it is being copied. This problem includes three main issues:
System Architecture
In the initial load process, Amazon RDS Migration Tool reads a filtered stream of rows
(with relevant columns only) and passes them to the transformation process for
further filtering and writing to the target database (in the expected output format).
The CDC (Change Data Capture) process obtains a stream of filtered events or changes
in data or metadata from the transaction log file. One of its most important functions is
to buffer all of the changes for a given transaction into a single unit before they can be
sent forward to the target when the transaction commits. During the initial load
process, CDC also buffers all of the changes that occur within a transaction until
all of the affected tables have finished being loaded.
The Designer/Console server, part of the Replication server, is a Web-based
application that serves all of the user interface for designing or modifying
the replication system, as well as for displaying and controlling its operation.
Replication Topologies
A replication task can be set up in various ways. The following topologies are
currently supported by Amazon RDS Migration Tool.
n One-to-One
One-to-One
In a one-to-one topology, there is one source and one target database. Using two different
replication tasks, the databases may switch roles, allowing two-way synchronization.
When the source and target databases are distinct (one-way replication)
concurrent-update problems do not occur, and transactional integrity and consistency
can be guaranteed.
Logical Independence
Two-way replication works best when updates of a row on a source and on a target are
entirely autonomous and do not affect each other. There is an assumption that any
table or a horizontal or a vertical segment of a partitioned table can only be updated in
one source. We allow the same row to be updated at several places, but in this case the
columns being updated must be distinct. Another assumption is that if a data value in
one row depends on or is derived from a value in another row, the values can be
changed only on the same server but nowhere else (except by the Replicator). This is
called logical independence.
Under logical independence assumptions, no concurrent update conflicts can occur
during replication.
Amazon RDS Migration Tool lets you work with many databases that you already
have. There is no need to install any additional software.
This chapter includes the following topics:
n Databases Supported as Amazon RDS Migration Tool Endpoints
n Amazon RDS Migration Tool Data Types
n Amazon RDS Migration Tool Supported DDL Statements
Amazon RDS Migration Tool supports two types of Oracle Agents, a LogMiner agent
and a binary agent. The configurations that you make to your Oracle database to work
with Amazon RDS Migration Tool depend on which agent you are using.
This chapter describes how to set up and use an Oracle database as a source or target
endpoint in a replication task. There are three types of replication task: Business
Intelligence or Data Warehousing, Bidirectional Replication, and Disaster Recovery.
For information on setting up replication tasks, see Designing Tasks.
The chapter contains the following topics:
n Supported Oracle Database Editions
n Limitations
n Using an Oracle Database as a Source
n Using an Oracle Database as a Target
Install an Oracle client on the computer where Amazon RDS Migration Tool is located.
Install the following:
n On Windows systems, install Oracle Instant Client for Microsoft Windows (x64)
Version 11.2.0.3.0.
Note: Support for the XMLTYPE data type requires the full Oracle Client.
n On Linux systems, install Oracle Instant Client for Linux (x86-64) Version
11.2.0.3.0.
Note: Support for the XMLTYPE data type requires the full Oracle Client.
In addition, if not already included in your system, you need to create a symbolic
link in the $ORACLE_HOME/lib directory. This link should be called
libclntsh.so, and should point to a specific version of this file. For example:
– Oracle Version 11 client:
lrwxrwxrwx 1 oracle oracle 63 Oct 2 14:16 libclntsh.so ->
/u01/app/oracle/home/lib/libclntsh.so.11.1
– Oracle Version 10 client:
lrwxrwxrwx 1 oracle oracle 63 Oct 2 14:16 libclntsh.so ->
/u01/app/oracle/home/lib/libclntsh.so.10.1
Additionally, the LD_LIBRARY_PATH environment variable should be appended
with the Oracle lib directory and added to the site_arep_login.sh script.
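The following commands sketch one way to create the link and extend the environment on
Linux. The Oracle client location (/u01/app/oracle/home), the client library version (11.1),
and the location of site_arep_login.sh (assumed here to be the Amazon RDS Migration Tool
bin directory) should be adjusted to match your installation:
ln -s /u01/app/oracle/home/lib/libclntsh.so.11.1 $ORACLE_HOME/lib/libclntsh.so
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib' >> /opt/amazon/rdsmigrationtool/bin/site_arep_login.sh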
Limitations
n The Amazon RDS Migration Tool Oracle endpoint cannot create a new schema on
the Oracle database. Therefore, if you are replicating data to an Oracle target and
you want to change the schema name, the new schema name must already exist on
the Oracle database. If it does not exist, you must create the schema on the
database, then you can use that schema name in Amazon RDS Migration Tool.
n Amazon RDS Migration Tool supports only Oracle TDE tablespace encryption; no
other forms of encryption are supported.
n If you are managing supplemental logging and you carry out transformations on
any of the columns, you must be sure that supplemental logging is activated for all
fields and columns.
Security Requirements
In order to use an Oracle source in an Amazon RDS Migration Tool task, the user
specified in the Amazon RDS Migration Tool Oracle database definitions must be
granted the following privileges in the Oracle database:
n SELECT ANY TRANSACTION
n SELECT on V_$ARCHIVED_LOG
n SELECT on V_$LOG
n SELECT on V_$LOGFILE
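The following is a minimal sketch of granting the privileges listed above, assuming the
user defined in the Amazon RDS Migration Tool Oracle database definitions is named
RDSM_USER (an illustrative name):
GRANT SELECT ANY TRANSACTION TO RDSM_USER;
GRANT SELECT ON V_$ARCHIVED_LOG TO RDSM_USER;
GRANT SELECT ON V_$LOG TO RDSM_USER;
GRANT SELECT ON V_$LOGFILE TO RDSM_USER;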
Table 5–3 Supported Oracle Data Types with Mapping to Amazon RDS Migration Tool Data Types

Oracle Data Types              Amazon RDS Migration Tool Data Types
BINARY_FLOAT                   REAL4
BINARY_DOUBLE                  REAL8
BINARY                         BYTES
FLOAT (P)                      Precision < or = 24: REAL4
                               Precision > 24: REAL8
NUMBER (P,S)                   NUMBER is mapped according to the "Expose number as"
                               property in the Amazon RDS Migration Tool Oracle source
                               database settings:
                               When scale is < 0: REAL8
                               When scale is 0 and:
                                   Precision = 0: REAL8
                                   Precision < or = 2: INT1
                                   Precision > 2 and < or = 4: INT2
                                   Precision > 4 and < or = 9: INT4
                                   Precision > 9: NUMERIC
                               If precision > or = scale: NUMERIC
                               In all other cases: REAL8
DATE                           DATETIME
INTERVAL_YEAR TO MONTH         STRING (with interval year_to_month indication)
INTERVAL_DAY TO SECOND         STRING (with interval day_to_second indication)
TIME                           DATETIME
TIMESTAMP                      DATETIME
TIMESTAMP WITH TIME ZONE       STRING (with timestamp_with_timezone indication)
Step 2: Make sure that the required supplemental logging is added for each table
a. If a Primary Key exists, supplemental logging must be added for the Primary
Key either by using the format to add supplemental logging on the Primary
Key, or by adding supplemental logging on the Primary Key columns.
b. If no Primary Key exists and the table has a single Unique Index, then all of
the Unique Index’s columns must be added to the supplemental log. Using
e. If you change the target table primary key, the supplemental logging must be
added on the selected index's columns instead of the columns of the original
primary key/unique index.
If the table has a Unique Index or a Primary Key, you also need to add supplemental
logging on each column that is involved in a filter or transformation (if those columns
are different than the Primary Key or Unique Index columns).
Note: If a transformation uses only one column, this column might not
need to be added to a supplemental logging group. For example, "A+B" needs
both columns to be added, whereas substring(A, 10) does not need "A"
to be added.
One method of setting up both Primary Key/Unique Index supplemental logging and
supplemental logging on specific columns is to add USER_LOG_GROUP supplemental
logging only on the Primary Key/Unique Index columns and on the columns that are
filtered or transformed.
For example, to replicate a table named EXAMPLE.TABLE with Primary Key ID and
filter by column NAME, you can run a command similar to the one below to create the
log group supplemental logging:
ALTER TABLE EXAMPLE.TABLE ADD SUPPLEMENTAL LOG GROUP example_log_group (ID,NAME)
ALWAYS;
Additional commands that you can execute to change the supplemental logging
attributes include:
exec rdsadmin.rdsadmin_util.alter_supplemental_logging('ADD','ALL');
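For example, to add supplemental logging on primary key columns only rather than on all
columns, a variant along the following lines can be used (verify the exact parameter values
against the Amazon RDS for Oracle documentation for your engine version):
exec rdsadmin.rdsadmin_util.alter_supplemental_logging('ADD','PRIMARY KEY');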
Setting Up Archiving
To retain archived redo logs of your Oracle database instance (which will allow
Amazon RDS Migration Tool to retrieve the log information using LogMiner), execute
the following command (in this example, the retention period is 24 hours):
exec rdsadmin.rdsadmin_util.set_configuration('archivelog retention hours',24);
Make sure that your storage has sufficient space for the archived redo logs during the
specified period.
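To confirm the retention setting afterward, you can display the current configuration with
the rdsadmin utility (this procedure is provided by Amazon RDS for Oracle; the output
format may vary by version):
set serveroutput on
exec rdsadmin.rdsadmin_util.show_configuration;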
Notes:
n Oracle can also be used as a target database. For information on
using Oracle as a target, see Setting up an Oracle Database as a
Target in Amazon RDS Migration Tool.
n You can also use Oracle files as a source or target. For more
information, see Using Files.
Note: If no <common ASM connection string> is specified, all the RAC threads
should be defined in the ASM connection.
When using Oracle Binary Reader to access the redo logs, the connection string
syntax is as follows:
<Oracle connection string>[, <thread id> <thread BFILE connection string>,
<thread id> <thread BFILE connection string> ...]
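As a purely illustrative example that follows the pattern above, a two-node RAC
configuration might be specified as follows (the host names, port, and service name are
assumptions):
tomato-scan:1521/orcl, 1 tomato1:1521/orcl, 2 tomato2:1521/orcl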
7. Type the Oracle authentication information (User Name, Password) for the
authorized user for this Oracle database. If you do not know this information, see
your Oracle Database Administrator (DBA).
n Secret Store encryption entries: When "Use LogMiner to access redo logs" is not
selected and the Oracle instance uses TDE tablespace encryption, you need to
specify the encryption key (or keys if more than one was used) that was/were
used to encrypt the Redo logs. In order to specify the correct encryption key(s),
you first need to find the relevant entry (or entries in the case of multiple keys) in
the Oracle Wallet containing the encryption key(s). For instructions on how to do
this, see Finding the Relevant Entries in the Oracle Wallet below. After you find
the relevant entry or entries, copy the entry and its value (or entries and values if
more than one) into the Names and Values fields respectively.
Note: To enter multiple values, first copy each entry into a text editor
such as Notepad making sure to separate the values with a comma.
Then, copy the string containing the values and commas from the text
editor and paste it into the Values field. There is no need to do this for
entries. You can paste the entries directly into the Entries field,
remembering to separate each entry with a comma.
5. If you know which entry/entries is/are used to encrypt the Redo logs, select
the entry name(s) and use the “viewEntry” option in the Oracle mkstore utility
to determine the entry value, as follows:
mkstore -wrl <full wallet name> -viewEntry <entry name>
Note: If you do not know which entry is used to encrypt the Redo
logs, you can select multiple DB or TS entries and determine their
values as described above (and then copy and paste the entry names
and values into the Names and Values fields as described in the Note
above). If the specified entries are not correct, the task will fail and the
error message will contain the correct entry name.
Note: If the DBA changes the entry while the task is running, the
task will fail and the error message will contain the new entry name.
Add the new entry (name and value) to the already specified entries
and then resume the task.
n ASM Parameters (if redo logs are stored in ASM) - If the Oracle redo logs you are
using are stored using Automated Storage Management (ASM), enter the required
access information in the designated fields (not available when the Use LogMiner
to access redo logs option is selected).
Note: To access the redo logs in ASM, you also need to grant the following read
privilege: SELECT ON v_$transportable_platform
– ASM Connection String: The connection string to the ASM instance if your
Oracle database is using ASM.
– ASM user name: The user name for the ASM user.
– ASM password: The password for the ASM user.
See also Required ASM Privileges.
n To access a redo log as a binary file (i.e. not using Use LogMiner):
Note: To access the redo log as a binary file, you also need to grant the
following read privilege: SELECT ON v_$transportable_platform
n Expose NUMBER as: Select a precision, scale combination from the drop-down
list. Amazon RDS Migration Tool always supports a scale up to 38. You can select
a scale up to 38 or you can select FLOAT.
By default the NUMBER data type is converted to precision 38, scale 10.
n Use archived redo logs only: When this option is selected, Amazon RDS
Migration Tool will only access the archived redo logs. If the archived redo logs
are stored on ASM only, the Amazon RDS Migration Tool user needs to be
granted the ASM privileges described in Required ASM Privileges.
n Override connection string parameters: Select this to create a custom connect
string.
You can validate ASM account access by opening a command prompt and issuing the
following statements:
sqlplus asmuser/asmpassword@+asmserver as sysdba
-OR-
sqlplus asmuser/asmpassword@+asmserver as sysasm
Security Requirements
A user must have the following privileges granted in the Oracle database to use an
Oracle target in an Amazon RDS Migration Tool task:
n SELECT ANY TRANSACTION
n SELECT on V$NLS_PARAMETERS
n SELECT on V$TIMEZONE_NAMES
n SELECT on ALL_INDEXES
n SELECT on ALL_OBJECTS
n SELECT on DBA_OBJECTS
n SELECT on ALL_TABLES
n SELECT on ALL_USERS
n SELECT on ALL_CATALOG
n SELECT on ALL_CONSTRAINTS
n SELECT on ALL_CONS_COLUMNS
n SELECT on ALL_TAB_COLS
n SELECT on ALL_IND_COLUMNS
n DROP ANY TABLE
n SELECT ANY TABLE
n INSERT ANY TABLE
n UPDATE ANY TABLE
n CREATE ANY VIEW
n DROP ANY VIEW
n CREATE ANY PROCEDURE
Table 5–5 Supported Oracle Data Types with Mapping from Amazon RDS Migration Tool
Data Types
Amazon RDS Migration Tool Data Types Oracle Data Types
BOOLEAN NUMBER (1)
BYTES RAW (length)
DATE DATETIME
TIME TIMESTAMP (0)
DATETIME TIMESTAMP (scale)
INT1 NUMBER (3)
INT2 NUMBER (5)
INT4 NUMBER (10)
INT8 NUMBER (19)
Note: When the source database is Oracle, the source data types will
be replicated "as is" to the Oracle target. For example, an XMLTYPE
data type on the source will be created as an XMLTYPE data type on
the target.
Notes:
n Oracle can also be used as a source database. For information on
using Oracle as a source, see Setting up an Oracle Database as a
Source in Amazon RDS Migration Tool.
n You can also use Oracle files as a source or target. For more
information, see Using Files.
Note:
n This information is case sensitive.
n You can use the Advanced tab to add specific properties and
create a custom connect string. In this case, you do not need to
enter information in this tab. For more information on using the
Advanced tab, see Using Advanced Properties for an Oracle
Source.
n To determine if you are connected to the database you want to use
or if the connection information you entered is correct, click Test.
If the connection is successful a message in blue is displayed. If
the connection fails, a message in red that includes the log error
information is displayed at the bottom of the dialog box.
To view the log entry if the connection fails, click View Log. The
server log is displayed with the information for the connection
failure. Note that this button is not available unless the test
connection fails.
7. Type the Oracle authentication information (User Name, Password) for the
authorized user for this Oracle database. If you do not know this information, see
your Oracle Database Administrator (DBA).
Note:
n This information is required. If you are using the Advanced tab to
create a custom string, make sure to include the User Name and
Password properties. See Using Advanced Properties for an
Oracle Source for more information.
n This information is case sensitive.
n Important: Make sure that the Oracle user entered in the Oracle
Authentication section has the correct access privileges. For
information on how to provide the required privileges, see
Security Requirements.
n If you want to set custom properties for this database, see Using
Advanced Properties for an Oracle Source.
This chapter describes how to set up and use a Microsoft SQL Server database as the
source or target endpoint in a replication task. It has the following topics:
n Prerequisites
n Limitations
n Using a Microsoft SQL Server Database as a Source
n Using a Microsoft SQL Server Database as a Target
Prerequisites
Amazon RDS Migration Tool supports the following Microsoft SQL Server editions.
n Enterprise Edition
n Standard Edition
n Workgroup Edition
n Developer Edition
Limitations
n If you are using a Microsoft SQL Server source database in a replication task, the
Microsoft SQL Server Replication Publisher definitions for the database that was
used in the task are not removed when you remove a task. A Microsoft SQL Server
system administrator must delete these definitions from Microsoft SQL Server.
n Tables with more than 8000 bytes of information in a Microsoft SQL Server source,
including header and mapping information, are not processed correctly due to
limitations in buffer size.
n The Microsoft SQL Server source endpoint does not support the use of sparse
tables.
n Changes to computed fields in a Microsoft SQL Server source will not be
replicated.
n Microsoft SQL Server partition switching is not supported.
n When using the WRITETEXT and UPDATETEXT utilities, Amazon RDS Migration
Tool does not capture events applied on the source database.
n The following DML pattern is not supported:
select <*> into <new_table> from <existing_table>
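For example, a statement of the following form falls under this limitation (the table names
are illustrative):
select * into dbo.new_table from dbo.existing_table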
Table 6–3 Microsoft SQL Server Source Data Types with Mapping to Amazon RDS
Migration Tool Data Types
Microsoft SQL Server Data Types Amazon RDS Migration Tool Data Types
BIGINT INT8
BIT BOOLEAN
DECIMAL NUMERIC
INT INT4
MONEY NUMERIC
NUMERIC (p,s) NUMERIC
SMALLINT INT2
SMALLMONEY NUMERIC
DATE DATE
TIME TIME
DATETIMEOFFSET WSTRING
CHAR STRING
VARCHAR STRING
VARCHAR (max) CLOB
TEXT
To use this data type with Amazon RDS
Migration Tool, you must enable the use of
CLOBs for a specific task.
LOB columns for Microsoft SQL Server tables
are updated in the target even for UPDATE
statements that did not change the value of
the LOB column in Microsoft SQL Server.
During CDC, CLOB data types are supported
only in tables that include a primary key.
For more information, see LOB support in
Task Settings/Metadata.
NCHAR WSTRING
NVARCHAR (length) WSTRING
NVARCHAR (max) NCLOB
NTEXT
To use this data type with Amazon RDS
Migration Tool, you must enable the use of
NCLOBs for a specific task.
LOB columns for Microsoft SQL Server tables
are updated in the target even for UPDATE
statements that did not change the value of
the LOB column in Microsoft SQL Server.
During CDC, NCLOB data types are
supported only in tables that include a
primary key.
For more information, see LOB support in
Task Settings/Metadata
BINARY BYTES
VARBINARY BYTES
In Microsoft SQL Server Management Studio, follow the instructions provided
by the Configure Distribution wizard to set up replication, or see the Microsoft SQL
Server documentation.
Use MS-CDC
To set up MS-CDC, you first need to enable MS-CDC for the database by running the
following command:
use [DBname]
EXEC sys.sp_cdc_enable_db
Then you need to enable MS-CDC for each of the source tables by running the
following command:
EXECUTE sys.sp_cdc_enable_table @source_schema = N'MySchema', @source_name =
N'MyTable', @role_name = NULL;
For more information on setting up MS-CDC for specific tables, please refer to the
Microsoft website.
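To confirm that MS-CDC was enabled, you can query the standard SQL Server catalog
views. This is a generic check and is not specific to Amazon RDS Migration Tool:
SELECT name, is_cdc_enabled FROM sys.databases;
SELECT name, is_tracked_by_cdc FROM sys.tables;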
Setting up a Microsoft SQL Server Database as an Amazon RDS Migration Tool Source
You can add a Microsoft SQL Server database to Amazon RDS Migration Tool to use as
a source. For information on how to add databases, see Working with Databases.
When you select Microsoft SQL Server Source as the database type, the following
dialog box is displayed:
To add a Microsoft SQL Server source database to Amazon RDS Migration Tool
1. In the Amazon RDS Migration Console, click Add Database to open the Add
Databases dialog box. For more information on adding a database to Amazon RDS
Migration Tool, see Working with Databases.
2. In the Name field, type a name for your database. This can be any name that will
help to identify the database being used.
3. In the Description field, type a description that helps to identify the Microsoft
SQL Server database. This is optional.
4. Select SOURCE as the database Role.
You can do this step before any of the other steps if you want; however, before you
can continue with the next step in this process, you must select the database Role.
5. Select Microsoft SQL Server as the database Type.
6. Type the Microsoft SQL Server name. This is the name of the computer with the
Microsoft SQL Server instance you want to work with.
Note:
You can use the Advanced tab to add specific properties and create a
custom connect string. In this case, you do not need to enter
information in this tab. For more information on using the Advanced
tab, see Using Advanced Properties for a Microsoft SQL Server Source
Database.
Note:
n The User Name/Password information is mandatory if you select
User name/password. If you are using the Advanced tab to create
a custom string, make sure to include the User Name and
Password properties. For more information, see Using Advanced
Properties for a Microsoft SQL Server Source Database.
n This information is case sensitive.
n Important: Make sure that the Microsoft SQL Server user has the
correct access privileges. For information on how to provide the
required privileges, see Security Considerations for a Microsoft
SQL Server Source.
n You can set custom properties in the Advanced tab. For more
information, see Using Advanced Properties for a Microsoft SQL
Server Source Database.
n To determine if you are connected to the database you want to use
or if the connection information you entered is correct, click Test.
If the connection is successful, a message in blue is displayed. If the
connection fails, a message in red that includes the log error information
is displayed at the bottom of the dialog box.
To view the log entry if the connection fails, click View Log. The server
log is displayed with the information for the connection failure. Note that
this button is not available unless the test connection fails.
8. Type the Database name or select one from the list of available databases. This is the name of the database from which you are replicating the data.
Note: When Amazon RDS Migration Tool has file-level access to the
backup transaction logs, the following rules apply:
n The Alternate backup folder must be a common shared network
folder, for example: \\temp\backup.
n The Amazon RDS Migration Server service must be configured
to log on using the user name and password specified in the
Backup folder user name and Backup folder password fields.
To do this:
1. In the Windows Services console, double-click the Amazon
RDS Migration Server service.
2. In the Log On tab, select This account and then enter the
user name and password.
n The specified user must be granted Read permission to the
alternate backup folder (i.e. the shared network folder).
For a complete list of the limitations affecting file-level access, see
Limitations.
– Backup folder user name: The user name required to access the backup folder
when Amazon RDS Migration Tool has file-level access.
– Backup folder password: The password required to access the backup folder
when Amazon RDS Migration Tool has file-level access.
n Backup file preprocessing command: You can use a third-party utility to convert
the transaction logs to standard Microsoft SQL Server format (if they are in a
different format) and back them up to an alternate backup folder. This option
should be used in conjunction with the Alternate backup folder option described
above.
Prerequisites and Notes:
The command is invoked via the XP_CMDSHELL extended procedure.
– The backup utility is responsible for setting the system return code (0 for success, 1 for failure), assuming that this code is delegated as the XP_CMDSHELL return value.
– The backup utility invoked by XP_CMDSHELL must have the same security
rights as the Microsoft SQL Server service account.
– XP_CMDSHELL is normally disabled. It can be enabled and disabled by using Policy-Based Management or by executing SP_CONFIGURE (see the example following this list).
– Using this extended procedure requires CONTROL SERVER permission (at
least).
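For reference, XP_CMDSHELL can typically be enabled with sp_configure as shown below. This is a sketch of the standard Microsoft procedure rather than an Amazon RDS Migration Tool requirement; verify the steps against the Microsoft SQL Server documentation for your version:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;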
Command Usage:
n {ALTDIR_OUTFILE} - The specifications of the target file to transfer to the alternate backup
folder.
n {BACKUP_SET} - The backup set to be processed within the backup log.
Example command:
C:\Temp\YourBackupUtility.exe -B{BACKUP_INFILE} -A{ALTDIR_OUTFILE}
n Delete processed backup logs: Select this option to delete the backup logs after
they have been read.
n Select virtual backup device types: When this option is selected, Amazon RDS
Migration Tool will read changes from the specified virtual device(s). Usually, this
option only needs to be enabled when using a third-party backup utility (which
will be recorded as a virtual device).
n Override connection string parameters: Select this to use a custom connect string.
The following is an example of a Microsoft SQL Server connection string:
server=SRV1;database=DB1;WindowsAuthentication=Y
For information on the properties you can include in a custom connect string for
Microsoft SQL Server, see Configuration Properties for a Microsoft SQL Server
Source.
Table 6–5 Microsoft SQL Server Target Data Types with Mapping from Amazon RDS
Migration Tool Data Types
Amazon RDS Migration Tool Data Types Microsoft SQL Server Data Types
BOOLEAN TINYINT
BYTES VARBINARY(length)
DATE For Microsoft SQL Server 2008 and later:
DATE
For earlier versions:
If scale is < or = 3: DATETIME
In all other cases: VARCHAR (37)
Setting up a Microsoft SQL Server Database as an Amazon RDS Migration Tool Target
You can add a Microsoft SQL Server database to Amazon RDS Migration Tool to use as
a target. For information on how to add databases, see Working with Databases. When
you select Microsoft SQL Server Target as the database type, the following dialog box
is displayed:
To add a Microsoft SQL Server target database to Amazon RDS Migration Tool
1. In the Amazon RDS Migration Console, click Add Database to open the Add
Databases dialog box. For more information on adding a database to Amazon RDS
Migration Tool, see Working with Databases.
2. In the Name field, type a name for your database. This can be any name that will
help to identify the database being used.
3. In the Description field, type a description that helps to identify the Microsoft
SQL Server database. This is optional.
4. Select TARGET as the database Role.
You can do this step before any of the other steps if you want; however, before you can continue with the next step in this process, you must select the database Role.
5. Select Microsoft SQL Server as the database Type.
6. Type the Microsoft SQL Server name. This is the name of the computer with the
Microsoft SQL Server instance you want to work with.
Note:
You can use the Advanced tab to add specific properties and create a
custom connect string. In this case, you do not need to enter
information in this tab. For more information on using the Advanced
tab, see Using Advanced Properties for a Microsoft SQL Server Target
Database.
Note:
n The User Name/Password information is mandatory if you select
User name/password. If you are using the Advanced tab to create
a custom string, make sure to include the User Name and
Password properties. For more information, see Using Advanced
Properties for a Microsoft SQL Server Target Database.
n This information is case sensitive.
n Important: Make sure that the Microsoft SQL Server user has the
correct access privileges. For information on how to provide the
required privileges, see Security Considerations for a Microsoft
SQL Server Target.
n You can set custom properties in the Advanced tab. For more
information, see Using Advanced Properties for a Microsoft SQL
Server Target Database.
n To determine if you are connected to the database you want to use
or if the connection information you entered is correct, click Test.
If the connection is successful, a message in blue is displayed. If the
connection fails, a message in red that includes the log error information
is displayed at the bottom of the dialog box.
To view the log entry if the connection fails, click View Log. The server
log is displayed with the information for the connection failure. Note that
this button is not available unless the test connection fails.
8. Type the Database name or select one from the list of available databases. This is the name of the database to which you are replicating the data.
Note: When the target table contains an identity column that does
not exist in the source table, you must disable the Use BCP for
loading tables option.
n BCP packet size: The maximum size of the packets (in bytes) used to transfer data
using BCP.
n Filegroup for Amazon RDS Migration Tool internal tables: Optionally, specify a
filegroup for the Amazon RDS Migration Tool internal tables. When the replication
task starts, all the internal Amazon RDS Migration Tool control tables (attrep_apply_exception, attrep_apply, attrep_changes) will be created on the
specified filegroup.
The following is an example of a command for creating a filegroup:
ALTER DATABASE replicate
ADD FILEGROUP Test1FG1;
GO
ALTER DATABASE replicate
ADD FILE
(
NAME = test1dat5,
FILENAME = 'C:\temp\DATA\t1dat5.ndf',
SIZE = 5MB,
MAXSIZE = 100MB,
FILEGROWTH = 5MB
)
TO FILEGROUP Test1FG1;
GO
For information on the properties you can include in a custom connect string for
Microsoft SQL Server, see Configuration Properties for a Microsoft SQL Server
Source.
This chapter describes how to set up and use a SAP Sybase ASE database as the source
or target endpoint in a replication task. It contains the following topics:
n Prerequisites
n Limitations
n Using a SAP Sybase ASE Database as a Source
n Using a SAP Sybase ASE Database as a Target
Prerequisites
Before you begin to work with a SAP Sybase ASE database as a source or target in
Amazon RDS Migration Tool, make sure that the SAP Sybase ASE database with the
tables that are necessary for replication is available in your network.
Note:
n Amazon RDS Migration Tool must be installed on a Windows computer in your network (any Windows computer in the network can be used).
n A SAP Sybase ASE account with the required access privileges is
required.
n If the database default is not to allow NULL values, Amazon RDS Migration Tool creates the target table with columns that do not allow NULL values. Consequently, if a Full Load or CDC replication task contains empty values, errors will occur.
To prevent this from happening:
1. Right-click the database name and select Properties from the context menu.
2. In the Options tab, select Allow nulls by default and then click OK.
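If you prefer to make this change from isql rather than the Properties dialog box, the database option can typically be set with the sp_dboption system procedure. This is a sketch, not a documented Amazon RDS Migration Tool step; MyASE is a placeholder database name:
use master
go
sp_dboption MyASE, "allow nulls by default", true
go
use MyASE
go
checkpoint
go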
Security Requirements
The following section describes the security requirements for using Amazon RDS
Migration Tool with a SAP Sybase ASE database source or target.
If the Automatically enable SAP Sybase ASE replication option is enabled, Amazon
RDS Migration Tool needs permission to run the stored procedure sp_setreptable.
For information on the Automatically enable SAP Sybase ASE replication option, see
Using Advanced Properties for a SAP Sybase ASE Source.
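If you need to mark a source table for replication manually (rather than relying on the Automatically enable SAP Sybase ASE replication option), the stored procedure can typically be run as follows. This is a sketch; MyASE and MyTable are placeholder names:
use MyASE
go
sp_setreptable MyTable, 'true'
go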
Table 7–2 SAP Sybase ASE Database Source Data Types with Mapping to Amazon RDS
Migration Tool Data Types
SAP Sybase ASE Source Data Types Amazon RDS Migration Tool Data Types
BIGINT INT8
INT INT4
SMALLINT INT2
TINYINT UINT1
DECIMAL NUMERIC
NUMERIC NUMERIC
FLOAT REAL8
Setting up a SAP Sybase ASE Database as a Source in Amazon RDS Migration Tool
You can add a SAP Sybase ASE database to Amazon RDS Migration Tool to use as a
source. For information on how to add databases, see Working with Databases. When
you select SAP Sybase ASE Source as the database type, the following dialog box is
displayed:
Notes: You can also use SAP Sybase ASE files as a source. For more
information, see Using the Amazon RDS Migration Tool File Channel.
To add a SAP Sybase ASE source database to Amazon RDS Migration Tool
1. In the Amazon RDS Migration Console, click Add Database to open the Add
Databases dialog box. For more information on adding a database to Amazon RDS
Migration Tool, see Working with Databases.
2. In the Name field, type a name for your database. This can be any name that will
help to identify the database being used.
3. In the Description field, type a description that helps to identify the SAP Sybase
ASE database. This is optional.
4. Select SOURCE as the database Role.
You can do this step before any of the other steps if you want; however, before you can continue with the next step in this process, you must select the database Role.
5. Select SAP Sybase ASE as the database Type.
6. In the Server Name field, enter the host name or IP address of the computer on
which the SAP Sybase ASE database is installed.
Note:
n This information is required. If you are using the Advanced tab to
create a custom string, make sure to include the User Name and
Password properties. See Using Advanced Properties for a SAP
Sybase ASE Source for more information.
n This information is case sensitive.
n Important: Make sure that the SAP Sybase ASE user entered in
the SAP Sybase ASE Authentication section has the correct access
privileges. For information on how to provide the required
privileges, see Security Requirements.
n If you want to set custom properties for this database, see Using
Advanced Properties for a SAP Sybase ASE Source.
9. In the Database name field, enter the SAP Sybase ASE database name.
The following is an example of a SAP Sybase ASE connection string:
server=SybaseASE1;port=5421;database=MyASE;username=Admin;password=7yyfhn85;enableReplication=Y
Security Requirements
You must provide SAP Sybase ASE account access to the Amazon RDS Migration Tool
user. This user must have read/write privileges in the SAP Sybase ASE database.
Setting up a SAP Sybase ASE Database as a Target in Amazon RDS Migration Tool
You can add a SAP Sybase ASE database to Amazon RDS Migration Tool to use as a
target. For information on how to add databases, see Working with Databases. When
you select SAP Sybase ASE as the target database type, the following dialog box is
displayed:
To add a SAP Sybase ASE target database to Amazon RDS Migration Tool
1. In the Amazon RDS Migration Console, click Add Database to open the Add
Databases dialog box. For more information on adding a database to Amazon RDS
Migration Tool, see Working with Databases.
2. In the Name field, type a name for your database. This can be any name that will
help to identify the database being used.
3. In the Description field, type a description that helps to identify the SAP Sybase
ASE database. This is optional.
4. Select TARGET as the database Role.
You can do this step before any of the other steps if you want; however, before you can continue with the next step in this process, you must select the database Role.
5. Select SAP Sybase ASE as the database Type.
6. In the Server Name field, enter the host name or IP address of the computer on
which the SAP Sybase ASE database is installed.
Note:
n This information is required. If you are using the Advanced tab to
create a custom string, make sure to include the User Name and
Password properties. See Using Advanced Properties for a SAP
Sybase ASE Target for more information.
n This information is case sensitive.
n Important: Make sure that the SAP Sybase ASE user entered in
the SAP Sybase ASE Authentication section has the correct access
privileges. For information on how to provide the required
privileges, see Security Requirements.
n If you want to set custom properties for this database, see Using
Advanced Properties for a SAP Sybase ASE Target.
9. In the Database name field, enter the SAP Sybase ASE database name.
The following is an example of a SAP Sybase ASE connection string:
server=SybaseASE1;port=5421;database=MyASE;username=Admin;password=7yyfhn85;
This chapter describes how to set up and use a MySQL database as the source or target
endpoint in a replication task. It contains the following topics:
n Prerequisites
n Limitations
n Using a MySQL Database as a Source
n Using a MySQL Database as a Target
Prerequisites
Before you begin to work with a MySQL database as a source or target in Amazon
RDS Migration Tool, make sure that the following prerequisites have been met.
General Prerequisites
n Amazon RDS Migration Tool must be installed on a Windows or Linux computer in your network.
n A MySQL account with the required privileges (see Security Requirements).
n A MySQL database with the tables that you want to replicate should be accessible
in your network.
The following MySQL editions are supported:
– MySQL Community Edition
– MySQL Standard Edition
– MySQL Enterprise Edition
– MySQL Cluster Carrier Grade Edition
Cluster Prerequisites
To be able to replicate clustered (NDB) tables (i.e. by connecting Amazon RDS
Migration Tool to any of the cluster nodes), the following parameters must be
configured in MySQL’s my.ini (Windows) or my.cnf (UNIX) files.
Security Requirements
The Amazon RDS Migration Tool user must have the ReplicationAdmin role with the
following privileges (according to task type):
n REPLICATION CLIENT - Required for Change Processing tasks only. In other
words, Full Load only tasks do not require this privilege.
n REPLICATION SLAVE - Required for Change Processing tasks only. In other
words, Full Load only tasks do not require this privilege.
n SUPER - Only required in versions prior to MySQL 5.6.6.
The Amazon RDS Migration Tool user must also have SELECT privileges for the
source tables designated for replication.
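The following is a minimal sketch of how these privileges might be granted directly to a user; the user name rdsmt_user, the host wildcard, the password, and the schema name mydb are placeholders that you should adapt to your environment:
CREATE USER 'rdsmt_user'@'%' IDENTIFIED BY 'StrongPassword';
GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'rdsmt_user'@'%';
GRANT SELECT ON mydb.* TO 'rdsmt_user'@'%';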
MySQL Source Data Types Amazon RDS Migration Tool Data Types
ENUM STRING
SET STRING
Notes: You can also use MySQL files as a source. For more
information, see Using the Amazon RDS Migration Tool File Channel.
Note:
n This information is case sensitive.
n You can use the Advanced tab to add specific properties and
create a custom connect string. In this case, you do not need to
enter information in this tab. For more information on using the
Advanced tab, see Using Advanced Properties for a MySQL
Source.
n To determine if you are connected to the database you want to use
or if the connection information you entered is correct, click Test.
If the connection is successful, a message in blue is displayed. If
the connection fails, a message in red that includes the log error
information is displayed at the bottom of the dialog box.
To view the log entry if the connection fails, click View Log. The
server log is displayed with the information for the connection
failure. Note that this button is not available unless the test
connection fails.
Note:
n This information is required. If you are using the Advanced tab to
create a custom string, make sure to include the User Name and
Password properties. See Using Advanced Properties for a
MySQL Source for more information.
n This information is case sensitive.
n Important: Make sure that the MySQL user entered in the MySQL
Authentication section has the correct access privileges. For
information on how to provide the required privileges, see
Security Requirements.
n If you want to set custom properties for this database, see Using
Advanced Properties for a MySQL Source.
Selecting a Schema
You can choose which MySQL database to access. After configuring the MySQL source
database connection settings, open the Select Tables dialog box (by clicking the Table
Selection button on the right of the console) and select which schema to use from the
Schema drop down list.
See also: Chapter 14, "Designing Tasks".
Note: Amazon RDS Migration Tool assumes that MySQL Client 5.2
for Linux or MySQL ODBC Client 5.2.6 64-bit for Windows has been
installed on the Amazon RDS Migration Server machine. If a later
version has been installed, you need to specify the version number in
the Additional ODBC connection properties field, as in the following
example:
provider=MySQL ODBC 5.3 Unicode Driver
Security Requirements
You must provide MySQL account access to the Amazon RDS Migration Tool user.
This user must have read/write privileges in the MySQL database.
Table 8–6 Supported MySQL Data Types with Mapping from Amazon RDS Migration
Tool Data Types
Amazon RDS Migration Tool Data Types MySQL Data Types
BOOLEAN BOOLEAN
BYTES If length is => 1 and =< 65535, then:
VARBINARY (Length)
If length is => 65536 and =< 2147483647,
then:
LONGBLOB
DATE DATE
TIME TIME
TIMESTAMP If scale is => 0 and =< 6, then:
DATETIME (s)
If scale is => 7 and =< 9, then:
VARCHAR (37)
INT1 TINYINT
INT2 SMALLINT
INT4 INTEGER
INT8 BIGINT
NUMERIC DECIMAL (p,s)
REAL4 FLOAT
REAL8 DOUBLE PRECISION
STRING If length is => 1 and =< 21845, then:
VARCHAR (Length)
If length is => 21846 and =< 2147483647,
then:
LONGTEXT
UINT1 UNSIGNED TINYINT
UINT2 UNSIGNED SMALLINT
UINT4 UNSIGNED INTEGER
UINT8 UNSIGNED BIGINT
WSTRING If length is => 1 and =< 32767, then:
VARCHAR (Length)
If length is => 32768 and =< 2147483647,
then:
LONGTEXT
Note:
n This information is required. If you are using the Advanced tab to
create a custom string, make sure to include the User Name and
Password properties. See Using Advanced Properties for a
MySQL Target for more information.
n This information is case sensitive.
n Important: Make sure that the MySQL user entered in the MySQL
Authentication section has the correct access privileges. For
information on how to provide the required privileges, see
Security Requirements.
n If you want to set custom properties for this database, see Using
Advanced Properties for a MySQL Target.
Note: Amazon RDS Migration Tool assumes that MySQL Client 5.2
for Linux or MySQL ODBC Client 5.2.6 64-bit for Windows has been
installed on the Amazon RDS Migration Server machine. If a later
version has been installed, you need to specify the version number in
the Additional ODBC connection properties field, as in the following
example:
driver=MySQL ODBC 5.3 Unicode Driver
This chapter describes how to use ODBC connectivity to connect to a source database.
It contains the following sections:
n Prerequisites
n Limitations
n Using ODBC to Connect to a Source
Prerequisites
The following section describes the prerequisites for working with Amazon RDS
Migration Tool and an ODBC endpoint.
You can connect a database to Amazon RDS Migration Tool using ODBC by indicating
the DSN (Data Source Name). In this case you must be sure that a DSN is defined for
the ODBC database on the computer where Amazon RDS Migration Tool is installed.
1. Install a database client on the computer where Amazon RDS Migration Tool is
installed. The client you install depends on the ODBC provider you are using. For
example, if you are using an IBM DB2 database, install an IBM DB2 client.
Note: You must use a 64-bit ODBC provider client to work with Amazon RDS
Migration Tool.
2. Use the ODBC Data Source Administrator to create a System DSN. The ODBC Data Source Administrator is located in the Windows Control Panel.
Limitations
When using ODBC as a source, the following limitations apply:
n UPDATES to primary key fields are not supported. To update the field, define it as
a unique index instead.
n The ODBC Source endpoint supports full-load operations only.
n For providers that do not support batch operations, you must manually add
RowByRow=true to the connect string in the Advanced tab of the Add Database
dialog box. See Configuration Properties for the ODBC Source for an explanation
of how to use the RowByRow property.
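For example, combining the DSN-style connect string shown later in this chapter with the RowByRow property might look like the following (the DSN and user name are placeholders):
DSN=PERSON;uid=PP1;RowByRow=true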
BulkArraySize: Sets the array size for all DML queries (when using array binding). Default: 1000.
Possible values: Any positive number.
This value must be set manually using the Override connection string parameters option in the Advanced tab.
Table 9–2 Supported ODBC Source Data Types with Mapping to Amazon RDS Migration
Tool Data Types
ODBC Data Types Amazon RDS Migration Tool Data Types
SQL_BIT BOOLEAN
SQL_TINYINT INT1
UINT1
Note: SQL data types are mapped to unsigned
data types when the UNSIGNED_ATTRIBUTE
is set to SQL_TRUE for the data type being
mapped.
SQL_SMALLINT INT2
UINT2
Note: SQL data types are mapped to unsigned
data types when the UNSIGNED_ATTRIBUTE
is set to SQL_TRUE for the data type being
mapped.
Note: If you are using an AIS CDC Agent as the source in an Amazon
RDS Migration Tool task, you cannot select the DSN for the Amazon
ODBC driver as the target. In this case, to use Amazon ODBC as a
source, you must enter the connection string manually by selecting
Connection String and following the directions for that option in this
procedure.
Note:
n You can use the Advanced tab to add specific properties and
create a custom connect string. In this case, you do not need to
enter information in this tab. For more information on using the
Advanced tab, see Using Advanced Properties when Using ODBC
Databases as a Source.
n To determine if you are connected to the database you want to use
or if the connection information you entered is correct, click Test.
If the connection is successful, a message in blue is displayed. If
the connection fails, a message in red that includes the log error
information is displayed at the bottom of the dialog box.
To view the log entry if the connection fails, click View Log. The
server log is displayed with the information for the connection
failure. Note that this button is not available unless the test
connection fails.
Notes:
n When you select Connection String be sure to include User
name/password information in the connection string that you
type in the box.
If you are using the Advanced tab to create a custom string, make sure to
include the User Name and Password properties. For more information,
see Using Advanced Properties when Using ODBC Databases as a
Source.
n This information is case sensitive.
n Important: Make sure that the ODBC database user has the correct access privileges for the ODBC provider being used. For information on how to provide the required privileges, see the documentation for your ODBC provider.
n You can set custom properties in the Advanced tab. For more
information, see Using Advanced Properties when Using ODBC
Databases as a Source.
Provider syntax: Select the name of the provider syntax if you are using an alternate
provider syntax.
Override connection string parameters: Select this to use a custom connect string. The
following is an example of a string you can enter in this dialog box:
DSN=PERSON;uid=PP1
For information on the properties you can use in the connect string, see Configuration
Properties for the ODBC Source.
This chapter describes how to set up and use Amazon Redshift as a target in a
replication task. Amazon Redshift is located in the cloud and is accessed through an
Amazon Web Services (AWS) account.
This chapter contains the following topics:
n Introducing the Amazon Redshift Target Endpoint for Amazon RDS Migration
Tool
n Amazon Redshift Endpoint Prerequisites
n Configuration Properties
n Amazon Redshift Data Types
n Setting up Amazon Redshift as a Target in Amazon RDS Migration Tool
Configure the Amazon RDS Migration Transfer Service AMI (Amazon Machine Image)
A password is required in order to establish a secure connection between the Amazon
RDS Migration Server and your Amazon RDS Migration Transfer Service AMI (EC2
Instance).
1. Log in to your Amazon RDS Migration Transfer Service AMI (EC2 instance) using
remote desktop or a similar utility. A shortcut to the Amazon RDS Migration
Transfer Service Configuration utility should appear on your desktop.
2. Open the Amazon RDS Migration Transfer Service Configuration utility.
3. Either manually enter a password or click Generate Strong Password. Amazon
RDS Migration Transfer Service requires a password for authentication and data
encryption purposes.
4. Click Set Password and then Copy to Clipboard. You will need to enter this
password when configuring the Amazon RDS Migration Tool Amazon Redshift
target.
Note: You can change the password at any time. However, before changing the
password, it is recommended to stop any tasks with an Amazon Redshift target
(and that are configured to use the same Amazon RDS Migration Transfer Service
AMI). After changing the password, update any relevant tasks with the new
password and then start them.
5. Close the Amazon RDS Migration Transfer Service Configuration utility and then
disconnect from your Amazon RDS Migration Transfer Service AMI EC2 instance.
Configuration Properties
The following table describes the configuration properties available for the Amazon
Redshift endpoint. For information on how to set these properties, see Using
Advanced Properties for an Amazon Redshift Target.
Table 10–2 Supported Amazon Redshift Data Types with Mapping from Amazon RDS
Migration Tool Data Types
Amazon RDS Migration Tool Data Types Amazon Redshift Data Types
BOOLEAN BOOL
BYTES VARCHAR (Length)
DATE DATE
TIME VARCHAR(20)
DATETIME If scale is => 0 and =< 6, then:
TIMESTAMP (s)
If scale is => 7 and =< 9, then:
VARCHAR (37)
INT1 INT2
INT2 INT2
INT4 INT4
INT8 INT8
NUMERIC If scale is => 0 and =< 37, then:
NUMERIC (p,s)
If scale is => 38 and =< 127, then:
VARCHAR (Length)
REAL4 FLOAT4
REAL8 FLOAT8
n Bucket name: Type the name of the Amazon S3 bucket where you are copying
files to.
n Bucket region: Select the Amazon S3 region where the S3 buckets and folders
you are using are hosted. The default value is US East (N. Virginia).
Note: The bucket region specified must be the same region where your
Amazon Redshift database is located.
n Folder: Type the name of the S3 folder where you are copying files to.
n Access key: Type the access key information for Amazon S3.
n Secret key: Type the secret key information for Amazon S3.
n Folder: Type or browse to the S3 folder where you are copying files to.
The information for these properties is available from your Amazon Web Services
(AWS) account. If you do not have these values, refer to your AWS account or the
Amazon Redshift System Administrator for your enterprise.
Notes:
n If you are using the Advanced tab to create a custom string, make
sure to include the User Name property. A Password can also be
included but is not required. See Using Advanced Properties for
an Amazon Redshift Target for more information.
n This information is case sensitive.
n If you want to set custom properties, see Using Advanced
Properties for an Amazon Redshift Target.
n To determine if you are connected to the database you want to use
or if the connection information you entered is correct, click Test.
If the connection is successful, a message in blue is displayed. If
the connection fails, a message in red that includes the log error
information is displayed at the bottom of the dialog box.
To view the log entry if the connection fails, click View Log. The
server log is displayed with the information for the connection
failure. Note that this button is not available unless the test
connection fails.
For information on the properties you can include in a custom connect string for
Amazon Redshift, see Configuration Properties.
This chapter describes how to set up and use a PostgreSQL database as a source or
target in a replication task. It has the following topics:
n Using PostgreSQL Database as a Source
n Source Prerequisites
n Security Requirements
n PostgreSQL Source Data Types
n Setting up PostgreSQL database as a Source in Amazon RDS Migration Tool
n Using PostgreSQL as a Target
n Target Prerequisites
n Security Requirements
n PostgreSQL Database Target Data Types
n Setting up PostgreSQL database as a Target in Amazon RDS Migration Tool
Client Side
n Amazon RDS Migration Server for Windows: The PostgreSQL ODBC Driver
psqlodbc_09_03_0300-x64-1 or above must be installed on the Amazon RDS
Migration Tool machine.
n Amazon RDS Migration Server for Linux: On the Amazon RDS Migration Tool
machine:
1. Install unixODBC driver postgresql94-odbc-09.03.0400-1PGDG.<OS
version>.x86_64 or above for Linux, where <OS version> is the OS of
the Amazon RDS Migration Server machine.
For example, postgresql94-odbc-09.03.0400-1PGDG.rhel7.x86_64 is the client required for Red Hat 7.
2. Make sure that the /etc/odbcinst.ini file contains an entry for
PostgreSQL, as in the following example:
[PostgreSQL]
Description = PostgreSQL ODBC driver
Driver = /usr/pgsql-9.4/lib/psqlodbc.so
Setup = /usr/pgsql-9.4/lib/psqlodbcw.so
Debug = 0
CommLog = 1
UsageCount = 2
n When the Apply Changes task option is enabled, the user specified in the
PostgreSQL source endpoint’s General tab must be granted super-user
permissions.
Server Side
n The IP address of the Amazon RDS Migration Tool machine must be added to the pg_hba.conf configuration file (a sample entry follows this list).
n The following parameters and values must be set in the postgresql.conf configuration file:
wal_level = logical
max_replication_slots >= 1
wal_sender_timeout = 0
The max_wal_senders parameter sets the number of concurrent tasks that can run.
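For the pg_hba.conf entry mentioned in the first item above, a minimal sketch follows; the address 10.10.10.10/32 stands for the IP address of the Amazon RDS Migration Tool machine, and the database, user, and authentication method columns are placeholders to adapt to your environment:
# TYPE  DATABASE  USER  ADDRESS          METHOD
host    all       all   10.10.10.10/32   md5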
Security Requirements
The user specified in the General tab when Setting up PostgreSQL database as a
Source in Amazon RDS Migration Tool must be a registered user in the PostgreSQL
database.
Source Limitations
The following Change Processing limitations apply when using PostgreSQL as a
source:
n A captured table must have a Primary Key. In the event that a table does not have
a Primary Key, DELETE record operations will be ignored.
n Updates to a Primary Key segment are ignored. The target identifies such an update as one that did not update any rows, and a record is written to the exceptions table.
n The “Start Process Changes from Timestamp” run option is not supported.
n Change processing is not supported on Amazon RDS for PostgreSQL.
n Replication of multiple tables with the same name but a different case (e.g.
table1, TABLE1 and Table1) may cause unpredictable behavior and is
therefore not supported.
n Change processing of [CREATE | ALTER | DROP] table DDLs is supported
unless they are held in an inner function/procedure body block or in other nested
constructs.
For example, the following change will not be captured:
CREATE OR REPLACE FUNCTION attu.create_distributors1() RETURNS void
LANGUAGE plpgsql
AS $$
BEGIN
create table attu.distributors1(did serial PRIMARY KEY, name varchar(40) NOT NULL);
END;
$$;
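By contrast, the same DDL issued directly as a top-level statement (rather than inside a function or procedure body) is streamed and can therefore be captured, for example:
create table attu.distributors2(did serial PRIMARY KEY, name varchar(40) NOT NULL);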
Table 11–1 Supported PostgreSQL Source Data Types with Mapping to Amazon RDS
Migration Tool Data Types
PostgreSQL Data Types Amazon RDS Migration Tool Data Types
INTEGER INT4
SMALLINT INT2
BIGINT INT8
NUMERIC(P,S) If precision is => 0 and =< 38, then:
NUMERIC
If precision is => 39, then:
STRING
DECIMAL(P,S) If precision is => 0 and =< 38, then:
NUMERIC
If precision is => 39, then:
STRING
REAL REAL4
DOUBLE REAL8
SMALLSERIAL INT2
SERIAL INT4
BIGSERIAL INT8
MONEY NUMERIC(38,4)
Note: The MONEY data type is mapped to FLOAT in
Microsoft SQL Server.
CHAR WSTRING (1)
CHAR(N) WSTRING (n)
VARCHAR(N) WSTRING (n)
TEXT NCLOB
BYTEA BLOB
TIMESTAMP TIMESTAMP
6. Type the Server name. This is the name or IP address of the computer with the
PostgreSQL database that you want to access.
7. Optionally, change the default port (5432).
8. Enter the PostgreSQL database authentication information (User name, Password)
of an authorized PostgreSQL user. If you do not know this information, see your
PostgreSQL database system manager.
9. Type the Database name or select one from the list of available databases. This is
the name of the PostgreSQL database from which you are replicating data.
10. Click OK to save your settings and close the dialog box.
n Capture DDLs - When this option is selected, the following actions occur:
– Operational artifacts are created (by RDS Migration Tool) in the database
when the task starts. In order to capture DDL events, Amazon RDS Migration
Tool creates various artifacts in the PostgreSQL database when the task starts.
You can later remove these artifacts as described in Removing RDS Migration
Tool Artifacts from the Source Database.
– Streamed DDL events are captured.
n Create DDL artifacts in schema - The schema in which the operational DDL
database artifacts will be created. The default value is "Public".
n Override connection string parameters: Select this option if you need to:
– Specify internal RDS Migration Tool parameters. Such parameters are rarely
required and are therefore not exposed in the UI.
– Specify pass-through (passthru) values for the specific database client. The
pass-through parameter will be included in the connection string passed (by
RDS Migration Tool) to the database client.
Note that, apart from the password (which is never revealed in plain text), any
parameters already set in the UI will also be displayed in the edit box.
Target Prerequisites
The following section describes the client prerequisites when replicating to a
PostgreSQL target.
n Amazon RDS Migration Server for Windows: The PostgreSQL ODBC Driver
psqlodbc_09_03_0300-x64-1 or above must be installed on the Amazon RDS
Migration Tool machine.
n Amazon RDS Migration Server for Linux: On the Amazon RDS Migration Tool
machine:
1. Install unixODBC driver postgresql94-odbc-09.03.0400-1PGDG.<OS
version>.x86_64 or above for Linux, where <OS version> is the OS of
the RDS Migration Tool Server machine.
For example, postgresql94-odbc-09.03.0400-1PGDG.rhel7.x86_64 is the client required for Red Hat 7.
2. Make sure that the /etc/odbcinst.ini file contains an entry for
PostgreSQL, as in the following example:
[PostgreSQL]
Description = PostgreSQL ODBC driver
Driver = /usr/lib/odbc/psqlodbca.so
Setup = /usr/lib/odbc/libodbcpsqlS.so
Debug = 0
CommLog = 1
UsageCount = 2
n Amazon RDS Migration Server for Windows and Linux: psql -- PostgreSQL
interactive terminal must be installed.
Security Requirements
The user specified in the General tab when Setting up PostgreSQL database as a Target
in Amazon RDS Migration Tool must be a registered user in the PostgreSQL database.
6. Type the Server name. This is the name or IP address of the computer with the
PostgreSQL database that you want to access.
Notes:
n This information is case sensitive.
n Important: Make sure that the specified PostgreSQL database user
has the correct access privileges.
n To determine if you are connected to the database you want to use
or if the connection information you entered is correct, click Test.
If the connection is successful, a message in blue is displayed. If
the connection fails, a message in red that includes the log error
information is displayed at the bottom of the dialog box.
To view the log entry if the connection fails, click View Log. The
server log is displayed with the information for the connection
failure. Note that this button is not available unless the test
connection fails.
9. Type the Database name or select one from the list of available databases. This is
the name of the PostgreSQL database to which you are replicating data.
10. Click OK to save your settings and close the dialog box.
This chapter describes how to use the Amazon RDS Migration Tool file channel as a
source or target in a replication task.
It covers the following topics:
n Setting Up Amazon RDS Migration Tool File Channel Tasks
n Working with the File Channel Data Files
n Amazon RDS Migration Tool Installation Requirements for the File Channel
n Security
n Limitations
n Using the File Channel as a Source
n Using the File Channel as a Target
Note: When using file channel, Change Tables can be enabled for the
remote task but not for the local task (enabling Change Tables for the
local task will result in remote task failure).
Local Task
You set up the local task using the File-Channel endpoint as a target. The binary file
created in this task is used as the source for one or more remote tasks using the
File-Channel source endpoint.
The local task replicates data from an Amazon RDS Migration Tool supported
database to the file channel. If you changed the default folder for storing data files
(during the installation), then you must specify the location of the binary file created
by the file channel. This location can be anywhere in your system. For more
information on setting up a local task, see Using the File Channel as a Target.
Note: To replicate tables that were added to the local file channel
task after the initial full load, you need to reload both the local and the
remote file channel tasks.
For more information on setting up a remote task, see Using the File Channel as a
Source.
To do this:
1. For each of the target databases, define a separate (remote) task that replicates
from the File Channel source to the target database. In the Advanced tab of the
File Channel source settings, make sure to clear the Delete processed files check
box. This ensures that the File Channel files will be available for distribution as
required.
2. Define a local task that replicates from the source database to a File Channel target.
3. Run the local task (this will create the File Channel files required by the remote
task).
4. For each of the remote tasks, select which tables to replicate (from the File Channel
source) and optionally apply Filters and Transformations to them.
5. Run the remote tasks.
For more information on defining tasks, see Designing Tasks.
For information on Filters and Transformations, see Customizing Tasks.
Limitations
n The File-Channel endpoint does not support the use of BLOBs.
n You cannot use the full-load resume function if you are using the file-channel
endpoint. To resume a full-load operation, you must delete the original data and
then run the task again.
n You must delete the file-channel folder before re-starting a task for change
processing.
n Control tables defined for the Local File Channel task but not for the Remote File
Channel task will not be created on the remote task’s target endpoint.
For information on defining Control Tables, see Control Tables.
Note: The RDS Migration Tool File Transfer Service always transfers
the local file channel task’s files to the default directory on the remote
system (C:\Program Files\Amazon\RDS Migration
Tool\data\databases\<remote_file-channel_db_name>\fc).
Consequently, if you are using the File Transfer Service, ensure that
the default directory always has enough space for the incoming files.
For more information on using the File Transfer Service, see File
Transfer Service and Using Advanced Properties for a File-Channel
Source.
This folder should be in a location that is accessible from anywhere in the WAN
you are working with.
Note:
n You can use the Advanced tab to define specific properties and
create a custom connect string. In this case, you do not need to
enter information in this tab. For more information on using the
Advanced tab, see Using Advanced Properties for a File-Channel
Source.
n To determine if you are connected to the database you want to use
or if the connection information you entered is correct, click Test.
If the connection is successful, a message in blue is displayed. If
the connection fails, a message in red that includes the log error
information is displayed at the bottom of the dialog box.
To view the log entry if the connection fails, click View Log. The
server log is displayed with the information for the connection
failure. Note that this button is not available unless the test
connection fails.
For more information about the File Transfer Service, see File Transfer Service.
n Delete processed files: Select this check box to delete the File Channel files after
the data has been replicated to the target database.
You should clear this check box if other tasks need to use the files.
n Override connection string parameters: Select this to use a custom connect string.
The following is an example of a string you can enter in this dialog box:
DeleteProcessedFiles=Y;BufferSize=1000;path=\\folder\file_channel;
Example:
RemoteDatabases=MyDB@123.123.123.1:9090
fileTransferUploadStreams: The maximum number of streams to use to transfer the files when TransferFiles=Y. You can adjust the number of streams to achieve optimal transfer speeds. Default: 10.
Notes:
n The Type is different depending on the type of file you are creating; however, the information you enter is the same for all file types.
n All files are used as targets; however, you can use an Amazon RDS Migration Tool file as a source only after you have created the file by loading data into it as a target.
Note:
n You can use the Advanced tab to define specific properties and
create a custom connect string. In this case, you do not need to
enter information in this tab. For more information on using the
Advanced tab, see Using Advanced Properties for a File-Channel
Target.
n To determine if you are connected to the database you want to use
or if the connection information you entered is correct, click Test.
If the connection is successful, a message in blue is displayed. If
the connection fails, a message in red that includes the log error
information is displayed at the bottom of the dialog box.
To view the log entry if the connection fails, click View Log. The
server log is displayed with the information for the connection
failure. Note that this button is not available unless the test
connection fails.
n Override connection string parameters: Select this to use a custom connect string.
The following is an example of a string you can enter in this dialog box:
BufferSize=1000;MaxStorageSize=100000;MaxBatchingTimeInterval=120;MaxFileSize=32000;path=\\folder\file_channel;
For a description of the supported connection string parameters, see File Channel
Target Properties.
This chapter describes the elements of the Amazon RDS Migration Console and how
to control access to the console by taking advantage of Amazon RDS Migration Tool’s
built-in security roles. The Console is a Web-based interface that runs in most modern
browsers. (For information on supported browsers, see Additional Required Software.)
You can connect from any computer to the Amazon RDS Migration Server. The
following topics describe the Amazon RDS Migration Console.
n The Tasks View
n Viewing Specific Tasks
n The Server View
n Security Roles
Task Icons
Icons in the task view show the current status for each task. The following figure
describes the information shown in the icon.
This status icon shows that the task is stopped due to an error. The red stopped icon indicates the error state, while the blue database icons indicate that the task process is still active. You can stop the task from the task monitor in the Amazon RDS Migration Console. The red circle shows that there is at least one error notification.
This icon indicates that the task is running, as shown by the status icons and the blue database color.
Design Mode
You use the design mode to create a task and assign tables to be replicated. To view the
Design mode, click Design from the right side of the button bar.
Note: The Design mode is the default mode displayed when you view a task.
The following figure shows the overview of the design mode:
Monitor Mode
The monitor mode lets you view the replication task activities in real time. To view the
Monitor mode, click Monitor from the right side of the button bar.
The figure below shows an overview of the monitor mode:
Security roles cannot be edited nor can the permissions assigned to each of the roles be
changed. The user under whose account Amazon RDS Migration Tool is installed will
be associated with the Admin role by default.
You can associate users with each of the security roles by adding the user to the
appropriate Active Directory group.
To enable security roles, you need to create Active Directory groups with the same
names as the GroupRef elements listed in the table above.
Any users added to the Active Directory groups will be able to perform tasks in the
Amazon RDS Migration Console according to their group’s predefined Security Role
Permissions.
The following is an example of how these roles appear in the UserConfiguration.xml
file. Make sure that the computer can access the domains where the users are located.
<Role name="Admin" anonymous="false">
For Viewers:
No action buttons will be available to Viewers, since Viewers are not allowed to
perform any operations.
This chapter describes how to design a replication task. To design a replication task,
you must first be sure that you have configured at least one source and one target
database to work with Amazon RDS Migration Tool. Then you can create the task and
add tables to be replicated.
It is also possible to customize a task by creating new tables or columns for the target
database or by selecting only some of the data from each column to be replicated. This
is done using transforms and filters. For more information, see Customizing Tasks.
For more information about replication tasks, see What is a Replication Task.
This chapter contains the following topics:
n Setting up Tasks
n Working with Databases
n Adding a Source and Target Database to a Task
n Adding Tables to a Task
n Editing a Replication Task
n Deleting a Replication Task
n Importing and Exporting Tasks
Setting up Tasks
Before you get started with designing all of the features that you need for a task, you
must define its default behavior.
2. Type a name for the task. The name can be anything that is useful in remembering
what the task is doing.
3. Type a description for the task. This is optional.
4. Toggle the task options:
n Full Load: Click this button to enable or disable Full Load options for this
task.
When full load is enabled, Amazon RDS Migration Tool loads the initial
source data to the target database. By default, a full load is carried out for this task.
To add a database
1. In the Tasks view, click Manage Databases in the toolbar. The Manage Databases dialog box opens.
2. In the Manage Databases dialog box, click Add Database. The Add Database dialog box opens.
3. In the Add Database dialog box, select the type of database you are using. The
information that you must enter depends on which database you select. For more
information, see the chapter that describes the database you are using. For a list of
databases supported by Amazon RDS Migration Tool, see Databases Supported as
Amazon RDS Migration Tool Endpoints.
Once you add databases to Amazon RDS Migration Tool, you can begin to use them to
build a replication task. For information on how to add a database to a replication task,
see Adding a Source and Target Database to a Task.
2. Drag a source database to the top square in the task map, where it is indicated to
drop the source database. If Amazon RDS Migration Tool does not let you do this,
make sure that the database you are using is a source database. You can see this
information in the Manage Databases dialog box.
3. Drag a target database to the bottom square in the task map, where it is indicated
to drop the target database. If Amazon RDS Migration Tool does not let you do
this, make sure that the database you are using is a target database. You can see
this information in the Manage Databases dialog box.
4. Click Save to make sure that Amazon RDS Migration Tool saves this task.
Once you select the databases for your task, you must select the tables from the source database to be replicated. The next step in creating a replication task is Adding Tables to a Task.
See the following for information on how to work with the Select Tables dialog box:
n Search for Tables to use in a Replication Task
n Selecting Specific Tables for Replication
n Create Table Selection Patterns
Once you have selected tables to replicate, you can run the replication task. However, if
you want to make any changes to the structure of the tables in the target database or
only select specific columns, you will need to carry out one or both of the following
procedures:
n Define Transformations on a Single Table
n Using Filters
2. Click Search to display a list of tables. Any table that matches the criteria that you
entered in the fields above will be displayed. You can use this table list to select
tables to use for replication.
After you finish searching, you can select specific tables for replication, as described in Selecting Specific Tables for Replication.
n Select a table owner from the Schema drop-down list. If you select a schema, all tables that belong to that schema are included in the table selection pattern.
n Type the name or partial name of a table in the Table field. Any specific table that you list here is included in the table selection pattern.
If the table that you type here is a member of the schema you selected in the Schema drop-down list, then you only have to type the name of the table.
If you did not select a schema, or the table belongs to another schema, include the schema with the table name in the following format: HR.Employees, where HR is the owner.
2. Click Include to include all of the tables that match the criteria you selected in the
Source Tables section.
3. Click Exclude to exclude any tables that match the criteria entered in the Source
tables section.
4. Click OK to close the Select Tables dialog box.
5. Click Save to make sure that Amazon RDS Migration Tool saves the table
information for this task.
The following example shows a pattern that will replicate all tables that are members
of the HR schema except for the HR.EMPLOYEES table.
Include HR.%
Exclude HR.EMPLOYEES%
When you explicitly select tables, all selected tables are replicated in full unless you
define transformations or filters for the table. If you need to make changes to the table
structures in the target database or if you only want to select specific columns, then
you will need to perform the procedures described in Define Transformations on a
Single Table and Using Filters respectively.
Note: To view all of the tables included when you use a table
selection pattern, click Expand List. The Full Table List lists all of the
tables included in any table pattern you defined and all explicitly
selected tables.
Click Collapse List to return to the main part of the Tables tab.
3. To add schemas:
n One at a time: From the Schema List, select the schema you want to include in
the replication and then click Add.
n To add all existing schemas: Click Add All.
4. To remove schemas:
n One at a time: From the Selected Schemas list, select the schema you do not
want to include in the replication and then click Remove.
n To remove all existing schemas: Click Remove All.
5. Click OK to add the selected schemas to the task. The schemas will be displayed in
the Selected Schemas list on the right of the Tasks view.
To open a task
1. Make sure you are in the Task view. If you see Settings in the button bar at the top
of the Amazon RDS Migration Console, click on the triangle and select Tasks.
2. Click the task you want to work with. A task map is displayed in the right pane.
3. At the top of the task map, click Open. The information about the task is
displayed.
See the following for information on how to make the changes to a task.
n Adding a Source and Target Database to a Task
n Adding Tables to a Task
n Define Transformations on a Single Table
n Using Filters
n Task Settings
To delete a task
1. Make sure you are in the Task view. If you see Settings in the button bar at the top
of the Amazon RDS Migration Console, click on the triangle and select Tasks.
2. Select the task you want to work with.
3. At the top of the task map, click Delete. The task and all of its defined endpoints
are deleted. The source and target database definitions that you created are still
available to be used in other tasks.
When you export a task to a different environment, you may need to edit the task information. For example, you may need to change the connect string for a database. The following sections describe how to work with export files.
n How to: Export and Import a Task
n Editing an Exported (json) File
To export a task
1. From the Amazon RDS Migration Tool computer where you defined the task you want to export, open the Amazon RDS Migration Tool command line console by doing the following:
doing the following:
On Windows: Go to All Programs in the Start menu and point to Amazon RDS
Migration Tool, then to Utilities and then select Amazon RDS Migration Tool
Command Line.
A command-line console is displayed with the correct prompt for Amazon RDS
Migration Tool.
Note: You can also open the Windows command-line console and change the
directory to the following:
<Full path to the Amazon RDS Migration Tool root folder>\bin. For example, if Amazon RDS Migration Tool is installed in the default location, type C:\Program Files\Amazon\RDS Migration Tool\bin.
On Linux: Enter the Linux computer, then type the following before you continue:
source ./arep_login.sh
2. At the prompt in the command-line console, type the following:
repctl exportrepository task=<task_name>
A file called task_name.json is copied to the <Full path to the Amazon RDS
Migration Tool root folder>\data\imports folder or directory.
You can import this file into another Amazon RDS Migration Tool instance.
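For example, assuming a task named MyTask (a placeholder name), the command would be:
repctl exportrepository task=MyTask
This creates MyTask.json under <Full path to the Amazon RDS Migration Tool root folder>\data\imports, which in a default Windows installation is C:\Program Files\Amazon\RDS Migration Tool\data\imports.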
To import a task
1. From the Amazon RDS Migration Tool computer where you want to import the task, open the Amazon RDS Migration Tool command line console by doing the following:
following:
Notes:
n Make any changes to the *.json file before you carry out the import
operation.
n Information about the databases, tables, tasks, task settings, and
logger settings should be changed using the Amazon RDS
Migration Console after the file is imported.
To be able to use the new task, you will need to make changes to
the database password and connection strings by editing the
*.json file. See Making Changes to the Database Connection Information
for more information.
This chapter describes how to customize a replication task. Customizing tasks lets you
create new tables or columns for the target database or select only some of the data
from each column to be replicated. This is done using transforms and filters.
For more information about replication tasks, see What is a Replication Task.
The following topics describe all of the procedures that can be included in customizing
a replication task.
n Table Settings
n Defining Global Transformations
n Using the Expression Builder (for Filters, Transforms, and Global Transformations)
n Task Settings
Table Settings
Table settings lets you define how the data for each individual table is replicated to the
target. To carry out these tasks, use the Table Settings dialog box.
Note: This option is available for tables where changes were made.
When you make changes to a table, the table is displayed in the table
list with the word (changed).
Amazon RDS Migration Tool lets you make the following changes to the tables and
columns:
n Rename any column for the target table
n Delete a target column
n Change the data type and/or the length of any target column
n Add additional target columns
n Designate which target columns (i.e. segments) will comprise the Unique Index
n Recalculate the data
Use this method if you need to make transformations that are specific to only one or a
few tables in your task. For an explanation of how to configure transforms, see Using
the Transform Tab.
To make a similar change over multiple tables, see Defining Global Transformations.
To create an expression
1. In the Transform window, select the row with the column where you want to
create a new expression.
or
Click Add Column to add a new column.
2. Drag column inputs into the Calculation Expression field at the bottom of the
window. This will create an expression that includes the value of the column.
3. Add strings or number values to the Calculation Expression. These will affect the
value of the column after the transformation.
4. Add SQLite operators between the inputs and/or strings/number values, for
example ||, which concatenates the strings for the two inputs you add.
The following are some examples of expressions you can use in the transform window.
FIRST_NAME||LAST_NAME
This combines the string values from each of the columns.
$SALARY+0.1*$SALARY
This adds 10 percent to each salary value. The salary column should be a numeric
data type.
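The following additional example is a sketch that assumes the same FIRST_NAME and
LAST_NAME columns and uses a quoted string literal for the separator:
FIRST_NAME||' '||LAST_NAME
This concatenates the two name values with a space between them.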
For information on the SQLite syntax operators you can use, see Using SQLite Syntax
with Transforms.
For more information about SQLite syntax, see the SQLite documentation.
Using Filters
The filtering operation lets you create filters that define the information from a column
to include in/exclude from a replication task. This lets you replicate only the specific
data that you need.
This section covers the following topics:
n Limitations
n Opening the Filter Table Window
n Creating a Filter Condition for a Specified Column
n Creating a Record Selection Condition for One or More Columns
n Using the Range Builder
n Using SQLite Syntax with Filtering
Limitations
When creating a filter, the following limitations apply:
n Filters cannot be calculated for columns of Right-to-Left languages.
n When a filter is created to exclude specific rows in a column, the specified rows
will always be excluded, even if the rows that were initially excluded are later
changed. For example, if you chose to exclude rows "1-10" in a column named
"Age" and those rows were later changed to "11-20", the rows will continue to be
excluded, even though the data is no longer the same.
Note: You can use the following special characters when entering a
string:
n %: Matches any string of zero or more characters. For example,
Mc% matches every name that begins with Mc, and %bob%
matches every name that contains bob.
n _: Matches a single character (as a wildcard). For example, 'Sm_th'
matches names that begin with Sm and end with th, such as
Smith or Smyth. To search for an underscore character, use [_].
n [..]: Includes a range or set of characters. For example,
[CK]ars[eo] matches the names Carsen, Karsen, Carson, and
Karson, and [M-Z]inger matches all words that end in inger with
the first letter between M and Z, such as Ringer, Singer, or Zinger.
For more information, see the documentation on how to use Transact-SQL.
For information on what SQLite operators can be used to create Record Selection
Condition filters, see Using SQLite Syntax with Filtering.
3. Click Add Range. Select any of the following from the drop-down list displayed.
n Equal to: Select Equal to to enter a single value. The following is displayed in
the range list.
Equal to = [N]
Click the [N] and type a value in the field that is displayed.
When the value in the selected column equals the value you enter, the result is
included or excluded in the replication task depending on the option selected
in the Include/Exclude column.
n Between: Click Between to enter a range of values. The following is displayed
in the range list.
Between [N] - [N]
Click each [N] and type a value in the fields that are displayed.
When the column contains the values between the two values entered, the
result is included or excluded in the replication task depending on the option
selected in the Include/Exclude column.
n Less than or equal to: Select Less than or equal to and enter a maximum
value. The following is displayed in the range list.
Less than or Equal to =< [N]
Click the [N] and type a value in the field that is displayed.
Note: Filter ranges that you enter manually are also displayed in the
Filter Builder. You can use the Filter Builder to delete them.
Table 15–3 SQLite Operators used by Amazon RDS Migration Tool for Filtering
Operator Description
< Is less than.
$SALARY<100000
<= Is less than or equal to
$SALARY<=100000
> Is greater than
$SALARY>100000
>= Is greater than or equal to
$SALARY>=100000
= Is equal to
$SALARY=100000
!= or <> Is not equal to
$SALARY!=100000
IS Is the same as
$HIRE_DATE IS 2014-09-29
IS functions the same as = unless one or both of the operands are NULL. In this
case, if both operands are NULL, then the IS operator evaluates to 1 (true). If one
operand is NULL and the other is not, then the IS operator evaluates to 0 (false).
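For example, the following record selection condition (a sketch that also assumes the
standard SQLite AND operator, which is covered in Using SQLite Syntax with Filtering)
includes only records whose salary is at least 50,000 but less than 100,000:
$SALARY>=50000 AND $SALARY<100000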
For more information on how to use the SQLite syntax, see the SQLite documentation.
You may need to use this option when you want to change the names of all tables. You
can change the names using wild cards and patterns. For example, you may want to
change the names of the tables from account_% to ac_%. This is helpful when
replicating data from a Microsoft SQL Server database to an Oracle database, where
Microsoft SQL Server allows table names of up to 128 characters and Oracle limits
table names to 30 characters.
You may also need to change a specific data type in the source to a different data type
in the target for many or all of the tables in the task. Global transformation will
accomplish this without having to define a transformation for each table individually.
This section includes the following topics:
n Starting the New Transformation Rule Wizard
n Selecting the Transformation Type
n Defining What to Transform
n Defining the Transformation Rule
n Using the Global Transformation Rules List
Note: You can only create one rule for each transformation type. If
you create multiple rules for a single transformation type, only the last
rule you create will be valid.
For example, if you create the following rules (in order) to rename a
schema:
Rename Schema: Add Prefix
Rename Schema: Add Suffix
Only the second rule (adding a suffix to the schema name) will be
executed.
After you complete the Selecting the Transformation Type step, click Next to go to the
Defining the Transformation Rule step.
Note: If the global transformation type you are defining is Drop Column, you do not
need to create a Transformation Rule. In this case, click Finish to add the rule to the
Global Transformation Rules list.
Note: The above figure is for example purposes only. The options
displayed on this page depend on the Transformation Type selected.
Depending on the Transformation Type selected, you define the rule to be carried out
using the options on this page. See the section for any of the following transformation
types you are using.
n Rename Schema
n Rename Table
n Rename Column
n Add Column
n Drop Column
n Convert Data Type
Rename Schema
If your transformation type is Rename Schema, you can do the following:
n Rename schema to (string):
n Add a Prefix or Suffix
n Remove a Prefix or Suffix
n Replace a Prefix or Suffix with Different Characters
n Convert schema name to uppercase
n Convert schema name to lowercase
n Rename schema (expression)
Add a Prefix or Suffix Use the Add a prefix or suffix option to add additional
characters to the beginning or end of the schema name for all schemas that fit the
definition you created in the Apply transformation rule step. For example, if the
schema name is HR, you can add a suffix, such as TAR or _TAR to the schema name for
all tables with that schema name. In this case, the resulting schema name will be
HRTAR or HR_TAR.
2. Click the word Prefix or Suffix and select one of these two from the list.
3. Click [string] to activate the field.
4. Type the characters you want as the prefix or suffix. If you want to include an
underscore or other legal character to separate the prefix/suffix from the original
name, you must add it as part of the character string.
5. Click Finish to add the rule to the Global Transformation Rules list.
Note: If you are using Oracle as your target database, Amazon RDS
Migration Tool does not create a new schema. Therefore, the schema
name that is the result of adding a prefix or suffix must exist in the
Oracle target database. If the resulting schema name does not exist,
you must create the schema in the Oracle database before carrying out
this task.
For more information, see Limitations in Chapter 5, "Using an Oracle
Database as a Source or Target".
Remove a Prefix or Suffix Use the Remove a prefix or suffix option to remove a
string of characters from the beginning or end of a schema name for all schemas that fit
the definition you created in the Apply transformation rule step.
For example, you can use this option to remove the letters _REV from the schema
name for all tables in the schema HR_REV. In this case the schema name in the target
will be HR.
Replace a Prefix or Suffix with Different Characters Use the Replace a prefix or
suffix option to replace a string of characters with a different string of characters. You
determine whether to replace the characters at the beginning or end of a schema name
for all schemas that fit the definition you created in the Apply transformation rule step.
For example, you can use this option to replace the letters _ORIG with _REPL in the
schema name for all tables in the schema HR_ORIG. In this case the schema name in
the target will be HR_REPL.
Note: If you are using Oracle as your target database, Amazon RDS
Migration Tool does not create a new schema. Therefore, the schema
name that is the result of replacing a prefix or suffix with a different
string of characters must exist in the Oracle target database. If the
resulting schema name does not exist, you must create the schema in
the Oracle database before carrying out this task.
Convert schema name to uppercase Use the convert to uppercase option to convert
all of the letters in a schema name to upper case. For example:
Schema_cat, becomes SCHEMA_CAT
schema_cat, becomes SCHEMA_CAT
sChEMa_Cat, becomes SCHEMA_CAT
2. Click Finish to add the rule to the Global Transformation Rules list.
Note: If you are using Oracle as your target database, Amazon RDS
Migration Tool does not create a new schema. Therefore, you must
enter a schema name that exists on the database or you must create
the schema in the Oracle database before carrying out this task.
For more information, see Oracle Limitations.
Rename Table
If your transformation type is Rename Table, you can do the following:
n Rename table to (string):
n Add a Prefix or Suffix
Rename table to (string): Use the Rename table to: [string] option to change the
name of all tables that you defined in the Apply transformation rule step to a different
name. For example, if you have a table called EMPLOYEE and want to change all
instances of this name to EMP then enter the string EMP. You can enter any string in this
field.
Add a Prefix or Suffix Use the Add a prefix or suffix option to add additional
characters to the beginning or end of the table name for all tables that fit the definition
you created in the Apply transformation rule step. For example, if the table name is
EMPLOYEES, you can add a suffix, such as TAR or _TAR to the table name for all tables
with that table name. In this case, the resulting table name will be EMPLOYEESTAR or
EMPLOYEES_TAR.
2. Click the word Prefix or Suffix and select one of these two from the list.
3. Click [string] to activate the field.
4. Type the characters you want as the prefix or suffix. If you want to include an
underscore or other legal character to separate the prefix/suffix from the original
name, you must add it as part of the character string.
5. Click Finish to add the rule to the Global Transformation Rules list.
Remove a Prefix or Suffix Use the Remove a prefix or suffix option to remove a
string of characters from the beginning or end of a table name for all tables that fit the
definition you created in the Apply transformation rule step.
For example, you can use this option to remove the letters _REV from the table name
for all tables with the name EMPLOYEES_REV. In this case the table name in the target
will be EMPLOYEES.
2. Click the word Prefix or Suffix and select one of these two from the list.
3. Click [string] to activate the field.
4. Type the characters you want to remove. If you want to remove an underscore or
other legal character from the original name, you must add it as part of the
character string.
5. Click Finish to add the rule to the Global Transformation Rules list.
Replace a Prefix or Suffix with Different Characters Use the Replace a prefix or
suffix option to replace a string of characters with a different string of characters. You
determine whether to replace the characters at the beginning or end of a table name for
all tables that fit the definition you created in the Apply transformation rule step.
Convert table name to uppercase Use the convert to uppercase option to convert a
table name to all upper case. For example:
Table_cat, becomes TABLE_CAT
table_cat, becomes TABLE_CAT
taBLe_Cat, becomes TABLE_CAT
Convert table name to lowercase Use the convert to lowercase option to convert a
table name to all lower case. For example:
Table_cat, becomes table_cat
TABLE_CAT, becomes table_cat
taBLe_Cat, becomes table_cat
2. Click Finish to add the rule to the Global Transformation Rules list.
Rename table (expression) Use the Rename table to [expression] option to change
the name of all tables that fit the definition you created in the Apply transformation
rule step. For example, if you have a table called EMPLOYEE, you can use an expression
to change all instances of this name, as defined in the previous step, to EMP.
Rename Column
If your transformation type is Rename Column, you can do the following:
n Rename column to (string):
n Add a Prefix or Suffix
n Remove a Prefix or Suffix
n Replace a Prefix or Suffix with Different Characters
n Convert column name to uppercase
n Convert column name to lowercase
n Rename Column (expression)
Rename column to (string): Use the Rename column to: [string] option to change
the name of all columns that you defined in the Apply transformation rule step to a
different name. For example, if you have a column called SALARY and want to change
all instances of this name to SAL, then enter the string SAL. You can enter any string in
this field.
Add a Prefix or Suffix Use the Add a prefix or suffix option to add additional
characters to the beginning or end of the column name for all columns that fit the
definition you created in the Apply transformation rule step. For example, if the
column name is SALARY, you can add a suffix, such as TAR or _TAR, to the column name
for all columns with that column name. In this case, the resulting column name will be
SALARYTAR or SALARY_TAR.
2. Click the word Prefix or Suffix and select one of these two from the list.
3. Click the [string] to activate the field.
Remove a Prefix or Suffix Use the Remove a prefix or suffix option to remove a
string of characters from the beginning or end of a column name for all columns that
fit the definition you created in the Apply transformation rule step.
For example, you can use this option to remove the letters _REV from the column
name for all columns with the name SALARY_REV. In this case the column name in the
target will be SALARY.
2. Click the word Prefix or Suffix and select one of these two from the list.
3. Click [string] to activate the field.
4. Type the characters you want to remove. If you want to remove an underscore or
other legal character from the original name, you must add it as part of the
character string.
5. Click Finish to add the rule to the Global Transformation Rules list.
Replace a Prefix or Suffix with Different Characters Use the Replace a prefix or
suffix option to replace a string of characters with a different string of characters. You
determine whether to replace the characters at the beginning or end of a column name
for all columns that fit the definition you created in the Apply transformation rule
step.
For example, you can use this option to replace the letters _ORIG with _REPL in the
column names for all columns called SALARY_ORIG. In this case the column name in
the target will be SALARY_REPL.
Convert column name to uppercase Use the convert to uppercase option to convert
a column name to all upper case. For example:
Column_cat, becomes COLUMN_CAT
2. Click Finish to add the rule to the Global Transformation Rules list.
Convert column name to lowercase Use the convert to lowercase option to convert
a column name to all lower case. For example:
Column_cat, becomes column_cat
column_cat, becomes column_cat
coLUMnM_Cat, becomes column_cat
2. Click the button to the right of the Rename column option to open the Expression
Editor. For information on how to use the Expression Editor, see Using the
Expression Builder (for Filters, Transforms, and Global Transformations). Then go
to step 4.
or
Click [expression] to activate the field and continue with step 3.
3. Type an SQLite expression or a string (in quotes) to rename the column. For
example:
n "New_Column"
n ’PREF_’||$COLUMN_NAME_VAR||’_SUFF’
You can use the following variables in the SQLite expression:
n $SCHEMA_NAME_VAR
n $TABLE_NAME_VAR
n $COLUMN_NAME_VAR
n $COLUMN_DATATYPE_VAR
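For example, the following expression is a sketch that combines these variables with
the lower() string function described under Functions. It renames each column to its
lower-cased original name followed by its data type:
lower($COLUMN_NAME_VAR)||'_'||$COLUMN_DATATYPE_VAR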
Add Column
When you add a column to multiple tables, you must provide a name, define the data
type for the column, and define the data that the column contains. The column that you
define here is added to all tables that fit the definition you created in the Apply
transformation rule step.
Drop Column
This option does not require a transformation rule. For this option you complete the
Global Transformation Rule after the Defining What to Transform step.
The following sections describe what you can do in each part of the Expression
Builder:
n A. Elements Pane: This pane contains elements that you can add to an expression.
Select elements and move them into the Expression Builder box to create the
expression. For more information, see Build an Expression.
The Elements Pane contains the following tabs:
– Metadata (available only when working with Global Transformations)
– Input Columns (available only when working with transforms or filters)
– Operators
– Functions
– Header Columns (for Global Transformations, this tab is available only when
you select Add Column)
n B. Build Expression Panel: The Build Expression Panel is where you put together
the expression you are building. You move elements, such as columns or operators
into the box. You can also type all or part of an expression in this box. For more
information, see Build an Expression.
n C. Evaluate Expression Panel: This panel displays the parameters for the
expression. After you build the expression, click Evaluate to list the expression
parameters. You can then enter a value or argument for each of the parameters.
For more information, see Evaluate an Expression.
The top part of the Expression panel contains the Operator button bar. This button
bar contains the most common operators. Click the operator you want to use to add it
to the expression.
Build an Expression
The first step in using the Expression Builder is to build an expression. The expression
that you build is displayed in the top section of the right pane. You can open the
Expression Builder when:
n You define Transformations for a single table.
n You define Filters for a single table.
n Use the Global Transformations dialog box to Rename Schema, Rename Table,
Rename Column, or Add Column.
To build an expression
1. In the Elements Pane, select any element you want to include in your expression.
For information on the elements you can use in an expression, see Functions.
2. Move the selected element into the Build Expression box by doing one of the
following:
n Select the element and then click the arrow at the right of the element.
n Double-click the element.
n Drag the element into the Build Expression box.
3. Continue to add elements as needed.
Note: To add operators to your expression, you can use the Operator tab in the
Element pane or the Operator buttons at the top of the Build Expression panel or any
combination of these. See Operators and Operator Button Bar.
For example, to create an expression that will combine the first name and last name,
do the following:
1. In the Input Columns tab drag the FIRST_NAME column into the Build
Expression box.
2. Click the concatenate (||) operator from the Operator bar at the top of the Build
Expression box.
3. In the Input Columns tab drag the LAST_NAME column into the Build
Expression box.
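The resulting expression in the Build Expression box is the same concatenation shown
earlier in this chapter:
FIRST_NAME||LAST_NAME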
Evaluate an Expression
You can evaluate an expression to determine its parameters and to determine whether
the expression is valid.
To evaluate an expression
1. From the Expression Builder window, build an expression as described in Build an Expression.
2. Click Evaluate.
If the expression is not valid, an error message is written in red at the bottom of the
Expression Builder window. The following figure shows an example of an error
message.
If the expression is valid, the expression parameters are displayed in the Parameter
column in the Evaluate Expression section. See Figure 15–10, "Test Expression".
Type a valid value for each of the parameters in the Value column to Test an
Expression.
For example, type John for the FIRST_NAME and Smith for the LAST_NAME in the
Value column. Once you type in values, you can Test an Expression.
Test an Expression
You can use the Amazon RDS Migration Tool Test procedure to display the results of a
test expression. The following figure is an example of a built expression that is
evaluated and contains a test result.
To test an expression
1. From the Expression Builder window, build an expression as described in Build an Expression.
Operators
The sections below describe the SQLite operators you can use to build an expression
with the Expression builder. The Expression builder divides the operators into the
following categories:
n Strings
n Logical
n Mathematical
Strings
You can use the following string operator:
||
Name: Concatenate strings.
Examples:
FIRST_NAME||LAST_NAME
PHONE_NUMBER||<Office Only> (adds the string Office Only to the telephone
number).
Logical
The following table describes the logical SQLite operators used by the Amazon RDS
Migration Tool Expression Builder:
Table 15–5 Logical SQLite Operators used by Amazon RDS Migration Tool Expression
Builder
Operator Description
!= or <> Is not equal to
$SALARY!=100000
IS Is the same as
$HIRE_DATE IS 2014-09-29
IS functions the same as = unless one or both of the operands are NULL. In this
case, if both operands are NULL, then the IS operator evaluates to 1 (true). If one
operand is NULL and the other is not, then the IS operator evaluates to 0 (false).
IS NOT Is not the same as
$HIRE_DATE IS NOT 2014-09-29
IS NOT functions the same as != unless one or both of the operands are NULL. In
this case, if both operands are NULL, the IS NOT operator evaluates to 0 (false). If
one operand is NULL and the other is not, then the IS NOT operator evaluates to 1
(true).
Mathematical
The following table describes the mathematical SQLite operators used by the Amazon
RDS Migration Tool Expression Builder:
Table 15–6 SQLite Mathematical Operators used by the Amazon RDS Migration Tool
Expression Builder
Operator Description
+ Adds two values together.
DEPARTMENT_ID+100 (adds 100 to each ID number). Any column used in an
expression with this operator must be a numeric data type.
- Subtracts a value from another value.
MANAGER_ID-100 (subtracts 100 from each ID number). Any column used in an
expression with this operator must be a numeric data type.
% Uses the remainder of a division expression as the value.
SALARY%7 (Divides the value of the Salary column by 7 and uses any remainder
from the expression as the column value).
/ Divides one value into another.
SALARY/.16 (Divides the value of the Salary column by .16).
Note: If the two values in the division expression are integers (two NUMERIC
columns with no digits after the decimal) and the result is a fractional value, the
result returned will be 0.
* Multiplies one value by another.
SALARY*.16 (Multiplies the value of the Salary column by .16. This could be used
to calculate taxes that are subtracted from a salary).
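These operators can also be combined in a single expression. For example, the
following sketch, which reuses the Salary tax example above, returns the salary value
after subtracting a 16 percent tax:
SALARY-SALARY*.16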
Functions
The sections below describe the SQLite functions you can use to build an expression
with the Expression builder. The Expression builder divides the functions into the
following categories:
n Strings
n LOBs
n Numeric
n NULL check
n Date and Time
n Operation
n Data Enrichment
Strings
The following table describes the string functions used by the Amazon RDS Migration
Tool Expression Builder:
Table 15–7 SQLite String Functions used by the Amazon RDS Migration Tool
Expression Builder
Function Description
lower(x) The lower(x) function returns a copy of string x with all characters converted to
lower case. The default built-in lower() function works for ASCII characters
only.
ltrim(x,y) The ltrim(x,y) function returns a string formed by removing all characters that
appear in y from the left side of x. If there is no value for y, ltrim(x) removes
spaces from the left side of x.
replace(x,y,z) The replace(x,y,z) function returns a string formed by substituting string z for
every occurrence of string y in string x.
rtrim(x,y) The rtrim(x,y) function returns a string formed by removing all characters that
appear in y from the right side of x. If there is no value for y, rtrim(x) removes
spaces from the right side of x.
substr(x,y,z) The substr(x,y,z) function returns a substring of input string x that begins with
the y-th character and which is z characters long. If z is omitted then substr(x,y)
returns all characters through the end of the string x beginning with the y-th.
The left-most character of x is number 1. If y is negative then the first character
of the substring is found by counting from the right rather than the left. If z is
negative then the abs(z) characters preceding the y-th character are returned. If
x is a string then character indices refer to actual UTF-8 characters. If x is a
BLOB then the indices refer to bytes.
trim(x,y) The trim(x,y) function returns a string formed by removing all characters that
appear in y from both sides of x. If there is no value for y, trim(x) removes
spaces from both sides of x.
upper(x) The upper(x) function returns a copy of string x with all characters converted to
upper case.
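For example, the following expression (a sketch that assumes FIRST_NAME and
LAST_NAME input columns, as in the earlier transform examples) trims and
upper-cases both names before concatenating them with a space:
upper(trim(FIRST_NAME))||' '||upper(trim(LAST_NAME))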
LOBs
The following table describes the LOB functions used by the Amazon RDS Migration
Tool Expression Builder:
Table 15–8 SQLite Lob Functions used by the Amazon RDS Migration Tool Expression
Builder
Function Description
hex(x) The hex() function receives an argument as a BLOB and returns an
upper-case hexadecimal string version of the BLOB content.
randomblob(N) The randomblob(N) function returns an N-byte BLOB that contains
pseudo-random bytes. If N is less than 1 then a 1-byte random BLOB is
returned.
zeroblob(N) The zeroblob(N) function returns a BLOB that consists of N bytes of 0x00.
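These functions can be combined. For example, the following sketch produces a
random 32-character hexadecimal string:
hex(randomblob(16))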
Numeric
The following table describes the numeric functions used by the Amazon RDS
Migration Tool Expression Builder:
NULL check
The following table describes the NULL check functions used by the Amazon RDS
Migration Tool Expression Builder:
Table 15–10 SQLite NULL Check Functions used by the Amazon RDS Migration Tool
Expression Builder
Function Description
coalesce(x,y...) The coalesce() function returns a copy of its first non-NULL argument, or
returns NULL if all arguments are NULL. Coalesce() must have at least two
arguments.
ifnull(x,y) The ifnull() function returns a copy of its first non-NULL argument, or returns
NULL if both arguments are NULL. Ifnull() must have exactly two arguments.
The ifnull() function is the same as coalesce() with two arguments.
nullif(x,y) The nullif(x,y) function returns a copy of its first argument if the arguments are
different and returns NULL if the arguments are the same. The nullif(x,y)
function searches its arguments from left to right for an argument that defines
a collating function and uses that collating function for all string comparisons.
If neither argument to nullif() defines a collating function then the BINARY is
used.
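For example, the following expression (a sketch that assumes a hypothetical
MIDDLE_NAME input column) replaces NULL middle names with an empty string:
ifnull(MIDDLE_NAME,'')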
Table 15–12 SQLite Data Enrichment Functions used by the Amazon RDS Migration
Tool Expression Builder
Function Description
source_lookup(TTL,'SCHM','TBL','EXP','COND',COND_PARAMS)
Use to retrieve additional data from the source database.
target_lookup(TTL,'SCHM','TBL','EXP','COND',COND_PARAMS)
Use to retrieve additional data from the target database.
Input Parameters
The possible input parameters for the lookup functions are described in the table
below. For a usage example, see Data Enrichment Example.
Where:
10000 is the TTL parameter.
HR is the schema name.
DEPARTMENTS is the table name.
DEPARTMENT_NAME is the expression.
DEPARTMENT_ID=:1 (or ? on Microsoft SQL Server) is the condition.
$DEPARTMENT_ID is the condition parameter.
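Assembled from the parameters listed above, the complete lookup expression would
look similar to the following:
source_lookup(10000,'HR','DEPARTMENTS','DEPARTMENT_NAME','DEPARTMENT_ID=:1',$DEPARTMENT_ID)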
5. Run the task.
Table 15–14 SQLite Operation Functions used by the Amazon RDS Migration Tool
Expression Builder
Function Description
operation_indicator(value_on_delete, value_on_update, value_on_insert)
When the operation_indicator function is invoked on its own or as part of an
expression, records deleted from the source database will not be deleted from the
target database. Instead, the corresponding target record will be flagged (with a
user-provided value) to indicate that it was deleted from the source. The
operation_indicator function also requires you to provide values to indicate records
that were inserted or updated in the source database.
Note: The operation_indicator function is not
supported on tables that do not have a Primary Key.
Note: It is recommended to add a dedicated column for
the flag values, for example, OPERATION. For an
explanation of how to add a column, see Using the
Transform Tab.
To specify the function values:
Replace value_on_delete, value_on_insert and
value_on_update with the values that you want to
appear in the target database.
Values should be formatted according to the
corresponding column type.
Example when the column type is INT4:
operation_indicator(’1’, ’0’, ’0’)
Example when the column type is STRING:
operation_indicator(’Deleted’, ’Updated’,
’Inserted’)
Other Functions
The following table describes additional functions used by the Amazon RDS
Migration Tool Expression Builder:
Table 15–15 SQLite Functions used by the Amazon RDS Migration Tool Expression
Builder
Function Description
length(x) For a string value x, the length(x) function returns the number of characters (not
bytes) in x prior to the first NULL character.
If x is NULL then length(x) is NULL. If x is numeric then length(X) returns the
length of a string representation of X.
like(x,y,z) The like() function is used to implement the "Y LIKE X [ESCAPE Z]"
expression. The ESCAPE (z) clause is optional. If there is a z clause, then the
like() function is invoked with three arguments. Otherwise, it is invoked with
two arguments.
typeof(x) The typeof(x) function returns a string that indicates the datatype of the
expression x: null, integer, real, text, or BLOB.
Task Settings
Each task has settings that you can configure according to your needs for replication.
You configure the settings in the Task Settings dialog box.
Metadata
When you click Metadata in the Task Settings dialog box, you can configure the Target
Metadata Settings for a replication task.
Target Metadata
Target table schema: (if empty, use the schema from the source table): This will
automatically add the owner prefix for the target database to all tables if no source
schema is defined.
Include LOB columns in replication (BLOB, CLOB and similar large object
datatypes): Select this option if you are using tables that include LOBs. For information
on LOB support in Amazon RDS Migration Tool endpoints, see Support for Large
Object Data Types (LOBs).
Note: LOB data types are supported only in tables that include a
primary key.
If you select Include LOB columns, you must select one of the following:
n Full LOB mode (slower unlimited LOB size): If you select this option then enter a
value for the following parameter:
Chunk size (KB): Use the counter or type in the size of the LOB chunks to use
when replicating the data to the target.
n Limited-size LOB mode: If you select this option then enter a value for the
following parameter:
Max LOB size (KB): Use the counter or type the maximum size (in kilobytes) for
an individual LOB. Enter 0 for unlimited size.
Control Tables
Control Tables provide information about the replication task as well as useful
statistics that can be used to plan and manage both the current replication task and
future replication tasks. Aside from the Apply Exceptions table which is always
created, users can choose which of the following Control Tables to create on the target:
n Replication Status: Provides details about the current task including task status,
amount of memory consumed by the task, number of changes not yet applied to
the target and the position in the source database from which Amazon RDS
Migration Tool is currently reading.
n Suspended Tables: Provides a list of suspended tables as well as the reason they
were suspended.
n Replication History: Provides information about the replication history including
the number and volume of records processed during a replication task, latency at
the end of a CDC task, among others.
For a detailed description of these tables, see Control Tables.
Table Selection
In addition to the Apply Exceptions table (required), select which of the following
Control Tables you want Amazon RDS Migration Tool to create on the target database:
Replication Status, Suspended Tables and Replication History.
Bidirectional
This tab is only applicable to bidirectional replication tasks. When you click
Bidirectional in the Task Settings dialog box, the Loopback Prevention tab is
displayed. In bidirectional replication, loopback prevention is a mechanism that
prevents the same data from being replicated back and forth in an endless loop. To
enable loopback prevention, you need to specify a source and target Loopback
prevention table schema.
Bidirectional replication consists of two separate tasks: Task 1 captures changes made
to Database A and replicates them to Database B. Task 2 captures changes made to
Database B and replicates them to Database A. When configuring Task 1 of a
bidirectional replication setup, the source loopback prevention table schema must be
identical to the target loopback prevention table schema specified in the Loopback
Prevention settings of Task 2.
Likewise, when configuring Task 2 of a bidirectional replication setup, the source
loopback prevention table schema must be identical to the target loopback prevention
table schema specified in the Loopback Prevention settings of Task 1.
Full Load
When you click Full Load in the Task Settings dialog box, you can configure the
following:
n Full Load Settings
n Full Load Tuning
Note: Full load can be turned on or off at any stage even if change
processing is on. Once the task begins to process changes, the full load
on/off switch is used only as additional protection against accidental
or unauthorized reload.
Note: Target table preparation settings are not available for Disaster
Recovery tasks (any existing target tables will always be dropped and
replaced with new tables).
If target table already exists: Select one of the following from the list to determine
how you want to handle loading the target at full-load start up:
n DROP and Create table: The table is dropped and a new table is created in its
place.
n TRUNCATE before loading: Data is truncated without affecting the table
metadata.
n Do nothing: Data and metadata of the target table are not affected.
Resuming an incomplete full load: This lets you set the policy for determining
whether to allow tables to resume processing from the point of interruption during a
full load. Allowing the restarting of tables from the point of interruption can cause a
slowdown in the full-load processing. However, starting extremely large tables from
the beginning may cause long delays in completing the full load. In addition, for tables
to be restarted from the point of interruption, they must have a unique index. It is
recommended that they have a clustered primary key.
For tables interrupted while in full load: Select how you want to handle loading
tables that were interrupted during a full-load operation.
Use the following options to determine the policy for reloading tables after an
interruption of a full-load process:
n Allow resuming table full load: Select this option to allow tables whose loading
was interrupted during the full-load process to resume loading from the point where
the loading was interrupted. When you select this option, you must define which
tables can resume loading in the Resume interrupted full load only for tables option.
n Always restart table full load: Select this option to restart the loading for all tables
even those that were partially loaded when the full-load process was interrupted.
Resume interrupted full load only for tables: This option is available when you select
Allow resuming table full load. Use this option to define which tables should resume
loading when the full-load operation begins again. You can define the following
options:
n Larger than (rows): Use the arrows (or type) to select the minimum number of
rows that a table must have to resume loading. Tables with fewer rows than the
number entered in this field will restart the full load. The default value is
1000000.
Change Processing
When you click Change Processing in the Task Settings dialog box, you can configure
the following:
n Apply Changes Settings
n Changes Processing Tuning
Note: When you turn on apply changes, you must reload the task or
position it back to the point of the previous reload.
DDL handling policy: Determine how to handle the target table for the change
capture:
Note: DDL handling policy settings are not available for Disaster
Recovery tasks (Changes are always applied).
Batch tuning
The following options are available when Batch optimized apply (TurboStream CDC)
Change processing mode is selected:
Apply batched changes in intervals:
n Longer than: The minimum amount of time to wait between each application
of batch changes. The default value is 1.
n But less than: The maximum amount of time to wait between each application
of batch changes (before declaring a timeout). The default value is 30.
n Force apply a batch when processing memory exceeds (MB): The maximum
amount of memory to use for pre-processing in Batch optimized apply mode. The
default value is 500.
Miscellaneous tuning
Statements cache size (number of statements): The maximum number of prepared
statements to store on the server for later execution (when applying changes to the
target). The default is 50. The maximum is 200.
Error Handling
When you click Error Handling in the Task Settings dialog box, you can configure the
following:
n Error Handling Settings
n Environmental Errors
n Data Error
n Table Error
n Apply Conflicts
For more information on error handling in Amazon RDS Migration Tool, see Error
Handling.
Environmental Errors
Click the Environmental Errors sub-tab to configure the following:
n Maximum retry count: Select this option and then specify the maximum number
of attempts to restart a task when an environmental error occurs.
Specify "0" to never restart a task.
When the check box is not selected, Amazon RDS Migration Tool will attempt to
restart the task an infinite number of times.
After the system attempts to restart the task the designated number of times, the
task is stopped and manual intervention is required.
n Interval between retry attempts: Use the counter to select or type the number of
seconds that the system waits between attempts to restart a task.
n Increase retry interval for long outages: Select this check box to increase the retry
interval for long outages. When this option is enabled, the number of seconds
between retry attempts increases each time.
n Maximum retry interval: Use the counter to select or type the number of seconds
to wait between attempts to restart a task when the Increase retry interval for
long outages option is enabled.
For information about environmental errors and the configuration properties, see
Environmental Errors and Error Handling Properties in the Error Handling appendix.
Data Error
Click the Data Error sub-tab to configure the following:
n Policy: Click the triangle to open the list and select what happens when an error
occurs in one or more specific records. You can select one of the following from the
list:
– Ignore record: The task continues and the error is ignored.
– Log error (default): The task continues and the error is written to the task log.
– Suspend table: The task continues but data from the table with the error
record is moved into an error state and its data is not replicated.
– Stop task: The task is stopped and manual intervention is required.
Table Error
Click the Table Error sub-tab to configure the following:
n Policy: Click the triangle to open the list and select what happens when an error
occurs in the general table data for one or more tables being replicated. You can
select one of the following from the list:
– Suspend table (default): The task continues but data from the table with the
error record is moved into an error state and its data is not replicated.
– Stop task: The task is stopped and manual intervention is required.
n Escalation: Select the Escalation check box to implement the escalation policy for
table errors. If this is not selected, escalation is not implemented.
n Escalation count: Use the counter to select or type the number of errors that can
occur in the general table data for a specific table before the task is
stopped.
n Escalation policy: You cannot take any action in this option. The escalation policy
for table errors is automatically set to Stop task.
For information about table errors and the configuration properties, see
Table Errors and Error Handling Properties in the Error Handling appendix.
Apply Conflicts
Click the Apply Conflicts sub-tab to configure the following:
n No record found for applying a DELETE: Click the triangle to open the list and
select what happens when there is a conflict with a DELETE operation. You can
select one of the following from the list:
Logging
When you click Logging in the Task Settings dialog box, you can carry out the actions
in the following sub-tabs:
n Logging Level
n Log File
Logging Level
Click the Logging Level sub-tab to set the logging level for task logs. The level you set
determines what information is written to the log.
Note: You can also set the task logging level from the Tools menu in
the Monitor view. For more information, see Monitor Mode and
Working with Task Logs at Runtime.
The following are the available logging levels. The list is in order from the lowest level
to the highest level.
1. Error
2. Warnings
3. Info
4. Debug
5. Detailed Debug
The higher levels always include the messages from the lower levels. Therefore, if you
select Error, only error messages are written to the log. However, if you select Info,
informational messages, warnings, and error messages are included. Selecting
Detailed Debug writes all possible messages to the log.
For information on how to set the logging level, see Set the Task Logging Level.
Log File
You can manually roll the log file for the task you are working with in the Log File
sub-tab. This lets you stop logging to the current log file and begin to log to a new log
file. The current log file is called reptask_<name of task> and saved (older) log
files have the file name reptask_<name of task>_xxxxxxxxxxxx where
xxxxxxxxxxxx represents a 12-digit timestamp.
Click Roll Log File to immediately roll the current server log file for the task you are
working with.
This chapter describes how to work with tasks that you design. For information on
how to design a task, see Designing Tasks. This chapter contains information on
running tasks, viewing the task status, and viewing messages about the task.
Information on monitoring and working with tasks during runtime is in the chapter
Monitoring and Controlling Replication Tasks.
This chapter contains the following topics:
n Running a Task
n Viewing the Task Status
n Reading Messages about a Task
Running a Task
After you design a task (see Designing Tasks), you can run and monitor its progress
with one click in Amazon RDS Migration Tool. This simple Click-2-Replicate function
is described in this topic. In addition, the various types of run options available are
also described. This topic has the following sub-topics.
n How to run a Task
n Using the Run Button Options
Note: The task run buttons are available in the button bar at the top
of the console in the following views:
n The Tasks View (In both the Design Mode and Monitor Mode)
n When Viewing Specific Tasks
In some cases, task replication may stop due to an error although the task process is
still running. See Task Icons for information on the task status and how Amazon RDS
Migration Tool displays information on the current task status.
The following figure shows the Run and Stop buttons in the button bar.
Start Replication
This is available the first time you run the task only. This will execute the initial full
load operation. If CDC is also enabled for the task, change processing will start as soon
as any changes are made to the source database.
File Channel Note: To replicate tables that were added to the local
file channel task after the initial full load, you need to reload both the
local and the remote file channel tasks.
The Advanced Run Options dialog box lets you do the following:
n Resume Processing: Resumes task execution from the point that it was stopped.
You can also resume processing by clicking the Run button if the task has been
stopped.
n *Reload Target: This will re-start the full-load replication process if the task was
previously run. See Reload Target for more information.
*Only available for Full Load tasks.
n **Restart task and start processing changes from current time: This starts the
CDC replication task from the beginning (as if the task has not run before).
**Only available for CDC tasks.
n Tables are already loaded. Start process changes from: Select the date and time to
create a timestamp to define the point from where you want to process changes.
Note: The timestamp uses the local time of the browser machine.
Click the calendar to select the date.
Viewing Notifications
The Notifications tab displays notifications about the task. These messages alert you
to specific events encountered by a task, such as the task starting or stopping, a
specific error type, or information about latency and disk space.
The Notifications tab displays the time of a notification and a description of the
notification. You define the notifications that are sent for each task and a description
for each notification in the Settings area. For more information, see Define the
Notification Message.
Open a Notification When you open a notification, you can see the full message
presented in a dialog box. The dialog box contains a button to copy the text so that you
can use it elsewhere for troubleshooting, as well as the timestamp for the notification.
Clear a Notification You can clear notifications from the list to make sure that you are
seeing only those that are relevant to you.
To clear a notification
1. In the Messages section of the console, click the Notifications tab.
2. Select the notification you want to clear from the list.
3. Click Clear from the button bar at the top of the list.
Select the Log Message Type Two types of log messages are displayed in the Log
Messages List. You can view Errors, Warnings, or both.
Open a Log Message When you open a log message, you can see the full log text
presented in a dialog box. The dialog box contains a button to copy the text so that you
can use it elsewhere for troubleshooting, as well as the timestamp for the log message.
2. Select the log message you want to open from the list.
3. Double-click the log message or click Open from the button bar at the top of the
list.
Clear a Log Message You can clear log messages from the list to make sure that you are
seeing only those that are relevant to you.
2. Select the log message you want to clear from the list.
3. Click Clear from the button bar at the top of the list.
Running and monitoring a task lets you use the Click-2-Replicate function to carry out
the replication task and view its progress in near real time. This chapter contains the
following topics about running and monitoring a replication task.
n Viewing Information in the Monitor
n Monitoring the Full-Load Operation
n Monitoring Change Processing Operations
n Viewing Messages
n Using the Monitor Tools
You can set the throughput measurement values either to the number of records
replicated per second, or to the number of kilobytes replicated per second. The display
is always based on the current load operation.
The following tables are available when you select Applied Changes.
n Recent Activity
n Aggregates
Recent Activity
Click Recent Activity at the top of the Applied Changes Details section to view
information about which changes occurred in each table. It has the following
information:
n Table Name: The names of the source tables that are included in the task.
n Insert: The number of INSERT operations processed for the specific table.
n Delete: The number of DELETE operations processed for the specific table.
n Update: The number of UPDATE operations processed for the specific table.
n DDL: The number of metadata changes (DDL) processed. DDL changes include
information about events like changes to table names or to column names.
n Total Applied: The total number of changes applied to the target.
n Last Modified: The time the last change occurred for the specific table.
n Reload: Click the Reload icon to reload the data for selected tables and run the
full-load operation again.
Aggregates
Click Aggregates at the top of the Applied Changes Details section to view
information about total changes for each change type and transaction type.
The Aggregate table displays the total changes (for all tables) applied for each of the
following types of operations:
n INSERT
n UPDATE
n DELETE
n DDL
The Aggregate table also displays the information about transactions. It displays the
total number and volume of:
You can set the Apply Throughput measurement values either to the number of
change records replicated per second, or to the number of kilobytes replicated per
second. The display is always based on the current load operation.
The latency values displayed in the Amazon RDS Migration Console measure the time
delay (latency) between the time when a change is visible to the source (and
committed), and the time when this same change is visible to the target. The display is
always based on the current change being applied. For more information about
latency, see Apply Latency.
Select the Apply Latency gauge to display a graph with the latency details. To view the
graph only and hide the Change-Processing gauges including the latency gauge, click
the double arrow as shown in the figure below.
Viewing Messages
You can see messages sent for the task while in the monitor view. For information on
viewing messages, see Reading Messages about a Task.
Note: If you want to view the log file, select Launch Log Viewer to
view the log file in the viewer window. For more information, see
View Log Messages for a Task.
To set the log level for the server, see The Server View.
2. At the top of the Logging window, move the Task Logging Level slider to the log
level you want. This sets the logging level for all server log modules. Note that all
of the sliders for the individual modules move to the same position that you set in
the main slider.
3. Make any changes to the sliders for the individual modules. This is optional. Note
that if you change the main slider, all of the individual sliders are reset to the new
position. If you want to maintain a different logging level for a specific module,
you need to reset it.
See Figure 0–2, "The Logging Window (Monitor View)" for a list of the logging
modules you can set.
4. Click Apply at the top of the Logging window. Changes to the logging level take
place immediately. There is no need to restart the Amazon RDS Migration Server.
2. At the top of the Logging window, click Log Viewer. The Live Log Viewer is
displayed.
3. To view a different log file, select the file you want to view from the Select Log list
at the top right of the window.
This chapter describes how to configure Amazon RDS Migration Tool using the Server
page. For information on opening and viewing the Server page, see The Server View.
Note: Configurations made in the Server page affect all Tasks that
are created in the Amazon RDS Migration Tool instance you are
working with.
Notifications Settings
The following can be defined in the Notifications settings:
n Defining Notifications
n Setting up Mail Parameters
n Creating a Default Recipient List
Define the Action that Triggers the Notification In the Operator section of the Notify When?
page, you can determine the action that triggers the notification. If the Operator
section is not displayed, click on the header with the word Operator to display the
options for this section. Select one of the following:
n Task was started: To send the notification when the task starts.
n Task was stopped: To send the notification when the task is stopped.
n Full load started: To send the notification when the full-load process starts.
n Full load completed: To send the notification when the full-load process is
complete.
Once you determine when to send the notification, you can decide whether specific
changes in status trigger the notification.
If you want to send a message about problems in latency, memory utilization, or disk
utilization, click Performance/Resources. See Define Which Changes of Status Trigger
the Notification for an explanation.
If you want to send the notification when certain errors occur, click Errors. See Define
Errors That Trigger the Notification for an explanation.
Or you can click Next to Define the Notification Distribution Properties.
Note: If you select one of these options, the notification is sent only
when the selected parameter is true. However, you must also Define
the Action that Triggers the Notification.
Once you determine the status changes that trigger a notification, you can decide
whether specific errors trigger a notification.
If you want to send the notification when certain errors occur, click Errors. See Define
Errors That Trigger the Notification for an explanation.
Or you can click Next to Define the Notification Distribution Properties.
Define the Notification Distribution Properties In the Notify How/Who? page of the New
Notification Rule wizard, you determine which users receive the notification and how
the notification is sent.
Determine the Email-Message Recipients for the Notification In addition to sending an email
message for a specific notification to the default notification list, you can create a
custom notification list of users who receive this notification only, or you can send
the email message to a custom list of recipients only.
This section describes how:
n To create a custom list of users
n To send notification email messages to a custom list of recipients only
Define the Notification Message You can create a message for your notification. By
default, a standard message is created based on the definitions you entered when
Define Which Changes of Status Trigger the Notification and Define Errors That
Trigger the Notification.
Associate Tasks with the Notification By default, notifications are sent for all tasks that are
defined in the Amazon RDS Migration Tool instance you are using. You can determine
whether to send the notification for all tasks or only for specific tasks defined in the
Amazon RDS Migration Tool instance you are working with. Select one of the following:
n All Tasks: To associate this notification with all tasks that are defined in the
Amazon RDS Migration Tool instance you are working with. In this case all
tasks that were previously defined and any future task will be associated with
this notification.
If you choose to associate this notification with All Tasks, then click Next to
Review the Notification Rule.
n Selected Tasks: To associate this notification with one or more specific tasks
only. Continue with the next step.
2. Select the check box next to any of the tasks you want to associate with this
notification. You can select one or more tasks.
Note: The Task check box at the top of the check-box column lets you select all of
the tasks that are displayed. When you select this check box, it is as if you selected
each of the tasks individually. Therefore, tasks that you add in the future will not
be included.
3. Click Next to Review the Notification Rule.
Review the Notification Rule
The Summary page lets you review the notification rule that you defined so that you
can determine whether the selections you made in the wizard are correct. If you want
to make changes, click Back and go to the page or pages you want to change.
When you are sure that the notification rule is defined in the way that you want, click
Finish to close the wizard and add the rule to the Notification List.
After you close the wizard, make sure to click Save at the top of the Settings page.
This will save the information for all settings, not only for the notification rule that you
created. If you made changes that you do not want to keep, click Discard to discard all
changes before you make changes to any of the other settings.
Notes:
n You can only make changes to those sections that you defined in
Creating a New Notification.
n You cannot change the name of the notification.
n If you defined a notification to let you know when the task or full
load started or stopped, this cannot be edited. For example, if you
created a notification rule for starting a task and you now also
want to get notified when the task stops, you must create a new
notification rule.
n In the Notify When? page, you can make changes to the data you
defined in the original notification rule. For example, if you
defined a Memory utilization message in the Notify when? page,
Performance/Resources section, you can only change this
parameter. If you want to add information about something that
was not defined in the original notification rule, for example you
want to add errors to your notification or you want to get
information about latency, you must create a new notification rule.
Deleting a Notification
You can delete notification rules that you no longer want to use.
To delete a notification
1. From the Notification List, select the notification you want to delete.
Note: Click Save to save the information for all settings, not only for
the recipient list. If you made changes that you do not want to keep,
click Discard to discard all changes before you make changes to any of
the other settings.
License Settings
You need to register the software before you can use Amazon RDS Migration Tool.
Your Amazon vendor should provide you with a text file called license.txt. This file
contains details such as the product expiration date (if any).
Use the License settings page for:
n Requesting a License
n Registering a License
n Viewing a License
Requesting a License
You must have a valid license to work with Amazon RDS Migration Tool. You can
request a license from the License settings page in the Amazon RDS Migration
Console. In the License Request dialog box, fill out the required information and
submit the request by email. Once your request is approved, the license file is sent to
you by email. To use Amazon RDS Migration Tool, register the license by using the
procedure described in Registering a License.
To request a license
1. From the Server page, click License.
Registering a License
You must have a valid license to work with Amazon RDS Migration Tool. If you did
not receive a license.txt file, you can request a license using the procedure described in
Requesting a License. Once you receive the license, you must register it to work with
Amazon RDS Migration Tool.
To register a license
1. Copy the license.txt file to your computer or to any computer in your network
that you can access.
Viewing a License
You can view the license information in the Amazon RDS Migration Console at any
time.
Error Handling
You can configure how Amazon RDS Migration Tool responds to specific types of
errors. You can define error handling on the task level or the server level. The
configurations you make in the Server Settings affect all tasks created for this instance
of Amazon RDS Migration Tool unless you configure a specific task to use its own
error-handling definitions. For information on how to configure error handling for a specific
task, see Error Handling in the Customizing Tasks chapter.
Note: To view the log files, open the Log File sub-tab.
For more information, see View a Server Log.
How it Works
A solution using FTS consists of two Amazon RDS Migration Servers: A local Amazon
RDS Migration Server installed on the source database LAN and a remote Amazon
RDS Migration Server installed on the target database LAN.
A local task on the local server is defined from the source database to a File Channel
target. A remote task on the remote Amazon RDS Migration Server is defined from a
File Channel source to the target database.
The FTS runs on the remote Amazon RDS Migration Server only and transfers the File
Channel files from the storage location defined in the local task to the storage location
defined in the remote task.
Upon file transfer, and before Compression and Encryption, large files are split into
smaller blocks which form recoverable transport units, and small files are merged into
bigger blocks to be sent at the same time. The blocks are then transferred and
reconstructed into File Channel files when received by the FTS server.
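The block size and packing thresholds are internal to the File Transfer Service. The
following minimal Python sketch only illustrates the general split/merge idea; the
64 MB block size and the function names are illustrative assumptions, not the tool's
actual values.
import os

BLOCK_SIZE = 64 * 1024 * 1024  # illustrative block size, not the FTS's actual value

def to_transport_blocks(file_paths, block_size=BLOCK_SIZE):
    """Yield (source_files, payload) transport units: large files are split
    into block-sized chunks, small files are packed together into one block."""
    small_files, small_bytes = [], 0
    for path in file_paths:
        if os.path.getsize(path) >= block_size:
            # Large file: split into recoverable block-sized transport units.
            with open(path, "rb") as f:
                while chunk := f.read(block_size):
                    yield [path], chunk
        else:
            # Small file: buffer it so that several files travel in one block.
            with open(path, "rb") as f:
                small_files.append((path, f.read()))
            small_bytes += len(small_files[-1][1])
            if small_bytes >= block_size:
                yield [p for p, _ in small_files], b"".join(d for _, d in small_files)
                small_files, small_bytes = [], 0
    if small_files:
        yield [p for p, _ in small_files], b"".join(d for _, d in small_files)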
For information on setting up a File Channel source or target to use FTS, see Using
Advanced Properties for a File-Channel Source and Using Advanced Properties for a
File-Channel Target respectively.
Compression
File Channel files are compressed with GZIP before they are sent. You can disable
compression or control the compression level.
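In other words, this is standard GZIP compression with a selectable level. A minimal
Python sketch follows; the default level and the level-0 convention are assumptions,
not the tool's settings.
import gzip

def compress_block(payload: bytes, level: int = 6) -> bytes:
    # Level 0 disables compression in this sketch; 1-9 select the GZIP level.
    if level == 0:
        return payload
    return gzip.compress(payload, compresslevel=level)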
Encryption
After compression, File Channel files are encrypted using a randomly generated
AES-256 session key. The session key is exchanged between the client and server using
the Diffie-Hellman key exchange protocol which is authenticated using a secret key
that is shared between the client and the server.
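The description above corresponds to standard cryptographic building blocks. The
Python sketch below, using the cryptography package, shows the general pattern: an
ephemeral Diffie-Hellman exchange, a 256-bit AES session key derived from the
shared secret, and encryption of one block with that key. The parameter sizes, the use
of HKDF and GCM, and the omission of the shared-secret authentication step are all
simplifying assumptions; this is not the tool's actual implementation.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import dh
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# DH parameters shared by both sides (illustrative key size).
parameters = dh.generate_parameters(generator=2, key_size=2048)

# Each side generates an ephemeral key pair and exchanges public keys.
client_private = parameters.generate_private_key()
server_private = parameters.generate_private_key()
shared_secret = client_private.exchange(server_private.public_key())

# Derive a 256-bit AES session key from the shared secret.
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"file-channel-session").derive(shared_secret)

# Encrypt one compressed File Channel block with the session key.
aesgcm = AESGCM(session_key)
nonce = os.urandom(12)
ciphertext = aesgcm.encrypt(nonce, b"compressed file-channel block", None)
assert aesgcm.decrypt(nonce, ciphertext, None) == b"compressed file-channel block"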
2. On the left side of the Server view, click File Transfer Service.
The File Transfer Service list is displayed.
3. In the Actions toolbar, click New File Transfer Service.
A new row is added to the File Transfer Service list.
Note: When you edit a File Transfer Service, make sure that any File
Channel targets configured to use the File Transfer Service are also
updated accordingly. For more information on File Channel Targets,
see Setting up a File Channel Target using Amazon RDS Migration
Tool.
Note: For Full Load only tasks, it is preferable to select Reload task
rather than Run task when the scheduling is set to Daily, Weekly, or
Monthly. Reload task updates the table's data, whereas Run task
replaces the existing table.
Amazon RDS Migration Tool must handle different types of errors during its
operation. The way the system should respond to these errors depends on several
aspects, including the component where the error occurred, the type of error, and the
scope of the error. Because different sites may have different requirements for error
behavior, Amazon RDS Migration Tool lets you configure error handling.
Error Types
This topic provides information on the different types of errors that you may
encounter using Amazon RDS Migration Tool. The following is a list of the error types:
n Environmental Errors: An error that is caused by an environmental problem in the
source or target database or on the network. Some examples of environmental
errors are loss of communication with the source or target database, restarting a
database, or network problems.
n Data Errors: An error related to data processing at the record level. Some examples
of data errors are conversion errors, errors in transformations, or bad data.
n Table Errors: An error in processing data or metadata for a specific table. This only
includes general table data and not an error that relates to a specific record.
n Apply Errors (conflicts): Errors that occur when the target database is not
synchronized with the source database when processing changes. This can cause
duplicate key errors on INSERT operations or zero rows affected on
UPDATE/DELETE operations.
n Fatal Errors: An error that is neither related to a specific table nor
environmental. Some examples of fatal errors are incorrect database configuration
or incorrect password information.
n Abnormal Termination: An error that is returned when the task terminates for an
abnormal or unknown reason.
Environmental Errors
When an environmental error occurs, the system will attempt to restart the task if it
stops. You can define exactly how and when the system will try to restart a task.
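The restart behavior amounts to a bounded retry loop around the task. The Python
sketch below only illustrates that pattern; the max_retries and
retry_interval_seconds names are hypothetical, not the tool's actual property names.
import time

def run_with_restart(run_task, max_retries=5, retry_interval_seconds=60):
    """Re-run a task that stops because of an environmental error, up to a limit."""
    attempts = 0
    while True:
        try:
            return run_task()
        except EnvironmentError:        # e.g. lost connection to the source or target
            attempts += 1
            if attempts > max_retries:
                raise                   # give up and leave the task stopped
            time.sleep(retry_interval_seconds)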
The following table describes the properties used to determine how to handle
environmental errors.
Data Errors
When a data error occurs (to a specific record), you can define what you want the task
to do and how to handle the error.
The following table describes the properties used to determine how to handle data
errors.
Apply Errors
The following table describes the properties used to determine how to handle Apply
errors. Apply errors occur when the source and target databases are not properly
synchronized during a change processing operation. This causes the following
record-level issues:
n DELETE or UPDATE operations occur with zero rows affected.
n INSERT operation occurs with a duplicate key.
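A common way to handle such conflicts is to apply the failing change as its
counterpart operation, for example applying an INSERT that hits a duplicate key as an
UPDATE, and recording unmatched UPDATE/DELETE operations for later review.
The handler below is only a sketch of that kind of policy; the target interface and
error class are hypothetical and not part of Amazon RDS Migration Tool.
class DuplicateKeyError(Exception):
    """Raised by the hypothetical target interface on a duplicate-key INSERT."""

def apply_change(target, change):
    # Apply one change record, resolving the typical apply conflicts.
    if change["op"] == "INSERT":
        try:
            target.insert(change["table"], change["row"])
        except DuplicateKeyError:
            # The row already exists on the target: apply the INSERT as an UPDATE.
            target.update(change["table"], change["key"], change["row"])
    else:  # UPDATE or DELETE
        affected = target.apply(change)
        if affected == 0:
            # No matching row on the target: record the conflict and continue.
            target.log_apply_exception(change)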
Fatal Errors
There is no manual configuration for handling fatal errors. In case of a fatal error, the
task is stopped and manual intervention is required.
This appendix describes how to set up a secure connection between the Amazon RDS
Migration Console and the Amazon RDS Migration Server and covers the following
topics:
n Setting up HTTPS with IIS
n Setting up HTTPS without IIS
n Changing the Server Password
b. To create the thumbprint, remove the spaces from the hexadecimal string. For
example, removing the spaces from the string above would produce the
following thumbprint:
fcfcc8be554dcac4ace739018e040d60f23ba126
2. To associate the certificate with the HTTPS endpoint, run the following command:
$ netsh http add sslcert ipport=0.0.0.0:443
appid="{4dc3e181-e14b-4a21-b022-59fc669b0914}"
certhash=fcfcc8be554dcac4ace739018e040d60f23ba126
Where:
n The ipport value should match the host:port part of the url= attribute of
the ServiceConfiguration.xml file. For example:
<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
name="AmazonRDSMigrationToolConsole"
displayName="Amazon RDS Migration Console"
description="Web server for Amazon RDS Migration Console Web
application."
url="https://rds migration
tool.acme.com:443/AmazonRDSMigrationToolConsole">
…
</ServiceConfiguration>
0.0.0.0:443 means that the certificate can be used with all local IP
addresses.
n The appid value is the IIS Application ID. The value can also be the Amazon
RDS Migration Console assembly GUID:
{3477c544-694e-46ee-93d8-ba4b9a0d24e4}, or any other GUID.
Note: When setting up a port other than 443 for HTTPS, the IIS Application ID
should not be used.
n The certhash value is the thumbprint value created earlier.
Where the ipport value is the same as the one specified when the certificate was
added.
For Firefox:
n Simply add an exception for the certificate when the Add Security Exception
prompt is displayed.
2. On Linux:
a. Run the following command:
repctl
This appendix describes how to set up Amazon RDS Migration Tool in Windows and
Linux clustering environments and contains the following topics:
n Setting up Amazon RDS Migration Tool in a Windows Cluster
n Setting up Amazon RDS Migration Tool in a Linux Cluster
Add Storage
You must define the disk that you use for storage. Use the storage to hold all of the
information in the Amazon RDS Migration Tool data folder.
To add storage
1. Right-click the Storage node and select Add a Disk. This disk will contain the data
in the data folder. The data folder is located in the root folder where Amazon RDS
Migration Tool is installed. This disk is bound to the RDS Migration Tool Services.
This means that in case of failover, this disk is moved to the other cluster node
running the Amazon RDS Migration Tool Services.
2. On the Storage node, you can add the new disk to the failover cluster.
3. Right-click one of the services in the cluster and select Add Storage. Then select
the new disk to add it to the cluster you are working with.
Note: Make sure that there is a disk available on the server where you are adding the
disk. Also, to ensure that the system recognizes the disk, make sure that the storage
configuration for the disk is correct. The disk you want to use must be configured as a
basic disk, not a dynamic disk. For more information, see the Microsoft online help.
For example, let’s assume that one Amazon RDS Migration Console is installed on a
machine called rdsmigrationtool-main and the other Amazon RDS Migration
Console is installed on a machine called rdsmigrationtool-failover. To set
up a different configuration for each machine, simply create two copies of the
ServiceConfiguration.xml file and then rename them to
n Only one instance of RDS Migration Tool can be active at a given data location.
The cluster software should be set so that during failover, one RDS Migration Tool
instance is stopped and the other is started.
This appendix describes the Amazon RDS Migration Tool Control Tables which are
created on the target database when the corresponding table is selected in the Control
Tables tab.
The following tables are described:
n Apply Exceptions
n Replication Status
n Suspended Tables
n Replication History
Apply Exceptions
All task-related errors are recorded in the attrep_apply_exceptions table, which is
described below.
Replication Status
The attrep_status table contains the current status of the replication task and the target
data.
Suspended Tables
The following section describes the attrep_suspended_tables table.
Replication History
The following section describes the attrep_history table.
This appendix describes how Amazon RDS Migration Tool is affected by Daylight
Saving Time (DST) and provides guidelines for handling changes brought about by
DST.
There are two types of DST changes:
n DST On - Occurs approximately when Summer starts (actual date is country
specific). Its impact on local time is that local time is moved one hour forward (so,
for example, 01:00AM becomes 02:00AM). This DST change does not impact
Amazon RDS Migration Tool as it does not result in time overlap.
n DST Off - Occurs approximately when Winter starts (actual date is country
specific). Its impact on local time is that local time is moved back one hour (so, for
example, 02:00AM becomes 01:00AM). This DST change results in time overlap
where local time travels over the same hour twice in a row.
The comments below assume that the customer has not changed the time itself but
rather the timezone or the DST setting. Changing the actual time (other than minor
adjustments) is a sensitive operation and is best done when Amazon RDS Migration
Tool is stopped.
Amazon RDS Migration Tool tasks do not depend on the timezone or DST (daylight
saving time) setting to correctly scan and process the transaction logs. Internally,
Amazon RDS Migration Tool timers use UTC.
Still, there are several places where DST may have an effect:
1. Timestamps in logs and audit messages are in local time. As a result, when Winter
time starts, the logs will show the time going back an hour; conversely, when
Summer time starts, the logs may appear to be missing one hour.
2. The global and table manipulation variables timestamp and commit_timestamp
are expressed in local time, so their use is also affected. The impact depends on the
manipulation done and on the intended use of the timestamp-based data. If the
backward/forward shift in these timestamps could be disruptive, it is recommended
to stop the Amazon RDS Migration Server for the duration of the first overlap
period.
Going into Winter time, for example, if at 02:00AM the clock is set back to
01:00AM, then at 00:55AM the Amazon RDS Migration Server should be stopped
and, after an hour and ten minutes (at 01:05AM), started again.
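Because the internal timers use UTC, only local-time representations are ambiguous
during the overlap. The short Python sketch below illustrates this; the date and
timezone are assumed for the example only.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/New_York")   # assumed example zone; DST ended on 2015-11-01 here

# When the clock is set back from 02:00AM to 01:00AM, the local time 01:30AM
# occurs twice; the fold attribute selects which occurrence is meant.
first_pass = datetime(2015, 11, 1, 1, 30, tzinfo=tz, fold=0)
second_pass = datetime(2015, 11, 1, 1, 30, tzinfo=tz, fold=1)

print(first_pass.astimezone(timezone.utc))    # 2015-11-01 05:30:00+00:00 (still DST)
print(second_pass.astimezone(timezone.utc))   # 2015-11-01 06:30:00+00:00 (DST has ended)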
Full Load
Creates all defined files or tables at the target database, automatically defines the
metadata that is required at the target, and populates the tables with data from the
source.
Latency
The time interval between the completion of a transaction on the source database and
the appearance of changes in the target. The latency at a target may vary due to
various reasons (for example, the target system may be down or inaccessible), but the
Amazon RDS Migration Tool attempts to keep the latency at a target in the
several-seconds-to-several-minutes range.
Source Database
A collection of files or tables managed by a database management system (such as
Oracle or SQL Server) that is part of the main computing service of the IT organization
of an enterprise. This source is continuously updated, may need to provide a high
throughput rate, may have strict 24/7 up-time requirements, and may reference or
update a number of tables in the course of a single logical transaction while providing
transactional consistency and integrity for the data.
Target Database
A collection of files or tables managed by a Database Management System (DBMS),
which may be different from the DBMS managing the source database. It contains data
that is derived from the source. It may contain only a subset of the tables, columns, or
rows that appear in the source. Its tables may contain columns that do not appear in
the source but are transformations or computations based on the source data.