Using GoldenGate Stream Analytics on Cloud Marketplace
F25909-05
February 2022
This software and related documentation are provided under a license agreement containing restrictions on
use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your
license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license,
transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse
engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is
prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If
you find any errors, please report them to us in writing.
If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on
behalf of the U.S. Government, then the following notice is applicable:
U.S. GOVERNMENT END USERS: Oracle programs (including any operating system, integrated software,
any programs embedded, installed or activated on delivered hardware, and modifications of such programs)
and Oracle computer documentation or other Oracle data delivered to or accessed by U.S. Government end
users are "commercial computer software" or "commercial computer software documentation" pursuant to the
applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use,
reproduction, duplication, release, display, disclosure, modification, preparation of derivative works, and/or
adaptation of i) Oracle programs (including any operating system, integrated software, any programs
embedded, installed or activated on delivered hardware, and modifications of such programs), ii) Oracle
computer documentation and/or iii) other Oracle data, is subject to the rights and limitations specified in the
license contained in the applicable contract. The terms governing the U.S. Government’s use of Oracle cloud
services are defined by the applicable contract for such services. No other rights are granted to the U.S.
Government.
This software or hardware is developed for general use in a variety of information management applications.
It is not developed or intended for use in any inherently dangerous applications, including applications that
may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you
shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its
safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this
software or hardware in dangerous applications.
Oracle, Java, and MySQL are registered trademarks of Oracle and/or its affiliates. Other names may be
trademarks of their respective owners.
Intel and Intel Inside are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are
used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Epyc,
and the AMD logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered
trademark of The Open Group.
This software or hardware and documentation may provide access to or information about content, products,
and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly
disclaim all warranties of any kind with respect to third-party content, products, and services unless otherwise
set forth in an applicable agreement between you and Oracle. Oracle Corporation and its affiliates will not be
responsible for any loss, costs, or damages incurred due to your access to or use of third-party content,
products, or services, except as set forth in an applicable agreement between you and Oracle.
Contents
2 Customizing Configurations
2.1 Configuring an external Metadata Store
2.1.1 Customer Managed Oracle Database
2.1.2 Autonomous Database
2.2 Configuring an external Spark cluster
2.3 Configuring an External OCI Streaming Service
5 Upscaling and Downscaling GoldenGate Stream Analytics Compute Shape Sizes
Preface
This book describes how to use Oracle Stream Analytics on Oracle Cloud Marketplace.
Topics:
• Audience
• Documentation Accessibility
• Related Documents
• Conventions
Audience
This document is intended for users of Oracle Stream Analytics on Oracle Cloud
Marketplace.
Documentation Accessibility
For information about Oracle's commitment to accessibility, visit the Oracle Accessibility
Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.
Related Documents
Documentation for Oracle Stream Analytics is available on Oracle Help Center.
Also see the following documents for reference:
• Understanding Oracle Stream Analytics
• Developing Custom Jars and Custom Stages in Oracle Stream Analytics
• Quick Installer for Oracle Stream Analytics
• Known Issues in Oracle Stream Analytics
• Spark Extensibility for CQL in Oracle Stream Analytics
Conventions
The following text conventions are used in this document.
Convention – Meaning
boldface – Boldface type indicates graphical user interface elements associated with an action, or terms defined in text or the glossary.
italic – Italic type indicates book titles, emphasis, or placeholder variables for which you supply particular values.
monospace – Monospace type indicates commands within a paragraph, URLs, code in examples, text that appears on the screen, or text that you enter.
1
Getting Started with GGSA on Oracle Cloud
This chapter provides an introduction to GoldenGate Stream Analytics on the Oracle Cloud
Marketplace.
1.1 Overview
Oracle GoldenGate Stream Analytics (GGSA) on Oracle Cloud Marketplace enables
customers to set up and run Oracle GGSA on Oracle Cloud. The cloud version provides the
same functionality, scalability, security, and support as the on-premises version. All sources
and targets supported in the on-premises version are supported on the cloud.
Acquiring data
Stream Analytics can acquire data from any of the following on-premises and cloud-native
data sources:
• GoldenGate: Natively integrated with Oracle GoldenGate, Stream Analytics offers data
replication for high-availability data environments, real-time data integration, and
transactional change data capture.
• Oracle Cloud Streaming: Ingest continuous, high-volume data streams that you can
consume or process in real-time.
• Kafka: A distributed streaming platform used for metrics collection and monitoring,
log aggregation, and so on.
• Java Message Service: Allows Java-based applications to send, receive, and read distributed communications.
Processing data
With Stream Analytics on Oracle Cloud Infrastructure Marketplace, you can filter,
correlate, and process events in real-time, in the cloud. Marketplace solutions are
deployed to Oracle Cloud Infrastructure Compute instances within your regional
subnet.
After Stream Analytics processes the data, you can:
• store the data in Autonomous Data Warehouse, where it can be accessed by
Analytics Cloud
• configure the Notifications service to broadcast messages to other applications hosted on Oracle Cloud Infrastructure
• send messages to OCI streaming service
• send the processed data to the OCI Object store
Analyzing data
You can connect Analytics Cloud to Autonomous Data Warehouse so that users can
analyze the data in visualizations, analyses, dashboards, and pixel-perfect reports.
1.3 Resources
The GGSA marketplace image includes the following installed components:
• GoldenGate Stream Analytics 19.1.0.0.6.1
• Spark 3.0.2
• Kafka 2.1.1
• MySQL Enterprise 8.0.18
• Java – JRE-8u281
• Oracle GoldenGate for Big Data 19.1.0.0.8 (Classic Architecture)
Except for Java and MySQL Enterprise, all other components are installed under the /u01/app directory structure.
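For orientation, the following is a quick way to see the component homes on a freshly provisioned instance. This is a hedged sketch: only the osa and ggbd directories are referenced later in this guide, and any other directory names are assumptions that may vary by image version.

# List component installation homes under /u01/app
ls /u01/app
# Expect entries for OSA (osa) and GoldenGate for Big Data (ggbd),
# plus the bundled Spark and Kafka homes (exact names may vary)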
Creating a GGSA Instance
Note:
Do not include a hyphen in the Host DNS Name while provisioning a GGSA instance. You will see the following error if you include a hyphen:
No connection to server. Check your network connection and refresh page. If the problem persist contact administrator.
• Network Settings –
– Create New – Uncheck the Use Existing Network check box if you want to create new network infrastructure.
Note:
If you are using an existing VCN and subnet, check the following if you want the instance to be accessible from the public internet:
* Your VCN must have an internet gateway.
* Your subnet should have security rules to allow ingress HTTP traffic on the following ports: 443, 7801, 7811-7830.
* 443 – runs Nginx, which fronts the OSA UI and the Spark console.
* 7801 – the port for the GG Manager when GoldenGate for Big Data (GGBD) is installed.
* 7811-7830 – when you start GG Change Data from the OSA UI, it creates a GG Distribution Path and starts the GG Replicat process. These are the ports used by Oracle GG processes to bind for communication with a remote Oracle GG process.
* Your subnet should be a public subnet, and should have a route rule with the target type set to Internet Gateway, using the Internet Gateway of the VCN.
• Instance Settings –
a. Availability Domain – Specifies the availability domain for the newly created GGSA instance. It must match the availability domain of the subnet that you selected in the Use Existing Network settings.
b. Compute Shape – Shape of the new compute instance. Supported shapes are VM.Standard2.4, VM.Standard2.8, VM.Standard2.16, and VM.Standard2.24.
c. Assign Public IP – This option indicates whether the newly created VM should have a public IP address. It is selected by default. If you clear this check box, no public IP address is assigned, preventing public access to the compute node.
It is important to provide your RSA public key for accessing the instance using SSH. SSH is the only way to retrieve the passwords for GoldenGate Stream Analytics and the MySQL metadata store. The RSA public key starts with ssh-rsa.
d. Custom Volume Sizes – Select this check box to customize the size of the new
block storage volumes that are built for the compute node.
i. Boot Volume – 50 GB
ii. Kafka Logs – 50 GB
iii. Mysql Datafile – 50 GB
Note:
The GGSA Resource Manager Stack creates three additional block volumes apart from the boot volume attached to the instance. The naming convention for these block volumes is:
– <Display Name> (Kafka Logs)
– <Display Name> (Mysql Datafile)
– <Display Name> (GG4BD Datadir)
The Display Name is what you specified earlier.
You can configure backup for these block volumes by subscribing to the OCI Backup Policy. To apply the Backup Policy to a block volume, go to Block Storage > Block Volume, select the block volume for which you want to configure the backup, and click Apply.
• Use Existing Block Volume – Select this option if you already have block volumes from a previous install with data that you want to reuse.
a. Kafka Logs Block Volume – Provide the OCID of the block volume that was used for Kafka logs in your previous setup.
b. MySql Datafile Block Volume – Provide the OCID of the block volume that was used for MySQL datafile storage in your previous setup.
c. GG4BD Datadir – Provide the OCID of the block volume that was used for the GoldenGate for Big Data data directory in your previous setup.
• Shell Access
– SSH Public Key – Public Key for allowing SSH access as the opc user. Enter
the key and click Next.
3. Review –
• On the Review page, review the information you provided and then click Create. The OCI Resource Manager Stack is created, and a Terraform Apply action is initiated to create the OCI resources. You can see the created stack in the list of stacks in the Resource Manager > Stacks section. You can manage the stack from here.
• Plan – Terraform Plan performs a refresh, unless explicitly disabled, and then
determines what actions are necessary to achieve the desired state.
• Apply – Use the Terraform Apply action to create the OCI resources required to reach the desired state, as per your selections during stack creation.
• Destroy – Use the Terraform Destroy action to destroy all the OCI resources
created by the Stack during the Apply action.
Note:
All the resources created by the Apply action will be destroyed if you use
the Destroy action, so be careful when using this action.
Validating a GGSA Instance
To connect to the instance, use the private key corresponding to the public key provided at the time of stack creation.
You can find the instance IP in the logs of the Apply job, or from Outputs under Resources on the Job Details page: ggesp_public_ip.
The following example illustrates how you can connect to the OSA compute node:
ssh -i <PRIVATE_KEY_FILE_PATH> opc@<INSTANCE_IP>
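As a quick check that the instance is also reachable over the web, you can probe port 443, which runs Nginx fronting the OSA UI and Spark console as noted earlier. A minimal sketch; the -k flag skips certificate validation, assuming the instance is using a self-signed certificate:

# Confirm the web front end answers on 443 (prints the HTTP status line)
curl -k -I https://<INSTANCE_IP>/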
Starting and Stopping GGSA Services
Note:
If the OSA service appears dead or failed, wait for about 10 minutes for the services to start.
Note:
As mentioned in the previous section, wait for all the services to start; otherwise, the README file will have only placeholder passwords and not the actual passwords.
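While you wait, you can check service state from the shell. A minimal sketch using systemd; the osa unit name matches the restart commands shown later in this guide, and any other unit names on the image are assumptions:

# Check the OSA service; repeat until it reports active (running)
sudo systemctl status osa
# List any units that failed to start
systemctl --failed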
Securing your GGSA Instance
Note:
Similarly, you can change the OSA_DEMO user password with mysql -u OSA_DEMO -p.
You can set the sparkadmin user and password in the Environment tab of the System Settings screen on the UI. You need these credentials to access the Spark Console.
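A minimal sketch of changing the OSA_DEMO password from the instance shell, using standard MySQL 8 syntax; ALTER USER USER() changes the password of the account you are logged in as, so no host qualifier has to be assumed:

mysql -u OSA_DEMO -p
-- then, inside the mysql session:
ALTER USER USER() IDENTIFIED BY '<new-password>';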
2
Customizing Configurations
This chapter describes how to configure an external metadata store or an external Spark cluster.
For a list of other supported system configurations, see the latest certification matrix.
2.1 Configuring an external Metadata Store

Note:
After pointing OSA to a new metadata store, run the following SQL against the same user/schema:
UPDATE osa_system_property SET value="true" WHERE mkey="osa.oci.instance";

2.1.1 Customer Managed Oracle Database
<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure_9_3.dtd">
<Configure id="Server" class="org.eclipse.jetty.server.Server">
<!--
    <New id="osads" class="org.eclipse.jetty.plus.jndi.Resource">
        <Arg>
            <Ref refid="wac"/>
        </Arg>
        <Arg>jdbc/OSADataSource</Arg>
        <Arg>
            <New class="com.mysql.cj.jdbc.MysqlConnectionPoolDataSource">
                <Set name="URL">jdbc:mysql://localhost:3306/OSADB</Set>
                <Set name="User">osa</Set>
                <Set name="Password">
                    <Call class="org.eclipse.jetty.util.security.Password" name="deobfuscate">
                        <Arg>OBF:{OBFUSCATED_PASSWORD}</Arg>
                    </Call>
                </Set>
                <Set name="maxAllowedPacket">209715200</Set>
                <Set name="allowPublicKeyRetrieval">true</Set>
            </New>
        </Arg>
    </New>
-->
</Configure>
3. Add the following section for Oracle database right below the commented section.
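As a hedged sketch, an Oracle datasource entry typically mirrors the commented MySQL entry above, swapping in the Oracle JDBC data source class (oracle.jdbc.pool.OracleDataSource) and an Oracle connect string. HOST, PORT, SERVICE_NAME, and the obfuscated password are placeholders you must supply:

<New id="osads" class="org.eclipse.jetty.plus.jndi.Resource">
    <Arg>
        <Ref refid="wac"/>
    </Arg>
    <Arg>jdbc/OSADataSource</Arg>
    <Arg>
        <!-- Oracle's JDBC data source class; the URL points at your database -->
        <New class="oracle.jdbc.pool.OracleDataSource">
            <Set name="URL">jdbc:oracle:thin:@//HOST:PORT/SERVICE_NAME</Set>
            <Set name="User">osa</Set>
            <Set name="Password">
                <Call class="org.eclipse.jetty.util.security.Password" name="deobfuscate">
                    <Arg>OBF:{OBFUSCATED_PASSWORD}</Arg>
                </Call>
            </Set>
        </New>
    </Arg>
</New>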
2.1.2 Autonomous Database
Note:
You will need database administrator credentials with sysdba privileges to
perform this step.
<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure_9_3.dtd">
<Configure id="Server" class="org.eclipse.jetty.server.Server">
<!--
    <New id="osads" class="org.eclipse.jetty.plus.jndi.Resource">
        <Arg>
            <Ref refid="wac"/>
        </Arg>
        <Arg>jdbc/OSADataSource</Arg>
        <Arg>
            <New class="com.mysql.cj.jdbc.MysqlConnectionPoolDataSource">
                <Set name="URL">jdbc:mysql://localhost:3306/OSADB</Set>
                <Set name="User">osa</Set>
                <Set name="Password">
                    <Call class="org.eclipse.jetty.util.security.Password" name="deobfuscate">
                        <Arg>OBF:{OBFUSCATED_PASSWORD}</Arg>
                    </Call>
                </Set>
                <Set name="maxAllowedPacket">209715200</Set>
                <Set name="allowPublicKeyRetrieval">true</Set>
            </New>
        </Arg>
    </New>
-->
</Configure>
3. Add the following section for Oracle database right below the commented section.
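For an Autonomous Database, only the connect string typically differs from the sketch in the previous section: recent Oracle JDBC drivers accept a TNS alias from the downloaded credentials wallet via the TNS_ADMIN URL parameter. A hedged example, where the alias and wallet path are placeholders:

<Set name="URL">jdbc:oracle:thin:@mydb_medium?TNS_ADMIN=/path/to/wallet</Set>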
5. The next step is to initialize the metadata store and set the password for user osaadmin.
Note:
You will need database administrator credentials with sysdba privileges to
perform this step.
3
Working with a GGSA Instance
This chapter describes how to create artifacts, data sources, and targets; work with pipelines; and monitor the GGSA services.
Using the Local MySQL Database, Spark, and Kafka Clusters
You can also connect from SQL Developer as shown in the screenshot below. Please
note the setting for Zero Date Handling.
You can use the following connect string when you create a connection from the GGSA UI for the MySQL database:
Note:
The password mentioned in the screen above is the password for the OSA_DEMO schema, which holds the sample data for users to try. It is set to Welcome123! by default, and you have the option to change it.
Working with a Sample Pipeline
Note:
your-shell-script generates data in JSON format.
Sample output:
{"price":284.03,"symbol":"USDHUF=x","name":"USD/HUF"}
{"price":316.51,"symbol":"EURHUF=x","name":"EUR/HUF"}
{"price":0.8971,"symbol":"USDEUR=x","name":"USD/EUR"}
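To confirm the sample data is actually flowing, you can attach a console consumer to the topic using the Kafka installation bundled on the instance. A minimal sketch; the topic name and the Kafka home under /u01/app are assumptions, adjust them to your setup:

# Tail the sample topic from the beginning
/u01/app/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic <your-topic> --from-beginning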
2. Click the IOT logo to navigate to the Catalog page, or click the Catalog tab. On
the Catalog page, click the pipeline to view the live results from the pipeline.
3. Click Done to exit the editor and return to the Catalog page. Click Publish to publish the pipeline.
4. After successfully publishing the pipeline, click the dashboard associated with the pipeline.
The dashboard displays the live results. Click Exit to leave the dashboard and return to the Catalog page.
Telecommunications Sample
The telecom sample analyzes logs from an Nginx reverse proxy to proactively alert on
security violations, application outages, traffic patterns, redirects, etc.
Retail Sample
The retail sample analyzes credit card purchases in real-time and drives additional sales by
making compelling offers. Offers are made based on customer location, past purchase
history, etc. The sample also uses a Machine Learning model to score the likelihood of the
customer redeeming the offer. Offers are dropped if the likelihood of redemption is low.
Monitoring Pipelines from the Spark Console
Pipelines in a Draft state end with the suffix _draft. The Duration column indicates how long the pipeline has been running.
You can also view the Streaming statistics by clicking App ID > Application Detail UI > Streaming.
The Streaming statistics page displays the data ingestion rate, processing time for a micro-batch, scheduling delay, and so on. For more information on these metrics, refer to the Spark documentation.
4
Upgrading a GGSA Instance
This chapter describes how to upgrade an existing GGSA instance.
3. Terminate the existing OCI VM on which the current GGSA instance is running. To terminate the instance, go to Compute > Instances. Select your instance running GGSA, and on the Instance Details page, select Terminate in the Actions drop-down.
Note:
Ensure that you delete just the OCI VM running GGSA and not the complete infrastructure. If you run the Terraform Destroy action against the stack, it deletes all the artifacts, including block volumes and the network.
4. On the OCI Marketplace home page, click GoldenGate Stream Analytics to open its listing page. Select the GGSA version and compartment, agree to the Oracle standard terms and restrictions, and click Launch Stack.
5. In the Create Stack wizard, provide the essential information such as:
• Stack Information Screen
• Configure Variables
a. In the Instance Settings section, select Use Existing Block Storage Volumes.
b. For the Kafka Logs, Mysql Datafiles, and GG4BD Datadir block volumes, provide the OCIDs of the existing block volumes that were attached to the GGSA instance you terminated earlier. To find them, go to the top-level menu, then Block Storage, and then Block Volume. You can identify the block volumes by their names, which are similar to the name of your instance host and have Kafka Logs, Mysql Datafiles, or GG4BD Datadir in brackets.
c. Review.
6. Click Create. This will create the OCI Resource Manager Stack, and also trigger
the Terraform Apply action on that stack to create the OCI resources.
7. After the Terraform Apply action is complete, go to the Instances page to find the
details of the new instance created.
8. Connect to the new instance using SSH and update the MySQL OSA user password in /u01/app/osa/osa-base/etc/jetty-osa-datasource.xml, using the password copied in Step 2 (see the sketch after this list).
9. Restart the OSA service using the following commands:
sudo systemctl reset-failed
sudo systemctl restart osa
10. If the existing pipelines are deployed on the internal Spark cluster, republish them after the upgrade; otherwise, they are killed when the existing GGSA instance is terminated.
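For step 8, note that the password in jetty-osa-datasource.xml is typically stored in Jetty's obfuscated form (the OBF: value shown in the datasource examples in Chapter 2). A hedged sketch of generating a new OBF: string with Jetty's own Password tool; the jetty-util jar path and version are assumptions, locate the jar on your instance first:

# Generate an OBF: string for the new password
java -cp /u01/app/osa/osa-base/lib/jetty-util-<version>.jar \
  org.eclipse.jetty.util.security.Password osa '<new-password>'
# Paste the printed OBF:... value into the <Set name="Password"> element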
5
Upscaling and Downscaling GoldenGate
Stream Analytics Compute Shape Sizes
This chapter describes how to upscale or downscale the compute shape size of your GGSA instance on the Oracle Cloud Marketplace compute node.
To change the size of your GGSA compute shape:
1. Log in to your OCI tenancy.
2. From the Context menu on the top left corner of your screen, select Compute, and then select Instances.
3. Select the GGSA compute node to edit.
4. On the Instance Details screen, locate the Edit drop-down menu.
5. From the Edit drop-down menu, select Edit Shape. A list of available shapes appears, allowing you to select the desired shape for your compute instance.
6. Select the required compute shape and click Change Shape.
This restarts your compute node.
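If you prefer to script the same operation, the OCI CLI can change an instance's shape directly; a hedged sketch, where the instance OCID and target shape are placeholders (this also restarts the compute node, as with the console flow above):

oci compute instance update --instance-id <instance-ocid> --shape VM.Standard2.8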
6
Creating a Production Cluster for GGSA
This chapter describes how to create a production cluster for GGSA on OCI, using an example production cluster with six compute instances.
A production cluster should have a minimum of one web-tier compute instance, two Spark master compute instances, and a variable number of worker instances for Spark workers and Kafka. A minimum of two worker instances is required to run Spark workers and Kafka. The example cluster is configured as follows:
1. A single compute instance running the GGSA web-tier. Minimal shape is VM 2.4. This web-tier instance also acts as a Bastion Host for the cluster and should be created in a public regional subnet.
2. Two Spark master compute instances running Spark master processes. Minimal shape is VM 2.2. Both instances can be of the same shape and should be created in a private regional subnet.
3. Three worker instances running Spark worker, Kafka broker, and Zookeeper processes. Minimal shape is VM 2.2. All three instances can be of the same shape and should be created in a private regional subnet.
To provision a GGSA production cluster, you must first provision GGSA compute instances using the custom GGSA VM image. The image contains binaries for GGSA, GGBD, Spark, Kafka, MySQL, and Nginx, so you do not require any additional software. The image packages the following scripts:
• init-kafka-zk.sh: Bash script to initialize Zookeeper and Kafka on worker VMs
• init-spark-master.sh: Bash script to initialize Spark master on Spark master VMs
• init-spark-slave.sh: Bash script to initialize Spark worker on worker VMs
• init-web-tier.sh: Bash script to initialize GGSA web-tier on the Web-tier VM
Infrastructure resources must be created and scripts must be run in the following order:
1. Creating Network Components
2. Creating Security Lists and Associating with Subnets
3. Creating Storage
4. Creating Worker VMs and Configuring ZK plus Kafka on each VM
5. Creating Spark Master VMs and Configuring a Spark Master on each VM
6. Initializing Spark Workers on Worker VMs
7. Creating Web-tier VM And Configuring the Runtime Environment
8. Validating Topology
Creating Storage
Note:
The export path names have to be exactly as defined below.
Note:
The Mount Target IP address is required for subsequent steps. Note it down.
• VCN = your-vcn-name
• Subnet = your-private-regional-subnet
2. Create the Kafka file system with the attributes below. Use the same Mount Target as in Step 1, because Mount Targets are expensive and the scripts assume a single Mount Target.
• File System Name = KAFKA
Creating Worker VMs and Configuring ZK plus Kafka on each VM
Note:
IP addresses of all these Worker VMs are required for the next step. Save them as a space-separated list, for example, Worker VM IP List.
2. SSH to each Worker VM and execute the script init-kafka-zk.sh. All arguments to the script must be separated by spaces.
Note:
The Server ID must be incremental, starting with 1 and going up to the number of workers. For example, if you have three Worker VMs, the Server ID starts at 1 and ends at 3.
A successful run has Kafka and Zookeeper servers running on all Worker VMs.
You can use an OCI Streaming Service (OSS) connection instead of Kafka or Zookeeper. See Creating a Connection to OSS.
Creating Spark Master VMs and Configuring a Spark Master on each VM
Note:
IP addresses of these Master VMs are required for subsequent steps. Save them as a space-separated list, for example, Master VM IP List.
2. SSH to each Master VM and execute the script init-spark-master.sh. All arguments to the script must be separated by spaces. The usage for the script is as follows:
$OSA_HOME/osa-base/bin/init-spark-master.sh <MountTargetIP> <Worker VM IP List>
Example: $OSA_HOME/osa-base/bin/init-spark-master.sh 10.0.1.73 10.0.0.45 10.0.0.46 10.0.0.47
On a successful run, Spark Master processes are running on the Spark Master VMs.
Creating a Web-tier VM and Configuring the Runtime Environment
Note:
To run this script you require the Mount Target IP, Spark Master 1 IP, Spark Master 2 IP, and one or more Worker Node IPs. All arguments to the script must be separated by spaces. The usage for the script is as follows:
$OSA_HOME/osa-base/bin/init-web-tier.sh <MountTargetIP> <Master 1 IP> <Master 2 IP> <Worker 1 IP> ...
Example: $OSA_HOME/osa-base/bin/init-web-tier.sh 10.0.1.73 10.0.0.95 10.0.0.96 10.0.0.100
7
Troubleshooting GGSA on OCI
This chapter describes how to troubleshoot issues you encounter while using GoldenGate Stream Analytics on OCI.
For troubleshooting specific UI issues, you can refer to Oracle Stream Analytics
Troubleshooting Guide.
GoldenGate Change Data
If a Replicat is abended, you can check the GoldenGate for Big Data error log ggserr.log under the folder /u01/app/ggbd/OGG_BigData_Linux_x64_19.1.0.0.8/.
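A minimal way to inspect that log from the shell, using the path given above:

# Show the most recent GoldenGate for Big Data errors
tail -n 100 /u01/app/ggbd/OGG_BigData_Linux_x64_19.1.0.0.8/ggserr.log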