The contents of this course and all its modules and related materials, including handouts to audience members, are
Copyright © 2021 Red Hat, Inc.
No part of this publication may be stored in a retrieval system, transmitted or reproduced in any way, including, but
not limited to, photocopy, photograph, magnetic, electronic or other record, without the prior written permission of
Red Hat, Inc.
This instructional program, including all material provided herein, is supplied without any guarantees from Red Hat,
Inc. Red Hat, Inc. assumes no liability for damages or legal action arising from the use or misuse of contents or details
contained herein.
If you believe Red Hat training materials are being used, copied, or otherwise improperly distributed, please send
email to training@redhat.com or phone toll-free (USA) +1 (866) 626-2994 or +1 (919) 754-3700.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, JBoss, OpenShift, Fedora, Hibernate, Ansible, CloudForms,
RHCA, RHCE, RHCSA, Ceph, and Gluster are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries
in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS® is a registered trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or
other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent
Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack Logo are either registered trademarks/service marks or trademarks/
service marks of the OpenStack Foundation, in the United States and other countries and are used with the
OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack
Foundation or the OpenStack community.
Document Conventions
This section describes various conventions and practices used throughout all
Red Hat Training courses.
Admonitions
Red Hat Training courses use the following admonitions:
References
These describe where to find external documentation relevant to a
subject.
Note
These are tips, shortcuts, or alternative approaches to the task at hand.
Ignoring a note should have no negative consequences, but you might
miss out on something that makes your life easier.
Important
These provide details of information that is easily missed: configuration
changes that only apply to the current session, or services that need
restarting before an update will apply. Ignoring these admonitions will
not cause data loss, but may cause irritation and frustration.
Warning
These should not be ignored. Ignoring these admonitions will most likely
cause data loss.
Inclusive Language
Red Hat Training is currently reviewing its use of language in various areas
to help remove any potentially offensive terms. This is an ongoing process
and requires alignment with the products and services covered in Red Hat
Training courses. Red Hat appreciates your patience during this process.
Introduction
Two other machines are also used by students for these activities:
Classroom Machines
Machine name                    IP address
bastion.lab.example.com
workstation.lab.example.com     172.25.250.9
servera.lab.example.com         172.25.250.11
servera.lab.example.com         172.25.250.13
Exercises are of two types. The first type, a guided exercise, is a practice exercise that follows
a course narrative. If a narrative is followed by a quiz, then usually the topic did not have an
achievable practice exercise. The second type, an end-of-chapter lab, is a gradable exercise to
help verify your learning. When a course includes a comprehensive review, the review exercises are
structured as gradable labs. The syntax for running an exercise script is lab action exercise,
where action is a choice of start, grade, or finish. All exercises support start and finish.
Only end-of-chapter labs and comprehensive review labs support grade. Older courses might still
use setup and cleanup instead of the current start and finish actions.
start
Formerly setup. A script's start logic verifies the required resources to begin an exercise. It
might include configuring settings, creating resources, checking prerequisite services, and
verifying necessary outcomes from previous exercises. You can take an exercise at any time,
even without taking prerequisite exercises.
grade
End-of-chapter labs help verify what you have learned, after practicing with earlier guided
exercises. The grade action directs the lab command to display a list of grading criteria, with
a PASS or FAIL status for each. To achieve a PASS status for all criteria, fix the failures and
rerun the grade action.
finish
Formerly cleanup. A script's finish logic deletes exercise resources that are no longer
necessary. You can take an exercise as many times as you want.
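For example, the following commands run the start, grade, and finish actions for a hypothetical exercise named some-exercise (an illustrative name, not an exercise from this course):

[student@workstation ~]$ lab start some-exercise
[student@workstation ~]$ lab grade some-exercise
[student@workstation ~]$ lab finish some-exercise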
Note
Scripts download from the http://content.example.com/courses/course/release/grading-scripts
share, but only if the script does not yet exist on workstation. When you need to download a
script again, such as when a script on the share is modified, manually delete the current exercise
script from /usr/local/lib on workstation, and then run the lab command for the exercise
again. The newer exercise script then downloads from the grading-scripts share.
To delete all current exercise scripts on workstation, use the lab command's --refresh
option. A refresh deletes all scripts in /usr/local/lib but does not delete the log files.
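For example, assuming the option takes no other arguments (an illustrative invocation):

[student@workstation ~]$ lab --refresh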
The exercise error log is often more useful for troubleshooting. Even when the scripts succeed,
messages are still sent to the exercise error log. For example, a script that verifies that an object
already exists before attempting to create it causes an object not found message when the
object does not exist yet. In this scenario, that message is expected and does not indicate a failure.
Actual failure messages are typically more verbose, and experienced system administrators should
recognize common log message entries.
Although exercise scripts are always run from workstation, they perform tasks on other systems
in the course environment. Many course environments, including OpenStack and OpenShift, use
a command-line interface (CLI) that is invoked from workstation to communicate with server
systems by using API calls. Because script actions typically distribute tasks to multiple systems,
additional troubleshooting is necessary to determine where a failed task occurred. Log in to those
other systems and use Linux diagnostic skills to read local system log files and determine the root
cause of the lab script failure.
Chapter 1
Managing Multicluster Kubernetes Architectures
Goal: Describe multicluster architectures and solve their challenges with Red Hat OpenShift Platform Plus.
Objectives
After completing this section, you should be able to describe common use cases for multiple
Kubernetes clusters and examine the challenges of multicluster architectures.
The wide adoption of the Internet brought new business models to different industries, with new
technologies and ways of working. One example is the music and video streaming services that
completely changed the entertainment industry. Another example is the growth of online stores
that pushed traditional retailers to change how they operated their traditional stores. These
disruptions also affect other sectors, such as telecommunications, the automotive business, and
financial companies.
Furthermore, these new business models must frequently support operations that serve a huge
number of users. An online business with one thousand customers could succeed and grow to
hundreds of thousands or millions of users in a few days or weeks.
IT departments face many new challenges as a result of these disruptions. Three modern
technologies and ways of working are helping IT departments to adapt to digital
transformation:
• Cloud computing
• Containers technology
• Automation of processes
Cloud computing environments share the following characteristics:
• You can access all resources in the cloud computing environment through a network.
• The cloud computing environment contains a repository of IT resources, such as applications,
software services, and hardware abstractions.
• You can provision and scale the cloud computing environment quickly.
There are different types of clouds, such as public clouds, private clouds, and hybrid clouds. A
public cloud provider is a company in charge of maintaining the base hardware, the network, and
the software used to virtualize the servers. Cloud providers also offer service agreements and
pay-per-use options for using their services.
The arrival of public cloud providers was a disruptive change in the IT market. Now, an enterprise
can run all its software workloads without maintaining on-premises data centers or spending on
new hardware.
However, not all companies and workloads are suitable for migration to a public cloud. The
following list suggests some reasons to limit the use of public clouds:
• The company already has many critical workloads running in its own data centers.
• The cost of migrating to cloud-native developments is very high.
• Some legacy systems are not adapted to run on cloud environments.
• The operational cost of public cloud services can be high, especially if the enterprise grows
rapidly.
• Some verticals, such as healthcare or finance, must comply with important regulatory
restrictions on where their data is stored. The General Data Protection Regulation law of the
European Union is an excellent example of a data location restriction.
• The company prioritizes concerns about security and about losing control of the IT infrastructure.
There are many other models for interacting with one or multiple clouds that sit between a fully
externalized public provider and a private data center. One model is a fully private cloud.
A private cloud is an IT infrastructure hosted in private data centers whose services are requested
and served with a model similar to public clouds:
• Self-provisioning on demand
• Auto-scaling resources based on infrastructure usage
• Better resource allocation and utilization over the same physical resources
Private clouds can be isolated from the external world. In its purest form, a private cloud cannot
and should not communicate with a public cloud. However, in the real world, IT departments use
mixed models. Private data centers, hosted data centers, data centers on public clouds, or data
centers on private clouds need to share resources between them. A hybrid cloud is an IT
infrastructure that can communicate and share services between the following resource types:
• One or more data centers with bare metal or virtualized environments owned by the company
• One or more data centers with bare metal or virtualized environments rented to other
companies
• One or more private clouds
• One or more public clouds
The boundaries between some of these kinds of infrastructures are not always clear, particularly
as the technologies evolve. For example, many public cloud providers install some services, and
in some cases physical resources, in their customers' private data centers.
Using multiple clouds provides the following advantages:
• High availability and failover capabilities: If the primary cloud is down, then other clouds can
assume the workload.
• Using specific services from specific clouds: Administrators can use a preferred cloud service
from each cloud.
• Geographical proximity: Some services can be faster if they run in close proximity to the user,
although this depends on the customer's location.
• Regulatory restrictions: In some cases, data regulatory restrictions can apply.
• Network latency: For some industries, such as telecommunications, network traffic latency is a critical concern.
As an example of a hybrid cloud, the following diagram displays the distribution of two software
services, an online shop for global access to customers and a back-office application to manage
the online shop. The shop service has the benefits of global distribution. Customers always
connect to the nearest shop location. The back-office service only runs in the private data centers,
whereas the shop service runs in the private data centers and in two public clouds. However, the
regulated back-office data is only present in the Europe data center.
Linux containers are one of the best approaches to providing the infrastructure and development
framework to run both cloud-native applications and legacy workloads.
Container technologies use software packaging techniques to make applications fully portable to
any hybrid cloud. The packaging and portability of containers make them suitable for every cloud
computing model and for traditional data centers.
Linux containers provide benefits such as application portability and reusability, better resource
utilization, and reduced time to deployment. In contrast, the use of Linux containers can introduce
new challenges, such as managing communication between services running in containers and
allocating resources effectively for the running containers.
You can address these challenges by using a container orchestration system. Kubernetes is an
orchestration system that simplifies the deployment, management, and scaling of containerized
applications.
Automation Processes
Companies must reduce the time to market for their new services and offerings but keep the
ability to scale their infrastructure up or down quickly if demands change.
By using hybrid clouds and Linux container orchestration systems, companies can cope with
changing demands. Moreover, companies must also increase agility and automate their processes
at both the technical and organizational levels.
Modern IT teams are adapting their work to adopt agile models that automate a good part of
the software development lifecycle. Methods like CI/CD, DevSecOps, and GitOps are difficult to
implement, but are the key to an organization's ability to rapidly bring new business value through
software development.
Note
To learn more about CI/CD and GitOps, see the GLS course DO380 - Red Hat
OpenShift Administration III: Scaling Kubernetes Deployments in the Enterprise, chapter 4.
The following sections discuss the main challenges to adopting a multicluster architecture.
When an organization has many development and QA teams belonging to different areas or
regions, or in different business units, the number of Kubernetes clusters can grow quickly. The
higher the number of clusters, the more difficult it is to locate information about the performance
and the status of the fleet.
Managing a high number of clusters is time-consuming and error-prone. Administrators must work
with multiple consoles, distributed business applications, and sometimes inconsistent security
controls across the diverse clusters deployed on premises or in public clouds.
Another concern of IT departments from a security viewpoint is the origin of the software that runs
in the Kubernetes clusters. The packaging techniques of containers make it very easy to package
software with security problems or obsolete versions of some dependencies inside a container.
The developer's freedom to choose the software used in a container has the associated risk of
adding layers of defective software to the container image.
Furthermore, if the applications have dependencies on services in different clusters, having
visibility of the network topology is a difficult challenge. Multicluster architectures need
repositories of IT artifacts available from different clouds. This need represents another challenge
that affects not only the applications but also the deployment of infrastructure in the multicluster
environment. In the containers world, many infrastructure components are packaged as container
images.
To summarize, the following table relates the new technologies and processes to the challenges
that those technologies and new processes present in a multicluster architecture:
References
Understanding cloud computing
https://www.redhat.com/en/topics/cloud
For more information, refer to the official Documentation of Red Hat Advanced
Cluster Management for Kubernetes at
https://access.redhat.com/documentation/en-us/
red_hat_advanced_cluster_management_for_kubernetes/2.4
Quiz
1. Which two of the following modern technologies are helping IT departments to adapt
to digital transformation? (Choose two.)
a. Cloud computing
b. Distributed computing
c. Linux containers
d. Databases
e. Private data centers
2. Which three of the following advantages are a result of using multiple clouds?
(Choose three.)
a. Geographical proximity to the end users
b. Centralized data sources
c. Managing regulatory restrictions per country
d. Ability to use specific cloud services from different providers
e. High availability and service failover between clouds
3. Which two of the following challenges arise from using Linux containers? (Choose
two.)
a. Managing communication between services running in containers
b. Easily updating the server RPM packages
c. Allocating resources effectively for the running containers
d. Reverting failed operating system upgrades
4. Which three of the following advantages result from using Linux containers compared
to legacy deployments? (Choose three.)
a. Portability and reusability of the applications
b. Better resource utilization
c. Simpler deployment architecture
d. Reduced time to deployment
e. Reduced need to keep software versions updated
5. Which four of the following challenges arise from using a Kubernetes multicluster
architecture? (Choose four.)
a. Managing multiple clusters effectively
b. Keeping high availability of the running applications
c. Distributing the applications efficiently among multiple clusters
d. Ensuring the security and compliance of the fleet of clusters
e. Monitoring the fleet of clusters from a single pane of glass
f. Failover when a cloud environment is unavailable
Solution
1. Which two of the following modern technologies are helping IT departments to adapt
to digital transformation? (Choose two.)
a. Cloud computing
b. Distributed computing
c. Linux containers
d. Databases
e. Private data centers
2. Which three of the following advantages are a result of using multiple clouds?
(Choose three.)
a. Geographical proximity to the end users
b. Centralized data sources
c. Managing regulatory restrictions per country
d. Ability to use specific cloud services from different providers
e. High availability and service failover between clouds
3. Which two of the following challenges arise from using Linux containers? (Choose
two.)
a. Managing communication between services running in containers
b. Easily updating the server RPM packages
c. Allocating resources effectively for the running containers
d. Reverting failed operating system upgrades
4. Which three of the following advantages result from using Linux containers compared
to legacy deployments? (Choose three.)
a. Portability and reusability of the applications
b. Better resource utilization
c. Simpler deployment architecture
d. Reduced time to deployment
e. Reduced need to keep software versions updated
5. Which four of the following challenges arise from using a Kubernetes multicluster
architecture? (Choose four.)
a. Managing multiple clusters effectively
b. Keeping high availability of the running applications
c. Distributing the applications efficiently among multiple clusters
d. Ensuring the security and compliance of the fleet of clusters
e. Monitoring the fleet of clusters from a single pane of glass
f. Failover when a cloud environment is unavailable
Objectives
After completing this section, you should be able to identify the tools provided by OpenShift
Platform Plus to facilitate end-to-end management, security, and compliance of Kubernetes fleets
on hybrid clouds.
Red Hat OpenShift Platform Plus is a collection of software tools running on Red Hat OpenShift
Container Platform. Red Hat OpenShift Platform Plus allows administrators to manage the
software life cycle across Kubernetes and RHOCP clusters. It provides end-to-end visibility and
control over multiple Kubernetes clusters from a single pane of glass. Furthermore, OpenShift
Platform Plus brings Kubernetes-native security capabilities to protect the software supply chain,
infrastructure, and workloads. Finally, OpenShift Platform Plus also provides a globally-distributed
and scalable registry.
The capabilities mentioned previously are provided by the following Red Hat products, included
in the Red Hat OpenShift Platform Plus subscription, and distributed and installed as Kubernetes
operators in OpenShift Container Platform.
By using Red Hat Advanced Cluster Management for Kubernetes, administrators can perform the
following tasks:
• Create, update, or delete RHOCP or Kubernetes clusters across multiple private and public
clouds.
• Find and modify any Kubernetes resource across the entire domain using the built-in search
engine.
• Automate and apply tasks in the clusters through the integration with Red Hat Ansible
Automation Platform.
• Create and manage alerts generated across all the managed clusters.
By using Red Hat Advanced Cluster Security for Kubernetes, administrators get the following
advantages:
• Visibility — Centralized view of your deployments, traffic in all clusters, and critical system level
events in each running container.
• Compliance — Assessment for CIS Benchmarks, payment card industry (PCI), Health Insurance
Portability and Accountability Act (HIPAA), and NIST SP 800-190. Centralized compliance
dashboard with evidence export for auditors, and detailed compliance views to pinpoint specific
clusters, nodes, or namespaces.
• Network segmentation — Visualization of allowed and active traffic, simulation of network policy
changes, network policy recommendations, and network enforcement capabilities.
• Risk profiling — Deployment ranking based on security risk calculation, and security tracking to
validate changes in configuration.
• Runtime detection and response — System-level event monitoring, automatic allowing of process
activity, prebuilt policies to detect crypto mining, privilege escalation, and various exploits, and
system-level data collection using extended Berkeley Packet Filter (eBPF) or other Linux kernel
modules.
• Integration — API and prebuilt plugins to integrate with DevOps systems, CI/CD tools, image
scanners, registries, container runtimes, security information and event management (SIEM)
solutions, and notification tools.
By using Red Hat Quay, administrators get the following advantages:
• Time machine, with the ability to roll back image changes to a previous state.
• Advanced access control management, including support for mapping teams and organizations
integrating with an existing identity infrastructure.
• TLS security encryption between Red Hat Quay and your servers.
• Integrating with CI/CD pipelines using build triggers, git hooks, and robot accounts.
The following table maps common multicluster challenges to the OpenShift Platform Plus
components that address them:

Challenge: Easily provisioning clusters on all types of clouds
Component: Red Hat Advanced Cluster Management for Kubernetes

Challenge: Locating information about objects present in a fleet of clusters
Component: Red Hat Advanced Cluster Management for Kubernetes search engine

Challenge: Managing configuration compliance and the fleet of clusters' security
Component: Red Hat Advanced Cluster Management for Kubernetes governance engine

Challenge: Monitoring the behavior of all clusters and their workloads from performance and scalability points of view
Component: Red Hat Advanced Cluster Management for Kubernetes observability engine

Challenge: Managing communications between services provided by containers, even when they are in different clusters or clouds
Component: Red Hat Advanced Cluster Management for Kubernetes Submariner services

Challenge: Security scanning of the artifacts and images that containers use
Component: Red Hat Quay for static scanning and Red Hat Advanced Cluster Security for Kubernetes for dynamic scanning

Challenge: An easy way for developers to implement CI/CD across a fleet of clusters
Component: Red Hat Advanced Cluster Management for Kubernetes applications engine

Challenge: An easy way for DevOps teams to implement GitOps across a fleet of clusters
Component: Red Hat Advanced Cluster Management for Kubernetes applications engine
References
Red Hat OpenShift Container Platform
https://www.redhat.com/en/technologies/cloud-computing/openshift/container-platform
Quiz
1. Which three of the following products are included in Red Hat OpenShift Platform Plus?
(Choose three.)
a. Red Hat Advanced Cluster Management for Kubernetes
b. Red Hat Satellite
c. Red Hat Quay
d. Red Hat Advanced Cluster Security for Kubernetes
e. Red Hat Identity Management
2. Which three of the following tasks can administrators perform using Red Hat
Advanced Cluster Management for Kubernetes? (Choose three.)
a. Create, update, or delete RHOCP clusters across multiple private and public clouds.
b. Create virtual networks to connect clusters across multiple private and public clouds.
c. Enforce compliance policies across a fleet of managed clusters.
d. Create custom ISO files to deploy Red Hat Enterprise Linux CoreOS (RHCOS).
e. Visualize cluster metrics from a centralized monitoring stack.
3. Which three of the following advantages arise from using Red Hat Quay? (Choose
three.)
a. Integrate with CI/CD pipelines using build triggers, git hooks, and robot accounts.
b. Preserve a private source code control system for infrastructure as a service (IaaS)
repositories.
c. Rollback container image changes to a previous state.
d. Receive alerts about detected vulnerabilities in the hosted container images.
e. Deploy RHOCP clusters from the Red Hat Quay dashboard.
4. Which four of the following advantages arise from using Red Hat Advanced Cluster
Security for Kubernetes? (Choose four.)
a. Assessment for CIS benchmarks
b. Simulation of changes in the network policies
c. Configuration policies at build time for CI/CD integration
d. Automatic application updates to avoid recent vulnerabilities
e. Centralized vulnerability management with correlation to running deployments
f. Automatic cluster updates to avoid recent vulnerabilities
Solution
1. Which three of the following products are included in Red Hat OpenShift Platform Plus?
(Choose three.)
a. Red Hat Advanced Cluster Management for Kubernetes
b. Red Hat Satellite
c. Red Hat Quay
d. Red Hat Advanced Cluster Security for Kubernetes
e. Red Hat Identity Management
2. Which three of the following tasks can administrators perform using Red Hat
Advanced Cluster Management for Kubernetes? (Choose three.)
a. Create, update, or delete RHOCP clusters across multiple private and public clouds.
b. Create virtual networks to connect clusters across multiple private and public clouds.
c. Enforce compliance policies across a fleet of managed clusters.
d. Create custom ISO files to deploy Red Hat Enterprise Linux CoreOS (RHCOS).
e. Visualize cluster metrics from a centralized monitoring stack.
3. Which three of the following advantages arise from using Red Hat Quay? (Choose
three.)
a. Integrate with CI/CD pipelines using build triggers, git hooks, and robot accounts.
b. Preserve a private source code control system for infrastructure as a service (IaaS)
repositories.
c. Rollback container image changes to a previous state.
d. Receive alerts about detected vulnerabilities in the hosted container images.
e. Deploy RHOCP clusters from the Red Hat Quay dashboard.
4. Which four of the following advantages arise from using Red Hat Advanced Cluster
Security for Kubernetes? (Choose four.)
a. Assessment for CIS benchmarks
b. Simulation of changes in the network policies
c. Configuration policies at build time for CI/CD integration
d. Automatic application updates to avoid recent vulnerabilities
e. Centralized vulnerability management with correlation to running deployments
f. Automatic cluster updates to avoid recent vulnerabilities
Objectives
After completing this section, you should be able to describe and deploy the RHACM operator
from the OperatorHub in an OpenShift hub cluster.
The core components of RHACM run in the hub cluster, the central controller that runs the
RHACM management APIs, the RHACM web console, and an integrated command-line tool.
The MultiClusterHub operator, part of the RHACM operator installation, is responsible for
managing, upgrading, and installing hub cluster components.
In RHACM, the local cluster is the RHOCP cluster hosting the hub cluster. The hub cluster is also a
managed cluster itself unless specified otherwise during the creation of the MultiClusterHub
object. A managed cluster is an additional cluster managed by the hub cluster.
The agent controlling the connection between clusters is the klusterlet. The hub cluster
controls managed clusters by communicating with the klusterlet running on them. The operator
creates all the necessary custom resource definitions (CRDs) for the RHACM components.
The following diagram provides a high level overview of an RHACM deployment on RHOCP.
As displayed in the previous diagram, you can interact with RHACM in three ways: through the web
console, through the RHACM management APIs, and by using the integrated command-line tool.
The web console provides visual, simplified access to the search engine, topology view, visual web
terminal, and the observability console among other features.
Apart from the core components running in the hub cluster, every managed cluster runs agents to
communicate with the hub cluster. These agents, also called add-ons, are:
• the klusterlet
• the search engine addon
• the addon for managing application deployments from RHACM
• the addon for applying policies
• the addon for the observability components
RHACM leverages capabilities from the RHOCP local cluster for cluster lifecycle management,
application lifecycle management, and applying governance, risk, and compliance policies, and it
uses the Prometheus and Grafana instances running on it for observability.
Installing RHACM
Red Hat recommends that administrators install the Advanced Cluster Management for Kubernetes
operator from the OperatorHub menu in the RHOCP web console.
Note
A list of supported hub cluster providers is available in the RHACM
documentation [https://access.redhat.com/documentation/en-us/
red_hat_advanced_cluster_management_for_kubernetes/2.4/html-single/clusters/
supported-clouds#supported-hub-cluster-providers].
The installation requires selecting an Update channel. An update channel delivers streams of
updates for the operator. Typically, update channels correspond to minor software versions. The
installation also requires choosing an Update approval strategy, either automatic or manual.
Choosing the manual update approval strategy requires human intervention to approve updates
when a new version of the operator is available in the update stream. Alternatively, the automatic
update approval strategy triggers an update when a new operator version is available.
After the operator is installed, administrators must create the MultiClusterHub custom
resource to deploy all the RHACM components. You will be required to perform these steps
elsewhere in this course.
Administrators can also install the RHACM operator using the CLI. Both installation methods can
be performed in connected and disconnected environments.
Note
For more information about all the available installation methods,
check the Red Hat Advanced Cluster Management for Kubernetes
Install Guide at https://access.redhat.com/documentation/en-us/
red_hat_advanced_cluster_management_for_kubernetes/2.4/html-single/install/
index
Note
A list of supported managed cluster providers is available in the RHACM
documentation [https://access.redhat.com/documentation/en-us/
red_hat_advanced_cluster_management_for_kubernetes/2.4/html-single/clusters/
supported-clouds#supported-managed-cluster-providers].
The local cluster (the RHOCP cluster where the hub cluster runs) is automatically imported into
RHACM, unless specified otherwise during the creation of the MultiClusterHub object.
There are two ways to import new clusters: from the web console, and by using the CLI.
5. In the case of importing a Red Hat OpenShift Dedicated cluster, you need the hub cluster
running in it, and cluster-admin permissions to use RHACM.
1. From the RHACM web console, navigate to the Infrastructure → Clusters menu.
6. Click Copy command to copy the generated command and token to the clipboard.
7. Use the oc CLI tool to log into the cluster you want to import.
The newly imported cluster appears immediately in the Infrastructure → Clusters menu. It takes
a while to see all the cluster details because RHACM is installing all the add-ons in the
managed cluster.
Note
For detailed steps and expanded information, review the Chapter
8. Importing a target managed cluster to the hub cluster section
of the Red Hat Advanced Cluster Management for Kubernetes
Clusters Guide at https://access.redhat.com/documentation/en-us/
red_hat_advanced_cluster_management_for_kubernetes/2.4/html-single/clusters/
index#importing-a-target-managed-cluster-to-the-hub-cluster
The following steps are a summary of the actions to be performed in each cluster, for a better
understanding of the process.
Note
For detailed instructions on how to import clusters into RHACM using
the CLI, visit https://access.redhat.com/documentation/en-us/
red_hat_advanced_cluster_management_for_kubernetes/2.4/html-single/clusters/
index#importing-a-managed-cluster-with-the-cli
1. Create a project with the name of the cluster to import, for instance: imported-cluster.
In the hub cluster, administrators must extract existing secrets and generate YAML files
containing:
1. The klusterlet custom resource definition, for instance in a file named klusterlet-crd.yaml.
2. The secret for importing new clusters, for instance in a file named import.yaml.
Then, in the managed cluster to import, administrators must use the YAML files generated in the
previous step to create the necessary objects:
1. oc create -f klusterlet-crd.yaml
2. oc create -f import.yaml
In the hub cluster, validate the JOINED and AVAILABLE status of the managedcluster named
imported-cluster.
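A minimal verification sketch, run while logged in to the hub cluster, assuming the cluster name imported-cluster from the example above:

[student@workstation ~]$ oc get managedcluster imported-cluster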
In the target cluster, validate the status of the pods of all the add-ons running in the
open-cluster-management-agent-addon namespace.
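A sketch of the add-on pod check, run while logged in to the target cluster:

[student@workstation ~]$ oc get pods -n open-cluster-management-agent-addon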
To remove an imported cluster by using the CLI, log in to the hub cluster, locate the
corresponding managedcluster object, and use oc to delete it. Because RHACM removes all the
add-ons and other RHACM components from the managed cluster, the detach process takes a few
minutes.
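A sketch of the detach commands, assuming a managed cluster named imported-cluster:

[student@workstation ~]$ oc get managedcluster
[student@workstation ~]$ oc delete managedcluster imported-cluster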
References
For more information, refer to the About guide in the Red Hat Advanced Cluster
Management for Kubernetes documentation at
https://access.redhat.com/documentation/en-us/
red_hat_advanced_cluster_management_for_kubernetes/2.4/html-single/about/
index
For more information, refer to the Install guide in the Red Hat Advanced Cluster
Management for Kubernetes documentation at
https://access.redhat.com/documentation/en-us/
red_hat_advanced_cluster_management_for_kubernetes/2.4/html-single/install/
index
For more information, refer to the Clusters guide in the Red Hat Advanced Cluster
Management for Kubernetes documentation at
https://access.redhat.com/documentation/en-us/
red_hat_advanced_cluster_management_for_kubernetes/2.4/html-single/clusters/
index
Guided Exercise
In this exercise, you will install the RHACM operator, deploy its components, and import a
managed cluster. Then, you will remove the imported cluster and uninstall the RHACM operator
and all its components.
Outcomes
You should be able to:
As the student user on the workstation machine, use the lab command to prepare your system for
this exercise. This command ensures that the RHACM operator is not already installed.
Note
The hub cluster must be installed in the cluster ocp4.example.com.
Instructions
1. Using OperatorHub, install the Advanced Cluster Management for Kubernetes operator
in the ocp4.example.com cluster. The web console URL is
https://console-openshift-console.apps.ocp4.example.com.
• Username: admin
• Password: redhat
1.2. Install the Advanced Cluster Management for Kubernetes operator from
OperatorHub.
Navigate to Operators → OperatorHub and type Advanced Cluster
Management in the Filter by keyword text field.
Click Advanced Cluster Management for Kubernetes, and then click Install.
In the Update Channel, ensure that the release-2.4 radio button is selected.
Select Manual as the Approval strategy. Then, click Install.
Next, you must approve the installation or updates to the RHACM operator manually.
Click Approve in the next step. The installation can take a few minutes to complete.
When the operator is installed, you see the following message:
2.1. Open a terminal and log in to the ocp4.example.com cluster as the admin user.
The APIServer URL is https://api.ocp4.example.com:6443.
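A sketch of the login command, using the credentials listed in this exercise:

[student@workstation ~]$ oc login -u admin -p redhat https://api.ocp4.example.com:6443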
2.2. Check the status of all the objects in the open-cluster-management namespace.
The output is quite long. Review the status of the pods, services, deployments, replica
sets, and stateful sets.
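One possible way to list those objects (an illustrative command; the exercise output is not reproduced here):

[student@workstation ~]$ oc get all -n open-cluster-management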
2.3. Retrieve the route to the RHACM web console, named multicloud-console.
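A sketch of retrieving the route; the open-cluster-management namespace is assumed from the previous step:

[student@workstation ~]$ oc get route multicloud-console -n open-cluster-management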
3. Test the RHACM deployment by importing the ocp4-mng.example.com cluster from the
RHACM web console.
3.1. In Firefox, open a new tab and log in to the RHACM web console at
https://multicloud-console.apps.ocp4.example.com.
When prompted, select htpasswd_provider.
Type the credentials:
• Username: admin
• Password: redhat
Import a new cluster with the following name:
• Name: managed-cluster
Leave the rest of the values unchanged and click Save import and generate code.
The Save import and generate code button now displays the Code generated
successfully message.
Click Copy command.
Note
The previous step copies a command to the clipboard, to be executed in a terminal
after logging in to the managed cluster ocp4-mng.example.com.
3.4. From the terminal, log in to the ocp4-mng.example.com cluster as the admin user.
The APIServer URL is https://api.ocp4-mng.example.com:6443.
3.5. Paste the import command into the terminal and press Enter. You can use
the Ctrl+Shift+V keys or your preferred method for pasting content from the
clipboard. The pasted command is quite long, and most of it is encoded in base64.
4.1. From the RHACM web console, detach the imported cluster.
In the Infrastructure → Clusters pane, click the Options button on the right side of
managed-cluster. Then, click Detach cluster.
Then, type the cluster name managed-cluster in the text field and click Detach.
The Status column changes to Detaching. After a few minutes, the managed-
cluster is completely detached.
Click Options to the right side of the multiclusterhub object. Then, click Delete
MultiClusterHub.
When prompted, click Delete to confirm. The Status column changes to Phase:
Uninstalling. After a few minutes, the multiclusterhub object is completely
removed and the page displays the No operands found message.
Note
It takes several minutes to remove the multiclusterhub object. Do not continue
with the next step until the multiclusterhub object is completely removed.
When prompted, click Uninstall to confirm. After a few minutes, the operator is
completely uninstalled and the page displays the No Operators found message.
Finish
On the workstation machine, use the lab command to complete this exercise. This is important
to ensure that resources from previous exercises do not impact upcoming exercises.
Lab
Outcomes
You should be able to perform the following tasks by using the oc command-line tool:
As the student user on the workstation machine, use the lab command to prepare your system for
this exercise. This command ensures that the RHACM operator is not already installed.
Note
The hub cluster must be installed in the cluster ocp4.example.com.
Instructions
1. Install the Advanced Cluster Management for Kubernetes operator on the
ocp4.example.com cluster.
2. Create the RHACM MultiClusterHub object.
3. Prepare RHACM to import a cluster using the name managed-cluster.
4. Generate the files to import the managed-cluster into RHACM.
5. Import the RHOCP cluster, ocp4-mng.example.com, available in the lab environment into
RHACM. Use managed-cluster as the name of the cluster to import.
6. Log back in to the hub cluster and verify the status of the managed-cluster.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
Do not make any other changes to the lab environment until the next guided exercise. You will
continue using this environment in upcoming exercises.
As the student user on the workstation machine, use the lab command to complete this
exercise.
Solution
Outcomes
You should be able to perform the following tasks by using the oc command-line tool:
As the student user on the workstation machine, use the lab command to prepare your system for
this exercise. This command ensures that the RHACM operator is not already installed.
Note
The hub cluster must be installed in the cluster ocp4.example.com.
Instructions
1. Install the Advanced Cluster Management for Kubernetes operator on the
ocp4.example.com cluster.
1.1. From the workstation machine, open a terminal and log in to the ocp4 cluster as the
admin user. The APIServer URL is https://api.ocp4.example.com:6443.
1.4. Create a subscription to the Advanced Cluster Management for Kubernetes operator.
First, create a file named subscription.yaml with the following contents:
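A sketch of a possible subscription.yaml; the package name advanced-cluster-management and the redhat-operators catalog source are assumptions based on the operator catalog, while the channel and approval strategy match the values used earlier in this chapter:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: acm-operator-subscription
  namespace: open-cluster-management
spec:
  sourceNamespace: openshift-marketplace
  source: redhat-operators
  channel: release-2.4
  installPlanApproval: Manual
  name: advanced-cluster-management

This sketch assumes that the open-cluster-management namespace and a matching OperatorGroup were created in the preceding steps.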
The installation is not approved until you approve the installation plan manually. Approve the
installation plan install-4k2q8 by running the following command.
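A sketch of one way to approve the plan, assuming the InstallPlan name shown above:

[student@workstation ~]$ oc patch installplan install-4k2q8 \
  -n open-cluster-management --type merge -p '{"spec":{"approved":true}}'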
2.1. From the terminal, create a file named mch.yaml containing the definition of the
MultiClusterHub object.
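A minimal sketch of the MultiClusterHub definition; an empty spec accepts the operator defaults:

apiVersion: operator.open-cluster-management.io/v1
kind: MultiClusterHub
metadata:
  name: multiclusterhub
  namespace: open-cluster-management
spec: {}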
2.2. Use the oc command to create the MultiClusterHub object from the mch.yaml
file.
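For example, assuming the file created in the previous step:

[student@workstation ~]$ oc create -f mch.yaml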
Note
It takes around 5 minutes to create all the resources of the MultiClusterHub object.
2.3. Wait until the MultiClusterHub object creates all its components. Use the watch
command to monitor the status.
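A sketch of the monitoring command, assuming that the STATUS printer column of the multiclusterhub resource reports the phase:

[student@workstation ~]$ watch oc get multiclusterhub -n open-cluster-management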
When the STATUS column displays Running, exit the watch command with Ctrl+C.
The creation of the MultiClusterHub object is complete.
3.2. Label the namespace with the cluster name to be used by RHACM.
3.5. Create a klusterlet add-on configuration file named klusterlet.yaml with the
following contents:
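A sketch of a possible KlusterletAddonConfig, assuming the cluster name managed-cluster and enabling the default add-ons; the exact contents used in this course might differ:

apiVersion: agent.open-cluster-management.io/v1
kind: KlusterletAddonConfig
metadata:
  name: managed-cluster
  namespace: managed-cluster
spec:
  clusterName: managed-cluster
  clusterNamespace: managed-cluster
  clusterLabels:
    cloud: auto-detect
    vendor: auto-detect
  applicationManager:
    enabled: true
  policyController:
    enabled: true
  searchCollector:
    enabled: true
  certPolicyController:
    enabled: true
  iamPolicyController:
    enabled: true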
3.6. Use the klusterlet.yaml file to create the klusterlet configuration in the hub
cluster.
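Step 4 generates the import files on the hub cluster. A sketch of the extraction commands, assuming that the import secret that RHACM creates in the managed-cluster namespace exposes the crds.yaml and import.yaml keys, as described in the RHACM 2.4 documentation:

[student@workstation ~]$ oc get secret managed-cluster-import -n managed-cluster \
  -o jsonpath='{.data.crds\.yaml}' | base64 --decode > klusterlet-crd.yaml
[student@workstation ~]$ oc get secret managed-cluster-import -n managed-cluster \
  -o jsonpath='{.data.import\.yaml}' | base64 --decode > import.yaml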
The import.yaml file contains the necessary secrets to import the managed-cluster into RHACM.
5. Import the RHOCP cluster, ocp4-mng.example.com, available in the lab environment into
RHACM. Use managed-cluster as the name of the cluster to import.
5.1. Use the terminal to log in to the ocp4-mng cluster as the admin user. The APIServer
URL is https://api.ocp4-mng.example.com:6443.
5.2. Use the klusterlet-crd.yaml file to create the klusterlet custom resource
definition.
5.3. Now, use the import.yaml file to create the rest of the resources necessary to import
the cluster into RHACM.
5.5. Finally, use the watch command to validate the status of the agent pods running in the
open-cluster-management-agent-addon namespace.
When all the pods are ready, press Ctrl+C to exit the watch command.
6. Log back in to the hub cluster and verify the status of the managed-cluster.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
Do not make any other changes to the lab environment until the next guided exercise. You will
continue using this environment in upcoming exercises.
As the student user on the workstation machine, use the lab command to complete this
exercise.
Summary
In this chapter, you learned:
• How the Red Hat OpenShift Platform Plus tools help to address the challenges of multicluster
architectures.
Chapter 2
Inspecting Resources from Multiple Clusters Using the RHACM Web Console
Objectives
After completing this section, you should be able to locate objects across a fleet of managed
clusters by using the search engine and enumerate RHACM features through the web console.
The following sections describe the different components of the RHACM web console.
Home
The Home pane provides information about RHACM and its use cases, along with links to the main
product features. The Home pane includes the following submenus:
Welcome
Provides information and links to access the main RHACM features.
Overview
Provides a summary and a high-level overview of the details and status of the managed
clusters.
Infrastructure
You can use the Infrastructure pane to access cluster lifecycle management, bare metal assets
management, Ansible automation configuration, and infrastructure environments management.
The Infrastructure pane includes the following submenus:
Clusters
The following list shows some of the features about clusters that you can use from the
Clusters pane:
• Manually scaling the clusters, or enabling autoscaling. The process of resizing a cluster is
different if you created the cluster by using RHACM, or if the cluster already existed and you
imported it into RHACM.
• Using RHACM features such as cluster sets, cluster pools, and discovered clusters.
Note
A list of supported hub cluster providers is available in the RHACM
documentation [https://access.redhat.com/documentation/en-us/
red_hat_advanced_cluster_management_for_kubernetes/2.4/html-single/clusters/
supported-clouds#supported-hub-cluster-providers].
Automation
In the Automation pane, you can create Ansible job templates and run Ansible jobs
automatically during different stages of a cluster lifecycle. To create Ansible templates, you
need the Ansible Automation Platform Resource Operator installed on the RHACM hub
cluster. You also need Ansible Tower 3.7.3 or a later version available.
Infrastructure environments
In RHACM, an infrastructure environment is a pool of resources that allows you to create
hosts, and to create clusters on those hosts. The main component for managing infrastructure
environments is the Central Infrastructure Management (CIM) service, which you need to
enable. In the Infrastructure environments pane, you can create infrastructure environments
and access them to add hosts.
Applications
You can use the Applications pane to create, deploy, and manage applications across the fleet of
clusters.
You can find more information in the Introducing the RHACM Application Model chapter elsewhere
in this course.
Governance
Through the Governance pane, you can create and manage policies and policy controllers, and
apply those to the fleet of clusters.
You can find more information in the Deploying the Compliance Operator Across the Fleet of
Clusters Using the RHACM Compliance Operator Policy chapter elsewhere in this course.
Credentials
In RHACM, a credential stores the access information for a cloud provider. Credentials are stored
as Kubernetes secrets. Each credential has two keys: the cloud provider access information, and
a DNS name within that cloud provider. The following is the list of types of credentials that RHACM
uses:
• Cloud provider credentials: for example, Amazon Web Services, Google Cloud Platform, or
Microsoft Azure
• Data center credentials: for example, Red Hat OpenStack Platform or bare metal resources
• Automation and other credentials: for example, for access to Red Hat Ansible Automation
Platform
• Centrally managed: a type of credential for on-premises environments
You can use the Credentials pane to create and administer credentials for all different cloud
providers and systems.
The RHACM search engine is always enabled, and you can access it through the magnifying
glass icon in the upper area of the RHACM web console.
The function of the search engine is to index and store the Kubernetes objects present in the fleet
of clusters, and to calculate their relationships with other objects.
• Collector — It is deployed in each of the clusters of the fleet. In the hub cluster the search
collector is deployed in the open-cluster-management namespace. In the rest of the
managed clusters the collector is deployed in the open-cluster-management-agent-
addon namespace, as part of the search-collector add-on. The collector indexes the
information of the Kubernetes objects and computes relationships for objects within the
managed clusters.
• Search API — Provides an API to access the data in the search index, enforcing role-based
access control (RBAC). The search API uses the RBAC of each managed cluster, so if you are
using the RHACM web console, you can only search for objects in managed clusters where you
already have authorization.
You can refine each search by adding more filters to the query. As you type new filters, the UI
displays the values that are stored for that index.
Some of the filters allow arithmetic comparators for numeric fields, such as cpu:,
replicas:, capacity:, or memory:.
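For example, a hypothetical query that combines a kind filter with a numeric comparator (the values are illustrative only):

kind:deployment replicas:>1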
You can also use the search UI to edit objects. When you click an object displayed in
a result, you see the YAML definition of the object and the Edit button.
What is indexed
The RHACM search engine uses many different filters to classify the objects present in the fleet of
clusters. Those filters are the keys of the indexing process. You can see all the filters in the search
user interface of the RHACM web console, as displayed in the next image.
You can use any of the filters to perform a search, and you can also make free-text searches.
However, you cannot refine a search by every field of an object. If a field is not part of a filter, it is
not indexed for search.
Note
The RHACM search engine always uses the label: filter to index every Kubernetes
object. It is a good practice to set labels on objects during the different phases of CI/CD,
to make searches more accurate and quicker.
References
For more information, refer to the Red Hat Advanced Cluster Management for
Kubernetes Web Console guide at
https://access.redhat.com/documentation/en-us/
red_hat_advanced_cluster_management_for_kubernetes/2.4/html-single/
web_console/index
For more information about cluster management, refer to the Red Hat Advanced
Cluster Management for Kubernetes Clusters guide at
https://access.redhat.com/documentation/en-us/
red_hat_advanced_cluster_management_for_kubernetes/2.4/html-single/clusters/
index
For more information about configuring and tuning the search engine, refer to the
Red Hat Advanced Cluster Management for Kubernetes Web Console guide at
https://access.redhat.com/documentation/en-us/
red_hat_advanced_cluster_management_for_kubernetes/2.4/html-single/
web_console/index#search-customization
For more information about credentials, refer to the Red Hat Advanced Cluster
Management for Kubernetes Credentials guide at
https://access.redhat.com/documentation/en-us/
red_hat_advanced_cluster_management_for_kubernetes/2.4/html-single/
credentials/index
Guided Exercise
Outcomes
You should be able to:
• Use the saved searches to quickly identify the failing deployments across a fleet of
clusters in the RHACM web console
• Locate Kubernetes and OpenShift objects across a fleet of clusters in the RHACM web
console
• Edit Kubernetes objects across a fleet of clusters in the RHACM web console
Warning
The preparation of this lab will change screen resolution to 1920x1080 for the
most comfortable visualization for RHACM console. When the lab is finished
resolution is reverted to the existing before the exercise.
Instructions
1. Identify any failed database server deployments across the fleet of clusters using the
RHACM web console feature of saved searches.
1.1. From workstation, use Firefox to navigate to the RHACM web console at https://
multicloud-console.apps.ocp4.example.com.
When prompted, select htpasswd_provider
Type the credentials:
• Username: admin
• Password: redhat
1.2. Click the magnifying glass icon to navigate to the RHACM search web interface.
Then click the predefined search Unhealthy pods. Observe that the syntax of the
search is based on the status of the pods.
There are 2 failing pods with names that start with mysql-<ID_OF_THE_POD>. The
2 failing pods are in the namespace called company-applications-5. There is a
namespace company-applications-5 in each managed cluster.
1.3. Click the name of one of the MySQL pods in the Pending status. Scroll down to the
status section of the YAML file to identify the cause of the failure.
The pods are pending because there is a reference to a Persistent Volume Claim
(PVC) that does not exist.
Note
The search results can show pods not related to MySQL deployments. Ignore those
results and use the ones related to the MySQL deployments.
1.4. Navigate back to the predefined search Unhealthy pods and click the related
deployment button. The deployment that contains the pods is
mysql-finance-application-2. The deployment is present in all the managed clusters.
1.5. Click the magnifying glass icon again to perform a new search to locate the correct
name of the existing PVCs.
Type namespace:company-applications-5 kind:persistentvolumeclaim
in the search field.
1.7. Click the name of the first deployment in the Deployment results. Edit the YAML file to
set dbclaim as the PVC name.
Then click the Save button to automatically trigger a redeployment that uses the correct PVC name.
1.8. Repeat the operation to fix the failing deployment in the other cluster.
1.9. Navigate back to the saved searches and review the Unhealthy pods saved search to verify that no more MySQL pods are failing.
2. Search for any database software deployed across the fleet of managed clusters.
2.1. In the search field, type namespace: and look at the different namespaces across the fleet of clusters that the search engine offers. You can see that the namespace names always start with the company-applications- prefix.
2.2. Clear the search field and type kind:deployment. Then add
kind:deploymentconfig. Notice that the RHACM console merges the filters
using the syntax kind:deployment,deploymentconfig.
Finally add the free text "mysql".
Notice that the MySQL instances are deployed as Kubernetes Deployment objects.
The label application indicates the name of the application. There are 2 different
applications using MySQL containers: globalshop-application, and finance-
application-2.
2.3. In the search field, remove the free text "mysql" and replace it with "mariadb". Do not
remove the kind:deployment,deploymentconfig filter.
Notice that the MariaDB instances are deployed as OpenShift DeploymentConfig
objects. Use the label application to locate the name. There are 2 different
applications using MariaDB containers: humanresources-application-1, and
marketing-application-2.
2.4. In the search field, remove the free text "mariadb" and replace it with "postgresql". Do not remove the kind:deployment,deploymentconfig filter.
You will see that PostgreSQL is deployed as OpenShift DeploymentConfig objects.
Use the label application to locate the name. There are five different applications using PostgreSQL containers: finance-application-1, finance-application-3, finance-application-4, humanresources-application-2, and marketing-application-1.
The most used database server is PostgreSQL, which is used in five of the nine existing applications.
3. Using the RHACM web console, locate any running container using a MySQL image with known vulnerabilities. The Red Hat Container Catalog states that the updated image for MySQL 8 is registry.redhat.io/rhel8/mysql-80:1.
Note
You can see the mapping between MySQL version and Red Hat MySQL container
image version in https://catalog.redhat.com/software/containers/rhel8/
mysql-80/5ba0ad4cdd19c70b45cbf48c
Find the applications using the container image mysql-80 with the tag 1-127.
3.2. Click one of the pods to see the reference to the container image in use in its YAML
definition. Most of the running MySQL pods use registry.redhat.io/rhel8/mysql-80 with the tag 1-152.
3.3. Navigate to the initial search page of RHACM and add "1-127" in the search field.
Do not remove the filter registry.redhat.io/rhel8/mysql-80. Inspect the
results: you will see two pods running in namespaces with the same name, but in
different clusters.
3.4. Click one of the pods to see the image in use in its YAML definition.
3.5. Click the Logs tab within the pod page to verify that the running version of MySQL is
the old one, 8.0.21, as stated in the Red Hat Container Catalog.
With this information, you can inform the developers which namespaces contain
running pods using a vulnerable MySQL container image.
Finish
On the workstation machine, use the lab command to complete this exercise. This is important
to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
After completing this section, you should be able to create different user roles in RHACM and
define an authentication model for multicluster management.
The following table contains the roles created by RHACM to define different access levels to a
cluster, all the clusters, or a group of clusters in the same cluster set.
Role Definition
RHACM roles are especially useful for assigning user or group permissions to a cluster set. You can assign the admin and view cluster set roles by using the oc adm command or by navigating to the Access management tab within the cluster set details pane in the RHACM web console.
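For example, a minimal sketch of granting the cluster set admin role to a group from the command line; the production cluster set and production-administrators group names are illustrative:

[student@workstation ~]$ oc adm policy add-cluster-role-to-group \
  open-cluster-management:managedclusterset:admin:production production-administrators

The same pattern with the open-cluster-management:managedclusterset:view:<clusterset_name> role grants read-only access to a cluster set.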
Combining cluster sets and cluster selectors, administrators can use other tools outside of
RHACM to act on a subset of clusters belonging to the same or different cluster sets.
Placement Rules
Using placement rules, administrators can define a subset of clusters belonging to different cluster sets for placing Kubernetes resources.
For example, the following Placement uses the claimSelector parameter to specify that
resources should deploy on clusters from the us-west-1 region.
apiVersion: cluster.open-cluster-management.io/v1alpha1
kind: Placement
metadata:
name: placement2
namespace: ns1
spec:
predicates:
- requiredClusterSelector:
claimSelector:
matchExpressions:
- key: region.open-cluster-management.io
operator: In
values:
- us-west-1
Note
For more information, refer to the Using ManagedClusterSets with
Placement section at https://access.redhat.com/documentation/en-us/
red_hat_advanced_cluster_management_for_kubernetes/2.3/html-single/clusters/
index#placement-managed
As noted previously, RHACM takes advantage of the Red Hat OpenShift Container Platform
authentication and authorization layers. The authentication layer of OpenShift uses the OAuth
open standard as an authorization framework. By default, OpenShift uses an internal OAuth
server. Unfortunately, neither Kubernetes nor OpenShift currently provides a way to federate all
the internal OAuth servers of each cluster.
To solve this problem, you can use an external identity manager for central management of the identities of the users that access your clusters.
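For example, a minimal sketch of an OAuth configuration that adds an LDAP identity provider to a cluster; the server URL, bind DN, and the referenced secret and config map names are illustrative assumptions:

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: Red Hat Identity Management
    type: LDAP
    mappingMethod: claim
    ldap:
      url: "ldaps://idm.example.com/cn=users,cn=accounts,dc=example,dc=com?uid"
      bindDN: "uid=admin,cn=users,cn=accounts,dc=example,dc=com"
      bindPassword:
        name: ldap-bind-password   # assumed secret in the openshift-config namespace
      ca:
        name: ldap-ca              # assumed config map with the CA certificate
      insecure: false
      attributes:
        id: ["dn"]
        preferredUsername: ["uid"]
        name: ["cn"]
        email: ["mail"]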
Note
You can find the list of supported OAuth providers in the Supported identity
providers section of the Red Hat OpenShift Container Platform Authorization
and Authentication guide at https://access.redhat.com/documentation/en-us/
openshift_container_platform/4.9/html-single/authentication_and_authorization/
index#supported-identity-providers
References
For more information, refer to the Red Hat Advanced Cluster Management for
Kubernetes Access Control guide at
https://access.redhat.com/documentation/en-us/
red_hat_advanced_cluster_management_for_kubernetes/2.3/html-single/
access_control/index
Guided Exercise
Outcomes
You should be able to:
• Create a ClusterSet
• Remove a ClusterSet
As the student user on the workstation machine, use the lab command to prepare your
system for this exercise.
The lab environment includes a Red Hat Identity Management server. This command ensures
that the necessary users and roles exist in the Red Hat Identity Management server. It also
configures an additional LDAP identity provider in the hub cluster.
The following table summarizes the users and groups available in the Red Hat Identity
Management server for this exercise.
User Group
prod-admin production-administrators
stage-admin stage-administrators
Note
RHACM is installed in the hub cluster with URL ocp4.example.com.
Note
The route to the RHACM web console is multicloud-
console.apps.ocp4.example.com.
Instructions
1. From the RHACM web console, create a cluster set named production and add the
cluster named local-cluster.
• Username: admin
• Password: redhat
1.3. Create a cluster set named production. Add the local-cluster to it.
From within the Cluster sets page, scroll down and click Create cluster set
Type production in the Cluster set name field and click Create
2. From the RHACM web console, create a cluster set named stage and add the cluster
named managed-cluster.
2.1. Navigate to Infrastructure → Clusters and click Cluster sets and repeat all the
previous steps to create a stage cluster set.
Add the cluster named managed-cluster to the stage cluster set.
From the Managed clusters page of the stage cluster set, click 11 labels to verify
that RHACM adds the label cluster.open-cluster-management.io/
clusterset=stage to the managed-cluster cluster.
Note
RHACM automatically creates default admin and view cluster roles in the
RHOCP hub cluster for each new cluster set. For instance, to grant administrator
permissions to the production cluster set, use the role open-cluster-
management:managedclusterset:admin:<clusterset_name>.
To grant view permissions to the production cluster set, use the role open-
cluster-management:managedclusterset:view:<clusterset_name>.
3.1. Open a terminal and log in to the ocp4 cluster as the admin user. The APIServer URL
is https://api.ocp4.example.com:6443.
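For example, a minimal sketch of the login command, assuming the admin password (redhat) used elsewhere in this course:

[student@workstation ~]$ oc login -u admin -p redhat https://api.ocp4.example.com:6443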
Note
The group production-administrators already exists in the Red Hat Identity
Management server in the lab environment. The lab start script triggers a group
synchronization between the hub cluster and the Red Hat Identity Management
server.
The following table summarizes the users, groups and roles at this point.
5. Log out from RHACM and close Firefox to clean any cached credentials in the browser.
Open it again and log in to the RHACM web console with the prod-admin user from the
Red Hat Identity Management provider. Verify that the prod-admin user has administrator permissions on the production cluster set, which comprises the cluster named local-cluster. Also, verify that the prod-admin user has view-only permissions on the stage cluster set.
Note
The prod-admin user is a member of the production-administrators group.
5.2. Close all the instances of Firefox. Open a new Firefox window and log in to the
RHACM web console with the user prod-admin from the Red Hat Identity
Management provider.
• Username: prod-admin
• Password: redhat
5.3. Navigate to Infrastructure → Clusters. Note the differences between the two
clusters.
The Upgrade available button is active for the local-cluster, but inactive for the
managed-cluster. This happens because the prod-admin user has administrator
privileges in the production cluster set, but only view permissions in the stage
cluster set.
A cluster set administrator can perform actions on clusters such as Edit labels,
Upgrade cluster, Select channel for updates, Search cluster resources, and Detach
cluster.
6. Log out from RHACM and close Firefox to clean any cached credentials in the browser.
Open it again and log in to the RHACM web console with the stage-admin user from
the Red Hat Identity Management provider. Verify that the stage-admin user has administrator permissions on the stage cluster set, which comprises the cluster named managed-cluster. Also, verify that the stage-admin user does not have any permissions on the production cluster set.
Note
The stage-admin user is a member of the stage-administrators group.
6.2. Close all the instances of Firefox. Open a new Firefox window and log in to the
RHACM web console with the user stage-admin from the Red Hat Identity
Management provider.
When prompted, select Red Hat Identity Management
Type the credentials:
• Username: stage-admin
• Password: redhat
6.3. Navigate to Infrastructure → Clusters. Note that the stage-admin user can only
see clusters belonging to the stage cluster set.
The Upgrade available button and all the options from the Options button are
enabled.
This is because the stage-admin user has administrator permissions on the stage
cluster set.
The local-cluster cluster, which belongs to the production cluster set, does not appear in the Managed clusters list.
7. Log out from the RHACM web console and close all the Firefox instances to remove any
cached credentials.
7.1. From the RHACM web console, click the stage-admin button at the top right corner,
and click Logout.
Close the Firefox window.
Finish
On the workstation machine, use the lab command to complete this exercise. This is important
to ensure that resources from previous exercises do not impact upcoming exercises.
Lab
Outcomes
You should be able to:
• Manage RHACM roles and groups to allow searches across a fleet of clusters.
• Locate Kubernetes and OpenShift objects across a fleet of clusters by using the correct
group of users.
• Edit Kubernetes objects across a fleet of clusters using the correct group of users.
This command ensures that RHACM is installed and running and the managed clusters
are present. It also checks that the necessary users and roles exist in the Red Hat Identity
Management server. Finally, it creates all the deployments across the fleet of clusters.
The following table summarizes the users and groups available in the Red Hat Identity
Management server for this exercise.
User Group
apac-operator apac-operators
emea-operator emea-operators
fleet-searcher fleet-searchers
The following table summarizes the managed cluster sets and the roles of the groups
available in the laboratory environment for this exercise.
- fleet-searcher -
Instructions
1. Assign the cluster-wide role view to the fleet-searchers group to allow searches across
all the objects in the fleet of clusters.
2. Locate all Kubernetes objects from the applications that have the app=finance-
application label and contain fewer than 3 replicas.
3. Fix the number of replicas of the two deployments by logging in as a user belonging to the
emea-operators group.
Evaluation
Lab scripts to be developed.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This is important to ensure that resources from previous exercises do not impact
upcoming exercises. Do not make any other changes to the lab environment until the next guided
exercise. You will continue using it in later guided exercises.
Solution
Outcomes
You should be able to:
• Manage RHACM roles and groups to allow searches across a fleet of clusters.
• Locate Kubernetes and OpenShift objects across a fleet of clusters by using the correct
group of users.
• Edit Kubernetes objects across a fleet of clusters using the correct group of users.
This command ensures that RHACM is installed and running and the managed clusters
are present. It also checks that the necessary users and roles exist in the Red Hat Identity
Management server. Finally, it creates all the deployments across the fleet of clusters.
The following table summarizes the users and groups available in the Red Hat Identity
Management server for this exercise.
User Group
apac-operator apac-operators
emea-operator emea-operators
fleet-searcher fleet-searchers
The following table summarizes the managed cluster sets and the roles of the groups
available in the laboratory environment for this exercise.
- fleet-searcher -
Instructions
1. Assign the cluster-wide role view to the fleet-searchers group to allow searches across
all the objects in the fleet of clusters.
1.1. From the workstation machine, open a terminal and log in to the ocp4 cluster
as the admin user. The password is redhat. The APIServer URL is https://
api.ocp4.example.com:6443.
open-cluster-management:managedclusterset:admin:apac   apac-operators
open-cluster-management:admin:local-cluster            system:masters,system:cluster-admins,apac-operators
open-cluster-management:view:local-cluster             system:cluster-readers,system:cluster-admins,system:masters,apac-operators
[student@workstation ~]$
1.4. Use the oc adm command to assign the view cluster role to the fleet-searchers
group.
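For example, a minimal sketch of the assignment using the oc adm policy subcommand:

[student@workstation ~]$ oc adm policy add-cluster-role-to-group view fleet-searchers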
1.5. Reverify the fleet-searchers group roles to identify the additional roles assigned
by the RHACM.
2. Locate all Kubernetes objects from the applications that have the app=finance-
application label and contain fewer than 3 replicas.
2.1. From the workstation machine, use Firefox to navigate to the RHACM web console
at https://multicloud-console.apps.ocp4.example.com.
When prompted, select Red Hat Identity Management
Type the following credentials:
• Username: fleet-searcher
• Password: redhat
2.2. Click the magnifying glass icon to navigate to the RHACM search web interface.
Then expand the related cluster to see all the information about the managed-
cluster.
2.4. Locate the cluster set that contains the managed-cluster by finding the value of the
label cluster.open-cluster-management.io/clusterset=.
The managed-cluster cluster belongs to the emea cluster set. The emea-
operators group can modify the number of replicas.
3. Fix the number of replicas of the two deployments by logging in as a user belonging to the
emea-operators group.
3.1. Log out from RHACM and close Firefox to clean any cached credentials in the browser.
Open Firefox again and log in to the RHACM web console.
When prompted, select Red Hat Identity Management
Type the credentials:
• Username: emea-operator
• Password: redhat
3.2. Click the magnifying glass icon to navigate to the RHACM search web interface.
Repeat the previous search by typing label:app=finance-application current:<3 in the search field. The same two deployments are displayed as when you searched as a member of the fleet-searchers group, but now the members of the emea-operators group can edit the objects.
3.4. Find the text replicas: 1 in the spec: section of the deployment. You can use Ctrl+F to search inside the YAML file.
Then change the value to replicas: 3 and click Save.
3.5. Repeat the preceding operation in the deployment present in the company-
applications-7 namespace.
3.6. Check that the previous search has no more results by clicking the Search link.
Evaluation
Lab scripts to be developed.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This is important to ensure that resources from previous exercises do not impact
upcoming exercises. Do not make any other changes to the lab environment until the next guided
exercise. You will continue using it in later guided exercises.
Summary
In this chapter, you learned:
• How to locate Kubernetes objects across the fleet of clusters by using the RHACM search engine.
• How the role-based access control model of RHACM works, and its default roles.
• How to define an authentication model to manage the fleet of clusters by classifying the clusters into RHACM cluster sets.
Chapter 3
Objectives
After completing this section, you should be able to deploy policies on multiple clusters by using
the command line and the RHACM Governance Dashboard.
Governance Architecture
Provides a summary of the components of the governance architecture.
Governance Dashboard
Provides a summary of the governance dashboard.
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
name: policy-namespace
annotations:
policy.open-cluster-management.io/standards: NIST SP 800-53
policy.open-cluster-management.io/categories: CM Configuration Management
policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
spec:
remediationAction: inform
disabled: false
policy-templates:
- objectDefinition:
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
name: policy-namespace-example
spec:
        remediationAction: inform # the policy-template spec.remediationAction
        # is overridden by the preceding parameter value for spec.remediationAction.
severity: low
namespaceSelector:
exclude: ["kube-*"]
include: ["default"]
object-templates:
- complianceType: musthave
objectDefinition:
kind: Namespace # must have namespace 'prod'
apiVersion: v1
metadata:
name: prod
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
name: binding-policy-namespace
placementRef:
name: placement-policy-namespace
kind: PlacementRule
apiGroup: apps.open-cluster-management.io
subjects:
- name: policy-namespace
kind: Policy
apiGroup: policy.open-cluster-management.io
---
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
name: placement-policy-namespace
spec:
clusterConditions:
- status: "True"
type: ManagedClusterConditionAvailable
clusterSelector:
matchExpressions:
- {key: environment, operator: In, values: ["dev"]}
spec.remediationAction: Optional. The values for this parameter are enforce and inform.
spec.disabled: Required. The values for this parameter are true and false.
References
For more information, refer to the Red Hat Advanced Cluster Management Policy YAML structure documentation at
https://access.redhat.com/documentation/en-us/
red_hat_advanced_cluster_management_for_kubernetes/2.3/html-single/
governance/index#policy-yaml-structure
Guided Exercise
Outcomes
You should be able to:
Instructions
1. Log in to the Hub OpenShift cluster and create a policy-governance project.
1.1. Open the terminal application on the workstation machine. Log in to the Hub
OpenShift cluster as the admin user.
2. Log in to the RHACM web console and create the certificate policy.
2.1. From the workstation machine, open Firefox and access https://multicloud-
console.apps.ocp4.example.com.
2.2. Click htpasswd_provider and log in as the admin user with the redhat password.
3. Create the certificate policy with the following parameters for openshift-console and
openshift-ingress namespace:
Name policy-certificatepolicy
Namespace policy-governance
Remediation Inform
3.1. Click the Governance tab on the left pane to navigate to the governance dashboard. Then, click the Create policy button. The Create policy page appears.
3.2. Fill in the fields as follows, leaving the rest of the fields unchanged. Do not click the Create button yet.
3.3. On the right side of the Create policy page, edit the YAML code as follows:
spec:
remediationAction: inform
disabled: false
policy-templates:
- objectDefinition:
apiVersion: policy.open-cluster-management.io/v1
kind: CertificatePolicy
metadata:
name: policy-certificatepolicy-cert-expiration
spec:
namespaceSelector:
include:
- default
- openshift-console
- openshift-ingress
exclude:
- kube-*
remediationAction: inform
severity: low
minimumDuration: 300h
3.4. In the Governance page, scroll down and click the policy-certificatepolicy
policy name from the list of policies. Click Clusters tab to review the policy details.
The status of the policy is Not compliant for the cluster named managed-
cluster.
Note
The Not compliant policy status indicates that a certificate in the openshift-ingress namespace has already expired or has less than 300 hours remaining.
4.1. Open the terminal application on the workstation machine and change to the
~/DO480/labs/policy-governance/ directory.
4.2. Log in to the managed-cluster as the admin user. The APIServer URL is
https://api.ocp4-mng.example.com:6443.
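For example, a minimal sketch of the login command, assuming the admin password (redhat) used elsewhere in this course:

[student@workstation policy-governance]$ oc login -u admin -p redhat \
  https://api.ocp4-mng.example.com:6443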
4.3. Run the renew_wildcard.sh script. The script uses an Ansible Playbook to create a
new wildcard certificate with an expiration date set to 3650 days from now.
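For example, a minimal sketch of running the script, assuming it is executable from the lab directory:

[student@workstation policy-governance]$ ./renew_wildcard.sh
...output omitted...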
[student@workstation policy-governance]$ cd ~
[student@workstation ~]$
5.1. Click the Governance → policy-certificatepolicy policy name from the list of policies.
Note
It takes around a minute before the router pods restart and use the updated certificate. Also, it takes a few seconds for the policy status to change from Not compliant to Compliant.
Finish
On the workstation machine, use the lab command to complete this exercise. This is important
to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
After completing this section, you should be able to deploy the compliance operator policy and
view compliance reports from multiple clusters by using the command line and the RHACM
Governance Dashboard.
Definition of Compliance
Compliance management is a continuous process to ensure that the IT systems comply with
different kinds of policies and requirements. The following list shows some of these kinds of
requirements:
• Industry standards
• Security standards
• Corporate policies
• Regulatory policies
• The Compliance Operator also verifies the compliance of the Kubernetes objects, as well as the
nodes running the cluster.
• The Compliance Operator scans and enforces the security policies that you provide by using OpenSCAP, a NIST-certified tool.
• OperatorGroup
• Subscription (see the sketch below)
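For reference, a minimal sketch of the OperatorGroup and Subscription objects that such a policy can create to install the Compliance Operator; the openshift-compliance namespace and the release-0.1 channel are assumptions and can differ from the policy used in this course:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: compliance-operator
  namespace: openshift-compliance
spec:
  targetNamespaces:
  - openshift-compliance
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: compliance-operator
  namespace: openshift-compliance
spec:
  channel: release-0.1
  installPlanApproval: Automatic
  name: compliance-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace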
References
What is compliance management?
https://www.redhat.com/en/topics/management/what-is-compliance-
management
For more information, refer to the Compliance operator policy chapter in the
Red Hat Advanced Cluster Management Governance Guide at
https://access.redhat.com/documentation/en-us/
red_hat_advanced_cluster_management_for_kubernetes/2.3/html-single/
governance/index#compliance-operator-policy
Guided Exercise
Outcomes
You should be able to:
Instructions
1. Log in to the Hub OpenShift cluster and create a policy-compliance project.
1.1. Open the terminal application on the workstation machine. Log in to the Hub
OpenShift cluster as the admin user.
2. Log in to the RHACM web console and install the compliance operator.
2.1. From the workstation machine, open Firefox and access https://multicloud-
console.apps.ocp4.example.com.
2.2. Click htpasswd_provider and log in as the admin user with the redhat password.
Name policy-complianceoperator
Namespace policy-compliance
Remediation Inform
3.1. On the left pane, click Governance tab to display the governance dashboard, and
then click Create policy to create the policy. The Create policy page is displayed.
3.2. Complete the page with the following details, leaving the other fields unchanged.
Then, click Create.
3.3. Click the policy-complianceoperator policy name from the list of policies to
check the policy details from the governance dashboard.
The Policy status shows Not compliant for the cluster named managed-cluster.
The Not compliant policy status indicates that the compliance operator is not
installed on the managed-cluster.
3.4. On the left pane, click the Governance tab. The dashboard shows the policy-complianceoperator policy. Click the vertical ellipsis button to the right of the policy field, and then click Enforce. Click Enforce again in the confirmation window to change the policy mode from Inform to Enforce.
4. As the admin user, verify the compliance operator installation by using the RHACM web
console.
4.1. Click the search icon on the top right of the RHACM web console.
4.2. Type the following parameters in the search bar to search the compliance-operator
subscription.
kind subscription
name compliance-operator
Note
APAC location has one cluster, named managed-cluster.
5. Clone the policy collection repository. The policy-collection repository has policy examples for Open Cluster Management.
5.1. Open a terminal application on the workstation machine. Run the following
command to clone the policy-collection repository.
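For example, a minimal sketch of the clone command, assuming the upstream repository location; the classroom might provide a local mirror instead:

[student@workstation ~]$ git clone https://github.com/open-cluster-management/policy-collection.git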
remediationAction enforce
key location
values APAC
Note
The policy-collection repository has both stable and community policy examples.
This course uses the E8 scan policy that is available under the stable → CM-
Configuration-Management directory.
...output omitted...
spec:
clusterConditions:
- status: "True"
type: ManagedClusterConditionAvailable
clusterSelector:
matchExpressions:
- {key: location, operator: In, values: ["APAC"]}
6.2. Log in to Hub OpenShift cluster as the admin user, and then switch to the policy-
compliance project.
...output omitted...
[student@workstation CM-Configuration-Management]$ oc project policy-compliance
...output omitted...
[student@workstation CM-Configuration-Management]$ cd ~
[student@workstation ~]$
7.1. On the left pane, click Governance tab. The dashboard shows the policy-e8-scan
policy. Click policy-e8-scan and then select the Clusters tab to check the status.
Note
It takes 2-3 minutes for Compliance-suite-e8-result to show Not
compliant status.
7.3. Click View details and review the details. On the details page, you can see the name
of non-compliant objects.
7.4. To list the compliance check results, click the search icon on the top right side of
the RHACM web console and type kind:configurationpolicy, and then click
search.
7.5. Click compliance-suite-e8-result and check the status. It will show a list of
NonCompliant objects.
Finish
On the workstation machine, use the lab command to complete this exercise. This is important
to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
After completing this section, you should be able to deploy the gatekeeper policy and gatekeeper
constraints to multiple clusters by using the command line and the RHACM Governance
Dashboard.
OPA Gatekeeper
Provides a summary of OPA Gatekeeper
• Audit template
• Admission template
References
For more information, refer to the Red Hat Advanced Cluster Management Policy YAML structure documentation at
https://access.redhat.com/documentation/en-us/
red_hat_advanced_cluster_management_for_kubernetes/2.3/html-single/
governance/index#policy-yaml-structure
For more information, refer to the Managing gatekeeper operator policies section at
https://access.redhat.com/documentation/en-us/
red_hat_advanced_cluster_management_for_kubernetes/2.3/html-single/
governance/index#managing-gatekeeper-operator-policies
Guided Exercise
Instructions
1. Log in to the Hub OpenShift cluster and create a policy-gatekeeper project.
1.1. Open the terminal application on the workstation machine. Log in to the Hub
OpenShift cluster as the admin user.
2. Log in to the RHACM web console and install the gatekeeper operator.
2.1. From the workstation machine, open Firefox and access https://multicloud-
console.apps.ocp4.example.com.
2.2. Click htpasswd_provider and log in as the admin user with the redhat password.
Name policy-gatekeeperoperator
Namespace policy-gatekeeper
Remediation Enforce
3.1. On the left pane, click the Governance tab to display the governance dashboard, and
then click Create policy to create the policy. The Create policy page is displayed.
3.2. Complete the page with the following details, leaving the other fields unchanged.
Then, click Create.
3.3. On the Governance page, scroll down and click the policy-
gatekeeperoperator policy name from the list of policies. Click the Status tab to
review the policy details.
Note
It takes 2-3 minutes to install the gatekeeper operator. The policy status changes from Not compliant to Compliant after the gatekeeper installation.
4.1. Open the terminal application on the workstation machine and change to the
~/DO480/labs/policy-gatekeeper/ directory.
excludedNamespaces app-stage
key environment
values stage
Note
The policy-collection repository has both stable and community policy examples.
This policy example is from the community → CM-Configuration-Management
directory.
...output omitted...
- openshift-user-workload-monitoring
- openshift-vsphere-infra
- app-stage
processes:
- '*'
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
...output omitted...
...output omitted...
clusterSelector:
matchExpressions:
- {key: environment, operator: In, values: ["stage"]}
4.3. Deploy the gatekeeper policy to exclude the app-stage namespace from all constraints in the stage environment.
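For example, a minimal sketch of deploying the policy, assuming the edited file keeps its policy-collection name and is created in the policy-gatekeeper namespace:

[student@workstation policy-gatekeeper]$ oc create -f \
  policy-gatekeeper-config-exclude-namespaces.yaml -n policy-gatekeeper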
4.4. On the Governance page, scroll down and click the policy-gatekeeper-
config-exclude-namespaces policy name from the list of policies.
Note
The stage environment has one cluster, named local-cluster. The production
environment has one cluster, named managed-cluster.
Note
The policy-collection repository has both stable and community policy examples.
This policy example is from the community → CM-Configuration-Management
directory.
5.2. Deploy the gatekeeper policy to prevent containers from using images with the latest tag in both the stage and production environments.
5.3. On the Governance page, scroll down and click the policy-gatekeeper-
containerimagelatest policy name from the list of policies.
The policy status is Compliant for both the stage and production environments.
Note
It takes 2-3 minutes for the policy to check all object templates. The policy status changes from Not compliant to Compliant after that time.
6. Deploy the stage-hello application to the stage environment by using the ~/DO480/
labs/policy-gatekeeper/stage-hello.yaml file.
...output omitted...
metadata:
name: stage-hello
labels:
app: stage-hello
name: stage-hello
namespace: app-stage
spec:
replicas: 1
selector:
matchLabels:
app: stage-hello
name: stage-hello
template:
metadata:
labels:
app: stage-hello
name: stage-hello
spec:
containers:
- name: stage-hello
image: quay.io/redhattraining/do480-hello-app:latest
---
apiVersion: v1
kind: Service
metadata:
labels:
app: stage-hello
name: stage-hello
name: stage-hello
namespace: app-stage
spec:
ports:
- port: 8080
selector:
name: stage-hello
---
kind: Route
apiVersion: route.openshift.io/v1
metadata:
name: stage-hello
labels:
app: stage-hello
name: stage-hello
namespace: app-stage
spec:
host: stage-hello.apps.ocp4.example.com
subdomain: ''
to:
kind: Service
name: stage-hello
weight: 100
port:
targetPort: 8080
wildcardPolicy: None
7.1. On the Governance page, scroll down and click the policy-gatekeeper-
containerimagelatest policy name from the list of policies. Click the Status tab
to review the policy details.
The policy status is Compliant for all the clusters.
Note
The policy-gatekeeper-containerimagelatest policy status is Compliant
despite using the latest tag because the policy-gatekeeper-config-
exclude-namespaces policy excludes the app-stage namespace for all
constraints in the stage environment.
...output omitted...
metadata:
name: prod-hello
labels:
app: prod-hello
name: prod-hello
namespace: app-prod
spec:
replicas: 1
selector:
matchLabels:
app: prod-hello
name: prod-hello
template:
metadata:
labels:
app: prod-hello
name: prod-hello
spec:
containers:
- name: prod-hello
image: quay.io/redhattraining/do480-hello-app:latest
---
apiVersion: v1
kind: Service
metadata:
labels:
app: prod-hello
name: prod-hello
name: prod-hello
namespace: app-prod
spec:
ports:
- port: 8080
selector:
name: prod-hello
---
kind: Route
apiVersion: route.openshift.io/v1
metadata:
name: prod-hello
labels:
app: prod-hello
name: prod-hello
namespace: app-prod
spec:
host: prod-hello.apps.ocp4-mng.example.com
subdomain: ''
to:
kind: Service
name: prod-hello
weight: 100
port:
targetPort: 8080
wildcardPolicy: None
You cannot deploy the application because the deployment uses the latest image tag.
9.1. On the Governance page, scroll down and click the policy-gatekeeper-
containerimagelatest policy name from the list of policies. Click the Status tab
to review the policy details.
9.2. Click View details and review the details. On the details page, scroll down and click
View yaml for event objects.
9.3. The YAML file displays an error message. The error message indicates that
Deployment/prod-hello is using the latest tag.
10. Replace the latest image tag with the v1.0 version. Deploy and test the hello-prod
application.
10.1. Log in to the managed-cluster as the admin user, and then remove the app-prod
project.
...output omitted...
[student@workstation policy-gatekeeper]$ oc delete project app-prod
project.project.openshift.io "app-prod" deleted
10.2. Edit the prod-hello YAML file to use the v1.0 image tag.
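After the edit, the containers section of the deployment should reference the versioned tag, along these lines:

      containers:
      - name: prod-hello
        image: quay.io/redhattraining/do480-hello-app:v1.0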
[student@workstation policy-gatekeeper]$ cd ~
[student@workstation ~]$
Finish
On the workstation machine, use the lab command to complete this exercise. This is important
to ensure that resources from previous exercises do not impact upcoming exercises.
Lab
Instructions
1. Log in to the Hub OpenShift cluster and create a policy-review project.
2. Log in to the RHACM web console and deploy a namespace policy in the policy-
review namespace to ensure that the test namespace does not exist in the production
environment.
3. Deploy an IAM policy in the policy-review namespace to limit the number of cluster
administrators to 2 for all clusters.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
On the workstation machine, use the lab command to complete this exercise. This is important
to ensure that resources from previous exercises do not impact upcoming exercises.
Solution
Instructions
1. Log in to the Hub OpenShift cluster and create a policy-review project.
1.1. Open the terminal application on the workstation machine. Log in to the Hub
OpenShift cluster as the admin user.
2. Log in to the RHACM web console and deploy a namespace policy in the policy-
review namespace to ensure that the test namespace does not exist in the production
environment.
2.1. From the workstation machine, open Firefox and access https://multicloud-
console.apps.ocp4.example.com.
2.2. Click htpasswd_provider and log in as the admin user with the redhat password.
2.3. On the left pane, click the Governance tab to display the Governance dashboard, and
then click Create policy to create the policy. The Create policy page is displayed.
2.4. Complete the page with the following details, leaving the other fields unchanged. Do
not click Create button yet.
Name policy-namespace
Namespace policy-review
Remediation Enforce
2.5. Use the YAML editor on the right of the Create policy page to make the following edits
to the YAML file:
spec:
remediationAction: enforce
disabled: false
policy-templates:
- objectDefinition:
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
name: policy-namespace-prod-ns
spec:
remediationAction: inform
severity: low
namespaceSelector:
exclude:
- kube-*
include:
- default
object-templates:
- complianceType: mustnothave
objectDefinition:
kind: Namespace
apiVersion: v1
metadata:
name: test
Click Create.
2.6. On the Governance page, scroll down and click the policy-namespace policy name
from the list of policies. Click the Clusters tab to review the policy details.
3. Deploy an IAM policy in the policy-review namespace to limit the number of cluster
administrators to 2 for all clusters.
3.1. On the left pane, click the Governance tab to display the Governance dashboard, and
then click Create policy to create the policy. The Create policy page is displayed.
3.2. Complete the page with the following details, leaving the other fields unchanged. Do
not click Create button yet.
Name policy-iampolicy
Namespace policy-review
3.3. Use the YAML editor on the right of the Create policy page to make the following edits
to the YAML file:
spec:
remediationAction: inform
disabled: false
policy-templates:
- objectDefinition:
apiVersion: policy.open-cluster-management.io/v1
kind: IamPolicy
metadata:
name: policy-iampolicy-limit-clusteradmin
spec:
severity: medium
namespaceSelector:
include:
- '*'
exclude:
- kube-*
- openshift-*
remediationAction: inform
maxClusterRoleBindingUsers: 2
3.4. On the Governance page, scroll down and click the policy-iampolicy policy name
from the list of policies. Click the Clusters tab to review the policy details.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
On the workstation machine, use the lab command to complete this exercise. This is important
to ensure that resources from previous exercises do not impact upcoming exercises.
Summary
Note
This section is under development.
Chapter 4
Objectives
After completing this section, you should be able to describe the architecture of the observability
stack and the benefits of observability in a multicluster environment.
In production environments, monitoring, processing, and storing metrics from the fleet of clusters
is crucial to anticipating issues, improving the performance of the clusters, and conducting post-
mortem analysis.
The Grafana and Alertmanager instances deployed in the hub cluster receive, process, and display
alerts and metrics from all the managed clusters with the observability addon activated.
You can add custom dashboards to Grafana. Also, as explained elsewhere in this chapter, you
can create custom recording rules and alerting rules. The Alertmanager instance manages alert
forwarding to third-party applications.
The RHOCP monitoring stack provides out-of-the-box monitoring for core platform components.
Starting with RHOCP version 4.6, you can also monitor your own projects.
The RHACM observability service is designed to provide cluster-level metrics, receiving those
from the fleet of clusters. You can use the Grafana instance of the RHACM observability service to
monitor cluster-level metrics from across the fleet of clusters.
• Amazon S3
• Red Hat Ceph
• Google Cloud Storage
• Azure Storage
• Red Hat OpenShift Container Storage
• Red Hat OpenShift on IBM (ROKS)
Note
For detailed instructions about enabling the observability service, refer to the
Enable observability service section at https://access.redhat.com/documentation/
en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html-single/
observability/index#enable-observability
References
For more information, refer to the Red Hat Advanced Cluster Management for
Kubernetes Observability Guide at
https://access.redhat.com/documentation/en-us/
red_hat_advanced_cluster_management_for_kubernetes/2.3/html-single/
observability/index
Guided Exercise
Outcomes
You should be able to:
This command ensures that RHACM is deployed in the hub cluster, and the managed-
cluster exists in RHACM.
Instructions
1. Enable the observability service from the terminal.
1.1. Open a terminal and log in to the ocp4 cluster as the admin user. The APIServer URL
is https://api.ocp4.example.com:6443.
1.5. Use your text editor to create a YAML file with the definition of a new object bucket
claim (OBC).
# BUCKET_SUBREGION
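For example, a minimal sketch of the OBC definition, reusing the values shown in this chapter's lab solution; the apiVersion and kind are the standard ObjectBucketClaim values:

apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: thanos-bc
  namespace: open-cluster-management-observability
spec:
  storageClassName: openshift-storage.noobaa.io
  generateBucketName: observability-bucket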
1.7. Create a secret containing the thanos.yaml configuration file with the information from the previous step.
1.8. Create the multicluster observability object to enable the observability service.
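For example, a minimal sketch of a MultiClusterObservability object, assuming a secret named thanos-object-storage holds the object storage configuration; field names follow the RHACM 2.4 documentation and should be verified in your environment:

apiVersion: observability.open-cluster-management.io/v1beta2
kind: MultiClusterObservability
metadata:
  name: observability
spec:
  observabilityAddonSpec: {}
  storageConfig:
    metricObjectStorage:
      name: thanos-object-storage
      key: thanos.yaml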
At this point, the observability components deployment starts. You can check the
progress of the installation using the following command:
...output omitted...
Status:
Conditions:
Last Transition Time: 2021-11-09T13:34:59Z
Message: Installation is in progress
Reason: Installing
Status: True
Type: Installing
...output omitted...
2. Verify the deployment of the observability components in all the managed clusters.
2.1. Verify the completion of the installation in the ocp4 cluster, named local-cluster
in RHACM.
After a few minutes installing, the status of the observability components transitions
to Ready.
...output omitted...
Status:
Conditions:
Last Transition Time: 2021-11-09T13:34:59Z
Message: Installation is in progress
Reason: Installing
Status: True
Type: Installing
Last Transition Time: 2021-11-09T13:36:32Z
Message: Observability components are deployed and running
Reason: Ready
Status: True
Type: Ready
Events: <none>
2.2. Verify the status of the components of the observability addon in the cluster
named managed-cluster in RHACM.
First, log in to the ocp4-mng cluster.
Note
If the status of the pods is not Active, look for error messages in the logs of the
pods, or the events in the namespace.
3. Navigate to the Grafana dashboard available in the Home → Overview menu of the
RHACM console. Analyze the capacity and utilization data provided about CPU and
memory of the clusters.
Note
After enabling the observability components, RHACM shows direct access to the
Grafana dashboard from within the RHACM console.
• Username: admin
• Password: redhat
Note
Reload the Grafana dashboard if the panels show any error message.
Finally, you can see the Grafana dashboard with the default configuration.
The Grafana dashboard of RHACM contains information of all the clusters in the fleet.
The observability addon, deployed in all the managed clusters, sends information to
the central Grafana instance deployed in the hub cluster.
3.2. From within the Grafana dashboard, scroll down to find the Capacity /
Utilization table.
Grafana gathers information about CPU and memory utilization across all the clusters in the fleet. This way, a system administrator has a single pane of glass to monitor all the clusters from the same dashboard.
4. Disable the metrics collection from the managed-cluster. This is useful to avoid
overloading the observability service with metrics from clusters used for stage
environments.
4.1. Open a terminal and log in to the ocp4 cluster as the admin user. The APIServer URL
is https://api.ocp4.example.com:6443.
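For example, a minimal sketch that uses the observability=disabled label documented for turning off the observability addon on a managed cluster:

[student@workstation ~]$ oc label managedcluster managed-cluster observability=disabled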
After applying this label, RHACM removes the objects in the open-cluster-
management-addon-observability namespace.
4.3. Verify that the changes in the managed-cluster are applied. Log in to the ocp4-
mng cluster as the admin user. The APIServer URL is https://api.ocp4-
mng.example.com:6443.
The expected result is that no resources are found in the open-cluster-management-addon-observability namespace.
Note
The Grafana dashboard can show cached information about the managed-cluster after you disable the observability addon. After some time, all the information about the managed-cluster disappears.
Finish
On the workstation machine, use the lab command to complete this exercise. This is important
to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
After completing this section, you should be able to customize the RHACM observability stack.
Note
For more information, visit the Prometheus recording rules documentation at
https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/
Note
For more information, visit the Prometheus alerting rules documentation at https://
prometheus.io/docs/prometheus/latest/configuration/alerting_rules/
Note
For more information, review the Adding custom metrics
section at https://access.redhat.com/documentation/en-us/
red_hat_advanced_cluster_management_for_kubernetes/2.3/html-single/
observability/index#adding-custom-metrics
To facilitate this centralized alerting, you can configure extra alert receivers as explained in
the Prometheus Alertmanager documentation at https://prometheus.io/docs/alerting/latest/
configuration/.
References
For more information, refer to the Red Hat Advanced Cluster Management for
Kubernetes Observability Guide at
https://access.redhat.com/documentation/en-us/
red_hat_advanced_cluster_management_for_kubernetes/2.3/html-single/
observability/index
Guided Exercise
Outcomes
You should be able to:
As the student user on the workstation machine, use the lab command to prepare your
system for this exercise.
This command ensures that RHACM is deployed in the hub cluster, the managed-cluster
exists in RHACM, and the observability service is enabled.
Instructions
1. From the terminal, increase the number of replicas of the observability metrics receiver pods to six.
1.1. Open a terminal and log in to the ocp4 cluster as the admin user. The APIServer URL
is https://api.ocp4.example.com:6443.
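For example, a minimal sketch of editing the resource, assuming the advanced configuration section of the MultiClusterObservability resource; the exact field path can differ between RHACM versions:

[student@workstation ~]$ oc edit multiclusterobservability observability
...output omitted...
spec:
  advanced:
    receive:
      replicas: 6
...output omitted...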
Note
After exiting the editor, the following message displays:
multiclusterobservability.observability.open-cluster-management.io/
observability edited
1.3. Use the watch command to verify that there are six pods of the observability-
thanos-receive-default statefulset.
Notice that three of the pods were created recently, and the other three were already running.
Note
Increasing the replicas of the receiver pods is necessary in production environments
when the number of managed clusters increases.
2. Create a Prometheus alerting rule so that cluster administrators receive alerts when the
CPU requests of any cluster are above 70% of the available compute capacity.
2.1. From the terminal, create a file named custom_rule.yaml containing the
instructions in PromQL.
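For example, a minimal sketch that follows the thanos-ruler-custom-rules ConfigMap format from the RHACM observability documentation; the cluster:cpu_requested:ratio metric is an assumed recording rule name and might differ in your environment:

apiVersion: v1
kind: ConfigMap
metadata:
  name: thanos-ruler-custom-rules
  namespace: open-cluster-management-observability
data:
  custom_rules.yaml: |
    groups:
      - name: cluster-health
        rules:
          - alert: ClusterCPUReq-70
            annotations:
              summary: Notify when the CPU requests of a cluster exceed 70% of the available capacity
            expr: |
              cluster:cpu_requested:ratio > 0.7  # assumed recording rule name
            for: 5s
            labels:
              cluster: "{{ $labels.cluster }}"
              severity: warning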
2.3. Verify that the new configuration applies. You can monitor the restart of the pods in
the open-cluster-management-observability namespace with the watch
command.
3. Verify that the custom alert is firing for the cluster named local-cluster.
• Username: admin
• Password: redhat
Scroll down to the Table field and verify that the alert with name
ClusterCPUReq-70 is firing in the cluster named local-cluster.
4.1. Open a terminal and log in to the ocp4 cluster as the admin user. The APIServer URL
is https://api.ocp4.example.com:6443.
Note
The deletion of the open-cluster-management-observability namespace
can take a while.
Finish
On the workstation machine, use the lab command to complete this exercise. This is important
to ensure that resources from previous exercises do not impact upcoming exercises.
Lab
Outcomes
You should be able to:
This command ensures that RHACM is deployed in the hub cluster, the managed-cluster
exists in RHACM, and the observability service is not enabled.
Instructions
1. Enable the observability service from the terminal.
2. Create a Prometheus alerting rule so that cluster administrators receive alerts when the
memory usage of a cluster exceeds 50%.
3. From Grafana, verify that the custom alert is firing for the clusters local-cluster and
managed-cluster.
4. Disable the RHACM observability service.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Solution
Outcomes
You should be able to:
This command ensures that RHACM is deployed in the hub cluster, the managed-cluster
exists in RHACM, and the observability service is not enabled.
Instructions
1. Enable the observability service from the terminal.
1.1. Open a terminal and log in to the ocp4 cluster as the admin user. The APIServer URL
is https://api.ocp4.example.com:6443.
1.3. Extract the pull secret from the openshift-config namespace. Store it in a variable.
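The commands for this and the following step are not reproduced here. A sketch that follows the RHACM documentation, in which the target namespace and secret names are assumptions, might look like the following:

[student@workstation ~]$ DOCKER_CONFIG_JSON=$(oc extract secret/pull-secret -n openshift-config --to=-)
[student@workstation ~]$ oc create namespace open-cluster-management-observability
[student@workstation ~]$ oc create secret generic multiclusterhub-operator-pull-secret \
    -n open-cluster-management-observability \
    --from-literal=.dockerconfigjson="$DOCKER_CONFIG_JSON" \
    --type=kubernetes.io/dockerconfigjson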
1.5. Use your text editor to create a YAML file with a new object bucket claim (OBC)
definition.
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: thanos-bc
  namespace: open-cluster-management-observability
spec:
  storageClassName: openshift-storage.noobaa.io
  generateBucketName: observability-bucket
  # BUCKET_SUBREGION
1.7. Create a secret containing the thanos.yaml configuration file with the information
from the previous step.
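The secret definition is not reproduced here. A minimal sketch, where the angle-bracket values are placeholders for the bucket details gathered from the object bucket claim, might look like the following:

apiVersion: v1
kind: Secret
metadata:
  name: thanos-object-storage
  namespace: open-cluster-management-observability
type: Opaque
stringData:
  thanos.yaml: |
    type: s3
    config:
      bucket: <bucket name from the OBC>
      endpoint: <s3 endpoint>
      insecure: true
      access_key: <access key from the OBC secret>
      secret_key: <secret key from the OBC secret>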
1.8. Create the multicluster observability object to enable the observability service.
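The object definition is not reproduced here. A minimal sketch, assuming the v1beta2 API and the thanos-object-storage secret from the previous step, might look like the following:

apiVersion: observability.open-cluster-management.io/v1beta2
kind: MultiClusterObservability
metadata:
  name: observability
spec:
  observabilityAddonSpec: {}
  storageConfig:
    metricObjectStorage:
      name: thanos-object-storage
      key: thanos.yaml

You might then create it with a command such as oc apply -f multiclusterobservability.yaml.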
The observability components deployment begins. Verify the status of the installation
by using the following command:
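The command itself is not shown in this excerpt; given the Describe-style output that follows, a likely candidate is:

[student@workstation ~]$ oc describe multiclusterobservability observability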
...output omitted...
Status:
Conditions:
Last Transition Time: 2021-11-09T13:34:59Z
Message: Installation is in progress
Reason: Installing
Status: True
Type: Installing
...output omitted...
When the observability components are ready, the output of the previous command
displays the following message:
Name: observability
...output omitted...
Status:
Conditions:
Last Transition Time: 2021-11-25T08:29:07Z
Message: Observability components are deployed and running
Reason: Ready
Status: True
Type: Ready
...output omitted...
2. Create a Prometheus alerting rule so that cluster administrators receive alerts when the
memory usage of a cluster exceeds 50%.
2.1. From the terminal, create a file named custom_rule.yaml containing the ConfigMap object with the alerting rule written in the Prometheus Query Language (PromQL).
expr: |
cluster:memory_usage:ratio > 0.5
for: 5s
labels:
cluster: "{{ $labels.cluster }}"
severity: warning
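Only a fragment of the file is shown above. For reference, a complete ConfigMap might look like the following; the ConfigMap name, data key, group name, alert name, and annotation are assumptions based on the RHACM documentation, while the expr, for, and labels fields come from this lab.

apiVersion: v1
kind: ConfigMap
metadata:
  name: thanos-ruler-custom-rules
  namespace: open-cluster-management-observability
data:
  custom_rules.yaml: |
    groups:
    - name: cluster-health
      rules:
      - alert: ClusterMemoryUsage-50
        annotations:
          summary: Memory usage of a cluster exceeds 50%
        expr: |
          cluster:memory_usage:ratio > 0.5
        for: 5s
        labels:
          cluster: "{{ $labels.cluster }}"
          severity: warning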
2.3. Verify that the new configuration applies. You can monitor the restart of the pods in
the open-cluster-management-observability namespace with the watch
command.
3. From Grafana, verify that the custom alert is firing for the clusters local-cluster and
managed-cluster.
• Username: admin
• Password: redhat
4.1. Open a terminal and log in to the ocp4 cluster as the admin user. The APIServer URL
is https://api.ocp4.example.com:6443.
Note
Deleting the open-cluster-management-observability namespace can take
a while.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Summary
Note
This section is under development.
Chapter 5. Managing Applications Across Multiple Clusters with ACM
Objectives
After completing this section, you should be able to describe and define GitOps concepts and the
resources of the RHACM application model.
GitOps Principles
Implementing and maintaining GitOps principles is essential to your multicluster environment. GitOps consists of the following principles:
Systems are required to use declarative data to express the desired state of the system. The data must be in a format that can be written and read by both machines and humans alike.
The Git version control system acts as the single source of truth, and all declarative states are stored in Git. In Git, the system infrastructure changes are available chronologically, which assists users with troubleshooting, auditing, and rollbacks.
GitOps Components
• Infrastructure as code
Infrastructure as code is the practice of storing all configurations for managing and provisioning
infrastructure in a repository. This ensures that the same environment is provisioned every
time. Similar to any software source code, all files under version control can be modified and
then deployed. IaC is used for automating infrastructure provisioning and thereby removes the
requirement for developers to manually provision and manage resources such as storage and
operating systems.
• Merge request
Merge requests are used as the catalyst for all infrastructure updates. Merge requests enable the project developers to collaborate and review IaC prior to deployment to production systems.
• Continuous integration
Continuous Integration (CI) is the discipline of integrating changes in the main branch as
often as possible. Developers use short-lived branches or small change sets and integrate them
frequently into the main branch, ideally several times a day. This speeds up the integration, makes
code review easier, and reduces potential problems.
• A version control system, which stores your source code in a repository. The repository uses
a main or trunk branch to keep track of the main development line. Feature development occurs
in separate branches, which after review and validation, are integrated into the main branch. The
most popular version control system is Git.
• A CI automation service. This is usually a server or a daemon that watches the repository for changes. If the repository changes, the CI automation service checks out the new code and verifies that everything is correct. RHACM accomplishes this by deploying the subscription operator to monitor Git for repository changes.
GitOps Workflow
GitOps workflows use Git as the version control system that contains the configurations for managing and provisioning infrastructure. GitOps and IaC use a declarative approach, as opposed to an imperative approach, to configuration management. A declarative model in programming focuses on writing code to describe what an application should do, while an imperative model focuses on how it should do it.
2. A branch is created from the main repository that contains the updated resource file.
3. After the branch is updated, the administrator pushes the change to Git and creates a pull request.
6. The updated branch is then deployed via the subscription operator in RHACM.
Subscriptions
The subscription (subscription.apps.open-cluster-management.io) enables clusters to subscribe to a source repository, also known as a channel. The subscription type can be a Git repository, a Helm release registry, or an object storage repository. Both the hub and managed clusters can use subscriptions. Subscriptions are associated with a channel and identify new or updated resource templates.
The subscription operator pulls from the source repository and deploys to the targeted managed
clusters without checking the hub cluster first. The subscription operator can then monitor for new
or updated resources. For example, the following displays a typical subscription YAML file.
apiVersion: apps.open-cluster-management.io/v1
kind: Subscription
metadata:
name: web-app-cluster1
namespace: web-app
labels:
deployment: hello
annotations:
apps.open-cluster-management.io/github-branch: main
apps.open-cluster-management.io/github-path: mysql
spec:
channel: web-app/web-app-channel
placement:
placementRef:
kind: PlacementRule
name: cluster1
The Subscription kind is a required value and indicates the type of resource.
The namespace is a required value and sets the namespace resource to use for the subscription.
The channel is an optional value, in the Namespace/Name format, that defines the channel for the subscription.
The PlacementRule kind is a required value and indicates that the referenced resource is a placement rule.
The name is a required value that indicates the name of the placement rule that selects the target clusters.
Channels
The application channels (channel.apps.open-cluster-management.io) are source definitions of repositories that the cluster can access. The channels can be a GitHub repository, Helm charts, object stores, and deployable resource namespaces on the hub cluster. With the exception of GitHub, all channel types require individual namespaces. The following example illustrates a typical channel configuration YAML file:
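The original example is not reproduced in this excerpt. The following sketch, modeled on the subscription shown previously and on the channel used later in this chapter, illustrates the structure:

apiVersion: apps.open-cluster-management.io/v1
kind: Channel
metadata:
  name: web-app-channel
  namespace: web-app
spec:
  pathname: 'https://github.com/redhattraining/do480-apps'
  type: GitHub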
The pathname is a required value that specifies the URL to the repository that contains the
IaC resource files.
The type is a required value that sets the type of repository selected to contain the IaC resource files.
Placement Rules
Placement rules (placementrule.apps.open-cluster-management.io) are used to ensure that the application subscription is placed on the correct cluster in your infrastructure. Placement rules assist in managing a multicluster deployment and can be shared across multiple subscriptions. For example, the following illustrates a typical placement rule YAML file.
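The original example is not reproduced in this excerpt. The following sketch, modeled on the placement rule used later in this chapter, illustrates the structure; the resource names and label are illustrative:

apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  labels:
    app: web-app
  name: web-app-placement
  namespace: web-app
spec:
  clusterSelector:
    matchLabels:
      environment: production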
The matchLabel is an optional value that indicates the labels that must exist on the target
clusters.
Applications
Applications (application.app.k8s.io) in Red Hat Advanced Cluster Management for Kubernetes are used for grouping the Kubernetes resources that build an application. For example, the following illustrates a typical application YAML file.
apiVersion: app.k8s.io/v1beta1
kind: Application
metadata:
  name: web-app
  namespace: web-app
spec:
  selector:
    matchLabels:
      app: web-app
The namespace resource identifies the namespace to use for the application.
The matchLabel resource is optional and specifies the label that must exist on the target
clusters.
References
For more information, refer to the Red Hat Advanced Cluster Management for
Kubernetes Applications guide at
https://access.redhat.com/documentation/en-us/
red_hat_advanced_cluster_management_for_kubernetes/2.4
Quiz
2. Which of the following statements describes best practices for maintaining GitOps
resource YAML files?
a. Resource YAML files are stored on multiple systems, similar to blockchains.
b. Resource YAML files are stored in a multicluster Postgres database.
c. Resource YAML files are stored on the local storage of infrastructure nodes.
d. Resource YAML files are stored in a repository such as Git.
3. Which of the following sentences list the three major GitOps components?
a. Infrastructure as code (IaC), Centralized Processes, Continuous Delivery/Continuous
Integration (CD/CI).
b. Continuous Delivery/Continuous Integration (CD/CI), Pull Requests, Peer Review.
c. Infrastructure as code (IaC), Merge Request, Continuous Delivery/Continuous
Integration (CD/CI).
d. Declarative Configurations, Resource YAML files, Infrastructure as code (IaC).
4. Which of the following statements best describes application channels?
a. Application channels list repositories that the clusters can access.
b. Application channel lists are only stored in Helm charts.
c. Application channels ensure the application is deployed on a specific cluster.
d. Application channels group resources together to build a deployment.
5. Which one of the following statements best describes Infrastructure as code (IaC)?
a. Infrastructure as code (IaC) is the practice of storing all configurations for managing and
provisioning infrastructure on each OpenShift node.
b. Infrastructure as code (IaC) ensures that the same infrastructure is not provisioned every
time.
c. Infrastructure as code (IaC) enables and encourages administrators to manually provision and manage resources.
d. Infrastructure as code (IaC) is used for automating provisioning.
Solution
2. Which of the following statements describes best practices for maintaining GitOps
resource YAML files?
a. Resource YAML files are stored on multiple systems, similar to blockchains.
b. Resource YAML files are stored in a multicluster Postgres database.
c. Resource YAML files are stored on the local storage of infrastructure nodes.
d. Resource YAML files are stored in a repository such as Git.
3. Which of the following sentences list the three major GitOps components?
a. Infrastructure as code (IaC), Centralized Processes, Continuous Delivery/Continuous
Integration (CD/CI).
b. Continuous Delivery/Continuous Integration (CD/CI), Pull Requests, Peer Review.
c. Infrastructure as code (IaC), Merge Request, Continuous Delivery/Continuous
Integration (CD/CI).
d. Declarative Configurations, Resource YAML files, Infrastructure as code (IaC).
4. Which of the following statements best describes application channels?
a. Application channels list repositories that the clusters can access.
b. Application channel lists are only stored in Helm charts.
c. Application channels ensure the application is deployed on a specific cluster.
d. Application channels group resources together to build a deployment.
5. Which one of the following statements best describes Infrastructure as code (IaC)?
a. Infrastructure as code (IaC) is the practice of storing all configurations for managing and
provisioning infrastructure on each OpenShift node.
b. Infrastructure as code (IaC) ensures that the same infrastructure is not provisioned every
time.
c. Infrastructure as code (IaC) enables and encourages administrators to manually provision and manage resources.
d. Infrastructure as code (IaC) is used for automating provisioning.
Objectives
After completing this section, you should be able to deliver applications into multiple clusters by
using RHACM GitOps.
├── mysql-app
└── subscriptions
└── mysql-sub
The directory structure is created for the following subscription flow: subscription > mysql >
mysql-apps.
The subscription in the mysql-sub folder is applied from the CLI terminal to the hub cluster. The subscriptions and policies in the mysql-sub folder are then downloaded and applied to the hub cluster, and run on the managed clusters based on the placement rule. Placement rules determine which managed clusters are affected by each subscription, and the subscriptions or policies determine what is applied to the clusters that match their placement. As a result, the content of the mysql-sub folder, including common applications and common configurations, is applied to all managed clusters that match the placement rule.
Application Console
The application console dashboard manages the application lifecycle. The dashboard includes
capabilities to create, manage, and view the application status.
Resource topology
RHACM provides a resource topology page that displays a visual representation of the application and its corresponding resources. The visual representation of the state of the resources assists in troubleshooting and diagnosing application issues.
Advanced configuration
The advanced configuration tab provides the capability to view all application tables and terminology. The Advanced configuration tab also includes access to filter for the following resources:
• Subscription
• Placement rules
• Channels
To deploy and manage an application with RHACM, you need to create some custom resources. After creating a resource YAML file, you can verify that the resource definition is created without errors by using the kubectl command.
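For example, a client-side dry run can catch syntax errors before the resource is actually created; the file name below is a placeholder:

[student@workstation ~]$ kubectl apply -f mysql-namespace.yaml --dry-run=client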
Namespace
There must be one namespace resource per deployed application. The following example displays the namespace resource created for a MySQL application.
apiVersion: v1
kind: Namespace
metadata:
name: mysql
Subscriptions
Create the subscription resource YAML file for the MySQL application. The following subscription resource is an example of the subscription resource created for a MySQL application. The apiVersion with the full group, apps.open-cluster-management.io, must be specified in the resource YAML file. By default, the subscription operator subscribes to the master branch; in this example you want to use main. You can subscribe to a different branch by specifying the branch name annotation in the subscription. You can also specify the path directory, mysql, that is used to access the custom resources for deployment.
apiVersion: apps.open-cluster-management.io/v1
kind: Subscription
metadata:
name: mysql-development-subscription
labels:
app: mysql
annotations:
apps.open-cluster-management.io/github-path: mysql
apps.open-cluster-management.io/github-branch: main
Channels
Create the channel resource YAML file. Channels define the source repositories that a cluster can
subscribe to with a subscription. The following example channel definition displays an example
of a Git channel for the mysql application. Within the hub cluster, the channel uses the mysql
namespace. The Channel also specifies the https://github.com/redhattraining/do480-
apps pathname, which points to the storage location of resources used in the deployment.
apiVersion: apps.open-cluster-management.io/v1
kind: Channel
metadata:
name: mysql-channel
namespace: mysql
spec:
pathname: 'https://github.com/redhattraining/do480-apps'
type: GitHub
Placementrule
Create the placementrule resource YAML file. Placement rules describe and specify the target clusters where subscriptions are deployed. The following example placementrule definition specifies the mysql namespace and specifies that the application can be placed only on the local cluster, which is the hub cluster.
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
labels:
app: mysql
name: mysql-placement-1
namespace: mysql
spec:
clusterSelector:
matchLabels:
'local-cluster': 'true'
Services
Create the service resource YAML file. Services are used for pod communication in the network. The following example service definition targets pods based on the label specified in the selector. In this example, the service selector matches pods with the label name: mysql.
apiVersion: v1
kind: Service
metadata:
labels:
app: todonodejs
name: mysql
name: mysql
spec:
ports:
- port: 3306
selector:
name: mysql
Routes
Create the route resource YAML file. Routes are used by OpenShift to expose a service outside the cluster. These routes must be defined in a resource YAML file as part of the deployment; otherwise, administrators have to create the route manually.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
labels:
app: todonodejs
name: route-frontend
name: frontend
namespace: mysql
spec:
host: todo.apps.ocp4.example.com
path: "/todo"
to:
kind: Service
name: frontend
weight: 100
wildcardPolicy: None
After creating the required resource YAML files for the application and pushing them to a Git
repository, you are ready to deploy the application with RHACM.
Navigate to the left menu and select Applications and click Create application. RHACM provides
a YAML editor that is built directly into the application console. The YAML web UI enables
administrators to view, configure, and update the resources in the YAML file directly without
leaving the console. The YAML web UI updates and displays in real time as the administrator
configures the application.
Enter a valid Kubernetes name and namespace for the application. The drop-down menu provides a list of current namespaces based on your access role. Administrators can select a namespace from the drop-down menu or create a new one if they have been assigned the correct access role.
The Location for repository resources menu contains three repository types: Git, Helm, and Object storage. Selecting the Git repository displays the following fields:
Field Value
Pre/Post deployment task Set the Ansible Tower secret for jobs that you
want to run before or after the subscription
deploys the application resources. The
Ansible Tower tasks must be placed within
prehook and posthook folders in this
repository.
The Select clusters to deploy field allows you to define and update the placementrule
allocated for the application. You have the option to deploy the placementrule to the local hub
cluster, to the online managed clusters and the local cluster, or only deploy to clusters that match a
specified label. You also have the option to Select existing placement configuration if
you want to use a previously defined placementrule for an application in an existing namespace.
From settings, you can specify the application behavior for the Deployment window and
Time window. The Time window allows resource subscriptions to begin deployments only
during specific days and hours. You can define the subscription Deployment window type as
Always active, active within specified interval, or blocked within specified
interval for the specified time window. For example, you could define an active time window
between 3:00 PM and 3:30 PM every Monday for web application updates. This Time window
would allow the application deployments to occur every Monday between 3:00 PM and 3:30 PM.
In the event that a deployment begins during a defined time window and is still running when the
time window elapses, the deployment will still continue until the deployment is completed. If you
were to use blocked instead of active with the previous example, application deployments
could not begin between 3:00 PM and 3:30 PM, but could begin at any other time. The
subscription operator continues to monitor regardless of the defined deployment windows. The
following is an example Time window YAML definition that defines an active time window, from 10:00 PM to 12:00 AM in the Eastern time zone, that occurs every Monday.
apiVersion: apps.open-cluster-management.io/v1
kind: Subscription
...output omitted...
spec:
channel:
placement:
placementRef:
kind: PlacementRule
name: mysql-placement-1
timewindow:
windowtype: active
location: "America/New_York"
daysofweek: ["Monday"]
hours:
- start: "10:00PM"
end: "12:00AM"
References
For more information, refer to the Red Hat Advanced Cluster Management for
Kubernetes Applications guide at
https://access.redhat.com/documentation/en-us/
red_hat_advanced_cluster_management_for_kubernetes/2.4
Guided Exercise
Outcomes
You should be able to:
As the student user on the workstation machine, use the lab command to prepare your
system for this exercise.
This command ensures that RHACM is installed and that the Development clusters are ready and available.
Instructions
1. Fork the do480-apps course GitHub repository at https://github.com/
redhattraining/do480-apps/. This repository contains the YAML files required to
build the application.
1.2. In the top-right corner of the GitHub page, click Fork to create a fork for your
repository.
2.1. From the workstation machine, open Firefox and access https://multicloud-
console.apps.ocp4.example.com.
2.2. Click htpasswd_provider and log in as the admin user with the redhat password.
3. Use ACM Git Ops to create a new MySQL application based on the following criteria.
Field Value
Name mysql
Namespace mysql
Branch main
Path mysql
3.1. Navigate to the left menu and select Applications and click Create application.
3.2. Select YAML: On to view the YAML in the console as you create the application.
3.3. In both the Name and Namespace fields, type mysql. Each application deployed from the local hub cluster requires a namespace to contain the resources that represent the application on the managed clusters.
3.4. Select Git for the Repository type to specify the location of your deployable
resources. Input the URL of your forked do480-apps repository for the channel
source.
3.5. Type main for the branch and mysql for the path respectively.
3.6. Scroll down and select Select clusters to deploy to configure a placement rule for the application. Check the box located to the left of Deploy on local cluster, which creates a placement rule to deploy the application only on the local cluster.
3.7. Click the Save button. You are then redirected to the application's Topology page.
4. Verify you can access the MySQL application frontend for the Development clusters.
4.1. In the Topology page, select the router pod, and click the URL located under Location to view the application frontend on the Development cluster.
4.2. In the subsequent web page, append /todo/ to the URL to display the application.
5. Define an active time window to deploy updated versions of application resources.
5.2. Select the menu at the end of the mysql application row and click edit application to
update the application resources.
5.3. Scroll down and click Settings: Specify application behavior. Select the Active within specified interval radio button, which displays the Time window configuration.
5.4. Specify the day of the week for the time interval. Select the day on which you are performing this guided exercise. For example, if today is Thursday and the current time is 05:30, the active time window is defined as between 05:30 and 05:45.
5.5. Click the Update button and you are redirected to the applications Topology page.
6. Navigate to your forked do480-apps repository in the main branch and edit the do480-apps/mysql/deployment.yaml file. Update the container image tag to mysql-80:1-156.
6.1. In your GitHub fork, ensure you are in the main branch. Click mysql and then click the deployment.yaml file link to access the file. Next, click the pencil icon to edit the YAML file.
6.2. In the YAML web editor, update the current image tag to mysql-80:1-156. Then,
scroll to the bottom of the page and click the Commit changes button to update the
main branch.
spec:
containers:
- image: registry.redhat.io/rhel8/mysql-80:1-152
name: mysql
env:
7. Return to the RHACM console and view the application's Topology page to verify that the new pod was created.
7.1. In the Topology page, select the mysql ReplicaSet pod. View the Created time for the pod as the resource updates and creates a new pod. After a couple of minutes, you see the newly created mysql ReplicaSet pod.
8. Verify that resource updates cannot occur outside the active time window. For example, the current time is 5:35, the time zone is America/New_York, and the time window interval is defined to end at 5:45.
8.1. After the time window interval expires, in your GitHub fork, ensure you are in the main branch. Click mysql and then click the deployment.yaml file link to access the file. Next, click the pencil icon to edit the YAML file.
8.2. In the GitHub YAML editor, update the current image tag to mysql-80:latest.
Then, scroll to the bottom of the page and click the Commit changes button to
update the main branch.
spec:
containers:
- image: registry.redhat.io/rhel8/mysql-80:latest
name: mysql
env:
8.3. Return to the RHACM console and view the application's Topology page to check whether a new pod was created.
8.4. In the Topology page, select the mysql ReplicaSet pod. The pod has not been re-created because the active time window interval has expired.
Finish
On the workstation machine, use the lab command to complete this exercise. This is important
to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
After completing this section, you should be able to describe Kustomize concepts and features, and deploy new customizations with the Kustomize command-line tool.
Introducing Kustomize
Kustomize is a configuration management solution that allows an administrator to make declarative changes to application configurations and components while preserving the original base YAML files. Kustomize overlays declarative YAML artifacts, or patches, that override the general settings without actually modifying the original base files. For example, you could have a multiphase application for multiple departments. The application configuration is predominantly identical amongst the different departments; however, there are a few settings that are specific to each department and therefore require some customization. Kustomize implements the kustomization.yaml file that contains specific settings that override the parameters of the original base configuration. Kustomize enables the GitOps team to consume base YAML file updates while maintaining use-case specific customization overrides.
Kustomize implements the concepts of bases and overlays. Bases are the minimal YAML files that contain the settings shared among the application target environments. For example, you could have a base set of files to deploy a database. Overlays inherit from one or more bases and are used to patch manifests for specific environments or clusters. In reference to the previous example, administrators could have base files for deploying a PostgreSQL database and an overlay that patches the base manifest for an environment to use a specific type of storage.
Kustomize CLI
Kustomize has been embedded in kubectl since the March 2019 release of Kubernetes 1.14, and it is also maintained as a standalone command-line tool.
Kustomize benefits
Reusability
Kustomize allows administrators to reuse the base YAML files across multiple environments.
Templateless
Kustomize does not have a templating language and uses the same standard YAML syntax as
Kubernetes.
Debug isolation
Kustomize overlays contain the non-common, environment-specific settings. Because these settings are isolated in the overlays, troubleshooting issues becomes significantly less cumbersome: you only need to focus on the behavior of the overlay and compare it to the behavior of the base.
Kustomize files
Kustomize breaks down the application YAML file structure into base YAML files and overlay YAML files for environment-specific settings. After the customization parameters have been defined, Kustomize rebuilds and deploys a new manifest with the kubectl kustomize and kubectl apply commands respectively. Kustomize consists of the following file types:
• Base files
The base YAML files contain the application settings that are identical or common to multiple environments.
• Overlay files
The patch overlay YAML files contain the application settings that are unique and override the base file parameters.
• Kustomization files
The kustomization YAML files define how the settings in the overlay YAML files interact with the base YAML files to build the new manifest. The new manifests, built from the base YAML files and their overlays, are called variants. Generally, in a multicluster infrastructure, each environment has a corresponding kustomization.yaml file that contains specific settings for different environments such as production and development.
Kustomize Features
The following table lists a subset of the features available from Kustomize:
Kustomize Features
namePrefix: prepends a specific prefix to resource names.
commonLabels: replaces label values for a specific keyword.
images: replaces the image with a customized image name.
configMapGenerator: creates Kubernetes ConfigMap resources used as environment variables in the pod definition.
patchesStrategicMerge: merges specific configuration segments into the common base configuration.
The infrastructure consists of two specific environments, named development and production, that run MySQL applications on their respective clusters. Although both environments consist mostly of the same configuration settings, there are a few differences.
First, the administrator procures all of the required resource YAML files.
── base
│ ├── deployment.yaml
│ ├── kustomization.yaml
│ └── service.yaml
└── overlays
├── dev
│ └── kustomization.yaml
├── production
│ ├── kustomization.yaml
│ ├── pvc.yaml
The base directory must contain a kustomization.yaml file that specifies the resource YAML
files that are required to build a new manifest for the application.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml
The base directory contains the resource files for the application and must be built with the
kubectl kustomize command. This command takes the YAML source (via a path or URL) and
creates a new YAML that can be piped into the kubectl create command.
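For example, assuming the base files are in a directory named base, you might run:

[student@workstation ~]$ kubectl kustomize base/ | kubectl create -f -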
Patch overlays are created to facilitate the customization of both the production and
development environments.
namePrefix
The namePrefix directive is used to add a prefixed set of characters to the beginning of resource names. The following kustomization.yaml examples apply the prefix prod- to all the resource names for the production environment and the prefix dev- to all the resource names in the development environment.
#Production
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: prod-
#Development
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: dev-
commonLabels
The production and development cluster sets require a label for the application that references
the cluster set environment. The following kustomization.yaml file defines the labels
prod-todolist.js and dev-todolist.js for the keyword app on the production and
development cluster sets respectively.
#Production
commonLabels:
app: prod-todolist.js
#Development
commonLabels:
app: dev-todolist.js
Note that commonLabels does not append or prepend characters; it directly replaces the values.
images
The production and development cluster sets require an image for the application that references the cluster set environment. The images feature replaces the placeholder mysql-db-image in the deployment.yaml file. The production and development cluster set images are defined in their kustomization.yaml files by using newName and newTag. The following kustomization.yaml files define the newName value as quay.io/example/todo-single for both the production and development cluster sets. The production cluster sets use the stable version 1.1 that has been tested and secured, whereas the development cluster sets implement the latest open source release of the image, v2.0-beta.
#Production
images:
- name: mysql-db-image
newName: quay.io/example/todo-single
newTag: v1.1
#Development
images:
- name: mysql-db-image
newName: quay.io/example/todo-single
newTag: v2.0-beta
patchesStrategicMerge
The development cluster sets use the default emptyDir setting with ephemeral storage. However, the production environment requires persistent volumes and persistent volume claims for storage. The volumes to merge are defined in a separate YAML file named pvc.yaml. You can use patchesStrategicMerge to apply the volumes defined in pvc.yaml. In the kustomization.yaml file for the production environment, pvc.yaml is listed under patchesStrategicMerge.
#Production
patchesStrategicMerge:
- pvc.yaml
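As a sketch only, such a pvc.yaml strategic merge patch might patch the mysql deployment to mount a persistent volume claim in place of the default emptyDir; the claim name and mount path below are assumptions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  template:
    spec:
      containers:
      - name: mysql
        volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql/data
      volumes:
      - name: mysql-data
        persistentVolumeClaim:
          claimName: mysql-pv-claim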
References
For more information, refer to the Red Hat Advanced Cluster Management for
Kubernetes Applications guide at
https://access.redhat.com/documentation/en-us/
red_hat_advanced_cluster_management_for_kubernetes/2.4
Guided Exercise
Outcomes
You should be able to:
As the student user on the workstation machine, use the lab command to prepare your
system for this exercise.
This command ensures that RHACM is installed and that the Development clusters are ready and available.
Instructions
1. From workstation, use Kustomize to build the base directory and corresponding
kustomization.yaml file in the ~/DO480/labs/applications-kustomize
directory. The base directory contents are located at http://github.com/
<your_fork>/do480-apps in the kustomize branch.
1.1. Open the terminal application on the workstation machine. Log in to the Hub
OpenShift cluster as the admin user.
1.4. Use Git to clone your forked do480-apps repository to access the common set of YAML resource files.
1.5. Navigate to the do480-apps directory and change to the kustomize branch to access the mysql deployment YAML files.
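The commands are not reproduced here; a typical sequence, with <your_fork> as a placeholder for your GitHub account, might be:

[student@workstation ~]$ git clone https://github.com/<your_fork>/do480-apps.git
[student@workstation ~]$ cd do480-apps
[student@workstation do480-apps]$ git checkout kustomize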
1.6. Navigate to the base directory, copy the contents of the kustomize directory to
your base directory and list the contents.
[student@workstation base]$ ll
total 32
-rw-rw-r--. 1 student student 165 Dec 24 06:10 applications.yaml
-rw-rw-r--. 1 student student 807 Dec 24 06:10 deployment-frontend.yaml
-rw-rw-r--. 1 student student 692 Dec 24 06:10 deployment.yaml
-rw-rw-r--. 1 student student 65 Dec 24 06:10 namespace.yaml
-rw-rw-r--. 1 student student 227 Dec 24 06:10 placementrule.yaml
1.7. Build the kustomization.yaml file in the base directory, which contains the set of resources and associated customizations. Your kustomization.yaml file should contain the following:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- applications.yaml
- deployment-frontend.yaml
- deployment.yaml
- service.yaml
- service-frontend.yaml
- namespace.yaml
- placementrule.yaml
- route.yaml
1.8. Verify the kustomization.yaml file builds the deployment without errors with the
kubectl kustomize command.
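For example, running the following command from the base directory renders the combined manifest shown below:

[student@workstation base]$ kubectl kustomize .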
name: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: todonodejs
name: frontend
name: frontend
namespace: mysql
spec:
replicas: 1
selector:
matchLabels:
app: todonodejs
name: frontend
template:
metadata:
labels:
app: todonodejs
name: frontend
spec:
containers:
- env:
- name: MYSQL_ENV_MYSQL_DATABASE
value: items
- name: MYSQL_ENV_MYSQL_USER
value: user1
- name: MYSQL_ENV_MYSQL_PASSWORD
value: mypa55
- name: APP_PORT
value: "8080"
image: quay.io/redhattraining/todo-single:v1.0
name: todonodejs
ports:
- containerPort: 8080
name: nodejs-http
resources:
limits:
cpu: "0.5"
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: todonodejs
name: mysql
name: mysql
spec:
replicas: 1
selector:
matchLabels:
app: todonodejs
name: mysql
template:
metadata:
labels:
app: todonodejs
name: mysql
spec:
containers:
- env:
- name: MYSQL_ROOT_PASSWORD
value: r00tpa55
- name: MYSQL_USER
value: user1
- name: MYSQL_PASSWORD
value: mypa55
- name: MYSQL_DATABASE
value: items
image: registry.redhat.io/rhel8/mysql-80:1-156
name: mysql
ports:
- containerPort: 3306
name: mysql
---
apiVersion: app.k8s.io/v1beta1
kind: Application
metadata:
name: mysql
spec:
selector:
matchLabels:
app: mysql
---
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
labels:
app: mysql
name: mysql-placement-1
namespace: mysql
spec:
clusterSelector:
matchLabels:
local-cluster: "true"
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
labels:
app: todonodejs
name: route-frontend
name: frontend
namespace: mysql
spec:
host: todo.apps.ocp4.example.com
path: /todo
to:
kind: Service
name: frontend
weight: 100
wildcardPolicy: None
Now you have a base set of deployment resources that can be shared amongst both the production and development administrators.
2. Examine the deployment.yaml in the base directory and note that the storage is not
configured. The production clusters require that the mysql applications use persistent
volumes and persistent volume claims for storage. Build an overlay for the production
clusters that uses persistent storage.
2.1. Create the overlays directory and the corresponding production and development subdirectories. Ensure that the overlays directory is at the same directory level as the base directory.
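For example, one way to create the directories, assuming you are in the ~/DO480/labs/applications-kustomize directory, is:

[student@workstation applications-kustomize]$ mkdir -p overlays/production overlays/development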
2.2. Create the dbclaim-pv.yaml file resource, which contains the persistent volume
claim, in the overlays/production/ subdirectory. Your dbclaim-pv.yaml file
should contain the following:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
namespace: mysql
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Mi
storageClassName: nfs-storage
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../base
resources:
- dbclaim-pv.yaml
bases:
- ../../base
resources:
- dbclaim-pv.yaml
2.4. Use the kubectl kustomize command to build the deployment for the production
clusters with the new storage resource. Verify the new storage resource has been
added to the deployment.
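For example, assuming the directory layout created above:

[student@workstation applications-kustomize]$ kubectl kustomize overlays/production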
2.5. The output of the previous command can now be applied to the production clusters.
Use the kubectl kustomize command piped to kubectl apply -f to build and
deploy the application to the production clusters with the new storage resource.
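For example:

[student@workstation applications-kustomize]$ kubectl kustomize overlays/production | kubectl apply -f -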
2.6. Verify the deployment and confirm that the new storage resource is deployed.
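For example, a command such as the following might show the new persistent volume claim; the namespace is taken from the application resources:

[student@workstation applications-kustomize]$ oc get pvc -n mysql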
2.7. From workstation, use Firefox to navigate to the mysql application at http://
todo.apps.ocp4.example.com/todo/ to verify the application.
2.8. Use the kubectl kustomize command to confirm that the base yaml files have not
been altered.
labels:
app: todonodejs
name: frontend
name: frontend
namespace: mysql
spec:
replicas: 1
selector:
matchLabels:
app: todonodejs
name: frontend
template:
metadata:
labels:
app: todonodejs
name: frontend
spec:
containers:
- env:
- name: MYSQL_ENV_MYSQL_DATABASE
value: items
- name: MYSQL_ENV_MYSQL_USER
value: user1
- name: MYSQL_ENV_MYSQL_PASSWORD
value: mypa55
- name: APP_PORT
value: "8080"
image: quay.io/redhattraining/todo-single:v1.0
name: todonodejs
ports:
- containerPort: 8080
name: nodejs-http
resources:
limits:
cpu: "0.5"
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: todonodejs
name: mysql
name: mysql
spec:
replicas: 1
selector:
matchLabels:
app: todonodejs
name: mysql
template:
metadata:
labels:
app: todonodejs
name: mysql
spec:
containers:
- env:
- name: MYSQL_ROOT_PASSWORD
value: r00tpa55
- name: MYSQL_USER
value: user1
- name: MYSQL_PASSWORD
value: mypa55
- name: MYSQL_DATABASE
value: items
image: registry.redhat.io/rhel8/mysql-80:1-156
name: mysql
ports:
- containerPort: 3306
name: mysql
---
apiVersion: app.k8s.io/v1beta1
kind: Application
metadata:
name: mysql
spec:
selector:
matchLabels:
app: mysql
---
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
labels:
app: mysql
name: mysql-placement-1
namespace: mysql
spec:
clusterSelector:
matchLabels:
local-cluster: "true"
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
labels:
app: todonodejs
name: route-frontend
name: frontend
namespace: mysql
spec:
host: todo.apps.ocp4.example.com
path: /todo
to:
kind: Service
name: frontend
weight: 100
wildcardPolicy: None
3. Create an overlay for the production clusters to increase the number of mysql database
replicas from one to three.
3.2. Modify the kustomization.yaml file to increase the number of mysql replica
pods. Your kustomization.yaml file should contain the following:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../base
resources:
- dbclaim-pv.yaml
replicas:
- name: mysql
count: 3
bases:
- ../../base
resources:
- dbclaim-pv.yaml
replicas:
- name: mysql
count: 3
3.3. Use the kubectl kustomize command piped to kubectl apply -f to build and
deploy the application and verify the number of mysql replicas.
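For example:

[student@workstation applications-kustomize]$ kubectl kustomize overlays/production | kubectl apply -f -
[student@workstation applications-kustomize]$ oc get deployment mysql -n mysql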
application.app.k8s.io/mysql unchanged
placementrule.apps.open-cluster-management.io/mysql-placement-1 unchanged
route.route.openshift.io/frontend unchanged
Finish
On the workstation machine, use the lab command to complete this exercise. This is important
to ensure that resources from previous exercises do not impact upcoming exercises.
Lab
Outcomes
You should be able to:
The following command ensures that RHACM is installed and that the Development and Production clusters are ready and available.
Instructions
1. Fork the do480-apps course GitHub repository at https://github.com/
redhattraining/do480-apps/. This repository contains the YAML files required to build
the application.
2. Use the RHACM web console to add the labels environment=development and environment=production to the local-cluster and managed-cluster clusters respectively. The web console URL is https://multicloud-console.apps.ocp4.example.com.
3. Use ACM Git Ops to create a new MySQL application based on the following:
Field Value
Name mysql
Namespace mysql
Branch main
Path mysql
Field Value
Branch production
Path mysql
4. Verify you can access the MySQL application frontend for the Development and
Production clusters.
5. In GitHub, use your fork of the do480-apps repository to promote the Development MySQL image mysql-80:1-156 to Production.
6. Return to the RHACM web console and refresh the application Topology page. Verify that the MySQL application is working and using the correct image.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Solution
Outcomes
You should be able to:
The following command ensures that RHACM is installed and that the Development and Production clusters are ready and available.
Instructions
1. Fork the do480-apps course GitHub repository at https://github.com/
redhattraining/do480-apps/. This repository contains the YAML files required to build
the application.
1.2. In the top-right corner of the GitHub page, click Fork to create a fork for your
repository.
2. Use the RHACM web console to add the labels environment=development and environment=production to the local-cluster and managed-cluster clusters respectively. The web console URL is https://multicloud-console.apps.ocp4.example.com.
2.1. From workstation, use Firefox to navigate to the RHACM web console at https://multicloud-console.apps.ocp4.example.com.
When prompted, select htpasswd_provider.
Type the following credentials:
• Username: admin
• Password: redhat
2.2. In the left navigation menu, select Clusters to add new labels to the local-cluster
and managed-cluster clusters.
2.3. On the local-cluster row, select the drop-down menu at the end of the row, and
click edit label.
2.4. Type environment=development for the new label and click Save.
2.5. On the managed-cluster row, select the drop-down menu at the end of the row, and
click edit label.
2.6. Type environment=production for the new label and click Save.
3. Use ACM Git Ops to create a new MySQL application based on the following:
Field Value
Name mysql
Namespace mysql
Branch main
Path mysql
Field Value
Branch production
Path mysql
3.1. Navigate to the left menu and select Applications and click Create application.
Field Value
Name mysql
Namespace mysql
Branch main
Path mysql
3.3. Add an additional repository to the MySQL application for the Production cluster
based on the following and click Save:
Field Value
Branch production
Path mysql
4. Verify you can access the MySQL application frontend for the Development and
Production clusters.
5. In GitHub, use your fork of the do480-apps repository to promote the Development MySQL image mysql-80:1-156 to Production.
5.1. In your GitHub fork, switch to the production branch. Click mysql and then click the deployment.yaml link to access the file. Next, click the pencil icon to edit the YAML file.
5.2. Scroll down and replace the current image tag with mysql-80:1-156. Then scroll to the bottom of the page, select Create a new branch for this commit and start a pull request, accept the default branch name, and click Propose changes.
5.3. To complete the workflow, click Create pull request, Merge pull request, and finally
Confirm merge. After your pull request is successfully merged and closed, click Delete
branch.
6. Return to the RHACM web console and refresh the application Topology page. Verify that the MySQL application is working and using the correct image.
6.1. In the Topology page, select the Subscription → mysql-subscription-2 to view the
managed-cluster topology.
6.2. Select the mysql ReplicaSet pod and click View Pod YAML and Logs to view the YAML
file and verify the image tag.
6.3. In the YAML console UI, scroll down and verify the image is mysql-80:1-156.
6.4. In the left navigation menu, select the Applications → mysql link to view the
application.
6.5. In the Topology page, select Subscription → mysql-subscription-2 and select the router pod. Click todo.apps.ocp4-mng.example.com/, located under Location, to view the application. In the subsequent web page, append /todo/ to the URL to display the application.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Summary
In this chapter, you learned:
• How to apply the GitOps principles of declarative desired state and immutable
desired state versions for the application lifecycle in RHACM.
• How to leverage Kustomize to apply and manage custom settings for multicluster
environments.