OpenShift Container Platform 4.12 Release Notes
OpenShift Container Platform 4.12 clusters are available at https://console.redhat.com/openshift. With the
Red Hat OpenShift Cluster Manager application for OpenShift Container Platform, you can deploy
OpenShift clusters to either on-premises or cloud environments.
OpenShift Container Platform 4.12 is supported on Red Hat Enterprise Linux (RHEL) 8.4 and 8.5, as well
as on Red Hat Enterprise Linux CoreOS (RHCOS) 4.12.
You must use RHCOS machines for the control plane, and you can use either RHCOS or RHEL for
compute machines.
Starting with OpenShift Container Platform 4.12, Red Hat is adding an additional six months of Extended Update Support
(EUS) to even-numbered releases, extending the EUS phase from 18 months to two years. For more information, see the Red
Hat OpenShift Container Platform Life Cycle Policy.
OpenShift Container Platform 4.8 is an Extended Update Support (EUS) release. More information on
Red Hat OpenShift EUS is available in OpenShift Life Cycle and OpenShift EUS Overview.
Maintenance support for version 4.8 ends in January 2023, at which point it moves to the extended life phase. For more
information, see the Red Hat OpenShift Container Platform Life Cycle Policy.
Default consoles for new clusters are now determined by the installation
platform
Red Hat Enterprise Linux CoreOS (RHCOS) nodes installed from an OpenShift Container Platform 4.12
boot image now use a platform-specific default console. The default consoles on cloud platforms
correspond to the specific system consoles expected by that cloud provider. VMware and OpenStack
images now use a primary graphical console and a secondary serial console. Bare metal installations
now use only the graphical console by default and do not enable a serial console. Installations performed
with coreos-installer can override these defaults and enable the serial console.
Existing nodes are not affected. New nodes on existing clusters are not likely to be affected because they
are typically installed from the boot image that was originally used to install the cluster.
For information about how to enable the serial console, see the following documentation:
For more information, see Installing RHCOS using IBM Secure Execution.
For more information, see Installing a cluster on AWS with network customizations.
Extend worker nodes to the edge of AWS when installing into an existing
Virtual Private Cloud (VPC) with Local Zone subnets.
With this update, you can install OpenShift Container Platform into an existing VPC with installer-
provisioned infrastructure, extending the worker nodes to Local Zone subnets. The installation program
provisions worker nodes at the edge of the AWS network and designates them for user applications by
applying NoSchedule taints. Applications deployed in Local Zone locations deliver low latency to end users.
For more information, see Installing a cluster using AWS Local Zones.
For more information about installing using installer-provisioned infrastructure, see Using a GCP
Marketplace image. For more information about installing using user-provisioned infrastructure, see
Creating additional worker machines in GCP.
For more information about installing a cluster, see Preparing to install on IBM Cloud VPC.
A cluster administrator must provide a manual acknowledgment before the cluster can be upgraded from
OpenShift Container Platform 4.11 to 4.12. This is to help prevent issues after upgrading to OpenShift
Container Platform 4.12, where APIs that have been removed are still in use by workloads, tools, or other
components running on or interacting with the cluster. Administrators must evaluate their cluster for any
APIs in use that will be removed and migrate the affected components to use the appropriate new API
version. After this is done, the administrator can provide the administrator acknowledgment.
All OpenShift Container Platform 4.11 clusters require this administrator acknowledgment before they can
be upgraded to OpenShift Container Platform 4.12.
For more information, see Preparing to update to OpenShift Container Platform 4.12.
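As a hedged example, an administrator can provide the acknowledgment by patching the admin-acks config map. The data key shown here is an assumption based on the Kubernetes 1.25 API removals; use the key listed in the update documentation for your cluster version:

# Provide the administrator acknowledgment (the data key is an assumption;
# confirm it in "Preparing to update to OpenShift Container Platform 4.12").
$ oc -n openshift-config patch cm admin-acks --type=merge \
    --patch '{"data":{"ack-4.11-kube-1.25-api-removals-in-4.12":"true"}}'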
Mirroring file-based catalog Operator images in OCI format with the oc-mirror
CLI plugin (Technology Preview)
Using the oc-mirror CLI plugin to mirror file-based catalog Operator images in OCI format instead of
Docker v2 format is now available as a Technology Preview.
For more information, see Mirroring file-based catalog Operator images in OCI format.
For more information about the Ironic provisioning service, see Deploying installer-provisioned clusters on
bare metal.
For a brief overview of this type of deployment, see the blog post Deploying Your Cluster at the Edge With
OpenStack.
OpenShift Container Platform on AWS Outposts
OpenShift Container Platform 4.12 is now supported on the AWS Outposts platform. With AWS Outposts
you can deploy edge-based worker nodes, while using AWS Regions for the control plane nodes. The
documentation for this feature is currently unavailable and is targeted for release at a later date.
!"Optional: Zero Touch Provisioning (ZTP) manifests
With the preferred mode, you can configure the install-config.yaml file and specify Agent-based
specific settings in the agent-config.yaml file. For more information, see About the Agent-based
OpenShift Container Platform Installer.
!"IPv4
!"IPv6
!"Installation image generation: The user-provided manifests are checked for validity and compatibility.
!"Installation: The installation service checks the hardware available for installation and emits validation
events that can be retrieved with the openshift-install agent wait-for subcommands.
Post-installation configuration
!"VMware vSphere version 7.0.2 or later, up to but not including version 8. vSphere 8 is not supported.
!"vCenter 7.0.2 or later, up to but not including version 8. vCenter 8 is not supported.
Components with versions earlier than those above are still supported, but are deprecated. These versions
are still fully supported, but version 4.12 of OpenShift Container Platform requires vSphere virtual
hardware version 15 or later. For more information, see Deprecated and removed features.
Failing to meet the above requirements prevents OpenShift Container Platform from upgrading to
OpenShift Container Platform 4.13 or later.
Cluster Capabilities
The following new cluster capabilities have been added:
!"Console
!"Insights
!"Storage
!"CSISnapshot
A new predefined set of cluster capabilities, v4.12 , has been added. This includes all capabilities from
v4.11 , and the new capabilities added with the current release.
On a cluster with multi-architecture compute machines, you can now override the node affinity in the
Operator’s Subscription object to schedule pods on nodes with architectures that the Operator
supports. For more information, see Using node affinity to control where an Operator is installed.
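For example, a minimal sketch of an Operator Subscription that overrides node affinity so that the Operator pod is scheduled only on amd64 or arm64 nodes; the Operator name, namespace, and catalog source are illustrative assumptions:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator          # illustrative name
  namespace: openshift-operators
spec:
  channel: stable
  name: example-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  config:
    affinity:
      nodeAffinity:               # overrides the default node affinity for the Operator pod
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/arch
              operator: In
              values:
              - amd64
              - arm64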
Web console
Administrator Perspective
With this release, there are several updates to the Administrator perspective of the web console.
!"The OpenShift Container Platform web console displays a ConsoleNotification if the cluster is
upgrading. Once the upgrade is done, the notification is removed.
!"A restart rollout option for the Deployment resource and a retry rollouts option for the
DeploymentConfig resource are available on the Action and Kebab menus.
!"You can view a list of supported clusters on the All Clusters dropdown list. The supported clusters
include OpenShift Container Platform, Red Hat OpenShift Service on AWS (ROSA),
Azure Red Hat OpenShift (ARO), ROKS, and Red Hat OpenShift Dedicated.
For more information about multi-architecture compute machines, see Configuring a multi-architecture
compute machine on an OpenShift cluster.
This feature was previously introduced as a Technology Preview in OpenShift Container Platform 4.10 and
is now generally available in OpenShift Container Platform 4.12. With the dynamic plugin, you can build
high quality and unique user experiences natively in the web console.
Developer Perspective
With this release, there are several updates to the Developer perspective of the web console. You can
perform the following actions:
!"Export your application in the ZIP file format to another project or cluster by using the Export
application option on the +Add page.
!"Create a Kafka event sink to receive events from a particular source and send them to a Kafka topic.
!"Set the default resource preference in the User Preferences → Applications page. In addition, you
can select another resource type to be the default.
!"Optionally, set another resource type from the Add page by clicking Import from Git →
Advanced options → Resource type and selecting the resource from the drop-down list.
!"Make the status.HostIP node IP address for pods visible in the Details tab of the Pods page.
!"See the resource quota alert label on the Topology and Add pages whenever any resource reaches
the quota. The alert label link takes you to the ResourceQuotas list page. If the alert label link is for
a single resource quota, it takes you to the ResourceQuota details page.
!"For deployments, an alert is displayed in the topology node side panel if any errors are
associated with resource quotas. Also, a yellow border is displayed around the deployment
nodes when the resource quota is exceeded.
!"See the common updates to the Pipeline details and PipelineRun details page visualization by
performing the following actions:
!"Use the standard icons to zoom in, zoom out, fit to screen, and reset the view.
!" PipelineRun details page only: At specific zoom factors, the background color of the tasks
changes to indicate the error or warning status. You can hover over the tasks badge to see the
total number of tasks and the completed tasks.
In OpenShift Container Platform 4.12, you can do the following from the Helm page:
!"View the list of the existing Helm chart repositories with their scope in the Repositories page.
!"View the newly created Helm release in the Helm Releases page.
Managing plugins for the OpenShift CLI with Krew (Technology Preview)
Using Krew to install and manage plugins for the OpenShift CLI ( oc ) is now available as a Technology
Preview.
!"Installing a cluster with RHEL KVM on IBM Z and LinuxONE in a restricted network
Notable enhancements
The following new features are supported on IBM Z and LinuxONE with OpenShift Container Platform
4.12:
!"Cron jobs
!"Descheduler
!"IPv6
!"PodDisruptionBudget
!"Scheduler profiles
Supported features
The following features are also supported on IBM Z and LinuxONE:
!"Currently, the following Operators are supported:
!"Compliance Operator
!"NFD Operator
!"NMState Operator
!"Bridge
!"Host-device
!"IPAM
!"IPVLAN
!"Cloning
!"Expansion
!"Snapshot
!"Helm
!"Multipathing
!"Operator API
These features are available only for OpenShift Container Platform on IBM Z and LinuxONE for 4.12:
!"HyperPAV enabled on IBM Z and LinuxONE for the virtual machines for FICON attached ECKD
storage
Restrictions
The following restrictions impact OpenShift Container Platform on IBM Z and LinuxONE:
!"NVMe
!"OpenShift Metering
!"OpenShift Virtualization
!"Persistent shared storage must be provisioned by using either Red Hat OpenShift Data Foundation
or other supported storage protocols
!"Persistent non-shared storage must be provisioned using local storage, like iSCSI, FC, or using LSO
with DASD, FCP, or EDEV/FBA
IBM Power
With this release, IBM Power is now compatible with OpenShift Container Platform 4.12. For installation
instructions, see the following documentation:
Notable enhancements
The following new features are supported on IBM Power with OpenShift Container Platform 4.12:
!"Cron jobs
!"Descheduler
!"PodDisruptionBudget
!"Scheduler profiles
!"Stream Control Transmission Protocol (SCTP)
!"Topology Manager
Supported features
The following features are also supported on IBM Power:
!"Compliance Operator
!"NFD Operator
!"NMState Operator
!"Bridge
!"Host-device
!"IPAM
!"IPVLAN
!"CSI Volumes
!"Cloning
!"Expansion
!"Snapshot
!"Helm
!"IPv6
!"Multipathing
!"Multus SR-IOV
!"Operator API
Restrictions
The following restrictions impact OpenShift Container Platform on IBM Power:
!"OpenShift Metering
!"OpenShift Virtualization
!"Precision Time Protocol (PTP) hardware
!"Compute nodes must run Red Hat Enterprise Linux CoreOS (RHCOS)
!"Persistent storage must be of the Filesystem type that uses local volumes, Red Hat OpenShift Data
Foundation, Network File System (NFS), or Container Storage Interface (CSI)
Images
A new import value, importMode , has been added to the importPolicy parameter of image streams.
The following fields are available for this value:
!" Legacy : Legacy is the default value for importMode . When active, the manifest list is discarded,
and a single sub-manifest is imported. The platform is chosen in the following order of priority:
!"Tag annotations
!"Linux/AMD64
!" PreserveOriginal : When active, the original manifest is preserved. For manifest lists, the manifest
list and all of its sub-manifests are imported.
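For example, a minimal sketch of an image stream tag that preserves the original manifest list; the image stream name and image reference are illustrative assumptions:

apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: example-imagestream       # illustrative name
spec:
  tags:
  - name: latest
    from:
      kind: DockerImage
      name: registry.example.com/example/multiarch-image:latest   # illustrative image
    importPolicy:
      importMode: PreserveOriginal   # import the manifest list and all of its sub-manifests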
Security and compliance
The Security Profiles Operator (SPO) provides a way to define secure computing (seccomp) profiles and SELinux profiles as custom
resources, synchronizing profiles to every node in a given namespace.
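For example, a minimal sketch of a seccomp profile defined as a custom resource; the profile name and namespace are illustrative assumptions, and the API group shown is the upstream Security Profiles Operator group:

apiVersion: security-profiles-operator.x-k8s.io/v1beta1
kind: SeccompProfile
metadata:
  name: example-profile           # illustrative name
  namespace: my-namespace         # the profile is synchronized to nodes for this namespace
spec:
  defaultAction: SCMP_ACT_LOG     # log syscalls instead of blocking them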
Networking
The OVN-Kubernetes network plugin includes a wider array of features than OpenShift SDN, including:
!"Support for network flow tracking in NetFlow, sFlow, and IPFIX formats
There are also enormous scale, performance, and stability improvements in OpenShift Container Platform
4.12 compared to prior versions.
If you are using the OpenShift SDN network plugin, note that:
!"OpenShift SDN remains the default on OpenShift Container Platform versions earlier than 4.12.
!"As of OpenShift Container Platform 4.12, OpenShift SDN is a supported installation-time option.
For information on migrating to OVN-Kubernetes from OpenShift SDN, see Migrating from the OpenShift
SDN network plugin.
!" ovn_controller_southbound_database_connected
!" ovnkube_master_libovsdb_monitors
!" ovnkube_master_network_programming_duration_seconds
!" ovnkube_master_network_programming_ovn_duration_seconds
!" ovnkube_master_egress_routing_via_host
!" ovs_vswitchd_interface_resets_total
!" ovs_vswitchd_interface_rx_dropped_total
!" ovs_vswitchd_interface_tx_dropped_total
!" ovs_vswitchd_interface_rx_errors_total
!" ovs_vswitchd_interface_tx_errors_total
!" ovs_vswitchd_interface_collisions_total
!" ovnkube_master_skipped_nbctl_daemon_total
The upstream version of ExternalDNS for OpenShift Container Platform 4.12 is v0.13.1.
Capturing metrics and telemetry associated with the use of routes and shards
In OpenShift Container Platform 4.12, the Cluster Ingress Operator exports a new metric named
route_metrics_controller_routes_per_shard . The shard_name label of the metric specifies the
name of the shards. This metric gives the total number of routes that are admitted by each shard.
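As a simple sketch, a query such as the following can be run in the console or against the Prometheus API to view the number of admitted routes per shard:

sum by (shard_name) (route_metrics_controller_routes_per_shard)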
The AWS Load Balancer Operator sets the EnableIPTargetType feature gate to false . The AWS Load
Balancer controller disables the support for services and ingress resources for target-type ip .
For more information, see Configuring an Ingress Controller to manually manage DNS.
You can now configure multi-network for SR-IOV additional networks. Configuring SR-IOV additional
networks is a Technology Preview feature and is only supported with kernel network interface cards
(NICs).
Switch between AWS load balancer types without deleting the Ingress
Controller
You can update the Ingress Controller to switch between an AWS Classic Load Balancer (CLB) and an
AWS Network Load Balancer (NLB) without deleting the Ingress Controller.
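A minimal sketch of the relevant Ingress Controller fields when switching to an NLB; values other than the provider parameters are illustrative:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: External
      providerParameters:
        type: AWS
        aws:
          type: NLB               # switch back to a CLB by setting this value to Classic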
!"Egress IP addresses
!"Egress firewalls
!"Multicast
For more information about how the migration to OVN-Kubernetes works, see Migrating from the
OpenShift SDN cluster network provider.
For more information, see Advertising an IP address pool from a subset of nodes.
Additional deployment specifications for MetalLB
This update provides additional deployment specifications for MetalLB. When you use a custom resource
to deploy MetalLB, you can use these additional deployment specifications to manage how MetalLB
speaker and controller pods deploy and run in your cluster. For example, you can use MetalLB
deployment specifications to manage where MetalLB pods are deployed, define CPU limits for MetalLB
pods, and assign runtime classes to MetalLB pods.
For more information about deployment specifications for MetalLB, see Deployment specifications for
MetalLB.
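A hedged sketch of a MetalLB custom resource that uses deployment specifications; the controllerConfig and speakerConfig field names and their contents are assumptions, so check the linked documentation for the exact schema:

apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
spec:
  nodeSelector:                   # assumption: restrict where MetalLB speaker pods run
    node-role.kubernetes.io/worker: ""
  controllerConfig:               # assumption: per-component deployment settings
    runtimeClassName: myclass
    resources:
      limits:
        cpu: "200m"
  speakerConfig:
    runtimeClassName: myclass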
With OpenShift Container Platform 4.12, a new interface has been added to the nodeip-configuration
service, which allows users to create a hint file. The hint file contains a variable, NODEIP_HINT , that
overrides the default IP selection logic and selects a node IP address from the subnet of the IP address
provided in the NODEIP_HINT variable. Using the NODEIP_HINT variable allows users to specify which IP address is used,
ensuring that network traffic is distributed from the correct interface.
For more information, see Optional: Overriding the default node IP selection logic.
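For example, a hedged sketch of a hint file; the file path is an assumption, so see the linked documentation for the exact location and for deploying the file with a machine config:

# /etc/default/nodeip-configuration (path is an assumption)
# The service selects a node IP address from the subnet that contains this address.
NODEIP_HINT=192.0.2.1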
!"CoreDNS does not expand the query UDP buffer size if it was previously set to a smaller value.
!"CoreDNS now always prefixes each log line in Kubernetes client logs with the associated log level.
Deploy the SR-IOV Operator for hosted control planes (Technology Preview)
If you configured and deployed your hosting service cluster, you can now deploy the SR-IOV Operator for
a hosted cluster. For more information, see Deploying the SR-IOV Operator for hosted control planes.
Support for IPv6 virtual IP (VIP) addresses for the Ingress VIP and API VIP
services
With this update, in installer-provisioned infrastructure clusters, the ingressVIP and apiVIP
configuration settings in the install-config.yaml file are deprecated. Instead, use the ingressVIPs
and apiVIPs configuration settings. These settings support dual-stack networking for applications that
require IPv4 and IPv6 access to the cluster by using the Ingress VIP and API VIP services. The
ingressVIPs and apiVIPs configuration settings use a list format to specify an IPv4 address, an IPv6
address, or both IP address formats. The order of the list indicates the primary and secondary VIP address
for each service. The primary IP address must be from the IPv4 network when using dual stack networking.
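For example, a sketch of the relevant install-config.yaml fields for a dual-stack bare-metal installer-provisioned cluster; the platform key and addresses are illustrative, and other platforms use the same list format:

platform:
  baremetal:
    apiVIPs:
    - 192.0.2.10        # primary (IPv4) API VIP
    - 2001:db8::10      # secondary (IPv6) API VIP
    ingressVIPs:
    - 192.0.2.11
    - 2001:db8::11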
Support for switching the BlueField-2 network device from data processing unit (DPU) mode to network
interface controller (NIC) mode (Technology Preview)
With this update, you can switch the BlueField-2 network device from data processing unit (DPU) mode to
network interface controller (NIC) mode.
Storage
For more information, see GCP Filestore CSI Driver Operator.
Automatic CSI migration for AWS Elastic Block Storage is generally available
Starting with OpenShift Container Platform 4.8, automatic migration for in-tree volume plugins to their
equivalent Container Storage Interface (CSI) drivers became available as a Technology Preview feature.
Support for Amazon Web Services (AWS) Elastic Block Storage (EBS) was provided in this feature in
OpenShift Container Platform 4.8, and OpenShift Container Platform 4.12 now supports automatic
migration for AWS EBS as generally available. CSI migration for AWS EBS is now enabled by default and
requires no action by an administrator.
This feature automatically translates in-tree objects to their counterpart CSI representations and should
be completely transparent to users. Translated objects are not stored on disk, and user data is not
migrated.
While storage class referencing to the in-tree storage plugin will continue working, it is recommended that
you switch the default storage class to the CSI storage class.
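For example, a hedged sketch of switching the default storage class by using the standard default-class annotation; the storage class names gp2 and gp3-csi are assumptions and depend on the cluster:

# Remove the default annotation from the in-tree storage class (name is an assumption).
$ oc patch storageclass gp2 -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
# Mark the CSI storage class as the default (name is an assumption).
$ oc patch storageclass gp3-csi -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'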
Volume population is currently enabled, and supported as a Technology Preview feature. However,
OpenShift Container Platform does not ship with any volume populators.
!"VMware vSphere version 7.0.2 or later, up to but not including version 8. vSphere 8 is not supported.
!"vCenter 7.0.2 or later, up to but not including version 8. vCenter 8 is not supported.
If a third-party CSI driver is present in the cluster, OpenShift Container Platform does not overwrite it. The
presence of a third-party CSI driver prevents OpenShift Container Platform from upgrading to OpenShift
Container Platform 4.13 or later.
For more information, see VMware vSphere CSI Driver Operator requirements.
Operator lifecycle
A platform Operator is an OLM-based Operator that can be installed during or after an OpenShift
Container Platform cluster’s Day 0 operations and participates in the cluster’s lifecycle. As a cluster
administrator, you can use platform Operators to further customize your OpenShift Container Platform
installation to meet your requirements and use cases.
For more information about platform Operators, see Managing platform Operators. For more information
about RukPak and its resources, see Operator Framework packaging format.
In OpenShift Container Platform 4.12, you can control where an Operator pod is installed by adding affinity
constraints to the Operator’s Subscription object.
For more information, see Controlling where an Operator is installed.
For more information, see Security context constraint synchronization with pod security standards.
Operator development
For example:
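A hedged sketch of running bundle validation against the Kubernetes 1.25 API removals; the flags shown are assumptions, so see the Operator SDK CLI reference for the supported options:

# Validate an Operator bundle against the Kubernetes 1.25 API removals (flags are assumptions).
$ operator-sdk bundle validate ./bundle \
    --select-optional suite=operatorframework \
    --optional-values=k8s-version=1.25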
If your Operator requests permission to use any of the APIs removed from Kubernetes 1.25, the command
displays a warning message.
If any of the API versions removed from Kubernetes 1.25 are included in your Operator’s cluster service
version (CSV), the command displays an error message.
See Beta APIs removed from Kubernetes 1.25 and the Operator SDK CLI reference for more information.
Machine API
Currently, RHCOS image layering allows you to work with Customer Experience and Engagement (CEE) to
obtain and apply Hotfix packages on top of your RHCOS image, based on the Red Hat Hotfix policy. It is
planned for future releases that you can use RHCOS image layering to incorporate third-party software
packages such as Libreswan or numactl.
You can add or remove sysctls from the predefined list. When you add sysctls , they can be set across
all nodes. Updating the interface-specific safe sysctls list is a Technology Preview feature only.
For more information, see Updating the interface-specific safe sysctls list.
The Node Health Check Operator now also includes a web console plugin for managing Node Health
Checks. For more information, see Creating a node health check.
For installing or updating to the latest version of the Node Health Check Operator, use the stable
subscription channel. For more information, see Installing the Node Health Check Operator by using the
CLI.
Monitoring
The monitoring stack for this release includes the following new and modified features.
!"kube-state-metrics to 2.6.0
!"node-exporter to 1.4.0
!"prom-label-proxy to 0.5.0
!"Prometheus to 2.39.1
!"prometheus-adapter to 0.10.0
!"prometheus-operator to 0.60.1
!"Thanos to 0.28.1
! Red Hat does not guarantee backward compatibility for recording rules or alerting rules.
!" New
!"
Added the TelemeterClientFailures alert, which triggers when a cluster tries and fails to
submit Telemetry data at a certain rate over a period of time. The alert fires when the rate of
failed requests reaches 20% of the total rate of requests within a 15-minute window.
!" Changed
!"The KubeAggregatedAPIDown alert now waits 900 seconds rather than 300 seconds before
sending a notification.
!"The NodeRAIDDegraded and NodeRAIDDiskFailure alerts now include a device label filter to
match only the value returned by mmcblk.p.+|nvme.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+ .
If you are upgrading to OpenShift Container Platform 4.12 and have specified relative
paths for additional Alertmanager secret keys that are referenced as files, you must
change these relative paths to absolute paths in your Alertmanager configuration.
Otherwise, alert receivers that use the files will fail to deliver notifications.
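For example, a hedged sketch of a receiver that previously referenced a relative secret key path; the mount location under /etc/alertmanager/secrets/<secret_name>/ is an assumption:

receivers:
- name: slack-notifications
  slack_configs:
  - channel: '#alerts'
    # Previously: api_url_file: slack-secret/api-url
    api_url_file: /etc/alertmanager/secrets/slack-secret/api-url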
Disabling realtime using workload hints removes Receive Packet Steering from
the cluster
At the cluster level by default, a systemd service sets a Receive Packet Steering (RPS) mask for virtual
network interfaces. The RPS mask routes interrupt requests from virtual network interfaces according to
the list of reserved CPUs defined in the performance profile. At the container level, a CRI-O hook script
also sets an RPS mask for all virtual network devices.
With this update, if you set spec.workloadHints.realTime in the performance profile to False , the
system also disables both the systemd service and the CRI-O hook script which set the RPS mask. The
system disables these RPS functions because RPS is typically relevant to use cases requiring low-latency,
realtime workloads only.
To retain RPS functions even when you set spec.workloadHints.realTime to False , see the RPS
Settings section of the Red Hat Knowledgebase solution Performance addons operator advanced
configuration.
For more information about configuring workload hints, see Understanding workload hints.
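For example, a minimal sketch of a performance profile with the realTime workload hint disabled; the CPU ranges, node selector, and profile name are illustrative assumptions:

apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: example-performanceprofile   # illustrative name
spec:
  cpu:
    isolated: "2-31"                 # illustrative CPU ranges
    reserved: "0-1"
  workloadHints:
    realTime: false                  # also disables the RPS mask services described above
  nodeSelector:
    node-role.kubernetes.io/worker-cnf: ""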
Tuned profile
The tuned profile now defines the fs.aio-max-nr sysctl value by default, improving asynchronous
I/O performance for default node profiles.
For more information, see Adding worker nodes to single-node OpenShift clusters with GitOps ZTP.
For more information, see Using ZTP to provision clusters at the network far edge.
A standard host operating system uses systemd to constantly scan all mount namespaces: both the
standard Linux mounts and the numerous mounts that Kubernetes uses to operate. The current
implementation of Kubelet and CRI-O both use the top-level namespace for all container and Kubelet
mount points. Encapsulating these container-specific mount points in a private namespace reduces
systemd overhead and enhances CPU performance. Encapsulation can also improve security, by storing
Kubernetes-specific mount points in a location safe from inspection by unprivileged users.
For more information, see Optimizing CPU usage with mount namespace encapsulation.
Optionally, you can now override the default evaluation intervals for all policies in PolicyGenTemplate
CRs.
For more information, see Configuring policy compliance evaluation timeouts for PolicyGenTemplate CRs.
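A hedged sketch of overriding the evaluation intervals in a PolicyGenTemplate CR; the CR name, namespace, and binding rules are illustrative assumptions:

apiVersion: ran.openshift.io/v1
kind: PolicyGenTemplate
metadata:
  name: example-pgt                # illustrative name
  namespace: ztp-example           # illustrative namespace
spec:
  bindingRules:
    sites: example                 # illustrative binding rule
  evaluationInterval:              # applies to all policies generated from this CR
    compliant: 30m
    noncompliant: 45s
  sourceFiles: []                  # source CRs omitted from this sketch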
Insights Operator
Insights alerts
In OpenShift Container Platform 4.12, active Insights recommendations are now presented to the user as
alerts. You can view and configure these alerts with Alertmanager.
!" console_helm_uninstalls_total
!" console_helm_upgrades_total
clouds:
  openstack:
    auth:
      auth_url: https://127.0.0.1:13000
      application_credential_id: '5dc185489adc4b0f854532e1af81ffe0'
      application_credential_secret: 'PDCTKans2bPBbaEqBLiT_IajG8e5J_nJB4kvQHjaAy6ufhod0Zl0NkNoBzjn_bWSYzk587ieIGSlT11c4pVehA'
    auth_type: "v3applicationcredential"
    region_name: regionOne
To use application credentials with your cluster as a RHOSP administrator, create the credentials. Then,
use them in a clouds.yaml file when you install a cluster. Alternatively, you can create the clouds.yaml
file and rotate it into an existing cluster.
Hosted control planes (Technology Preview)
The HostedCluster and NodePool API resources are available in the beta version of the API and follow a
similar policy to OpenShift Container Platform and Kubernetes.
!"The oVirt CSI driver logging was revised with new error messages to improve the clarity and
readability of the logs.
!"The cluster API provider automatically updates oVirt and Red Hat Virtualization (RHV) credentials
when they are changed in OpenShift Container Platform.
Global restricted enforcement for pod security admission is currently planned for the next minor release of
OpenShift Container Platform. When this restricted enforcement is enabled, pods with pod security
violations will be rejected.
To prepare for this upcoming change, ensure that your workloads match the pod security admission profile
that applies to them. Workloads that are not configured according to the enforced security standards
defined globally or at the namespace level will be rejected. The restricted-v2 SCC admits workloads
according to the Restricted Kubernetes definition.
If you are receiving pod security violations, see the following resources:
!"See Identifying pod security violations for information about how to find which workloads are causing
pod security violations.
!"See Security context constraint synchronization with pod security standards to understand when pod
security admission label synchronization is performed. Pod security admission labels are not
synchronized in certain situations, such as the following situations:
!"The workload is running in a system-created namespace that is prefixed with openshift- .
!"The workload is running on a pod that was created directly without a pod controller.
!"If necessary, you can set a custom admission profile on the namespace or pod by setting the pod-
security.kubernetes.io/enforce label.
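For example, a namespace-level enforce label can be set with a command such as the following; the namespace name and profile value are illustrative:

$ oc label namespace my-namespace pod-security.kubernetes.io/enforce=privileged --overwrite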
In OpenShift Container Platform 4.12, namespaces do not have restricted pod security enforcement by
default and the default catalog source security mode is set to legacy .
If you do not want to run your SQLite-based catalog source pods under restricted pod security
enforcement, you do not need to update your catalog source in OpenShift Container Platform 4.12.
However, to ensure your catalog sources run in future OpenShift Container Platform releases, you must
update your catalog sources to run under restricted pod security enforcement.
As a catalog author, you can enable compatibility with restricted pod security enforcement by completing
either of the following actions:
!"Update your catalog image with a version of the opm CLI tool released with OpenShift Container
Platform 4.11 or later.
If you do not want to update your SQLite database catalog image or migrate your catalog to the file-
based catalog format, you can configure your catalog to run with elevated permissions.
For more information, see Catalog sources and pod security admission.
! For more information, see Beta APIs removed from Kubernetes 1.25 and Validating bundle
manifests for APIs removed from Kubernetes 1.25.
If you have Operator projects that were previously created or maintained with Operator SDK 1.22.0, update
your projects to keep compatibility with Operator SDK 1.25.4.
Deprecated functionality is still included in OpenShift Container Platform and continues to be supported;
however, it will be removed in a future release of this product and is not recommended for new
deployments. For the most recent list of major functionality deprecated and removed within OpenShift
Container Platform 4.12, refer to the table below. Additional details for more functionality that has been
deprecated and removed are listed after the table.
In the following tables, features are marked with the following statuses:
!"General Availability
!"Deprecated
!"Removed
Operator deprecated and removed features
Table 2. Operator deprecated and removed tracker
Feature | 4.10 | 4.11 | 4.12
Access to Prometheus and Grafana UIs in monitoring stack | Deprecated | Removed | Removed
CoreDNS wildcard queries for the cluster.local domain | General Availability | General Availability | Deprecated
ingressVIP and apiVIP settings in the install-config.yaml file for installer-provisioned infrastructure clusters | General Availability | General Availability | Deprecated
Deprecated features
While these hardware models remain fully supported in OpenShift Container Platform 4.12, Red Hat
recommends that you use later hardware models.
Resource | Removed API | Migrate to | Notable changes
!"
For more information about pod security admission in OpenShift Container Platform, see Understanding and
managing pod security admission.
Empty file and stdout support for the oc registry login command
The --registry-config and --to options for the oc registry login command now stop
accepting empty files. These options continue to work with files that do not exist. The ability to write
output to - (stdout) is also removed.
RHEL 7 support for the OpenShift CLI (oc) has been removed
Support for using Red Hat Enterprise Linux (RHEL) 7 with the OpenShift CLI ( oc ) has been removed. If
you use the OpenShift CLI ( oc ) with RHEL, you must use RHEL 8 or later.
!"OpenShift Container Platform 4.11 removes "OpenShift Jenkins Maven" and "NodeJS Agent" images
from its payload. Previously, OpenShift Container Platform 4.10 deprecated these images. Red Hat
no longer produces these images, and they are not available from the ocp-tools-4 repository at
registry.redhat.io .
However, upgrading to OpenShift Container Platform 4.11 does not remove "OpenShift Jenkins
Maven" and "NodeJS Agent" images from 4.10 and earlier releases. And Red Hat provides bug fixes
and support for these images through the end of the 4.10 release lifecycle, in accordance with the
OpenShift Container Platform lifecycle policy.
See the Deprecated API Migration Guide in the upstream Kubernetes documentation for the list of
planned Kubernetes API removals.
See Navigating Kubernetes API deprecations and removals for information about how to check your
cluster for Kubernetes APIs that are planned for removal.
Bug fixes
!"
Before this update, the Ironic provisioning service did not support Baseboard Management
Controllers (BMC) that use weak eTags combined with strict eTag validation. By design, if the BMC
provides a weak eTag, Ironic returns two eTags: the original eTag and the original eTag converted to
the strong format for compatibility with BMC that do not support weak eTags. Although Ironic can
send two eTags, BMC using strict eTag validation rejects such requests due to the presence of the
second eTag. As a result, on some older server hardware, bare-metal provisioning failed with the
following error: HTTP 412 Precondition Failed . In OpenShift Container Platform 4.12 and later,
this behavior changes and Ironic no longer attempts to send two eTags in cases where a weak eTag is
provided. Instead, if a Redfish request dependent on an eTag fails with an eTag validation error, Ironic
retries the request with known workarounds. This minimizes the risk of bare-metal provisioning
failures on machines with strict eTag validation. ( OCPBUGS-3479 )
!"Before this update, when a Redfish system featured a Settings URI, the Ironic provisioning service
always attempted to use this URI to make changes to boot-related BIOS settings. However, bare-
metal provisioning failed if the Baseboard Management Controller (BMC) featured a Settings URI but
did not support changing a particular BIOS setting by using this Settings URI. In OpenShift
Container Platform 4.12 and later, if a system features a Settings URI, Ironic verifies that it can
change a particular BIOS setting by using the Settings URI before proceeding. Otherwise, Ironic
implements the change by using the System URI. This additional logic ensures that Ironic can apply
boot-related BIOS setting changes and bare-metal provisioning can succeed. ( OCPBUGS-2052 )
Builds
!"
By default, Buildah prints steps to the log file, including the contents of environment variables, which
might include build input secrets. Although you can use the --quiet build argument to suppress
printing of those environment variables, this argument isn’t available if you use the source-to-image
(S2I) build strategy. The current release fixes this issue. To suppress printing of environment
variables, set the BUILDAH_QUIET environment variable in your build configuration:
sourceStrategy:
  ...
  env:
    - name: "BUILDAH_QUIET"
      value: "true"
( BZ#2099991 )
Cloud Compute
!"Previously, instances were not set to respect the GCP infrastructure default option for automated
restarts. As a result, instances could be created without using the infrastructure default for automatic
restarts. This sometimes meant that instances were terminated in GCP but their associated
machines were still listed in the Running state because they did not automatically restart. With this
release, the code for passing the automatic restart option has been improved to better detect and
pass on the default option selection from users. Instances now use the infrastructure default properly
and are automatically restarted when the user requests the default functionality.
( OCPBUGS-4504 )
!"
The v1beta1 version of the PodDisruptionBudget object is now deprecated in Kubernetes. With
this release, internal references to v1beta1 are replaced with v1 . This change is internal to the
cluster autoscaler and does not require user action beyond the advice in the Preparing to upgrade to
OpenShift Container Platform 4.12 Red Hat Knowledgebase Article. ( OCPBUGS-1484 )
!"Previously, the GCP machine controller reconciled the state of machines every 10 hours. Other
providers set this value to 10 minutes so that changes that happen outside of the Machine API
system are detected within a short period. The longer reconciliation period for GCP could cause
unexpected issues such as missing certificate signing requests (CSR) approvals due to an external IP
address being added but not detected for an extended period. With this release, the GCP machine
controller is updated to reconcile every 10 minutes to be consistent with other platforms and so that
external changes are picked up sooner. ( OCPBUGS-4499 )
!"Previously, due to a deployment misconfiguration for the Cluster Machine Approver Operator,
enabling the TechPreviewNoUpgrade feature set caused errors and sporadic Operator degradation.
Because clusters with the TechPreviewNoUpgrade feature set enabled use two instances of the
Cluster Machine Approver Operator and both deployments used the same set of ports, there was a
conflict that led to errors for single-node topology. With this release, the Cluster Machine Approver
Operator deployment is updated to use a different set of ports for different deployments.
( OCPBUGS-2621 )
!"
Previously, the scale from zero functionality in Azure relied on a statically compiled list of instance
types mapping the name of the instance type to the number of CPUs and the amount of memory
allocated to the instance type. This list grew stale over time. With this release, information about
instance type sizes is dynamically gathered from the Azure API directly to prevent the list from
becoming stale. ( OCPBUGS-2558 )
!"Previously, Machine API termination handler pods did not start on spot instances. As a result, pods
that were running on tainted spot instances did not receive a termination signal if the instance was
terminated. This could result in loss of data in workload applications. With this release, the Machine
API termination handler deployment is modified to tolerate the taints and pods running on spot
instances with taints now receive termination signals. ( OCPBUGS-1274 )
!"Previously, error messages for Azure clusters did not explain that it is not possible to create new
machines with public IP addresses for a disconnected install that uses only the internal publish
strategy. With this release, the error message is updated for improved clarity. ( OCPBUGS-519 )
!"Previously, the Cloud Controller Manager Operator did not check the cloud-config configuration
file for AWS clusters. As a result, it was not possible to pass additional settings to the AWS cloud
controller manager component by using the configuration file. With this release, the Cloud Controller
Manager Operator checks the infrastructure resource and parses references to the cloud-config
configuration file so that users can configure additional settings. ( BZ#2104373 )
!"
Previously, when Azure added new instance types and enabled accelerated networking support on
instance types that previously did not have it, the list of Azure instances in the machine controller
became outdated. As a result, the machine controller could not create machines with instance types
that did not previously support accelerated networking, even if they support this feature on Azure.
With this release, the required instance type information is retrieved from Azure API before the
machine is created to keep it up to date so the machine controller is able to create machines with
new and updated instance types. This fix also applies to any instance types that are added in the
future. ( BZ#2108647 )
!"Previously, the cluster autoscaler did not respect the AWS, IBM Cloud, and Alibaba Cloud topology
labels for the CSI drivers when using the Cluster API provider. As a result, nodes with the topology
label were not processed properly by the autoscaler when attempting to balance nodes during a
scale-out event. With this release, the autoscaler’s custom processors are updated so that it respects
this label. The autoscaler can now balance similar node groups that are labeled by the AWS, IBM
Cloud, or Alibaba CSI labels. ( BZ#2001027 )
!"Previously, Power VS cloud providers were not capable of fetching the machine IP address from a
DHCP server. Changing the IP address did not update the node, which caused some inconsistencies,
such as pending certificate signing requests. With this release, the Power VS cloud provider is
updated to fetch the machine IP address from the DHCP server so that the IP addresses for the
nodes are consistent with the machine IP address. ( BZ#2111474 )
!"
Previously, machines created in early versions of OpenShift Container Platform with invalid
configurations could not be deleted. With this release, the webhooks that prevent the creation of
machines with invalid configurations no longer prevent the deletion of existing invalid machines.
Users can now successfully remove these machines from their cluster by manually removing the
finalizers on these machines. ( BZ#2101736 )
!"Previously, short DHCP lease times, caused by NetworkManager not being run as a daemon or in
continuous mode, caused machines to become stuck during initial provisioning and never become
nodes in the cluster. With this release, extra checks are added so that if a machine becomes stuck in
this state it is deleted and recreated automatically. Machines that are affected by this network
condition can become nodes after a reboot from the Machine API controller. ( BZ#2115090 )
!"Previously, when creating a new Machine resource using a machine profile that does not exist in IBM
Cloud, the machines became stuck in the Provisioning phase. With this release, validation is
added to the IBM Cloud Machine API provider to ensure that a machine profile exists, and machines
with an invalid machine profile are rejected by the Machine API. ( BZ#2062579 )
!"Previously, the Machine API provider for AWS did not verify that the security group defined in the
machine specification exists. Instead of returning an error in this case, it used a default security
group, which should not be used for OpenShift Container Platform machines, and successfully
created a machine without informing the user that the default group was used. With this release, the
Machine API returns an error when users set either incorrect or empty security group names in the
machine specification. ( BZ#2060068 )
!"
Previously, the Machine API provider for Azure did not treat user-provided values for instance types
as case sensitive. This led to false-positive errors when instance types were correct but did not
match the case. With this release, instance types are converted to lowercase characters so that
users get correct results without false-positive errors for mismatched case. ( BZ#2085390 )
!"Previously, there was no check for nil values in the annotations of a machine object before
attempting to access the object. This situation was rare, but caused the machine controller to panic
when reconciling the machine. With this release, nil values are checked and the machine controller is
able to reconcile machines without annotations. ( BZ#2106733 )
!"Previously, the cluster autoscaler metrics for cluster CPU and memory usage would never reach, or
exceed, the limits set by the ClusterAutoscaler resource. As a result, no alerts were fired when the
cluster autoscaler could not scale due to resource limitations. With this release, a new metric called
cluster_autoscaler_skipped_scale_events_count is added to the cluster autoscaler to more
accurately detect when resource limits are reached or exceeded. Alerts will now fire when the cluster
autoscaler is unable to scale the cluster up because it has reached the cluster resource limits.
( BZ#1997396 )
!"Previously, when the Machine API provider failed to fetch the machine IP address, it would not set the
internal DNS name and the machine certificate signing requests were not automatically approved.
With this release, the Power VS machine provider is updated to set the server name as the internal
DNS name even when it fails to fetch the IP address. ( BZ#2111467 )
!"
Previously, the Machine API vSphere machine controller set the PowerOn flag when cloning a VM.
This created a PowerOn task that the machine controller was not aware of. If that PowerOn task
failed, machines were stuck in the Provisioned phase but never powered on. With this release, the
cloning sequence is altered to avoid the issue. Additionally, the machine controller now retries
powering on the VM in case of failure and reports failures properly. ( BZ#2087981 ,
OCPBUGS-954 )
!"With this release, AWS security groups are tagged immediately instead of after creation. This means
that fewer requests are sent to AWS and the required user privileges are lowered. ( BZ#2098054 ,
OCPBUGS-3094 )
!"Previously, a bug in the RHOSP legacy cloud provider resulted in a crash if certain RHOSP operations
were attempted after authentication had failed. For example, shutting down a server causes the
Kubernetes controller manager to fetch server information from RHOSP, which triggered this bug. As
a result, if initial cloud authentication failed or was configured incorrectly, shutting down a server
caused the Kubernetes controller manager to crash. With this release, the RHOSP legacy cloud
provider is updated to not attempt any RHOSP API calls if it has not previously authenticated
successfully. Now, shutting down a server with invalid cloud credentials no longer causes Kubernetes
controller manager to crash. ( BZ#2102383 )
Developer Console
!"
Previously, the openshift-config namespace was hard coded for the HelmChartRepository
custom resource, which was the same namespace for the ProjectHelmChartRepository custom
resource. This prevented users from adding private ProjectHelmChartRepository custom
resources in their desired namespace. Consequently, users were unable to access secrets and
configmaps in the openshift-config namespace. This update fixes the
ProjectHelmChartRepository custom resource definition with a namespace field that can read
the secret and configmaps from a namespace of choice by a user with the correct permissions.
Additionally, the user can add secrets and configmaps to the accessible namespace, and they can
add private Helm chart repositories in the namespace used to create the resources. ( BZ#2071792 )
Image Registry
!"Previously, the image trigger controller did not have permissions to change objects. Consequently,
image trigger annotations did not work on some resources. This update creates a cluster role binding
that provides the controller the required permissions to update objects according to annotations.
( BZ#2055620 )
!"Previously, the Image Registry Operator did not have a progressing condition for the node-ca
daemon set and used generation from an incorrect object. Consequently, the node-ca daemon
set could be marked as degraded while the Operator was still running. This update adds the
progressing condition, which indicates that the installation is not complete. As a result, the Image
Registry Operator successfully installs the node-ca daemon set, and the installer waits until it is fully
deployed. ( BZ#2093440 )
Installer
!"Previously, the number of supported user-defined tags was 8, and reserved OpenShift Container
Platform tags were 2 for AWS resources. With this release, the number of supported user-defined
tags is now 25 and reserved OpenShift Container Platform tags are 25 for AWS resources. You can
now add up to 25 user tags during installation. ( CFE#592 )
!"Previously, installing a cluster on Amazon Web Services started and then failed when the IAM
administrative user was not assigned the s3:GetBucketPolicy permission. This update adds this
policy to the checklist that the installation program uses to ensure that all of the required permissions are
assigned. As a result, the installation program now stops the installation with a warning that the IAM
administrative user is missing the s3:GetBucketPolicy permission. ( BZ#2109388 )
!"Previously, installing a cluster on Microsoft Azure failed when the Azure DCasv5-series or DCadsv5-
series of confidential VMs were specified as control plane nodes. With this update, the installation
program now stops the installation with an error, which states that confidential VMs are not yet
supported. ( BZ#2055247 )
!"Previously, gathering bootstrap logs was not possible until the control plane machines were running.
With this update, gathering bootstrap logs now only requires that the bootstrap machine be available.
( BZ#2105341 )
!"Previously, if a cluster failed to install on Google Cloud Platform because the service account had
insufficient permissions, the resulting error message did not mention this as the cause of the failure.
This update improves the error message, which now instructs users to check the permissions that are
assigned to the service account. ( BZ#2103236 )
!"
Previously, when an installation on Google Cloud provider (GCP) failed because an invalid GCP
region was specified, the resulting error message did not mention this as the cause of the failure. This
update improves the error message, which now states the region is not valid. ( BZ#2102324 )
!"Previously, cluster installations using Hive could fail if Hive used an older version of the install-
config.yaml file. This update allows the installation program to accept older versions of the
install-config.yaml file provided by Hive. ( BZ#2098299 )
!"Previously, the installation program would incorrectly allow the apiVIP and ingressVIP parameters
to use the same IPv6 address if they represented the address differently, such as listing the address
in an abbreviated format. In this update, the installer correctly validates these two parameters
regardless of their formatting, requiring separate IP addresses for each parameter. ( BZ#2103144 )
!"Previously, uninstalling a cluster using the installation program failed to delete all resources in clusters
installed on GCP if the cluster name was more than 22 characters long. In this update, uninstalling a
cluster using the installation program correctly locates and deletes all GCP cluster resources in cases
of long cluster names. ( BZ#2076646 )
!"Previously, when installing a cluster on Red Hat OpenStack Platform (RHOSP) with multiple networks
defined in the machineNetwork parameter, the installation program only created security group
rules for the first network. With this update, the installation program creates security group rules for
all networks defined in the machineNetwork so that users no longer need to manually edit security
group rules after installation. ( BZ#2095323 )
!"
Previously, users could manually set the API and Ingress virtual IP addresses to values that conflicted
with the allocation pool of the DHCP server when installing a cluster on OpenStack. This could cause
the DHCP server to assign one of the VIP addresses to a new machine, which would fail to start. In
this update, the installation program validates the user-provided VIP addresses to ensure that they
do not conflict with any DHCP pools. ( BZ#1944365 )
!"Previously, when installing a cluster on vSphere using a datacenter that is embedded inside a folder,
the installation program could not locate the datacenter object, causing the installation to fail. In this
update, the installation program can traverse the directory that contains the datacenter object,
allowing the installation to succeed. ( BZ#2097691 )
!"Previously, when installing a cluster on Azure using arm64 architecture with installer-provisioned
infrastructure, the image definition resource for hyperVGeneration V1 incorrectly had an
architecture value of x64 . With this update, the image definition resource for hyperVGeneration V1
has the correct architecture value of Arm64 . ( OCPBUGS-3639 )
!"Previously, when installing a cluster on VMware vSphere, the installation could fail if the user
specified a user-defined folder in the failureDomain section of the install-config.yaml file.
With this update, the installation program correctly validates user-defined folders in the
failureDomain section of the install-config.yaml file. ( OCPBUGS-3343 )
!"Previously, when destroying a partially deployed cluster after an installation failed on VMware
vSphere, some virtual machine folders were not destroyed. This error could occur in clusters
configured with multiple vSphere datacenters or multiple vSphere clusters. With this update, all
installer-provisioned infrastructure is correctly deleted when destroying a partially deployed cluster
after an installation failure. ( OCPBUGS-1489 )
!"Previously, when installing a cluster on VMware vSphere, the installation failed if the user specified
the platform.vsphere.vcenters parameter but did not specify the
platform.vsphere.failureDomains.topology.networks parameter in the install-
config.yaml file. With this update, the installation program alerts the user that the
platform.vsphere.failureDomains.topology.networks field is required when specifying
platform.vsphere.vcenters . ( OCPBUGS-1698 )
!"Previously, when installing a cluster on VMware vSphere, the installation failed if the user defined the
platform.vsphere.vcenters and platform.vsphere.failureDomains parameters but did not
define platform.vsphere.defaultMachinePlatform.zones , or
compute.platform.vsphere.zones and controlPlane.platform.vsphere.zones . With this
update, the installation program validates that the user has defined the zones parameter in multi-
region or multi-zone deployments prior to installation. ( OCPBUGS-1490 )
!"For the OpenShift Container Platform 4.12 release, the descheduler can now publish events to an API
group because the release adds additional role-based access controls (RBAC) rules to the
descheduler’s profile. ( OCPBUGS-2330 )
!"Previously, the product name for Azure Red Hat OpenShift was incorrect in Customer Case
Management (CCM). As a result, the console had to use the same incorrect product name to
correctly populate the fields in CCM. Once the product name in CCM was updated, the console
needed to be updated as well. With this update, the console uses the same product name as CCM, so
the correct Azure product name is populated when following the link from the console.
( OCPBUGS-869 )
!"Previously, when a plugin page resulted in an error, the error did not reset when navigating away from
the error page, and the error persisted after navigating to a page that was not the cause of the error.
With this update, the error state is reset to its default when a user navigates to a new page, and the
error no longer persists after navigating to a new page. ( BZ#2117738 , OCPBUGS-523 )
!"Previously, the View it here link in the Operator details pane for installed Operators was incorrectly
built when All Namespaces was selected. As a result, the link attempted to navigate to the
Operator details page for a cluster service version (CSV) in All Projects , which is an invalid route.
With this update, the View it here link now builds correctly by using the namespace where the CSV is
installed, and the link works as expected. ( OCPBUGS-184 )
!"
Previously, line numbers with more than five digits resulted in a cosmetic issue where the line number
overlaid the vertical divider between the line number and the line contents making it harder to read.
With this update, the amount of space available for line numbers was increased to account for longer
line numbers, and the line number no longer overlays the vertical divider. ( OCPBUGS-183 )
!"Previously, in the administrator perspective of the web console, the link to Learn more about the
OpenShift local update services on the Default update server pop-up window in the Cluster
Settings page produced a 404 error. With this update, the link works as expected. ( BZ#2098234 )
!"Previously, the MatchExpression component did not account for array-type values. As a result, only
single values could be entered through forms using this component. With this update, the
MatchExpression component accepts comma-separated values as an array. ( BZ#207690 )
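As an illustration of the array-type values that the component now accepts, the following sketch shows the kind of Kubernetes matchExpressions selector that such a form entry maps to; the label key and values are hypothetical examples, not taken from the fix:
# Hypothetical selector: entering "us-east-1a,us-east-1b,us-east-1c" in the form
# produces an array-valued match expression similar to this one.
matchExpressions:
  - key: topology.kubernetes.io/zone
    operator: In
    values:
      - us-east-1a
      - us-east-1b
      - us-east-1c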
!"Previously, there were redundant checks for the model resulting in tab reloading which occasionally
resulted in a flickering of the tab contents where they rerendered. With this update, the redundant
model check was removed, and the model is only checked once. As a result, the tab contents do not
flicker and no longer rerender. ( BZ#2037329 )
!"Previously, when selecting the edit label from the action list on the OpenShift Dedicated node
page, no response was elicited and a web hook error was returned. This issue has been fixed so that
the error message is only returned when editing fails. ( BZ#2102098 )
!"Previously, if issues were pending, clicking on the Insights link would crash the page. As a
workaround, you can wait for the variable to become initialized before clicking the Insights link.
As a result, the Insights page will open as expected. ( BZ#2052662 )
!"
Previously, when the MachineConfigPool resource was paused, the option to unpause said
Resume rollouts . The wording has been updated so that it now says Resume updates .
( BZ#2094240 )
!"Previously, the wrong calculating method was used when counting master and worker nodes. With
this update, the correct worker nodes are calculated when nodes have both the master and worker
role. ( BZ#1951901 )
!"Previously, incomplete YAML was not synced was occasionally displayed to users. With this update,
synced YAML always displays. ( BZ#2084453 )
!"Previously, when installing an Operator that required a custom resource (CR) to be created for use,
the Create resource button could fail to install the CR because it was pointing to the incorrect
namespace. With this update, the Create resource button works as expected. ( BZ#2094502 )
!"Previously, the Cluster update modal was not displaying errors properly. As a result, the Cluster
update modal did not display or explain errors when they occurred. With this update, the Cluster
update modal correctly display errors. ( BZ#2096350 )
Monitoring
!"Before this update, cluster administrators could not distinguish between a pod being not ready
because of a scheduling issue and a pod being not ready because it could not be started by the
kubelet. In both cases, the KubePodNotReady alert would fire. With this update, the
KubePodNotScheduled alert now fires when a pod is not ready because of a scheduling issue, and
the KubePodNotReady alert fires when a pod is not ready because it could not be started by the
kubelet. ( OCPBUGS-4431 )
!"Before this update, node_exporter would report metrics about virtual network interfaces such as
tun interfaces, br interfaces, and ovn-k8s-mp interfaces. With this update, metrics for these
virtual interfaces are no longer collected, which decreases monitoring resource consumption.
( OCPBUGS-1321 )
!"Before this update, Alertmanager pod startup might time out because of slow DNS resolution, and
the Alertmanager pods would not start. With this release, the timeout value has been increased to
seven minutes, which prevents pod startup from timing out. ( BZ#2083226 )
!"Before this update, if Prometheus Operator failed to run or schedule Prometheus pods, the system
provided no underlying reason for the failure. With this update, if Prometheus pods are not run or
scheduled, the Cluster Monitoring Operator updates the clusterOperator monitoring status with a
reason for the failure, which can be used to troubleshoot the underlying issue. ( BZ#2043518 )
!"Before this update, if you created an alert silence from the Developer perspective in the OpenShift
Container Platform web console, external labels were included that did not match the alert.
Therefore, the alert would not be silenced. With this update, external labels are now excluded when
you create a silence in the Developer perspective so that newly created silences function as
expected. ( BZ#2084504 )
!"Previously, if you enabled an instance of Alertmanager dedicated to user-defined projects, a
misconfiguration could occur in certain circumstances, and you would not be informed that the user-
defined project Alertmanager config map settings did not load for either the main instance of
Alertmanager or the instance dedicated to user-defined projects. With this release, if this
misconfiguration occurs, the Cluster Monitoring Operator now displays a message that informs you
of the issue and provides resolution steps. ( BZ#2099939 )
!"Before this update, if the Cluster Monitoring Operator (CMO) failed to update Prometheus, the
CMO did not verify whether a previous deployment was running and would report that cluster
monitoring was unavailable even if one of the Prometheus pods was still running. With this update,
the CMO now checks for running Prometheus pods in this situation and reports that cluster
monitoring is unavailable only if no Prometheus pods are running. ( BZ#2039411 )
!"Before this update, if you configured OpsGenie as an alert receiver, a warning would appear in the log
that api_key and api_key_file are mutually exclusive and that api_key takes precedence. This
warning appeared even if you had not defined api_key_file . With this update, this warning only
appears in the log if you have defined both api_key and api_key_file . ( BZ#2093892 )
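For reference, a minimal Alertmanager receiver sketch for OpsGenie is shown below; the receiver name and key value are placeholders, and only one of api_key or api_key_file should be defined to avoid the warning:
# Sketch only: the key value is a placeholder. Define either api_key or
# api_key_file, not both.
receivers:
  - name: opsgenie
    opsgenie_configs:
      - api_key: <your-opsgenie-api-key>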
!"Before this update the Telemeter Client (TC) only loaded new pull secrets when it was manually
restarted. Therefore, if a pull secret had been changed or updated and the TC had not been
restarted, the TC would fail to authenticate with the server. This update addresses the issue so that
when the secret is rotated, the deployment is automatically restarted and uses the updated token to
authenticate. ( BZ#2114721 )
Networking
!"Previously, routers that were in the terminating state delayed the oc cp command which would
delay the oc adm must-gather command until the pod was terminated. With this update, a timeout
for each issued oc cp command is set to prevent delaying the must-gather command from
running. As a result, terminating pods no longer delay must-gather commands. ( BZ#2103283 )
!"Previously, an Ingress Controller could not be configured with both the Private endpoint publishing
strategy type and PROXY protocol. With this update, users can now configure an Ingress Controller
with both the Private endpoint publishing strategy type and PROXY protocol. ( BZ#2104481 )
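A minimal sketch of such a configuration is shown below, assuming the private.protocol field described by this fix; the controller name and domain are placeholders:
# Sketch only: the name and domain are placeholders.
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: example-private
  namespace: openshift-ingress-operator
spec:
  domain: apps.internal.example.com
  endpointPublishingStrategy:
    type: Private
    private:
      protocol: PROXY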
!"Previously, the routeSelector parameter cleared the route status of the Ingress Controller prior to
the router deployment. Because of this, the route status repopulated incorrectly. To avoid using stale
data, route status detection has been updated to no longer rely on the Kubernetes object cache.
Additionally, this update includes a fix to check the generation ID on route deployment to determine
the route status. As a result, the route status is consistently cleared with a routeSelector update.
( BZ#2101878 )
!"Previously, a cluster that was upgraded from a version of OpenShift Container Platform earlier than
4.8 could have orphaned Route objects. This was caused by earlier versions of OpenShift Container
Platform translating Ingress objects into Route objects irrespective of a given Ingress object’s
indicated IngressClass . With this update, an alert is sent to the cluster administrator about any
orphaned Route objects still present in the cluster after Ingress-to-Route translation. This update
also adds another alert that notifies the cluster administrator about any Ingress objects that do not
specify an IngressClass . ( BZ#1962502 )
!"
Previously, if a config map that the router deployment depends on was not created, the router
deployment did not progress. With this update, the ingress cluster Operator reports
progressing=true if the default Ingress Controller deployment is progressing. This enables users to
debug issues with the Ingress Controller by using the oc get co command. ( BZ#2066560 )
!"Previously, when an incorrectly created network policy was added to the OVN-Kubernetes cache, it
would cause the OVN-Kubernetes leader to enter crashloopbackoff status. With this update,
OVN-Kubernetes leader does not enter crashloopbackoff status by skipping deleting nil policies.
( BZ#2091238 )
!"Previously, recreating an EgressIP pod with the same namespace or name within 60 seconds of
deleting an older one with the same namespace or name causes the wrong SNAT to be configured.
As a result, packets could go out with nodeIP instead of EgressIP SNAT. With this update, traffic
leaves the pod with EgressIP instead of nodeIP. ( BZ#2097243 ).
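For context, a minimal sketch of an OVN-Kubernetes EgressIP object of the kind affected by this fix follows; the IP address and namespace label are placeholders:
# Sketch only: the egress IP and selector values are placeholders.
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: egressip-example
spec:
  egressIPs:
    - 192.0.2.10
  namespaceSelector:
    matchLabels:
      env: example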
!"Previously, older Access Control Lists (ACL)s with arp produced unexpectedly found multiple
equivalent ACLs (arp v/s arp||nd) errors due to a change in the ACL from arp to arp II
nd . This prevented network policies from being created properly. With this update, older ACLs with
just the arp match have been removed so that only ACLs with the new arp II nd match exist so
that network policies can be created correctly and no errors will be observed on ovnkube-master .
NOTE: This effects customers upgrading into 4.8.14, 4.9.32, 4.10.13 or higher from older versions.
( BZ#2095852 ).
!"With this update, CoreDNS has been updated to version 1.10.0, which is based on Kubernetes 1.25.
This keeps both the CoreDNS version and OpenShift Container Platform 4.12, which is also based on
Kubernetes 1.25, in alignment with one another. ( OCPBUGS-1731 )
!"With this update, the OpenShift Container Platform router now uses k8s.io/client-go version
1.25.2, which supports Kubernetes 1.25. This keeps both the openshift-router and OpenShift
Container Platform 4.12, which is also based on Kubernetes 1.25, in alignment with one another.
( OCPBUGS-1730 )
!"With this update, the Ingress Operator now uses k8s.io/client-go version 1.25.2, which supports
Kubernetes 1.25. This keeps both the Ingress Operator and OpenShift Container Platform 4.12, which
is also based on Kubernetes 1.25, in alignment with one another. ( OCPBUGS-1554 )
!"Previously, the DNS Operator did not reconcile the openshift-dns namespace. Because OpenShift
Container Platform 4.12 requires the openshift-dns namespace to have pod-security labels, this
caused the namespace to be missing those labels upon cluster update. Without the pod-security
labels, the pods failed to start. With this update, the DNS Operator now reconciles the openshift-
dns namespace, and the pod-security labels are now present. As a result, pods start as expected.
( OCPBUGS-1549 )
!"
Previously, the Cluster DNS Operator used Go Kubernetes libraries that were based on Kubernetes
1.24, while OpenShift Container Platform 4.12 is based on Kubernetes 1.25. With this update, the Go
Kubernetes API is v1.25.2, which aligns the Cluster DNS Operator with OpenShift Container Platform
4.12 that uses Kubernetes 1.25 APIs. ( OCPBUGS-1558 )
!"Previously, setting the disableNetworkDiagnostics configuration to true did not persist when
the network-operator pod was re-created. With this update, the disableNetworkDiagnostics
configuration property of network`operator.openshift.io/cluster` no longer resets to its default value
after network operator restart. ( OCPBUGS-392 )
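A minimal sketch of the setting that now persists is shown below; it assumes the standard cluster network operator configuration object:
# Sketch only: this setting now survives network-operator pod restarts.
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  disableNetworkDiagnostics: true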
!"Previously, ovn-kubernetes did not configure the correct MAC address of bonded interfaces in
br-ex bridge. As a result, a node that uses bonding for the primary Kubernetes interface fails to join
the cluster. With this update, ovn-kubernetes configures the correct MAC address of bonded
interfaces in br-ex bridge, and nodes that use bonding for the primary Kubernetes interface
successfully join the cluster. ( BZ2096413 )
!"Previously, when the Ingress Operator was configured to enable the use of mTLS, the Operator
would not check if CRLs needed updating until some other event caused it to reconcile. As a result,
CRLs used for mTLS could become out of date. With this update, the Ingress Operator now
automatically reconciles when any CRL expires, and CRLs will be updated at the time specified by
their nextUpdate field. ( BZ#2117524 )
Node
!"
Previously, a symlinks error message was printed out as raw data instead of formatted as an error,
making it difficult to understand. This fix formats the error message properly, so that it is easily
understood. ( BZ#1977660 )
!"Previously, kubelet hard eviction thresholds were different from Kubernetes defaults when a
performance profile was applied to a node. With this release, the defaults have been updated to
match the expected Kubernetes defaults. ( OCPBUGS-4362 ).
!"Previously, on macOS arm64 architecture, the oc binary needed to be signed manually. As a result,
the oc binary did not work as expected. This update implements a self-signing binary for oc
mimicking. As a result, the oc binary on macOS arm64 architectures works properly. ( BZ#2059125 )
!"Previously, must-gather was trying to collect resources that were not present on the server.
Consequently, must-gather would print error messages. Now, before collecting resources, must-
gather checks whether the resource exists. As a result, must-gather no longer prints an error when
it fails to collect non-existing resources on the server. ( BZ#2095708 )
!"
The OpenShift Container Platform 4.12 release updates the oc-mirror library, so that the library
supports multi-arch platform images. This means that you can choose from a wider selection of
architectures, such as arm64 , when mirroring a platform release payload. ( OCPBUGS-617 )
!"Previously, Operator Lifecycle Manager (OLM) would attempt to update namespaces to apply a
label, even if the label was present on the namespace. Consequently, the update requests increased
the workload in API and etcd services. With this update, OLM compares existing labels against the
expected labels on a namespace before issuing an update. As a result, OLM no longer attempts to
make unnecessary update requests on namespaces. ( BZ#2105045 )
!"Previously, Operator Lifecycle Manager (OLM) would prevent minor cluster upgrades that should not
be blocked based on a miscalculation of the ClusterVersion custom resource’s
spec.DesiredVersion field. With this update, OLM no longer prevents cluster upgrades when the
upgrade should be supported. ( BZ#2097557 )
!"
Previously, the reconciler would update a resource’s annotation without making a copy of the
resource. This caused an error that would terminate the reconciler process. With this update, the
reconciler no longer stops due to the error. ( BZ#2105045 )
!"The package-server-manifest (PSM) is a controller that ensures that the correct package-
server Cluster Service Version (CSV) is installed on a cluster. Previously, changes to the package-
server CSV were not being reverted because of a logical error in the reconcile function in which an
on-cluster object could influence the expected object. Users could modify the package-server
CSV and the changes would not be reverted. Additionally, cluster upgrades would not update the
YAML for the package-server CSV. With this update, the expected version of the CSV is now
always built from scratch, which removes the ability for an on-cluster object to influence the
expected values. As a result, the PSM now reverts any attempts to modify the package-server
CSV, and cluster upgrades now deploy the expected package-server CSV. ( OCPBUGS-858 )
!"Previously, OLM would upgrade an Operator according to the Operator’s CRD status. A CRD lists
component references in an order defined by the group/version/kind (GVK) identifier. Operators that
share the same components might cause the GVK to change the component listings for an Operator,
and this can cause the OLM to require more system resources to continuously update the status of a
CRD. With this update, the Operator Lifecycle Manager (OLM) now upgrades an Operator according
to the Operator’s component references. A change to the custom resource definition (CRD) status
of an Operator does not impact the OLM Operator upgrade process. ( OCPBUGS-3795 )
Operator SDK
!"
With this update, you can now set the security context for the registry pod by including the
securityContext configuration field in the pod specification. This will apply the security context for
all containers in the pod. The securityContext field also defines the pod’s privileges.
( BZ#2091864 )
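A minimal sketch of a pod-level securityContext of the kind described above is shown below; the values are illustrative, not defaults:
# Sketch only: illustrative values, applied to all containers in the pod.
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1001
    seccompProfile:
      type: RuntimeDefault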
!"Previously, underlying dependencies of the File Integrity Operator changed how alerts and
notifications were handled, and the Operator didn’t send metrics as a result. With this release the
Operator ensures that the metrics endpoint is correct and reachable on startup. ( BZ#2115821 )
!"Previously, alerts issued by the File Integrity Operator did not set a namespace. This made it difficult
to understand where the alert was coming from, or what component was responsible for issuing it.
With this release, the Operator includes the namespace it was installed into in the alert, making it
easier to narrow down what component needs attention. ( BZ#2101393 )
!"Previously, the File Integrity Operator did not properly handle modifying alerts during an upgrade. As
a result, alerts did not include the namespace in which the Operator was installed. With this release,
the Operator includes the namespace it was installed into in the alert, making it easier to narrow
down what component needs attention. ( BZ#2112394 )
!"Previously, service account ownership for the File Integrity Operator regressed due to underlying
OLM updates, and updates from 0.1.24 to 0.1.29 were broken. With this update, the Operator defaults
to upgrading to 0.1.30. ( BZ#2109153 )
!"Previously, the File Integrity Operator daemon used the ClusterRoles parameter instead of the
Roles parameter for a recent permission change. As a result, OLM could not update the Operator.
With this release, the Operator daemon reverts to using the Roles parameter and updates from
older versions to version 0.1.29 are successful. ( BZ#2108475 )
Compliance Operator
!"Previously, the Compliance Operator used an old version of the Operator SDK, which is a
dependency for building Operators. This caused alerts about deprecated Kubernetes functionality
used by the Operator SDK. With this release, the Compliance Operator is updated to version 0.1.55,
which includes an updated version of the Operator SDK. ( BZ#2098581 )
!"Previously, the Compliance Operator hard coded notifications to the default namespace. As a result,
notifications from the Operator would not appear if the Operator was installed in a different
namespace. This issue is fixed in this release. ( BZ#2060726 )
!"
Previously, the Compliance Operator failed to fetch API resources when parsing machine
configurations without Ignition specifications. This caused the api-check-pods check to crash loop.
With this release, the Compliance Operator is updated to gracefully handle machine configuration
pools without Ignition specifications. ( BZ#2117268 )
!"Previously, the Compliance Operator held machine configurations in a stuck state because it could
not determine the relationship between machine configurations and kubelet configurations. This was
due to incorrect assumptions about machine configuration names. With this release, the Compliance
Operator is able to determine if a kubelet configuration is a subset of a machine configuration.
( BZ#2102511 )
!"
Previously, the podman exec command did not work well with nested containers. Users encountered
this issue when accessing a node using the oc debug command and then running a container with
the toolbox command. Because of this, users were unable to reuse toolboxes on RHCOS. This fix
updates the toolbox library code to account for this behavior, so users can now reuse toolboxes on
RHCOS. ( BZ#1915537 )
!"With this update, running the toolbox command now checks for updates to the default image
before launching the container. This improves security and provides users with the latest bug fixes.
( BZ#2049591 )
!"Previously, updating to Podman 4.0 prevented users from running the toolbox command on
RHCOS. This fix updates the toolbox library code to account for the new Podman behavior, so users
can now run toolbox on RHCOS as expected. ( BZ#2093040 )
!"Previously, custom SELinux policy modules were not properly supported by rpm-ostree , so they
were not updated along with the rest of the system upon update. This would surface as failures in
unrelated components. Pending SELinux userspace improvements landing in a future OpenShift
Container Platform release, this update provides a workaround to RHCOS that will rebuild and reload
the SELinux policy during boot as needed. ( OCPBUGS-595 )
!"
Previously, restarts of the tuned service caused improper reset of the irqbalance configuration,
leading to IRQ operation being served again on the isolated CPUs, therefore violating the isolation
guarantees. With this fix, the irqbalance service configuration is properly preserved across tuned
service restarts (explicit or caused by bugs), therefore preserving the CPU isolation guarantees with
respect to IRQ serving. ( OCPBUGS-585 )
!"Previously, when the tuned daemon was restarted out of order as part of the cluster Node Tuning
Operator, the CPU affinity of interrupt handlers was reset and the tuning was compromised. With this
fix, the irqbalance plugin in tuned is disabled, and OpenShift Container Platform now relies on the
logic and interaction between CRI-O and irqbalance . ( BZ#2105123 )
!"Previously, a low latency hook script executing for every new veth device took too long when the
node was under load. The resultant accumulated delays during pod start events caused the rollout
time for kube-apiserver to be slow and sometimes exceed the 5-minute rollout timeout. With this
fix, the container start time should be shorter and within the 5-minute threshold. ( BZ#2109965 ).
!"Previously, the oslat control thread was collocated with one of the test threads, which caused
latency spikes in the measurements. With this fix, the oslat runner now reserves one CPU for the
control thread, meaning the test uses one less CPU for running the busy threads. ( BZ#2051443 )
!"Latency measurement tools, also known as oslat , cyclictest , and hwlatdetect , now run on
completely isolated CPUs without the helper process running in the background that might cause
latency spikes, therefore providing more accurate latency measurements. ( OCPBUGS-2618 )
!"Previously, if more than one secret was present for vSphere, the vSphere CSI Operator randomly
picked a secret and sometimes caused the Operator to restart. With this update, a warning appears
when there is more than one secret on the vCenter CSI Operator. ( BZ#2108473 )
!"Previously, OpenShift Container Platform detached a volume when a Container Storage Interface
(CSI) driver was not able to unmount the volume from a node. Detaching a volume without unmount
is not allowed by CSI specifications and drivers could enter an undocumented state. With this
update, CSI drivers are detached before unmounting only on unhealthy nodes preventing the
undocumented state. ( BZ#2049306 )
!"Previously, there were missing annotations on the Manila CSI Driver Operator’s
VolumeSnapshotClass. Consequently, the Manila CSI snapshotter could not locate secrets, and could
not create snapshots with the default VolumeSnapshotClass. This update fixes the issue so that
secret names and namespaces are included in the default VolumeSnapshotClass. As a result, users
can now create snapshots in the Manila CSI Driver Operator using the default VolumeSnapshotClass.
( BZ#2057637 )
!"Users can now opt into using the experimental VHD feature on Azure File. To opt in, users must
specify the fstype parameter in a storage class and enable it with --enable-vhd=true . If fstype
is used and the feature is not set to true , the volumes will fail to provision.
To opt out of using the VHD feature, remove the fstype parameter from your storage class.
( BZ#2080449 )
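A minimal StorageClass sketch for opting in is shown below; the class name and fstype value are illustrative, and the provisioner is assumed to be the Azure File CSI driver, which must also run with --enable-vhd=true as described above:
# Sketch only: the class name and fstype value are illustrative.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile-csi-vhd
provisioner: file.csi.azure.com
parameters:
  fstype: ext4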
!"Previously, if more than one secret was present for vSphere, the vSphere CSI Operator randomly
picked a secret and sometimes caused the Operator to restart. With this update, a warning appears
when there is more than one secret on the vCenter CSI Operator. ( BZ#2108473 )
!"In OpenShift Container Platform 4.9, when it is minimal or no data in the Developer Perspective ,
most of the monitoring charts or graphs (CPU consumption, memory usage, and bandwidth) show a
range of -1 to 1. However, none of these values can ever go below zero. This will be resolved in a
future release. ( BZ#1904106 )
!"Before this update, users could not silence alerts in the Developer perspective in the OpenShift
Container Platform web console when a user-defined Alertmanager service was deployed because
the web console would forward the request to the platform Alertmanager service in the openshift-
monitoring namespace. With this update, when you view the Developer perspective in the web
console and try to silence an alert, the request is forwarded to the correct Alertmanager service.
( OCPBUGS-1789 )
!"
Previously, there was a known issue in the Add Helm Chart Repositories form for extending the
Developer Catalog of a project. The quick start guide stated that you can add the
ProjectHelmChartRepository CR in the required namespace, but it did not mention that you need
permission from the kubeadmin to do so. This issue is resolved, and the quick start now describes
the correct steps to create the ProjectHelmChartRepository CR. ( BZ#2057306 )
In the following tables, features are marked with the following statuses:
!"Technology Preview
!"General Availability
!"Not Available
!"Deprecated
Networking Technology Preview features
Table 13. Networking Technology Preview tracker
Feature | 4.10 | 4.11 | 4.12
PTP single NIC hardware configured as boundary clock | Technology Preview | General Availability | General Availability
PTP dual NIC hardware configured as boundary clock | Not Available | Technology Preview | Technology Preview
Advertise using BGP mode the MetalLB service from a subset of nodes, using a specific pool of IP addresses | Not Available | Technology Preview | General Availability
Advertise using L2 mode the MetalLB service from a subset of nodes, using a specific pool of IP addresses | Not Available | Technology Preview | Technology Preview
Multi-network policies for SR-IOV networks | Not Available | Not Available | Technology Preview
Updating the interface-specific safe sysctls list | Not Available | Not Available | Technology Preview
MT2892 Family [ConnectX-6 Dx] SR-IOV support | Not Available | Not Available | Technology Preview
MT2894 Family [ConnectX-6 Lx] SR-IOV support | Not Available | Not Available | Technology Preview
MT42822 BlueField-2 in ConnectX-6 NIC mode SR-IOV support | Not Available | Not Available | Technology Preview
Silicom STS Family SR-IOV support | Not Available | Not Available | Technology Preview
MT2892 Family [ConnectX-6 Dx] OvS Hardware Offload support | Not Available | Not Available | Technology Preview
MT2894 Family [ConnectX-6 Lx] OvS Hardware Offload support | Not Available | Not Available | Technology Preview
MT42822 BlueField-2 in ConnectX-6 NIC mode OvS Hardware Offload support | Not Available | Not Available | Technology Preview
Switching BlueField-2 from DPU to NIC | Not Available | Not Available | Technology Preview

Feature | 4.10 | 4.11 | 4.12
Shared Resources CSI Driver and Build CSI Volumes in OpenShift Builds | Technology Preview | Technology Preview | Technology Preview
CSI Google Filestore Driver Operator | Not Available | Not Available | Technology Preview
CSI automatic migration (Azure file, VMware vSphere) | Technology Preview | Technology Preview | Technology Preview
CSI automatic migration (Azure Disk, OpenStack Cinder) | Technology Preview | General Availability | General Availability
CSI automatic migration (AWS EBS, GCP disk) | Technology Preview | Technology Preview | General Availability
Automatic device discovery and provisioning with Local Storage Operator | Technology Preview | Technology Preview | Technology Preview
Disconnected mirroring with the oc-mirror CLI plugin | Technology Preview | General Availability | General Availability
Agent-based OpenShift Container Platform Installer | Not Available | Not Available | General Availability
Linux Control Group version 2 (cgroup v2) | Not Available | Not Available | Technology Preview
IBM Secure Execution on IBM Z and LinuxONE | Not Available | Not Available | Technology Preview

Serverless Technology Preview features
Table 18. Serverless Technology Preview tracker
Feature | 4.10 | 4.11 | 4.12
Hub and spoke cluster support | Not Available | Not Available | Technology Preview

Web console Technology Preview features
Table 20. Web console Technology Preview tracker
Feature | 4.10 | 4.11 | 4.12
Adding worker nodes to Single-node OpenShift clusters with GitOps ZTP | Not Available | Not Available | Technology Preview
Alerting rules based on platform monitoring metrics | Not Available | Technology Preview | Technology Preview

Red Hat OpenStack Platform (RHOSP) Technology Preview features
Table 24. RHOSP Technology Preview tracker
Feature | 4.10 | 4.11 | 4.12
Support for external cloud providers for clusters on RHOSP | Technology Preview | Technology Preview | General Availability
Hosted control planes for OpenShift Container Platform on bare metal | Not Available | Not Available | Technology Preview
Hosted control planes for OpenShift Container Platform on Amazon Web Services (AWS) | Not Available | Technology Preview | Technology Preview

Machine management Technology Preview features
Table 26. Machine management Technology Preview tracker
Feature | 4.10 | 4.11 | 4.12
Managing machines with the Cluster API | Not Available | Technology Preview | Technology Preview
Cloud controller manager for Amazon Web Services | Technology Preview | Technology Preview | Technology Preview
Cloud controller manager for Google Cloud Platform | Technology Preview | Technology Preview | Technology Preview
Cloud controller manager for Red Hat OpenStack Platform (RHOSP) | Technology Preview | Technology Preview | General Availability
Known issues
!"In OpenShift Container Platform 4.1, anonymous users could access discovery endpoints. Later
releases revoked this access to reduce the possible attack surface for security exploits because
some discovery endpoints are forwarded to aggregated API servers. However, unauthenticated
access is preserved in upgraded clusters so that existing use cases are not broken.
If you are a cluster administrator for a cluster that has been upgraded from OpenShift Container
Platform 4.1 to 4.12, you can either revoke or continue to allow unauthenticated access. Unless there
is a specific need for unauthenticated access, you should revoke it. If you do continue to allow
unauthenticated access, be aware of the increased risks.
If you have applications that rely on unauthenticated access, they might receive
HTTP 403 errors if you revoke unauthenticated access.
To revoke unauthenticated access, remove the unauthenticated subjects from the following cluster role bindings:
!" cluster-status-binding
!" discovery
!" system:basic-user
!" system:discovery
!" system:openshift:discovery
( BZ#1821771 )
!"Intermittently, an IBM Cloud VPC cluster might fail to install because some worker machines do not
start. Rather, these worker machines remain in the Provisioned phase.
There is a workaround for this issue. From the host where you performed the initial installation, delete
the failed machines and run the installation program again.
!"Verify that the status of the internal application load balancer (ALB) for the master API server
is active .
!"Log into the IBM Cloud account for your cluster and target the correct region for your
cluster.
!"Verify that the internal ALB status is active by running the following command:
!"Identify the machines that are in the Provisioned phase by running the following command:
Example output
NAME                                    PHASE         TYPE       REGION    ZONE        AGE
example-public-1-x4gpn-master-0         Running       bx2-4x16   us-east   us-east-1   23h
example-public-1-x4gpn-master-1         Running       bx2-4x16   us-east   us-east-2   23h
example-public-1-x4gpn-master-2         Running       bx2-4x16   us-east   us-east-3   23h
example-public-1-x4gpn-worker-1-xqzzm   Running       bx2-4x16   us-east   us-east-1   22h
example-public-1-x4gpn-worker-2-vg9w6   Provisioned   bx2-4x16   us-east   us-east-2   22h
example-public-1-x4gpn-worker-3-2f7zd   Provisioned   bx2-4x16   us-east   us-east-3   22h
!"Wait for the deleted worker machines to be replaced, which can take up to 10 minutes.
!"Verify that the new worker machines are in the Running phase by running the following
command:
Example output
NAME                                    PHASE     TYPE       REGION    ZONE        AGE
example-public-1-x4gpn-master-0         Running   bx2-4x16   us-east   us-east-1   23h
example-public-1-x4gpn-master-1         Running   bx2-4x16   us-east   us-east-2   23h
example-public-1-x4gpn-master-2         Running   bx2-4x16   us-east   us-east-3   23h
example-public-1-x4gpn-worker-1-xqzzm   Running   bx2-4x16   us-east   us-east-1   23h
example-public-1-x4gpn-worker-2-mnlsz   Running   bx2-4x16   us-east   us-east-2   8m2s
example-public-1-x4gpn-worker-3-7nz4q   Running   bx2-4x16   us-east   us-east-3   7m24s
!"Complete the installation by running the following command. Running the installation program
again ensures that the cluster’s kubeconfig is initialized properly:
( OCPBUGS#1327 )
!"The oc annotate command does not work for LDAP group names that contain an equal sign ( = ),
because the command uses the equal sign as a delimiter between the annotation name and value. As
a workaround, use oc patch or oc edit to add the annotation. ( BZ#1917280 )
!"
Due to the inclusion of old images in some image indexes, running oc adm catalog mirror and
oc image mirror might result in the following error: error: unable to retrieve source
image . As a temporary workaround, you can use the --skip-missing option to bypass the error
and continue downloading the image index. For more information, see Service Mesh Operator
mirroring failed.
!"When using the egress IP address feature in OpenShift Container Platform on RHOSP, you can
assign a floating IP address to a reservation port to have a predictable SNAT address for egress
traffic. The floating IP address association must be created by the same user that installed the
OpenShift Container Platform cluster. Otherwise any delete or move operation for the egress IP
address hangs indefinitely because of insufficient privileges. When this issue occurs, a user with
sufficient privileges must manually unset the floating IP address association to resolve the issue.
( OCPBUGS-4902 )
!"There is a known issue with Nutanix installation where the installation fails if you use 4096-bit
certificates with Prism Central 2022.x. Instead, use 2048-bit certificates. ( KCS )
!"Deleting the bidirectional forwarding detection (BFD) profile and removing the bfdProfile added
to the border gateway protocol (BGP) peer resource does not disable the BFD. Instead, the BGP
peer starts using the default BFD profile. To disable BFD from a BGP peer resource, delete the BGP
peer configuration and recreate it without a BFD profile. ( BZ#2050824 )
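A sketch of a recreated BGP peer without a bfdProfile field is shown below; the API version, ASNs, and peer address are illustrative assumptions:
# Sketch only: recreate the peer without spec.bfdProfile to disable BFD.
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: example-peer
  namespace: metallb-system
spec:
  myASN: 64500
  peerASN: 64501
  peerAddress: 192.0.2.1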
!"Due to an unresolved metadata API issue, you cannot install clusters that use bare-metal workers on
RHOSP 16.1. Clusters on RHOSP 16.2 are not impacted by this issue. ( BZ#2033953 )
!"
The loadBalancerSourceRanges attribute is not supported, and is therefore ignored, in load-
balancer type services in clusters that run on RHOSP and use the OVN Octavia provider. There is no
workaround for this issue. ( OCPBUGS-2789 )
!"After a catalog source update, it takes time for OLM to update the subscription status. This can
mean that the status of the subscription policy may continue to show as compliant when Topology
Aware Lifecycle Manager (TALM) decides whether remediation is needed. As a result, the Operator
specified in the subscription policy does not get upgraded. As a workaround, include a status field
in the spec section of the catalog source policy as follows:
metadata:
  name: redhat-operators-disconnected
spec:
  displayName: disconnected-redhat-operators
  image: registry.example.com:5000/disconnected-redhat-operators/disconnected-redhat-operator-index:v4.11
status:
  connectionState:
    lastObservedState: READY
This mitigates the delay for OLM to pull the new index image and get the pod ready, reducing the
time between completion of catalog source policy remediation and the update of the subscription
status. If the issue persists and the subscription policy status update is still late you can apply another
ClusterGroupUpdate CR with the same subscription policy, or an identical ClusterGroupUpdate
CR with a different name. ( OCPBUGS-2813 )
!"
TALM skips remediating a policy if all selected clusters are compliant when the
ClusterGroupUpdate CR is started. The update of operators with a modified catalog source policy
and a subscription policy in the same ClusterGroupUpdate CR does not complete. The subscription
policy is skipped as it is still compliant until the catalog source change is enforced. As a workaround,
add the following change to one CR in the common-subscription policy, for example:
metadata.annotations.upgrade: "1"
This makes the policy non-compliant prior to the start of the ClusterGroupUpdate CR.
( OCPBUGS-2812 )
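Expressed as YAML, the workaround annotation on the chosen CR would look similar to the following sketch:
# Sketch only: any annotation change that makes the policy non-compliant works;
# the key and value follow the workaround described above.
metadata:
  annotations:
    upgrade: "1"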
!"On a single-node OpenShift instance, rebooting without draining the node to remove all the running
pods can cause issues with workload container recovery. After the reboot, the workload restarts
before all the device plugins are ready, resulting in resources not being available or the workload
running on the wrong NUMA node. The workaround is to restart the workload pods when all the
device plugins have re-registered themselves during the reboot recovery procedure.
( OCPBUGS-2180 )
You should receive a 201 Created response and a header with Location:
/redfish/v1/EventService/Subscriptions/<sub_id> that indicates that the Redfish events subscription
is successfully created. ( OCPBUGSM-43707 )
!"When using the GitOps ZTP pipeline to install a single-node OpenShift cluster in a disconnected
environment, there should be two CatalogSource CRs applied in the cluster. One of the
CatalogSource CRs gets deleted following multiple node reboots. As a workaround, you can
change the default names, such as certified-operators and redhat-operators , of the catalog
sources. ( OCPBUGSM-46245 )
!"
If an invalid subscription channel is specified in the subscription policy that is used to perform a
cluster upgrade, the Topology Aware Lifecycle Manager indicates a successful upgrade right after
the policy is enforced because the Subscription state remains AtLatestKnown . ( OCPBUGSM-
43618 )
!"The SiteConfig disk partition definition fails when applied to multiple nodes in a cluster. When a
SiteConfig CR is used to provision a compact cluster, creating a valid diskPartition config on
multiple nodes fails with a Kustomize plugin error. ( OCPBUGSM-44403 )
!"If secure boot is currently disabled and you try to enable it using ZTP, the cluster installation does not
start. When secure boot is enabled through ZTP, the boot options are configured before the virtual
CD is attached. Therefore, the first boot from the existing hard disk has the secure boot turned on.
The cluster installation gets stuck because the system never boots from the CD. ( OCPBUGSM-
45085 )
!"Using Red Hat Advanced Cluster Management (RHACM), spoke cluster deployments on Dell
PowerEdge R640 servers are blocked when the virtual media does not disconnect the ISO in the
iDRAC console after writing the image to the disk. As a workaround, disconnect the ISO manually
through the Virtual Media tab in the iDRAC console. ( OCPBUGSM-45884 )
!"Low-latency applications that rely on high-resolution timers to wake up their threads might
experience higher wake up latencies than expected. Although the expected wake up latency is under
20us, latencies exceeding this can occasionally be seen when running the cyclictest tool for long
durations (24 hours or more). Testing has shown that wake up latencies are under 20us for over
99.999999% of the samples. ( RHELPLAN-138733 )
!"
A Chapman Beach NIC from Intel must be installed in a bifurcated PCIe slot to ensure that both ports
are visible. A limitation also exists in the current devlink tooling in RHEL 8.6 which prevents the
configuration of 2 ports in the bifurcated PCIe slot. ( RHELPLAN-142458 )
!"Disabling an SR-IOV VF when a port goes down can cause a 3-4 second delay with Intel NICs.
( RHELPLAN-126931 )
!"When using Intel NICs, IPV6 traffic stops when an SR-IOV VF is assigned an IPV6 address.
( RHELPLAN-137741 )
!"When using VLAN strip offloading, the offload flag ( ol_flag ) is not consistently set correctly with
the iavf driver. ( RHELPLAN-141240 )
!"A deadlock can occur if an allocation fails during a configuration change with the ice driver.
( RHELPLAN-130855 )
!"SR-IOV VFs send GARP packets with the wrong MAC address when using Intel NICs. ( RHELPLAN-
140971 )
!"When using the GitOps ZTP method of managing clusters and deleting a cluster which has not
completed installation, the cleanup of the cluster namespace on the hub cluster might hang
indefinitely. To complete the namespace deletion, remove the baremetalhost.metal3.io finalizer
from two CRs in the cluster namespace:
!"Remove the finalizer from the secret that is pointed to by the BareMetalHost CR
.spec.bmc.credentialsName .
!"
Remove the finalizer from the BareMetalHost CR. When these finalizers are removed, the
namespace termination completes within a few seconds. ( OCPBUGS-3029 )
!"The addition of a new feature in OCP 4.12 that enables UDP GRO also causes all veth devices to have
one RX queue per available CPU (previously each veth had one queue). Those queues are
dynamically configured by OVN and there is no synchronization between latency tuning and this
queue creation. The latency tuning logic monitors the veth NIC creation events and starts
configuring the RPS queue CPU masks before all the queues are properly created. This means that
some of the RPS queue masks are not configured. Because not all NIC queues are configured
properly, there is a chance of latency spikes in a real-time application that uses timing-sensitive
CPUs for communicating with services in other containers. Applications that do not use the kernel
networking stack are not affected. ( OCPBUGS-4194 )
!"Deleting a platform Operator results in a cascading deletion of the underlying resources. This
cascading deletion logic can only delete resources that are defined in the Operator Lifecycle
Manager-based (OLM) Operator’s bundle format. In the case that a platform Operator creates
resources that are defined outside of that bundle format, then the platform Operator is
responsible for handling this cleanup interaction. This behavior can be observed when installing
the cert-manager Operator as a platform Operator, and then removing it. The expected
behavior is that a namespace is left behind that the cert-manager Operator created.
!"
The platform Operators manager does not have any logic that compares the current and
desired state of the cluster-scoped BundleDeployment resource it is managing. This leaves
the possibility for a user who has sufficient role-based access control (RBAC) to manually
modify that underlying BundleDeployment resource and can lead to situations where users
can escalate their permissions to the cluster-admin role. By default, you should limit access
to this resource to a small number of users that explicitly require access. The only supported
client for the BundleDeployment resource during this Technology Preview release is the
platform Operators manager component.
!"OLM’s Marketplace component is an optional cluster capability that can be disabled. This has
implications during the Technology Preview release because platform Operators are currently
only sourced from the redhat-operators catalog source that is managed by the Marketplace
component. As a workaround, a cluster administrator can create this catalog source manually.
!"The RukPak provisioner implementations do not have the ability to inspect the health or state
of the resources that they are managing. This has implications for surfacing the generated
BundleDeployment resource state to the PlatformOperator resource that owns it. If a
registry+v1 bundle contains manifests that can be successfully applied to the cluster, but will
fail at runtime, such as a Deployment object referencing a non-existent image, the result is a
successful status being reflected in individual PlatformOperator and BundleDeployment
resources.
!"
Cluster administrators configuring PlatformOperator resources before cluster creation
cannot easily determine the desired package name without leveraging an existing cluster or
relying on documented examples. There is currently no validation logic that ensures an
individually configured PlatformOperator resource will be able to successfully roll out to the
cluster.
!"When using the Technology Preview OCI feature with the oc-mirror CLI plugin, the mirrored catalog
embeds all of the Operator bundles, instead of filtering only on those specified in the image set
configuration file. ( OCPBUGS-5085 )
!"There is currently a known issue when you run the Agent-based OpenShift Container Platform
Installer to generate an ISO image from a directory where the previous release was used for ISO
image generation. An error message is displayed stating that the release version does not match. As a
workaround, create and use a new directory. ( OCPBUGS#5159 )
!"The defined capabilities in the install-config.yaml file are not applied in the Agent-based
OpenShift Container Platform installation. Currently, there is no workaround. ( OCPBUGS#5129 )
!"Fully populated load balancers on RHOSP that are created with the OVN driver can contain pools
that are stuck in a pending creation status. This issue can cause problems for clusters that are
deployed on RHOSP. To resolve the issue, update your RHOSP packages. ( BZ#2042976 )
!"Bulk load-balancer member updates on RHOSP can return a 500 code in response to PUT requests.
This issue can cause problems for clusters that are deployed on RHOSP. To resolve the issue, update
your RHOSP packages. ( BZ#2100135 )
!"
Clusters that use external cloud providers can fail to retrieve updated credentials after rotation. The
following platforms are affected:
!"Alibaba Cloud
!"IBM Power
!"OpenShift Virtualization
!"RHOSP
( OCPBUGS-5036 )
!"There is a known issue when cloud-provider-openstack tries to create health monitors on OVN
load balancers by using the API to create fully populated load balancers. These health monitors
become stuck in a PENDING_CREATE status. After their deletion, associated load balancers are
stuck in a PENDING_UPDATE status. There is no workaround. ( BZ#2143732 )
!"Due to a known issue, to use stateful IPv6 networks with cluster that run on RHOSP, you must include
ip=dhcp,dhcpv6 in the kernel arguments of worker nodes. ( OCPBUGS-2104 )
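One possible way to apply the kernel argument is through a MachineConfig object, sketched below; the object name is illustrative, and this approach is an assumption rather than the only supported method:
# Sketch only: adds the required dual-stack kernel argument to worker nodes.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-ipv6-dhcp
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  kernelArguments:
    - ip=dhcp,dhcpv6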
!"It is not possible to create a macvlan on the physical function (PF) when a virtual function (VF)
already exists. This issue affects the Intel E810 NIC. ( BZ#2120585 )
!"There is currently a known issue when manually configuring IPv6 addresses and routes on an IPv4
OpenShift Container Platform cluster. When converting to a dual-stack cluster, newly created pods
remain in the ContainerCreating status. Currently, there is no workaround. This issue is planned to
be addressed in a future OpenShift Container Platform release. ( OCPBUGS-4411 )
!"When an OVN cluster installed on IBM Public Cloud has more than 60 worker nodes, simultaneously
creating 2000 or more services and route objects can cause pods created at the same time to
remain in the ContainerCreating status. If this problem occurs, entering the oc describe pod
<podname> command shows events with the following warning: FailedCreatePodSandBox…failed
to configure pod interface: timed out waiting for OVS port binding (ovn-
installed) . There is currently no workaround for this issue. ( OCPBUGS-3470 )
!"When a control plane machine is replaced on a cluster that uses the OVN-Kubernetes network
provider, the pods related to OVN-Kubernetes might not start on the replacement machine. When
this occurs, the lack of networking on the new machine prevents etcd from allowing it to replace the
old machine. As a result, the cluster is stuck in this state and might become degraded. This behavior
can occur when the control plane is replaced manually or by the control plane machine set.
There is currently no workaround to resolve this issue if encountered. To avoid this issue, disable the
control plane machine set and do not replace control plane machines manually if your cluster uses
the OVN-Kubernetes network provider. ( OCPBUGS-5306 )
Asynchronous errata updates
Security, bug fix, and enhancement updates for OpenShift Container Platform 4.12 are released as
asynchronous errata through the Red Hat Network. All OpenShift Container Platform 4.12 errata is
available on the Red Hat Customer Portal. See the OpenShift Container Platform Life Cycle for more
information about asynchronous errata.
Red Hat Customer Portal users can enable errata notifications in the account settings for Red Hat
Subscription Management (RHSM). When errata notifications are enabled, users are notified through
email whenever new errata relevant to their registered systems are released.
Red Hat Customer Portal user accounts must have systems registered and consuming
OpenShift Container Platform entitlements for OpenShift Container Platform errata
notification emails to generate.
This section will continue to be updated over time to provide notes on enhancements and bug fixes for
future asynchronous errata releases of OpenShift Container Platform 4.12. Versioned asynchronous
releases, for example with the form OpenShift Container Platform 4.12.z, will be detailed in subsections. In
addition, releases in which the errata text cannot fit in the space provided by the advisory will be detailed
in subsections that follow.
For any OpenShift Container Platform release, always review the instructions on updating
your cluster properly.
RHSA-2022:7399 - OpenShift Container Platform 4.12.0 image release, bug
fix, and security update advisory
Issued: 2023-01-17
OpenShift Container Platform release 4.12.0, which includes security updates, is now available. The list of
bug fixes that are included in the update is documented in the RHSA-2022:7399 advisory. The RPM
packages that are included in the update are provided by the RHSA-2022:7398 advisory.
Space precluded documenting all of the container images for this release in the advisory. See the
following article for notes on the container images in this release:
You can view the container images in this release by running the following command:
Features
OpenShift Container Platform release 4.12.1, which includes security updates, is now available. The list of
bug fixes that are included in the update is documented in the RHSA-2023:0449 advisory. The RPM
packages that are included in the update are provided by the RHBA-2023:0448 advisory.
You can view the container images in this release by running the following command:
Bug fixes
!"Previously, due to a wrong check in the OpenStack cloud provider, the load balancers were populated
with External IP addresses when all of the Octavia load balancers were created. This increased the
time for the load balancers to be handled. With this update, load balancers are still created
sequentially and External IP addresses are populated one-by-one. ( OCPBUGS-5403 )
OpenShift Container Platform release 4.12.2, which includes security updates, is now available. The list of
bug fixes that are included in the update is documented in the RHSA-2023:0569 advisory. The RPM
packages that are included in the update are provided by the RHBA-2023:0568 advisory.
You can view the container images in this release by running the following command:
Updating
To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a
cluster using the CLI for instructions.
RHSA-2023:0728 - OpenShift Container Platform 4.12.3 bug fix and security
update
Issued: 2023-02-16
OpenShift Container Platform release 4.12.3, which includes security updates, is now available. The list of
bug fixes that are included in the update is documented in the RHSA-2023:0728 advisory. The RPM
packages that are included in the update are provided by the RHSA-2023:0727 advisory.
You can view the container images in this release by running the following command:
Bug fixes
!"Previously, when a control plane machine was replaced on a cluster that used the OVN-Kubernetes
network provider, the pods related to OVN-Kubernetes sometimes did not start on the replacement
machine, and prevented etcd from allowing it to replace the old machine. With this update, pods
related to OVN-Kubernetes start on the replacement machine as expected. ( OCPBUGS-6494 )
Updating
To update an existing OpenShift Container Platform 4.12 cluster to this latest release, see Updating a
cluster using the CLI for instructions.