OCB What's New in OpenShift Container Platform 3.9
Marc Curry
Steve Speicher
OpenShift Product Management Team
OpenShift = Enterprise Kubernetes+
Build, Deploy and Manage Containerized Apps
[Platform diagram: containerized apps on a Self-Service Service Catalog (language runtimes, middleware, databases, …), backed by networking, storage, registry, security, and logs & metrics, running on Atomic Host / Red Hat Enterprise Linux]
OpenShift Roadmap

OpenShift Container Platform 3.6 (August, Q3 CY2017)
● Kubernetes 1.6 & Docker 1.12
● New Application Services - 3scale API Mgmt On-Prem, SCL 2.4
● Web UX Project Overview enhancements
● Service Catalog/Broker & UX (Tech Preview)
● Ansible Service Broker (Tech Preview)
● Secrets Encryption (3.6.1)
● Signing/Scanning + OpenShift integration
● Storage - CNS Gluster Block, AWS EFS, CephFS
● OverlayFS with SELinux Support (RHEL 7.4)
● User Namespaces (RHEL 7.4)
● System Containers for docker

OpenShift Container Platform 3.7 (December, Q4 CY2017)
● Kubernetes 1.7 & Docker 1.12
● Red Hat OpenShift Application Runtimes (GA)
● Service Catalog/Broker & UX (GA)
● OpenShift Ansible Broker (GA)
● AWS Service Broker
● Network Policy (GA)
● CRI-O (Tech Preview)
● CNS for logging & metrics (iSCSI block), registry
● CNS 3X density of PVs (1000+ per 3 nodes), Integrated Install
● Prometheus Metrics and Alerts (Tech Preview)
● OCP + CNS integrated monitoring/Mgmt, S3 Svc Broker

OpenShift Container Platform 3.9 (March, Q1 CY2018)
● Kubernetes 1.8 and 1.9, Docker 1.13
● CloudForms CM-Ops (CloudForms 4.6)
● CRI-O (Full Support in z stream)
● Device Manager (Tech Preview)
● Central Auditing
● Jenkins Improvements
● HAProxy 1.8
● Web Console Pod
● CNS (Resize, vol custom naming, vol metrics)

OpenShift Container Platform 3.10 (June, Q2 CY2018)
● Kubernetes 1.10, CRI-O and Buildah (Tech Preview)
● Custom Metrics HPA
● Smart Pruning
● Istio (Dev Preview)
● IPv6 (Tech Preview)
● OVN (Tech Preview), Multi-Network, Kuryr, IP per Project
● oc client for developers
● AWS AutoScaling
● Golden Image Tooling and TLS bootstrapping
● Windows Server Containers (Dev Preview)
● Prometheus Metrics and Alerts (GA)
OCP 3.9 - Extensible Application Platform
● Service Expansion
● Database APBs, SCL 3.0, Catalog view enhancement
● Security
● Auditing, Jenkins secret integration, private repo ease of use
● Manageability
● CFME 4.6, HAProxy 1.8, Egress port control, Soft Prune, PV resize
● Workload Diversity
● Device Manager, Local Storage
● Container Runtime
● CRI-O
EXCITING MIDDLEWARE SERVICES UPDATES (March 12th!)
● Node core distro to be delivered only through RHOAR; no stand-alone SKU
○ Evaluating NPM modules for future support, with a focus on microservice development and deployment concerns
● Non-distro efforts
○ Tooling & boosters for RHOAR integration
● Booster coverage
○ Showcases features in Node.js specific to RHOAR/microservices
○ Work continues on infrastructure/workflow
● Consumption
○ S2I images (supported for v8; unsupported but available for v9/v10)
○ OpenShift Streams integration
Self-Service / UX
Expose and Provision Services
Self-Service / UX
Feature(s): OpenShift Ansible Broker
What's New for 3.9:
● New upstream community website: Automation Broker
● Downstream will still be called 'OpenShift Ansible Broker', with the main focus on APB 'Service Bundles' (application definition)
○ Community-contributed application repo: https://github.com/ansibleplaybookbundle
● Support for running the broker behind an HTTP proxy in a restricted network environment
○ Documentation: https://github.com/openshift/ansible-service-broker/blob/master/docs/proxy.md
○ Video: https://www.youtube.com/watch?v=-Fdfz1RqI94
● Plan or parameter updating of PostgreSQL, MariaDB, and MySQL APB-based services will preserve data
○ Update logic in the APB handles preserving data; useful for cases where you want to move from a service plan with ephemeral storage to a different service plan utilizing a PV
○ Video: https://www.youtube.com/watch?v=kslVbbQCZ8s&t=220s
● Now an official add-on for Minishift
○ Documentation: https://github.com/minishift/minishift-addons/tree/master/add-ons/ansible-service-broker
○ Video: https://www.youtube.com/watch?v=6QSJOyt1Ix8
● Network isolation support for multi-tenant environments
○ Joins isolated networks so that APBs can talk to the pods they create over the network
● [Experimental] Async bind support in the broker
○ Allows binds that need more time to execute than the 60-second response time defined in the OSB API spec
○ Async bind spawns a binding job and returns the job token immediately; the catalog uses last_operation to monitor the state of the running job until either successful completion or failure
Self-Service / UX
The web console can now log out inactive users after a configurable number of minutes, set via the Ansible inventory variable:
openshift_web_console_inactivity_timeout_minutes=n
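For example, a cluster administrator might set the following in the inventory (the value and section placement are illustrative; verify the default behavior for your installer version):

```
[OSEv3:vars]
# Log web console users out after 30 idle minutes
openshift_web_console_inactivity_timeout_minutes=30
```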
Self-Service / UX
[Slide graphic: service catalog tiles with UPDATED and NEW badges on image versions such as 3.4, 10.2, 9.6, 1.12, 8, 7.1, 3.6]
Networking
Feature(s): Egress IPs
How it Works:
● Supported by the multitenant / networkpolicy plugins
● Egress IPs do not accept connections on any port
● NetNamespace has an EgressIPs array that can be set (though only one IP, currently) for the egress IP
● The egress IP must be on the local subnet of the node's primary network interface (it is added as an additional address on that interface)
● Once EgressIPs is set on a NetNamespace, and until the EgressIP is claimed, pod-to-pod traffic is allowed, but pod-to-external traffic is dropped
● Once claimed, a pod in that NetNamespace on that node will be able to send traffic to external IPs, with that EgressIP as the source of the traffic
● For a pod in that NetNamespace on a different node, traffic will first travel via VXLAN to the node hosting the egress IP, and then it will be able to reach external IPs
● Egress traffic from pods in other NetNamespaces is still NAT'd to the primary IP address of the node, just like in the no-automatic-egress-IP case
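In practice, the egress IP is claimed by patching the project's NetNamespace and the hosting node's HostSubnet; the project name, node name, and IP below are placeholders:

```
# Assign an egress IP to the 'myproject' NetNamespace (one IP, currently)
$ oc patch netnamespace myproject --type=merge -p '{"egressIPs": ["192.168.1.100"]}'

# Host that egress IP on node1's primary interface via its HostSubnet
$ oc patch hostsubnet node1 --type=merge -p '{"egressIPs": ["192.168.1.100"]}'
```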
Networking
Feature(s): Support our own HAProxy RPM for consumption by the router

Description: Route configuration changes and upgrades performed under heavy load have typically required a stop/start sequence of certain services, causing temporary outages. There existed iptables "trickery" to work around the issue.

In OpenShift 3.9, HAProxy 1.8 sees no difference between updates and upgrades; a new process is used with a new configuration, and the listening socket's file descriptor is transferred from the old to the new process so the connection is never closed. The change is seamless, and enables our ability to do things, like HTTP/2, in the future.

How the HAProxy "soft reload" used to work (flowchart):
1. The new process with its new configuration tries to bind to all listening ports.
2. Succeed: the new process listens for incoming connections and sends a signal to the old process(es) telling them they can quit once they have finished serving existing connections.
3. Fail: the new process signals the old process(es) asking them to temporarily release the ports (so ports may not be bound by any process for a moment), then tries again. On success, it signals the old process it can quit; on failure, it gives up and signals the old process to continue taking care of the incoming connections.
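HAProxy 1.8's seamless reload is implemented in C, but the underlying trick, handing the listening socket's file descriptor to the new process over a Unix socket, can be sketched in Python (a conceptual illustration only, not the router's actual code):

```python
import socket

def send_listener(channel: socket.socket, listener: socket.socket) -> None:
    # Ship the listening socket's descriptor as SCM_RIGHTS ancillary data.
    socket.send_fds(channel, [b"takeover"], [listener.fileno()])

def recv_listener(channel: socket.socket) -> socket.socket:
    # The received descriptor shares the kernel's accept queue with the
    # old process, so no pending connection is dropped in the handover.
    _msg, fds, _flags, _addr = socket.recv_fds(channel, 1024, 1)
    return socket.socket(fileno=fds[0])

# Demo: pass a listener across a socketpair and accept on the "new" socket.
left, right = socket.socketpair()
old = socket.create_server(("127.0.0.1", 0))
send_listener(left, old)
new = recv_listener(right)
client = socket.create_connection(old.getsockname())
conn, _ = new.accept()
conn.sendall(b"ok")
print(client.recv(2).decode())
```

In the real router, the old and new processes are separate; here a socketpair within one process stands in for that channel.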
Master
Feature(s): StatefulSets / DaemonSets / Deployments no longer Tech Preview
For OpenShift, this means that StatefulSets, DaemonSets, and Deployments are now stable/supported, and the Tech Preview label is removed in OpenShift 3.9.
Additional Information:
● StatefulSets
● DaemonSets
● Deployments
Master
Feature(s): Central Audit Capability
Description: Provides auditing of items that admins would like to…

View (examples):
● Event timestamp
● The activity that generated the entry
● The API endpoint that was called
● The HTTP output
● The item changed due to an activity, with details of the change
● The username of the user that initiated an activity
● The name of the namespace the event occurred in, where possible
● The status of the event, either success or failure

Trace (examples):
● User login and logout from the web interface (including session timeout), including unauthorized access attempts
● Account creation, modification, or removal
● Account role/policy assignment/de-assignment
● Scaling of pods
● Creation of a new project or application
● Creation of routes and services
● Triggers of builds and/or pipelines
● Addition/removal or claim of persistent volumes

How It Works: set up auditing in the master-config file, and restart the master service:

auditConfig:
  auditFilePath: "/var/log/audit-ocp.log"
  enabled: true
  maximumFileRetentionDays: 10
  maximumFileSizeMegabytes: 10
  maximumRetainedFiles: 10
  logFormat: json
  policyConfiguration: null
  policyFile: /etc/origin/master/audit-policy.yaml
  webHookKubeConfig: ""
  webHookMode: ""
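With logFormat: json, each line of the audit file is a JSON object, so failures are easy to sift out. A small Python sketch (the field names 'user', 'verb', and 'responseStatus' are illustrative and should be checked against your log's actual schema):

```python
import json

def failed_requests(lines):
    # Yield audit entries whose HTTP status code indicates a failure.
    for line in lines:
        entry = json.loads(line)
        if entry.get("responseStatus", {}).get("code", 0) >= 400:
            yield entry

sample = [
    '{"user": {"username": "alice"}, "verb": "create", "responseStatus": {"code": 201}}',
    '{"user": {"username": "mallory"}, "verb": "delete", "responseStatus": {"code": 403}}',
]
for e in failed_requests(sample):
    print(e["user"]["username"], e["verb"], e["responseStatus"]["code"])
# prints: mallory delete 403
```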
Master
Feature(s): Add support for Deployments to oc status
Description: Provides similar output for upstream Deployments.

The old (pre-3.9) output:
$ oc-3.7 status
In project dc-test on server https://127.0.0.1:8443

How it Works:
$ oc status
In project My Project (myproject) on server https://127.0.0.1:8443

svc/ruby-deploy - 172.30.174.234:8080
  deployment/ruby-deploy deploys istag/ruby-deploy:latest <-
    bc/ruby-deploy source builds https://github.com/openshift/ruby-ex.git on istag/ruby-22-centos7:latest
      build #1 failed 5 hours ago - bbb6701: Merge pull request #18 from durandom/master (Joe User <joeuser@users.noreply.github.com>)
    deployment #2 running for 4 hours - 0/1 pods (warning: 53 restarts)
    deployment #1 deployed 5 hours ago
Master (Tech Preview)
Deprecation of unsupported "glue code" (ancillary scripts, Ansible playbooks, related GitHub repos, …)
● No longer required, as we're using the provisioner code provided by the installer itself
● All cloud providers

¹ The release dates for the Ref Arch update and RHV 4.2 are very close, so this may fall back to 4.1.
² At-risk.
Questions
Clustered Container Infrastructure
Applications Run Across Multiple Containers & Hosts
Container Orchestration
Feature(s): Kubernetes Upstream (Red Hat Blog and Commons Webinar)

Description: OCP 3.9 is a double rebase release; we literally had to go through the same release motions twice. Red Hat continues to influence the product in the areas of storage, networking, resource management, authentication & authorization, multi-tenancy, security, service deployments and templating, and controller functionality.

Red Hat contributing projects:
● Job Failure Policy
● kubectl plugins
● Pod-level QoS
● PV resizing
● Mount namespace
● CRD
● CronJob
● HPA Metrics
● StorageClass ReclaimPolicy
● Rules View API
● RBAC
● Mount Options
● LIST queries
● ClusterRole
● Containerized Mounts
● PV to Pod track and Delete
● Raw Block Storage

OpenShift 3.9 status of Kube 1.8 and 1.9 upstream features:
https://docs.google.com/spreadsheets/d/1xdjfFVyoUaDgZXak4OHA90wq_bNIKrrc7U2xr8fKXEU/edit?usp=sharing
Container Orchestration
Feature(s): Feature tracking documentation
How it Works:
● Restructured playbooks to push all fact gathering and common dependencies up into the initialization plays, so they are only called once rather than each time a role needs access to their computed values.
● Refactored playbooks to limit the hosts they touch to only those that are truly relevant to the playbook.
○ As an example, prior to these changes, upgrading the control plane in our large online environments spent >40 minutes gathering useless facts from 290 compute nodes that aren't relevant to a control-plane upgrade.
● Initial results showed a large reduction in overall installation times; up to 30% faster in some cases.
Installation
Feature(s): Quick installation [deprecated]
Description: The quick installation method is deprecated in 3.9 and will be removed in 3.10.
How it Works:
● The quick installer will only be capable of installing 3.9; it will not be able to upgrade from 3.7 or 3.8 to 3.9.
● The `atomic-openshift-installer upgrade` function will exit with a message indicating that upgrades are not supported with this version of the quick installer.
● If an upgrade is attempted, reference the documentation explaining how to migrate from the existing quick-installer-generated inventory to using openshift-ansible directly.
Description: Users can expand their persistent volume claims online from OCP for the following storage backends:
● CNS glusterFS
● gcePD
● cinder

How it Works:
● Create a storageclass with allowVolumeExpansion=true
● A PVC uses the storageclass and submits a claim
● Resize: edit the PVC field 'spec → resources → requests → storage' to the new, larger value
● The underlying PV is resized
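The flow can be sketched as a StorageClass plus a patched claim (the class name, endpoint, PVC name, and sizes are placeholders):

```
# StorageClass that permits online expansion of its PVs
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable
provisioner: kubernetes.io/glusterfs   # gce-pd and cinder also support expansion
allowVolumeExpansion: true

# Then grow an existing claim bound to this class:
#   oc patch pvc claim1 -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'
```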
Storage
Feature(s): CNS GlusterFS PV consumption metrics available from OCP (Prometheus)
How it Works:
● Metrics are available from the PVC endpoint
● Users can now see the PV size allocated as well as consumed, and use resize (expand) of the PV if needed from OCP
● Example metrics added:
○ kubelet_volume_stats_capacity_bytes
○ kubelet_volume_stats_available_bytes
○ kubelet_volume_stats_used_bytes
○ kubelet_volume_stats_inodes
○ kubelet_volume_stats_inodes_free
○ kubelet_volume_stats_inodes_used
○ …etc.

Example 'curl' output:
# TYPE kubelet_volume_stats_available_bytes gauge
kubelet_volume_stats_available_bytes{namespace="default",persistentvolumeclaim="claim1"} 8.543010816e+09
# TYPE kubelet_volume_stats_capacity_bytes gauge
kubelet_volume_stats_capacity_bytes{namespace="default",persistentvolumeclaim="claim1"} 8.57735168e+09
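To illustrate how these gauges can be consumed, here is a small Python sketch (not part of OpenShift) that parses Prometheus text-format output and reports how full a PV is, using the example samples above:

```python
def parse_gauges(text):
    """Parse Prometheus text-format samples into {metric_with_labels: value}."""
    samples = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and # TYPE / # HELP comments
        metric, value = line.rsplit(" ", 1)
        samples[metric] = float(value)
    return samples

METRICS = '''
# TYPE kubelet_volume_stats_available_bytes gauge
kubelet_volume_stats_available_bytes{namespace="default",persistentvolumeclaim="claim1"} 8.543010816e+09
# TYPE kubelet_volume_stats_capacity_bytes gauge
kubelet_volume_stats_capacity_bytes{namespace="default",persistentvolumeclaim="claim1"} 8.57735168e+09
'''

g = parse_gauges(METRICS)
labels = '{namespace="default",persistentvolumeclaim="claim1"}'
avail = g["kubelet_volume_stats_available_bytes" + labels]
cap = g["kubelet_volume_stats_capacity_bytes" + labels]
print(f"claim1: {100 * (1 - avail / cap):.1f}% used")
# prints: claim1: 0.4% used
```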
Storage
Feature(s): CNS now supports custom volume naming at the backend
Description: OCP users can specify custom volume name prefixes for PVs from a CNS-backed storage class.
How it Works:
● Previously, PV names were vol_<UUID> (e.g., vol_1213456)
● Specify the new attribute 'volumenameprefix' in the CNS storage class
● CNS backend volumes will then be named <VolumeNamePrefix>_<Namespace>_<ClaimName>_<UUID>, i.e. user-supplied prefix, project/namespace name, claim name, and UUID, e.g. dept-dev_storageproject_claim1_12312321
● Easy to recognize, with users following a naming convention
● Easy to search and apply policy based on prefix, namespace, project name, or claim name
● Demo Video

Example:
[root@localhost cluster]# cat ../demo/glusterfs-storageclass_fast.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://127.0.0.1:8081"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  volumenameprefix: "dept-dev"
Storage
How it Works:
● CNS storage device details are added to the installer's inventory file
● The advanced installer manages configuration and deployment of CNS, the file & block provisioners, the registry, and ready-to-use PVs
○ OCP + CNS deployed as one cluster (RHGS containers running on the OpenShift nodes)
○ CNS with block & file provisioners deployed
○ OCP registry deployed on CNS
○ Ready to deploy Logging and Metrics on CNS
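A minimal inventory fragment for the advanced installer might look like this (hosts and device paths are placeholders; the group and variable names follow the openshift-ansible glusterfs role, so verify them against your installer version):

```
[OSEv3:children]
glusterfs

[glusterfs]
node1.example.com glusterfs_devices='[ "/dev/sdb" ]'
node2.example.com glusterfs_devices='[ "/dev/sdb" ]'
node3.example.com glusterfs_devices='[ "/dev/sdb" ]'
```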
Logging (Tech Preview)
Feature(s): syslog output plugin for fluentd
Description: Users would like to send logs (system and container) from OCP nodes to external syslog servers.
Note: a blocker bug fix will be delivered in 3.9.z, so GA will happen in conjunction with that.

How it Works:
OpenShift Ansible Installer for Logging:
openshift_logging_fluentd_remote_syslog = true
openshift_logging_fluentd_remote_syslog_host = <hostname> or <IP>
openshift_logging_fluentd_remote_syslog_port = <port no, defaults to 514>
Feature(s): Prometheus (Tech Preview)
Description: OpenShift operators deploy Prometheus on an OCP cluster, collect Kubernetes and infrastructure metrics, and get alerts. Operators can see and query metrics and alerts on the Prometheus web dashboard, or they can bring their own Grafana and hook it up to Prometheus.

What's New:
● Prometheus stays in Tech Preview
● Prometheus, AlertManager, and AlertBuffer versions are updated
● node_exporter is included
● Note: Hawkular is still the supported metrics stack

How it Works:
● New OpenShift installer playbook for installing the Prometheus server, alert manager, and oAuth proxy
● Deploys a StatefulSet comprising the server, alert-manager, buffer, and oAuth proxy in front, plus one PVC for the server and one for the alert manager
● Alerts can be created in a rule file and selected via the inventory file
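A rule file might contain an alert such as the following (a hedged sketch; the metric, job label, thresholds, and exact rule syntax depend on the Prometheus version shipped):

```
groups:
- name: example-rules
  rules:
  - alert: NodeDown
    expr: up{job="kubernetes-nodes"} == 0
    for: 5m
    labels:
      severity: warning
    annotations:
      message: "Node target has been unreachable for 5 minutes."
```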
CFME 4.6 Container Mgmt
● OpenShift Template Provisioning
● Off-line OpenScap Scans
● Alert Management (Prometheus) - Tech Preview
● Reporting Updates
● Provider Updates
● Chargeback Enhancements
● UX Enhancements
Trusted Container OS
Containers Depend on Linux
ATOMIC HOST /
RED HAT ENTERPRISE LINUX
RHEL 7.5 Highlights
Containers / Atomic:
● Docker 1.13
● docker-latest deprecation
● RPM-OSTree package overrides

Storage improvements include:
● New CLI (podman) shipping in 7.5.z
● Image volume handling
● Registry listings
● Pids cgroups controls
● SELinux support

CRI-O
Description: CRI-O is an OCI-compliant implementation of the Kubernetes Container Runtime Interface. By design it provides only the runtime capabilities needed by the kubelet. CRI-O is designed to be part of Kubernetes and evolve in lock-step with the platform.

CRI-O brings:
● A minimal and secure architecture
● Excellent scale and performance
● The ability to run any OCI / Docker image
● Familiar operational tooling and commands
(Diagram: kubelet → CRI-O, with CNI networking, storage, image handling, and runc)
Buildah
Feature: Buildah moving to full support with RHEL 7.5
● Start from an existing image or from scratch
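A typical Buildah session builds an image without a Dockerfile or a daemon; the base image, package, and image name below are placeholders:

```
$ ctr=$(buildah from registry.access.redhat.com/rhel7)
$ buildah run $ctr -- yum install -y httpd
$ buildah config --cmd "/usr/sbin/httpd -DFOREGROUND" $ctr
$ buildah commit $ctr my-httpd
```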