Kubernetes - Docker DLH Short Course

The Kubernetes - Docker DLH Short Course is an intensive and hands-on program designed to provide participants with a comprehensive understanding of Kubernetes, Docker, and DLH


Welcome to DLH

Kubernetes training

Introduction
The training: Main Reference and contents
1. Introduction
2. Key concepts
3. Architecture overview
a. Control Plane components
b. Node components
c. Other components
4. Network
5. Concept and resources
6. Kubernetes APIs
a. Core Objects
b. Workloads
c. Batch API (optional)
7. Storage (optional)
8. ConfigMap and secrets (optional)
Introduction
What Does “Kubernetes” Mean?

Greek for “pilot” or “helmsman of a ship”.
What is Kubernetes?

● Project that was spun out of Google as an open source container orchestration platform.
● Built from the lessons learned in the experiences of developing and running Google’s Borg and Omega.
● Designed from the ground-up as a loosely coupled collection of components centered around deploying, maintaining and scaling workloads.
What Does Kubernetes do?

● Known as the “Linux kernel of distributed systems”.
● Abstracts away the underlying hardware of the nodes and provides a uniform interface for workloads to be both deployed and consume the shared pool of resources.
● Works as an engine for resolving state by converging the actual and the desired state.
Decouples Infrastructure and Scaling

● All services within Kubernetes are natively load balanced.
● Can scale up and down dynamically.
● Used both to enable self-healing and seamless upgrading or rollback of applications.
Self Healing

Kubernetes will ALWAYS try and steer the cluster to its desired state.

● Me: “I want 3 healthy instances of redis to always be running.”
● Kubernetes: “Okay, I’ll ensure there are always 3 instances up and running.”
● Kubernetes: “Oh look, one has died. I’m going to attempt to spin up a new one.”
What can Kubernetes REALLY do?
● Autoscale Workloads
● Blue/Green Deployments
● Fire off jobs and scheduled cronjobs
● Manage Stateless and Stateful Applications
● Provide native methods of service discovery
● Easily integrate and support 3rd party apps
Most Importantly...

Use the SAME API across bare metal and EVERY cloud provider!!!
Who “Manages” Kubernetes?

The CNCF is a child entity of the Linux Foundation and operates as a vendor-neutral governance group.
Project Stats

● Over 55,000 stars on GitHub
● 2,000+ contributors to K8s Core
● Most discussed repository by a large margin
● 70,000+ users in Slack team

(as of 07/2019)
A couple of Key concepts
Pods

● Atomic unit, or smallest “unit of work”, of Kubernetes.
● Pods are one or MORE containers that share volumes, a network namespace, and are a part of a single context.
Pods

They are also Ephemeral!
Services

● Unified method of accessing the exposed workloads of Pods.
● Durable resource (NOT ephemeral!)
  ○ static cluster IP
  ○ static namespaced DNS name
Architecture Overview
Architecture Overview
Control Plane components
Control Plane Components

● kube-apiserver
● etcd
● kube-controller-manager
● kube-scheduler
kube-apiserver

● Provides a forward-facing REST interface into the Kubernetes control plane and datastore.
● All clients and other applications interact with Kubernetes strictly through the API Server.
● Acts as the gatekeeper to the cluster by handling authentication and authorization, request validation, mutation, and admission control, in addition to being the front-end to the backing datastore.
etcd

● etcd acts as the cluster datastore.
● Its purpose in relation to Kubernetes is to provide a strong, consistent and highly available key-value store for persisting cluster state.
● Stores objects and config information.
etcd

Uses “Raft consensus” among a quorum of systems to create a fault-tolerant, consistent “view” of the cluster.

https://raft.github.io/
kube-controller-manager

● Serves as the primary daemon that manages all core component control loops.
● Monitors the cluster state via the apiserver and steers the cluster towards the desired state.

See the Kubernetes documentation for the list of core controllers.
kube-scheduler

● Verbose, policy-rich engine that evaluates workload requirements and attempts to place them on a matching resource.
● The default scheduler uses bin packing.
● Workload requirements can include: general hardware requirements, affinity/anti-affinity, labels, and other various custom resource requirements.
Architecture Overview
Node component
Node Components

● kubelet
● kube-proxy
● Container Runtime Engine
kubelet

● Acts as the node agent responsible for managing the lifecycle of every pod on its host.
● Kubelet understands YAML container manifests that it can read from several sources:
  ○ file path
  ○ HTTP endpoint
  ○ etcd watch acting on any changes
  ○ HTTP server mode accepting container manifests over a simple API
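The file-path source is how “static Pods” are run: the kubelet watches a manifest directory and manages any Pod specs it finds there, without involving a controller. A minimal sketch, assuming the commonly used manifest directory /etc/kubernetes/manifests (the path, file name, and pod name here are illustrative):

```yaml
# /etc/kubernetes/manifests/static-web.yaml
# A static Pod: managed directly by the kubelet, not by a controller.
apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    role: static
spec:
  containers:
  - name: web
    image: nginx:stable-alpine
    ports:
    - containerPort: 80
```

The kubelet reports a read-only “mirror Pod” for it to the API server, so it is visible via kubectl get pods but cannot be deleted or edited from there.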
kube-proxy

● Manages the network rules on each node.
● Performs connection forwarding or load balancing for Kubernetes cluster services.
● Available proxy modes:
  ○ userspace
  ○ iptables
  ○ ipvs (default, if supported)
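The proxy mode is selected cluster-wide in the kube-proxy configuration rather than per service. A minimal sketch, assuming the kubeproxy.config.k8s.io/v1alpha1 configuration API (the scheduler value is illustrative):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# Select the proxy mode; kube-proxy falls back if the kernel
# lacks the required IPVS modules.
mode: "ipvs"
ipvs:
  scheduler: "rr"   # round-robin across service endpoints
```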
Container Runtime Engine

● A container runtime is a CRI (Container Runtime Interface) compatible application that executes and manages containers.
  ○ containerd (Docker)
  ○ CRI-O
  ○ rkt
  ○ Kata (formerly Clear and Hyper)
  ○ Virtlet (a VM-based CRI-compatible runtime)
Architecture Overview
Other components
cloud-controller-manager

● Daemon that provides cloud-provider-specific knowledge and integration capability into the core control loop of Kubernetes.
● The controllers include Node, Route, and Service, plus an additional controller to handle things such as PersistentVolume labels.
Cluster DNS

● Provides cluster-wide DNS for Kubernetes Services.
  ○ Built on top of CoreDNS
Kube Dashboard

A limited, general-purpose web front end for the Kubernetes cluster.
Heapster / Metrics API Server

● Provides metrics for use with other Kubernetes components.
  ○ Heapster (deprecated and since removed)
  ○ Metrics API (current)
Network
Kubernetes Networking

● Pod Network
○ Cluster-wide network used for pod-to-pod
communication managed by a CNI (Container Network
Interface) plugin.
● Service Network
○ Cluster-wide range of Virtual IPs managed by
kube-proxy for service discovery.
Container Network Interface (CNI)

● Pod networking within Kubernetes is plumbed via the Container Network Interface (CNI).
● Functions as an interface between the container runtime and a network implementation plugin.
● CNCF Project.
● Uses a simple JSON schema.
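A minimal sketch of such a JSON config, assuming the reference bridge and host-local IPAM plugins (the network name, bridge name, and subnet are illustrative — real clusters typically drop files like this into /etc/cni/net.d/):

```json
{
  "cniVersion": "0.3.1",
  "name": "example-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [ { "dst": "0.0.0.0/0" } ]
  }
}
```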
CNI Overview
CNI plugins

● Amazon ECS
● Calico
● Cilium
● Contiv
● Contrail
● Flannel
● GCE
● kube-router
● Multus
● OpenVSwitch
● Romana
● Weave
Fundamental Networking Rules
● All containers within a pod can communicate with each
other unimpeded.
● All Pods can communicate with all other Pods without
NAT.
● All nodes can communicate with all Pods (and
vice-versa) without NAT.
● The IP that a Pod sees itself as is the same IP that others
see it as.
Fundamentals Applied

● Container-to-Container
  ○ Containers within a pod exist within the same network namespace and share an IP.
  ○ Enables intrapod communication over localhost.
● Pod-to-Pod
  ○ Allocated a cluster-unique IP for the duration of its life cycle.
  ○ Pods themselves are fundamentally ephemeral.
Fundamentals Applied

● Pod-to-Service
  ○ Managed by kube-proxy and given a persistent, cluster-unique IP.
  ○ Exists beyond a Pod’s lifecycle.
● External-to-Service
  ○ Handled by kube-proxy.
  ○ Works in cooperation with a cloud provider or other external entity (load balancer).
Exercise 1

Installation
bit.ly/dlh-kube-exo1
Concept and resources
Concept and resources
The API and object model
API Overview

● The REST API is the true keystone of Kubernetes.
● Everything within Kubernetes is represented as an API Object.
API Groups

● Designed to make it extremely simple to both understand and extend.
● An API Group is a REST-compatible path that acts as the type descriptor for a Kubernetes object.
● Referenced within an object as the apiVersion and kind.

Format:
/apis/<group>/<version>/<resource>

Examples:
/apis/apps/v1/deployments
/apis/batch/v1beta1/cronjobs
API Versioning

● Three tiers of API maturity levels.
● Also referenced within the object apiVersion.

Format:
/apis/<group>/<version>/<resource>

Examples:
/apis/apps/v1/deployments
/apis/batch/v1beta1/cronjobs

● Alpha: Possibly buggy and may change. Disabled by default.
● Beta: Tested and considered stable; however, the API schema may change. Enabled by default.
● Stable: Released, stable, and the API schema will not change. Enabled by default.
Object Model

● Objects are a “record of intent”, or a persistent entity that represents the desired state of the object within the cluster.
● All objects MUST have apiVersion and kind, and possess the nested fields metadata.name, metadata.namespace, and metadata.uid.
Object Model Requirements
● apiVersion: Kubernetes API version of the Object
● kind: Type of Kubernetes Object
● metadata.name: Unique name of the Object
● metadata.namespace: Scoped environment name that the
object belongs to (will default to current).
● metadata.uid: The (generated) uid for an object.
apiVersion: v1
kind: Pod
metadata:
  name: pod-example
  namespace: default
  uid: f8798d82-1185-11e8-94ce-080027b3c7a6
Object Expression - YAML

● Files or other representations of Kubernetes Objects are generally represented in YAML.
● A “human friendly” data serialization standard.
● Uses white space (specifically spaces) alignment to denote ownership.
● Three basic data types:
  ○ mappings - hash or dictionary
  ○ sequences - array or list
  ○ scalars - string, number, boolean, etc.
Object Expression - YAML

apiVersion: v1
kind: Pod
metadata:
  name: yaml
spec:
  containers:
  - name: container1
    image: nginx
  - name: container2
    image: alpine
Object Expression - YAML

apiVersion: v1        # scalar
kind: Pod
metadata:             # mapping (hash / dictionary)
  name: yaml
spec:
  containers:         # sequence (array / list)
  - name: container1
    image: nginx
  - name: container2
    image: alpine
YAML vs JSON
YAML:

apiVersion: v1
kind: Pod
metadata:
  name: pod-example
spec:
  containers:
  - name: nginx
    image: nginx:stable-alpine
    ports:
    - containerPort: 80

JSON:

{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "pod-example"
  },
  "spec": {
    "containers": [
      {
        "name": "nginx",
        "image": "nginx:stable-alpine",
        "ports": [ { "containerPort": 80 } ]
      }
    ]
  }
}
Object Model - Workloads

● Workload-related objects within Kubernetes have two additional nested fields: spec and status.
  ○ spec - describes the desired state or configuration of the object to be created.
  ○ status - is managed by Kubernetes and describes the actual state of the object and its history.
Workload Object Example
Example Object:

apiVersion: v1
kind: Pod
metadata:
  name: pod-example
spec:
  containers:
  - name: nginx
    image: nginx:stable-alpine
    ports:
    - containerPort: 80

Example Status Snippet:

status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2018-02-14T14:15:52Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2018-02-14T14:15:49Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2018-02-14T14:15:49Z
    status: "True"
    type: PodScheduled
Exercise 2

Dashboard Installation
bit.ly/dlh-kube-exo2
Kubernetes APIs
Kubernetes APIs
Core Objects
Core Concepts

Kubernetes has several core building blocks that make up the foundation of its higher-level components:

● Namespaces
● Pods
● Services
● Labels
● Selectors
Namespaces

Namespaces are a logical cluster or environment, and are the primary method of partitioning a cluster or scoping access.

apiVersion: v1
kind: Namespace
metadata:
  name: prod
  labels:
    app: MyBigWebApp

$ kubectl get ns --show-labels
NAME          STATUS    AGE       LABELS
default       Active    11h       <none>
kube-public   Active    11h       <none>
kube-system   Active    11h       <none>
prod          Active    6s        app=MyBigWebApp
Default Namespaces
● default: The default namespace for any object without a
namespace.
● kube-system: Acts as the home for objects and resources
created by Kubernetes itself.
● kube-public: A special namespace; readable by all users
that is reserved for cluster bootstrapping and
configuration.
$ kubectl get ns --show-labels
NAME STATUS AGE LABELS
default Active 11h <none>
kube-public Active 11h <none>
kube-system Active 11h <none>
Pod

● Atomic unit, or smallest “unit of work”, of Kubernetes.
● Foundational building block of Kubernetes Workloads.
● Pods are one or more containers that share volumes, a network namespace, and are a part of a single context.
Pod example
apiVersion: v1
kind: Pod
metadata:
  name: pod-example
spec:
  containers:
  - name: nginx
    image: nginx:stable-alpine
    ports:
    - containerPort: 80
Key Pod Container Attributes
● name - the name of the container
● image - the container image
● ports - array of ports to expose; a port can be granted a friendly name, and a protocol may be specified
● env - array of environment variables
● command - entrypoint array (equivalent to the Docker ENTRYPOINT)
● args - arguments to pass to the command (equivalent to the Docker CMD)

Example container spec:

- name: nginx
  image: nginx:stable-alpine
  ports:
  - containerPort: 80
    name: http
    protocol: TCP
  env:
  - name: MYVAR
    value: isAwesome
  command: ["/bin/sh", "-c"]
  args: ["echo ${MYVAR}"]
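Put together, these attributes form the containers section of a complete Pod manifest. A minimal sketch using the same illustrative values as the slide (the Pod name and the MYVAR variable are only examples):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: attr-example       # illustrative name
spec:
  containers:
  - name: nginx
    image: nginx:stable-alpine
    ports:
    - containerPort: 80
      name: http
      protocol: TCP
    env:
    - name: MYVAR          # illustrative environment variable
      value: isAwesome
    command: ["/bin/sh", "-c"]   # overrides the image ENTRYPOINT
    args: ["echo ${MYVAR}"]      # overrides the image CMD; the shell expands ${MYVAR}
```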
Labels

● Key-value pairs that are used to identify, describe, and group together related sets of objects or resources.
● NOT a characteristic of uniqueness.
● Have a strict syntax with a slightly limited character set*.

* https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set
Labels Example
apiVersion: v1
kind: Pod
metadata:
  name: pod-label-example
  labels:
    app: nginx
    env: prod
spec:
  containers:
  - name: nginx
    image: nginx:stable-alpine
    ports:
    - containerPort: 80
Selectors

Selectors use labels to filter or select objects, and are used throughout Kubernetes.
Selector Example

apiVersion: v1
kind: Pod
metadata:
  name: pod-label-example
  labels:
    app: nginx
    env: prod
spec:
  containers:
  - name: nginx
    image: nginx:stable-alpine
    ports:
    - containerPort: 80
  nodeSelector:
    gpu: nvidia
Selector Types
Equality-based selectors allow for simple filtering (=, ==, or !=):

selector:
  matchLabels:
    gpu: nvidia

Set-based selectors are supported on a limited subset of objects. However, they provide a method of filtering on a set of values, and support multiple operators, including In, NotIn, and Exists:

selector:
  matchExpressions:
  - key: gpu
    operator: In
    values: ["nvidia"]
Quiz

Selectors in action
Services
● Unified method of accessing the exposed workloads of
Pods.
● Durable resource (unlike Pods)
○ static cluster-unique IP
○ static namespaced DNS name

<service name>.<namespace>.svc.cluster.local
Services

● Target Pods using equality-based selectors.
● Use kube-proxy to provide simple load balancing.
● kube-proxy acts as a daemon that creates local entries in the host’s iptables for every service.
Service Types

There are 4 major service types:

● ClusterIP (default)
● NodePort
● LoadBalancer
● ExternalName
ClusterIP Service

ClusterIP services expose a service on a strictly cluster-internal virtual IP.

apiVersion: v1
kind: Service
metadata:
  name: example-prod
spec:
  selector:
    app: nginx
    env: prod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Cluster IP Service

Name: example-prod
Selector: app=nginx,env=prod
Type: ClusterIP
IP: 10.96.28.176
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 10.255.16.3:80,
10.255.16.4:80

/ # nslookup example-prod.default.svc.cluster.local

Name: example-prod.default.svc.cluster.local
Address 1: 10.96.28.176 example-prod.default.svc.cluster.local
NodePort Service

● NodePort services extend the ClusterIP service.
● Exposes a port on every node’s IP.
● The port can either be statically defined, or dynamically taken from a range between 30000-32767.

apiVersion: v1
kind: Service
metadata:
  name: example-prod
spec:
  type: NodePort
  selector:
    app: nginx
    env: prod
  ports:
  - nodePort: 32410
    protocol: TCP
    port: 80
    targetPort: 80
NodePort Service

Name: example-prod
Selector: app=nginx,env=prod
Type: NodePort
IP: 10.96.28.176
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 32410/TCP
Endpoints: 10.255.16.3:80,
10.255.16.4:80
LoadBalancer Service

● LoadBalancer services extend NodePort.
● Works in conjunction with an external system to map a cluster-external IP to the exposed service.

apiVersion: v1
kind: Service
metadata:
  name: example-prod
spec:
  type: LoadBalancer
  selector:
    app: nginx
    env: prod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
LoadBalancer Service

Name: example-prod
Selector: app=nginx,env=prod
Type: LoadBalancer
IP: 10.96.28.176
LoadBalancer
Ingress: 172.17.18.43
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 32410/TCP
Endpoints: 10.255.16.3:80,
10.255.16.4:80
ExternalName Service

● ExternalName is used to reference endpoints OUTSIDE the cluster.
● Creates an internal CNAME DNS entry that aliases another.

apiVersion: v1
kind: Service
metadata:
  name: example-prod
spec:
  type: ExternalName
  externalName: example.com
Exercise 3

Your first basic pod


bit.ly/dlh-kube-exo3
Kubernetes APIs
Workloads
Workloads

Workloads within Kubernetes are higher-level objects that manage Pods or other higher-level objects.

In ALL CASES a Pod Template is included, and acts as the base tier of management.
Pod Template

● Workload Controllers manage instances of Pods based off a provided template.
● Pod Templates are Pod specs with limited metadata.
● Controllers use Pod Templates to make actual pods.

Pod:

apiVersion: v1
kind: Pod
metadata:
  name: pod-example
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx

Pod Template:

template:
  metadata:
    labels:
      app: nginx
  spec:
    containers:
    - name: nginx
      image: nginx
ReplicaSet

● Primary method of managing pod replicas and their lifecycle.
● Includes their scheduling, scaling, and deletion.
● Their job is simple: always ensure the desired number of Pods are running.
ReplicaSet

● replicas: The desired number of instances of the Pod.
● selector: The label selector for the ReplicaSet; it will manage ALL Pod instances that it targets, whether desired or not.

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
      env: prod
  template:
    <pod template>
ReplicaSet
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
      env: prod
  template:
    metadata:
      labels:
        app: nginx
        env: prod
    spec:
      containers:
      - name: nginx
        image: nginx:stable-alpine
        ports:
        - containerPort: 80

$ kubectl get pods
NAME               READY     STATUS    RESTARTS   AGE
rs-example-9l4dt   1/1       Running   0          1h
rs-example-b7bcg   1/1       Running   0          1h
rs-example-mkll2   1/1       Running   0          1h

$ kubectl describe rs rs-example
Name:         rs-example
Namespace:    default
Selector:     app=nginx,env=prod
Labels:       app=nginx
              env=prod
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=nginx
           env=prod
  Containers:
   nginx:
    Image:        nginx:stable-alpine
    Port:         80/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  16s   replicaset-controller  Created pod: rs-example-mkll2
  Normal  SuccessfulCreate  16s   replicaset-controller  Created pod: rs-example-b7bcg
  Normal  SuccessfulCreate  16s   replicaset-controller  Created pod: rs-example-9l4dt
Deployment

● Declarative method of managing Pods via ReplicaSets.
● Provides rollback functionality and update control.
● Updates are managed through the pod-template-hash label.
● Each iteration creates a unique label that is assigned to both the ReplicaSet and subsequent Pods.
Deployment

● revisionHistoryLimit: The number of previous iterations of the Deployment to retain.
● strategy: Describes the method of updating the Pods based on the type. Valid options are Recreate or RollingUpdate.
  ○ Recreate: All existing Pods are killed before the new ones are created.
  ○ RollingUpdate: Cycles through updating the Pods according to the parameters maxSurge and maxUnavailable.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-example
spec:
  replicas: 3
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: nginx
      env: prod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    <pod template>
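The interplay of these parameters can be made concrete with a worked example using the manifest’s values (replicas: 3, maxSurge: 1, maxUnavailable: 0):

```
total Pods during update  ≤ replicas + maxSurge       = 3 + 1 = 4
ready Pods during update  ≥ replicas - maxUnavailable = 3 - 0 = 3

→ the rollout adds one new Pod at a time, and removes an old Pod
  only once its replacement reports Ready.
```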
RollingUpdate Deployment

Updating the pod template generates a new ReplicaSet revision.

R1 pod-template-hash: 6766777fff
R2 pod-template-hash: 54f7ff7d6d

$ kubectl get replicaset
NAME               DESIRED   CURRENT   READY     AGE
mydep-6766777fff   3         3         3         5h

$ kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
mydep-6766777fff-9r2zn   1/1       Running   0          5h
mydep-6766777fff-hsfz9   1/1       Running   0          5h
mydep-6766777fff-sjxhf   1/1       Running   0          5h
RollingUpdate Deployment

The new ReplicaSet is initially scaled up based on maxSurge.

R1 pod-template-hash: 6766777fff
R2 pod-template-hash: 54f7ff7d6d

$ kubectl get replicaset
NAME               DESIRED   CURRENT   READY     AGE
mydep-54f7ff7d6d   1         1         1         5s
mydep-6766777fff   2         3         3         5h

$ kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
mydep-54f7ff7d6d-9gvll   1/1       Running   0          2s
mydep-6766777fff-9r2zn   1/1       Running   0          5h
mydep-6766777fff-hsfz9   1/1       Running   0          5h
mydep-6766777fff-sjxhf   1/1       Running   0          5h
RollingUpdate Deployment

Old Pods are phased out as managed by maxSurge and maxUnavailable.

R1 pod-template-hash: 6766777fff
R2 pod-template-hash: 54f7ff7d6d

$ kubectl get replicaset
NAME               DESIRED   CURRENT   READY     AGE
mydep-54f7ff7d6d   2         2         2         8s
mydep-6766777fff   2         2         2         5h

$ kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
mydep-54f7ff7d6d-9gvll   1/1       Running   0          5s
mydep-54f7ff7d6d-cqvlq   1/1       Running   0          2s
mydep-6766777fff-9r2zn   1/1       Running   0          5h
mydep-6766777fff-hsfz9   1/1       Running   0          5h
RollingUpdate Deployment

Old Pods are phased out as managed by maxSurge and maxUnavailable.

R1 pod-template-hash: 6766777fff
R2 pod-template-hash: 54f7ff7d6d

$ kubectl get replicaset
NAME               DESIRED   CURRENT   READY     AGE
mydep-54f7ff7d6d   3         3         3         10s
mydep-6766777fff   0         1         1         5h

$ kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
mydep-54f7ff7d6d-9gvll   1/1       Running   0          7s
mydep-54f7ff7d6d-cqvlq   1/1       Running   0          5s
mydep-54f7ff7d6d-gccr6   1/1       Running   0          2s
mydep-6766777fff-9r2zn   1/1       Running   0          5h
RollingUpdate Deployment

Old Pods are phased out as managed by maxSurge and maxUnavailable.

R1 pod-template-hash: 6766777fff
R2 pod-template-hash: 54f7ff7d6d

$ kubectl get replicaset
NAME               DESIRED   CURRENT   READY     AGE
mydep-54f7ff7d6d   3         3         3         13s
mydep-6766777fff   0         0         0         5h

$ kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
mydep-54f7ff7d6d-9gvll   1/1       Running   0          10s
mydep-54f7ff7d6d-cqvlq   1/1       Running   0          8s
mydep-54f7ff7d6d-gccr6   1/1       Running   0          5s
RollingUpdate Deployment

The update to the new Deployment revision is complete.

R1 pod-template-hash: 6766777fff
R2 pod-template-hash: 54f7ff7d6d

$ kubectl get replicaset
NAME               DESIRED   CURRENT   READY     AGE
mydep-54f7ff7d6d   3         3         3         15s
mydep-6766777fff   0         0         0         5h

$ kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
mydep-54f7ff7d6d-9gvll   1/1       Running   0          12s
mydep-54f7ff7d6d-cqvlq   1/1       Running   0          10s
mydep-54f7ff7d6d-gccr6   1/1       Running   0          7s
DaemonSet

● Ensures that all nodes matching certain criteria will run an instance of the supplied Pod.
● DaemonSets bypass default scheduling mechanisms.
● Ideal for cluster-wide services such as log forwarding or health monitoring.
● Revisions are managed via a controller-revision-hash label.
DaemonSet

● revisionHistoryLimit: The number of previous iterations of the DaemonSet to retain.
● updateStrategy: Describes the method of updating the Pods based on the type. Valid options are RollingUpdate or OnDelete.
  ○ RollingUpdate: Cycles through updating the Pods according to the value of maxUnavailable.
  ○ OnDelete: The new instance of the Pod is deployed ONLY after the current instance is deleted.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-example
spec:
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: nginx
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    spec:
      nodeSelector:
        nodeType: edge
    <pod template>
DaemonSet
● spec.template.spec.nodeSelector: The primary selector used to target nodes.
● Default host labels:
  ○ kubernetes.io/hostname
  ○ beta.kubernetes.io/os
  ○ beta.kubernetes.io/arch
● Cloud host labels:
  ○ failure-domain.beta.kubernetes.io/zone
  ○ failure-domain.beta.kubernetes.io/region
  ○ beta.kubernetes.io/instance-type

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-example
spec:
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: nginx
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    spec:
      nodeSelector:
        nodeType: edge
    <pod template>
DaemonSet
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-example
spec:
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: nginx
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        nodeType: edge
      containers:
      - name: nginx
        image: nginx:stable-alpine
        ports:
        - containerPort: 80

$ kubectl get pods
NAME               READY     STATUS    RESTARTS   AGE
ds-example-x8kkz   1/1       Running   0          1m

$ kubectl describe ds ds-example
Name:           ds-example
Selector:       app=nginx,env=prod
Node-Selector:  nodeType=edge
Labels:         app=nginx
                env=prod
Annotations:    <none>
Desired Number of Nodes Scheduled: 1
Current Number of Nodes Scheduled: 1
Number of Nodes Scheduled with Up-to-date Pods: 1
Number of Nodes Scheduled with Available Pods: 1
Number of Nodes Misscheduled: 0
Pods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=nginx
           env=prod
  Containers:
   nginx:
    Image:        nginx:stable-alpine
    Port:         80/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From                  Message
  ----    ------            ----  ----                  -------
  Normal  SuccessfulCreate  48s   daemonset-controller  Created pod: ds-example-x8kkz
StatefulSet

● Tailored to managing Pods that must persist or maintain state.
● Pod identity, including hostname, network, and storage, WILL be persisted.
● Each Pod is assigned a unique ordinal name following the convention of ‘<statefulset name>-<ordinal index>’.
StatefulSet

● The naming convention is also used in a Pod’s network identity and Volumes.
● The Pod lifecycle will be ordered and follow consistent patterns.
● Revisions are managed via a controller-revision-hash label.
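To make the naming and DNS behaviour concrete, a sketch of what a StatefulSet named sts-example with replicas: 2 and serviceName: app produces (the default namespace is assumed here for illustration):

```
# Pods are created in order, each with a stable ordinal name:
sts-example-0
sts-example-1

# Via the headless service (serviceName: app), each Pod also gets
# a stable DNS entry that survives rescheduling:
sts-example-0.app.default.svc.cluster.local
sts-example-1.app.default.svc.cluster.local
```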
StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sts-example
spec:
  replicas: 2
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: stateful
  serviceName: app
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 0
  template:
    metadata:
      labels:
        app: stateful
    spec:
      containers:
      - name: nginx
        image: nginx:stable-alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: standard
      resources:
        requests:
          storage: 1Gi
StatefulSet

● revisionHistoryLimit: The number of previous iterations of the StatefulSet to retain.
● serviceName: The name of the associated headless service, i.e. a service without a ClusterIP.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sts-example
spec:
  replicas: 2
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: stateful
  serviceName: app
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 0
  template:
    <pod template>
StatefulSet
● updateStrategy: Describes the method of updating the Pods based on the type. Valid options are OnDelete or RollingUpdate.
  ○ OnDelete: The new instance of the Pod is deployed ONLY after the current instance is deleted.
  ○ RollingUpdate: Pods with an ordinal greater than the partition value will be updated one-by-one in reverse order.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sts-example
spec:
  replicas: 2
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: stateful
  serviceName: app
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 0
  template:
    <pod template>
Exercise 4

It’s time to deploy


bit.ly/dlh-kube-exo4
Kubernetes APIs
Batch API
Job

● The Job controller ensures that one or more pods are executed and successfully terminate.
● It will continue to try to execute the job until it satisfies the completion and/or parallelism condition.
● Pods are NOT cleaned up until the job itself is deleted.*
Job

● backoffLimit: The number of failures before the job itself is considered failed.
● completions: The total number of successful completions desired.
● parallelism: How many instances of the pod can be run concurrently.
● spec.template.spec.restartPolicy: Jobs only support a restartPolicy of type Never or OnFailure.

apiVersion: batch/v1
kind: Job
metadata:
  name: job-example
spec:
  backoffLimit: 4
  completions: 4
  parallelism: 2
  template:
    spec:
      restartPolicy: Never
    <pod-template>
Job
apiVersion: batch/v1
kind: Job
metadata:
  name: job-example
spec:
  backoffLimit: 4
  completions: 4
  parallelism: 2
  template:
    spec:
      containers:
      - name: hello
        image: alpine:latest
        command: ["/bin/sh", "-c"]
        args: ["echo hello from $HOSTNAME!"]
      restartPolicy: Never

$ kubectl get pods --show-all
NAME                READY     STATUS      RESTARTS   AGE
job-example-dvxd2   0/1       Completed   0          51m
job-example-hknns   0/1       Completed   0          52m
job-example-tphkm   0/1       Completed   0          51m
job-example-v5fvq   0/1       Completed   0          52m

$ kubectl describe job job-example
Name:           job-example
Namespace:      default
Selector:       controller-uid=19d122f4-1576-11e8-a4e2-080027a3682b
Labels:         controller-uid=19d122f4-1576-11e8-a4e2-080027a3682b
                job-name=job-example
Annotations:    <none>
Parallelism:    2
Completions:    4
Start Time:     Mon, 19 Feb 2018 08:09:21 -0500
Pods Statuses:  0 Running / 4 Succeeded / 0 Failed
Pod Template:
  Labels:  controller-uid=19d122f4-1576-11e8-a4e2-080027a3682b
           job-name=job-example
  Containers:
   hello:
    Image:  alpine:latest
    Port:   <none>
    Command:
      /bin/sh
      -c
    Args:
      echo hello from $HOSTNAME!
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From            Message
  ----    ------            ----  ----            -------
  Normal  SuccessfulCreate  52m   job-controller  Created pod: job-example-v5fvq
  Normal  SuccessfulCreate  52m   job-controller  Created pod: job-example-hknns
  Normal  SuccessfulCreate  51m   job-controller  Created pod: job-example-tphkm
  Normal  SuccessfulCreate  51m   job-controller  Created pod: job-example-dvxd2
CronJob

An extension of the Job Controller, it provides a method of executing jobs on a cron-like schedule.

CronJobs within Kubernetes use UTC ONLY.
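The UTC-only behaviour is easy to trip over when writing a schedule from a machine set to local time. A quick check (assuming GNU or BSD `date` is available) shows the clock a schedule will actually be evaluated against:

```shell
# CronJob schedules are evaluated against UTC, not the node's local timezone.
# Show the current time in UTC before writing a schedule:
date -u
# Confirm which timezone is being reported:
date -u +%Z
```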
CronJob

● schedule: The cron schedule for the job.
● successfulJobsHistoryLimit: The number of successful jobs to retain.
● failedJobsHistoryLimit: The number of failed jobs to retain.

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronjob-example
spec:
  schedule: "*/1 * * * *"
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      completions: 4
      parallelism: 2
      template:
        <pod template>
CronJob

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronjob-example
spec:
  schedule: "*/1 * * * *"
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      completions: 4
      parallelism: 2
      template:
        spec:
          containers:
          - name: hello
            image: alpine:latest
            command: ["/bin/sh", "-c"]
            args: ["echo hello from $HOSTNAME!"]
          restartPolicy: Never

$ kubectl describe cronjob cronjob-example
Name:                       cronjob-example
Namespace:                  default
Labels:                     <none>
Annotations:                <none>
Schedule:                   */1 * * * *
Concurrency Policy:         Allow
Suspend:                    False
Starting Deadline Seconds:  <unset>
Selector:                   <unset>
Parallelism:                2
Completions:                4
Pod Template:
  Labels:  <none>
  Containers:
   hello:
    Image:  alpine:latest
    Port:   <none>
    Command:
      /bin/sh
      -c
    Args:
      echo hello from $HOSTNAME!
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Last Schedule Time:  Mon, 19 Feb 2018 09:54:00 -0500
Active Jobs:         cronjob-example-1519052040
Events:
  Type    Reason            Age  From                Message
  ----    ------            ---  ----                -------
  Normal  SuccessfulCreate  3m   cronjob-controller  Created job cronjob-example-1519051860
  Normal  SawCompletedJob   2m   cronjob-controller  Saw completed job: cronjob-example-1519051860
  Normal  SuccessfulCreate  2m   cronjob-controller  Created job cronjob-example-1519051920
  Normal  SawCompletedJob   1m   cronjob-controller  Saw completed job: cronjob-example-1519051920
  Normal  SuccessfulCreate  1m   cronjob-controller  Created job cronjob-example-1519051980

$ kubectl get jobs
NAME                         DESIRED   SUCCESSFUL   AGE
cronjob-example-1519053240   4         4            2m
cronjob-example-1519053300   4         4            1m
cronjob-example-1519053360   4         4            26s
Exercise 5

Jobs and scheduled Jobs
bit.ly/dlh-kube-exo5
Kubernetes Storage

Storage

Pods by themselves are useful, but many workloads require exchanging data between containers, or persisting some form of data.

For this we have Volumes, PersistentVolumes, PersistentVolumeClaims, and StorageClasses.
Volumes
● Storage that is tied to the Pod’s Lifecycle.
● A pod can have one or more types of volumes attached
to it.
● Can be consumed by any of the containers within the
pod.
● Survive Pod restarts; however their durability beyond
that is dependent on the Volume Type.
Volume Types

● awsElasticBlockStore
● azureDisk
● azureFile
● cephfs
● configMap
● csi
● downwardAPI
● emptyDir
● fc (fibre channel)
● flocker
● gcePersistentDisk
● gitRepo
● glusterfs
● hostPath
● iscsi
● local
● nfs
● persistentVolumeClaim
● portworxVolume
● projected
● quobyte
● rbd
● scaleIO
● secret
● storageos
● vsphereVolume

(Legend from the original slide: Persistent Volume Supported Volumes)
● volumes: A list of volume objects to be attached to the Pod. Every object within the list must have its own unique name.
● volumeMounts: A container-specific list referencing the Pod volumes by name, along with their desired mountPath.

apiVersion: v1
kind: Pod
metadata:
  name: volume-example
spec:
  containers:
  - name: nginx
    image: nginx:stable-alpine
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
      readOnly: true
  - name: content
    image: alpine:latest
    command: ["/bin/sh", "-c"]
    args:
    - while true; do
        date >> /html/index.html;
        sleep 5;
      done
    volumeMounts:
    - name: html
      mountPath: /html
  volumes:
  - name: html
    emptyDir: {}
Persistent Volumes

● A PersistentVolume (PV) represents a storage resource.
● PVs are a cluster-wide resource linked to a backing storage provider: NFS, GCEPersistentDisk, RBD, etc.
● Generally provisioned by an administrator.
● Their lifecycle is handled independently from a pod.
● CANNOT be attached to a Pod directly. Relies on a PersistentVolumeClaim.
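A Pod never names a PV directly; it mounts storage through a claim. A minimal sketch of that wiring, assuming the `pvc-sc-example` claim defined on the PersistentVolumeClaim slide later in this section (the Pod name and mount path here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-consumer-example      # hypothetical name, for illustration only
spec:
  containers:
  - name: app
    image: nginx:stable-alpine
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-sc-example   # references the PVC, never the PV itself
```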
PersistentVolumeClaims

● A PersistentVolumeClaim (PVC) is a namespaced request for storage.
● Satisfies a set of requirements instead of mapping to a storage resource directly.
● Ensures that an application’s ‘claim’ for storage is portable across numerous backends or providers.
Persistent Volumes and Claims

[Diagram: Cluster Users and Cluster Admins]
PersistentVolume

● capacity.storage: The total amount of available storage.
● volumeMode: The type of volume; this can be either Filesystem or Block.
● accessModes: A list of the supported methods of accessing the volume. Options include:
  ○ ReadWriteOnce
  ○ ReadOnlyMany
  ○ ReadWriteMany

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfsserver
spec:
  capacity:
    storage: 50Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Delete
  storageClassName: slow
  mountOptions:
  - hard
  - nfsvers=4.1
  nfs:
    path: /exports
    server: 172.22.0.42
PersistentVolume

● persistentVolumeReclaimPolicy: The behaviour of the volume once its bound PVC has been deleted. Options include:
  ○ Retain - manual clean-up
  ○ Delete - storage asset deleted by provider
● storageClassName: Optional name of the storage class that PVCs can reference. If provided, ONLY PVCs referencing the name may consume it.
● mountOptions: Optional mount options for the PV.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfsserver
spec:
  capacity:
    storage: 50Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Delete
  storageClassName: slow
  mountOptions:
  - hard
  - nfsvers=4.1
  nfs:
    path: /exports
    server: 172.22.0.42
PersistentVolumeClaim

● accessModes: The selected method of accessing the storage. This MUST be a subset of what is defined on the target PV or Storage Class.
  ○ ReadWriteOnce
  ○ ReadOnlyMany
  ○ ReadWriteMany
● resources.requests.storage: The desired amount of storage for the claim.
● storageClassName: The name of the desired Storage Class.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-sc-example
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: slow
PVs and PVCs with Selectors

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-selector-example
  labels:
    type: hostpath
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: "/mnt/data"

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-selector-example
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      type: hostpath
PV Phases

● Available - The PV is ready and available to be consumed.
● Bound - The PV has been bound to a claim.
● Released - The binding PVC has been deleted, and the PV is pending reclamation.
● Failed - An error has been encountered attempting to reclaim the PV.
StorageClass

● Storage classes are an abstraction on top of an external storage resource (PV).
● Work hand-in-hand with the external storage system to enable dynamic provisioning of storage.
● Eliminates the need for the cluster admin to pre-provision a PV.
StorageClass

1. PVC makes a request of the StorageClass.
2. StorageClass provisions the request through the API with the external storage system.
3. External storage system creates a PV strictly satisfying the PVC request.
4. Provisioned PV is bound to the requesting PVC.

uid: 9df65c6e-1a69-11e8-ae10-080027a3682b
pv: pvc-9df65c6e-1a69-11e8-ae10-080027a3682b
StorageClass

● provisioner: Defines the ‘driver’ to be used for provisioning of the external storage.
● parameters: A hash of the various configuration parameters for the provisioner.
● reclaimPolicy: The behaviour for the backing storage when the PVC is deleted.
  ○ Retain - manual clean-up
  ○ Delete - storage asset deleted by provider

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zones: us-central1-a, us-central1-b
reclaimPolicy: Delete
Available StorageClasses

● AWSElasticBlockStore
● AzureFile
● AzureDisk
● CephFS
● Cinder
● FC
● Flocker
● GCEPersistentDisk
● Glusterfs
● iSCSI
● Quobyte
● NFS
● RBD
● VsphereVolume
● PortworxVolume
● ScaleIO
● StorageOS
● Local

(Legend from the original slide: Internal Provisioner)
Exercise 6

Use storage
bit.ly/dlh-kube-exo6
Configmap and Secret

Configuration

Kubernetes has an integrated pattern for decoupling configuration from the application or container.

This pattern makes use of two Kubernetes components: ConfigMaps and Secrets.
ConfigMap

● Externalized data stored within Kubernetes.
● Can be referenced through several different means:
  ○ environment variable
  ○ a command line argument (via env var)
  ○ injected as a file into a volume mount
● Can be created from a manifest, literals, directories, or files directly.
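The environment-variable form is shown at the end of this section; as a sketch of the volume-mount form, each key of the ConfigMap becomes a file under the mountPath. This assumes the `manifest-example` ConfigMap from these slides (the Pod name and mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cm-volume-example        # hypothetical name, for illustration only
spec:
  containers:
  - name: mypod
    image: alpine:latest
    command: ["/bin/sh", "-c"]
    args: ["cat /etc/config/city"]   # keys become files: /etc/config/city, /etc/config/state, ...
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: manifest-example
```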
ConfigMap

● data: Contains key-value pairs of ConfigMap contents.

apiVersion: v1
kind: ConfigMap
metadata:
  name: manifest-example
data:
  state: Michigan
  city: Ann Arbor
  content: |
    Look at this,
    its multiline!
ConfigMap Example

All produce a ConfigMap with the same content!

apiVersion: v1
kind: ConfigMap
metadata:
  name: manifest-example
data:
  city: Ann Arbor
  state: Michigan

$ kubectl create configmap literal-example \
> --from-literal="city=Ann Arbor" --from-literal=state=Michigan
configmap "literal-example" created

$ cat cm/city
Ann Arbor
$ cat cm/state
Michigan
$ kubectl create configmap dir-example --from-file=cm/
configmap "dir-example" created

$ kubectl create configmap file-example --from-file=cm/city --from-file=cm/state
configmap "file-example" created
Secret

● Functionally identical to a ConfigMap.
● Stored as base64 encoded content.
● Encrypted at rest within etcd (if configured!).
● Ideal for username/passwords, certificates or other sensitive information that should not be stored in a container.
● Can be created from a manifest, literals, directories, or from files directly.
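Because base64 is an encoding rather than encryption, anyone who can read a manifest can recover the values. The encoded strings used on these slides round-trip in the shell:

```shell
# Encode a value for a Secret manifest (-n avoids encoding a trailing newline):
echo -n 'example' | base64          # ZXhhbXBsZQ==
# Decode a value taken from an existing manifest:
echo 'bXlwYXNzd29yZA==' | base64 -d # mypassword
```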
Secret

● type: There are three different types of secrets within Kubernetes:
  ○ docker-registry - credentials used to authenticate to a container registry
  ○ generic/Opaque - literal values from different sources
  ○ tls - a certificate based secret
● data: Contains key-value pairs of base64 encoded content.

apiVersion: v1
kind: Secret
metadata:
  name: manifest-secret
type: Opaque
data:
  username: ZXhhbXBsZQ==
  password: bXlwYXNzd29yZA==
Secret Example

All produce a Secret with the same content!

apiVersion: v1
kind: Secret
metadata:
  name: manifest-example
type: Opaque
data:
  username: ZXhhbXBsZQ==
  password: bXlwYXNzd29yZA==

$ kubectl create secret generic literal-secret \
> --from-literal=username=example \
> --from-literal=password=mypassword
secret "literal-secret" created

$ cat secret/username
example
$ cat secret/password
mypassword
$ kubectl create secret generic dir-secret --from-file=secret/
secret "dir-secret" created

$ kubectl create secret generic file-secret --from-file=secret/username --from-file=secret/password
secret "file-secret" created
Injecting as Environment Variable

apiVersion: batch/v1
kind: Job
metadata:
  name: cm-env-example
spec:
  template:
    spec:
      containers:
      - name: mypod
        image: alpine:latest
        command: ["/bin/sh", "-c"]
        args: ["printenv CITY"]
        env:
        - name: CITY
          valueFrom:
            configMapKeyRef:
              name: manifest-example
              key: city
      restartPolicy: Never

apiVersion: batch/v1
kind: Job
metadata:
  name: secret-env-example
spec:
  template:
    spec:
      containers:
      - name: mypod
        image: alpine:latest
        command: ["/bin/sh", "-c"]
        args: ["printenv USERNAME"]
        env:
        - name: USERNAME
          valueFrom:
            secretKeyRef:
              name: manifest-example
              key: username
      restartPolicy: Never
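A related pattern not shown on the slide: `envFrom` imports every key of a ConfigMap (or of a Secret, via `secretRef`) as an environment variable in one stanza, instead of naming keys one by one. A sketch against the `manifest-example` ConfigMap from earlier (the Job name is illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: cm-envfrom-example       # hypothetical name, for illustration only
spec:
  template:
    spec:
      containers:
      - name: mypod
        image: alpine:latest
        command: ["/bin/sh", "-c"]
        args: ["printenv city state"]   # every data key is exposed under its own name
        envFrom:
        - configMapRef:
            name: manifest-example
      restartPolicy: Never
```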
Exercise 7

ConfigMap
bit.ly/dlh-kube-exo7