
Commit b29d7be
committed 2019-02-11: improve the K8S notes
1 parent 5ffb78e commit b29d7be

1 file changed: +76 / -64 lines


markdown-file/K8S-Install-And-Usage.md (76 additions, 64 deletions)
- <https://github.com/kubernetes-incubator/kubespray>
- <https://github.com/apprenda/kismatic>

#### Getting started - Kubernetes 1.13.3

- Three machines:
    - master-1: `192.168.0.127`
    - node-1: `192.168.0.128`
    - node-2: `192.168.0.129`
- Key command, list the Docker versions available to install: `yum list docker-ce --showduplicates`
- On every node, configure the Kubernetes yum repo, then install kubeadm, kubelet and kubectl, all from the Aliyun mirror
- While initializing the cluster, kubeadm pulls many images, by default from Google's registry. Kubernetes 1.13 added a new option, `--image-repository`, which is a lifesaver: it lets you pull from a reachable mirror instead.

#### Installation steps

- Sync the clock on every machine: `systemctl start chronyd.service && systemctl enable chronyd.service`
- Disable the firewall, SELinux and swap on every machine:

```
systemctl stop firewalld.service
systemctl disable firewalld.service
systemctl disable iptables.service

iptables -P FORWARD ACCEPT

setenforce 0 && sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

echo "vm.swappiness = 0" >> /etc/sysctl.conf
swapoff -a && sysctl -w vm.swappiness=0
```
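One pitfall in the swap step above: `echo "vm.swappiness = 0" >> /etc/sysctl.conf` appends a duplicate line every time the prep steps are re-run. A minimal idempotent sketch, writing to a temp file here as a stand-in for the real `/etc/sysctl.conf`:

```shell
# Idempotent sketch: add "vm.swappiness = 0" only if the exact line is missing,
# so re-running the prep script cannot stack duplicates. CONF is a temp
# stand-in for /etc/sysctl.conf.
set -eu
CONF="$(mktemp)"

ensure_line() {
  line="$1"; file="$2"
  grep -qxF "$line" "$file" || echo "$line" >> "$file"
}

ensure_line "vm.swappiness = 0" "$CONF"
ensure_line "vm.swappiness = 0" "$CONF"   # second call is a no-op

grep -c 'vm.swappiness' "$CONF"   # prints 1
```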
- Set the hostname and hosts entries on each machine:

```
# run the line matching each machine on that machine
hostnamectl --static set-hostname k8s-master-1
hostnamectl --static set-hostname k8s-node-1
hostnamectl --static set-hostname k8s-node-2

vim /etc/hosts
192.168.0.127 k8s-master-1
192.168.0.128 k8s-node-1
192.168.0.129 k8s-node-2
```
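The hosts entries above can also be generated from a single node list, which avoids typos when the cluster grows. A sketch, writing to a temp file rather than the real `/etc/hosts`:

```shell
# Sketch: generate hosts entries from one "ip hostname" list.
# HOSTS_FILE is a temp stand-in for /etc/hosts.
set -eu
HOSTS_FILE="$(mktemp)"

NODES="192.168.0.127 k8s-master-1
192.168.0.128 k8s-node-1
192.168.0.129 k8s-node-2"

while read -r ip name; do
  printf '%s %s\n' "$ip" "$name" >> "$HOSTS_FILE"
done <<EOF
$NODES
EOF

cat "$HOSTS_FILE"
```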
- Set up passwordless SSH from the master:

```
# generate the key pair
ssh-keygen -t rsa

# append the public key to authorized_keys
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys

# test
ssh localhost

# copy the public key to the other machines (enter each node's password when prompted)
ssh-copy-id -i ~/.ssh/id_rsa.pub -p 22 root@k8s-node-1
ssh-copy-id -i ~/.ssh/id_rsa.pub -p 22 root@k8s-node-2

# test from the master
ssh k8s-master-1
ssh k8s-node-1
ssh k8s-node-2
```
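The two `ssh-copy-id` calls can also be driven by a loop over the node names. Sketched here as a dry run that only builds and prints the commands, since the real run is interactive (it prompts for each node's password):

```shell
# Dry-run sketch: build the ssh-copy-id command for every node name instead of
# running it interactively; drop the echo to actually distribute the key.
set -eu
CMDS="$(for node in k8s-node-1 k8s-node-2; do
  echo "ssh-copy-id -i ~/.ssh/id_rsa.pub -p 22 root@$node"
done)"
printf '%s\n' "$CMDS"
```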
- Configure the Kubernetes yum repo on every machine:

```
vim /etc/yum.repos.d/kubernetes.repo

[kubernetes]
…
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors…

scp -r /etc/yum.repos.d/kubernetes.repo root@k8s-node-1:/etc/yum.repos.d/
scp -r /etc/yum.repos.d/kubernetes.repo root@k8s-node-2:/etc/yum.repos.d/
```
- Create the flannel CNI config file on the master:

```
mkdir -p /etc/cni/net.d && vim /etc/cni/net.d/10-flannel.conflist

{
  "name": "cbr0",
  "plugins": [
    …
  ]
}
```
- Create the sysctl config on every machine:

```
vim /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
…

scp -r /etc/sysctl.d/k8s.conf root@k8s-node-1:/etc/sysctl.d/
scp -r /etc/sysctl.d/k8s.conf root@k8s-node-2:/etc/sysctl.d/

modprobe br_netfilter && sysctl -p /etc/sysctl.d/k8s.conf
```
- Install the components on every machine:

```
yum install -y kubelet-1.13.3 kubeadm-1.13.3 kubectl-1.13.3 --disableexcludes=kubernetes
```
- Add an environment variable for kubelet on every machine:

```
vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

# append as the last line:
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
```
- Enable and start kubelet on every machine, then verify the versions:

```
systemctl enable kubelet && systemctl start kubelet

kubeadm version
kubectl version
```
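Since the guide pins 1.13.3 everywhere, it is worth asserting the installed client really matches. A sketch that checks the version string, shown here against a canned sample; in a real run capture it with `V="$(kubectl version --client --short)"`:

```shell
set -eu
# Canned sample of `kubectl version --client --short` output; replace with a
# live capture on the machine being checked.
V='Client Version: v1.13.3'
case "$V" in
  *v1.13.3*) echo "version ok" ;;
  *) echo "unexpected version: $V" >&2; exit 1 ;;
esac
```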
- Initialize the master node:

```
echo 1 > /proc/sys/net/ipv4/ip_forward

kubeadm init \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
--pod-network-cidr 10.244.0.0/16 \
--kubernetes-version 1.13.3 \
--ignore-preflight-errors=Swap
```

Here 10.244.0.0/16 is the pod subnet the flannel plugin uses by default; its value depends on which network plugin you plan to install.

This step downloads a number of docker images, so it can take a while depending on your network. The terminal prints the key output:

```
[init] Using Kubernetes version: v1.13.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
…
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-1 localhost] and IPs [192.168.0.127 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-1 localhost] and IPs [192.168.0.127 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.127]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
…
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.001686 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master-1" as an annotation
[mark-control-plane] Marking the node k8s-master-1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master-1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 8tpo9l.jlw135r8559kaad4
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
…
You can now join any number of machines by running the following on each node
as root:

kubeadm join 192.168.0.127:6443 --token 8tpo9l.jlw135r8559kaad4 --discovery-token-ca-cert-hash sha256:d6594ccc1310a45cbebc45f1c93f5ac113873786365ed63efcf667c952d7d197
```
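The join command at the bottom of that log is worth saving, and it can be pulled out mechanically. A sketch that greps it from a saved copy of the init output (in a real run, capture it with `kubeadm init … | tee kubeadm-init.log`; the sample below reuses the output shown above):

```shell
set -eu
# Stand-in for kubeadm-init.log: a saved sample of the init output above.
LOG="$(mktemp)"
cat > "$LOG" <<'EOF'
[bootstrap-token] Using token: 8tpo9l.jlw135r8559kaad4
You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.0.127:6443 --token 8tpo9l.jlw135r8559kaad4 --discovery-token-ca-cert-hash sha256:d6594ccc1310a45cbebc45f1c93f5ac113873786365ed63efcf667c952d7d197
EOF

# pull out the full join command and just the token
JOIN_CMD="$(grep 'kubeadm join' "$LOG" | sed 's/^ *//')"
TOKEN="$(sed -n 's/.*--token \([^ ]*\).*/\1/p' "$LOG" | head -n 1)"
printf '%s\n' "$JOIN_CMD"
echo "$TOKEN"
```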
- Configure kubectl on the master:

```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
```
- Inspect the cluster from the master:

```
kubeadm token list

kubectl cluster-info
```
- Install Flannel on the master:

```
cd /opt && wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

kubectl apply -f /opt/kube-flannel.yml
```
- Join the cluster from each node:

```
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables

kubeadm join 192.168.0.127:6443 --token 8tpo9l.jlw135r8559kaad4 --discovery-token-ca-cert-hash sha256:d6594ccc1310a45cbebc45f1c93f5ac113873786365ed63efcf667c952d7d197
```

The terminal then prints:

```
…
This node has joined the cluster:
…
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
```
- If a node fails to join, run `kubeadm reset` on it, then join again
- On the master, check the components: `kubectl get cs`

```
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
```

If every component is Healthy, the control plane is fine; otherwise investigate. If necessary, run `kubeadm reset` and re-initialize the cluster.
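That health check can be scripted: count the component lines that do not say Healthy. A sketch run against the sample output above; in a real run, capture with `STATUS="$(kubectl get cs)"`:

```shell
set -eu
# Canned sample of `kubectl get cs` (the output shown above).
STATUS='NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}'

# skip the header, count lines without "Healthy"; 0 means all good
UNHEALTHY="$(printf '%s\n' "$STATUS" | tail -n +2 | grep -vc 'Healthy' || true)"
echo "$UNHEALTHY"   # prints 0
```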
- On the master, check the nodes: `kubectl get nodes`
- If a node stays NotReady, look for errors with `kubectl get pods --all-namespaces`. Pending / ContainerCreating / ImagePullBackOff all mean the Pod is not ready yet; dig into the affected Pod like this:

```
kubectl describe pod <Pod Name> --namespace=kube-system

kubectl logs <Pod Name> -n kube-system

tail -f /var/log/messages
```
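Picking the not-yet-ready Pods out of that listing is a one-line awk filter on the STATUS column. A sketch against a hypothetical sample (the Pod name suffixes below are made up); in a real run, capture with `PODS="$(kubectl get pods --all-namespaces --no-headers)"`:

```shell
set -eu
# Hypothetical sample of `kubectl get pods --all-namespaces --no-headers`
# (columns: NAMESPACE NAME READY STATUS RESTARTS AGE).
PODS='kube-system   coredns-78d4cf999f-6pgfr    0/1   Pending            0   2m
kube-system   kube-flannel-ds-amd64-xxxxx 0/1   ImagePullBackOff   0   1m
kube-system   kube-apiserver-k8s-master-1 1/1   Running            0   3m'

# print the names of Pods whose STATUS is anything other than Running
NOT_READY="$(printf '%s\n' "$PODS" | awk '$4 != "Running" {print $2}')"
printf '%s\n' "$NOT_READY"
```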
#### Key concepts

- The Master node is in charge of scheduling and managing the cluster
