
controller

Date: 2023-07-27

I. Pod controllers

Official controller documentation
Controllers are used to manage pods.

The main controller types are:

- ReplicationController (RC) — the older equivalent of ReplicaSet; Deployments plus ReplicaSets are now recommended instead of RC
- ReplicaSet — a replica set that scales pods out and in
- Deployment — handles pod upgrades and rollbacks
- StatefulSet — deploys stateful pod applications
- DaemonSet — runs on every cluster node (including the masters), e.g. for filebeat or node_exporter
- Job — one-off tasks
- CronJob — periodic tasks
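As a quick check (assuming kubectl is configured against your cluster; output varies by Kubernetes version), the resource kinds behind these controllers can be listed with:

kubectl api-resources | grep -Ei 'deployment|replicaset|replicationcontroller|daemonset|statefulset|cronjob|job'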

Deployment & ReplicaSet

Features of the ReplicaSet controller:

- Supports the newer set-based selectors, which the old RC did not have (a sketch follows this list)
- Scales pods out and in by changing the pod replica count
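For illustration only (this fragment is not part of the manifests used later in this article), a set-based selector looks like this:

# hypothetical fragment: a set-based selector
selector:
  matchExpressions:
  - key: app
    operator: In               # also supported: NotIn, Exists, DoesNotExist
    values: [nginx, web]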

Features of the Deployment controller:

- A Deployment integrates rollout, rolling upgrade, replica creation, and rollback
- A Deployment contains and uses a ReplicaSet

Deployments are used to deploy stateless applications.

Characteristics of a stateless application:

- All pods are interchangeable
- The containers in all pods run the same image
- Any pod can run on any node in the cluster
- There is no required start-up order among the pods
- The number of pods can be scaled up or down at will
- Example: a simple static web application

Creating a deployment

1. Prepare the YAML file

[root@master1 ~]# vim deployment-nginx.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-nginx              # deployment name
spec:
  replicas: 1                     # replica count; the Deployment manages these through a ReplicaSet
  selector:
    matchLabels:
      app: nginx                  # pod label to match; the Deployment/ReplicaSet controls pods carrying this label
  template:                       # pod template
    metadata:
      labels:
        app: nginx                # pod label
    spec:
      containers:                 # container definitions for the pod
      - name: nginx
        image: nginx:1.15-alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80

2. Apply the YAML file to create the deployment

[root@master1 ~]# kubectl apply -f deployment-nginx.yml
deployment.apps/deploy-nginx created

3. Verify

[root@master1 ~]# kubectl get deployment      # deployment can be abbreviated as deploy
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
deploy-nginx   1/1     1            1           19s

[root@master1 ~]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
deploy-nginx-6c9764bb69-pbc2h   1/1     Running   0          75s

[root@master1 ~]# kubectl get replicasets     # replicasets can be abbreviated as rs
NAME                      DESIRED   CURRENT   READY   AGE
deploy-nginx-6c9764bb69   1         1         1       2m6s
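The ownership chain (Deployment → ReplicaSet → Pod) can also be confirmed directly; a minimal sketch, assuming the pod name shown above, reads the pod's ownerReferences:

kubectl get pod deploy-nginx-6c9764bb69-pbc2h -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}{"\n"}'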

Accessing the deployment

1. Check the pod's IP address

[root@master1 ~]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
deploy-nginx-6c9764bb69-pbc2h   1/1     Running   0          4m    10.3.104.28   192.168.122.14

The pod is running on node 192.168.122.14 (node2), and its IP is 10.3.104.28.

2. Check the tunl0 interface on every cluster node

[root@master1 ~]# ifconfig tunl0 | head -2
tunl0: flags=193  mtu 1440
        inet 10.3.137.64  netmask 255.255.255.255

[root@master2 ~]# ifconfig tunl0 | head -2
tunl0: flags=193  mtu 1440
        inet 10.3.180.0  netmask 255.255.255.255

[root@node1 ~]# ifconfig tunl0 | head -2
tunl0: flags=193  mtu 1440
        inet 10.3.166.128  netmask 255.255.255.255

[root@node2 ~]# ifconfig tunl0 | head -2
tunl0: flags=193  mtu 1440
        inet 10.3.104.0  netmask 255.255.255.255

As you can see, every node's tunl0 address is a subnet within the larger 10.3.0.0/16 network.

3. The pod in this deployment can be reached from any cluster node

# curl 10.3.104.28

Any cluster node can access this pod, but it is not reachable from outside the cluster.

Deleting a pod that belongs to a deployment

1. Delete the pod (note: this is a pod managed by a deployment, not a standalone pod)

[root@master1 ~]# kubectl delete pod deploy-nginx-6c9764bb69-pbc2h
pod "deploy-nginx-6c9764bb69-pbc2h" deleted

2. Check again: a new pod has been started automatically (the node changed from 192.168.122.14 to 192.168.122.13, and the IP address changed as well)

[root@master1 ~]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP             NODE             NOMINATED NODE   READINESS GATES
deploy-nginx-6c9764bb69-x68zc   1/1     Running   0          49s   10.3.166.153   192.168.122.13

In other words, **a pod's IP is not fixed**: if the whole cluster is shut down and restarted, the pods start again automatically, but their IP addresses change.

Since the IP is not fixed, users need a stable endpoint for access, and that is exactly what a Service provides.
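Services are covered elsewhere, but as a minimal sketch (the Service name is illustrative; only the app: nginx label comes from the manifest above), a ClusterIP Service giving this deployment a stable virtual IP might look like:

apiVersion: v1
kind: Service
metadata:
  name: svc-nginx              # illustrative name, not used elsewhere in this article
spec:
  selector:
    app: nginx                 # selects the pods created by deploy-nginx
  ports:
  - port: 80                   # stable port on the Service's cluster IP
    targetPort: 80             # container port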

Upgrading the pod version

Check the help:

[root@master1 ~]# kubectl set image -h

1. Verify the nginx version before the upgrade

[root@master1 ~]# kubectl describe pod deploy-nginx-6c9764bb69-x68zc | grep Image:
    Image:          nginx:1.15-alpine

[root@master1 ~]# kubectl exec deploy-nginx-6c9764bb69-x68zc -- nginx -v
nginx version: nginx/1.15.12

2. Upgrade to version 1.16

[root@master1 ~]# kubectl set image deployment deploy-nginx nginx=nginx:1.16-alpine --record
deployment.apps/deploy-nginx image updated

Notes:

- deployment deploy-nginx refers to the Deployment named deploy-nginx
- In nginx=nginx:1.16-alpine, the nginx before the = is the container name
- --record records this command so it shows up as the CHANGE-CAUSE in the rollout history

How do you find the container name?

- kubectl describe pod <pod-name>
- kubectl edit deployment <deployment-name>
- kubectl get deployment <deployment-name> -o yaml
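A quick alternative (a sketch assuming the same deploy-nginx Deployment) is a jsonpath query that prints only the container names defined in the pod template:

kubectl get deployment deploy-nginx -o jsonpath='{.spec.template.spec.containers[*].name}{"\n"}'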

3. Verify

If many pods are being upgraded, the rollout takes some time; the following command shows whether it has finished:

[root@master1 ~]# kubectl rollout status deployment deploy-nginx
deployment "deploy-nginx" successfully rolled out

Verify the pod:

[root@master1 ~]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
deploy-nginx-5f4749c8c8-nskp9   1/1     Running   0          104s

After the update, the hash in the pod name has changed.

Verify the version:

[root@master1 ~]# kubectl describe pod deploy-nginx-5f4749c8c8-nskp9 | grep Image:
    Image:          nginx:1.16-alpine                  # upgraded to 1.16

[root@master1 ~]# kubectl exec deploy-nginx-5f4749c8c8-nskp9 -- nginx -v
nginx version: nginx/1.16.1                            # upgraded to 1.16

Exercise: now upgrade deploy-nginx to version 1.17

[root@master1 ~]# kubectl set image deployment deploy-nginx nginx=nginx:1.17-alpine --record
deployment.apps/deploy-nginx image updated

Rolling back the pod version

1. View the revision history

[root@master1 ~]# kubectl rollout history deployment deploy-nginx
deployment.apps/deploy-nginx
REVISION  CHANGE-CAUSE
1                        # the original 1.15 version
2         kubectl set image deployment deploy-nginx nginx=nginx:1.16-alpine --record=true
3         kubectl set image deployment deploy-nginx nginx=nginx:1.17-alpine --record=true

2. Inspect the revision to roll back to (this only displays it; the rollback itself still has to be executed)

[root@master1 ~]# kubectl rollout history deployment deploy-nginx --revision=1
deployment.apps/deploy-nginx with revision #1
Pod Template:
  Labels:       app=nginx
                pod-template-hash=6c9764bb69
  Containers:
   nginx:
    Image:      nginx:1.15-alpine        # this is the 1.15 version we will roll back to
    Port:       80/TCP
    Host Port:  0/TCP
    Environment:
    Mounts:
  Volumes:

3. Perform the rollback

[root@master1 ~]# kubectl rollout undo deployment deploy-nginx --to-revision=1
deployment.apps/deploy-nginx rolled back

4. Verify

[root@master1 ~]# kubectl rollout history deployment deploy-nginx
deployment.apps/deploy-nginx
REVISION  CHANGE-CAUSE
2         kubectl set image deployment deploy-nginx nginx=nginx:1.16-alpine --record=true
3         kubectl set image deployment deploy-nginx nginx=nginx:1.17-alpine --record=true
4                        # back on 1.15, but the revision ID has changed

[root@master1 ~]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
deploy-nginx-6c9764bb69-zgwpj   1/1     Running   0          54s

[root@master1 ~]# kubectl describe pod deploy-nginx-6c9764bb69-zgwpj | grep Image:
    Image:          nginx:1.15-alpine                  # back on 1.15

[root@master1 ~]# kubectl exec deploy-nginx-6c9764bb69-zgwpj -- nginx -v
nginx version: nginx/1.15.12                           # back on 1.15

Scaling replicas up

Check the help:

[root@master1 ~]# kubectl scale -h

1. Scale up to 2 replicas

[root@master1 ~]# kubectl scale deployment deploy-nginx --replicas=2
deployment.apps/deploy-nginx scaled

2. Check

[root@master1 ~]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE     IP             NODE             NOMINATED NODE   READINESS GATES
deploy-nginx-6c9764bb69-dksw5   1/1     Running   0          33s     10.3.104.33    192.168.122.14
deploy-nginx-6c9764bb69-zgwpj   1/1     Running   0          2m54s   10.3.166.156   192.168.122.13

There is now one pod on each of the two worker nodes.

3. Scale up further (we only have 2 worker nodes here, but the replica count may exceed the number of nodes)

[root@master1 ~]# kubectl scale deployment deploy-nginx --replicas=4
deployment.apps/deploy-nginx scaled

[root@master1 ~]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP             NODE             NOMINATED NODE   READINESS GATES
deploy-nginx-6c9764bb69-dksw5   1/1     Running   0          13m   10.3.104.33    192.168.122.14
deploy-nginx-6c9764bb69-gpv58   1/1     Running   0          13s   10.3.104.35    192.168.122.14
deploy-nginx-6c9764bb69-q2q9f   1/1     Running   0          13s   10.3.104.34    192.168.122.14
deploy-nginx-6c9764bb69-zgwpj   1/1     Running   0          15m   10.3.166.156   192.168.122.13

Scaling replicas down

1. Scale down by specifying a replica count of 1

[root@master1 ~]# kubectl scale deployment deploy-nginx --replicas=1
deployment.apps/deploy-nginx scaled

2. Verify

[root@master1 ~]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
deploy-nginx-6c9764bb69-zgwpj   1/1     Running   0          16m
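kubectl scale is imperative; the same result can also be reached declaratively (a sketch, assuming the deployment-nginx.yml from earlier) by editing the replica count in the manifest and re-applying it:

vim deployment-nginx.yml                 # change replicas: 1 to the desired count
kubectl apply -f deployment-nginx.yml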

Rolling update with many replicas

1. First scale up to more replicas

[root@master1 ~]# kubectl scale deployment deploy-nginx --replicas=16
deployment.apps/deploy-nginx scaled

2. Verify

[root@master1 ~]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
deploy-nginx-7d9b8757cf-2hd48   1/1     Running   0          61s
deploy-nginx-7d9b8757cf-5m72n   1/1     Running   0          61s
deploy-nginx-7d9b8757cf-5w2xr   1/1     Running   0          61s
deploy-nginx-7d9b8757cf-5wmdh   1/1     Running   0          61s
deploy-nginx-7d9b8757cf-6szjj   1/1     Running   0          61s
deploy-nginx-7d9b8757cf-9dgsw   1/1     Running   0          61s
deploy-nginx-7d9b8757cf-dc7qj   1/1     Running   0          61s
deploy-nginx-7d9b8757cf-l52pr   1/1     Running   0          61s
deploy-nginx-7d9b8757cf-m7rt4   1/1     Running   0          26m
deploy-nginx-7d9b8757cf-mdkj2   1/1     Running   0          61s
deploy-nginx-7d9b8757cf-s79kp   1/1     Running   0          61s
deploy-nginx-7d9b8757cf-shhvk   1/1     Running   0          61s
deploy-nginx-7d9b8757cf-sv8gb   1/1     Running   0          61s
deploy-nginx-7d9b8757cf-xbhf4   1/1     Running   0          61s
deploy-nginx-7d9b8757cf-zgdgd   1/1     Running   0          61s
deploy-nginx-7d9b8757cf-zzljl   1/1     Running   0          61s

3. Rolling update

[root@master1 ~]# kubectl set image deployment deploy-nginx nginx=nginx:1.17-alpine --record
deployment.apps/deploy-nginx image updated

4. Verify

[root@master1 ~]# kubectl rollout status deployment deploy-nginx
......
Waiting for deployment "deploy-nginx" rollout to finish: 13 of 16 updated replicas are available...
Waiting for deployment "deploy-nginx" rollout to finish: 14 of 16 updated replicas are available...
Waiting for deployment "deploy-nginx" rollout to finish: 15 of 16 updated replicas are available...
deployment "deploy-nginx" successfully rolled out
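How many pods are replaced at a time is governed by the Deployment's update strategy. The fields below are not set in the deployment-nginx.yml used in this article, so the defaults apply; a minimal sketch of tuning them would be:

# hypothetical fragment of a Deployment spec (these happen to be the default values)
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%            # extra pods allowed above the desired count during the update
      maxUnavailable: 25%      # pods allowed to be unavailable during the update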

Deleting the deployment

Deleting the deployment with kubectl delete deployment deploy-nginx also deletes the pods it manages, as shown below.
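For reference, either of the following removes the Deployment together with its ReplicaSet and pods (the second form assumes the deployment-nginx.yml manifest from earlier):

kubectl delete deployment deploy-nginx
kubectl delete -f deployment-nginx.yml      # equivalent, using the original manifest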

Creating a ReplicaSet on its own with YAML (extra)

1. Write the YAML file

[root@master ~]# vim rs-nginx.yml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-nginx
  namespace: default
spec:                          # ReplicaSet spec
  replicas: 2                  # number of replicas
  selector:                    # label selector, matching the pod labels
    matchLabels:
      app: nginx               # label to match
  template:
    metadata:
      name: nginx              # pod name
      labels:                  # must correspond to the selector defined above
        app: nginx
    spec:                      # pod spec
      containers:
      - name: nginx
        image: nginx:1.15-alpine
        ports:
        - name: http
          containerPort: 80

2. Apply the YAML file

[root@master1 ~]# kubectl apply -f rs-nginx.yml
replicaset.apps/rs-nginx created

3. Verify

[root@master1 ~]# kubectl get rs
NAME       DESIRED   CURRENT   READY   AGE
rs-nginx   2         2         2       26s

[root@master1 ~]# kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
rs-nginx-7j9hz   1/1     Running   0          44s
rs-nginx-pncsk   1/1     Running   0          43s

[root@master1 ~]# kubectl get deployment
No resources found.

No deployment is found, which shows that creating a ReplicaSet directly does not create a Deployment.

II. Pod controllers, advanced

DaemonSet

A DaemonSet ensures that all (or selected) nodes run a copy of the same pod. When a node joins the K8S cluster, the DaemonSet schedules a pod onto it; when a node is removed from the cluster, the pod the DaemonSet scheduled there is removed as well. Deleting a DaemonSet deletes all of the pods it created. If a DaemonSet pod is killed, stopped, or crashes, the DaemonSet creates a new replica on that node. DaemonSets are typically used for log collection, monitoring agents, and distributed-storage daemons.
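To run on selected nodes only rather than on every node, a DaemonSet can use a node selector in its pod template; a hypothetical fragment (the disk=ssd label is illustrative and not used elsewhere here) would be:

# hypothetical fragment of a DaemonSet manifest
spec:
  template:
    spec:
      nodeSelector:
        disk: ssd              # only nodes labelled disk=ssd get a copy of the pod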

1. Write the YAML file

[root@master ~]# vim daemonset-nginx.yml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset-nginx
spec:
  selector:
    matchLabels:
      name: nginx-ds
  template:
    metadata:
      labels:
        name: nginx-ds
    spec:
      tolerations:                              # tolerations let the pod tolerate node taints
      - key: node-role.kubernetes.io/master     # taint key to tolerate
        effect: NoSchedule                      # taint effect to tolerate; see kubectl explain pod.spec.tolerations
      containers:
      - name: c1
        image: nginx:1.15-alpine
        imagePullPolicy: IfNotPresent
        resources:                              # resource limits keep the pod from using too much of the master's resources (adjust as needed)
          limits:
            memory: 100Mi
          requests:
            memory: 100Mi

2. Apply the YAML file

[root@master1 ~]# kubectl apply -f daemonset-nginx.yml
daemonset.apps/daemonset-nginx created

3. Verify

[root@master1 ~]# kubectl get daemonset      # daemonset can be abbreviated as ds
[root@master1 ~]# kubectl get ds
NAME              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset-nginx   4         4         4       4            4                           114s

[root@master1 ~]# kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP             NODE             NOMINATED NODE   READINESS GATES
daemonset-nginx-hxnst   1/1     Running   0          2m41s   10.3.104.52    192.168.122.14
daemonset-nginx-lhrxn   1/1     Running   0          2m40s   10.3.180.1     192.168.122.12
daemonset-nginx-m9hrf   1/1     Running   0          2m41s   10.3.137.65    192.168.122.11
daemonset-nginx-nlm7t   1/1     Running   0          2m41s   10.3.166.174   192.168.122.13

Every node in the k8s cluster runs one pod.

Additional note:

A DaemonSet cannot be scaled up or down the way a ReplicaSet can, but it can be upgraded like a Deployment:

[root@master1 ~]# kubectl set image daemonset daemonset-nginx c1=nginx:1.17-alpine --record
daemonset.apps/daemonset-nginx image updated

The pods are upgraded one at a time. Once the upgrade finishes, the following command confirms the version is indeed 1.17:

[root@master1 ~]# kubectl describe pod daemonset-nginx-nlm7t | grep -i image:
    Image:          nginx:1.17-alpine

Job

A ReplicaSet expects its pods to keep running at the desired count indefinitely; unless the user explicitly deletes them, those objects persist. They are meant for long-running workloads such as web services. For non-durable tasks, such as compressing a file, the pod should finish and exit once the work is done rather than stay in the system; that is what a Job is for. A Job handles short-lived one-off tasks, i.e. tasks that run only once, and it guarantees that one or more pods of the batch task terminate successfully.

Example 1: compute pi to 2000 digits

1. Write the YAML file

[root@master ~]# vim job.yml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi                       # job name
spec:
  template:
    metadata:
      name: pi                   # pod name
    spec:
      containers:
      - name: pi                 # container name
        image: perl              # this image is over 800 MB; pull it onto every node in advance, or pull it onto one node and schedule the pod there
        imagePullPolicy: IfNotPresent
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never       # do not restart once the task has finished

2. Apply the YAML file to create the job

[root@master ~]# kubectl apply -f job.yml
job.batch/pi created

3. Verify

[root@master1 ~]# kubectl get jobs
NAME   COMPLETIONS   DURATION   AGE
pi     1/1           11s        18s

[root@master1 ~]# kubectl get pods
NAME       READY   STATUS      RESTARTS   AGE
pi-tjq9b   0/1     Completed   0          27s

The pod is in the Completed state and no longer counts as ready.

[root@master1 ~]# kubectl logs pi-tjq9b
3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094330572703657595919530921861173819326117931051185480744623799627495673518857527248912279381830119491298336733624406566430860213949463952247371907021798609437027705392171762931767523846748184676694051320005681271452635608277857713427577896091736371787214684409012249534301465495853710507922796892589235420199561121290219608640344181598136297747713099605187072113499999983729780499510597317328160963185950244594553469083026425223082533446850352619311881710100031378387528865875332083814206171776691473035982534904287554687311595628638823537875937519577818577805321712268066130019278766111959092164201989380952572010654858632788659361533818279682303019520353018529689957736225994138912497217752834791315155748572424541506959508295331168617278558890750983817546374649393192550604009277016711390098488240128583616035637076601047101819429555961989467678374494482553797747268471040475346462080466842590694912933136770289891521047521620569660240580381501935112533824300355876402474964732639141992726042699227967823547816360093417216412199245863150302861829745557067498385054945885869269956909272107975093029553211653449872027559602364806654991198818347977535663698074265425278625518184175746728909777727938000816470600161452491921732172147723501414419735685481613611573525521334757418494684385233239073941433345477624168625189835694855620992192221842725502542568876717904946016534668049886272327917860857843838279679766814541009538837863609506800642251252051173929848960841284886269456042419652850222106611863067442786220391949450471237137869609563643719172874677646575739624138908658326459958133904780275901

Example 2: create a job that runs a fixed number of times

1. Write the YAML file

[root@master ~]# vim job2.yml
apiVersion: batch/v1
kind: Job
metadata:
  name: busybox-job
spec:
  completions: 10                # how many times the job runs
  parallelism: 1                 # how many pods run in parallel
  template:
    metadata:
      name: busybox-job-pod
    spec:
      containers:
      - name: busybox
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["echo", "hello"]
      restartPolicy: Never

2. Apply the YAML file to create the job

[root@master1 ~]# kubectl apply -f job2.yml
job.batch/busybox-job created

3. Verify

[root@master1 ~]# kubectl get job
NAME          COMPLETIONS   DURATION   AGE
busybox-job   2/10          9s         9s
[root@master1 ~]# kubectl get job
NAME          COMPLETIONS   DURATION   AGE
busybox-job   3/10          12s        12s
[root@master1 ~]# kubectl get job
NAME          COMPLETIONS   DURATION   AGE
busybox-job   4/10          15s        15s
[root@master1 ~]# kubectl get job
NAME          COMPLETIONS   DURATION   AGE
busybox-job   10/10         34s        48s

All ten runs finish in roughly 34 seconds.

[root@master ~]# kubectl get pods
NAME                READY   STATUS      RESTARTS   AGE
busybox-job-5zn6l   0/1     Completed   0          34s
busybox-job-cm9kw   0/1     Completed   0          29s
busybox-job-fmpgt   0/1     Completed   0          38s
busybox-job-gjjvh   0/1     Completed   0          45s
busybox-job-krxpd   0/1     Completed   0          25s
busybox-job-m2vcq   0/1     Completed   0          41s
busybox-job-ncg78   0/1     Completed   0          47s
busybox-job-tbzz8   0/1     Completed   0          51s
busybox-job-vb99r   0/1     Completed   0          21s
busybox-job-wnch7   0/1     Completed   0          32s
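As an illustration only (this variant was not run above), increasing parallelism lets several of the ten runs execute at once, which shortens the total duration:

# hypothetical change to job2.yml
spec:
  completions: 10
  parallelism: 2                 # run two pods at a time instead of one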

CronJob

A CronJob is similar to crontab on a Linux system: it runs a task on a specified schedule.

1. Write the YAML file

[root@master1 ~]# vim cronjob.yml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronjob1
spec:
  schedule: "* * * * *"          # minute hour day-of-month month day-of-week
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo hello kubernetes
            imagePullPolicy: IfNotPresent
          restartPolicy: OnFailure

2. Apply the YAML file to create the cronjob

[root@master1 ~]# kubectl apply -f cronjob.yml
cronjob.batch/cronjob1 created

3. Verify

[root@master1 ~]# kubectl get cronjob
NAME       SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob1   * * * * *   False     0                        21s

[root@master1 ~]# kubectl get pods
NAME                       READY   STATUS      RESTARTS   AGE
cronjob-1564993080-qlbgv   0/1     Completed   0          2m10s
cronjob-1564993140-zbv7f   0/1     Completed   0          70s
cronjob-1564993200-gx5xz   0/1     Completed   0          10s

Judging by the AGE values, a job runs once every minute, on the minute.
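Two settings that are often worth adding (not used in the cronjob.yml above; the values here are illustrative) control how many finished jobs are kept and whether runs may overlap:

# hypothetical additions to the CronJob spec in cronjob.yml
spec:
  successfulJobsHistoryLimit: 3     # keep at most 3 completed jobs
  failedJobsHistoryLimit: 1         # keep at most 1 failed job
  concurrencyPolicy: Forbid         # skip a new run while the previous one is still active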
