
Deploying ingress-controller with Helm

Date: 2023-07-20

In Kubernetes, Ingress is most often implemented with nginx-ingress-controller. The controller, however, is fragile: it renders the nginx configuration from Ingress resources, and certain annotations make the nginx config reload fail. The controller then gets stuck and restarts endlessly until the offending Ingress is deleted.

At the same time, the admission webhook makes deploying ingress-nginx-controller harder; the ingress-nginx-admission step fails:

[root@k8s-master ingress]# kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-k9z4h        0/1     Completed   0          29s
ingress-nginx-admission-patch-gc9p5         0/1     Completed   0          29s
ingress-nginx-controller-776889d8cb-wd84p   1/1     Running     0          29s
[root@k8s-master ingress]#
[root@k8s-master ingress]# kubectl logs -f ingress-nginx-admission-create-k9z4h -n ingress-nginx
W0202 10:49:07.866187       1 client_config.go:615] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
{"err":"secrets \"ingress-nginx-admission\" not found","level":"info","msg":"no secret found","source":"k8s/k8s.go:229","time":"2022-02-02T10:49:07Z"}
{"level":"info","msg":"creating new secret","source":"cmd/create.go:28","time":"2022-02-02T10:49:07Z"}
[root@k8s-master ingress]#
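Not part of the original walkthrough, but a commonly used workaround when the admission webhook itself blocks the creation of Ingress objects is to delete the chart's ValidatingWebhookConfiguration (named ingress-nginx-admission by default):

# Cluster-scoped object created by the ingress-nginx manifests/chart;
# removing it disables admission validation of Ingress resources.
kubectl delete validatingwebhookconfiguration ingress-nginx-admission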

As a workaround, deploy ingress-nginx-controller with the helm tool and disable the webhook component. Deploying ingress-nginx-controller via Helm:

(snap is a system package manager that is smarter than yum: it can upgrade to a specific version and roll back to a previous one.)

root@k8s-master:~/work/ingress/1.1.1# apt install snap -y
root@k8s-master:~/work/ingress/1.1.1# snap install helm --classic
helm 3.7.0 from Snapcrafters installed
root@k8s-master:~/work/ingress/1.1.1#
root@k8s-master:~/work/ingress/1.1.1#
root@k8s-master:~/work/ingress/1.1.1# helm upgrade --install ingress-nginx ingress-nginx \
    --repo https://kubernetes.github.io/ingress-nginx \
    --namespace ingress-nginx --create-namespace \
    --set controller.service.type=NodePort \
    --set controller.admissionWebhooks.enabled=false
Release "ingress-nginx" has been upgraded. Happy Helming!
NAME: ingress-nginx
LAST DEPLOYED: Sat Feb  5 07:19:03 2022
NAMESPACE: ingress-nginx
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
Get the application URL by running these commands:
  export HTTP_NODE_PORT=$(kubectl --namespace ingress-nginx get services -o jsonpath="{.spec.ports[0].nodePort}" ingress-nginx-controller)
  export HTTPS_NODE_PORT=$(kubectl --namespace ingress-nginx get services -o jsonpath="{.spec.ports[1].nodePort}" ingress-nginx-controller)
  export NODE_IP=$(kubectl --namespace ingress-nginx get nodes -o jsonpath="{.items[0].status.addresses[1].address}")
  echo "Visit http://$NODE_IP:$HTTP_NODE_PORT to access your application via HTTP."
  echo "Visit https://$NODE_IP:$HTTPS_NODE_PORT to access your application via HTTPS."
An example Ingress that makes use of the controller:
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: example
    namespace: foo
  spec:
    ingressClassName: nginx
    rules:
      - host: www.example.com
        http:
          paths:
            - backend:
                service:
                  name: exampleService
                  port:
                    number: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
      - hosts:
          - www.example.com
        secretName: example-tls
If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:
  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls
root@k8s-master:~/work/ingress/1.1.1#
[root@k8s-master ingress]# helm install ingress-nginx -n ingress-nginx
[root@k8s-master ingress]# helm uninstall ingress-nginx -n ingress-nginx
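The two --set flags can equally be kept in a small values file (the name values.yaml here is arbitrary) and passed with -f, which is easier to version-control. A sketch of the equivalent settings:

# values.yaml — equivalent to the --set flags used above
controller:
  service:
    type: NodePort
  admissionWebhooks:
    enabled: false

helm upgrade --install ingress-nginx ingress-nginx \
    --repo https://kubernetes.github.io/ingress-nginx \
    --namespace ingress-nginx --create-namespace \
    -f values.yaml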

Check the status of ingress-nginx-controller; it is failing:

root@k8s-master:~/work/ingress/1.1.1# kubectl get pods -A
NAMESPACE       NAME                                        READY   STATUS             RESTARTS       AGE
ingress-nginx   ingress-nginx-controller-6bb7fdbf54-589qd   0/1     ImagePullBackOff   0              9m20s
kube-system     calico-kube-controllers-958545d87-99fgc     1/1     Running            8 (81m ago)    41h
kube-system     calico-node-gcrcz                           1/1     Running            6 (81m ago)    41h
kube-system     calico-node-gwsrh                           1/1     Running            8 (81m ago)    41h
kube-system     calico-node-zbkj2                           1/1     Running            6 (81m ago)    41h
kube-system     coredns-7f6cbbb7b8-g89bp                    1/1     Running            8 (81m ago)    41h
kube-system     coredns-7f6cbbb7b8-tt8ts                    1/1     Running            8 (81m ago)    41h
kube-system     etcd-k8s-master                             1/1     Running            16 (81m ago)   41h
kube-system     kube-apiserver-k8s-master                   1/1     Running            16 (81m ago)   41h
kube-system     kube-controller-manager-k8s-master          1/1     Running            17 (81m ago)   41h
kube-system     kube-proxy-47xmf                            1/1     Running            10 (81m ago)   41h
kube-system     kube-proxy-4r95c                            1/1     Running            9 (81m ago)    41h
kube-system     kube-proxy-j4jt4                            1/1     Running            8 (81m ago)    41h
kube-system     kube-scheduler-k8s-master                   1/1     Running            9 (81m ago)    41h
root@k8s-master:~/work/ingress/1.1.1#

It turns out the image pull is failing:

root@k8s-master:~/work/ingress/1.1.1# kubectl describe pods ingress-nginx-controller-6bb7fdbf54-589qd -n ingress-nginx
Name:         ingress-nginx-controller-6bb7fdbf54-589qd
Namespace:    ingress-nginx
Priority:     0
Node:         k8s-node2/192.168.1.103
Start Time:   Sat, 05 Feb 2022 07:21:24 +0000
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/name=ingress-nginx
              pod-template-hash=6bb7fdbf54
Annotations:  cni.projectcalico.org/containerID: 94cf0ed05737eb0721172af2a2d5063e4e74b51b85e51f560a487741f2f7cb30
              cni.projectcalico.org/podIP: 10.122.169.189/32
              cni.projectcalico.org/podIPs: 10.122.169.189/32
Status:       Pending
IP:           10.122.169.189
IPs:
  IP:           10.122.169.189
Controlled By:  ReplicaSet/ingress-nginx-controller-6bb7fdbf54
Containers:
  controller:
    Container ID:
    Image:         k8s.gcr.io/ingress-nginx/controller:v1.1.1@sha256:0bc88eb15f9e7f84e8e56c14fa5735aaa488b840983f87bd79b1054190e660de
    Image ID:
    Ports:         80/TCP, 443/TCP
    Host Ports:    0/TCP, 0/TCP
    Args:
      /nginx-ingress-controller
      --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
      --election-id=ingress-controller-leader
      --controller-class=k8s.io/ingress-nginx
      --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:      100m
      memory:   90Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       ingress-nginx-controller-6bb7fdbf54-589qd (v1:metadata.name)
      POD_NAMESPACE:  ingress-nginx (v1:metadata.namespace)
      LD_PRELOAD:     /usr/local/lib/libmimalloc.so
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m62fd (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-m62fd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  6m37s                  default-scheduler  Successfully assigned ingress-nginx/ingress-nginx-controller-6bb7fdbf54-589qd to k8s-node2
  Warning  Failed     5m52s (x2 over 6m21s)  kubelet            Failed to pull image "k8s.gcr.io/ingress-nginx/controller:v1.1.1@sha256:0bc88eb15f9e7f84e8e56c14fa5735aaa488b840983f87bd79b1054190e660de": rpc error: code = Unknown desc = Error response from daemon: Get "https://k8s.gcr.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  Failed     5m11s                  kubelet            Failed to pull image "k8s.gcr.io/ingress-nginx/controller:v1.1.1@sha256:0bc88eb15f9e7f84e8e56c14fa5735aaa488b840983f87bd79b1054190e660de": rpc error: code = Unknown desc = Error response from daemon: Get "https://k8s.gcr.io/v2/": dial tcp 142.250.157.82:443: i/o timeout
  Normal   Pulling    4m31s (x4 over 6m36s)  kubelet            Pulling image "k8s.gcr.io/ingress-nginx/controller:v1.1.1@sha256:0bc88eb15f9e7f84e8e56c14fa5735aaa488b840983f87bd79b1054190e660de"
  Warning  Failed     4m15s (x4 over 6m21s)  kubelet            Error: ErrImagePull
  Warning  Failed     4m15s                  kubelet            Failed to pull image "k8s.gcr.io/ingress-nginx/controller:v1.1.1@sha256:0bc88eb15f9e7f84e8e56c14fa5735aaa488b840983f87bd79b1054190e660de": rpc error: code = Unknown desc = Error response from daemon: Get "https://k8s.gcr.io/v2/": context deadline exceeded
  Warning  Failed     3m48s (x6 over 6m21s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    87s (x15 over 6m21s)   kubelet            Back-off pulling image "k8s.gcr.io/ingress-nginx/controller:v1.1.1@sha256:0bc88eb15f9e7f84e8e56c14fa5735aaa488b840983f87bd79b1054190e660de"
root@k8s-master:~/work/ingress/1.1.1#

Pull the images from a China-accessible mirror and retag them:

# pull the images
docker pull liangjw/kube-webhook-certgen:v1.1.1
docker pull liangjw/ingress-nginx-controller:v1.1.1
# retag them to the names expected by the deployment
docker tag liangjw/kube-webhook-certgen:v1.1.1 k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
docker tag liangjw/ingress-nginx-controller:v1.1.1 k8s.gcr.io/ingress-nginx/controller:v1.1.1
# remove the old names
docker rmi liangjw/kube-webhook-certgen:v1.1.1
docker rmi liangjw/ingress-nginx-controller:v1.1.1
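One caveat the original text does not spell out: the pull and retag have to happen on the node the controller pod is scheduled to (k8s-node2 in the describe output above), or on every worker node. Which node that is can be checked with:

# -o wide adds the NODE column, showing where the controller pod is running
kubectl get pods -n ingress-nginx -o wide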

After retagging, the controller image looks like this:

root@k8s-master:~/work/ingress/1.1.1/zz# docker images --digests
REPOSITORY                            TAG      DIGEST   IMAGE ID       CREATED       SIZE
k8s.gcr.io/ingress-nginx/controller   v1.1.1   <none>   2461b2698dcd   3 weeks ago   285MB
root@k8s-master:~/work/ingress/1.1.1/zz#

Edit the Deployment:

root@k8s-master:~/work/ingress/1.1.1/zz# kubectl edit deploy ingress-nginx-controller -n ingress-nginx
deployment.apps/ingress-nginx-controller edited
root@k8s-master:~/work/ingress/1.1.1#

Change the image field (the retagged local image has no registry digest, so the @sha256 pin can never be satisfied and must be dropped):

# In the yaml, change:
image: k8s.gcr.io/ingress-nginx/controller:v1.1.1@sha256:0bc88eb15f9e7f84e8e56c14fa5735aaa488b840983f87bd79b1054190e660de
# to:
image: k8s.gcr.io/ingress-nginx/controller:v1.1.1
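An equivalent, non-interactive alternative to kubectl edit, assuming the container is named controller as shown in the describe output above:

# Point the controller container at the plain tag, dropping the @sha256 digest
kubectl set image deployment/ingress-nginx-controller \
    controller=k8s.gcr.io/ingress-nginx/controller:v1.1.1 \
    -n ingress-nginx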

After the image fix, ingress-nginx-controller finally comes up. Export the Deployment to a yaml file:

root@k8s-master:~/work/ingress/1.1.1/zz# kubectl get deploy -n ingress-nginx -o yaml > deploy.yaml
root@k8s-master:~/work/ingress/1.1.1/zz# ll
total 16
drwxr-xr-x 2 root root 4096 Feb  5 07:44 ./
drwxr-xr-x 3 root root 4096 Feb  5 07:31 ../
-rw-r--r-- 1 root root 4491 Feb  5 07:44 deploy.yaml
root@k8s-master:~/work/ingress/1.1.1/zz#
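To confirm the controller is now Running and to look up the NodePorts used in the tests below (32670 for HTTP in this environment), something like:

kubectl get pods -n ingress-nginx
kubectl get svc ingress-nginx-controller -n ingress-nginx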

Testing:


Access the ingress-nginx-controller service at 192.168.1.102:32670:
(Since no Ingress resources — the routing targets of ingress-nginx-controller — have been deployed yet, the request returns 404 Not Found.)

Access the ingress-nginx-controller service at 192.168.1.103:32670:

Access the service over HTTPS:
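The original screenshots are not reproduced here; the same checks can be made from the shell. The node IPs and the HTTP NodePort 32670 are taken from the tests above, while the HTTPS NodePort is left as a placeholder since it is not given in the text:

# With no Ingress rules deployed yet, the controller's default backend returns 404
curl http://192.168.1.102:32670/
curl http://192.168.1.103:32670/
# Over HTTPS the controller presents a self-signed placeholder certificate, so skip verification
curl -k https://192.168.1.102:<https-nodeport>/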

Deploy Ingress resources:

1. Create the certificate and TLS secret:

root@k8s-master:~/work/ing#
root@k8s-master:~/work/ing# openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/0=nginxsvc"
Generating a RSA private key
....+++++
...+++++
writing new private key to 'tls.key'
-----
req: Skipping unknown attribute "0"
root@k8s-master:~/work/ing#
root@k8s-master:~/work/ing# kubectl create secret tls tls-secret --cert=./tls.crt --key=./tls.key
secret/tls-secret created
root@k8s-master:~/work/ing#
root@k8s-master:~/work/ing# kubectl get secret
NAME                  TYPE                                  DATA   AGE
default-token-kvxvz   kubernetes.io/service-account-token   3      2d18h
tls-secret            kubernetes.io/tls                     2      9s
root@k8s-master:~/work/ing# kubectl get secret -o wide
NAME                  TYPE                                  DATA   AGE
default-token-kvxvz   kubernetes.io/service-account-token   3      2d18h
tls-secret            kubernetes.io/tls                     2      17s
root@k8s-master:~/work/ing#
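The req: Skipping unknown attribute "0" warning above comes from the zero in -subj "/CN=nginxsvc/0=nginxsvc"; the intended attribute is presumably the letter O (Organization). The certificate is still generated, but a corrected form of the same command would be:

openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \
    -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"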


2. Deploy the backend Services and create the Ingress resources:

root@k8s-master:~/work/ing# ll
total 32
drwxr-xr-x 2 root root 4096 Feb  6 08:46 ./
drwxr-xr-x 9 root root 4096 Feb  6 06:13 ../
-rw-r--r-- 1 root root  743 Feb  6 08:45 delopy1.yaml
-rw-r--r-- 1 root root  743 Feb  6 08:46 delopy2.yaml
-rw-r--r-- 1 root root  507 Feb  6 08:36 ingress.yaml
-rw-r--r-- 1 root root 2890 Feb  5 09:49 Readme.md
-rw-r--r-- 1 root root 1111 Feb  6 07:50 tls.crt
-rw------- 1 root root 1704 Feb  6 07:50 tls.key
root@k8s-master:~/work/ing# kubectl apply -f delopy1.yaml
deployment.apps/deployment1 created
service/svc-1 created
root@k8s-master:~/work/ing# kubectl apply -f delopy2.yaml
deployment.apps/deployment2 created
service/svc-2 created
root@k8s-master:~/work/ing# kubectl get deploy
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
deployment1   3/3     3            3           17s
deployment2   2/3     3            2           11s
root@k8s-master:~/work/ing# kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
deployment1-6797878c4f-6n562   1/1     Running   0          21s
deployment1-6797878c4f-qv9h2   1/1     Running   0          21s
deployment1-6797878c4f-rflgs   1/1     Running   0          21s
deployment2-76d4cc985f-9gzrk   1/1     Running   0          15s
deployment2-76d4cc985f-gvtjg   1/1     Running   0          15s
deployment2-76d4cc985f-hvcx6   1/1     Running   0          15s
root@k8s-master:~/work/ing# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
kubernetes   ClusterIP   10.10.0.1       <none>        443/TCP                      2d19h
svc-1        NodePort    10.10.244.118   <none>        80:31527/TCP,443:32630/TCP   7m19s
svc-2        NodePort    10.10.92.1      <none>        80:31397/TCP,443:31415/TCP   7m13s
root@k8s-master:~/work/ing#
root@k8s-master:~/work/ing#
root@k8s-master:~/work/ing# curl 10.10.244.118
Hello MyApp | Version: v1 | Pod Name
root@k8s-master:~/work/ing#
root@k8s-master:~/work/ing# curl 10.10.92.1
Hello MyApp | Version: v2 | Pod Name
root@k8s-master:~/work/ing#

The content of delopy1.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-1
  template:
    metadata:
      labels:
        app: nginx-1
    spec:
      containers:
      - name: nginx-1
        image: wangyanglinux/myapp:v1        # v1 image
        imagePullPolicy: IfNotPresent        # Always/IfNotPresent/Never
        ports:
        - containerPort: 80                  # pod port: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: svc-1
  name: svc-1
  namespace: default
spec:
  ports:
  - name: http
    port: 80            # service port
    protocol: TCP
    targetPort: 80      # pod port
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app: nginx-1
  type: NodePort        # ClusterIP

The content of delopy2.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-2
  template:
    metadata:
      labels:
        app: nginx-2
    spec:
      containers:
      - name: nginx-2
        image: wangyanglinux/myapp:v2        # v2 image
        imagePullPolicy: IfNotPresent        # Always/IfNotPresent/Never
        ports:
        - containerPort: 80                  # pod port: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: svc-2
  name: svc-2
  namespace: default
spec:
  ports:
  - name: http
    port: 80            # service port
    protocol: TCP
    targetPort: 80      # pod port
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app: nginx-2
  type: NodePort        # ClusterIP

Create the Ingress:

root@k8s-master:~/work/ing# kubectl apply -f ingress-rule.yaml
ingress.networking.k8s.io/ingress1 created
ingress.networking.k8s.io/ingress2 created
root@k8s-master:~/work/ing#

The content of ingress-rule.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress1
spec:
  rules:
  - host: www1.atguigu.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: svc-1
            port:
              number: 80
              #name: https
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress2
spec:
  rules:
  - host: www2.atguigu.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: svc-2
            port:
              number: 80
              #name: https
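The two Ingress objects above set neither ingressClassName nor a tls section, so the tls-secret created in step 1 is never referenced. A sketch of how ingress1 could be extended to use both, assuming the IngressClass installed by the chart keeps its default name nginx:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress1
spec:
  ingressClassName: nginx        # IngressClass created by the ingress-nginx chart
  tls:
  - hosts:
    - www1.atguigu.com
    secretName: tls-secret       # the kubernetes.io/tls Secret created in step 1
  rules:
  - host: www1.atguigu.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: svc-1
            port:
              number: 80

The rules can then be exercised through the controller's NodePort by supplying the Host header, for example: curl -H 'Host: www1.atguigu.com' http://192.168.1.102:32670/ (IP and port taken from the earlier test).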

Appendix:
Ubuntu ships with snapd preinstalled.

On CentOS, snapd must be installed manually:

yum install epel-release -y
yum install snapd -y
systemctl enable --now snapd.socket
ln -s /var/lib/snapd/snap /snap

# snap automatically updates the software it installs once a day
snap install XXX       # install a package
snap list XXX          # list installed packages
snap refresh XXX       # upgrade a package
snap remove XXX        # remove a package
snap run xxx.xxx       # run a binary
snap alias XXX YYY     # alias XXX as YYY
# binaries installed by snap live under /snap/bin
