Yiner

Mar 24, 2022

Installing & Using nginx ingress


Use Case

  • The platform lets users publish algorithms and access them through a designated URL. To avoid exposing too many server ports, an Ingress forwards each request by path to the corresponding algorithm.
  • Publishing one algorithm means creating three resources: a Pod, a Service, and an Ingress (a sketch of the trio follows).
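As a rough illustration of that trio, the manifests could look like the following (the name algo-a, the path /algo-a, and the image are placeholders invented here, not the platform's actual values):

apiVersion: v1
kind: Pod
metadata:
  name: algo-a
  labels:
    app: algo-a
spec:
  containers:
    - name: algo-a
      image: algo-a:latest        # placeholder algorithm image
      ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: algo-a
spec:
  selector:
    app: algo-a                   # matches the Pod label above
  ports:
    - port: 8080
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: algo-a
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - http:
        paths:
          - path: /algo-a         # requests under /algo-a go to this algorithm
            pathType: Prefix
            backend:
              service:
                name: algo-a
                port:
                  number: 8080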

Installing Helm

About Helm

Helm is a tool for defining, installing, and upgrading complex applications deployed on k8s. Helm describes an application as a Chart, which makes it easy to create, version, share, and publish complex software.

Key Helm Concepts

  • Chart
    • A Helm application package in tgz format, analogous to an RPM package in Yum. It contains a set of YAML files defining Kubernetes resources.
  • Repository
    • A Helm application repository. A Repository is essentially a web server hosting a collection of chart packages for download, plus an index file listing the charts it serves. Helm can manage multiple repositories at once.
      The Helm community officially provides the stable and incubator repositories, but does not monopolize hosting: anyone can run their own repository, public or private.
  • Release
    • An instance of a chart running on a Kubernetes cluster. The same chart can be installed many times on one cluster, and each install creates a new Release. For example, to run two MySQL databases from one MySQL chart, install the chart twice; each install produces its own Release (see the sketch after this list).
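A small sketch of this: the bitnami repository URL is real, but the release names mysql-a and mysql-b are arbitrary examples.

# Install the same chart twice; each install is an independent Release
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install mysql-a bitnami/mysql
helm install mysql-b bitnami/mysql
helm list    # lists two releases backed by the same chart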

Installing the Helm Binary

# Download (v3.6.3, matching the version output below)
wget https://get.helm.sh/helm-v3.6.3-linux-amd64.tar.gz
# Extract
tar -zxvf helm-v3.6.3-linux-amd64.tar.gz
# Move the binary into a directory on PATH
mv linux-amd64/helm /usr/local/bin/helm
# Print the version
helm version
version.BuildInfo{Version:"v3.6.3", GitCommit:"d506314abfb5d21419df8c7e7e68012379db2354", GitTreeState:"clean", GoVersion:"go1.16.5"}
 

Chart Package Directory Structure

# Create a chart named helm-test
[root@k8s-master ~]$ helm create helm-test
Creating helm-test
[root@k8s-master ~]$ cd helm-test/
# View the chart's directory structure
[root@k8s-master helm-test]$ tree .
.
├── charts
├── Chart.yaml
├── templates
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── NOTES.txt
│   ├── serviceaccount.yaml
│   ├── service.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml

3 directories, 10 files
A chart is Helm's packaging format for applications. It consists of a set of files describing the Kubernetes resources needed to deploy the application. The helm command above created a chart package; its layout is as follows:
  • helm-test: the name of the chart package
  • charts directory: holds dependencies; if this chart depends on other charts, they are stored here
  • Chart.yaml file: a YAML file describing the chart itself, such as its version information
  • values.yaml file: a chart can be customized with parameters at install time, and values.yaml provides the defaults for those parameters; edit it before installing as needed
  • templates directory: the configuration templates for the various Kubernetes resources live here. Helm injects the parameter values from values.yaml into these templates to generate standard YAML manifests (see the rendering example after this list)
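To see the values-to-template flow in action, the chart scaffolded by helm create can be rendered locally without installing anything. replicaCount is one of the default keys in its values.yaml, which deployment.yaml references as {{ .Values.replicaCount }}:

# Render the helm-test chart created above, overriding one value
helm template helm-test ./helm-test --set replicaCount=3 | grep 'replicas:'
#   replicas: 3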
 

Installing ingress-nginx with Helm

Add the Official ingress-nginx Helm Repository

[root@k8s-master ~]$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
"ingress-nginx" has been added to your repositories
[root@k8s-master ~]$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "ingress-nginx" chart repository
Update Complete. ⎈Happy Helming!⎈
 

Download the ingress-nginx Chart Package

# Search for the ingress-nginx chart
[root@k8s-master src]$ helm search repo ingress-nginx
NAME                          CHART VERSION   APP VERSION   DESCRIPTION
ingress-nginx/ingress-nginx   4.0.1           1.0.0         Ingress controller for Kubernetes using NGINX a...
# Pull it down
[root@k8s-master src]$ helm pull ingress-nginx/ingress-nginx
# The file with the .tgz suffix is the downloaded chart package
[root@k8s-master src]$ ls
helm-v3.6.3-linux-amd64.tar.gz  ingress-nginx-4.0.1.tgz  linux-amd64
# Extract it
[root@k8s-master src]$ tar -zxvf ingress-nginx-4.0.1.tgz
# The directory layout is as follows
[root@k8s-master src]$ cd ingress-nginx
[root@k8s-master ingress-nginx]$ ls
CHANGELOG.md  Chart.yaml  ci  OWNERS  README.md  templates  values.yaml
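To make re-runs reproducible, the chart version can also be pinned explicitly with standard helm flags:

# List the versions available in the repository
helm search repo ingress-nginx --versions | head
# Pull a specific chart version instead of the latest
helm pull ingress-nginx/ingress-nginx --version 4.0.1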
 

Modify the values.yaml File

The downloaded chart needs a few adjustments to its resource manifests. Modify values.yaml as follows.
Key excerpts of the modified values.yaml (unchanged chart defaults are elided with # ...):
## nginx configuration
## Ref: https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/index.md
controller:
  name: controller
  image:
    registry: docker.io
    image: willdockerhub/ingress-nginx-controller
    tag: "v1.1.2"
    # digest: sha256:28b11ce69e57843de44e3db6413e98d09de0f6688e33d4bd384002a44f78405c
    pullPolicy: IfNotPresent
    # www-data -> uid 101
    runAsUser: 101
    allowPrivilegeEscalation: true
  # ...
  # -- Optionally change this to ClusterFirstWithHostNet in case you have 'hostNetwork: true'
  dnsPolicy: ClusterFirstWithHostNet
  # ...
  # -- Bare-metal considerations via the host network
  # https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#via-the-host-network
  hostNetwork: true
  # ...
  ingressClassResource:
    name: nginx
    enabled: true
    default: false
    controllerValue: "k8s.io/ingress-nginx"
  ingressClass: nginx
  # ...
  # -- Use a `DaemonSet` or `Deployment`
  kind: DaemonSet
  # ...
  # -- Node labels for controller pod assignment
  nodeSelector:
    kubernetes.io/os: linux
    ingress: "true"
  # ... (probes, resources, autoscaling, service, etc. keep the chart defaults)
  admissionWebhooks:
    enabled: true
    # ...
    patch:
      enabled: true
      image:
        registry: registry.aliyuncs.com
        image: google_containers/kube-webhook-certgen
        tag: v1.5.1
        # digest: sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660
        pullPolicy: IfNotPresent
# ... (defaultBackend, rbac, podSecurityPolicy, serviceAccount, tcp/udp, dhParam keep the chart defaults)
 
The specific changes are (a render check follows this list):
  • Change the image repository to docker.io/willdockerhub/ingress-nginx-controller and comment out digest
  • Set hostNetwork to true
  • Set dnsPolicy to ClusterFirstWithHostNet
  • Add the label ingress: "true" to nodeSelector, used to deploy the ingress controller onto designated nodes
  • Change kind to DaemonSet
  • Change the kube-webhook-certgen image to the domestic mirror registry.aliyuncs.com/google_containers/kube-webhook-certgen, adjust the tag, and comment out digest
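A quick way to sanity-check these edits before installing is to render the chart locally and grep for the changed settings (run from inside the unpacked ingress-nginx chart directory; the release name here is arbitrary):

helm template ingress-nginx . | grep -E 'kind: DaemonSet|hostNetwork: true|willdockerhub'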
 

Install ingress-nginx

💡
Newer ingress-nginx versions ship an admission webhook; installing directly may fail with: Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. Disable the webhook component via configuration.
# Enter the chart directory
[root@k8s-master ~]$ cd xxx/xxx/ingress-nginx
# Install with helm
# --create-namespace creates the namespace automatically
# --set controller.admissionWebhooks.enabled=false disables the webhook component
helm install ingress-nginx -n ingress-nginx . --create-namespace --set controller.admissionWebhooks.enabled=false
# Or install/upgrade in one step
helm upgrade --install ingress-nginx -n ingress-nginx . --create-namespace --set controller.admissionWebhooks.enabled=false
 
On success, the following information is printed:
NAME: ingress-nginx
LAST DEPLOYED: Wed Sep 15 10:17:09 2021
NAMESPACE: ingress-nginx
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace ingress-nginx get services -o wide -w ingress-nginx-controller'

An example Ingress that makes use of the controller:

  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    annotations:
      kubernetes.io/ingress.class:
    name: example
    namespace: foo
  spec:
    rules:
      - host: www.example.com
        http:
          paths:
            - backend:
                serviceName: exampleService
                servicePort: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
      - hosts:
          - www.example.com
        secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls
 

Set the Label & Verify

After installation, give a node the ingress=true label configured above so the Pod is scheduled onto the designated node, e.g. the master node:
# Label the master node with ingress=true
[root@k8s-master ingress-nginx]$ kubectl label node master1 ingress=true
node/master1 labeled
# By default, for safety, Kubernetes does not schedule Pods onto master nodes.
# That is fine to relax in a test environment, so remove the master taint:
[root@k8s-master ingress-nginx]$ kubectl taint node master1 node-role.kubernetes.io/master-
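To confirm the label took effect, list nodes by selector (note that on newer Kubernetes versions the master taint is named node-role.kubernetes.io/control-plane instead):

# Only nodes labeled ingress=true should be listed
kubectl get nodes -l ingress=true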
 
Once that completes, you can see ingress-nginx deployed onto the labeled node:
[root@k8s-master ~]$ kubectl get all -n ingress-nginx
NAME                                 READY   STATUS    RESTARTS   AGE
pod/ingress-nginx-controller-sm4m2   1/1     Running   0          6d1h

NAME                               TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx-controller   LoadBalancer   10.105.126.148   <pending>     80:32080/TCP,443:32443/TCP   6d1h

NAME                                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                         AGE
daemonset.apps/ingress-nginx-controller   1         1         1       1            1           ingress=true,kubernetes.io/os=linux   6d1h
 
💡
From the output above, the HTTP part of PORT(S), 80:32080/TCP, means that a request arriving at port 32080 of any physical machine in the cluster is forwarded to 10.105.126.148:80 (CLUSTER-IP:port).
 
The Pod has been scheduled onto the labeled node:
[root@k8s-master ~]$ kubectl get pods -n ingress-nginx -o wide
NAME                             READY   STATUS    RESTARTS   AGE    IP              NODE      NOMINATED NODE   READINESS GATES
ingress-nginx-controller-sm4m2   1/1     Running   0          6d1h   10.170.67.132   slave01   <none>           <none>
💡
ingress-nginx-controller works by running an nginx instance inside its Pod.
 

Testing ingress-nginx

Create the Backend nginx Pods and Service

Create the backend nginx Pods and Service, ingress-demo.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: svc-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - image: nginx:1.18.0
          name: svc-demo
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: svc-demo
spec:
  selector:
    app: myapp
  ports:
    - targetPort: 80   # port of the backend Pods
      port: 31081      # port the Service exposes
 
Apply it, then check the Pods and Service:
[root@k8s-master service]$ kubectl apply -f ingress-demo.yaml
[root@k8s-master service]$ kubectl get pods,svc
NAME                           READY   STATUS    RESTARTS   AGE
pod/svc-demo-f9785fc46-cqsmq   1/1     Running   0          3d
pod/svc-demo-f9785fc46-s24hq   1/1     Running   0          3d

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP     84d
service/svc-demo     ClusterIP   10.99.187.151   <none>        31081/TCP   3d
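Before wiring up the Ingress, the Service itself can be spot-checked from a cluster node (the ClusterIP and port are taken from the output above; ClusterIPs are only reachable from inside the cluster):

curl -s http://10.99.187.151:31081/ | head -n 4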
 

Create the Ingress Rule

💡
Each Ingress rule automatically generates a location block in the nginx.conf of the nginx instance running inside the ingress-nginx-controller Pod.
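Once the Ingress below is applied, this can be verified by grepping the controller's generated config (the pod name is taken from the earlier kubectl get pods output):

kubectl exec -n ingress-nginx ingress-nginx-controller-sm4m2 -- grep -n 'location /' /etc/nginx/nginx.conf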
Create the Ingress rule, ingress-nginx.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
  name: example
spec:
  rules:           # one Ingress can hold multiple rules
    - host:        # host config; may be omitted (matches *) or set to a pattern such as *.bar.com
      http:
        paths:     # equivalent to an nginx location; one host can have multiple paths
          - path: /
            pathType: Prefix
            backend:
              service:
                name: svc-demo   # which Service to proxy to
                port:
                  number: 31081  # the Service port
Create the Ingress:
[root@k8s-master service]$ kubectl apply -f ingress-nginx.yaml
[root@k8s-master service]$ kubectl get ingress
NAME      CLASS    HOSTS   ADDRESS   PORTS   AGE
example   <none>   *                 80      2d1h
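To inspect how the rule resolved to backend endpoints:

# The Backends section should list the svc-demo Pod endpoints
kubectl describe ingress example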
 

Access the Service

From a client machine, the service can be reached with curl ip:32080, where ip is the IP of any machine in the cluster.
💡
How it works: a request to ip:32080 reaches service/ingress-nginx-controller; the request path "/" matches the example Ingress rule, so the request is forwarded to port 31081 of the svc-demo Service, which in turn maps to port 80 of the backend Pods, producing the page below.
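For instance (192.168.0.10 is a stand-in here; substitute the IP of any node in your cluster):

curl http://192.168.0.10:32080/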
The output confirms that the Ingress forwarded the request to a backend Pod:
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Copyright © 2024 Yiner
