Kubernetes always returns 503 Service Temporarily Unavailable with multiple TLS Ingress hosts



I have a Kubernetes cluster set up with kops on Amazon Web Services.

I have two sites configured. One is served over SSL/TLS/https and the other is plain http. Both are WordPress sites. The domain names have been changed to protect the sites' identity.

Ingress configuration:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-rules
spec:
  tls:
  - hosts:
    - site1.com
    secretName: site1-tls-secret
  - hosts:
    - www.site1.com
    secretName: site1-tls-secret
  rules:
  - host: site1.com
    http:
      paths:
      - path: /
        backend:
          serviceName: site1
          servicePort: 80
  - host: www.site1.com
    http:
      paths:
      - path: /
        backend:
          serviceName: site1
          servicePort: 80
  - host: blog.site2.com
    http:
      paths:
      - path: /
        backend:
          serviceName: site2
          servicePort: 80

Ingress service:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  labels:
    app: nginx-ingress
    k8s-addon: ingress-nginx.addons.k8s.io
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: 'tcp'
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: '443'
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  selector:
    app: nginx-ingress

Ingress deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
      - name: nginx-ingress
        image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.11
        imagePullPolicy: Always
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 80
          hostPort: 80
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/echoheaders-default
        - --configmap=$(POD_NAMESPACE)/nginx-load-balancer-conf

Generated nginx.conf:

daemon off;

worker_processes 1;
pid /run/nginx.pid;

worker_rlimit_nofile 1047552;
events {
    multi_accept        on;
    worker_connections  16384;
    use                 epoll;
}

http {
    set_real_ip_from    0.0.0.0/0;
    real_ip_header      proxy_protocol;

    real_ip_recursive   on;

    geoip_country       /etc/nginx/GeoIP.dat;
    geoip_city          /etc/nginx/GeoLiteCity.dat;
    geoip_proxy_recursive on;
    # lua section to return proper error codes when custom pages are used
    lua_package_path '.?.lua;/etc/nginx/lua/?.lua;/etc/nginx/lua/vendor/lua-resty-http/lib/?.lua;';
    init_by_lua_block {
        require("error_page")
    }

    sendfile            on;
    aio                 threads;
    tcp_nopush          on;
    tcp_nodelay         on;

    log_subrequest      on;

    reset_timedout_connection on;
    keepalive_timeout  75s;
    keepalive_requests 100;

    client_header_buffer_size       1k;
    large_client_header_buffers     4 8k;
    client_body_buffer_size         8k;

    http2_max_field_size            4k;
    http2_max_header_size           16k;

    types_hash_max_size             2048;
    server_names_hash_max_size      1024;
    server_names_hash_bucket_size   64;
    map_hash_bucket_size            64;

    proxy_headers_hash_max_size     512;
    proxy_headers_hash_bucket_size  64;

    variables_hash_bucket_size      64;
    variables_hash_max_size         2048;

    underscores_in_headers          off;
    ignore_invalid_headers          on;

    include /etc/nginx/mime.types;
    default_type text/html;
    gzip on;
    gzip_comp_level 5;
    gzip_http_version 1.1;
    gzip_min_length 256;
    gzip_types application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component;
    gzip_proxied any;

    # Custom headers for response

    server_tokens on;

    # disable warnings
    uninitialized_variable_warn off;

    log_format upstreaminfo '$the_real_ip - [$the_real_ip] - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status';

    map $request_uri $loggable {
        default 1;
    }

    access_log /var/log/nginx/access.log upstreaminfo if=$loggable;
    error_log  /var/log/nginx/error.log notice;

    resolver 100.64.0.10 valid=30s;

    # Retain the default nginx handling of requests without a "Connection" header
    map $http_upgrade $connection_upgrade {
        default          upgrade;
        ''               close;
    }
    # trust http_x_forwarded_proto headers correctly indicate ssl offloading
    map $http_x_forwarded_proto $pass_access_scheme {
        default          $http_x_forwarded_proto;
        ''               $scheme;
    }

    map $http_x_forwarded_port $pass_server_port {
       default           $http_x_forwarded_port;
       ''                $server_port;
    }

    map $http_x_forwarded_for $the_real_ip {
        default          $http_x_forwarded_for;
        ''               $proxy_protocol_addr;
    }

    # map port 442 to 443 for header X-Forwarded-Port
    map $pass_server_port $pass_port {
        442              443;
        default          $pass_server_port;
    }

    # Map a response error watching the header Content-Type
    map $http_accept $httpAccept {
        default          html;
        application/json json;
        application/xml  xml;
        text/plain       text;
    }

    map $httpAccept $httpReturnType {
        default          text/html;
        json             application/json;
        xml              application/xml;
        text             text/plain;
    }

    # Obtain best http host
    map $http_host $this_host {
        default          $http_host;
        ''               $host;
    }

    map $http_x_forwarded_host $best_http_host {
        default          $http_x_forwarded_host;
        ''               $this_host;
    }

    server_name_in_redirect off;
    port_in_redirect        off;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    # turn on session caching to drastically improve performance
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_session_timeout 10m;

    # allow configuring ssl session tickets
    ssl_session_tickets on;

    # slightly reduce the time-to-first-byte
    ssl_buffer_size 4k;

    # allow configuring custom ssl ciphers
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_prefer_server_ciphers on;

    ssl_ecdh_curve secp384r1;

    proxy_ssl_session_reuse on;

    upstream upstream-default-backend {
        # Load balance algorithm; empty for round robin, which is the default
        least_conn;
        server 100.96.1.49:8080 max_fails=0 fail_timeout=0;
    }

    upstream default-site1-80 {
        # Load balance algorithm; empty for round robin, which is the default
        least_conn;
        server 127.0.0.1:8181 max_fails=0 fail_timeout=0;
    }

    upstream default-site2blog-80 {
        # Load balance algorithm; empty for round robin, which is the default
        least_conn;
        server 100.96.2.127:80 max_fails=0 fail_timeout=0;
        server 100.96.1.52:80 max_fails=0 fail_timeout=0;
    }
    server {
        server_name _;
        listen 80 proxy_protocol default_server reuseport backlog=511;
        listen [::]:80 proxy_protocol default_server reuseport backlog=511;
        set $proxy_upstream_name "-";

        listen 442 proxy_protocol default_server reuseport backlog=511 ssl http2;
        listen [::]:442 proxy_protocol  default_server reuseport backlog=511 ssl http2;
        # PEM sha: ------
        ssl_certificate                         /ingress-controller/ssl/default-fake-certificate.pem;
        ssl_certificate_key                     /ingress-controller/ssl/default-fake-certificate.pem;

        more_set_headers                        "Strict-Transport-Security: max-age=15724800; includeSubDomains;";
        location / {
            set $proxy_upstream_name "upstream-default-backend";

            port_in_redirect off;

            proxy_set_header Host                   $best_http_host;

            # Pass the extracted client certificate to the backend

            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;
            proxy_set_header                        Connection        $connection_upgrade;

            proxy_set_header X-Real-IP              $the_real_ip;
            proxy_set_header X-Forwarded-For        $the_real_ip;
            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
            proxy_set_header X-Original-URI         $request_uri;
            proxy_set_header X-Scheme               $pass_access_scheme;

            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";

            # Custom headers to proxied server

            proxy_connect_timeout                   10s;
            proxy_send_timeout                      120s;
            proxy_read_timeout                      120s;

            proxy_redirect                          off;
            proxy_buffering                         off;
            proxy_buffer_size                       "4k";
            proxy_buffers                           4 "4k";

            proxy_http_version                      1.1;

            proxy_cookie_domain                     off;
            proxy_cookie_path                       off;

            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream                     error timeout invalid_header http_502 http_503 http_504;

            proxy_pass http://upstream-default-backend;
        }

        # health checks in cloud providers require the use of port 80
        location /healthz {
            access_log off;
            return 200;
        }

        # this is required to avoid error if nginx is being monitored
        # with an external software (like sysdig)
        location /nginx_status {
            allow 127.0.0.1;
            allow ::1;
            deny all;

            access_log off;
            stub_status on;
        }
    }
    server {
        server_name blog.site2.com;
        listen 80 proxy_protocol;
        listen [::]:80 proxy_protocol;
        set $proxy_upstream_name "-";
        location / {
            set $proxy_upstream_name "default-site2blog-80";

            port_in_redirect off;

            client_max_body_size                    "20m";

            proxy_set_header Host                   $best_http_host;

            # Pass the extracted client certificate to the backend

            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;
            proxy_set_header                        Connection        $connection_upgrade;

            proxy_set_header X-Real-IP              $the_real_ip;
            proxy_set_header X-Forwarded-For        $the_real_ip;
            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
            proxy_set_header X-Original-URI         $request_uri;
            proxy_set_header X-Scheme               $pass_access_scheme;

            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";
            # Custom headers to proxied server

            proxy_connect_timeout                   10s;
            proxy_send_timeout                      120s;
            proxy_read_timeout                      120s;

            proxy_redirect                          off;
            proxy_buffering                         off;
            proxy_buffer_size                       "4k";
            proxy_buffers                           4 "4k";

            proxy_http_version                      1.1;

            proxy_cookie_domain                     off;
            proxy_cookie_path                       off;

            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream                     error timeout invalid_header http_502 http_503 http_504;

            proxy_pass http://default-site2blog-80;
        }

    }

    server {
        server_name site1.com;
        listen 80 proxy_protocol;
        listen [::]:80 proxy_protocol;
        set $proxy_upstream_name "-";

        listen 442 proxy_protocol ssl http2;
        listen [::]:442 proxy_protocol  ssl http2;
        # PEM sha: ---
        ssl_certificate                         /ingress-controller/ssl/default-site1-tls-secret.pem;
        ssl_certificate_key                     /ingress-controller/ssl/default-site1-tls-secret.pem;

        more_set_headers                        "Strict-Transport-Security: max-age=15724800; includeSubDomains;";
        location / {
            set $proxy_upstream_name "default-site1-80";

            # enforce ssl on server side
            if ($pass_access_scheme = http) {
                return 301 https://$best_http_host$request_uri;
            }
            port_in_redirect off;

            client_max_body_size                    "20m";

            proxy_set_header Host                   $best_http_host;

            # Pass the extracted client certificate to the backend

            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;
            proxy_set_header                        Connection        $connection_upgrade;

            proxy_set_header X-Real-IP              $the_real_ip;
            proxy_set_header X-Forwarded-For        $the_real_ip;
            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
            proxy_set_header X-Original-URI         $request_uri;
            proxy_set_header X-Scheme               $pass_access_scheme;

            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";

            # Custom headers to proxied server

            proxy_connect_timeout                   10s;
            proxy_send_timeout                      120s;
            proxy_read_timeout                      120s;

            proxy_redirect                          off;
            proxy_buffering                         off;
            proxy_buffer_size                       "4k";
            proxy_buffers                           4 "4k";

            proxy_http_version                      1.1;

            proxy_cookie_domain                     off;
            proxy_cookie_path                       off;

            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream                     error timeout invalid_header http_502 http_503 http_504;

            proxy_pass http://default-site1-80;
        }

    }

    # default server, used for NGINX healthcheck and access to nginx stats
    server {
        # Use the port 18080 (random value just to avoid known ports) as default port for nginx.
        # Changing this value requires a change in:
        # https://github.com/kubernetes/contrib/blob/master/ingress/controllers/nginx/nginx/command.go#L104
        listen 18080 default_server reuseport backlog=511;
        listen [::]:18080 default_server reuseport backlog=511;
        set $proxy_upstream_name "-";

        location /healthz {
            access_log off;
            return 200;
        }

        location /nginx_status {
            set $proxy_upstream_name "internal";

            access_log off;
            stub_status on;
        }

        # this location is used to extract nginx metrics
        # using prometheus.
        # TODO: enable extraction for vts module.
        location /internal_nginx_status {
            set $proxy_upstream_name "internal";

            allow 127.0.0.1;
            allow ::1;
            deny all;

            access_log off;
            stub_status on;
        }

        location / {
            set $proxy_upstream_name "upstream-default-backend";
            proxy_pass             http://upstream-default-backend;
        }

    }

    # default server for services without endpoints
    server {
        listen 8181;
        set $proxy_upstream_name "-";

        location / {
            return 503;
        }
    }
}

stream {
    log_format log_stream [$time_local] $protocol $status $bytes_sent $bytes_received $session_time;

    access_log /var/log/nginx/access.log log_stream;

    error_log  /var/log/nginx/error.log;

    # TCP services

    # UDP services
}

I fixed the 503 for the http site blog.site2.com by recreating the deployment and the service for that site. That did not fix the https site.
Greg Pagendam-Turner

Answers:



This was caused by the Ingress configuration referencing the wrong service name. After updating the Ingress to reference the correct service, I no longer get 503s.
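
As an illustration, a minimal sketch of a matching pair, assuming the WordPress Service behind site1.com really is named site1 (whatever name is in the Service's metadata.name must be used verbatim, and case-sensitively, in the Ingress backend):

apiVersion: v1
kind: Service
metadata:
  name: site1                 # the name the Ingress backend must reference
spec:
  selector:
    app: site1-wordpress      # assumption: label carried by the WordPress pods
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-rules
spec:
  rules:
  - host: site1.com
    http:
      paths:
      - path: /
        backend:
          serviceName: site1  # must match metadata.name above exactly
          servicePort: 80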


Note also that you can have multiple Ingresses. I usually create one Ingress per service; there is no need to cram everything into a single file and confuse yourself (see the sketch below).
cohadar
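
A minimal sketch of that split, reusing the hosts and services from the question (the Ingress names are illustrative):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: site1-rules           # illustrative name
spec:
  tls:
  - hosts:
    - site1.com
    - www.site1.com
    secretName: site1-tls-secret
  rules:
  - host: site1.com
    http:
      paths:
      - path: /
        backend:
          serviceName: site1
          servicePort: 80
  - host: www.site1.com
    http:
      paths:
      - path: /
        backend:
          serviceName: site1
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: site2-blog-rules      # illustrative name
spec:
  rules:
  - host: blog.site2.com
    http:
      paths:
      - path: /
        backend:
          serviceName: site2
          servicePort: 80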


You can also get a 503 error from nginx when basic-auth is enabled on the Ingress and the nginx.ingress.kubernetes.io/auth-secret annotation references a secret that does not exist.

Adding the missing secret, or removing all the basic-auth annotations from the Ingress, can resolve the situation.
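
For reference, a hedged sketch of the annotations in question (the secret name basic-auth is illustrative, and depending on the controller version the annotation prefix may differ; the secret must exist in the Ingress's namespace):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-rules
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth        # must exist in the same namespace
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  rules:
  - host: site1.com
    http:
      paths:
      - path: /
        backend:
          serviceName: site1
          servicePort: 80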



In my case it was caused by using the wrong servicePort (a port other than the one defined in the Service).
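
A small sketch of what must line up, with assumed port numbers: servicePort in the Ingress refers to the Service's port (by number or by name), not to the container's targetPort.

apiVersion: v1
kind: Service
metadata:
  name: site1
spec:
  selector:
    app: site1-wordpress      # assumption: pod label
  ports:
  - name: http
    port: 80                  # what the Ingress servicePort must reference
    targetPort: 8080          # container port; not what servicePort refers to
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: site1-rules
spec:
  rules:
  - host: site1.com
    http:
      paths:
      - path: /
        backend:
          serviceName: site1
          servicePort: 80     # or "http"; must match the Service port above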



Often the problem is simply not writing the correct service name in the Ingress YAML (under backend: serviceName:), so be careful when writing your YAML files.

For example:

service.yml:

apiVersion: v1
kind: Service
metadata:
  name: my-service-name

ingress.yml:

spec:
  tls:
  - hosts:
    - my-domain-name.com
    secretName: ingress-tls
  rules:
  - host: my-domain-name.com
    http:
      paths:
      - backend:
          serviceName: MY-SERVICE-NAME   # wrong: does not match the Service's metadata.name ("my-service-name"); names are case-sensitive
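
The corrected reference would use the Service's exact name (servicePort 80 is assumed here):

spec:
  rules:
  - host: my-domain-name.com
    http:
      paths:
      - backend:
          serviceName: my-service-name   # matches metadata.name of the Service above
          servicePort: 80                # assumption: the Service exposes port 80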
Licensed under cc by-sa 3.0 with attribution required.