
CertManager Letsencrypt CertificateRequest "failed to perform self check GET request"


This might be worthwhile to look at. I was facing a similar issue with a connection timeout.

Change the LoadBalancer in the ingress-nginx service.

Add/change externalTrafficPolicy: Cluster.

The reason: the pod with the certificate issuer wound up on a different node than the load balancer, so it couldn't talk to itself through the ingress.

Below is the complete block, taken from https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.26.1/deploy/static/provider/cloud-generic.yaml

kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  # CHANGE/ADD THIS
  externalTrafficPolicy: Cluster
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
---
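If you'd rather not re-apply the whole manifest, the same change can be made in place with a one-line patch. This is a sketch assuming the service name and namespace from the manifest above (ingress-nginx in the ingress-nginx namespace); adjust both to match your install.

```shell
# Switch the existing Service to externalTrafficPolicy: Cluster in place.
# Assumes service "ingress-nginx" in namespace "ingress-nginx" as above.
kubectl patch svc ingress-nginx -n ingress-nginx \
  -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'

# Verify the change took effect.
kubectl get svc ingress-nginx -n ingress-nginx \
  -o jsonpath='{.spec.externalTrafficPolicy}'
```

Note that Cluster (the default) allows an extra hop between nodes and loses the client source IP, which is the trade-off for letting in-cluster pods reach the ingress through the load balancer.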


In my case cert-manager wanted to request the challenge via an internal IP address:

failed to perform self check GET request 'http:///.well-known/acme-challenge/': Get http:///.well-known/acme-challenge/: dial tcp 10.67.0.8:80: connect: connection timed out

i.e. DNS resolution was broken. I fixed this by changing the cert-manager deployment to use only external DNS servers, like so:

spec:
  template:
    spec:
      dnsConfig:
        nameservers:
        - 8.8.8.8
      dnsPolicy: None
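The snippet above can be applied without editing the full deployment manifest. This is a sketch assuming cert-manager was installed into the conventional cert-manager namespace with a deployment named cert-manager; change both if yours differ.

```shell
# Patch the cert-manager deployment to resolve DNS only via 8.8.8.8,
# bypassing the (broken) in-cluster resolver. Assumes deployment
# "cert-manager" in namespace "cert-manager".
kubectl patch deployment cert-manager -n cert-manager --patch '
spec:
  template:
    spec:
      dnsConfig:
        nameservers:
        - 8.8.8.8
      dnsPolicy: None
'
```

The patch triggers a rolling restart of the cert-manager pod, after which the self check should resolve the challenge URL via the external resolver.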

This is how you do it. I also created an issue so this can be made configurable in the Helm installation.


I had the exact same issue; it seems to be related to a bug in how the DigitalOcean load balancer works. The thread lets-encrypt-certificate-issuance suggested adding the annotation service.beta.kubernetes.io/do-loadbalancer-hostname: "kube.mydomain.com" to the load balancer. In my case I did not have a YAML config file for the load balancer, so I copied the load balancer declaration from the nginx-ingress install script and applied the new configuration to the Kubernetes cluster. Below is the final config for the load balancer.

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: 'true'
    # See https://github.com/digitalocean/digitalocean-cloud-controller-manager/blob/master/docs/controllers/services/examples/README.md#accessing-pods-over-a-managed-load-balancer-from-inside-the-cluster
    service.beta.kubernetes.io/do-loadbalancer-hostname: "kube.mydomain.com"
  labels:
    helm.sh/chart: ingress-nginx-3.19.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.43.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
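If the Service already exists (for example, created by the ingress-nginx install script), the annotation alone can be added in place rather than re-applying the whole manifest. This is a sketch assuming the controller service name and namespace from the config above, and kube.mydomain.com as a placeholder for your own DNS record pointing at the load balancer.

```shell
# Add the hostname annotation so in-cluster traffic to the LB hostname is
# routed correctly, working around the DigitalOcean hairpinning issue.
# Assumes service "ingress-nginx-controller" in namespace "ingress-nginx"
# and "kube.mydomain.com" as your DNS name (replace with your own).
kubectl annotate svc ingress-nginx-controller -n ingress-nginx \
  service.beta.kubernetes.io/do-loadbalancer-hostname="kube.mydomain.com" \
  --overwrite
```

With the annotation in place, the Service's status reports the hostname instead of the raw IP, so pods inside the cluster resolve kube.mydomain.com and reach the challenge endpoint without hairpinning through the load balancer IP.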