Jenkins kubernetes plugin not working


Instead of using certificates, I suggest you use Kubernetes credentials by creating a ServiceAccount:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list","watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins
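As a minimal sketch of applying this (the file name jenkins-rbac.yaml and the default namespace are my assumptions, adapt them to your setup):

    # apply the ServiceAccount, Role and RoleBinding
    kubectl apply -f jenkins-rbac.yaml
    # confirm the ServiceAccount was created
    kubectl get serviceaccount jenkins -o yaml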

and deploying Jenkins using that ServiceAccount:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: jenkins
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      serviceAccountName: jenkins
      ....
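Assuming the Deployment is saved as jenkins-deployment.yaml (my file name), you can check that the pod really runs under that ServiceAccount with something like:

    kubectl apply -f jenkins-deployment.yaml
    # should print "jenkins"
    kubectl get pod -l app=jenkins -o jsonpath='{.items[0].spec.serviceAccountName}'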

Here are my screenshots of the Kubernetes plugin configuration (note the Jenkins tunnel for the JNLP port; 'jenkins' is the name of my Kubernetes service):

[screenshot: Kubernetes cloud configuration, part 1]

[screenshot: Kubernetes cloud configuration, part 2]
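If you prefer text over screenshots, the same cloud configuration looks roughly like this in Jenkins Configuration as Code; the credential ID, namespace and URLs are placeholders from my setup, and attribute names can vary between plugin versions:

    jenkins:
      clouds:
        - kubernetes:
            name: "kubernetes"
            serverUrl: "https://kubernetes.default"
            namespace: "default"
            credentialsId: "jenkins-sa-token"   # the Secret text / ServiceAccount credential
            jenkinsUrl: "http://jenkins:8080"   # 'jenkins' is the Kubernetes service name
            jenkinsTunnel: "jenkins:50000"      # JNLP tunnel mentioned above
            containerCapStr: "100"              # container cap (see the note about the cap at the end)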

For credentials:

[screenshot: adding a Kubernetes ServiceAccount credential]

Then fill in the fields (the ID is autogenerated; the description is what appears in the credentials list box), but be sure you have created the ServiceAccount in Kubernetes as described above:

[screenshot: credential configuration form]
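If you want to create a Secret text credential instead, one way to obtain the ServiceAccount token from the command line (on clusters before Kubernetes 1.24, where a token secret is auto-created for the ServiceAccount) is something like:

    kubectl get secret $(kubectl get sa jenkins -o jsonpath='{.secrets[0].name}') \
      -o jsonpath='{.data.token}' | base64 --decode

On newer clusters you can request a token directly with kubectl create token jenkins.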

My instructions are for a Jenkins master running inside Kubernetes. If you want it outside the cluster (with only the slaves inside), I think you have to use plain login/password credentials.

As for your last error, it looks like a host-resolution problem: the slave cannot resolve your host.
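A quick way to check this is to resolve the Jenkins hostname from inside the cluster; the pod name here is throwaway and I assume the service is called 'jenkins' in the default namespace:

    kubectl run -it --rm dns-test --image=busybox --restart=Never -- \
      nslookup jenkins.default.svc.cluster.local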

I hope it helps you.


OK, I found the issue: I had set the container cap to 10 (in the default namespace), which is too low for my cluster. I have a 15-worker-node cluster, and when the K8s master tries to start a pod it launches several pods at once (terminating the rest after one is scheduled), which eventually exceeds the container cap (which was 10). I changed the cap to 100 and now things work as expected.

One thing I noticed with the K8s Jenkins plugin: it does not clean up errored containers by itself, which inflates the container count and leads to this problem.
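Until that is fixed, a workaround is to delete the failed agent pods yourself so they stop counting toward the cap; assuming the agents run in the default namespace, something like:

    # list, then delete, pods that ended up in the Failed phase
    kubectl get pods -n default --field-selector=status.phase=Failed
    kubectl delete pods -n default --field-selector=status.phase=Failed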