
How to run kubectl within a job in a namespace?


Is it possible to run kubectl inside a Job resource in a specified namespace? I did not see any documentation or examples for this.

A Job creates one or more Pods and ensures that a specified number of them successfully terminate. This means the permission model is the same as for a normal Pod, so yes, it is possible to run kubectl inside a Job resource.

TL;DR:

  • Your YAML file is correct; maybe there was something else wrong in your cluster. I recommend deleting and recreating these resources and trying again.
  • Also check your Kubernetes server version against the kubectl version in the job image; if they are more than one minor version apart, you may hit unexpected incompatibilities (see the version check below).
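A quick way to compare the client and server versions (the versions shown here are just illustrative; the exact output format varies by kubectl release):

$ kubectl version --short
Client Version: v1.17.3
Server Version: v1.17.3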

Security Considerations:

  • Your job role's scope follows the best practice from the documentation (a specific Role, bound to a specific ServiceAccount, in a specific namespace).
  • If you use a ClusterRoleBinding with the cluster-admin role it will work, but it is over-permissioned and not recommended, since it grants full admin control over the entire cluster. You can check what a ServiceAccount is allowed to do before running the job, as shown below.
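For example, kubectl auth can-i with impersonation verifies the effective permissions (a sketch using the names from the reproduction below; your own context needs impersonation rights for this to work):

$ kubectl auth can-i list pods -n my-namespace --as=system:serviceaccount:my-namespace:internal-kubectl
yes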

Test Environment:

  • I deployed your config on Kubernetes 1.17.3 and ran the job with both bitnami/kubectl and bitnami/kubectl:1.17.3. It worked in both cases.
  • To avoid incompatibilities, use a kubectl image whose version matches your server's.

Reproduction:

$ cat job-kubectl.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: testing-stuff
  namespace: my-namespace
spec:
  template:
    metadata:
      name: testing-stuff
    spec:
      serviceAccountName: internal-kubectl
      containers:
      - name: tester
        image: bitnami/kubectl:1.17.3
        command:
        - "/bin/bash"
        - "-c"
        - "kubectl get pods -n my-namespace"
      restartPolicy: Never

$ cat job-svc-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: internal-kubectl
  namespace: my-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: modify-pods
  namespace: my-namespace
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: modify-pods-to-sa
  namespace: my-namespace
subjects:
  - kind: ServiceAccount
    name: internal-kubectl
roleRef:
  kind: Role
  name: modify-pods
  apiGroup: rbac.authorization.k8s.io
  • I created two pods just to give the get pods command some output to log.
$ kubectl run curl --image=radial/busyboxplus:curl -i --tty --namespace my-namespace
the pod is running
$ kubectl run ubuntu --generator=run-pod/v1 --image=ubuntu -n my-namespace
pod/ubuntu created
  • Then I applied the ServiceAccount, Role, RoleBinding and the Job (the apply commands below use the file names from above) and listed the pods:
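$ kubectl apply -f job-svc-account.yaml
$ kubectl apply -f job-kubectl.yaml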
$ kubectl get pods -n my-namespace
NAME                    READY   STATUS      RESTARTS   AGE
curl-69c656fd45-l5x2s   1/1     Running     1          88s
testing-stuff-ddpvf     0/1     Completed   0          13s
ubuntu                  0/1     Completed   3          63s
  • Now let's check the testing-stuff pod log to see if it logged the command output:
$ kubectl logs testing-stuff-ddpvf -n my-namespace
NAME                    READY   STATUS    RESTARTS   AGE
curl-69c656fd45-l5x2s   1/1     Running   1          76s
testing-stuff-ddpvf     1/1     Running   0          1s
ubuntu                  1/1     Running   3          51s

As you can see, the job ran successfully with the custom ServiceAccount.

Let me know if you have further questions about this case.


Create a service account like this:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: internal-kubectl

Create a ClusterRoleBinding using this:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: modify-pods-to-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: internal-kubectl
  namespace: default   # required for ServiceAccount subjects; assumes the SA above was created in default

Now create the pod with the same config as given in the documentation.
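To confirm the binding took effect before creating the pod, you can impersonate the service account (a sketch; it assumes the SA lives in the default namespace as above):

$ kubectl auth can-i '*' '*' --as=system:serviceaccount:default:internal-kubectl
yes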


When you use kubectl from a pod for any operation, such as getting pods or creating roles and role bindings, it uses the default service account. This service account doesn't have permission to perform those operations by default. So you need to:

  1. Create a service account, role and rolebinding using a more privileged account. You should have a kubeconfig file with admin (or admin-like) privileges; use that kubeconfig with kubectl from outside the pod to create the service account, role, rolebinding etc. (see the sketch after this list).

  2. After that is done, create the pod specifying that service account, and you should be able to perform the operations defined in the role from within the pod using kubectl.
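A minimal sketch of step 1, reusing the resource names from the answers above; the kubeconfig path is a hypothetical example:

# Run from outside the pod with admin-level credentials.
$ kubectl --kubeconfig ~/.kube/admin.conf create serviceaccount internal-kubectl
$ kubectl --kubeconfig ~/.kube/admin.conf create role modify-pods --verb=get,list,delete --resource=pods
$ kubectl --kubeconfig ~/.kube/admin.conf create rolebinding modify-pods-to-sa --role=modify-pods --serviceaccount=default:internal-kubectl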


apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: internal-kubectl
  containers:                        # the original snippet omitted the required containers field
  - name: tester
    image: bitnami/kubectl:1.17.3    # image borrowed from the first answer; any image with kubectl works
    command: ["sleep", "infinity"]   # keep the container alive so kubectl can be run via exec
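Once the pod is running, you can run kubectl inside it with exec (assuming the completed spec above):

$ kubectl exec my-pod -- kubectl get pods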