How to run kubectl commands inside a container?


I would use the Kubernetes API directly; you just need to install curl instead of kubectl, and the rest is RESTful.

curl http://localhost:8080/api/v1/namespaces/default/pods

I'm running the above command on one of my API servers. Replace localhost with the API server's IP address or DNS name.

Depending on your configuration, you may need to use TLS or provide a client certificate.
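For example, with certificate-based authentication you can point curl at the cluster CA and a client key pair. A sketch only: the file names and the port below are illustrative assumptions, not values from the question:

# Hypothetical credential paths; substitute your cluster's files
curl --cacert ca.crt \
  --cert client.crt \
  --key client.key \
  https://<apiserver>:6443/api/v1/namespaces/default/pods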

To find the API endpoints, you can run kubectl with --v=8; at that verbosity it logs the HTTP requests it makes.

Example:

kubectl get pods --v=8

Resources:

Kubernetes API documentation

Update for RBAC:

I assume you have already configured RBAC, created a service account for your pod, and run the pod with it. This service account needs list permission on pods in the required namespace, which means you need to create a role and a role binding for it.
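A minimal sketch of such a role and binding, assuming a service account named my-service-account in the default namespace (both names are illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: my-service-account  # illustrative name
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io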

Every container in the cluster is populated by default with a service account token that can be used for authenticating to the API server. To verify, run this inside the container:

cat /var/run/secrets/kubernetes.io/serviceaccount/token

To make a request to the API server, run this inside the container:

curl -ik \
  -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
  https://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/pods
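The -k flag skips server certificate verification. Since the cluster CA certificate is also mounted into the pod, you can verify the server instead:

curl -i \
  --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
  https://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/pods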


Bit late to the party here, but this is my two cents:

I've found using kubectl within a container much easier than calling the cluster's API.

(Why? Auto authentication!)

Say you're deploying a Node.js project that needs kubectl usage.

  1. Download (or build) kubectl inside the container; a prebuilt Linux binary is the usual route (see the Dockerfile sketch after this list)
  2. Build your application, copying kubectl into your container image
  3. Voilà! kubectl provides a rich CLI for managing your Kubernetes cluster
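A minimal Dockerfile sketch for steps 1 and 2. The Node.js base image, file layout, and entrypoint are illustrative assumptions; the kubectl download URL is the stable-release URL also referenced later in this thread:

# Illustrative Node.js base image; adjust to your project
FROM node:18-alpine

# Fetch a prebuilt Linux kubectl binary and put it on the PATH
RUN apk add --no-cache curl \
  && curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl" \
  && chmod +x kubectl \
  && mv kubectl /usr/local/bin/kubectl

WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["node", "index.js"]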

Helpful documentation

--- EDITS ---

After working with kubectl in my cluster's pods, I found a more effective way to authenticate pods so they can make k8s API calls. This method also provides stricter authentication.

  1. Create a ServiceAccount for your pod, and configure your pod to use said account. k8s Service Account docs
  2. Configure a RoleBinding or ClusterRoleBinding so the service account is authorized to communicate with the k8s API. k8s Role Binding docs
  3. Call the API directly, or use the k8s client to manage the API calls for you. I HIGHLY recommend using the client: it configures itself automatically inside pods, which removes the authentication token step required with raw requests.

When you're done, you will have the following: a ServiceAccount, a ClusterRoleBinding, and a Deployment (your pods).

Feel free to comment if you need some clearer direction; I'll try to help out as much as I can :)

All-in-one example

# apps/v1 replaces the deprecated extensions/v1beta1 and requires a selector
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-101
spec:
  replicas: 3
  selector:
    matchLabels:
      app: k8s-101
  template:
    metadata:
      labels:
        app: k8s-101
    spec:
      serviceAccountName: k8s-101-role
      containers:
      - name: k8s-101
        imagePullPolicy: Always
        image: salathielgenese/k8s-101
        ports:
        - name: app
          containerPort: 3000
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: k8s-101-role
subjects:
- kind: ServiceAccount
  name: k8s-101-role
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-101-role
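Assuming the manifest is saved as k8s-101.yaml (the file name is illustrative), everything can be applied at once:

kubectl apply -f k8s-101.yaml

Note that this binds the very broad cluster-admin ClusterRole; for production, a narrower Role scoped to just the resources you need is preferable.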

The salathielgenese/k8s-101 image contains kubectl, so one can just log into a pod container and execute kubectl as if it were running on the k8s host:

kubectl exec -it pod-container-id -- kubectl get pods


First Question

/usr/local/bin/kubectl: cannot execute binary file

It looks like you downloaded the macOS binary for kubectl. When running in Docker you probably need the Linux one:

https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
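For example, inside the image you could fetch and install it like this (the install path matches the /usr/local/bin/kubectl path from the error above):

curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
mv kubectl /usr/local/bin/kubectl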

Second Question

If you run kubectl in a properly configured Kubernetes cluster, it should be able to connect to the apiserver.

kubectl basically uses this code to find the apiserver and authenticate: github.com/kubernetes/client-go/rest.InClusterConfig

This means:

  • The host and port of the apiserver are stored in the environment variables KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT.
  • The access token is mounted to /var/run/secrets/kubernetes.io/serviceaccount/token.
  • The server certificate is mounted to /var/run/secrets/kubernetes.io/serviceaccount/ca.crt.

This is all data kubectl needs to know to connect to the apiserver.
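A quick sanity check from inside a pod, as a sketch; these commands only inspect what Kubernetes mounts automatically:

# The apiserver address comes from environment variables
echo "$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT"

# Token and CA certificate are mounted under the serviceaccount path
ls /var/run/secrets/kubernetes.io/serviceaccount/
# typically: ca.crt  namespace  token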

Some thoughts on why this might not work:

  • The container doesn't run in Kubernetes.
    • It's not enough to use the same Docker host; the container needs to run as part of a pod definition.
  • The access is restricted by using an authorization plugin (which is not the default).
  • The service account credentials are overwritten by the pod definition (spec.serviceAccountName).