How to deploy to AWS Kubernetes from Azure DevOps
After some research and a bit of trial and error, I found another way to do it, without messing around with shell scripts.
You just need to apply the following manifest to Kubernetes. It will create a ServiceAccount and bind it to a custom Role; that Role has permission to create/delete Deployments and Pods (tweak it if you also need permissions for Services).
deploy-robot-conf.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: deploy-robot
automountServiceAccountToken: false
---
apiVersion: v1
kind: Secret
metadata:
  name: deploy-robot-secret
  annotations:
    kubernetes.io/service-account.name: deploy-robot
type: kubernetes.io/service-account-token
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: deploy-robot-role
  namespace: default
rules:
# ## Customize these to meet your requirements ##
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["create", "delete"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create", "delete"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: global-rolebinding
  namespace: default
subjects:
- kind: ServiceAccount
  name: deploy-robot
  namespace: default
roleRef:
  kind: Role
  name: deploy-robot-role
  apiGroup: rbac.authorization.k8s.io
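Save the manifest as deploy-robot-conf.yaml (the name used above) and apply it to the cluster:

kubectl apply -f deploy-robot-conf.yaml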
This grants the minimum permissions Azure DevOps needs to be able to deploy to the cluster.
Note: please tweak the rules in the Role resource to meet your needs, for instance by adding permissions for Services resources, as in the sketch below.
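For example, to also let the pipeline manage Services, you could add a rule like this one to the Role (the verbs here are an assumption; grant only what your deployment actually needs):

# Additional rule for Services (core API group)
- apiGroups: [""]
  resources: ["services"]
  verbs: ["create", "delete", "get", "list"]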
Then go to your release pipeline and create a Kubernetes service connection:
Fill in the fields and follow the steps required to get the secret from the service account; remember it is named deploy-robot if you didn't change the YAML file.
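If you are unsure how to get those values, a couple of kubectl commands along these lines should produce them (assuming the default namespace and the secret name from the manifest above):

# Server URL to paste into the connection form
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'

# Full secret JSON to paste into the Secret field
kubectl get secret deploy-robot-secret -n default -o json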
And then just use your Kubernetes service connection in your deployment tasks:
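In a YAML pipeline, that could look something like the sketch below. The connection name azure-devops-aws-k8s and the manifest path are assumptions, and the exact inputs may vary with the task version you use:

steps:
- task: Kubernetes@1
  displayName: Deploy to EKS
  inputs:
    connectionType: Kubernetes Service Connection
    kubernetesServiceEndpoint: azure-devops-aws-k8s  # name of the service connection created above
    namespace: default
    command: apply
    arguments: -f k8s/deployment.yaml  # hypothetical path to your manifest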
Another option would be to use kubeconfig-based authentication, where the kubeconfig file can be obtained with the following AWS CLI command:
aws eks --region <region> update-kubeconfig --name <cluster_name> --kubeconfig ~/.kube/AzureDevOpsConfig
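As a quick sanity check, you can point kubectl at that file (the path is the one chosen above) and list the cluster nodes:

KUBECONFIG=~/.kube/AzureDevOpsConfig kubectl get nodes

You can then paste the contents of that file into a kubeconfig-based Kubernetes service connection in Azure DevOps.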