Profiling Kubernetes Deployment Process
I am not sure you can achieve the outcome you want without in-depth knowledge of the components involved and some low-level coding.
What can be retrieved from Kubernetes:
Information about events
Events such as pod creation, termination, and scheduling come with timestamps:
$ kubectl get events --all-namespaces
Even in the JSON format (-o json) there is nothing about CPU/RAM usage in these events.
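The timestamps in those events are still useful for timing deployment phases. A minimal sketch, assuming GNU date and a hypothetical saved dump of kubectl get events -o json (the event data below is made up for illustration):

```shell
# Hypothetical sample of two events, shaped like kubectl get events -o json output.
# On a live cluster: kubectl get events -o json > /tmp/events.json
cat > /tmp/events.json <<'EOF'
{"items":[
 {"reason":"Scheduled","firstTimestamp":"2021-01-01T10:00:00Z"},
 {"reason":"Started","firstTimestamp":"2021-01-01T10:00:07Z"}
]}
EOF

# Extract the timestamp for a given event reason (grep/sed keeps this jq-free).
ts() { grep "\"$1\"" /tmp/events.json | sed 's/.*"firstTimestamp":"\([^"]*\)".*/\1/'; }

# Convert both timestamps to epoch seconds and diff them (GNU date assumed).
start=$(date -d "$(ts Scheduled)" +%s)
end=$(date -d "$(ts Started)" +%s)
echo "pod startup took $((end - start))s"
```

This gives you wall-clock durations between phases, but still nothing about CPU/RAM consumed during them.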
Information about pods
$ kubectl get pods POD_NAME -o json
No information about CPU/RAM usage.
$ kubectl describe pods POD_NAME
No information about CPU/RAM usage either.
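What the pod manifest does expose are the declared resource requests and limits, which are not actual usage but can still be worth collecting. A sketch against a hypothetical saved dump of kubectl get pods POD_NAME -o json (the pod data below is made up):

```shell
# Hypothetical pod spec fragment, shaped like kubectl get pods -o json output.
# On a live cluster: kubectl get pods POD_NAME -o json > /tmp/pod.json
cat > /tmp/pod.json <<'EOF'
{"spec":{"containers":[{"name":"nginx",
 "resources":{"requests":{"cpu":"100m","memory":"64Mi"}}}]}}
EOF

# Pull the declared CPU request out of the dump (sed keeps this jq-free).
cpu_request=$(sed 's/.*"cpu":"\([^"]*\)".*/\1/' /tmp/pod.json | tail -1)
echo "declared CPU request: $cpu_request"
```

Keep in mind these are the values you asked the scheduler for, not what the container actually consumes.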
Information about resource usage
There are some tools to monitor and report basic resource usage:
$ kubectl top node
With output:
NAME      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
MASTER    90m          9%     882Mi           33%
WORKER1   47m          5%     841Mi           31%
WORKER2   37m          3%     656Mi           24%
$ kubectl top pods --all-namespaces
With output:
NAMESPACE   NAME                           CPU(cores)   MEMORY(bytes)
default     nginx-local-84ddb99b55-2nzdb   0m           1Mi
default     nginx-local-84ddb99b55-nxfh5   0m           1Mi
default     nginx-local-84ddb99b55-xllw2   0m           1Mi
This does show CPU/RAM usage, but only as a point-in-time snapshot per node or pod.
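If the cluster runs metrics-server, the same data is available in machine-readable form from the Metrics API (kubectl get --raw "/apis/metrics.k8s.io/v1beta1/pods"), which is easier to sample in a loop than parsing kubectl top output. A sketch parsing a hypothetical saved response (the payload below is made up to mirror the API's shape):

```shell
# Hypothetical Metrics API response fragment. On a live cluster:
#   kubectl get --raw "/apis/metrics.k8s.io/v1beta1/pods" > /tmp/metrics.json
cat > /tmp/metrics.json <<'EOF'
{"items":[{"metadata":{"name":"nginx-local-84ddb99b55-2nzdb"},
 "containers":[{"name":"nginx","usage":{"cpu":"1m","memory":"1824Ki"}}]}]}
EOF

# Extract the per-container memory usage figure from the dump.
mem=$(sed -n 's/.*"memory":"\([^"]*\)".*/\1/p' /tmp/metrics.json)
echo "memory usage: $mem"
```

Polling this endpoint around a deployment gives you a crude usage timeline, though the resolution is limited by the metrics-server scrape interval.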
Information about deployments
$ kubectl describe deployment DEPLOYMENT_NAME
The output gives no information about CPU/RAM usage.
Getting information about resources
Getting CPU/RAM usage tied to a specific action, like pulling an image or scaling a deployment, could be problematic. Not all of these processes are managed by Kubernetes, and additional tools at the OS level might be needed to fetch that information.
For example, pulling an image for a deployment engages the kubelet agent, which talks through the CRI to Docker or whatever container runtime your cluster is using. On top of that, the container runtime does not only download the image; it also performs actions (unpacking layers, for example) that are not directly monitored by Kubernetes.
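That said, the kubelet does record the pull duration in the message of the "Pulled" event, so at least the timing (not the resource cost) of an image pull can be recovered. A sketch using a hypothetical event message (the exact wording can vary between Kubernetes versions):

```shell
# Hypothetical "Pulled" event message as the kubelet records it. On a live
# cluster it comes from: kubectl get events --field-selector reason=Pulled
msg='Successfully pulled image "nginx" in 5.32s'

# The kubelet appends the pull duration to the message itself; strip it out.
pull_time=$(echo "$msg" | sed 's/.* in \(.*\)$/\1/')
echo "image pull took $pull_time"
```

For the CPU/RAM cost of the pull itself you would have to watch the runtime's processes at the OS level (cgroups, for instance), outside of Kubernetes.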
As another example, the HPA (Horizontal Pod Autoscaler) is a Kubernetes abstraction, and the best way to fetch its metrics depends heavily on how metrics are collected in your cluster.
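Whatever the metrics backend, kubectl get hpa does print the current value against the target, which can be scraped as a starting point. A sketch parsing one hypothetical output line (the HPA name and values below are made up):

```shell
# Hypothetical kubectl get hpa output line, columns:
# NAME    REFERENCE          TARGETS   MINPODS  MAXPODS  REPLICAS
line='nginx   Deployment/nginx   42%/80%   1   5   3'

# TARGETS is "current/target"; split on "/" to get the current value.
current=$(echo "$line" | awk '{split($3, a, "/"); print a[1]}')
echo "current CPU vs target: $current"
```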
I would highly encourage you to share what exactly (case by case) you want to monitor.
You can find some of these in the events feed for the pod; check kubectl describe pod.