Kubernetes Job Cleanup


It looks like starting with Kubernetes 1.6 (and the v2alpha1 API version), if you're using CronJobs to create the Jobs (which, in turn, create your pods), you can limit how many old Jobs are kept. Just add the following to your CronJob spec:

```yaml
successfulJobsHistoryLimit: X
failedJobsHistoryLimit: Y
```

Where X and Y are the number of previously run Jobs the system should keep around. (By default, at least on version 1.5, Jobs are kept indefinitely.)
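For example, a minimal CronJob using these fields might look like this (the name, schedule, and image are illustrative; the apiVersion depends on your cluster version):

```yaml
apiVersion: batch/v1beta1   # batch/v2alpha1 on 1.6-1.7 clusters; batch/v1 on 1.21+
kind: CronJob
metadata:
  name: hello               # illustrative name
spec:
  schedule: "*/5 * * * *"
  successfulJobsHistoryLimit: 3   # keep the 3 most recent successful Jobs
  failedJobsHistoryLimit: 1       # keep only the most recent failed Job
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["echo", "hello"]
          restartPolicy: OnFailure
```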

Edit 2018-09-29:

For newer Kubernetes versions, the updated documentation links for this are here:


It's true that you used to have to delete jobs manually. @puja's answer was correct at the time of writing.
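On clusters without the features below, that manual cleanup can be scripted with a field selector; a sketch (the namespace name is illustrative):

```shell
# Delete all Jobs in the current namespace that completed successfully
# (status.successful is a supported field selector for batch/v1 Jobs).
kubectl delete jobs --field-selector status.successful=1

# Or target a specific namespace:
kubectl delete jobs -n my-namespace --field-selector status.successful=1
```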

Kubernetes 1.12.0 released a TTL feature (in alpha) that automatically cleans up Jobs a specified number of seconds after completion (changelog). You can set it to zero for immediate cleanup. See the Jobs docs.

Example from the doc:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-with-ttl
spec:
  ttlSecondsAfterFinished: 100
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
```


I recently built a Kubernetes operator to do this task.

After deployment, it monitors the selected namespace and deletes completed Jobs/pods if they finished without errors or restarts.

https://github.com/lwolf/kube-cleanup-operator