
Job with multiple containers never succeeds


I know it's a year too late, but best practice would be to run a single Cloud SQL Proxy service for all of the app's purposes, and then configure DB access in the app's image to use this service as the DB hostname.

This way you will not need to put a Cloud SQL Proxy container into every pod that uses the DB.
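A minimal sketch of that setup, assuming the gcr.io/cloudsql-docker/gce-proxy image and a placeholder instance name my-project:us-central1:my-instance (credentials, e.g. a mounted service-account key, are omitted for brevity):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudsql-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cloudsql-proxy
  template:
    metadata:
      labels:
        app: cloudsql-proxy
    spec:
      containers:
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.33.2
        # listen on all interfaces so other pods can reach the proxy through the Service
        command: ["/cloud_sql_proxy",
                  "-instances=my-project:us-central1:my-instance=tcp:0.0.0.0:5432"]
        ports:
        - containerPort: 5432
---
apiVersion: v1
kind: Service
metadata:
  name: cloudsql-proxy          # apps use this name as their DB hostname
spec:
  selector:
    app: cloudsql-proxy
  ports:
  - port: 5432
    targetPort: 5432

With this in place, application pods simply point their DB connection string at cloudsql-proxy:5432 instead of carrying their own proxy sidecar.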


Each Pod can be configured with an init container, which seems to be a good fit for your issue. So instead of having a Pod with two containers which have to run permanently, you could define an init container to do your migration upfront, e.g. like this:

apiVersion: v1
kind: Pod
metadata:
  name: init-container
  annotations:
    pod.beta.kubernetes.io/init-containers: '[
        {
            "name": "migrate",
            "image": "application:version",
            "command": ["migrate up"]
        }
    ]'
spec:
  containers:
  - name: application
    image: application:version
    ports:
    - containerPort: 80
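On newer Kubernetes versions (1.6+) the same idea can be expressed with the spec.initContainers field instead of the beta annotation; a minimal sketch, reusing the hypothetical application:version image and migrate command from above:

apiVersion: v1
kind: Pod
metadata:
  name: init-container
spec:
  initContainers:
  - name: migrate
    image: application:version   # same image as the app, assumed to contain the migration tool
    command: ["migrate", "up"]   # must complete successfully before the app container starts
  containers:
  - name: application
    image: application:version
    ports:
    - containerPort: 80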


You haven't posted enough details about your specific problem. But I'm taking a guess based on experience.

TL;DR: Move your containers into separate jobs if they are independent.

--

Kubernetes Jobs keep restarting until the Job succeeds. A Kubernetes Job succeeds only if every container within it succeeds.

This means that your containers should return in a restart-proof (idempotent) way: once a container has run successfully, it should still report success if it is run again. Otherwise, say container1 succeeds and container2 fails. The Job restarts. Now container1 fails (because its work has already been done), so the Job keeps restarting.
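A minimal sketch of the separate-Jobs approach, assuming two hypothetical independent tasks (migrate and seed) that currently run as two containers in one Job:

apiVersion: batch/v1
kind: Job
metadata:
  name: migrate
spec:
  backoffLimit: 3
  template:
    spec:
      containers:
      - name: migrate
        image: application:version   # hypothetical image; replace with your own
        command: ["migrate", "up"]
      restartPolicy: Never
---
apiVersion: batch/v1
kind: Job
metadata:
  name: seed
spec:
  backoffLimit: 3
  template:
    spec:
      containers:
      - name: seed
        image: application:version
        command: ["seed", "run"]     # hypothetical second task, independent of the migration
      restartPolicy: Never

Each Job then succeeds or fails on its own, so a failure in one task no longer forces the already-successful one to rerun.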