Containerising application - design pattern

If your applications are monolithic, the most obvious way is to package each application as a single process in a container image and deploy the containers to Kubernetes as pods (one container per pod), managed by a Deployment resource (which allows you to replicate the pods and do rolling updates). This corresponds to approach 1 in your list.
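As a minimal sketch, a Deployment for one such application could look like this (the name `app-a`, the image reference, and the port are placeholders, not taken from your setup):

```yaml
# Hypothetical Deployment for one monolithic application:
# one container per pod, replicated three times, with
# rolling updates on image changes (the Deployment default).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-a
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app-a
  template:
    metadata:
      labels:
        app: app-a
    spec:
      containers:
        - name: app-a
          image: registry.example.com/app-a:1.0.0   # placeholder image
          ports:
            - containerPort: 8080                   # placeholder port
```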

If the components of your applications are already loosely coupled, you could go for a microservices-style architecture. For example, you could extract the logic of s1, s2, and s3 that is common to all applications into separate microservices, package each of them as a container image, and run them on Kubernetes as pods (one container per pod, managed by a Deployment). The core of each application would then be its own "microservice", packaged as a container image and deployed to Kubernetes as pods managed by a Deployment. These "core" pods would act as clients of the services provided by the s1, s2, and s3 pods. This corresponds to approach 3 in your list.
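For one shared microservice, this could be sketched as a Deployment plus a Service (again, `s1`, the image, and the port are placeholders). The Service gives the "core" pods a stable in-cluster DNS name (e.g. `http://s1:8080`), regardless of how many s1 replicas are running or which nodes they land on:

```yaml
# Hypothetical manifests for the shared microservice s1.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: s1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: s1
  template:
    metadata:
      labels:
        app: s1
    spec:
      containers:
        - name: s1
          image: registry.example.com/s1:1.0.0   # placeholder image
          ports:
            - containerPort: 8080
---
# Service: stable virtual IP and DNS name in front of the s1 pods.
apiVersion: v1
kind: Service
metadata:
  name: s1
spec:
  selector:
    app: s1
  ports:
    - port: 8080
      targetPort: 8080
```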

Regarding approach 2: this isn't a best practice. In most cases, a pod contains only a single container. When a pod does contain multiple containers, one of them is the main container doing the main job, and the others are tightly coupled sidecar containers that do auxiliary jobs, such as shipping the main container's logs or proxying its traffic.
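To illustrate what the legitimate multi-container case looks like (all names here are hypothetical), a pod with a main container and a log-shipping sidecar sharing a volume might be sketched like this; note that this is for tightly coupled helpers, not for packing unrelated applications into one pod:

```yaml
# Hypothetical pod: the main container writes logs to a shared
# emptyDir volume, and a sidecar reads and ships them.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
    - name: app                 # main container doing the main job
      image: registry.example.com/app:1.0.0
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: log-shipper         # tightly coupled auxiliary sidecar
      image: registry.example.com/log-shipper:1.0.0
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
          readOnly: true
  volumes:
    - name: logs
      emptyDir: {}              # shared scratch volume, pod-lifetime only
```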


Summary

If you have a lot of common logic across your applications, approach 3 makes sense, as it avoids code duplication. It also provides the finest granularity for scaling: each microservice can be scaled independently of the applications that use it. The pods are managed by Deployment resources, which you will use anyway, even if you deploy each application as a single pod, so this adds no overhead.

If the amount of common logic is small, approach 1 is the simplest solution.