In the previous article, you learned about Kubernetes Pods and explored single-container Pods in detail. In this article, you will learn about combining multiple containers in a single Pod. Running multiple containers in one Pod comes with several advantages as well as limitations: sharing resources like the IP address and storage becomes easier, but you should not adopt it blindly without a deep understanding of how it works. You will also learn about the various design patterns that have emerged in Kubernetes for applying Multi-Container Pods.
In this article, you will learn:
- Init Container
- Sidecar Pattern
- Ambassador Pattern
- Adapter Pattern
What are Multi-Container Pods in Kubernetes?
When you run two or more containers in a single Pod, it is known as a multi-container Pod. The biggest advantage it offers is that the containers share resources like storage volumes, the network namespace, and the IP address, which simplifies communication between the containers.
However, you should not get excited and start deploying all your microservices as multi-container Pods. The short answer is: always go with a single-container Pod wherever possible. If so, why should you learn this?
- Sometimes, you’ll want to initialize your Pod with a script or run some preconfiguration procedure before the application container starts. This logic runs in a so-called init container.
- Other times, you’ll want to provide helper functionality that runs alongside the application container, e.g. a log aggregator or a service mesh proxy. Containers running helper logic are called sidecars.
Why Multi-Container Pods?
While single-container Pods offer simplicity, multi-container Pods provide several advantages:
- Efficient Resource Sharing: Containers within a Pod share storage volumes and network namespaces, enabling seamless communication and data exchange. This reduces resource duplication and optimizes resource utilization.
- Modularity: Complex applications can be broken down into smaller, more manageable units, promoting code maintainability and easier deployments.
- Enhanced Functionality: You can leverage sidecar containers to inject additional functionalities like logging, monitoring, or security alongside the primary application container.
Types of Multi-Container Pod Design Patterns:
- Init Container Pattern
- Sidecar Pattern
- Adapter Pattern
- Ambassador Pattern
Init Containers Pattern
Init containers provide initialization logic that runs before the main application even starts. They are used to perform initialization tasks such as setting up configuration files, fetching dependencies, or initializing data volumes. They are also used for preparing database schemas, populating initial data in databases, or fetching sensitive information like secrets and credentials from external sources.
- A Pod can have one or more init containers and they run in sequence.
- Init containers always run to completion. Each init container must complete successfully before the next one starts.
- If a Pod’s init container fails, the kubelet repeatedly restarts that init container until it succeeds.
- Only after all the init containers finish execution, the application container starts.
- Init containers run and complete their tasks before the main application container starts. Unlike sidecar containers, init containers are not continuously running alongside the main containers.
apiVersion: v1
kind: Pod
metadata:
  name: init-pod-example
  labels:
    app: init-pod-example
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: myvol
          mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
    - name: alpine
      image: alpine
      command:
        - sh
        - "-c"
        - |
          mkdir -p /init-cont && \
          touch "/init-cont/index.html" && \
          echo "<h1>Welcome: init Container Works! </h1>" > "/init-cont/index.html"
          sleep 30
      volumeMounts:
        - name: myvol
          mountPath: "/init-cont"
  volumes:
    - name: myvol
      emptyDir: {}
---
### Optional
apiVersion: v1
kind: Service
metadata:
  name: svc-init-container
spec:
  selector:
    app: init-pod-example
  ports:
    - port: 80
  type: LoadBalancer
This YAML definition creates a Pod with an init container and a Service that routes traffic to the Pod. The init container creates a directory and a dummy index.html file in a shared volume, and the main container serves the contents of that volume on port 80. The Service svc-init-container routes traffic to the Pod based on the label selector, so that you can deploy the init container Pod and check the output.
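To try it yourself, apply the manifest and check the output; the commands below assume you saved it as init-pod-example.yml (the file name is just an example):
kubectl apply -f init-pod-example.yml
kubectl get pod init-pod-example        # wait until the STATUS shows Running
kubectl get svc svc-init-container      # note the EXTERNAL-IP of the LoadBalancer
curl http://<EXTERNAL-IP>               # should return the page written by the init container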
Sidecar Pattern:
Sidecar containers are secondary containers that run alongside the main application container within the same Pod. They are used to enhance or extend the functionality of the main application container. A sidecar can be used for logging, monitoring, security, data synchronization, and similar purposes, taking cross-cutting responsibilities away from the main application.
- Sidecars can be combined with Init Containers.
- Sidecar containers share the same network and storage namespaces with the primary container. This co-location allows them to interact closely and share resources.
The Pod here serves a sample index.html page from a GitHub repo and refreshes the content whenever the file is updated in the remote repo. Let us take a look:
# Update the version in git repo and refresh html without redeploying the changes
apiVersion: v1
kind: Pod
metadata:
  name: git-sync
  labels:
    app: sidecar-pods-example
spec:
  containers:
    - name: ctr-webpage
      image: nginx
      volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/
    - name: ctr-sync
      image: k8s.gcr.io/git-sync:v3.1.6
      volumeMounts:
        - name: html
          mountPath: /tmp/git
      env:
        - name: GIT_SYNC_REPO
          value: https://github.com/jstobigdata/sample-html.git
        - name: GIT_SYNC_BRANCH
          value: main
        - name: GIT_SYNC_DEPTH
          value: "1"
        - name: GIT_SYNC_DEST
          value: "html"
  volumes:
    - name: html
      emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: svc-sidecar
spec:
  selector:
    app: sidecar-pods-example
  ports:
    - port: 80
  type: LoadBalancer
- The container ctr-webpage runs the nginx image and mounts a volume named html at /usr/share/nginx/. So when you visit the URL, it serves the index.html from the html volume.
- The container ctr-sync runs the k8s.gcr.io/git-sync:v3.1.6 image, which pulls a git repository into the local html volume. Whenever you update the remote file, this container pulls the new content.
- The Service, named svc-sidecar, exposes the git-sync Pod on port 80.
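To see the sidecar in action, apply the manifest and follow the sync container's logs; the file name git-sync-pod.yml below is just an example:
kubectl apply -f git-sync-pod.yml
kubectl logs git-sync -c ctr-sync -f    # follow the git-sync sidecar as it pulls the repo
kubectl get svc svc-sidecar             # note the EXTERNAL-IP and open it in a browser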
Adapter Pattern:
The adapter pattern helps enable communication between two incompatible interfaces. An adapter container is placed between the two components to facilitate the interaction. For example, you may need to run a legacy app that only exposes a SOAP API. You could deploy an adapter that exposes a REST API interface and translates requests to the SOAP API for the legacy app.
Let us say we have a legacy SOAP service that we need to make work with clients that speak modern JSON. The conversion script referenced below would take the captured JSON data ($body) and convert it into a valid SOAP request body for the specific legacy service. The implementation depends on the structure of your JSON data and the SOAP service requirements; you can use tools like jq for parsing JSON and libraries like python-zeep for constructing SOAP requests in Python.
# Included from /etc/nginx/conf.d/, which nginx already wraps in the http context
upstream soap_service {
  server legacy-soap-service:8080;  # Replace with actual SOAP service address
}

server {
  listen 80;

  location /soap {
    # nginx does not allow nested "if" blocks, so combine the two
    # conditions (POST method and JSON content type) into one flag
    set $json_post "";
    if ($request_method = POST) {
      set $json_post "P";
    }
    if ($content_type = application/json) {
      set $json_post "${json_post}J";
    }
    if ($json_post = "PJ") {
      # Capture the JSON request body
      set $body '$request_body';
      # Convert JSON to SOAP request using custom logic (replace with actual conversion script)
      rewrite ^ /convert_json_to_soap break;
    }

    # Proxy SOAP requests to the legacy service
    proxy_pass http://soap_service$uri;
  }
}
Create a ConfigMap named adapter-config from the nginx-adapter.conf file content:
kubectl create configmap adapter-config --from-file=nginx-adapter.conf
- Your web application sends JSON requests to the Pod’s exposed port (80).
- The Nginx container intercepts these requests and checks for the JSON content type.
- If the content type is valid, the JSON body is captured and passed to the conversion script (implemented in convert_json_to_soap.sh).
- Nginx then proxies the constructed SOAP request to the legacy service.
apiVersion: v1
kind: Pod
metadata:
  name: my-app-with-adapter
spec:
  containers:
    - name: my-app
      image: my-app-image:latest
      ports:
        - containerPort: 8080
    - name: nginx-adapter
      image: nginx:latest
      volumeMounts:
        - name: adapter-config
          mountPath: /etc/nginx/conf.d/adapter.conf
          subPath: nginx-adapter.conf  # mount only the config file, not a whole directory
      ports:
        - containerPort: 80
  volumes:
    - name: adapter-config
      configMap:
        name: adapter-config  # ConfigMap containing nginx-adapter.conf
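To exercise the adapter locally, you can port-forward to the Pod and send a JSON request; the payload here is purely illustrative:
kubectl port-forward pod/my-app-with-adapter 8080:80
# in a second terminal, send an example JSON request to the adapter
curl -X POST -H "Content-Type: application/json" -d '{"customerId": 42}' http://localhost:8080/soap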
Ambassador Pattern – Proxy
The ambassador pattern uses a proxy container to handle inbound requests before forwarding them to the main application container. This allows the proxy to handle things like authentication, caching, load balancing, and rate limiting before the requests reach the application.
We will use Nginx as a proxy to expose a Java application on port 80. First, create the nginx-ambassador.yml file as below.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  nginx.conf: |
    user nginx;
    worker_processes 3;
    error_log /var/log/nginx/error.log;
    events {
      worker_connections 10240;
    }
    http {
      log_format main
        'remote_addr:$remote_addr\t'
        'time_local:$time_local\t'
        'method:$request_method\t'
        'uri:$request_uri\t'
        'host:$host\t'
        'status:$status\t'
        'bytes_sent:$body_bytes_sent\t'
        'referer:$http_referer\t'
        'useragent:$http_user_agent\t'
        'forwardedfor:$http_x_forwarded_for\t'
        'request_time:$request_time';
      access_log /var/log/nginx/access.log main;
      include /etc/nginx/virtualhost/virtualhost.conf;
    }
  virtualhost.conf: |
    upstream spring-petclinic {
      server localhost:8080;
      keepalive 1024;
    }
    server {
      listen 80 default_server;
      access_log /var/log/nginx/app.access_log main;
      error_log /var/log/nginx/app.error_log;
      location / {
        proxy_pass http://spring-petclinic;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
      }
    }
Now create the configmap as below:
kubectl apply -f nginx-ambassador.yml
Next, create the myapp-ambassador.yml file below and create the Kubernetes Deployment using kubectl apply -f myapp-ambassador.yml.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: spring-petclinic
          image: dockerbikram/spring-petclinic:3.2.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /etc/nginx  # mount the nginx-conf volume at /etc/nginx
              readOnly: true
              name: nginx-conf
            - mountPath: /var/log/nginx
              name: log
      volumes:
        - name: nginx-conf
          configMap:
            name: nginx-conf  # place ConfigMap `nginx-conf` under /etc/nginx
            items:
              - key: nginx.conf
                path: nginx.conf
              - key: virtualhost.conf
                path: virtualhost/virtualhost.conf  # creates the nested virtualhost directory
        - name: log
          emptyDir: {}
- This YAML uses a Deployment controller to create a Pod with two containers: spring-petclinic, your Java application container, and nginx, the container acting as the Ambassador proxy.
- The volumeMounts section maps the nginx-conf ConfigMap, containing the nginx.conf and virtualhost.conf files, into the Nginx container's configuration directory (/etc/nginx).
- The ports section defines the exposed port (80) for the Nginx container, which acts as the entry point for external communication.
Additionally, you can create a Service to expose port 80 to the outside world and access the app. Save the manifest below as svc-ambassador.yml and run kubectl apply -f svc-ambassador.yml.
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: nginx
Now run kubectl get svc nginx to find the external IP address. Using that IP, you can test the app from a browser, for example as shown below.
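The IP address below is only a placeholder for whatever EXTERNAL-IP your cluster assigns:
kubectl get svc nginx        # read the EXTERNAL-IP column
curl http://203.0.113.10/    # placeholder IP; the request passes through the Nginx ambassador to spring-petclinic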
Managing Resources for Multi-Container Pods
CPU and Memory
When you have multiple containers in a single pod, Kubernetes allows you to allocate CPU and memory resources to each container. This ensures that no single container can dominate the resources and starve the other containers in the pod.
For example, say you have a pod with two containers: a web server and a database. The web server may need more CPU to handle requests, while the database needs more memory. You can specify resource requests and limits for each container to make sure they get the resources they need.
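As a minimal sketch, the Pod spec below shows per-container requests and limits; the container names, images, and numbers are illustrative, not recommendations:
apiVersion: v1
kind: Pod
metadata:
  name: web-and-db             # hypothetical example Pod
spec:
  containers:
    - name: web
      image: nginx
      resources:
        requests:              # the scheduler guarantees at least this much
          cpu: "500m"
          memory: "128Mi"
        limits:                # CPU is throttled and memory is OOM-killed above this
          cpu: "1"
          memory: "256Mi"
    - name: db
      image: redis             # stand-in for a database container
      resources:
        requests:
          cpu: "250m"
          memory: "512Mi"
        limits:
          cpu: "500m"
          memory: "1Gi"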
Storage
In addition to CPU and memory, you’ll want to consider storage for your multi-container pods. Each container has its own ephemeral storage, which means storage that exists for the lifetime of the container. If you want data to persist when containers restart, you’ll need to use persistent storage options like:
- Volume mounts: Mount a Kubernetes volume into one or more containers so they can read/write data. The volume will persist even when containers restart.
- PersistentVolumeClaims: Request storage from a PersistentVolume and mount the claim as a volume in your pod.
With persistent storage, you can have one container write data and other containers read that data, even if they restart at different times.
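For example, a single PersistentVolumeClaim can be mounted into both containers of a Pod; the claim name, size, and busybox commands below are assumptions for illustration:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data            # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pvc-sharing-example
spec:
  containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "while true; do date >> /data/log.txt; sleep 5; done"]
      volumeMounts:
        - name: data
          mountPath: /data
    - name: reader
      image: busybox
      command: ["sh", "-c", "touch /data/log.txt && tail -f /data/log.txt"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: shared-data  # the data outlives individual container restarts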
Sharing data between containers
There are a few ways containers in a pod can share data with each other:
- Volume mounts: As mentioned, you can mount the same Kubernetes volume in multiple containers to share data.
- Environment variables: You can define environment variables in your pod and the variables will be available to all containers. Use this to pass small amounts of data between containers.
- Ports: Containers in a pod share the same network namespace, so they can communicate over localhost. You can have one container expose a port that another container connects to, as shown in the sketch after this list.
- tmpfs mounts: You can mount an emptyDir volume, which is backed by tmpfs (RAM), into multiple containers. This allows the containers to share data through the volume, with the data being lost when the pod is removed.
- Sidecar containers: Add a “sidecar” container to your pod solely for the purpose of transferring data between the main containers. The sidecar can expose ports or volumes to share data with the main containers.
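Here is a minimal sketch of the localhost case; the images and poll interval are arbitrary choices:
apiVersion: v1
kind: Pod
metadata:
  name: localhost-sharing-example
spec:
  containers:
    - name: web
      image: nginx               # listens on port 80 inside the pod
    - name: poller
      image: busybox
      # reaches the nginx container via localhost because both containers
      # share the pod's network namespace
      command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 > /dev/null && echo 'web is up'; sleep 10; done"]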
By properly managing resources and enabling data sharing between containers, you can run efficient multi-container pods in Kubernetes.
Debugging Issues in Multi-Container Pods
Once you start running multi-container pods, you’re bound to run into issues now and then. Here are some tips for debugging pods with multiple containers. To start, check the logs of each container individually.
Use the kubectl logs command, specifying the container name:
kubectl logs my-pod -c container1
kubectl logs my-pod -c container2
This will show you the stdout of each container, which often contains useful error messages or warnings.
You can also exec into a running container to investigate further. Again, specify the container name:
kubectl exec -it my-pod -c container1 -- bash
This will drop you into a bash shell in the container, where you can check filesystems, environment variables, and run any debugging commands.
If a pod is crashing altogether, view the events to see if Kubernetes has logged any useful information:
kubectl get events
Look for events related to your pod. Often Kubernetes will log events indicating a container exited with a non-zero code, or a container could not start. This can point you to the source of the issue.
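If the event list is noisy, you can narrow it down to the Pod in question (my-pod is a placeholder name); kubectl describe shows the same events together with container states and restart counts:
kubectl get events --field-selector involvedObject.name=my-pod
kubectl describe pod my-pod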
As a last resort, you may need to delete the pod altogether to force Kubernetes to recreate it. This will restart all containers with a clean slate, which can fix issues caused by transient errors or corrupt state. To delete a pod, run:
kubectl delete pod my-pod --force
If the pod is managed by a controller such as a Deployment, Kubernetes will then recreate it according to the pod spec, with all fresh containers. Between the container logs, execing into containers, checking events, and restarting pods, you have a good set of tools for debugging those tricky multi-container pods! Stay patient and keep inspecting; the solution is in there somewhere.
Conclusion:
So that’s the lowdown on using multi-container pods in Kubernetes! You now have the basics to start running multiple containers together and managing their resources as a single unit. While it may seem complex at first, stick with it – pods unlock a ton of power and flexibility. Keep playing around, break some stuff, learn what works best for your apps. Before you know it, you’ll be deploying and scaling multi-container workloads like a pro.
Do not jump into using multi-container Pods in production without consulting your architects. I suggest sticking to single-container Pods as far as possible.