So far you have learned how to create a single-container Pod and a multi-container Pod. To make these applications reachable over the network, we need Services. In this article, we will learn the different types of Services and how to use them.
By default, any Pod you create can only be accessed from within the cluster. A Service lets you expose your application as a network service, either to other workloads in the cluster or to the outside world.
Do not confuse Kubernetes Services with operating system services. K8s Services are specifically designed for network access.
What is a Service in Kubernetes?
Kubernetes Services are resources that map network traffic to the Pods deployed in a cluster. Whenever you want to expose a set of Pods over the network, whether within your cluster or externally, you create a Service.
The internals of Kubernetes networking models can be complex, and they differ across cloud platforms. Services abstract away this underlying complexity and provide a simple, high-level API.
Why Learn Services in Kubernetes?
- Service Discovery: In a dynamic and distributed environment like Kubernetes, where Pods (containers) come and go frequently, Service Discovery is crucial. Services provide a stable endpoint that other Pods can use to access your application, regardless of the underlying Pod IP addresses.
- Load Balancing: Kubernetes Services automatically distribute incoming traffic across multiple backend Pods that are part of the Service. This load balancing ensures high availability, scalability, and efficient utilization of resources by distributing traffic evenly.
- Pod Scaling and Resilience: Kubernetes allows you to scale your application horizontally by adding or removing Pods dynamically based on demand. Services ensure that traffic is directed to the available Pods, even as they scale up or down, thereby maintaining application availability and resilience.
- Abstraction of Pod Details: Services abstract the underlying Pod details, such as IP addresses and individual Pod lifecycles, from consumers of the service. This abstraction simplifies application development and deployment, as clients can interact with the service without needing to know the specific Pod instances.
- Internal and External Communication: Kubernetes Services facilitate communication both within the cluster (internal services) and with external systems or clients (external services). Internal services enable inter-service communication, while external services allow exposing applications to the internet or connecting to external resources.
- Support for Multiple Service Types: Kubernetes supports various service types, such as ClusterIP, NodePort, LoadBalancer, and ExternalName, each serving different use cases. These service types provide flexibility in how applications are exposed and accessed within and outside the cluster.
Types of Services in Kubernetes:
No matter which type of Service you deploy, it ultimately forwards network requests to a set of Pods. Depending on your needs, you can choose from the following types:
- ClusterIP Services
- NodePort Services
- LoadBalancer Services
- ExternalName Services
- Headless Services
In practical use cases, you will often end up using an Ingress controller to expose your services. Ingress is not discussed here.
Automatic DNS for Services:
Kubernetes automatically enables DNS for Services through its service discovery system, providing a reliable way for Pods and other resources within the cluster to communicate with Services using DNS names.
- Automatic DNS Assignment: Each Service created in Kubernetes is automatically assigned a DNS A or AAAA record. The DNS name follows the format `<service-name>.<namespace-name>.svc.<cluster-domain>`. For example, a Service named `demo` in the `default` namespace of a `cluster.local` cluster is reachable at `demo.default.svc.cluster.local`.
- Namespace Isolation: The DNS names are namespaced, meaning that Services in different namespaces have unique DNS names. This allows for isolation and prevents naming conflicts between Services in different namespaces.
- Cluster Domain Configuration: The cluster domain (e.g., `cluster.local`) is configurable and can be set during Kubernetes cluster installation or configuration. It is typically set to a domain that is internal to the cluster, ensuring that DNS resolution occurs within the cluster's network.
- Reliable In-Cluster Networking: With DNS-enabled Services, Pods and other resources within the Kubernetes cluster can reliably communicate with Services using their DNS names. This eliminates the need to look up Service IP addresses manually and simplifies intra-cluster networking.
- Consistent Access: Using DNS names for Service access ensures consistency and reliability, even as Pods are scaled up or down, and Service endpoints change dynamically. Clients can rely on the DNS names to access Services without needing to worry about the underlying IP addresses.
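A quick way to see this DNS resolution in action is a one-off Pod running `nslookup`. A minimal sketch, assuming a Service named `demo` exists in the `default` namespace of a `cluster.local` cluster:

```yaml
# Hypothetical one-off Pod that resolves a Service DNS name from inside
# the cluster; the Service name "demo" is an assumption from the example above.
apiVersion: v1
kind: Pod
metadata:
  name: dns-check
spec:
  restartPolicy: Never
  containers:
    - name: dns-check
      image: busybox:1.36
      command: ["nslookup", "demo.default.svc.cluster.local"]
```

Once the Pod completes, check the result with `kubectl logs dns-check`, then clean up with `kubectl delete pod dns-check`.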
1. ClusterIP Services in Kubernetes
In Kubernetes, a ClusterIP Service is a type of service that exposes an application internally within the cluster. It allocates a virtual IP address (ClusterIP) that other components within the Kubernetes cluster can use to communicate with the service. ClusterIP Services are primarily used for inter-service communication, allowing different parts of an application to communicate with each other without exposing the service to the outside world.
- Internal Service Exposure: ClusterIP Services are designed to expose applications internally within the Kubernetes cluster. They are accessible only from within the cluster and cannot be accessed from outside the cluster’s network.
- Stable Virtual IP Address: When you create a ClusterIP Service, Kubernetes assigns it a stable virtual IP address (ClusterIP) that serves as the endpoint for accessing the service. This ClusterIP is used by other Pods and resources within the cluster to communicate with the service.
- Inter-Service Communication: ClusterIP Services are commonly used for inter-service communication within a microservices architecture. For example, if you have multiple microservices running in your cluster, each microservice can communicate with other microservices by accessing their ClusterIP Services.
- Load Balancing: Behind the scenes, Kubernetes manages load balancing for ClusterIP Services by distributing incoming traffic across the Pods that belong to the service. This ensures that requests are evenly distributed and that each Pod receives a fair share of traffic.
- Automatic DNS Integration: Kubernetes automatically assigns a DNS name to each ClusterIP Service in the format `<service-name>.<namespace-name>.svc.<cluster-domain>`. This DNS name allows other components within the cluster to resolve the service's IP address using DNS, simplifying service discovery.
1. You can create a ClusterIP Service as shown below.
```yaml
### ClusterIP Service
apiVersion: v1
kind: Service
metadata:
  name: mariadb-service
spec:
  type: ClusterIP
  selector:
    app: mariadb
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
```
2. For the above example to work, let us create a PersistentVolumeClaim:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```
3. MariaDB needs a root password. Make sure you use a Base64-encoded password:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mariadb-secret
type: Opaque
data:
  root-password: TWFyaWFEQlBhc3N3b3JkQGFjYyEwTg==
```
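The encoded value above can be produced with the standard `base64` utility. Note the `-n` flag on `echo`, which prevents a trailing newline from being encoded into the Secret (the plaintext shown is simply the decoded example value):

```shell
# Encode a password for the Secret's `data` field.
echo -n 'MariaDBPassword@acc!0N' | base64
# → TWFyaWFEQlBhc3N3b3JkQGFjYyEwTg==

# Decode it back to verify.
echo -n 'TWFyaWFEQlBhc3N3b3JkQGFjYyEwTg==' | base64 -d
# → MariaDBPassword@acc!0N
```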
4. Create the Deployment file:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mariadb-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mariadb
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
        - name: mariadb
          image: mariadb:latest
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mariadb-secret
                  key: root-password
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: mariadb-data
              mountPath: /var/lib/mysql
      volumes:
        - name: mariadb-data
          persistentVolumeClaim:
            claimName: mariadb-pvc
```
Apply the YAML files: Save the YAML configurations in separate files (e.g., `mariadb-pvc.yaml`, `mariadb-secret.yaml`, `mariadb-deployment.yaml`, `mariadb-service.yaml`) and apply them to your Kubernetes cluster using `kubectl apply -f <file.yaml>`.
```shell
$ kubectl apply -f mariadb-pvc.yaml
persistentvolumeclaim/mariadb-pvc created
$ kubectl apply -f mariadb-secret.yaml
secret/mariadb-secret created
$ kubectl apply -f mariadb-deployment.yaml
deployment.apps/mariadb-deployment created
$ kubectl apply -f mariadb-service.yaml
service/mariadb-service created
$ kubectl get svc
NAME              TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
kubernetes        ClusterIP   10.1.240.1     <none>        443/TCP    5h23m
mariadb-service   ClusterIP   10.1.249.125   <none>        3306/TCP   113s
```
Once deployed, you can access the MariaDB service internally within the cluster using the `mariadb-service` ClusterIP. Other applications within the cluster can connect to MariaDB using the ClusterIP (or the Service DNS name) and port `3306`. NOTE: Applications outside the cluster cannot access this service.
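As a sketch of such an in-cluster consumer, a hypothetical client Deployment (the `demo-app` name and its image are placeholders) can pass the Service name to the application as configuration:

```yaml
# Hypothetical client workload that reaches MariaDB via the Service name.
# Within the same namespace the short name "mariadb-service" resolves via DNS.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: your-app:latest   # placeholder image
          env:
            - name: DB_HOST
              value: mariadb-service   # Service name, not a Pod IP
            - name: DB_PORT
              value: "3306"
```

The application never needs to know individual Pod IPs; the Service keeps `DB_HOST` stable as Pods come and go.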
2. NodePort Services in Kubernetes
NodePort Services in Kubernetes provide a way to expose your service externally through a specified static port binding on each of your cluster’s Nodes. While NodePort Services have their uses, they come with functional limitations and potential security risks. Let’s explore their characteristics, use cases, and considerations in more detail:
Characteristics of NodePort Services:
- External Accessibility: NodePort Services expose your service externally by binding to a static port on each Node in your cluster. This allows external clients to access the service by connecting to any Node’s IP address and the specified port.
- Cluster IP Address: Like ClusterIP Services, NodePort Services are also assigned a cluster IP address. This IP address can be used to reach the service from within the cluster, providing internal accessibility similar to ClusterIP Services.
Considerations and Limitations:
- Security Risks: NodePort Services can pose security risks as anyone who can connect to the port on your Nodes can potentially access the service. This exposes your service to external threats and unauthorized access if proper security measures are not in place.
- Port Conflicts: Each port number can only be used by one NodePort Service at a time to prevent conflicts. This limitation can become challenging to manage, especially in larger clusters with multiple services.
- Default Node Listening: Every Node in your cluster has to listen to the specified port by default, even if they’re not running a Pod associated with the NodePort Service. This can lead to resource consumption and potential overhead on Nodes.
- Manual Load Balancing: NodePort Services do not provide automatic load balancing. Clients are served by the Node they connect to, which may result in uneven distribution of traffic and potential performance issues.
Use Cases and Best Practices:
- Custom Load Balancing: NodePort Services can be useful for facilitating the use of custom load-balancing solutions that reroute traffic from outside the cluster. This allows you to implement your own load-balancing mechanisms tailored to your specific requirements.
- Temporary Debugging and Development: NodePort Services can also be convenient for temporary debugging, development, and troubleshooting scenarios where you need to quickly test different configurations or access services from external environments.
With the help of an example, I will explain how to access the same MariaDB service from outside the cluster.
Use the `mariadb-deployment.yaml` from the ClusterIP Services section, then create `svc-nodeport-mariadb.yml` with a NodePort spec as shown below. It is important to understand these fields:
- port: The port that the Service exposes on its ClusterIP; this is how the service is reached from within the cluster.
- targetPort: The port that the Pods are listening on; this is the container port we want to connect to.
- nodePort: The port number that will be exposed on each Node in the cluster. It must be in the range 30000-32767, or you can omit it to let Kubernetes allocate a port from this range.
- protocol: The protocol used for communication, such as TCP or UDP.
```yaml
## deployment is in 03-svc-clusterip.yml
apiVersion: v1
kind: Service
metadata:
  name: svc-nodeport-mariadb
spec:
  type: NodePort
  ports:
    - port: 3306
      # By default and for convenience, the `targetPort` is set to
      # the same value as the `port` field.
      protocol: TCP
      targetPort: 3306 # Container Port
      # Optional field
      # By default and for convenience, the Kubernetes control plane
      # will allocate a port from a range (default: 30000-32767)
      nodePort: 31111 # Node Port
  selector:
    app: mariadb
# Reference: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
```
Now follow the commands below to deploy the service and get the Node IP addresses.

```shell
$ kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
mariadb-deployment-68b7fcd4cd-7sc5l   1/1     Running   0          12h
$ kubectl apply -f svc-nodeport-mariadb.yml
service/svc-nodeport-mariadb created
$ kubectl get service/svc-nodeport-mariadb -o wide
NAME                   TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE    SELECTOR
svc-nodeport-mariadb   NodePort   10.1.247.114   <none>        3306:31111/TCP   100s   app=mariadb
$ kubectl get nodes -o wide
NAME                                                  STATUS   ROLES    AGE     VERSION               INTERNAL-IP   EXTERNAL-IP
gke-k8s-training-clu-k8s-training-wor-f25d17e7-gvs0   Ready    <none>   5d15h   v1.27.8-gke.1067004   10.128.0.3    35.232.188.109
gke-k8s-training-clu-k8s-training-wor-f25d17e7-hw1h   Ready    <none>   5d15h   v1.27.8-gke.1067004   10.128.0.4    34.135.196.169
gke-k8s-training-clu-k8s-training-wor-f25d17e7-w53n   Ready    <none>   5d15h   v1.27.8-gke.1067004   10.128.0.5    35.226.195.137
```
EXTERNAL-IP is the public IP address of each Node. Now you can use any NodeIP:31111 to access MariaDB, for example 35.232.188.109:31111.
3. LoadBalancer Service in Kubernetes
In Kubernetes, a LoadBalancer Service exposes your application to the external world by automatically provisioning an external load balancer. This type of service is commonly used when you want to make your application accessible from the internet or other external networks.
- External Accessibility: LoadBalancer Services provide external accessibility to your application by automatically provisioning an external load balancer. This load balancer distributes incoming traffic across the backend Pods that are part of the service.
- Automatic Provisioning: When you create a LoadBalancer Service in Kubernetes, the Kubernetes control plane automatically requests and provisions a load balancer from the underlying cloud provider (such as AWS, GCP, or Azure). This load balancer is responsible for routing external traffic to the Pods that are part of the service.
- Public IP Address: The load balancer assigned to a LoadBalancer Service typically comes with a public IP address or DNS name, which clients can use to access your application from the internet. This public IP address serves as the entry point for external traffic into your Kubernetes cluster.
- Port Configuration: LoadBalancer Services allow you to specify the ports that your application will listen on and expose to the external world. You can define multiple ports and protocols (e.g., TCP, UDP) as needed for your application.
- Health Checks and Load Balancing: The external load balancer provided by the cloud provider typically performs health checks on the backend Pods to ensure they are healthy and capable of handling traffic. It also balances the incoming traffic across the healthy Pods to ensure efficient utilization of resources and high availability of the application.
- Dynamic Updates: LoadBalancer Services in Kubernetes support dynamic updates, meaning that you can modify the service configuration (e.g., add or remove backend Pods, change port configurations) without disrupting the external accessibility of your application. The load balancer automatically adjusts its routing rules based on the changes in the service configuration.
We will use the same mariadb-deployment.yaml, so I am assuming you have the MariaDB Pod running. To create a LoadBalancer service for the MariaDB deployment, you can define a service YAML configuration file with the appropriate specifications as shown below:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: svc-mariadb-loadbalancer
spec:
  selector:
    app: mariadb
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
  type: LoadBalancer
```
Follow the `kubectl` commands below to deploy the service and find the allocated public IP to access it from outside.

```shell
$ kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
mariadb-deployment-68b7fcd4cd-7sc5l   1/1     Running   0          12h
$ kubectl apply -f svc-mariadb-loadbalancer.yml
service/svc-mariadb-loadbalancer created
$ kubectl get service/svc-mariadb-loadbalancer
NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)          AGE
svc-mariadb-loadbalancer   LoadBalancer   10.1.249.120   34.27.95.160   3306:30942/TCP   55s
```
Just like you accessed MariaDB earlier using `NodeIP:NodePort`, you can now use `EXTERNAL-IP:PORT`, in this case `34.27.95.160:3306`.
LoadBalancer is the preferred way to expose a service to external traffic.
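As mentioned earlier, a LoadBalancer Service can expose multiple ports and protocols. A minimal sketch (the `web` app, ports, and names here are hypothetical; note that with multiple ports, each entry must be named):

```yaml
# Hypothetical multi-port LoadBalancer Service for a web application.
apiVersion: v1
kind: Service
metadata:
  name: svc-web-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - name: http            # port names are required when more than one port is defined
      protocol: TCP
      port: 80
      targetPort: 8080
    - name: https
      protocol: TCP
      port: 443
      targetPort: 8443
```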
4. ExternalName Services in Kubernetes
ExternalName Services in Kubernetes provide a way to map a Service to a DNS name. Unlike other types of Services that expose internal resources within the cluster, ExternalName Services act as an alias to an external resource located outside the Kubernetes cluster. They are particularly useful when you need to access services or resources that reside outside of your Kubernetes cluster, such as Cloud Managed Databases, APIs, or legacy systems.
- DNS Mapping: ExternalName Services map a Kubernetes Service to a DNS name (external name) rather than to a set of Pods. When a client within the cluster attempts to access the ExternalName Service, Kubernetes resolves the DNS name specified in the ExternalName Service to an IP address or another DNS name specified in the DNS system configured for the cluster.
- Accessing External Resources: ExternalName Services are primarily used to access external resources or services that are not running within the Kubernetes cluster. For example, you can use an ExternalName Service to provide a stable DNS alias for an external database service hosted outside of the cluster.
- Transparent Proxying: When a client within the Kubernetes cluster accesses the ExternalName Service, Kubernetes transparently proxies the request to the external resource by resolving the DNS name specified in the ExternalName Service. From the client’s perspective, it appears as though it is accessing a service within the cluster.
- No Load Balancing: Unlike other types of Services in Kubernetes, ExternalName Services do not perform load balancing or expose multiple backend instances. They simply serve as a DNS alias to a single external resource.
- Use Cases: Common use cases for ExternalName Services include integrating with external databases, accessing third-party APIs, connecting to legacy systems, and providing stable DNS names for external services that may change their IP addresses or locations over time.
Let us say we want to create an external name for the AWS Managed Aurora DB. You would define a service YAML configuration file with the appropriate specifications. Note that AWS Aurora is typically accessed using its endpoint DNS name provided by AWS rather than a custom DNS name. Therefore, you would create an ExternalName service that maps to the AWS Aurora endpoint DNS name. Here’s an example YAML configuration:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: training-aurora-svc
  #namespace: test
spec:
  type: ExternalName
  externalName: your-aurora-cluster.cluster-identifier.region.rds.amazonaws.com
```
Use the `kubectl` command to deploy it and see the details.

```shell
$ kubectl apply -f training-aurora-svc.yml
service/training-aurora-svc created
$ kubectl get service/training-aurora-svc
NAME                  TYPE           CLUSTER-IP   EXTERNAL-IP                                                        PORT(S)   AGE
training-aurora-svc   ExternalName   <none>       your-aurora-cluster.cluster-identifier.region.rds.amazonaws.com   <none>    97s
```
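An application Pod can then use the stable in-cluster alias instead of the raw AWS endpoint. A hypothetical sketch (the `aurora-client` Pod and its image are placeholders):

```yaml
# Hypothetical client Pod reaching Aurora through the ExternalName alias.
# Cluster DNS returns a CNAME for training-aurora-svc pointing at the AWS
# endpoint, so the app only needs the stable in-cluster name.
apiVersion: v1
kind: Pod
metadata:
  name: aurora-client
spec:
  containers:
    - name: app
      image: your-app:latest   # placeholder image
      env:
        - name: DB_HOST
          value: training-aurora-svc   # resolves via CNAME to the Aurora endpoint
        - name: DB_PORT
          value: "3306"
```

If the Aurora endpoint ever changes, only the Service's `externalName` needs updating; client configuration stays untouched.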
5. Headless Services in Kubernetes
Headless Services in Kubernetes are a type of service that does not allocate a cluster IP to the service itself. Unlike regular services, which provide a stable virtual IP address and load balancing for accessing a set of Pods, headless services are used when you don’t need load balancing or a single stable IP address. Instead, they allow direct communication with individual Pods that are part of the service.
- No Cluster IP: Headless Services do not allocate a cluster IP address. When you create a Headless Service, Kubernetes does not assign a virtual IP address to the service. Instead, DNS resolution is used to discover individual Pod IP addresses directly.
- DNS Resolution: Kubernetes automatically creates a DNS record for each Pod that is part of the Headless Service. These DNS records resolve to the IP addresses of the individual Pods. Clients can use DNS queries to discover and communicate directly with the Pods without going through a load balancer.
- Direct Pod Communication: Headless Services enable direct communication with individual Pods, bypassing the need for load balancing or proxying. This can be useful in scenarios where you require direct access to individual Pods for tasks such as distributed databases, peer-to-peer networking, or stateful applications.
- Service Discovery: Headless Services provide a mechanism for service discovery within the Kubernetes cluster. By querying the DNS records associated with the Headless Service, clients can dynamically discover and connect to the Pods that are part of the service, even as Pods are scaled up or down.
- Use Cases: Common use cases for Headless Services include stateful applications where each Pod represents a distinct instance or shard of the application (e.g., databases, message brokers), distributed systems where direct peer-to-peer communication is required, and scenarios where you need to bypass the overhead of load balancing or proxying for improved performance or control.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
spec:
  #type: ClusterIP
  clusterIP: None
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
```
NOTE: It is a ClusterIP-type Service with the `clusterIP` field set to `None`. This allows clients to communicate directly with individual Pods instead of going through a load-balanced virtual IP.
DNS ENTRY: The DNS name for a Pod behind a headless Service follows the format `<pod-name>.<headless-service-name>.<namespace>.svc.cluster.local`. For example, for a Pod named `pod-1` in the `default` namespace and a headless Service named `headless-svc`, the DNS entry would be:
`pod-1.headless-svc.default.svc.cluster.local`
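Per-Pod DNS names like this are typically used with StatefulSets, whose Pods get stable, ordinal names. A minimal sketch (the `myapp` StatefulSet and its image are assumptions) paired with the `headless-svc` defined above:

```yaml
# Hypothetical StatefulSet whose replicas get stable DNS entries such as
# myapp-0.headless-svc.default.svc.cluster.local via the headless Service.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: headless-svc   # must match the headless Service's name
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: nginx:1.25   # placeholder image
          ports:
            - containerPort: 8080
```

Here the Pods are named `myapp-0` and `myapp-1`, and each is individually addressable, which is exactly what databases and other peer-aware stateful systems need.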
Conclusion:
In conclusion, Kubernetes offers a versatile set of service types to facilitate networking and access control within a cluster, each catering to different use cases and requirements:
- ClusterIP: Ideal for internal communication between Pods within the cluster, ClusterIP services provide a stable virtual IP address and load balancing, ensuring reliable communication and high availability of backend services.
- NodePort: NodePort services expose applications externally by allocating a static port on each Node in the cluster. While useful for development and debugging purposes, NodePort services may not be suitable for production deployments due to security risks and limitations.
- LoadBalancer: LoadBalancer services are designed for external access to applications, automatically provisioning a cloud load balancer to distribute incoming traffic across backend Pods. This type of service is well-suited for production environments requiring high availability and scalability.
- ExternalName: ExternalName services act as aliases to external resources outside the Kubernetes cluster, providing a stable DNS mapping for accessing third-party services or legacy systems. They enable seamless integration with external dependencies without exposing internal cluster details.
- Headless: Headless Services in Kubernetes provide a unique approach to service discovery and communication within a cluster, offering direct access to individual Pods without the need for load balancing or a stable virtual IP address.
By leveraging these service types effectively, Kubernetes users can orchestrate complex microservices architectures, facilitate seamless communication between components, and ensure robust external access to applications, thereby empowering scalable and resilient containerized deployments. Each service type offers its unique advantages and considerations, allowing Kubernetes users to tailor their networking strategies to meet the specific needs of their applications and infrastructure.