anntz.com

networking basics in kubernetes

each pod within a kubernetes cluster gets its own IP address.

for this, each pod is assigned an IP address from the cluster's pod CIDR block (commonly 10.244.0.0/16).

kubernetes expects us to configure networking in such a way that:

  • all pods can communicate with one another without NAT.
  • all nodes can communicate with pods and vice versa without NAT.
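this flat-network requirement is typically met by giving each node its own slice of the cluster's pod CIDR. a rough, plugin-agnostic sketch of that allocation using python's ipaddress module (the /16 and /24 sizes are just illustrative defaults, similar to what flannel uses):

```python
import ipaddress

# the cluster-wide pod CIDR mentioned above (illustrative value)
cluster_cidr = ipaddress.ip_network("10.244.0.0/16")

# carve out one /24 subnet per node, as plugins like flannel do by default
node_subnets = list(cluster_cidr.subnets(new_prefix=24))

print(node_subnets[0])  # subnet for node 0: 10.244.0.0/24
print(node_subnets[1])  # subnet for node 1: 10.244.1.0/24

# every pod IP on any node falls inside the cluster CIDR,
# so pods can address one another directly, without NAT
pod_ip = ipaddress.ip_address("10.244.1.7")
print(pod_ip in cluster_cidr)  # True
```

because every node's subnet is carved from the same block, routing between pods never needs address translation.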

to address this requirement, most kubernetes networking solutions support this out of the box. these include:

  • calico
  • cisco ACI networks
  • vmware NSX
  • cilium
  • flannel
  • big cloud fabric

service

we know that each pod within a kubernetes cluster has its own ip address. but pods are ephemeral, and so are these ip addresses. because of this, it's impractical to use pod ip addresses directly in our application, as we would have to update the ip address with each new spin-up of a pod.

to tackle this, we need a stable ip address in front of these pods. this is where services come into play.

  1. stable ip address: we can set up a service that provides a stable ip address for accessing a pod (or group of pods).
  2. loadbalancing: if a client wants to access a service, the client communicates with the service, which routes the request to any available replica pod.
  3. loose coupling: creates a good abstraction for loose coupling, both within and outside the cluster.

these features enable communication between various components, both inside and outside of the application.

tldr; it enables communication

aside from the underlying infrastructure, kubernetes services enable communication between various components within and outside the application. services help applications communicate with other applications or users.

let’s get into some of the common services in kubernetes:

NodePort

provides external access to the application by mapping port on the node -> port on the pod.

  • exposes a port on the node to access an internal pod.
  • it allows traffic from the outside world and re-routes it to the respective pod.
  • since it's a service, it has its own IP address (ClusterIP).
  • nodePort range: 30000 - 32767
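the 30000-32767 range is the kube-apiserver's default allowed range for nodePort values. a quick sketch of that validation rule (a hypothetical helper, not actual kubernetes code):

```python
# default NodePort range enforced by the kube-apiserver
NODE_PORT_MIN, NODE_PORT_MAX = 30000, 32767

def is_valid_node_port(port: int) -> bool:
    """Return True if the port falls inside the default NodePort range."""
    return NODE_PORT_MIN <= port <= NODE_PORT_MAX

print(is_valid_node_port(30008))  # True  (the port used in the manifest below)
print(is_valid_node_port(8080))   # False (outside the range)
```

picking a port outside this range makes the api server reject the service definition.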

service.yaml

apiVersion: v1
kind: Service
metadata:
  name: myapp-service

spec:
  type: NodePort
  ports:
    - targetPort: 80
      port: 80
      nodePort: 30008
  selector: # links to the pod with the labels
    app: myapp
    type: frontend

for this example, let’s assume we have a pod with this definition: pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
    type: frontend
spec:
  containers:
    - name: nginx-container
      image: nginx

after applying our service using kubectl apply -f service.yaml, we can check the available services using kubectl get services

we can send a GET request to the pod through the node, e.g. curl http://<node-ip>:30008 (note that we use the node's IP address here, not the pod's).

This is how we mapped a service to a single pod. But in production, we might run multiple instances of the application for high availability and load balancing. These replicated instances share the same labels, so the NodePort service automatically load-balances requests across them using a random algorithm. That's a load balancer out of the box.
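A toy simulation of that behaviour, in plain python, just to illustrate the idea of randomly spreading requests across replicas (the pod IPs are made up):

```python
import random
from collections import Counter

# pretend endpoints of three replica pods that share the same labels
endpoints = ["10.244.1.2", "10.244.2.3", "10.244.3.4"]

def route() -> str:
    """Pick a backend pod at random, like the service's default behaviour."""
    return random.choice(endpoints)

# send 1000 simulated requests and count where they land
hits = Counter(route() for _ in range(1000))

# every replica receives a share of the traffic
print(all(hits[ep] > 0 for ep in endpoints))  # True
```

No replica needs to be registered anywhere by hand: any pod whose labels match the selector is picked up as an endpoint automatically.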

What if the pods are distributed across multiple nodes? In this case, kubernetes automatically spans the service across all the nodes and re-routes our traffic accordingly, no matter which node the request is sent to! OUT OF THE BOX 🤯.

ClusterIP

if the service type isn’t defined, it defaults to the ClusterIP service

  • creates a virtual ip inside the cluster to enable communication between different services
  • exposes a set of pods to other objects in the cluster
  • these ip addresses cannot be accessed from outside the cluster. to achieve that, we can set up an ingress to route traffic from outside the cluster to the respective pods and vice versa.
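under the hood, the service picks its pods by label selector: a pod matches when its labels contain every key/value pair listed in the selector. a minimal sketch of that matching rule (simplified; real selectors also support expressions):

```python
def matches(selector: dict, pod_labels: dict) -> bool:
    """A pod matches when every selector key/value pair appears in its labels."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

selector = {"app": "myapp", "type": "backend"}

# extra labels on the pod don't matter, only the selector's pairs must match
print(matches(selector, {"app": "myapp", "type": "backend", "env": "prod"}))  # True
print(matches(selector, {"app": "myapp", "type": "frontend"}))                # False
```

this is why the selector in the manifest below must line up with the labels in the pod definition that follows it.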

service.yaml

apiVersion: v1
kind: Service
metadata:
  name: backend

spec:
  type: ClusterIP
  ports:
    - targetPort: 80
      port: 80
  selector:
    app: myapp
    type: backend

we can also create a service using an imperative command: kubectl create service clusterip backend --tcp=80:80

to expose a pod, we can use the command kubectl expose pod httpd --port=80 --target-port=80

for this example, let’s assume we have a pod with this definition: pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
    type: backend
spec:
  containers:
    - name: nginx-container
      image: nginx

upon creating the service using kubectl create -f service.yaml, we can see the backend service running using kubectl get services.

LoadBalancer

Let’s move on to the user-facing service. Suppose we have two backend services, Form Service & Report Service, replicated across multiple pods and multiple nodes. How can we access these services from the frontend/client application? We need a way to route requests to the respective services and also balance load across their replicas.

For this, we could set up an entirely separate service like nginx and configure routing to the respective nodes. But what if we could configure this within kubernetes itself? That would make our workflow more efficient. With kubernetes, we can create a load balancer using the native load balancer provided by popular cloud platforms like AWS, Azure, GCP, etc.

apiVersion: v1
kind: Service
metadata:
  name: myapp-service

spec:
  type: LoadBalancer
  ports:
    - targetPort: 80
      port: 80
      nodePort: 30008