
Kubernetes Cluster

a set of nodes that run containerized applications.

it consists of one master node and a number of worker nodes. the nodes can either be physical computers or virtual machines, depending on the cluster.

the master node is the origin of all tasks and controls the state of the cluster. it coordinates processes such as:

  • scheduling and scaling applications
  • maintaining the cluster’s state
  • applying updates to reach the desired state

there must be a minimum of one master node and one worker node for a Kubernetes cluster to be operational.
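as a quick sanity check (assuming kubectl is already configured to talk to the cluster), you can list the nodes and their roles:

kubectl get nodes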

Kubernetes Components

API Server

an interface to all Kubernetes resources.

etcd

distributed key-value store

  • manages the state data, config data and metadata of the kubernetes cluster
  • fully replicated across the nodes in the cluster
  • communicates with kubernetes through the API Server

kubelet

  • agent that runs on each node within a cluster
  • responsible for making sure containers are running as expected

container runtime

  • underlying software to run containers

controller

  • the brain of the cluster
  • responds to changes within the nodes
  • makes decisions to bring containers up or down

scheduler

  • distributes containers across multiple nodes
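most of these control-plane components run as pods in the kube-system namespace, so on a typical setup you can see them with:

kubectl get pods --namespace kube-system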

kubectl

it’s the primary CLI tool to deploy & manage applications in kubernetes clusters.
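a couple of basic commands to verify that kubectl can reach the cluster (nothing here is specific to any particular setup):

kubectl version        # client and server versions
kubectl cluster-info   # address of the control plane and core services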

Pods

  • smallest deployable units
  • containers are packaged to create pods
  • we can share resources such as networking, ports, volumes between containers in a pod.

a. Create a pod

kubectl run nginx --image nginx
## this will create a pod named nginx, pulling the nginx:latest image from Docker Hub.
  • Create a pod using a YAML configuration file. Suppose we have a YAML file named pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp

spec:
  containers:
    - name: nginx-container
      image: nginx

to create our resources using this YAML file, we can simply use the command:

kubectl create -f pod.yaml or kubectl apply -f pod.yaml

b. Show a list of pods

kubectl get pods

c. Show a detailed list of pods

kubectl get pods -o wide

d. Describe pods

kubectl describe pod [my-pod]

e. Delete a pod

kubectl delete pod [my-pod]

Replica Sets

an updated version of the Replication Controller. it helps run multiple instances of a single pod in a cluster. (high availability)

uses:

  1. high availability: ensures a specified number of pods is running at all times. the kubernetes controller automatically brings up a new pod when an existing one fails.

  2. load balancing & scaling: creates multiple pods to share the load among them. suppose we have a cluster with a single pod. if demand suddenly increases and we run out of resources, we can deploy additional pods within the same node or on a different node in the cluster. (a replica set spans multiple nodes in a cluster)

Replication Controller

a. Create a replication controller replication-controller.yaml

apiVersion: v1
kind: ReplicationController
metadata:
    name: my-rc
    labels:
        app: myapp
        type: frontend

spec:
    template:
        # pod-definition
        metadata:
            name: myapp-pod
            labels:
                app: myapp
                type: frontend-pod
        spec:
            containers:
                - name: nginx-container
                  image: nginx
    replicas: 3

let’s load our definition template: kubectl create -f replication-controller.yaml

b. get replication controllers: kubectl get replicationcontroller

we can also look into our pods using the command kubectl get pods. you’ll see that there are 3 new pods named my-rc-XXXXX, where XXXXX is added to uniquely identify the pods within the replication controller my-rc.

Replica Set

a. Create a replicaset replicaset.yaml

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
  labels:
    app: myapp
    type: frontend
spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: frontpod
    spec:
      containers:
        - name: nginx-container
          image: nginx
  replicas: 3
    # ++
  selector:
    matchLabels:
      type: frontpod
    # ++

Compared to a ReplicationController, we have a new field called selector. it is used to identify pods based on the labels provided within matchLabels. now, even if some pods are deployed ahead of the replicaset, the replicaset won’t replace the pods that were created before it, but will check whether the required number of pods exists and only create new ones if necessary.
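for instance (a minimal sketch with a hypothetical pod name), a pod created beforehand with the matching label type: frontpod counts toward the 3 desired replicas instead of being replaced:

apiVersion: v1
kind: Pod
metadata:
  name: existing-frontpod
  labels:
    type: frontpod
spec:
  containers:
    - name: nginx-container
      image: nginx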

b. Scaling our pods

  1. Suppose, in the event of huge traffic, we want to scale our pods up. we can do the following:
    • update our replicaset.yaml config with replicas: 6
    • apply the changes: kubectl replace -f replicaset.yaml
  2. kubectl scale --replicas=6 -f replicaset.yaml
  3. kubectl scale --replicas=6 replicaset myapp-replicaset

c. Editing: you can edit your replicasets using the command kubectl edit replicaset new-replica-set and restarting the pods.

deployments

let’s assume you deploy a simple web application. some of the scenarios you might think about:

  1. To make it scalable, you choose to deploy multiple instances of the same application.
  2. To make updates and deployments easy, you might want to set up a continuous deployment workflow with rolling updates (upgrade one instance after another without disturbing active users).
  3. You might want to set up a mechanism to roll back updates if required.
  4. You might make changes to your runtime environment, such as upgrading your server’s hardware, updating the web server application, updating resource allocation, etc. So your server needs to pause -> make changes -> resume.

this is where kubernetes deployments come into action. they provide us with the capability to:

  • upgrade the underlying instances seamlessly with rolling updates
  • undo changes
  • pause/resume changes as required.

let’s create a simple deployment config deployment.yaml

apiVersion: apps/v1
# ++++
kind: Deployment
# ++++
metadata:
    name: myapp-deployment
    labels:
        app: myapp
        type: frontend
spec:
    template:
        metadata:
            name: myapp-pod
            labels:
                app: myapp
                type: frontend
        spec:
            containers:
                - name: nginx-container
                  image: nginx
    replicas: 3
    selector:
        matchLabels:
            type: frontend

now, running kubectl create -f deployment.yaml will create our deployment, which will create our replicaset, which will create our pods.

view deployments

kubectl get deployments #get deployments
kubectl get replicaset #get replicasets
kubectl get pods #get pods

tip: to get all our kubernetes objects, we can use the command kubectl get all

updates and rollbacks

creating/updating deployments triggers a rollout. a new rollout creates a new deployment revision. in the future, if the application is updated, a new rollout is triggered and a new deployment revision is created.

this helps us keep track of the changes made in our deployment, and enables us to rollback to previous deployment version.

to view the status of the rollout kubectl rollout status deployment/deployment-1

deployment strategies

if you want to update your deployment, you have two options:

  • recreate strategy: destroy all running application instances and recreate them all at once. this creates downtime (when the application isn’t usable) during the deployment
  • rolling update strategy: take down the older version and bring up the newer version one by one. this is the default deployment strategy in kubernetes (see the config sketch below)
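as a rough sketch (the numbers are illustrative), the strategy is set under the deployment spec:

spec:
  strategy:
    type: RollingUpdate   # or Recreate
    rollingUpdate:
      maxUnavailable: 1   # at most 1 pod below the desired count during the update
      maxSurge: 1         # at most 1 extra pod above the desired count during the update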

updating your deployment

suppose you want to update your current deployment. this is how you do a simple update:

  • update your image with the updated codebase & infrastructure in the image registry
  • update the version of image if you have any specific version listed in your deployment-definition.yaml config
  • run kubectl apply -f deployment-definition.yaml
  • this will trigger a new rollout, creating a new revision

or, you can

  • directly update the image using kubectl set image deployment/myapp-deployment nginx-container=nginx:1.9.1. but it won’t update the deployment configuration file, so be mindful of that.

deployment commands

  • create deployment: kubectl create -f deployment.yaml (adding --record saves changes within rollout history)
  • get deployments: kubectl get deployments
  • update deployment:
    • kubectl apply -f deployment.yaml
    • kubectl set image deployment/myapp-deployment nginx-container=nginx:1.9.1
    • edit running deployment: kubectl edit deployment myapp-deployment --record

rollouts and versioning

creating deployment -> creates a rollout -> creates a new deployment revision

  • status:

    • kubectl rollout status deployment/myapp-deployment
    • kubectl rollout history deployment/myapp-deployment (this shows the revisions with their REVISION numbers)
  • rollback: kubectl rollout undo deployment/myapp-deployment rolls back to the last revision

under the hood: upon a deployment rollout,

  • the deployment creates a new replicaset,
  • the new replicaset creates new pods based on the updates,
  • the old pods are then replaced with these new pods,
  • if we roll back, the deployment recreates the pods of the old replicaset and destroys the pods of the new replicaset.
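if you need to roll back to a specific revision from the history rather than just the last one (the revision number below is illustrative), you can pass --to-revision:

kubectl rollout undo deployment/myapp-deployment --to-revision=1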

namespaces

kubernetes provides a way to virtually organize resources using namespaces. by default, kubernetes consists of these namespaces:

  • default // the default namespace for the user
  • kube-system // resources for kubernetes internal components
  • kube-public // resources that should be available to all users are created here
  • kube-node-lease // holds lease objects used for node heartbeats

we can view available namespaces in the system using kubectl get namespaces

uses

  • isolate resources between run-time environments
  • limit resources by creating ResourceQuota
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: dev
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: 5Gi
    limits.cpu: "10"
    limits.memory: 10Gi
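to try this out (the filename is assumed), save the quota above as compute-quota.yaml, create it, and check it inside the namespace:

kubectl create -f compute-quota.yaml
kubectl get resourcequota --namespace=dev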

connection

  • resources within a namespace can simply be referred to by name. to reach a resource in another namespace, use its full DNS name, e.g. pg-svc.dev.svc.cluster.local:
pg-svc: service name is pg-svc
dev: within the namespace `dev`
svc: it's a type of service
cluster.local: domain
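for example (purely illustrative names and credentials), an app outside dev might use the full name in its connection string, while a pod inside dev could just use pg-svc:

postgres://user:password@pg-svc.dev.svc.cluster.local:5432/mydb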

namespace commands

  • creating namespace:

    1. kubectl create namespace my-namespace
    2. within a config:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: dev
  • specifying namespace:

    1. with the kubectl command: add --namespace=dev or --namespace dev at the end of the command to direct the action to the given namespace dev. example:

      • kubectl get pods --namespace=dev
      • kubectl get pods --namespace=prod

    we can also set a default namespace for kubectl to use: kubectl config set-context $(kubectl config current-context) --namespace=dev. after that, kubectl get pods will show pods from the dev namespace.
    2. within config:

    apiVersion: v1
    kind: Pod
    metadata:
      name: app-pod
      namespace: dev
      labels:
        app: app
        type: backend
    spec:
      containers:
        - name: nginx
          image: nginx
  • view all resources regardless of namespace: add --all-namespaces or -A to view resources across all namespaces. for instance, we can get all pods from all available namespaces using the command kubectl get pods -A

  • replacing an existing object with a TEMP_CONFIGURATION.yaml can be done using: kubectl replace --force -f /tmp/TEMP_CONFIGURATION.yaml. this deletes the existing object and recreates it from the updated configuration.

pod lifecycle

when a pod is created, it is in the pending state. at this stage, the scheduler tries to place the pod on a node. if the scheduler can’t find a suitable node, the pod remains in the pending state.

we can run the command kubectl describe pod POD_NAME to see why a pod remains in the pending state.

once the pod is scheduled, it goes into the ContainerCreating status. at this stage, the images required for the application are pulled and the containers start.

once the containers start, the pod goes into the running state. the pod remains in the running state until the program completes or is terminated.
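as a quick check (POD_NAME is a placeholder), the current phase is visible in the STATUS column of kubectl get pods, or can be read directly from the pod’s status:

kubectl get pod POD_NAME -o jsonpath='{.status.phase}'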

pod conditions

to check the status of the pods, we can use pod conditions.

  • Initialized
  • Ready
  • ContainersReady
  • PodScheduled
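to inspect these conditions (POD_NAME is a placeholder), kubectl describe pod shows a Conditions section with True/False for each, and jsonpath can pull them out directly:

kubectl describe pod POD_NAME
kubectl get pod POD_NAME -o jsonpath='{.status.conditions[*].type}'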