Deconstructing a Kubernetes Deployment

Think back to the first time you laid eyes on a Kubernetes deployment manifest. Did it make any sense to you apart from the image and container parameters? Wait, it did? Well, that makes one of us!

When I first saw a Kubernetes deployment, I was hit with a flurry of questions. Questions that made me feel like I had opened the Matrix. Now, after some much-needed experience (and a few existential crises), I think I can finally answer some of those burning questions.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app-container
        image: my-app:latest

Why is a Deployment Such a Nested Document?

Seriously, all I wanted to do was run an image! But instead, I got handed an onion with layers of YAML, where every layer seemed to unlock more mystery and confusion. Why the nesting? Well, Kubernetes has a thing for structure. Those layers aren’t just there to test your patience; they’re there to give Kubernetes all the context it needs to run, monitor, and scale your app like a pro.

To break it down: at the top level, you declare that it’s a “Deployment” (so Kubernetes knows you’re serious). Then you define your spec (what you actually want to happen), and finally, deep in the abyss, lies your container configuration. It’s all for good reason—honestly!

Breaking it Down:

  1. Deployment: This top-level resource manages Pods by creating a ReplicaSet to ensure that the desired number of Pods is always running. It’s the boss, telling Kubernetes, “Hey, I need three Pods, make it happen and keep it that way.”

  2. ReplicaSet Section:

    • Replicas: This line (replicas: 3) indicates how many Pods should be running at any given time.
    • Selector: The selector’s matchLabels field matches Pods with the label app: my-app; this is how the ReplicaSet knows which Pods to manage. If one dies, it spins up a new one. Labels are also how it tells your dev Pods apart from your prod Pods when both run in the same cluster and namespace. (If that sounds confusing, see the labels section below.)
  3. Pod Section:

    • Pod Template: Nested deep inside the Deployment is the Pod template. The template section defines the Pods that will be created and managed by the ReplicaSet.
    • Containers: Within the spec of the Pod, we define the containers. In the manifest above, it’s a single my-app-container running the my-app:latest image inside each Pod.

So, the Deployment controls the ReplicaSet, which controls the Pods. It’s like a well-orchestrated chain of command where each piece has a clear job to do. If one Pod goes down, the ReplicaSet ensures a new one is created to keep things running smoothly.
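
You can see this chain of command for yourself once the manifest is applied. A quick sketch, assuming the Deployment above is saved as deployment.yaml and you have kubectl pointed at a cluster:

```shell
# Apply the Deployment, then list everything it created, filtered by label
kubectl apply -f deployment.yaml
kubectl get deployment,replicaset,pods -l app=my-app

# Typical output shape: one Deployment, one ReplicaSet (named with a
# hashed suffix like my-app-7d4b9c6f5), and three Pods owned by that
# ReplicaSet
```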

But is that all a Deployment does? Control a ReplicaSet? In that case, can’t I just manage everything with a ReplicaSet and skip the Deployment entirely? Well, you can! But here are some reasons you might not want to.

Running a Deployment on top of a ReplicaSet in Kubernetes is a common pattern because a Deployment provides additional management capabilities beyond what a ReplicaSet offers. Here are the key reasons why someone would use a Deployment over a ReplicaSet:

  • Declarative Updates: A Deployment allows declarative updates to applications. You define the desired state of your application (number of replicas, the image version, etc.), and the Deployment controller ensures that the actual state matches the desired state.

  • Rolling Updates: Deployments manage rolling updates seamlessly. They ensure that updates to your application happen gradually, with a controlled number of Pods updated at a time, reducing downtime and providing a rollback option in case of failures.

  • Rollback Support: Deployments keep track of versions (revisions) of ReplicaSets and allow rolling back to a previous version if something goes wrong during an update. This makes it easy to revert to a stable state.

  • Multiple ReplicaSets: During updates, Deployments can manage multiple ReplicaSets at the same time (e.g., when transitioning between versions of an application), ensuring smooth version transitions without manual intervention.

In short, the Deployment abstracts the complexity of managing ReplicaSets by providing lifecycle management, scaling, updates, and rollbacks, making it the preferred choice for most use cases.
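
As a small illustration of the declarative model, scaling is just a matter of changing the desired state. A sketch, assuming the my-app Deployment from earlier exists in your cluster:

```shell
# Imperative shortcut: bump the desired replica count to 5
kubectl scale deployment my-app --replicas=5

# Declarative alternative: set replicas: 5 in the YAML and re-apply
kubectl apply -f deployment.yaml

# Either way, the Deployment controller reconciles the actual state:
# it has the ReplicaSet create (or delete) Pods until 5 are running
```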

What’s with Labels Everywhere?

Okay, let’s talk about labels—yes, those little key-value pairs you have to sprinkle all over the place like confetti. At first, you’re thinking, “Did I really need to specify this label here… and here… and, oh great, here too?” It feels like overkill.

But think of labels as your app’s name tag at a Kubernetes conference. Each resource in Kubernetes needs to find its friends (or at least its services and pods). Labels help Kubernetes match things up—kind of like speed dating but for resources. The Deployment, Service, and Pods all need to share a common label so they can discover each other and work together. No labels, no pod-to-service magic.

Example: Labels and Selectors in Action

Let’s talk about labels and selectors—Kubernetes’ way of organizing its party guests. Labels are like sticky name tags: “Hi, I’m app: my-app.” Selectors? Well, they’re the bouncers at the door, making sure only those with the right label get into the party (or in this case, get managed by services, deployments, etc.).

apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
  labels:
    app: my-app
    environment: production
spec:
  containers:
  - name: nginx-container
    image: nginx

Now, let’s say you have a Service that’s standing there with its checklist, making sure only the cool kids (a.k.a Pods with app: my-app and environment: production) get the VIP treatment (traffic):

# A Service selecting Pods with the right label
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
    environment: production  # Only those with this label can enter
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

So, what’s happening here? The Service looks for Pods wearing the label app: my-app and environment: production, and boom—it routes traffic to them like a pro. Kubernetes is like, “I got you; traffic’s headed your way!”
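
You can use the same label selector from the command line to see exactly which Pods the Service will pick up. A sketch, assuming the Pod and Service above exist:

```shell
# List only the Pods carrying both labels the Service selects on
kubectl get pods -l app=my-app,environment=production

# Check which Pod IPs the Service has actually registered as endpoints
kubectl get endpoints my-app-service
```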

Example Deployment YAML File

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
  labels:
    app: my-app
spec:
  replicas: 3  # We want 3 replicas of the Pod, created and managed by a ReplicaSet
  selector:
    matchLabels:
      app: my-app  # Selector matches Pods with the label 'app: my-app'

  # --- ReplicaSet Section ---
  # This part (the spec of the Deployment) defines the logic that creates ReplicaSets.
  # The Deployment makes sure that 3 replicas of the Pods are running at all times.
  
  template:
    metadata:
      labels:
        app: my-app  # Pods will carry this label, which the ReplicaSet uses to track them
    
    # --- Pod Section ---
    # Here is where the actual Pod definition lives.
    # This section defines the containers and their behavior inside the Pod.
    spec:
      containers:
      - name: nginx-container  # The container within the Pod
        image: nginx:1.19  # The container image (nginx in this case)
        ports:
        - containerPort: 80  

Why Are There Multiple ReplicaSets Associated with a Single Deployment?

Great question! You might notice that a single Deployment can have multiple ReplicaSets associated with it. But why?

When you update a Deployment (like changing the container image or resource limits), Kubernetes doesn’t just update the Pods in place. Instead, it creates a new ReplicaSet for the new version of the Pods. This allows Kubernetes to:

  1. Perform rolling updates: It gradually replaces old Pods managed by the previous ReplicaSet with new Pods managed by the new ReplicaSet.
  2. Rollback capability: The old ReplicaSet is kept around in case you need to roll back to a previous version.

Example:

  • You deploy version v1 of your app, which creates ReplicaSet A.
  • You then update the Deployment to version v2. Kubernetes creates ReplicaSet B.
  • If something goes wrong with version v2, you can roll back to version v1, and ReplicaSet A will take over again.

This versioning of ReplicaSets ensures smooth upgrades and quick recovery from issues.
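
You can watch this versioning happen. A sketch, assuming a Deployment named my-app-deployment that has been updated at least once:

```shell
# Both ReplicaSets are listed: the old one is scaled down to 0 replicas,
# while the new one owns the 3 running Pods
kubectl get replicasets -l app=my-app
```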


How Do I Upgrade and Rollback via a Deployment?

Upgrades and rollbacks in Kubernetes are super easy and well-orchestrated. The Deployment is designed to handle this elegantly using rolling updates.

Upgrading a Deployment

Let’s say you want to update your app to a new version (say, from nginx:1.19 to nginx:1.20).

  1. Modify the Deployment YAML: Update the image version in the deployment:

    containers:
    - name: nginx-container
      image: nginx:1.20
    
  2. Apply the Updated Deployment: Run the following command to update your Deployment:

    kubectl apply -f deployment.yaml
    
  3. Watch the Rolling Update: Kubernetes will gradually terminate old Pods and create new ones using the updated ReplicaSet, ensuring no downtime (thanks to the maxUnavailable and maxSurge settings).

    You can check the status of the rollout with:

    kubectl rollout status deployment my-app-deployment
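
As an aside, you don’t have to edit the YAML file at all for a quick image bump: kubectl set image performs the same rolling update. A sketch, assuming the Deployment and container names from the example above:

```shell
# Update just the image of one container in the Deployment
kubectl set image deployment/my-app-deployment nginx-container=nginx:1.20

# Follow the rollout as old Pods drain and new ones come up
kubectl rollout status deployment my-app-deployment
```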
    

Rolling Back a Deployment

If the new version (nginx:1.20) starts acting up, don’t worry! You can easily roll back to the previous stable version (nginx:1.19).

  1. Rollback Command: Roll back the Deployment to the previous version by running:

    kubectl rollout undo deployment my-app-deployment

    This command will switch back to the old ReplicaSet (with nginx:1.19) and start replacing the problematic Pods with the stable ones.

  2. Check the Rollback Status: You can check the progress of the rollback:

    kubectl rollout status deployment my-app-deployment

  3. Specific Revision Rollback: If you have multiple revisions, you can also roll back to a specific one:

    kubectl rollout undo deployment my-app-deployment --to-revision=2

And just like that, you’ve rolled back to a stable version, with Kubernetes doing all the heavy lifting for you!
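
Before rolling back to a specific revision, you can list the revisions the Deployment has recorded. A sketch, assuming the same Deployment name:

```shell
# List all recorded revisions of this Deployment
kubectl rollout history deployment my-app-deployment

# Inspect one revision in detail (image, labels) before rolling back to it
kubectl rollout history deployment my-app-deployment --revision=2
```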

Demo Time!

Deploying three versions of Nginx

Each deployed version will have its own ReplicaSet

Each ReplicaSet will manage a specific Deployment revision

Rolling back the Deployment to run a specific version

The Deployment is now running the specified revision


Note
The kubectl annotate command is a handy way to tag the latest rollout with a message, which then shows up in the rollout history output. Previously, the --record flag was used with kubectl create/apply commands, but that flag is now on the verge of being deprecated.
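
A sketch of that annotation in action, assuming the Deployment name from earlier; the kubernetes.io/change-cause annotation is what rollout history displays in its CHANGE-CAUSE column:

```shell
# Record a human-readable reason for the current revision
kubectl annotate deployment my-app-deployment \
  kubernetes.io/change-cause="upgrade nginx to 1.20"

# The message now appears next to the latest revision
kubectl rollout history deployment my-app-deployment
```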

Final Thoughts

So there you have it. Labels and selectors are the glue that binds your Kubernetes resources together, and Deployments are the orchestrators, making sure your Pods are always running, scaling, and updating smoothly. When you understand how these pieces fit together, Kubernetes becomes less of a scary YAML monster and more of a well-oiled machine with you in the driver’s seat.