Continuous Delivery for Kubernetes Applications

May 7, 2020


This is part 3 of our 3-part Kubernetes CI/CD series. In the first part, we learned at a high level about the overall CI/CD strategy. In the second part, we discussed the continuous integration workflow in detail. In this blog, we will go into detail on the Continuous Delivery pipeline for deploying your applications to Kubernetes.

While developing your CI/CD strategy, it is important to consider how you will monitor your application stack. A robust monitoring stack provides deep insight into the application stack and helps identify issues early on. MetricFire specializes in monitoring systems, and you can use our product with minimal configuration to gain in-depth insight into your environments. If you would like to learn more about it, please book a demo with us, or sign up for the free trial today.

Continuous Delivery is the ability to get changes of all types—including new features, configuration changes, bug fixes and experiments—into production, or into the hands of users, safely and quickly in a sustainable way.

Our goal is to make deployments—whether of a large-scale distributed system, a complex production environment, an embedded system, or an app—predictable, routine affairs that can be performed on demand. 

You’re doing continuous delivery when:

  • Your software is deployable throughout its lifecycle
  • Your team prioritizes keeping the software deployable over working on new features
  • Anybody can get fast, automated feedback on the production readiness of their systems any time somebody makes a change to them
  • You can perform push-button deployments of any version of the software to any environment on demand

In the previous blog we established that the hand-off from Continuous Integration to Continuous Delivery takes place when the CI pipeline pushes the Docker image to the Docker repository and then pushes the updated Helm chart to a Helm repository or artifact store. Let's go further today.

Some CD tools

Some widely used CD tools are:

  1. Jenkins
  2. Spinnaker
  3. Harness
  4. Weave Flux
  5. ArgoCD

You may have noticed that Jenkins can be used as both a Continuous Integration and a Continuous Delivery tool, primarily because of its rich feature set and flexibility.

Spinnaker and Harness follow a more traditional continuous delivery approach where we fetch the deployable artifact, bake it, and deploy it to the desired environment.

Weave Flux and ArgoCD use a GitOps-based approach where both the application source code and the Helm charts are part of a git repository and are continuously synced to a desired environment. We will learn more about the GitOps approach in a later post.

Artifact Management

The most important artifact for the Continuous Delivery pipeline is the Helm chart. It is extremely important that Helm charts are properly versioned and stored. We should also ensure that our pipeline has access to the artifact store; artifacts can then be fetched over HTTP using appropriate authorization. Some options for artifact management are:

  1. JFrog Artifactory: We can use this as both a Docker repository and a Helm repository. Additionally, Helm charts can be packaged as tar.gz files and stored here. Access to JFrog can be managed using a user/token, and Helm packages can be fetched over HTTPS.
  2. GCS Bucket: This is Google Cloud Storage and is great for storing packaged Helm charts. Make sure to enable a bucket lifecycle policy in order to reduce costs. Access to GCS buckets should be managed using service account key files.
  3. S3 Bucket: This is the object storage service provided by AWS. If your continuous delivery pipeline infrastructure runs in AWS, then S3 is a good choice for storing packaged Helm charts. It is highly recommended to use IAM roles to manage access to S3 buckets.

Deployment Strategies

As we learned previously, deployment strategies can be broadly classified as:

  1. Blue/Green Deployment
  2. Rolling Upgrade
  3. Canary Deployment 

Kubernetes natively supports only two rollout strategies: RollingUpdate and Recreate.

With RollingUpdate, we set the maximum number of replicas that may be unavailable, and the maximum surge above the desired replica count, during a new version rollout. For example, imagine the currently running Deployment has 4 replicas, the maxUnavailable parameter is set to 1, and the maxSurge parameter is also set to 1. During the rollout, one replica of the currently running version will be terminated while, at the same time, a new replica with the new version is created, and so on. This is a zero-downtime deployment method. It is important to understand that this method should only be used when the new and old versions of the application are backward compatible. The configuration looks something like this:
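A minimal sketch of such a Deployment spec, matching the example above (the name, labels, and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most 1 replica below the desired count
      maxSurge: 1         # at most 1 replica above the desired count
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-registry/my-app:1.2.3
```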


In the case of Recreate, all the pods of the existing Deployment are terminated, and new pods of the new version are created. We should use this strategy when:

  • Your application can withstand a short amount of downtime.
  • Your application does not support having new and old versions of the code running at the same time.
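The corresponding strategy block is short:

```yaml
spec:
  strategy:
    type: Recreate
```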


A sample CD pipeline looks something like the following:

  1. Fetch Artifact
    As soon as there is a hand-off from the CI system to the CD system, the CD system fetches the artifact. This artifact is a Helm package whose location is relayed by the payload delivered by the CI system. We also pre-configure the location of these artifacts in the CD system; that location could be a GCS bucket or JFrog Artifactory. In the previous blog we established a convention to name and upload the Helm package to a particular path. We should ensure that the CD system has the fetch URL preconfigured.
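For instance, fetching a versioned package from Artifactory over HTTPS might look like this (the URL, path convention, and token variable are assumptions, not fixed names):

```shell
# Fetch the packaged chart produced by the CI pipeline.
# -f fails on HTTP errors, -S shows them, -L follows redirects.
curl -fSL \
  -H "Authorization: Bearer ${ARTIFACTORY_TOKEN}" \
  -o my-app-0.1.0.tgz \
  "https://artifactory.example.com/artifactory/helm-local/my-app/my-app-0.1.0.tgz"
```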
  2. Bake Artifact
    This is one of the most crucial parts of the deployment pipeline. Helm packages are designed to accept override values during the deployment process. However, before the actual deployment occurs, we can bake the artifact and inspect the exact manifest which will be deployed to the cluster. Consider doing this with the helm template command. For example:
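A sketch of baking the fetched package with environment-specific overrides (the release name, chart file, and values file are illustrative):

```shell
# Render the chart locally without touching the cluster.
helm template my-app ./my-app-0.1.0.tgz \
  --values values-production.yaml \
  --set image.tag=1.2.3 \
  > baked-manifest.yaml
```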


           The output of this command is the exact manifest that will be deployed to the cluster, with all the values overridden.

  3. Choose Environment
    The decision to deploy to a particular environment is made based on the information relayed by the trigger payload. This payload is generally delivered by our CI system to the CD system. A Travis CI payload looks something like this:
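A trimmed, illustrative subset of such a payload (the field names follow the Travis CI webhook notification format; the values are examples):

```json
{
  "number": "42",
  "type": "push",
  "state": "passed",
  "branch": "master",
  "tag": null,
  "commit": "abc1234",
  "committer_name": "Jane Doe",
  "committer_email": "jane@example.com",
  "repository": {
    "name": "my-app",
    "owner_name": "my-org"
  }
}
```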


The payload contains crucial information which is consumed by our CD system to deploy to different environments. For example, "branch": "master" indicates that the master branch of the source code repo was built. The rule of thumb is that the master branch should always be deployable, so whenever master is built, the artifact is deployed to the production environment. Similarly, if the branch is staging, we deploy to the staging environment. If we trigger CI builds for tags, that information is also relayed by this payload and can be processed by the CD system.
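This branch-to-environment mapping can be sketched in a few lines of shell (the branch and environment names are assumptions; in practice the branch would be parsed from the payload, e.g. with jq):

```shell
#!/bin/sh
# Map the branch reported by the CI payload to a target environment.
branch="master"   # e.g. branch="$(jq -r .branch payload.json)"

case "$branch" in
  master)  DEPLOY_ENV="production" ;;
  staging) DEPLOY_ENV="staging" ;;
  *)       DEPLOY_ENV="dev" ;;
esac

echo "Deploying to $DEPLOY_ENV"
```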

  4. Deploy and Notify
    Once the artifact has been fetched and baked, and we have decided which environment to deploy to, the actual deployment is triggered. Depending on the deployment strategy (as discussed above), new pods are created and old pods are deleted.
    An important aspect of the deployment pipeline is notifications. Using the same payload we discussed above, we can get committer_name and committer_email to deliver notifications. Additionally, every CD tool has integrations with Slack and Hipchat which can be used to deliver notifications to a broader audience.
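Putting it together, the deployment step itself often reduces to a single helm invocation against the fetched package (the release name, namespace, and values file are illustrative):

```shell
# Install the release if absent, upgrade it otherwise; --wait blocks
# until the rollout completes or the timeout expires.
helm upgrade --install my-app ./my-app-0.1.0.tgz \
  --namespace production \
  --values values-production.yaml \
  --set image.tag=1.2.3 \
  --wait --timeout 5m
```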


In this blog we went over the Continuous Delivery system and its various stages in detail. This concludes our 3-part Kubernetes CI/CD series. We tried to go into as much detail as possible and provide production-ready configuration. However, it is important to understand that every environment is different, and so is every application stack; your CI/CD pipeline should be tailored accordingly.

If you need help designing a custom CI/CD pipeline, feel free to reach out to me through LinkedIn. Additionally, MetricFire can help you monitor your applications across various environments and at different stages of the CI/CD process. Monitoring is essential for any application stack, and you can get started with MetricFire's free trial. Robust monitoring and a well-designed CI/CD system will not only help you meet SLAs for your application, but also ensure sound sleep for the operations and development teams. If you would like to learn more, please book a demo with us.
