Deploying CI/CD on GKE with Cloud Build: Step by Step Guide
A step-by-step guide on how to deploy microservices on GKE using Cloud Build CI/CD.
June 07, 2021 | Technical | Google Cloud | DevOps | Cloud Management
Managing Docker images and their associated containers can be a real headache. Today, startups and large companies alike deploy their applications as containers, and they can end up with 20 to 100 different containers running simultaneously. This creates a real problem - developers can’t possibly find the time to update all of those container images manually.
The solution? Google Kubernetes Engine (GKE) and Cloud Build. GKE is a service where you can build and manage your own infrastructure, create ‘clusters’, and deploy containers - essentially, a powerful service for managing microservices. For automation, however, another Google tool is required: Cloud Build (for CI/CD).
In this guide, we will see how to deploy microservices step by step on GKE using Cloud Build CI/CD.
Understanding the services we will use today
If this is your first project with CI/CD pipelines, it’s a good idea to familiarize yourself with the services we’ll be using to create the pipeline.
What is Google Kubernetes Engine?
Google Kubernetes Engine (GKE) is a service in the Google Cloud Platform (GCP) that helps us manage, deploy, and scale our containerized applications on GKE clusters. GKE clusters are powered by the open-source project Kubernetes.
Some of the powerful features of GKE that we often use in CI/CD pipelines include:
- Node pools
- Node auto repair
- Automatic scaling
- Automatic upgrades
- Logging and monitoring
What is Google Cloud Build?
Google Cloud Build is a service in Google Cloud Platform that allows us to execute our builds quickly and efficiently. It is also arguably the most important tool in building CI/CD pipelines as it is used to automate the process.
What is Google Container Registry?
Google Container Registry (GCR) is a service in GCP where we manage Docker images. We can also use GCR to manage access control and decide who has access to Docker images. GCR also has a very powerful feature called in-depth vulnerability scanning which is a big help in testing and debugging.
What is the GitHub App?
GitHub App triggers allow you to automatically invoke builds on Git pushes and pull requests, and to view your build results on GitHub and in the Cloud Console. GitHub App triggers support all of the features offered by the existing GitHub triggers, and use the Cloud Build GitHub app to configure and authorize GitHub.
Importance of Cloud Build on GKE
The importance of Cloud Build is best explained with a simple example:
Suppose we have ten microservices running on a GKE cluster, and each has one deployment, one configmap, and one secret file. All ten microservices belong to one organization, which we’ll call D3V. Inside that organization, we have twenty repos - an App repo and an Env repo for each microservice - because we follow the GitOps style. So when someone pushes code changes to an App repo, we need to run the Dockerfile again to create a new image. After that, we have to push that image to GCR, then go to the Env repo templates, update the image, and deploy the now-updated templates to GKE.
Clearly, this is a major productivity problem, especially as the application grows and becomes more complex with new microservices. Thankfully, we have a solution and that is to use Cloud Build to build a CI/CD pipeline that can automate this process.
The CI part (Continuous Integration) handles the App repo. When changes are made to the code, it automatically starts the trigger in Cloud Build, creates an image, and pushes that image to Google Container Registry (GCR). As soon as the image is pushed to GCR, the second trigger begins the second part of the pipeline - Continuous Deployment. The CD part of the pipeline deals with the Env repo and is responsible for updating the templates with the new image and deploying those templates to GKE.
In essence, we’re building a fully automated assembly line. The end result is that your software development team does not have to wait for the DevOps team to create the image and deploy their changes. All the development team has to do is update the code, and everything else will be taken care of by Cloud Build (leaving the DevOps team to just write a script and build the triggers).
With the “why” of this argument well established, let’s take a look at how to actually build a CI/CD pipeline using Cloud Build.
Deploying a CI/CD pipeline on GKE in 11 Steps
The process of deploying a CI/CD pipeline on GKE using Cloud Build is relatively complex, but we’ve tried to simplify and summarize it in just 11 steps - feel free to follow along.
Step 1: Create a GKE cluster as per your requirements. Alternatively, if you’re just testing the waters, use a single-node cluster.
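For a quick test setup, a single-node cluster can be created with the gcloud CLI. This is only a sketch - the cluster name and zone below are placeholder values, not specifics from this guide:

```shell
# Create a minimal single-node GKE cluster for testing.
# "demo-cluster" and the zone are placeholder values.
gcloud container clusters create demo-cluster \
    --zone us-central1-a \
    --num-nodes 1

# Fetch credentials so kubectl can talk to the new cluster.
gcloud container clusters get-credentials demo-cluster --zone us-central1-a
```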
Step 2: Write YAML files describing the specification of your pods. You need at least one YAML file to deploy to GKE, but typically you’ll have the following three:
- Deployment file: In the deployment file, you define the application specs: which image it should use, the link/path of the image file, the number of pods, etc.
- Configmap file: A configmap stores non-confidential data in key-value pairs.
- Secret file: The secret file contains confidential data such as passwords, TLS certificates, and keys.
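As a rough sketch of what the deployment file might look like - all names, labels, and the image path below are placeholders, and the configmap/secret references are only one way to wire them in:

```yaml
# deployment.yaml - minimal sketch; every name and the image path are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 2                    # number of pods
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: gcr.io/my-project/my-service:latest   # link/path of the image
          ports:
            - containerPort: 8080
          envFrom:
            - configMapRef:
                name: my-service-config    # non-confidential key-value pairs
            - secretRef:
                name: my-service-secret    # passwords, keys, etc.
```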
Step 3: In Cloud Build, go to Settings and enable Kubernetes Engine (it is disabled by default). This grants the Cloud Build service account permission to interact with GKE and other services.
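The same setting can be applied from the command line by granting the Kubernetes Engine Developer role to the Cloud Build service account. PROJECT_ID and PROJECT_NUMBER below are placeholders for your own project:

```shell
# PROJECT_ID and PROJECT_NUMBER are placeholders for your own project values.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
    --role="roles/container.developer"
```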
Step 4: For the repositories, we follow the GitOps style, which means we’ll create two separate repositories: the App repo and the Env repo.
- App repo: In this repo, we keep the application code and the Dockerfile required to build the image and push it to Google Container Registry (GCR).
- Env repo: In this repo, we keep the manifests/templates and the scripts to deploy them to GKE.
Step 5: Next we need to create a trigger, which is very straightforward. Go back to the Cloud Build page and click on Create Trigger. You’ll be asked to fill in the following information:
- Name: Choose a name that is unique in the project
- Description: Briefly state what this trigger will do
- Tags: You can use tags to add arbitrary strings to organize triggers. For instance, you can use tags to indicate different environments, services, teams, etc.
- Push to a branch: The trigger starts whenever a push is made to the specific branch named in the trigger.
- Push a tag: The trigger starts when a specific tag is pushed.
- Pull request: The trigger starts whenever a pull request is created. Note that this option is only available for GitHub App triggers.
- Repository: Select your repository from which you want to build a trigger. If your repository is not present, then before building the trigger, click on Connect Repository and connect the repository of your choice.
- Branch: Select your branch name
- Included files: If you want the trigger to run only when specific files or directories change, list those paths here. The trigger will then activate only when a changed file matches the Included Files list. For example, if you list File A, the trigger runs when File A changes but not when changes are made to any other file.
- Ignored files: Conversely, any files listed here are ignored; changes made only to those files will not start the trigger.
- Autodetected: With this option, Cloud Build will automatically detect a Dockerfile and/or cloudbuild.yaml file.
- Cloud Build configuration file: With this option, you specify the path of the cloudbuild.yaml file within the repo. Alternatively, you can write the Cloud Build file right here using the inline editor.
- Dockerfile: With this option, you specify the Dockerfile path within the repo.
- Substitution variables: This is optional, but if you chose the Cloud Build configuration file as your config option, you can define trigger-specific substitution variables here by clicking on Add variable.
Step 6: Now just click on Create to finish creating the trigger.
Step 7: Now, in GitHub (App repo), write a cloudbuild.yaml file. In this file, specify the Dockerfile path and the steps to build the image from the Dockerfile and, once it is created, push it to GCR.
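A sketch of such a cloudbuild.yaml, assuming the Dockerfile sits at the repo root; the image name `my-service` is a placeholder:

```yaml
# cloudbuild.yaml (App repo) - sketch; the image name is a placeholder.
steps:
  # Build the image from the Dockerfile, tagged with the commit SHA.
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-service:$COMMIT_SHA', '.']
  # Push the freshly built image to GCR.
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/my-service:$COMMIT_SHA']
images:
  - 'gcr.io/$PROJECT_ID/my-service:$COMMIT_SHA'
```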
Step 8: In the same cloudbuild.yaml (App repo), write three more steps:
- Clone the Env repo and change into its directory
- Make a temporary copy of the deployment template, substituting the “COMMIT_SHA” variable for the image name in the templates directory’s deployment file
- Finally, once the changes have been made, add and push those changes - all of this is done using git commands
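The three steps above could be sketched as additional entries in the same steps list. Everything here is a placeholder - the Env repo URL, branch, and template file names - and authentication to the Git remote is assumed to already be configured (for example, via an SSH key made available to the build):

```yaml
  # Clone the Env repo (URL is a placeholder; auth is assumed to be set up).
  - name: 'gcr.io/cloud-builders/git'
    args: ['clone', 'https://github.com/D3V/my-service-env.git']
  # Copy the deployment template, replacing the image-name placeholder
  # with the commit SHA of the image just pushed to GCR.
  - name: 'gcr.io/cloud-builders/gcloud'
    entrypoint: /bin/sh
    args:
      - '-c'
      - |
        sed "s/COMMIT_SHA/${COMMIT_SHA}/g" templates/deployment.yaml.tpl \
          > my-service-env/deployment.yaml
  # Commit and push the updated template; this push fires the CD trigger.
  - name: 'gcr.io/cloud-builders/git'
    entrypoint: /bin/sh
    args:
      - '-c'
      - |
        cd my-service-env
        git add deployment.yaml
        git commit -m "Deploy image ${COMMIT_SHA}"
        git push origin master
```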
Step 9: Now, in the Env repo, write another cloudbuild.yaml file. In this file, include the cluster name and the zone of the cluster, and give the path of the script that connects to the target cluster and deploys the manifests/templates to GKE, replacing the earlier deployment file.
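A sketch of the Env repo’s cloudbuild.yaml using the kubectl cloud builder, which connects to the cluster and applies the manifest in one step; the cluster name and zone below are placeholders:

```yaml
# cloudbuild.yaml (Env repo) - sketch; cluster name and zone are placeholders.
steps:
  - name: 'gcr.io/cloud-builders/kubectl'
    args: ['apply', '-f', 'deployment.yaml']
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
      - 'CLOUDSDK_CONTAINER_CLUSTER=demo-cluster'
```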
Step 10: Create two triggers: one for the App repo and another for the Env repo, as described in Step 5.
Step 11: After Step 8, the first trigger (the app trigger) will create an image and push it to Google Container Registry (GCR). Because the three steps from Step 8 are part of the same cloudbuild.yaml file, they will run as well. After that, the second trigger will also run: it updates the image name in the GKE manifest with the new image from GCR, and deploys the updated manifest to GKE.
The challenges of CI/CD pipelines
Nothing is ever fully devoid of limitations, and while CI/CD pipelines are fairly resilient, we still need to follow a series of best practices and principles - not doing so can result in parts of the pipeline not working as intended, or even failing.
Developers are far more susceptible to this if they do not have the proper knowledge of GKE and CI/CD. At the very least, you need to have a basic understanding of how these platforms work in order to write the script and build a trigger.
Additionally, without the proper expertise, it can get very difficult to maintain triggers, especially when you have a lot of microservices running. In fact, it’s not uncommon for developers to lose track of how many triggers they have built. Improper monitoring of triggers will also result in inflated cloud bills and inefficient usage of resources.
This is important because Cloud Build, while very affordable, isn’t free beyond its daily free tier of 120 build-minutes: it is billed at $0.003 per build-minute after that. On top of this, idle microservices in GKE will still consume resources such as CPU and memory, increasing cloud bills even more. Needless to say, keeping track is very important.
In the end, just like an assembly line with thousands of moving parts, it takes a lot of discipline and adherence to best practices to ensure a CI/CD pipeline keeps working flawlessly - and recovers gracefully when someone does eventually make a mistake.
The process of maintaining even a single microservice can get convoluted and mundane over time. As your application continues to scale and improve, the number of microservices will grow too, which would make the already repetitive workload even more so.
To prevent developer burnout and to use limited resources more efficiently, companies have started using Google Cloud Build to automate a big part of their workload - while enabling developers to focus on innovation. If you’d like to build and deploy powerful and versatile CI/CD pipelines, reach out to one of our Google-certified cloud engineers for a free consultation today.