How to Set Up a Deployment Pipeline on Google Cloud with Cloud Build, Container Registry and Cloud Run

Automatically building and deploying containers to Cloud Run when changes are pushed to your Git repositories.

Ivam Luz
CI&T


Photo by Quinten de Graaf on Unsplash

Introduction

In this article, we’ll see how to configure a deployment pipeline on Google Cloud Platform powered by Cloud Build, Container Registry and Cloud Run.

We’ll go through the process of configuring a pipeline that:

  1. Watches for changes to a specific branch of a GitHub repository;
  2. Once changes are detected on that branch, builds the Docker image for the application;
  3. Pushes the Docker image into Container Registry;
  4. Deploys the Docker image to be served by Cloud Run.

Warning: you need to have a configured billing account to follow this tutorial. If you don’t have a credit card you can use, please read the article and leave any questions you may have in the comments section below. I’ll be happy to try to help.

Cloud Build

Cloud Build is a service that executes your builds on Google Cloud Platform infrastructure. Cloud Build can import source code from Google Cloud Storage, Cloud Source Repositories, GitHub, or Bitbucket, execute a build to your specifications, and produce artifacts such as Docker containers or Java archives.

Cloud Build executes your build as a series of build steps, where each build step is run in a Docker container. A build step can do anything that can be done from a container irrespective of the environment. To perform your tasks, you can either use the supported build steps provided by Cloud Build or write your own build steps.

Reference: https://cloud.google.com/cloud-build/docs

IAM Permissions

For our Cloud Build pipeline to work properly, we need to grant some new permissions to the default service account used by the service, identified by the address <project-number>@cloudbuild.gserviceaccount.com. To do so:

  • From the top-left menu, select IAM & Admin;
  • Find the service account identified by <project-number>@cloudbuild.gserviceaccount.com;
  • Edit the service account and add the Cloud Run Admin and Service Account User roles.

Cloud Run Admin is needed so that Cloud Build has the permissions necessary to deploy the Cloud Run service; Service Account User is needed so that the Cloud Run service can be configured to allow access from unauthenticated users, as described here.
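If you prefer the command line over the console, a gcloud sketch along these lines grants the same roles (the project id is a placeholder):

```bash
# Hypothetical project id; replace with your own
PROJECT_ID="my-project"
PROJECT_NUMBER=$(gcloud projects describe "$PROJECT_ID" --format='value(projectNumber)')

# Grant the Cloud Run Admin role to the Cloud Build service account
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com" \
  --role="roles/run.admin"

# Grant the Service Account User role to the same account
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com" \
  --role="roles/iam.serviceAccountUser"
```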

Container Registry

Container Registry is a private container image registry that runs on Google Cloud. Container Registry supports Docker Image Manifest V2 and OCI image formats.

Many people use Docker Hub as a central registry for storing public Docker images, but to control access to your images you need to use a private registry such as Container Registry.

You can access Container Registry through secure HTTPS endpoints, which allow you to push, pull, and manage images from any system, VM instance, or your own hardware. Additionally, you can use the Docker credential helper command-line tool to configure Docker to authenticate directly with Container Registry.

Reference: https://cloud.google.com/container-registry/docs/overview

Cloud Run

Cloud Run is a managed compute platform that enables you to run stateless containers that are invocable via web requests or Pub/Sub events. Cloud Run is serverless: it abstracts away all infrastructure management, so you can focus on what matters most — building great applications. It is built from Knative, letting you choose to run your containers either fully managed with Cloud Run, in your Google Kubernetes Engine cluster, or in workloads on-premises with Cloud Run for Anthos.

Reference: https://cloud.google.com/run/docs

Set Up the GCP Project

To follow this tutorial, you’ll need access to a GCP project. To create one:

  1. Access the GCP Console, enter a name for your new project and click the CREATE button;
  2. Once your project is created, make sure it’s selected on the top-left corner, right beside the Google Cloud Platform logo;
  3. From the top-left menu, select APIs & Services, then click the ENABLE APIS AND SERVICES button;
  4. Enable these APIs: Cloud Build API, Google Container Registry API and the Cloud Run API.
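Alternatively, if you have the gcloud CLI configured against your new project, the same APIs can be enabled from the command line:

```bash
# Enable the three APIs used by the pipeline
gcloud services enable \
  cloudbuild.googleapis.com \
  containerregistry.googleapis.com \
  run.googleapis.com
```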

Well done! It’s time to get our hands dirty!

The Sample Application

The sample application we’ll use for this tutorial is a very simple Flask application. The application exposes two endpoints:

  • /health: a public endpoint to test if the application is alive;
  • /hello: a private endpoint protected with Basic Auth. All it does is return a simple JSON. The intent of this endpoint is to demonstrate how we can make use of environment variables on Cloud Run.
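As a rough sketch of what such an application looks like (the module layout and messages are illustrative, not the repository’s exact code):

```python
from flask import Flask, jsonify

from app.secure import require_api_key  # decorator described in the next section

app = Flask(__name__)


@app.route("/health")
def health():
    # Public endpoint used to check if the application is alive
    return jsonify({"status": "ok"})


@app.route("/hello")
@require_api_key  # protected with Basic Auth via the x-api-key header
def hello():
    return jsonify({"message": "Hello, World"})
```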

Sample Application Security

The Basic Auth is implemented as a decorator saved under app/secure.py. All it does is:

  1. Read the value of the x-api-key header from the request;
  2. Hash that value with SHA-512;
  3. Compare the hashed value against the value of the HASHED_API_KEY environment variable (which is also stored in SHA-512 form);
  4. If the values match, allow the request to proceed; otherwise, return a 401 error.
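A minimal sketch of such a decorator, assuming Flask and Python’s standard hashlib (the repository’s actual code may differ):

```python
import hashlib
import os
from functools import wraps

from flask import jsonify, request


def require_api_key(f):
    """Allow the request only when the SHA-512 hash of the x-api-key
    header matches the HASHED_API_KEY environment variable."""
    @wraps(f)
    def decorated(*args, **kwargs):
        api_key = request.headers.get("x-api-key", "")
        hashed = hashlib.sha512(api_key.encode("utf-8")).hexdigest()
        if hashed == os.environ.get("HASHED_API_KEY"):
            return f(*args, **kwargs)
        return jsonify({"error": "Unauthorized"}), 401
    return decorated
```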
Basic Auth decorator for our sample application

To set up and test the application locally, follow the steps described in its README file.

Dockerfile

The Dockerfile for our application is very simple. All it does is:

  1. Copy the source code of the application inside the container;
  2. Install application dependencies with pip;
  3. Run the application with gunicorn and expose it through a given port.
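A sketch of such a Dockerfile, assuming a Python 3 base image and a requirements.txt file at the root (paths and versions are illustrative):

```dockerfile
FROM python:3.8-slim

# Copy the application source into the container
WORKDIR /app
COPY . .

# Install application dependencies with pip
RUN pip install -r requirements.txt

# Cloud Run tells the container which port to listen on via $PORT
ENV PORT 8080

# Run the application with gunicorn (the module path is illustrative)
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 app.main:app
```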
The Dockerfile of our sample application

Configuring our Cloud Build Pipeline

The steps of our pipeline are defined in a YAML file called cloudbuild.yaml. As you can see, our pipeline is composed of three steps:

  1. The first step is responsible for building and tagging the Docker image of our application.
  2. The second step is responsible for pushing the Docker image built in step one to Container Registry.
  3. The third step is responsible for deploying the Docker image to Cloud Run, pointing to the address of the image pushed to Container Registry in step two.
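A sketch of what such a cloudbuild.yaml can look like (the region and the exact tag layout are assumptions):

```yaml
steps:
  # Step 1: build and tag the Docker image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/${_SERVICE_NAME}:$SHORT_SHA', '.']

  # Step 2: push the image to Container Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/${_SERVICE_NAME}:$SHORT_SHA']

  # Step 3: deploy the pushed image to Cloud Run
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['run', 'deploy', '${_SERVICE_NAME}',
           '--image', 'gcr.io/$PROJECT_ID/${_SERVICE_NAME}:$SHORT_SHA',
           '--region', 'us-central1',
           '--platform', 'managed',
           '--allow-unauthenticated']
```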
cloudbuild.yaml — Our Cloud Build pipeline file

Some things worth noticing in the file above:

  • It’s possible to use different Docker images in each step. For example, in the first two steps, we used an image called docker; in the third step, we used an image called gcloud. Both images are provided by Google, but you can also upload your own images to Container Registry and reference them in your build steps.
  • We make use of some variables like ${PROJECT_ID} and ${SHORT_SHA}. These are provided by Cloud Build, but, as you can see in Substituting variable values, it’s also possible to provide your own values. That’s the case of ${_SERVICE_NAME}, a user-defined variable we use for tagging our generated Docker image, as well as for setting the name of the deployed Cloud Run service.

Set Up the Cloud Build Trigger

With everything in place, it’s now time to set up our Cloud Build Trigger. To do so, follow these steps:

  • From the top-left menu, select Cloud Build, select Triggers in the left menu and then click the Connect repository button:
The Cloud Build Triggers page
  • Select GitHub (Cloud Build GitHub App), and click Continue:
Selecting the source to configure the Cloud Builder trigger
  • Authorize Google Cloud Build access to GitHub:
GitHub authorization page
  • Install the Google Cloud Build GitHub App:
Cloud Build GitHub App installation prompt
  • Select the GitHub account to install the Google Cloud Build GitHub App:
Selecting the account to install the Google Cloud Build app for GitHub
  • Then select the repositories:
Selecting the repository to install the Google Cloud Build app for GitHub
  • And finally connect the GitHub repository to Cloud Build:
Connecting the GitHub repository to Cloud Build
  • Select the GitHub repository and click Create push trigger.
Creating the GitHub repository push trigger
  • Notice you aren’t able to configure the trigger parameters at the time of its creation, but it’s possible to do so after it’s created:
Editing the Cloud Build trigger
  • Configure the trigger as shown in the image below:
Cloud Build trigger configuration

Here, we specify:

  • The Name and Description of the trigger;
  • That the build should be triggered whenever changes are pushed to the master branch of the repository;
  • That the build configuration is provided by the cloudbuild.yaml file from our repository;
  • That the _SERVICE_NAME variable from our cloudbuild.yaml should be replaced with the value pipeline-demo. As described before, this variable is used for tagging our generated Docker image, as well as for setting the name of the deployed Cloud Run service.

Triggering builds

To test the configuration done so far, you have two options:

  1. Commit and push any changes to the master branch of your repository;
  2. Run the trigger manually by clicking the Run trigger button:
Option to run the Cloud Build trigger manually
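For the first option, even an empty commit is enough to fire the trigger:

```bash
# Push an empty commit to the watched branch to kick off a build
git commit --allow-empty -m "Trigger the Cloud Build pipeline"
git push origin master
```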

To see your build in action, select Dashboard in the left side menu:

Cloud Build dashboard

For each configured build, the Dashboard shows:

  • The date and time of the latest build;
  • The build duration;
  • A description of the trigger;
  • A link to the source repository;
  • The hash of the commit for which the build was triggered;
  • A small chart with the Success/Failure build history;
  • The average duration of the builds;
  • The percentage of success and failures.

To view details about a build, click the link shown under Latest Build. You should see something like this:

Cloud Build — Build details

Notice you are able to see the output for each of the build steps defined in our cloudbuild.yaml file.

Viewing Registered Containers

If you want to see and manage the containers generated by your builds, you can do so by accessing the Container Registry service from the top-left menu:

Container Registry — List of images

From this page, you can see all the registered containers, as well as delete older containers that aren’t in use anymore.

Accessing the deployed application

Now that the application is built and deployed, you should be able to access it through the endpoint generated by Cloud Run. To get its address:

  • In the GCP Console, select Cloud Run from the top-left menu;
  • Click on the name of the deployed service;
  • Copy the URL at the top of the page:
Cloud Run — Service details
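If you prefer the command line, a gcloud sketch along these lines returns the same URL (the region is an assumption; pipeline-demo is the service name set earlier):

```bash
# Print the URL of the deployed Cloud Run service
gcloud run services describe pipeline-demo \
  --platform managed \
  --region us-central1 \
  --format 'value(status.url)'
```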

Remember that, as described earlier, our application exposes two endpoints:

  • /health: a public endpoint to test if the application is alive;
  • /hello: a private endpoint protected with Basic Auth. All it does is return a simple JSON with “Hello, World”. The intent of this endpoint is to demonstrate how we can make use of environment variables on Cloud Run.

To test the /health endpoint, run the following curl command:
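A sketch of such a command, assuming SERVICE_URL holds the URL copied from the service details page (the hostname below is illustrative):

```bash
# Replace with the URL copied from the Cloud Run service details page
SERVICE_URL="https://pipeline-demo-xxxxxxxx-uc.a.run.app"

curl -i "${SERVICE_URL}/health"
```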

Testing the deployed application with curl

To test the /hello endpoint, run the following curl command:
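Along the same lines, calling the protected endpoint without a valid key should come back with an HTTP 401:

```bash
# No x-api-key header is sent, so the decorator rejects the request
curl -i "${SERVICE_URL}/hello"
```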

Testing the protected application endpoint with curl — HTTP 401

Now, if you remember, the /hello endpoint is protected with a Basic Auth mechanism, and the reason for this is to demonstrate how we can make use of environment variables on Cloud Run.

To fix this problem, we can make use of some scripts versioned in our repository. If you have gone through the steps to run and test the application locally, as described in the application README file, you’ll remember the scripts/hash_value.py script, which we can use to hash values to the SHA-512 form.

Earlier in this article, we talked about the require_api_key decorator, which is used to secure our /hello endpoint with Basic Auth. Remember that, under the hood, this decorator reads the HASHED_API_KEY environment variable, hashes the received x-api-key HTTP header value with SHA-512 and compares both values to decide whether or not to allow the request to proceed.

To set the HASHED_API_KEY environment variable, follow these steps:

  • Inside the scripts folder, run the following command (a sketch of this step appears after this list):
Generating a hashed value for securing the application
  • Back in the GCP Console, select Cloud Run from the top-left menu;
  • Click on the name of the deployed service, then click the EDIT & DEPLOY NEW REVISION button;
  • Under Advanced Settings, select the VARIABLES tab;
  • Enter HASHED_API_KEY for the Name and the hashed value generated above for the Value:
Setting environment variables manually
  • Click the DEPLOY button.
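As for the hashing step in the first item above, the exact CLI of scripts/hash_value.py may differ, but an equivalent SHA-512 value can be produced with a Python one-liner (“my-api-key” is a placeholder):

```bash
# Print the SHA-512 hash of the given API key
python -c "import hashlib, sys; print(hashlib.sha512(sys.argv[1].encode()).hexdigest())" "my-api-key"
```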

Wait for the application to be deployed and retest it with the following curl command:
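A sketch, now sending the unhashed key in the x-api-key header (the service hashes it and compares the result against HASHED_API_KEY):

```bash
curl -i -H "x-api-key: my-api-key" "${SERVICE_URL}/hello"
```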

Testing the protected application endpoint with curl — HTTP 200

Notice how we now get a 200 response back containing our “Hello, world” JSON.

Automating the environment variable configuration

Alternatively, if you have gcloud configured locally, you can use the scripts/set_env_vars.sh script to automate the environment variable configuration. To do so, run the following command from inside the scripts folder:

Automating the environment variable configuration

The command will prompt for the service name to be updated and for the unhashed value of the API Key. It will then update the Cloud Run service with the hashed value for the provided API Key:
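A sketch of what a script like set_env_vars.sh can do (the repository’s actual script may differ; the region and platform flags are assumptions):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Prompt for the service name and the unhashed API key
read -r -p "Service name: " SERVICE_NAME
read -r -s -p "API key (unhashed): " API_KEY && echo

# Hash the key with SHA-512, as expected by the require_api_key decorator
HASHED_API_KEY=$(python -c "import hashlib, sys; print(hashlib.sha512(sys.argv[1].encode()).hexdigest())" "$API_KEY")

# Update the Cloud Run service with the new environment variable
gcloud run services update "$SERVICE_NAME" \
  --platform managed \
  --region us-central1 \
  --update-env-vars "HASHED_API_KEY=${HASHED_API_KEY}"
```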

The script to update the service environment variables

Clean-up

To undo the changes done while following this tutorial, make sure to:

  • Delete the deployed Cloud Run service;
  • Delete the Container Registry saved images;
  • Delete the Cloud Build configured triggers;
  • Disconnect any connected repositories.

Final Thoughts

In this tutorial, we have gone through the process of setting up a deployment pipeline powered by GitHub, Cloud Build, Container Registry and Cloud Run.

The pipeline was configured to be triggered every time new code was pushed to the master branch of the connected repository. Once that happens, the pipeline:

  • Builds the Docker image;
  • Pushes the built Docker image into Container Registry;
  • Deploys the Docker image to Cloud Run.

We have also seen how to make use of environment variables on Cloud Run. Keep in mind the Basic Auth approach presented here was only for illustration purposes. For a more secure approach, take a look at the Secret Manager service, also provided by Google.

Even though a GitHub repository was used here, the process for configuring a Bitbucket repository is very similar. As of now, I don’t know whether GitLab support is on the Cloud Build roadmap.

As of the time of this writing, Google Artifact Registry is still in beta, but keep in mind it’s expected to replace Container Registry when it becomes generally available. Though I would expect Artifact Registry to be backwards compatible with Container Registry, it’s always good to double-check to avoid any unpleasant surprises in the future.

Finally, it’s important to highlight that, for illustration purposes, many manual steps were taken along the way. In a real scenario, it would definitely be worth automating as much of the process as possible by leveraging tools such as Terraform or Deployment Manager, gcloud, bash, etc. Additionally, it’s important to have automated tests properly implemented and running, to make sure your pipeline is reliable.

I hope you had a good time reading this article and learned some new stuff along the way.

Happy coding!
