Cloud Source Repositories, Cloud Build, and Terraform integration

In today's agile world, the term CI/CD has gained popularity. This blog is aimed at Google Cloud users; before moving on, I would like to clarify the key concepts and how they map to the cloud.

1. Cloud Source Repositories (CSR) stores your code, much like GitHub, Bitbucket, and other hosted Git services.

2. Cloud Build is quite similar to Jenkins. Both CSR and Cloud Build are hosted on Google Cloud Platform.

3. Terraform is an open-source infrastructure-as-code (IaC) tool, primarily used to provision and manage infrastructure in any cloud.

This blog explains how to use CSR and Cloud Build to provision infrastructure on the Google Cloud Platform (GCP). To offer a thorough overview, I have listed the key steps and included links to the relevant Google Cloud documentation.

Note: You can use any IDE, such as VS Code or IntelliJ, for the steps listed below.

Step 1: Create a Terraform script:

Terraform is responsible for managing the life cycle of all infrastructure components within GCP. A Terraform script for any resource typically has four main files:

a) main.tf: The starting point of any Terraform script; declares resources and calls modules and data sources.

b) versions.tf: Pins the versions of Terraform itself and of providers such as helm and google.

c) variables.tf: Declares the variables used within the main.tf file.

d) outputs.tf: Declares the values reported after the Terraform script has executed.

To execute a Terraform script, these commands are primarily used: init, refresh, plan, apply, and destroy. You can also create a variables file (such as input.tfvars) to store environment-specific values like region, zone, and machine type, which makes large infrastructure projects easier to manage. Note that managing the tfstate file is one of the critical steps: the state can be stored in various places, but storing it in Cloud Storage is recommended.
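As a minimal sketch of that file layout for a single Compute Engine VM (the resource, machine type, zone, and image values below are illustrative placeholders, not values from this post):

```hcl
# main.tf — entry point; declares the resources to provision
resource "google_compute_instance" "vm" {
  name         = "demo-vm"
  machine_type = var.machine_type
  zone         = var.zone

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }
  network_interface {
    network = "default"
  }
}

# versions.tf — pins Terraform and provider versions
terraform {
  required_version = ">= 1.3"
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 4.0"
    }
  }
  backend "gcs" {}   # settings supplied at init time via -backend-config
}

# variables.tf — inputs referenced from main.tf
variable "machine_type" { default = "e2-medium" }
variable "zone"         { default = "us-central1-a" }

# outputs.tf — values reported after apply
output "instance_name" {
  value = google_compute_instance.vm.name
}
```

In practice each block lives in its own file; they are shown together here only for brevity.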

Refer to the Terraform documentation for a deep dive into the Google provider; the Google Cloud documentation can guide you through the steps as well.

Step 2: Create the build configuration file (most critical):

The configuration file is a key element of Cloud Build and can be written in JSON or YAML. Below is a cloudbuild.yaml snippet for reference.

steps:
- id: installPackage
  name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: "bash"
  waitFor:
  - preRequisites   # id of an earlier step (not shown here) that must finish first
  args:
  - "-c"
  - |
    echo "-== APT-GET UPDATE ==-"
    apt-get -y update

    echo "-== ADD HASHICORP APT REPO AND INSTALL TERRAFORM ==-"
    apt-get -y install software-properties-common gnupg2
    curl https://apt.releases.hashicorp.com/gpg | gpg --dearmor > hashicorp.gpg
    install -o root -g root -m 644 hashicorp.gpg /etc/apt/trusted.gpg.d/
    apt-add-repository "deb [arch=$(dpkg --print-architecture)] https://apt.releases.hashicorp.com focal main"
    apt-get -y update
    apt-get -y install terraform
    terraform version

    echo "-== CHANGE THE WORKING DIRECTORY ==-"
    cd /workspace/terraform

    echo "-== INIT, PLAN AND APPLY ==-"
    terraform init -backend-config=backend.conf -reconfigure \
      && terraform plan -var-file=./input.tfvars -out=./tfplan -lock=true \
      && terraform apply -auto-approve
  env:
  - 'CLOUDSDK_CORE_PROJECT=<GCP Project Name>'
options:
  logging: CLOUD_LOGGING_ONLY

Important things to note from the above code snippet:

  1. name is a required field for every step; it specifies the builder image in which the step runs.
  2. entrypoint tells Cloud Build which program inside that image should execute the commands.
  3. waitFor adds a dependency on an earlier step (referenced by its id) that must complete before this one starts.
  4. The backend.conf file is used here to impersonate a service account and to point Terraform at the Cloud Storage bucket that stores the state.
  5. -lock=true ensures that no other script or process can change or corrupt the state while the plan runs.
  6. -auto-approve applies the changes without an interactive prompt. Note: always run terraform init and plan first to check which components will be created; run apply only once you are sure about the infrastructure changes.
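For reference, a backend.conf for Terraform's gcs backend might contain entries like these (the bucket and service-account names are hypothetical placeholders):

```
bucket                      = "my-tfstate-bucket"
prefix                      = "terraform/state"
impersonate_service_account = "terraform-deployer@my-project.iam.gserviceaccount.com"
```

Terraform picks these values up at init time via `terraform init -backend-config=backend.conf`, as shown in the snippet above.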

You can check the Google Cloud documentation for more details.

Step 3: Push code into CSR:

Create a repository and push the Terraform code along with the build configuration file.

a. Create a source repository from Cloud Shell:

gcloud source repos create <repository name>

b. Clone the repository to a local Git repository using:

gcloud source repos clone <repository-name>

c. Go to your code's working directory and run the following git commands:

git add .
git commit -m "<commit message>"
git push origin master

Once done, you can review the code in CSR. For more information, refer to the Google Cloud documentation.

Step 4: Setup Cloud Build and execute:

This is the last and most critical step to complete before running the deployment pipeline. Cloud Build can run via a gcloud command, by invoking the API (a request.json is mandatory), or by manually running a trigger from the Cloud Console. Follow the steps below to create a trigger in Cloud Build:

  1. Go to Cloud Build from the Cloud Console page and click the Create Trigger link.
  2. Provide the information on the Create trigger screen. You can choose whether the build runs automatically or manually; you can also trigger the build when a new message lands in a Pub/Sub topic (an event trigger).

3. Provide the repository path and the cloudbuild.yaml file location. Enable the approval check if you want the build to go through an approval process; anyone with the Cloud Build Approver role can then approve the build pipeline.
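If you prefer the command line over the console, an equivalent trigger can be sketched with gcloud (the repository name and branch pattern below are placeholders):

```
# Create a Cloud Build trigger that fires on pushes to master in a CSR repository
gcloud builds triggers create cloud-source-repositories \
  --repo=<repository-name> \
  --branch-pattern="^master$" \
  --build-config=cloudbuild.yaml
```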


This entire setup can be done in multiple ways. One way is to run the deployment pipeline in the same project where the infrastructure components are going to be created. Alternatively, you can create a separate project for the deployment pipeline and carry out all deployments for an organization within a restricted boundary (the preferred choice from a security standpoint). Refer to the Google Cloud documentation for additional details.

Thanks for reading this blog!

Last update: 07-13-2023 05:07 AM