Use GitLab CI/CD to deploy Terraform code for AWS resources II

Following the last article, we are able to use the GitLab runner to create AWS resources (VPC, subnets, route tables, NAT gateways, internet gateways, etc.). In this article I will show:

  • How to use an AWS role instead of AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID to get authorisation for AWS resource creation, listing, deletion, etc.
  • How to use an AWS S3 bucket for the gitlab-runner to sync the tfstate file

How to use an AWS role to get authorisation for AWS resource creation, listing and deletion

In the last article I had the following setup in .gitlab-ci.yml:

.base-terraform:
  image:
    name: "hashicorp/terraform:1.5.5"
    entrypoint:
      - '/usr/bin/env'
      - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
      - 'AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}'
      - 'AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}'
      - 'AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}'

AWS_SECRET_ACCESS_KEY is the secret attached to an AWS IAM user (identified by AWS_ACCESS_KEY_ID) in the AWS account. AWS_SECRET_ACCESS_KEY, AWS_ACCESS_KEY_ID and AWS_DEFAULT_REGION together decide the AWS account and region that this code will be applied to. This setup requires us to configure AWS_SECRET_ACCESS_KEY as a parameter that can be reached by anyone who can read the code.

Now I want to modify the way of authentication and authorisation by using a role instead. This requires that the gitlab-runner that is supposed to apply changes to the AWS account owns the correct role. So I modify the code as below:

In .gitlab-ci.yml, remove the following lines:

- 'AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}'
- 'AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}'
- 'AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}'

In provider.tf, add the following block:

provider "aws" {
  allowed_account_ids = ["awsaccountid"]
  region              = "region"

  assume_role {
    session_name = "terraform-aws-session"
    role_arn     = "arn:aws:iam::awsaccountid:role/assume-role"
  }

}

Then I need to define the "assume-role" in AWS. I grant this role the "AdministratorAccess" permission policy. "AdministratorAccess" means that any entity with this role can do anything, like an administrator, in this specific AWS account. This is a wide authorisation, and it should be narrowed down to the resources that you want your gitlab runner to be able to manage.
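If you prefer to manage this role with Terraform as well, a minimal sketch could look like the following (the resource names are my own assumptions; "awsaccountid" is the same placeholder used elsewhere in this article):

```hcl
# Sketch: the "assume-role" with AdministratorAccess attached.
# The trusted principal matches the trust policy shown later in this article.
resource "aws_iam_role" "assume_role" {
  name = "assume-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { AWS = "arn:aws:iam::awsaccountid:role/gitlabrunnerrole" }
      Action    = "sts:AssumeRole"
    }]
  })
}

# AWS managed policy granting full administrator access.
resource "aws_iam_role_policy_attachment" "assume_role_admin" {
  role       = aws_iam_role.assume_role.name
  policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"
}
```

To narrow the authorisation down, replace the managed AdministratorAccess policy with an inline policy that lists only the actions and resources your pipeline actually creates.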

I don't want to attach a role with such wide authorisation to the gitlab runner instance directly. Instead, another role can be created and attached to the gitlab-runner instance that was created in the last article. Let's name this role "gitlabrunnerrole"; the role is defined as below:

Trusted Entity:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "ec2.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
Permissions:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*"
        }
    ]
}
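The same role can be expressed in Terraform; below is a sketch based on the two policy documents above (the resource and policy names are assumptions):

```hcl
# Sketch: "gitlabrunnerrole", assumable by EC2, with broad S3 permissions.
resource "aws_iam_role" "gitlab_runner" {
  name = "gitlabrunnerrole"

  # Trust policy: let EC2 instances assume this role.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

# Inline permission policy: any S3 action, as described below.
resource "aws_iam_role_policy" "gitlab_runner_s3" {
  name = "gitlab-runner-s3" # assumed name
  role = aws_iam_role.gitlab_runner.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = "s3:*"
      Resource = "*"
    }]
  })
}
```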

I granted this role permission for any S3 actions; this is so that the gitlab-runner is able to read, modify, push or delete the tfstate file that I am going to save in an S3 bucket. I will get to that part later.

Now I have two roles created, "gitlabrunnerrole" and "assume-role". I will allow "gitlabrunnerrole" to assume "assume-role" by adding the following trusted entity into the role "assume-role":

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::awsaccountid:role/gitlabrunnerrole"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}

Now the last step is to attach "gitlabrunnerrole" to the EC2 instance "gitlabrunner".
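On EC2, a role is attached to an instance through an instance profile. A minimal Terraform sketch of this step (the profile name, AMI and instance type are placeholders/assumptions):

```hcl
# Sketch: wrap "gitlabrunnerrole" in an instance profile and attach it.
resource "aws_iam_instance_profile" "gitlab_runner" {
  name = "gitlabrunner-profile" # assumed name
  role = "gitlabrunnerrole"
}

resource "aws_instance" "gitlab_runner" {
  ami                  = "ami-xxxxxxxx" # placeholder AMI
  instance_type        = "t3.micro"     # assumed size
  iam_instance_profile = aws_iam_instance_profile.gitlab_runner.name
}
```

In the AWS management console the equivalent is selecting the instance and using Actions -> Security -> Modify IAM role.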

How to use AWS S3 bucket for gitlab-runner to sync tfstate file

Before going to the code we need to understand how Terraform saves its tfstate file. Here is what the Terraform documentation says about that:

A backend defines where Terraform stores its state data files.

Terraform uses persisted state data to keep track of the resources it manages. Most non-trivial Terraform configurations either integrate with Terraform Cloud or use a backend to store state remotely. This lets multiple people access the state data and work together on that collection of infrastructure resources.

To use an S3 bucket for saving the tfstate file, I have the following code:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.18.0"
    }
  }
  backend "s3" {
    bucket = "terraformtf"
    key    = "path/"
    region = "region"
  }
}

Before this I need an S3 bucket named "terraformtf", and I will grant the role "gitlabrunnerrole" the necessary actions on this bucket through a bucket policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::awsaccountid:role/gitlabrunnerrole"
            },
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::terraformtf/*",
                "arn:aws:s3:::terraformtf"
            ]
        }
    ]
}

It shows in the AWS management console as below:

With all the above changes I have removed AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID from my code, and I have a tfstate file saved in the S3 bucket.
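For completeness, the state bucket and its policy above could also be created with Terraform itself; a sketch under the same naming assumptions:

```hcl
resource "aws_s3_bucket" "tfstate" {
  bucket = "terraformtf"
}

# Bucket policy granting "gitlabrunnerrole" the actions needed for state sync.
resource "aws_s3_bucket_policy" "tfstate" {
  bucket = aws_s3_bucket.tfstate.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "Statement1"
      Effect    = "Allow"
      Principal = { AWS = "arn:aws:iam::awsaccountid:role/gitlabrunnerrole" }
      Action    = ["s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucket"]
      Resource  = [aws_s3_bucket.tfstate.arn, "${aws_s3_bucket.tfstate.arn}/*"]
    }]
  })
}
```

Note that the bucket must already exist before `terraform init` configures the S3 backend, so in practice it is usually created out-of-band or in a separate bootstrap configuration.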

Use GitLab CI/CD to deploy Terraform code for creating AWS resources

Following the last article we know how to use Terraform to deploy AWS resources. In this article I will introduce how to move the Terraform code into a GitLab repository and then use a GitLab runner to deploy the Terraform code to AWS.

Briefly below are the steps for this task:

1, Create project in gitlab platform.

2, Use git command to upload terraform code into the project repository

3, Create gitlab runner instance and register gitlab runner into the project CI/CD runner configuration.

4, Create .gitlab-ci.yml file and push the file into gitlab project repository.

5. gitlab runner will execute jobs defined in .gitlab-ci.yml accordingly.

I assume that steps 1 and 2 above have been completed, and that the readers have a basic understanding of GitLab. The link in step 2 above gives a basic introduction to git commands for GitLab usage.

The introduction below will start from step 3 through step 5:

3, Create gitlab runner instance and register gitlab runner into the project CI/CD runner configuration.

In my example I have created an AWS Linux instance; the following steps are used to deploy the gitlab runner in Docker:

1, Start an AWS Linux instance gitlab-runner and log in to the instance

2, Download gitlab-runner packages

3, Install and start docker:

sudo apt update
sudo apt install -y docker.io
Or
sudo yum install -y docker
then:
sudo systemctl enable --now docker
sudo usermod -aG docker gitlab-runner

4, start gitlab runner in docker

docker run -d --name gitlab-runner --restart always \
    -v /srv/gitlab-runner/config:/etc/gitlab-runner \
    -v /var/run/docker.sock:/var/run/docker.sock \
    gitlab/gitlab-runner:v15.8.2
Or
docker run -d --name gitlab-runner --restart always   -v /srv/gitlab-runner/config:/etc/gitlab-runner   -v /var/run/docker.sock:/var/run/docker.sock   gitlab/gitlab-runner:latest

5, Register gitlab-runner as CI/CD runner for the created gitlab project:

  • Go to gitlab project -> Settings -> CI/CD -> Runners, where you can find the project's dedicated runner token; then use the following command on the gitlab-runner instance to register the runner:
docker exec -it gitlab-runner gitlab-runner register --url "https://git.giblab.net/" \
     --registration-token "YybyCyJ9u5kz" \
     --docker-privileged \
     --executor docker \
     --description "Docker Runner" \
     --docker-image "docker:stable" \
     --docker-volumes /var/run/docker.sock:/var/run/docker.sock

6, Restart the container:

sudo docker restart gitlab-runner

7, Check the running containers; an example of the returned result would look like this:

[ec2-user@gitlabrunner ~]$ sudo docker ps
CONTAINER ID   IMAGE                         COMMAND                  CREATED      STATUS      PORTS     NAMES
b224d1a2b1dd   gitlab/gitlab-runner:latest   "/usr/bin/dumb-init …"   6 days ago   Up 6 days             gitlab-runner
  • Go back to the gitlab platform and check that the runner is correctly registered; an example would be as below:

At this stage we have completed the gitlab-runner creation and registration. In the next step I will create a project-dedicated ".gitlab-ci.yml" and explain the setup.

4, Create .gitlab-ci.yml file and push the file into gitlab project repository.

Below is an example of a .gitlab-ci.yml template. You will find multiple variants of .gitlab-ci.yml templates online and most of them are workable; there is no fundamental difference among those templates. Which template to choose is totally up to the use case and personal preference.

For the template below we first have to configure three parameters in the gitlab project: AWS_SECRET_ACCESS_KEY, AWS_ACCESS_KEY_ID, and AWS_DEFAULT_REGION.

Those parameters can be configured in the gitlab platform -> project UI -> Settings -> CI/CD -> Variables.

.gitlab-ci.yml:

stages:
  - scan:terraform
  - terraform:validate
  - terraform:plan
  - terraform:apply
  - terraform:destroy

.base-terraform:
  image:
    name: "hashicorp/terraform:1.5.5" 
    entrypoint:
      - '/usr/bin/env'
      - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
      - 'AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}' 
      - 'AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}' 
      - 'AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}'
  before_script: 
    - terraform version
  variables:
    GIT_SUBMODULE_STRATEGY: recursive
  tags:
    - pcni

tf-fmt:
  stage: scan:terraform
  extends: .base-terraform
  script:
    - terraform fmt -check -recursive ./
  only:
    refs:
      - merge_requests
      - master

tf-validate:
  stage: terraform:validate
  extends: .base-terraform
  script:
    - terraform init
    - terraform validate
  only:
    refs:
      - merge_requests
      - master

tf-plan:
  stage: terraform:plan
  extends: .base-terraform
  when: manual
  needs:
    - tf-validate
  script:
    - terraform init
    - terraform plan -lock=false -out=pcni-lab.tfplan
  artifacts:
    paths:
      - pcni-lab.tfplan
    expire_in: 1 day
  only:
    refs:
      - master

tf-apply:
  stage: terraform:apply
  extends: .base-terraform
  when: manual
  needs:
    - tf-plan
  allow_failure: false
  script:
    - terraform init
    - terraform apply -auto-approve pcni-lab.tfplan
  only:
    refs:
      - master

In particular, "tf-fmt" is just for Terraform code format checking; a "tf-fmt" failure in most cases does not mean the code will fail to apply. However, it is a good habit to run "terraform fmt" in the local git repository before pushing the branch to the GitLab server.

The code below indicates when the pipeline will run the job:

  only:
    refs:
      - merge_requests
      - master

The above code means that this job will run in a pipeline when a merge request is created OR when a change has been pushed to the master branch.

  only:
    refs:
      - master

The above code means that this job will run in a pipeline only when the change has been merged into the master branch.

  needs:
    - tf-plan

The above code means that the job "tf-apply" relies on a successful result from the previous job "tf-plan"; the output of "tf-plan" has been saved as an artifact with the file name "pcni-lab.tfplan".

  script:
    - terraform init
    - terraform plan -lock=false -out=pcni-lab.tfplan
  artifacts:
    paths:
      - pcni-lab.tfplan
    expire_in: 1 day
  tags:
    - pcni

The usage of tags is so that a gitlab runner can pick up this job. When we register a gitlab runner in the project, we can set up tags for the runner. In other words, a gitlab runner will only pick up jobs which have the same tag as configured during runner registration.

5. gitlab runner will execute jobs defined in .gitlab-ci.yml accordingly.

After you git push the above .gitlab-ci.yml file into the gitlab repository, you may create a merge request accordingly. The merge request will trigger the following pipeline jobs:

After the merge request is successful and the change has been merged into the "master" branch, the following pipeline will be triggered: