Analysis of AWS Network Firewall and its deployment models

For old-school network engineers who are familiar with on-prem data center structures and firewall solutions, AWS Network Firewall is quite different. Firewall vendors such as Cisco, Palo Alto or Fortigate provide ample functions in their firewall products; the functions are not limited to traffic filtering, inspection and protection, but also include dynamic routing, policy-based source routing, NAT translation, IPsec tunnel termination and so on. The latter group of functions is not available in AWS Network Firewall, as illustrated in the table below:

Firewall function                        AWS Network Firewall    Traditional (Palo Alto, Cisco, Fortigate)
Traffic filtering                        yes                     yes
Traffic inspection and protection        yes                     yes
Zone-based policy                        no                      yes
NAT translation                          no                      yes
Dynamic routing                          no                      yes
IPsec termination                        no                      yes

Functions available in each firewall product

The comparison above is not meant to show that AWS Network Firewall is a limited product; AWS uses other components to realise the functions missing from Network Firewall. IPsec is configured using a customer gateway together with a virtual private gateway or transit gateway (TGW), both of which partially support BGP dynamic routing, and NAT translation is provided via the AWS IGW and NGW. The point of the comparison is to show the differences that we have to take into consideration when making architecture designs for a cloud network, and those differences are very much shared by other public cloud networking solutions besides AWS.
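As a hedged illustration of this division of labour, the sketch below shows an IPsec tunnel terminated on a transit gateway with BGP enabled, instead of on the firewall itself; the ASN, peer IP and the TGW reference are placeholders/assumptions, not values from this environment:

# Minimal sketch, assuming an existing transit gateway "aws_ec2_transit_gateway.main"
resource "aws_customer_gateway" "onprem" {
  bgp_asn    = 65010             # on-prem BGP ASN (placeholder)
  ip_address = "198.51.100.10"   # on-prem VPN peer IP (placeholder)
  type       = "ipsec.1"
}

resource "aws_vpn_connection" "onprem_to_tgw" {
  customer_gateway_id = aws_customer_gateway.onprem.id
  transit_gateway_id  = aws_ec2_transit_gateway.main.id   # assumption: TGW defined elsewhere
  type                = "ipsec.1"
  static_routes_only  = false                             # false = exchange routes over BGP
}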

With these differences in mind, the deployment models AWS recommends for the Network Firewall solution are mostly one-arm designs, that is, all ingress and egress traffic shares the same firewall interface. A two-arm design, where traffic that comes in on one interface always goes out via another interface of the firewall, is supported as well, but AWS suggests consulting them before you choose that design. We have deployed the two-arm model in our environment and verified that it works, but it requires some careful consideration of the TGW and VPN route tables.
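To illustrate what the one-arm design means in practice, traffic is steered through the firewall endpoint purely by route-table entries; a minimal sketch, with the endpoint ID, subnet CIDR and route-table references as assumptions:

# Protected-subnet route table: send outbound traffic to the firewall endpoint
resource "aws_route" "protected_to_firewall" {
  route_table_id         = aws_route_table.protected.id     # assumption
  destination_cidr_block = "0.0.0.0/0"
  vpc_endpoint_id        = "vpce-0123456789abcdef0"         # firewall endpoint in this AZ (placeholder)
}

# Ingress route table associated with the IGW: send return traffic back through the same endpoint
resource "aws_route" "igw_to_firewall" {
  route_table_id         = aws_route_table.igw_ingress.id   # assumption
  destination_cidr_block = "10.0.1.0/24"                    # protected subnet CIDR (placeholder)
  vpc_endpoint_id        = "vpce-0123456789abcdef0"
}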

Now let's have a look at the Network Firewall deployment models recommended by AWS. AWS recommends three deployment models: the distributed AWS Network Firewall deployment model, the centralized deployment model, and the combined AWS deployment model. The blog post can be found here, and it gives a detailed description of the traffic flow under each model. I will not repeat what is described in AWS's recommendation, but give my analysis of the pros and cons of each suggested deployment model.

1, Distributed AWS Network Firewall deployment model: AWS Network Firewall is deployed into each individual VPC

In this model each VPC is regarded as an independent data center that owns its own firewall resources and inspects the ingress/egress traffic of the VPC itself. The good point is that it keeps the design simple and clear to understand if your network has only a very limited number of VPCs that need more advanced traffic inspection beyond the security groups, network ACLs and flow logs AWS already provides. Once the network involves multiple VPCs in multiple regions, the cost increases proportionally. According to AWS pricing, a single Network Firewall endpoint is priced at $0.395/hr, and each VPC availability zone needs one firewall endpoint attached. One year of Network Firewall cost for a single VPC with two availability zones is therefore 0.395 x 24 x 365 x 2 = 6,920.40 USD. This excludes the traffic-processing cost, which is $0.065/GB in the Ireland region. For the advanced inspection endpoint, the one-year cost for the same VPC, again excluding the traffic-processing fee, reaches 8,567 USD.
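For reference, a minimal Terraform sketch of the per-VPC deployment that drives this cost; the policy, VPC and subnet references are assumptions, and the two subnet_mapping blocks are what create the two billed endpoints:

resource "aws_networkfirewall_firewall" "spoke_a" {
  name                = "spoke-a-firewall"
  vpc_id              = aws_vpc.spoke_a.id                             # assumption
  firewall_policy_arn = aws_networkfirewall_firewall_policy.base.arn   # assumption

  # one endpoint per availability zone -> billed per endpoint-hour
  subnet_mapping {
    subnet_id = aws_subnet.firewall_az1.id
  }
  subnet_mapping {
    subnet_id = aws_subnet.firewall_az2.id
  }
}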

In summary, for a complicated cloud network with multiple VPCs in multiple regions, the distributed AWS Network Firewall deployment model is a very high-cost solution.

2, Centralized AWS Network Firewall deployment model: AWS Network Firewall is deployed into centralized VPC for East-West (VPC-to-VPC) and/or North-South (internet egress and ingress, on-premises) traffic

The centralized model allows the firewall to be deployed in an inspection VPC. This greatly reduces the cost compared with the distributed model above, and it works perfectly for East-West (VPC-to-VPC) traffic. However, when it comes to North-South traffic I would say it is somewhat problematic, as I explain below. (AWS categorises AWS Direct Connect and Site-to-Site VPN as part of North-South traffic as well, but in my case I focus only on North-South traffic to and from the internet.)
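To make the centralized insertion concrete, here is a hedged sketch of the TGW route that hairpins spoke traffic through the inspection VPC attachment; the route-table and attachment references are assumptions:

# Route table associated with the spoke attachments: send everything to the inspection VPC
resource "aws_ec2_transit_gateway_route" "spokes_to_inspection" {
  destination_cidr_block         = "0.0.0.0/0"
  transit_gateway_route_table_id = aws_ec2_transit_gateway_route_table.spokes.id          # assumption
  transit_gateway_attachment_id  = aws_ec2_transit_gateway_vpc_attachment.inspection.id   # assumption
}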

Again, the detailed diagram can be found in the linked chapter: 4) North-South: Centralized Internet Ingress via Transit Gateway and NLB/ALB or reverse proxy

I made a simplified diagram to highlight the part I am about to explain:

According to the diagram, internet traffic comes in via the IGW of the centralized VPC and is then rerouted to the inspection VPC via the AWS Transit Gateway. The traffic flow seems to work, but where does the NAT translation happen? Everyone familiar with AWS knows that NAT translation happens either in the IGW or the NGW. In this model it surely happens in the IGW of the centralized VPC, but does this IGW know all the EIPs attached to every resource in the different spoke VPCs? So far I have not found any information indicating that the EIP mapping of one VPC can be shared with a completely different VPC. If this cannot be realised, the above solution for centralized internet ingress traffic is of little use.

3, Combined AWS Network Firewall deployment model: In this model internet ingress traffic is excluded, or is processed locally in each VPC, while the other traffic is processed centrally as shown in the second deployment model above.

So far this seems the most balanced solution when cost and the required functions are both taken into consideration. However, the model is still not a perfect one when it comes to internet egress traffic. The concern is explained below:

I drew the diagram to explain the scenario: three different spoke VPCs carry different service traffic that needs to reach the internet, and we want all that egress traffic to be inspected before it goes out. We use a centralized inspection VPC for that traffic, and after the traffic is inspected it is rerouted to a centralized egress VPC and leaves via the NGW there.

As we know, the NAT translation is done in the NGW, so all traffic, no matter whether it comes from spoke A, B or C, will be source-NATted to the same public IP of that NGW. Is there any way to give spoke A source NAT IP A, spoke B source NAT IP B and spoke C source NAT IP C? If an NGW is deployed in each specific spoke VPC this is not a problem, but with a centralized egress VPC it becomes one.
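As a hedged sketch of the per-spoke alternative mentioned above, each spoke VPC keeps its own NAT gateway and therefore its own source EIP; the resource references are assumptions:

resource "aws_eip" "spoke_a_nat" {
  domain = "vpc"   # on AWS provider v4 and earlier use: vpc = true
}

resource "aws_nat_gateway" "spoke_a" {
  allocation_id = aws_eip.spoke_a_nat.id
  subnet_id     = aws_subnet.spoke_a_public.id   # public subnet in spoke A (assumption)

  tags = {
    Name = "spoke-a-ngw"   # repeat per spoke to get a distinct source IP per spoke
  }
}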

Centralised solution for internet egress traffic inspections

So far I have analysed the three network firewall deployment models recommended by AWS. I hope this is helpful when there is a need to design a network security solution for public cloud networking, or for hybrid networking that includes both on-prem and cloud sites.

Use gitlab CI/CD to deploy terraform code for AWS resources II

Following the last article, in which we used the gitlab runner to create AWS resources (VPCs, subnets, route tables, NGWs, IGWs, etc.), in this article I will show:

  • How to use an AWS role instead of AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID to get authorisation for AWS resource creation, listing, deletion, etc.
  • How to use an AWS S3 bucket for the gitlab-runner to sync the tfstate file
How to use AWS role to get authorisation for AWS resource creation, listing and deletion.

In the last article I had the following setup in .gitlab-ci.yml:

.base-terraform:
  image:
    name: "hashicorp/terraform:1.5.5"
    entrypoint:
      - '/usr/bin/env'
      - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
      - 'AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}'
      - 'AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}'
      - 'AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}'

AWS_SECRET_ACCESS_KEY is the secret attached to the AWS IAM user (identified by AWS_ACCESS_KEY_ID) in the AWS account. AWS_SECRET_ACCESS_KEY, AWS_ACCESS_KEY_ID and AWS_DEFAULT_REGION together decide the AWS account and region that this code will be applied to. This setup requires us to configure AWS_SECRET_ACCESS_KEY as a parameter, which can be reached by anyone able to read the code.

Now I want to modify the way of authentication and authorisation by using a role instead. This requires that the gitlab-runner which is supposed to apply changes to the AWS account owns the correct role, so I modify the code as below:

in .gitlab-ci.yml, remove the following lines:

      - 'AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}'
      - 'AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}'
      - 'AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}'

In provider.tf, add the following script:

provider "aws" {
  allowed_account_ids = ["awsaccountid"]
  region              = "region"

  assume_role {
    session_name = "terraform-aws-session"
    role_arn     = "arn:aws:iam::awsaccountid:role/assume-role"
  }

}

Then I need to define the "assume-role" in AWS. I grant this role the "AdministratorAccess" permission policy, which means that any entity with this role can do anything an administrator can do in this specific AWS account. This is a very wide authorisation, and it should be narrowed down to the resources that you actually want your gitlab runner to be able to manage.
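If the role is managed with Terraform rather than through the console, the attachment looks roughly like this; the role resource name is an assumption, and in production the managed AdministratorAccess policy should be replaced with a narrower one:

resource "aws_iam_role_policy_attachment" "assume_role_admin" {
  role       = aws_iam_role.assume_role.name                   # the "assume-role" described above (assumption)
  policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"   # wide authorisation; narrow this down
}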

I don't want to attach this role, with such wide authorisation, to the gitlab runner instance directly. Instead, another role can be created and attached to the gitlab-runner instance that was created in the last article. Let's name this role "gitlabrunnerrole"; the role is defined as below:

Trusted Entity:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "ec2.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
Permissions:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:bucket",
                "s3:bucket/*"
            ],
            "Resource": "*"
        }
    ]
}

I granted this role all S3 actions, so that the gitlab-runner is able to read, modify, push and delete the tfstate file that I am going to store in an S3 bucket. I will get to that part later.

Now I have two roles created, "gitlabrunnerrole" and "assume-role". I will allow "gitlabrunnerrole" to assume "assume-role" by adding the following trusted entity to the role "assume-role":

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::awsaccountid:role/gitlabrunnerrole"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}

The last step is to attach "gitlabrunnerrole" to the EC2 instance "gitlabrunner".
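On EC2 the role is attached through an instance profile; a minimal hedged sketch of that attachment in Terraform (resource names, AMI and instance type are placeholders, and the profile can equally be attached to the existing instance from the console):

resource "aws_iam_instance_profile" "gitlabrunner" {
  name = "gitlabrunner-profile"
  role = aws_iam_role.gitlabrunnerrole.name   # the role defined above (assumption)
}

resource "aws_instance" "gitlabrunner" {
  ami                  = "ami-xxxxxxxxxxxx"   # placeholder
  instance_type        = "t3.small"           # placeholder
  iam_instance_profile = aws_iam_instance_profile.gitlabrunner.name
}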

How to use AWS S3 bucket for gitlab-runner to sync tfstate file

Before going to the code, we need to understand how Terraform saves its tfstate file; here is the article about that.

A backend defines where Terraform stores its state data files.

Terraform uses persisted state data to keep track of the resources it manages. Most non-trivial Terraform configurations either integrate with Terraform Cloud or use a backend to store state remotely. This lets multiple people access the state data and work together on that collection of infrastructure resources.

To use an S3 bucket for saving the tfstate file, I have the following code:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.18.0"
    }
  }
  backend "s3" {
    bucket = "terraformtf"
    key    = "path/"
    region = "region"
  }
}

Before this I need an S3 bucket named "terraformtf", and I will grant the role "gitlabrunnerrole" the necessary actions on this bucket:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::awsaccountid:role/gitlabrunnerrole"
            },
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::pcnitf/*",
                "arn:aws:s3:::pcnitf"
            ]
        }
    ]
}

It shows in aws management console as below:

With all the changes above I have removed AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID from my code, and the tfstate file is saved in the S3 bucket.

Use gitlab CI/CD to deploy terraform code for creating AWS resource

Following the last article, in which we learned how to use Terraform to deploy AWS resources, in this article I will introduce how to move the Terraform code into a GitLab repository and then use a gitlab runner to deploy the Terraform code towards AWS.

Briefly below are the steps for this task:

1, Create project in gitlab platform.

2, Use git command to upload terraform code into the project repository

3, Create gitlab runner instance and register gitlab runner into the project CI/CD runner configuration.

4, Create .gitlab-ci.yml file and push the file into gitlab project repository.

5. gitlab runner will execute jobs defined in .gitlab-ci.yml accordingly.

I assume that steps 1 and 2 above have been completed and that readers have a basic understanding of gitlab. The link in step 2 above gives an introduction to the basic git commands for gitlab usage.

The introduction below starts from step 3 and goes through step 5:

3, Create gitlab runner instance and register gitlab runner into the project CI/CD runner configuration.

In my example I have created an AWS Linux instance; the following steps are used to deploy the gitlab runner in Docker:

1, start an AWS Linux instance named gitlab-runner and log in to the instance

2, Download gitlab-runner packages

3, Install docker:

sudo apt update
sudo apt install -y docker.io
Or
sudo yum install -y docker
then:
sudo usermod -aG docker gitlab-runner

4, start gitlab runner in docker

docker run -d --name gitlab-runner --restart always \
    -v /srv/gitlab-runner/config:/etc/gitlab-runner \
    -v /var/run/docker.sock:/var/run/docker.sock \
    gitlab/gitlab-runner:v15.8.2
Or
docker run -d --name gitlab-runner --restart always   -v /srv/gitlab-runner/config:/etc/gitlab-runner   -v /var/run/docker.sock:/var/run/docker.sock   gitlab/gitlab-runner:latest

5, Register gitlab-runner as CI/CD runner for the created gitlab project:

  • Go to the gitlab project -> Settings -> CI/CD -> Runners, where you can find the project-dedicated runner token, then use the following command on the gitlab-runner instance to register the runner:
docker exec -it gitlab-runner gitlab-runner register --url "https://git.giblab.net/" \
     --registration-token "YybyCyJ9u5kz" \
     --docker-privileged \
     --executor docker \
     --description "Docker Runner" \
     --docker-image "docker:stable" \
     --docker-volumes /var/run/docker.sock:/var/run/docker.sock

6, Restart the docker again:

sudo docker restart gitlab-runner

7, check the running container; an example of the returned result looks like this:

[ec2-user@gitlabrunner ~]$ sudo docker ps
CONTAINER ID   IMAGE                         COMMAND                  CREATED      STATUS      PORTS     NAMES
b224d1a2b1dd   gitlab/gitlab-runner:latest   "/usr/bin/dumb-init …"   6 days ago   Up 6 days             gitlab-runner
  • Go back to the gitlab platform and check that the runner is correctly registered; an example is shown below:

At this stage we have completed the gitlab-runner creation and registration. Next I will create a project-dedicated ".gitlab-ci.yml" and explain the setup.

4, Create .gitlab-ci.yml file and push the file into gitlab project repository.

Below is an example of a .gitlab-ci.yml template. You will find multiple variants of .gitlab-ci.yml templates online and most of them work; there is no fundamental difference between them, and which template to choose is entirely up to the use case and personal preference.

In the template below we first have to configure three variables in the gitlab project: AWS_SECRET_ACCESS_KEY, AWS_ACCESS_KEY_ID and AWS_DEFAULT_REGION.

Those variables can be configured in the gitlab platform under project UI -> Settings -> CI/CD -> Variables.

.gitlab-ci.yml:

stages:
  - scan:terraform
  - terraform:validate
  - terraform:plan
  - terraform:apply
  - terraform:destroy

.base-terraform:
  image:
    name: "hashicorp/terraform:1.5.5" 
    entrypoint:
      - '/usr/bin/env'
      - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
      - 'AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}' 
      - 'AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}' 
      - 'AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}'
  before_script: 
    - terraform version
  variables:
    GIT_SUBMODULE_STRATEGY: recursive
  tags:
    - pcni

tf-fmt:
  stage: scan:terraform
  extends: .base-terraform
  script:
    - terraform fmt -check -recursive ./
  only:
    refs:
      - merge_requests
      - master

tf-validate:
  stage: terraform:validate
  extends: .base-terraform
  script:
    - terraform init
    - terraform validate
  only:
    refs:
      - merge_requests
      - master

tf-plan:
  stage: terraform:plan
  extends: .base-terraform
  when: manual
  needs:
    - tf-validate
  script:
    - terraform init
    - terraform plan -lock=false -out=pcni-lab.tfplan
  artifacts:
    paths:
      - pcni-lab.tfplan
    expire_in: 1 day
  only:
    refs:
      - master

tf-apply:
  stage: terraform:apply
  extends: .base-terraform
  when: manual
  needs:
    - tf-plan
  allow_failure: false
  script:
    - terraform init
    - terraform apply -auto-approve pcni-lab.tfplan
  only:
    refs:
      - master

Note that "tf-fmt" is only for Terraform code format checking; a "tf-fmt" failure in most cases does not mean the code will fail to apply. However, it is a good habit to run "terraform fmt" in the local git repository before pushing the branch to the gitlab server.

The code below indicates when the pipeline will run a job:

  only:
    refs:
      - merge_requests
      - master

The code above means that this job will run when a merge request is created or updated, and also when changes have been merged into the master branch.

  only:
    refs:
      - master

The code above means that this job will only run in the pipeline once the change has been merged into the master branch.

  needs:
    - tf-plan

The code above means that the job "tf-apply" relies on a successful run of the previous job "tf-plan"; the result of "tf-plan" is saved as an artifact with the file name "pcni-lab.tfplan".

  script:
    - terraform init
    - terraform plan -lock=false -out=pcni-lab.tfplan
  artifacts:
    paths:
      - pcni-lab.tfplan
    expire_in: 1 day
  tags:
    - pcni

The tag is used so that the gitlab runner can pick up this job. When we register the gitlab runner for the project we can set a tag on the runner; in other words, the runner will only pick up jobs that carry the same tag as configured during runner registration.

5. gitlab runner will execute jobs defined in .gitlab-ci.yml accordingly.

After pushing the .gitlab-ci.yml file above to the gitlab repository, you can create a merge request accordingly. The merge request will trigger the following pipeline jobs:

After the merge request succeeds and the change has been merged into the "master" branch, the following pipeline will be triggered:

Cisco Smart license – How to activate DNA licenses in Cisco C8000 platform

If you want to deploy the Catalyst 8000V software in autonomous mode with the BYOL license model from AWS, below are helpful tips to activate licenses on the instance:

1, make sure you have the correct licenses ready in the Cisco Smart portal; you need a network license and a DNA add-on license at the same time.

For example, a device that requires 250M or lower bandwidth needs the following licenses:

network advantage 250M

DNA advantage 250M

A license with bandwidth 250M or lower is not an enforced license, which means you do not need to authorise the license before use. (Advantage licenses enable IPsec and all crypto functions, which you need if IPsec encryption is to be enabled on the new devices.)

For a device with 500M or higher bandwidth, you need the following licenses:

network advantage 500M

DNA advantage 500M

HSECK9 license

The HSECK9 license grants you higher encryption bandwidth; one common understanding is that the encryption bandwidth limit without the HSECK9 license is 80 Mbps in one direction, and 160 Mbps for both directions combined.

2, reload the instance with the correct license level after adding the following commands:

license boot level <Network license level> [addon <DNA license level>]

wr mem

reload

3, set the correct bandwidth level by using:

platform hardware throughput level MB 250

4, configure DNS lookup and domain name:

ip name-server <DNS server IP>

ip domain name <Domain name>

ip domain lookup source-interface <IF name>

ip http client source-interface <IF name>

5, Configure correct call home and transport method:

#Global setting
license smart transport smart
license smart url default

6, Register the device in the smart portal with:

# license smart trust idtoken <token> <local | all> [force]

Note that you can get this token by creating it in the Cisco Smart portal for this device.

7, After installing the token, once the device successfully reaches the smart portal you can check the license status by using:

# show license all
# show license status

8, For devices that need 500M bandwidth or more, you have to enable the HSECK9 license before configuring the platform throughput; the following command adds the HSECK9 license:

cEdge# license smart authorization request add hseck9 local
cEdge# show logging | include SMART
*Aug 18 21:11:41.553: %SMART_LIC-6-AUTHORIZATION_INSTALL_SUCCESS: A new licensing authorization code was successfully installed on PID:C1111-8PWE,SN:FGL2149XXXX
*Aug 18 21:11:41.641: %SMART_LIC-6-EXPORT_CONTROLLED: Usage of export controlled features is allowed for feature hseck9

9, Use the following commands to troubleshoot smart license registration problems:

show license eventlog
show license tech support | begin License Usag
show license summary
show license status
show license all
show call-home profile all
debug license smart

10, check the software mode using the following commands (the C8000V has 2 modes, autonomous mode and controller-managed mode):

show platform software device-mode
show version | include mode

11, troubleshooting smart portal connection problems; a good article can be found here.

Devices failing to communicate with the smart portal are most likely affected by one of the following reasons:

The device cannot correctly resolve the hostname tools.cisco.com; test with telnet tools.cisco.com 443 to make sure the connection is OK.

The Public Key Infrastructure (PKI) key generated during the Cisco device registration needs to be saved if it is not automatically saved after registration. If the device fails to save the PKI key, a syslog is generated stating to save the configuration via "copy running-config startup-config" or "write memory". If the PKI key of the Cisco device is not properly saved, the license state can be lost on failovers or reloads.

How to Manually Import a Certificate as a TrustPoint:
Note, the certificate needs to be in BASE64 format to be copied and pasted onto the device as a TrustPoint.

The example below uses "LicRoot" as the TrustPoint name; however, this name can be changed as desired.

Device#conf t 
Device(config)#crypto pki trustpoint LicRoot 
Device(ca-trustpoint)#enrollment terminal 
Device(ca-trustpoint)#revocation-check none 
Device(ca-trustpoint)#exit 
Device(config)#crypto pki authenticate LicRoot 
Enter the base 64 encoded CA certificate. 
End with a blank line or the word "quit" on a line by itself 
-----BEGIN CERTIFICATE----- 
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
-----END CERTIFICATE----- 
Certificate has the following attributes: 
     Fingerprint MD5: XXXXXXXX
   Fingerprint SHA1: XXXXXXX
% Do you accept this certificate? [yes/no]: yes 
Trustpoint CA certificate accepted. 
% Certificate successfully imported

12, how to trigger report sending from device to CSSM:

Once the trust code is installed successfully, the product instance (PI) can report usage to CSSM directly. These conditions result in license reporting:

  • A successful Trust Code installation
  • On every default Reporting Interval
  • On-device Reload/Boot-up
  • A switchover
  • A stack member addition or removal
  • Manual trigger of license sync

License reporting to CSSM can be triggered with this CLI command:

Switch#license smart sync all

Smart licensing Overview

Overview of Smart licensing Policy and how to set it

Devices running IOS-XE 17.3.2 and later (17.4.1 and later for some devices) support Smart Licensing Using Policy. Licenses under Smart Licensing Using Policy are classified by enforcement type, of which there are three:

  • Unenforced or Not Enforced
    • Does not require authorization or registration before use
    • All licenses available on Catalyst access / core / aggregation switches are of this type
  • Enforced
    • Requires authorization before use
    • Authorization code needs to be installed on the target device
    • An example of this type of license is the Media Redundancy Protocol (MRP) client license available on Cisco Industrial Ethernet switches.
  • Export-Controlled
    • Exports are restricted by U.S. trade control laws and these licenses require authorization before use
    • Authorization code needs to be installed on the target device
    • An example of this type of license is a fast encryption (HSECK9) license that can be used on a particular Cisco router.

License type

In addition to the enforcement type described above, licenses are also classified by license type, of which there are two:

  • Perpetual
    • No expiration date
    • C9000 series Network Essentials and Network Advantage are of this type
  • Subscription
    • Has an expiration date
    • C9000 series DNA Essentials and DNA Advantage are of this type

Terraform beginner -use terraform to create an instance in AWS

Following the last article, if we have installed the AWS CLI and Terraform and have bound an AWS account using "aws configure", we are ready to create our first instance in AWS.

It is better to have an SSH key pair at hand before creating a new instance. You may create an instance without a key pair if the instance has default SSH password login enabled and you know the default password; unfortunately the one I create is a Cisco CSR instance, for which I failed to find an SSH login password. If you do not have an SSH key pair yet, create one first (a Terraform sketch of this is shown below) and then refer to the key pair you created. In my case I already have a key pair created in AWS, so I just refer to the name of that key pair when creating the instance.
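If you prefer to create the key pair with Terraform as well, a minimal hedged sketch; the key name and the public-key path are placeholders:

resource "aws_key_pair" "lab" {
  key_name   = "keypairname"               # the name referenced later via key_name (placeholder)
  public_key = file("~/.ssh/id_rsa.pub")   # local public key to upload (assumption)
}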

Below is the main.tf script that I used:

provider "aws" {
   region = "us-east-1"
 }
 #Create a SG group first:
 resource "aws_security_group" "admin_SG” {
   name          = "admin_SG"
   vpc_id        = "vpc-xxxxx".  #refer to VPC id that was created already
   ingress {
     description = "Allow ICMP"
     from_port   = "-1"
     to_port     = "-1"
     protocol    = "icmp"
     cidr_blocks = ["0.0.0.0/0"]
   }
   ingress {
     description = "test ssh access"
     from_port   = 22
     to_port     = 22
     protocol    = "tcp"
     cidr_blocks = ["x.x.x.x/32"]
   }
   egress {
     from_port   = 0
     to_port     = 0
     protocol    = "-1"
     cidr_blocks = ["0.0.0.0/0"]
   }
   tags = {
     Name = "admin_SG”
   }
 
#create instance using previously created SG group and and keypair, the primary subnet is created already so we just refer to that subnet id:
 resource "aws_instance" “instance_test” {
#ami can be found by subscription to the image that you want to use in AWS marketplace,be aware that even the same image has different AMI number in different AWS region
   ami                        = "ami-06fb3765405ee3735"  
   instance_type              = "c5n.large"
   private_ip                 = "x.x.x.x"
   subnet_id                  = "subnet-08xxxxxdwerxs"
   get_password_data          = false
#false is the default value; set this to true to retrieve the password when creating a Windows instance, otherwise leave it false
   vpc_security_group_ids     = [aws_security_group.admin_SG.id] #id of Security Group
   key_name                   = "keypairname"
   tags = {
     Name = "us1grit-vpn-az1"
   }
 }

#This primary interface is created automatically when the new instance is created. The configuration here was added manually after running "terraform import" (for example: terraform import aws_network_interface.test-eth0 <eni-id>), which imports the interface information into "terraform.tfstate". I add the configuration here in order to keep all config in the Terraform script and to be able to change some of its values, for example the parameter "source_dest_check". terraform import is important for those who are not creating everything from Terraform, but rather start using Terraform on an already existing AWS cloud.

 resource "aws_network_interface" "test-eth0" {
   subnet_id         = "subnet-08bxxxxxxxxf" #point to subnet that the interface should be located in, the subnet needs to be created first 
   private_ips       = ["x.x.x.x"]           #manually configure IP address or leave it and allow AWS assign ip from the specified subnet
   security_groups   = [aws_security_group.admin_SG.id] 
   source_dest_check = "false" 
#The default value is "true", meaning traffic not destined to this interface is dropped instead of forwarded. We set it to "false" because the instance we create will work as a router

   attachment {
     instance     = aws_instance.instance_test.id
     device_index = 0
   }
   tags = {
     name = "test-eth0"
   } 
 }
#If instance has multiple interface, we may create new interface and associate the interface to the newly created instance
 resource "aws_network_interface" "test-eth1" {
   subnet_id         = "subnet-02cxxxxxxxxxxd"
   private_ips       = ["x.x.x.x"]
   security_groups   = [aws_security_group.admin_SG.id]
   source_dest_check = "false"
   attachment {
     instance     = aws_instance.instance_test.id
     device_index = 1
   }
   tags = {
     name = "test-eth1"
   }
 }

#Create another interface and associate it into the newly created instance again
 resource "aws_network_interface" "test-eth2" {
   subnet_id         = "subnet-01dxxxxxxxc"
   private_ips       = ["x.x.x.x"]
   security_groups   = [aws_security_group.admin_SG.id]
   source_dest_check = "false"
   attachment {
     instance     = aws_instance.instance_test.id
     device_index = 2
   }
   tags = {
     name = "test-eth2"
   }
 }
#create eip and associated it to one of the interface in the newly created instance
 resource "aws_eip" "eip-gritus1-outside" {
   instance = aws_instance.instance_test.id
   network_interface = "eni-0axxxxxdfd"
   tags = {
     Name = "eip-outside"
   }
 }

Once the main.tf script is ready, we can run the Terraform commands:

#terraform init
#terraform plan

Check the "terraform plan" result carefully and confirm that the changes are as expected; after that, run "terraform apply". You can always cancel "terraform apply" if the changes are not in line with what is expected.

#terraform apply

Terraform beginner

To begin with, we shall do the following preparation on the host:

Install Terraform (version 0.12.9)
* Download zip file from:  https://www.terraform.io/downloads.html
* Unzip contains only a single “terraform” executable.
* Rename with version: i.e. – terraform_v0.12.9
* Copy to /usr/local/bin and create a symlink:
* $ ln -s terraform_v0.12.9 terraform

On a Mac, Terraform can be installed or upgraded using brew:

#brew install terraform

#terraform --version   (check the current terraform version)

Install the AWS CLI
* https://docs.aws.amazon.com/cli/latest/userguide/install-macos.html
* Here are the steps described below in one easy to copy-and-paste group.
* curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
* unzip awscli-bundle.zip
* sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws

Configure the AWS CLI & Create Access Keys
* https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html

Here is a useful step by step guide to use terraform to configure AWS resources: