Terraform - 6-03-2021
Today's topic
Remote state management
- using S3
- Lock state - DynamoDB
mkdir ws/s; cd ws/s
provider "aws" {
region = "ap-south-1"
profile = "default"
}
google: terraform ec2 resource
go to the aws_instance page and get the resource example
resource "aws-instance" "web" {
ami = "ami-....."
instance_type = "t3.micro" # you can change this and run apply
tags = {
Name = "Hello"
}
}
> tf init
-----------------------
tfstate file - stores the current state of the project, not the desired state
the desired state is in the program file that we just wrote
if you work alone, it's OK, but what if multiple folks are working?
> tf apply
What happens if one person wants to change something and another person wants to change something at the same time?
-> whoever starts the apply command first locks the tfstate file by default; the second person has to wait.
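If an apply crashes and leaves a stale lock behind, Terraform has a built-in escape hatch; the lock ID to pass is printed in the lock error message (the placeholder below is not a real ID):

```shell
# forcibly release a stale state lock; use only when you are
# sure no other apply is actually running
terraform force-unlock <LOCK_ID>
```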
$ cat s.tf
provider "aws" {
region = "ap-south-1"
profile = "default"
}
resource "aws-instance" "web" {
ami = "ami-....."
instance_type = "t2.small" # you can change this and run apply
tags = {
Name = "Hello"
}
}
> tf apply
a lock file .terraform.tfstate.lock.info is created
until the job is completed, the other person cannot run the apply command.
if everyone is working on their own laptop and the code is stored on GitHub,
two employees end up with their own tfstate files.
if two developers are working on the same project, don't maintain two state files for it.
We don't maintain it locally. We store it on centralized storage such as NFS; in our case it's going to be S3.
You can do everything from your local laptop, but we will move the tfstate file to S3, which is a centralized shared location.
How do we manage the state file remotely?
How do we manage the lock?
-> We will keep the tfstate file in S3. S3 is object storage.
- let's go to the aws console -> s3 and create a bucket (S3's top-level container, like a folder)
a. Create a bucket:
- click on create bucket - name it
or
create it using code.
> mkdir s3; cd s3
search: resource aws_s3_bucket
also look for versioning
- someone may delete the bucket or a file.
- we don't want anyone to delete it -> look for lifecycle
google: lifecycle prevent_destroy = true
> notepad s3.tf
provider "aws" {
region = "ap-south-1"
profile = "default"
}
resource "aws_s3_bucket" "b" {
bucket = "my-tf-bucket..."
# state file keep on chaging, so its a good idea to versioning. if someone makes mistakes, you can revert it back.
lifecycle {
prevent_destroy = true
}
versioning = {
enabled = true
}
}
> tf apply -auto-approve
Note: the bucket name needs to be globally unique.
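You can confirm versioning is on from the aws CLI as well (substitute the bucket name you actually created):

```shell
# get-bucket-versioning prints the Status (Enabled/Suspended) of the bucket
aws s3api get-bucket-versioning --bucket <your-bucket-name>
```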
now, we have to upload the state file.
> mkdir remotestate; cd remotestate
> notepad r.tf
Now, we will create a new project whose state file is created and managed remotely.
google: terraform remote state
review the document.
look for the backend "s3" block.
provider "aws" {
region = "ap-south-1"
profile = "default"
}
resource "aws_s3_bucket" "b" {
bucket = "my-tf-bucket..."
lifecycle {
prevent_destroy = true
}
versioning = {
enabled = true
}
}
terraform {
backend "s3"
bucket = "my-tf-bucket..."
key = "my.tfstate"
region = "us-east-1"
}
}
}
> tf init # review the message you see on the screen
> notepad e.tf
resource "aws-instance" "web" {
ami = "ami-....."
instance_type = "t2.small"
tags = {
Name = "Hello"
}
}
> tf apply
tfstate file will be created in the s3 bucket.
any change will be updated on the remote server/storage.
go to s3, you will see the file is uploaded.
this does not resolve the challenge of locking, though.
for locking support, they suggest using DynamoDB.
We are going to create a table on dynamoDB.
------------------------------
Go to the aws console and create the DynamoDB table manually, or use terraform
on the terraform registry page, search for: dynamodb table
also search the terraform site for: dynamodb state locking
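For the manual route, the same table can also be created with the aws CLI; the table name, key, and capacities below match the notes, and the region is assumed to be the one used throughout:

```shell
# DynamoDB lock table for Terraform: the only requirement is a
# string hash key named exactly "LockID"
aws dynamodb create-table \
  --table-name tfstate-lock-table \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
  --region ap-south-1
```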
> mkdir ddb; cd ddb
> notepad d.tf
provider "aws" {
region = "ap-south-1"
profile = "default"
}
resource "aws_dunamodb_table" "basic-dynaodb-table" {
name = "tfstate-lock-table"
read_capacity = 5
write_capacity = 5
hash_key = "LockID"
attribute {
name = "LockID"
type = "S"
}
}
> tf apply
enter a value: yes
it will create a DynamoDB table. go to the console and verify it.
you will see the table with the primary key.
now, state locking will be managed by the DynamoDB table that we just created.
> notepad r.tf
provider "aws" {
region = "ap-south-1"
profile = "default"
}
terraform {
backend "s3" {
bucket = "my-tf-bucket..."
key = "my.tfstate"
region = "us-east-1"
dynamodb_table = "tfstate-lock-table"
}
}
> tf init
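Note that changing the backend block (as we just did by adding dynamodb_table) makes terraform init stop and ask what to do with the existing state; these are real terraform init flags:

```shell
terraform init -migrate-state   # copy existing state to the newly configured backend
terraform init -reconfigure     # ignore the saved backend config and start fresh
```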
> tf apply
this time, if another team member runs apply while the first member's apply is still running, the DynamoDB lock entry blocks them.
the second member gets a lock error and has to wait until the first apply finishes.
This is how you manage state locking.
This is how you can collaborate with your team members.
------------------------------------
registry.terraform.io/providers/hashicorp/azurerm/latest/docs#example-usage
how to use azure?
go to registry -> azure -> document -> authentication
authenticate using the cli or locally -> find the authentication method in the provider docs.
once you authenticate, you can launch a resource group,
create VPCs, instances, containers...
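A minimal sketch of an azurerm config, assuming you have already authenticated with az login (the resource group name and location below are made-up examples):

```hcl
provider "azurerm" {
  features {} # this empty block is required by the azurerm provider
}

resource "azurerm_resource_group" "rg" {
  name     = "demo-rg" # hypothetical name
  location = "East US"
}
```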
Let's say another member wants to check the state
> tf state
you can pull the state
> tf state pull
you will see the current state.
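Besides pull, a few other terraform state subcommands are handy for inspecting the remote state (all are real subcommands):

```shell
terraform state list                    # list the resources tracked in the state
terraform state show aws_instance.web   # show the attributes of one resource
terraform state pull > current.tfstate  # download the remote state to a local file
```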