Friday, May 28, 2021

RHEL8- STIG, SCC, benchmark

 Tasks to do:

1. Install STIG viewer 2.14
2. Install SCC 5.4
3. Install benchmark


RHEL7 - Alternative to sys-unconfig command for Centos 7

 
Directly copied from
https://www.reddit.com/r/sysadmin/comments/8f10mc/command_sysunconfig_alternative_for_centos_7/

posted by user: ckozler

Reposted here, just to get an idea.

Command sys-unconfig alternative for Centos 7

-------------------------
I just made my own. I hated that they removed that

 #!/usr/bin/env bash
 
 #
 # This is the sys_prep script
 # It will clear out all non-relevant information for a new VM
 #
 # 1. Force logs to rotate and clear old.
 /usr/sbin/logrotate -f /etc/logrotate.conf
 /bin/rm -f /var/log/*-20* /var/log/*.gz
 #
 # 2. Clear the audit log & wtmp.
 /bin/cat /dev/null > /var/log/audit/audit.log
 /bin/cat /dev/null > /var/log/wtmp
 #
 # 3. Remove the udev device rules.
 /bin/rm -f /etc/udev/rules.d/70*
 #
 # 4. Remove the traces of the template MAC address and UUIDs.
 /bin/sed -i '/^\(HWADDR\|UUID\|IPADDR\|NETMASK\|GATEWAY\)=/d' /etc/sysconfig/network-scripts/ifcfg-e*
 #
 # 5. Clean /tmp out.
 /bin/rm -rf /tmp/*
 /bin/rm -rf /var/tmp/*
 #
 # 6. Remove the SSH host keys.
 /bin/rm -f /etc/ssh/*key*
 #
 # 7. Remove the root user's shell history.
 /bin/rm -f /root/.bash_history
 unset HISTFILE
 #
 # 8. Set hostname to localhost
 /bin/sed -i "s/HOSTNAME=.*/HOSTNAME=localhost.localdomain/g" /etc/sysconfig/network
 /bin/hostnamectl set-hostname localhost.localdomain
 
 #
 # 9. Remove rsyslog.conf remote log server IP.
 /bin/sed -i '/1.1.1.1.1/'d /etc/rsyslog.conf
 
 # 10. Clear out the osad-auth.conf file to stop duplicate IDs
 #
 rm -v /etc/sysconfig/rhn/osad-auth.conf
 rm -v /etc/sysconfig/rhn/systemid
 
 
 # clean hosts
 #hostname_check=$(hostname)
 #if ! [[ "${hostname_check}" =~ "local" ]]; then
 #    cp -v /etc/hosts /etc/hosts.sys_prep
 #    sed -i "s,$(hostname),,g" /etc/hosts
 #    sed -i "s,$(hostname -s),,g" /etc/hosts
 #fi
 
 rm -v /root/.ssh/known_hosts
 
 #
 # 11. Shutdown the VM. Poweron required to scan new HW addresses.
 poweroff
 #
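The sed expression in step 4 can be dry-run against a throwaway sample before trusting it in the script — a minimal sketch with made-up values:

```shell
# Sample ifcfg fragment (hypothetical values, not from a real host)
cat > ifcfg-sample <<'EOF'
TYPE=Ethernet
HWADDR=00:11:22:33:44:55
UUID=abcd-1234
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
ONBOOT=yes
EOF

# Same expression as step 4: delete any line starting with one of these keys
sed '/^\(HWADDR\|UUID\|IPADDR\|NETMASK\|GATEWAY\)=/d' ifcfg-sample
rm -f ifcfg-sample
```

Only the TYPE and ONBOOT lines should survive, confirming the template-specific settings get stripped.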

--------------------------------

 Reply by
clearing_sky

-Use Cloud-Init. It should handle all the things you need to do, but a kickstart would probably be a better idea if you are doing a physical deployment.

Wednesday, May 26, 2021

HP - How to remove BIOS password


HP ProLiant
- Reboot your machine.
- When it prompts for the password, enter the password followed by / (slash), no space.
- A "Password deleted" message pops up.

Say your password is BrIGht(!)Side@Edg3;
you enter the password as: BrIGht(!)Side@Edg3/


Monday, May 24, 2021

Day8 - terraform, provider



$ cat providers.tf





$ cat variables.tf

variable "x" {
  type    = string
  default = "t2.micro"
}

output "01" {
  value = var.x
}



> tf apply -var="x=t2.medium"


$ cat aws-main.tf
resource "aws_instance" "web" {
  ami    = "ami-012...."
  instance_type    = var.x

}



$ cat variables.tf

# variable "x" {
#   type    = string
#   default = "t2.micro"
# }

variable "x" {}

output "01" {
  value = var.x
}

> tf apply -var="x=t2.medium"

In this case, either you pass the value on the command line or Terraform will prompt for it.

Don't change the code; pass the variable.

Or you can create a config file with key/value pairs.

The file name must be exactly terraform.tfvars; Terraform picks this file up automatically.
> notepad terraform.tfvars
#x="value"
x="t2.micro"

Here, you can come back and change the value of x.

> tf apply

If you don't define the variable, Terraform will prompt for it; since we defined it here, it will not prompt but will grab the value from the config file.



> notepad terraform.tfvars
#x="value"
x="t2.large"

> tf apply -var="x=t2.micro"


Lets go to variable file


$ cat variables.tf

# variable "x" {
#   type    = string
#   default = "t2.micro"
# }

variable "x" {}

variable "y" {
  type = bool
}


output "01" {
  value = var.y
}

Comment out the content of the aws_main.tf file.

> tf apply
it asks you for true or false


A boolean is good if you create a condition:
if you want to do something, check the condition; if the condition is true, do this, if not, do the else part.

The ternary operator syntax:

condition ? value1 : value2

If the condition comes out true, display value1, else display value2.

Note: true and false should be in lower case

output "01" {
  value = var.y ? "Sam" : "Ram"
}

if true it will return Sam else Ram.



> cd google

provider "aws" {
  region  = "ap-south-1"
  profile = "default"
}

provider "google" {
  project = "myproj"
  region  = "asia-south1"
}


Modify the variable file.

> tf apply

> tf plan

webapp    -> Testing (QA Team) -> Prod


$ cat aws_main.tf
resource "aws_instance" "web" {
  ami           = "ami-01..."
  instance_type = var.x
  count         = 5
}


$ cat variables.tf   # append

variable "istest" {
  type = bool
}

Change the count value of 5:

$ cat aws_main.tf
resource "aws_instance" "web" {
  ami           = "ami-01..."
  instance_type = var.x
# count = 5
  count = var.istest ? 0 : 1   # if istest is true, count is 0 and this instance will not run
}


gcp_main.tf
resource "google_compute_instance" "os1" {
  name         = "os1"
  machine_type = var.mtype
  zone         = "asia-south1-c"
  count        = var.istest ? 1 : 0

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }
}

> tf apply -var="istest=true"


$ cat variables.tf   # append


variable "istest" {
  type = bool
}

variable "azaws" {
  type    = list(string)
  default = [ "ap-south-1a", "ap-south-1b", "ap-south-1c" ]
}

# output "os2" {
#   value = var.azaws
# }

$ cat aws_main.tf
resource "aws_instance" "web" {
  ami               = "ami-01..."
  instance_type     = var.x
  availability_zone = var.azaws[1]
  count             = 1
}


> tf apply




map data type

[ "a", "b", "c" ]

With a list, the system assigns the index (0, 1, 2, ...).

With a map, you define your own keys;
say "a" is looked up by an id you choose.



variable "azaws" {
  type    = list(string)
  default = [ "ap-south-1a", "ap-south-1b", "ap-south-1c" ]
}

variable "types" {
  type = map(string)
  default = {          # maps go in curly braces
    us-east-1  = "t2.nano",
    ap-south-1 = "t2.micro",
    us-west-1  = "t2.medium"
  }
}


# output "os3" {
#   value = var.types
#   value = var.types["ap-south-1"]
# }


> tf apply




when your signature becomes autograph you are something ...


RHEL8 - YUM Repository Setup, SELinux, IPv6 set up

 In SELinux, a context is set on each file and you can see it with the ls -lZ command.
$ ls -lZ

In the output, you can see the context broken down by user (_u), role (_r), type (_t), and the security level (e.g. s0).
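For instance, a context string like unconfined_u:object_r:admin_home_t:s0 (a sample value; yours will differ) splits on colons into those fields — a quick shell illustration:

```shell
# Split a sample SELinux context into its fields
ctx="unconfined_u:object_r:admin_home_t:s0"
echo "$ctx" | awk -F: '{ printf "user:  %s\nrole:  %s\ntype:  %s\nlevel: %s\n", $1, $2, $3, $4 }'
```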

--------------------------

Yum repo set up

- AppSteam | BaseOS | YUM Repository Setup RHEL8

BaseOS - Required Software

# mkdir /rpms

Go to your DVD and copy the AppStream and BaseOS directories, which contain the important packages.
# cd /mnt; ls
# cp -aRv BaseOS AppStream /rpms
# cd /etc/yum.repos.d
# vi appstream.repo
[AppStream-Repo]
name="AppStream Repo"
baseurl=file:///rpms/AppStream/
enabled=1
gpgcheck=0

# cat baseos-repo.repo

[BaseOS-Repo]
name="BaseOS Repo"
baseurl=file:///rpms/BaseOS/
enabled=1
gpgcheck=0

# yum clean all
# yum repolist

Register all packages (sign with the GPG key)
# rpm --import RPM-GPG-KEY-redhat-release


raaho ke jamate ke tujhe kya sabut diun
manjil mile to paawo ke chhale nahi rahe
chalte chalte mujhse puchha mere paheron ki chhlon ne
kitne dur maiful sajadi rahiso waalone

# IPv6 set up
# nmcli connection show
# ip addr
# nmcli conn modify eno123 ipv6.addresses "<IPv6_address>" ipv6.method manual
# ip a
# nmcli con down eno123
# nmcli con up eno123
# ping6 <IPv6_address>

# vi /etc/sysconfig/network-scripts/ifcfg-eno123
IPV6INIT="yes"


# nmcli connection modify eno123 ipv6.addresses "<IPv6_address>" ipv6.method manual
# nmcli conn down eno123
# nmcli conn up eno123
# ip addr
# ping6 <IPv6_address>





SAN Switch - Configure MAPS email alerts on SAN switch (Brocade Switch)

- deliver your alerts to your email

Pre-req tasks
- Complete domain server configuration (AD-Account set up)
- Set up a smart relay server to filter email coming from outside to the switch (optional)

> mapsconfig --show
> dnsconfig --add -domain -switch1
> dnsconfig --show

> relayconfig --config -rla_ip -rla_dname
> relayconfig --show    # verify

1. Display config info
> mapsconfig --show

2. Set the config
> mapsconfig --emailcfg -address root@everest.local
> mapsconfig --testmail -subject "Test" -message "Test message from `hostname`"
> mapsconfig --actions raslog,email,sw_critical,sw_marginal,sfp_marginal

or
> mapsconfig --action email

> mapsconfig --emailcfg -address
> mapsconfig --show

send test message (email)
> mapsconfig --testmail -subject "Test Mail" -message "Testing"

3. Verify the change
> mapsconfig --show


Thursday, May 20, 2021

Amazon: Leadership Principles

 https://www.amazon.jobs/en/principles

Leadership Principles

We use our Leadership Principles every day, whether we're discussing ideas for new projects or deciding on the best approach to solving a problem. It is just one of the things that makes Amazon peculiar.

Customer Obsession

Leaders start with the customer and work backwards. They work vigorously to earn and keep customer trust. Although leaders pay attention to competitors, they obsess over customers.

Ownership

Leaders are owners. They think long term and don’t sacrifice long-term value for short-term results. They act on behalf of the entire company, beyond just their own team. They never say “that’s not my job."

Invent and Simplify

Leaders expect and require innovation and invention from their teams and always find ways to simplify. They are externally aware, look for new ideas from everywhere, and are not limited by “not invented here." As we do new things, we accept that we may be misunderstood for long periods of time.

Are Right, A Lot

Leaders are right a lot. They have strong judgment and good instincts. They seek diverse perspectives and work to disconfirm their beliefs.

Learn and Be Curious

Leaders are never done learning and always seek to improve themselves. They are curious about new possibilities and act to explore them.

Hire and Develop the Best

Leaders raise the performance bar with every hire and promotion. They recognize exceptional talent, and willingly move them throughout the organization. Leaders develop leaders and take seriously their role in coaching others. We work on behalf of our people to invent mechanisms for development like Career Choice.

Insist on the Highest Standards

Leaders have relentlessly high standards — many people may think these standards are unreasonably high. Leaders are continually raising the bar and drive their teams to deliver high quality products, services, and processes. Leaders ensure that defects do not get sent down the line and that problems are fixed so they stay fixed.

Think Big

Thinking small is a self-fulfilling prophecy. Leaders create and communicate a bold direction that inspires results. They think differently and look around corners for ways to serve customers.

Bias for Action

Speed matters in business. Many decisions and actions are reversible and do not need extensive study. We value calculated risk taking. 

Frugality

Accomplish more with less. Constraints breed resourcefulness, self-sufficiency, and invention. There are no extra points for growing headcount, budget size, or fixed expense.

Earn Trust

Leaders listen attentively, speak candidly, and treat others respectfully. They are vocally self-critical, even when doing so is awkward or embarrassing. Leaders do not believe their or their team’s body odor smells of perfume. They benchmark themselves and their teams against the best.

Dive Deep

Leaders operate at all levels, stay connected to the details, audit frequently, and are skeptical when metrics and anecdote differ. No task is beneath them.

Have Backbone; Disagree and Commit

Leaders are obligated to respectfully challenge decisions when they disagree, even when doing so is uncomfortable or exhausting. Leaders have conviction and are tenacious. They do not compromise for the sake of social cohesion. Once a decision is determined, they commit wholly.

Deliver Results

Leaders focus on the key inputs for their business and deliver them with the right quality and in a timely fashion. Despite setbacks, they rise to the occasion and never settle.

Wednesday, May 19, 2021

Find files and directories with SUID, SGID, sticky bit set



# find ./ -type d -perm -1000 -exec ls -ld {} \;   # sticky bit set

# find ./ -type f  \( -perm -2000 -o -perm -4000 \) -exec ls -ld {} \;
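A quick way to sanity-check those -perm predicates in a scratch directory (throwaway paths):

```shell
# Build one setuid file, one setgid file, and one sticky directory
tmp=$(mktemp -d)
touch "$tmp/suid_file" "$tmp/sgid_file"
mkdir "$tmp/sticky_dir"
chmod 4755 "$tmp/suid_file"    # setuid bit
chmod 2755 "$tmp/sgid_file"    # setgid bit
chmod 1777 "$tmp/sticky_dir"   # sticky bit

# -perm -NNNN means "all of these bits are set"
find "$tmp" -type f \( -perm -2000 -o -perm -4000 \)   # both files
find "$tmp" -type d -perm -1000                        # the sticky dir
rm -rf "$tmp"
```

Note the leading dash in -perm -1000: a bare -perm 1000 would require an exact permission match, which is almost never what you want here.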

 

$ ansible -i hostlist all -o -a "ls -l /etc/resolv.conf" | sort | grep -v "Mar 20 10"

 

Day7 - Terraform variable multi instances deployment


Tuesday, May 18, 2021

आफ्नै धर्तीलाई सम्झंदा !!!

आफ्नै धर्तीलाई सम्झंदा !!!

नयाँ भाग्य कोर्न  
नयाँ मुलुकको लागी
जब घर देश बाट हिंड्छ
परिवारको भविष्यलाई
राम्रोसँग स्रिङ्गारेर
खुशि,र दुखको दोसाँझमा
अल्मलिंदै, रुमलिंदै
आफ्नो प्यारो गाउँ बेशि
पाईला पाईलाले पछि पार्दै
नयाँ  सपना सम्झंदै
एउटा उदेष्यमा  लम्किन्छ - परदेशी

परिवारमा बात कम
खबर कम भो परदेश बाट
न गफ छ, न भेट छ
सुन्य सुन्य छ
एकान्त छ,
उता,
आमा पिंढिमा  बसेर
सोध्छिन रे !
छोरा कहिले आउँछ ?
छोरा दिन गन्दै,
महिना अनि बर्ष गन्दै
समयको वहावमा
यसरि चल्दै छ,
थाहा छैन जीवन
कहाँ पुग्दै छ,
शरीरको रुपरेखा बदलिंदै छ
कपाल फुलेर
कनिका छरे जस्तै भो
खै के को खोजीमा
केहि पाए जस्तो
केहि नपाए जस्तो
दोधारको जिन्दगी
सबै मिलेर नि
कतै केहि नमिले जस्तो
केहि छुटे जस्तो
सन्तोषको सास
फेर्न नसके जस्तो
न देश अट्यो न परिवार
यो मनमा सांच्न लाई
नयाँ मुलुक, नयाँ संसार,
नयाँ  परिवार
अनि नयाँ परिवेश भो विदेश !!!

न देश भो
न भो परिवार यो मनमा साँच्न लाई

न त् देश न त् परिवार

सिक्का हावामा पल्टाएर
भाग्य बदल्न सकिने रहेनछ
बर्मा गए कर्म संगै
त्यसै भएको रहेनछ
सबैको भाग्य एउटै
हुँदो रहेनछ !!!
sikkaa paltaaer sahi halat chuttaaun sakinna



५/१८/२०२१
Vienna, VA

Day6-terraform on google - GCE


> gcloud init

create account on gcloud

Instance/VM

compute service - EC2


In your account, first:
1. Create a project.
- create services
- create a database service
- deploy via web UI / CLI / API


Based on the project, billing and everything else is organized.

1. Click on your account and
create a new project (through the web GUI or CLI).

Name: myterraform

Remember to note the ID, which is also the project ID. Choose the name wisely; it's very important.

Now you will be in the project.

Click on your account and you will see which project you are in.

On the left side you will see menus for the different services, and you can launch them directly from there.

Click on Compute Engine -> VM instances

-> Compute Engine API

Click on Enable to activate it.



google: gcp region list
global locations


Code:

Goal: Step by step
-------------------


step1

Login: auth: user/pass
Code: key
Project ID: myterraform
Region: asia-south1



=========================

go to terraform : search provider: google
- google provider
- authentication

look for credential


> mkdir day5; cd day5
> notepad main.tf


provider "google" {
  project = "myterraform"
  region  = "asia-south1"
}

look for authentication part

- we use an access key to log in, stored in a file.
  - create a service account
  - create a key and store it in a file

IAM and Admin
-> Service accounts -> you will see your existing account.
-> this account is associated with some policy or role
-> click on it and click on Keys
-> click on Add key - Create new key
-> select JSON (the default)
-> click on Create and the key will be downloaded to your local PC.
Note: keep it secret.

go to documents/terraform-training/google/gce-key.json



> notepad main.tf

provider "google" {
  project     = "myterraform"
  region      = "asia-south1"
  credentials = "gce-key.json"
}


> terraform init

downloads the plugins for provider


Go back to the compute service and write down how to launch manually:
- Compute Engine -> VM instances -> Create a new instance


give os name: os2
vm: os2
region: select Mumbai
instance type - machine family -> by default e2-medium
machineType: e2-medium
boot disk (AMI) - click on Change and select the OS you want.
specify boot-disk image name: debian
boot disk type and size: 10G

go to network settings
click on Management / Networking
VPC/Network: default (Network Interface)



Convert it into Terraform code.


Google for "google service account" and look for an example.



> notepad main.tf

provider "google" {
  project     = "myterraform"
  region      = "asia-south1"
  credentials = "gce-key.json"
}

resource "google_compute_instance" "os1" {
  name         = "os1"
  machine_type = "e2-medium"
  zone         = "asia-south1-c"
}
 

tags =

Look in the documentation for which keywords are optional and which are mandatory.

> terraform plan

error: account_id is required


Google for: gcloud cli command
Go to Cloud SDK, click on Install SDK, and click on the Cloud SDK installer.

After you install, one command is available:
> gcloud init
helps you to log in

select 2
enter the project name:
it will configure it on your local system.

now, try again
> terraform plan




> notepad main.tf

provider "google" {
  project     = "myterraform"
  region      = "asia-south1"
  credentials = "gce-key.json"
}

resource "google_compute_instance" "os1" {
  name         = "os1"
  machine_type = "e2-medium"
  zone         = "asia-south1-c"
}
 

> tf plan

need to specify a network interface; a boot disk is required

copy the example code

> notepad main.tf

provider "google" {
  project     = "myterraform"
  region      = "asia-south1"
  credentials = "gce-key.json"
}

resource "google_compute_instance" "os1" {
  name         = "os1"
  machine_type = "e2-medium"
  zone         = "asia-south1-c"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }

  network_interface {
    network = "default"
  }
}

> tf plan
> tf apply

error: if any, review

otherwise, the instance should be created.


Go to the console and see if you can see the other instances.



> tf destroy --help

destroy specific resource
> tf destroy -target=resource



Thursday, May 13, 2021

mAya jaal

 
हिमचुलीमा हिउँ कति पर्यो पर्यो
कति पर्यो भन्न म सक्दिन
तिम्लाई माया कति छ कति
मुटु चिरी देखाउन सक्दिन

when logic ends, magic begins ..


सालको पातको टपरी हुनी

 
सालको पातको टपरी हुनी

केटा: भाको एउटा जाले रुमाल फूल भरेको छैन,
आ… होएऽऽऽ भाको एउटा जाले रुमाल फूल भरेको छैन,
के लैजान्छौ सानू तिम्ले पिरतीमा बैना, पिरतीमा बैना…
एऽऽऽ हे बरै…
सालको पातको टपरी हुनी, हे बरै नहुनी सल्लैको;
तिम्रो मायाँ कति छ कति, हे बरै नलाउ पछि हल्लैको; - 2

केटी: मुटु कहाँ बैना मार्यो मै सोझीलाई जलाई,
आ… होएऽऽऽ मुटु कहाँ बैना मार्यो मै सोझीलाई जलाई,
कलेजीमा झुपडी देउ रुमाल नदेउ मलाई, रुमाल नदेउ मलाई…

केटी: पानीको रिस उर्ने माछी काँ'होला र सानू,
आ… होएऽऽऽ पानीको रिस उर्ने माछी काँ'होला र सानू,
डालीको रिस ठू्ल्ने गरे मैले त के जानूँ, मैले त के जानूँ…

केटी: छेउमै छ नि लालि फूल प्याउली किन हेर्छौ,
आ… होएऽऽऽ छेउमै छ नि लालि फूल प्याउली किन हेर्छौ,
जुगै काट्ने मन छ मलाई मन किन फेर्छौ, मन किन फेर्छौ…

केटा: आउँदा जाँदा देउरालीमा फूल चराउँछु मैले,
आ… होएऽऽऽ आउँदा जाँदा देउरालीमा फूल चराउँछु मैले,
तिम्लाई सम्झी चढाको फूल ओइलिँदैन कहिल्यै, ओइलिँदैन कहिल्यै…

केटी: जोडी पंक्षी देखेपनी घर मुनिको बोटमा,
आ… होएऽऽऽ जोडी पंक्षी देखेपनी घर मुनिको बोटमा,
तारा खस्दा तिम्रै नाम आउँछ मेरो ओंठमा, आउँछ मेरो ओंठमा…
एऽऽऽ हे बरै…


=======================================

केटा: भाको एउटा जाले रुमाल फूल भरेको छैन,
आ… होएऽऽऽ भाको एउटा जाले रुमाल फूल भरेको छैन,
के लैजान्छौ सानू तिम्ले पिरतीमा बैना, पिरतीमा बैना…
एऽऽऽ हे बरै… सालको पातको टपरी हुनी, हे बरै नहुनी सल्लैको;
तिम्रो मायाँ कति छ कति, हे बरै नलाउ पछि हल्लैको;
हा… हैऽऽऽ सालको पातको टपरी हुनी, हे बरै नहुनी सल्लैको;
तिम्रो मायाँ कति छ कति हे बरै नलाउ पछि हल्लैको ।
तिम्रो मायाँ कति छ कति हे बरै नलाउ पछि हल्लैको ।

केटी: मुटु कहाँ बैना मार्यो मै सोझीलाई जलाई,
आ… होएऽऽऽ मुटु कहाँ बैना मार्यो मै सोझीलाई जलाई,
कलेजीमा झुपडी देउ रुमाल नदेउ मलाई, रुमाल नदेउ मलाई…
एऽऽऽ हे बरै… सालको पातको टपरी हुनी, हे बरै नहुनी सल्लैको;
तिम्रो मायाँ कति छ कति, हे बरै नलाउ पछि हल्लैको;
हा… हैऽऽऽ सालको पातको टपरी हुनी, हे बरै नहुनी सल्लैको;
तिम्रो मायाँ कति छ कति हे बरै नलाउ पछि हल्लैको ।
तिम्रो मायाँ कति छ कति हे बरै नलाउ पछि हल्लैको ।

-------------------------

केटी: पानीको रिस उर्ने माछी काँ'होला र सानू,
आ… होएऽऽऽ पानीको रिस उर्ने माछी काँ'होला र सानू,
डालीको रिस ठू्ल्ने गरे मैले त के जानूँ, मैले त के जानूँ…
एऽऽऽ हे बरै… सालको पातको टपरी हुनी, हे बरै नहुनी सल्लैको;
तिम्रो मायाँ कति छ कति, हे बरै नलाउ पछि हल्लैको;
हा… हैऽऽऽ सालको पातको टपरी हुनी, हे बरै नहुनी सल्लैको;
तिम्रो मायाँ कति छ कति हे बरै नलाउ पछि हल्लैको ।
तिम्रो मायाँ कति छ कति हे बरै नलाउ पछि हल्लैको ।

केटी: छेउमै छ नि लालि फूल प्याउली किन हेर्छौ,
आ… होएऽऽऽ छेउमै छ नि लालि फूल प्याउली किन हेर्छौ,
जुगै काट्ने मन छ मलाई मन किन फेर्छौ, मन किन फेर्छौ…
एऽऽऽ हे बरै… सालको पातको टपरी हुनी, हे बरै नहुनी सल्लैको;
तिम्रो मायाँ कति छ कति, हे बरै नलाउ पछि हल्लैको;
हा… हैऽऽऽ सालको पातको टपरी हुनी, हे बरै नहुनी सल्लैको;
तिम्रो मायाँ कति छ कति हे बरै नलाउ पछि हल्लैको ।
तिम्रो मायाँ कति छ कति हे बरै नलाउ पछि हल्लैको ।

--------------------------

केटा: आउँदा जाँदा देउरालीमा फूल चराउँछु मैले,
आ… होएऽऽऽ आउँदा जाँदा देउरालीमा फूल चराउँछु मैले,
तिम्लाई सम्झी चढाको फूल ओइलिँदैन कहिल्यै, ओइलिँदैन कहिल्यै…
एऽऽऽ हे बरै… सालको पातको टपरी हुनी, हे बरै नहुनी सल्लैको;
तिम्रो मायाँ कति छ कति, हे बरै नलाउ पछि हल्लैको;
हा… हैऽऽऽ सालको पातको टपरी हुनी, हे बरै नहुनी सल्लैको;
तिम्रो मायाँ कति छ कति हे बरै नलाउ पछि हल्लैको ।
तिम्रो मायाँ कति छ कति हे बरै नलाउ पछि हल्लैको ।

केटी: जोडी पंक्षी देखेपनी घर मुनिको बोटमा,
आ… होएऽऽऽ जोडी पंक्षी देखेपनी घर मुनिको बोटमा,
तारा खस्दा तिम्रै नाम आउँछ मेरो ओंठमा, आउँछ मेरो ओंठमा…
एऽऽऽ हे बरै… सालको पातको टपरी हुनी, हे बरै नहुनी सल्लैको;
तिम्रो मायाँ कति छ कति, हे बरै नलाउ पछि हल्लैको;
हा… हैऽऽऽ सालको पातको टपरी हुनी, हे बरै नहुनी सल्लैको;
तिम्रो मायाँ कति छ कति हे बरै नलाउ पछि हल्लैको ।
तिम्रो मायाँ कति छ कति हे बरै नलाउ पछि हल्लैको ।

Day5 - Terraform - plan, refresh, apply, desire, code, tfstat, destroy

 Day5 - Terraform 5-13-2021


notepad a.tf
Terraform looks for files with the .tf extension and executes them.


> notepad web.tf
provider "aws" {
  region                  = "ap-south-1"
  profile                 = "default"
}

resource "aws_instance" "webos1" {
  ami           = "ami-010aff33ed5991201"
  instance_type = "t2.micro"
  security_groups =  [ "webport-allow" ]
  key_name = "terraform_key"

  tags = {
    Name = "Web Server by TF"
  }
}

resource "null_resource"  "nullremote1" {

connection {
    type     = "ssh"
    user     = "ec2-user"
    private_key = file("C:/Users/Vimal Daga/Downloads/terraform_key.pem")
    host     = aws_instance.webos1.public_ip
  }

provisioner "remote-exec" {
    inline = [
      "sudo yum  install httpd  -y",
      "sudo  yum  install php  -y",
      "sudo systemctl start httpd",
      "sudo systemctl start httpd"
    ]
  }
}

resource "aws_ebs_volume" "example" {
  availability_zone = aws_instance.webos1.availability_zone
  size              = 1

  tags = {
    Name = "Web Server HD by TF"
  }
}

resource "aws_volume_attachment" "ebs_att" {
  device_name = "/dev/sdc"
  volume_id   = aws_ebs_volume.example.id
  instance_id = aws_instance.webos1.id
  force_detach = true
}

resource "null_resource"  "nullremote2" {

connection {
    type     = "ssh"
    user     = "ec2-user"
    private_key = file("C:/Users/Vimal Daga/Downloads/terraform_key.pem")
    host     = aws_instance.webos1.public_ip
  }

provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/xvdc",
      "sudo  mount /dev/xvdc  /var/www/html",
    ]
  }
}

resource "null_resource"  "nullremote4" {

connection {
    type     = "ssh"
    user     = "ec2-user"
    private_key = file("C:/Users/Vimal Daga/Downloads/terraform_key.pem")
    host     = aws_instance.webos1.public_ip
  }

provisioner "remote-exec" {
    inline = [
      "sudo yum install git -y",
      "sudo git clone https://github.com/vimallinuxworld13/gitphptest.git   /var/www/html/web"
    ]
  }
}

resource "null_resource"  "nullremote5" {

provisioner "local-exec" {
   command = "chrome http://13.232.50.58/web/index.php"
  }
}

=====================================

Break this file up:

> notepad provider.tf
provider "aws" {
  region  = "ap-south-1"
  profile = "default"
}


> notepad ec2.tf
resource "aws_instance" "webos1" {
  ami = "ami-..."
  instal


tags = {
  Name = web
}



> terraform init

It goes and checks all the files and downloads the plugins for the provider, such as aws or azure.


> attach_block.tf
resource "aws_volume_attachment" "ebs_att" {
  device_name = "/dev/sdc"
  volume_id   = aws_ebs_volume.example.id
  instance_id = aws_instance.webos1.id
  force_detach = true
}


Files are read in alphabetical order, but TF will automatically handle the ordering. This concept is called inferring the resources with its intelligence:
it works out which part to run first and which one second.

> terraform plan
> tf apply

when you run this code the first time, it will create the tfstate file

There are two states:
1. Desired state
2. Current state

1. Desired state
Whatever you are looking for/want, you write in code - your desired state.

2. Current state
What is there right now, currently running or existing on the system.


When you run tf apply, it will go and check whether the resource already exists. If it's not there, it applies the code.
This concept is called idempotence.

> tf apply
you will see the message "Infrastructure is up-to-date"
if no change is needed.
if no change is needed.

Login to your aws cloud
- check how many instances running
- check on what instance type is running.

First you run plan, then apply (behind the scenes, plan runs when you run apply).
- when you run plan, it goes and logs in to aws, retrieves all the info, and stores it locally in the terraform.tfstate file, which is basically the state of the service.
It stores everything.
Open and review the file.
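terraform.tfstate is plain JSON, so you can poke at it with ordinary text tools; a sketch against a made-up fragment (hypothetical structure, heavily simplified from a real state file):

```shell
# Simulate a tiny slice of a state file and pull out the public_ip attribute
cat > tfstate.sample <<'EOF'
{"resources":[{"type":"aws_instance","name":"webos1",
 "instances":[{"attributes":{"public_ip":"13.232.50.58"}}]}]}
EOF
grep -o '"public_ip":"[^"]*"' tfstate.sample
rm -f tfstate.sample
```

This is for reading only — as noted below, never edit the state file by hand.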


> notepad output.tf
output "myip" {
  value = aws_instance.webos1.public_ip
}

> tf apply

you will see the public IP.

Open the terraform.tfstate file, search for public_ip, and navigate through it.


Note: if you use Terraform, always use Terraform; do not mix automation and manual changes.

It will make a mess.

For any change you have to make, make sure to modify the code.
Say one of the EC2 instances has an issue; someone may go to the console and manually change the config, but since the code is not updated, you will have a problem.


Say we go to the AWS console and review the instance: you see the instance type is t2.small,

but in your code you have t2.micro.

instance_type = "t2.micro"


The current state (changed manually) is t2.small,
but the code (the desired state) has t2.micro.

> tf apply

Terraform compares the code against the current state and finds the conflict.

Before apply, use refresh. It will go to the cloud and update/refresh the current state; after that, the local terraform.tfstate file is updated.

> tf refresh

> notepad terraform.tfstate

> tf apply
it will change the instance from small back to micro,

since your code has micro.

either do everything manual, or everything automation.


Note: never modify the tfstate file manually.


refresh, plan, apply, desired state, code, tfstate


add null resources
> notepad apache.tf
> tf destroy     # remove all the resources
It goes and refreshes and updates the tfstate file locally.


> tf apply

- apache
- hard disk
- providers

4 resources are going to be applied.

1. Launch the instance
2. ssh -> null resource -> install php and apache
3. create the storage and attach the storage

We have one bug here.

Let's destroy our infrastructure again.

> notepad apache.tf



provisioner "remote-exec" {
    inline = [
      "sudo yum  install httpd  -y",
      "sudo  yum  install php  -y",
      "sudo systemctl start httpd",
      "sudo systemctl start httpd"
    ]
  }
}



File names are, by default, processed in lexical order...



apache.tf
resource "null_resource" "nullremote1" {

depends_on = [
  aws_volume_attachment.ebs_att
]

.........
}


Google for: terraform depends_on

meta-argument

One resource depends on another resource.

> tf destroy
> tf apply


Validate your code:
> terraform validate

It gives you the line number where you have an issue.



Wednesday, May 12, 2021

Kubernetes - constraint security

security constraint

Constrain the security of a process with limited privileges

limited power

UID ==0
UID >= 1000

SElinux

container Security

SEC Constraint

- Run the process with limited power.

Log in to your system as a normal user, say sam
$ cat /etc/shadow
Permission denied

try with root,
# cat /etc/shadow

Create a deployment
> kc create deployment myd --image=vimal13/apache-webserver-php
> kc get pods
> kc exec -it myd-xxxx -- sh

# id
# ps -aux

You are running as the root user.

# sleep 20 &
# ps -aux
You see sleep is running as root.

> kc describe pods mypod....

kc get pods myd-xx -o yaml > p.txt
> notepad p.txt

Go review it; you will see the security context is empty:

securityContext: {}

how can you constrain with limited power?


more security-context:

spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  containers:
  - name: sec-ctx-demo
    image: busybox
    command: [ "sh", "-c", "sleep 1h" ]
    volumeMounts:
...

> kc apply -f security-context.yaml
> kc get pods
> kc exec -it security-context-demo -- sh
$ id
you see uid=1000 and gid=3000
$ ps




# vi t.py
import time
time.sleep(300)
# python3 t.py
it started

open a new terminal
# ps -aux | grep t.py



Cap -> capability
When a process starts, what are its capabilities?
- change network settings?
- change the network stack?
- change the time?

What are the capabilities of the process?

You can write your process in any language and run it with the proper capabilities.

$ date
$ date 1212121212

You get an error: operation not permitted.
You don't have the capability to change the time as a normal user.

$ ifconfig
$ ifconfig eth0 1.2.3.4
operation not permitted


# cat security-context_cap.yaml

spec:
  containers:
  - name: sec-ctx-4
    image: gcr.io/google-samples/node-hello:1.0
    securityContext:
      capabilities:
        add: ["NET_ADMIN"...


> kc apply -f security-context_cap.yaml
> kc exec -it security-context-demo-4 -- sh

# ps -aux



# ifconfig
# ip a s
# ip addr add 1.2.3.4/24 dev eth0
# ip a s

This is a concept of constraint.


Limits and quotas

> kc delete all --all


user (image)  --> master

node1 - 4GB/4CPU
node2 - 16GB/6CPU
node3 - 16GB/16cpu

When you launch a new container, say you have a minimum requirement of 1 GB RAM / 2 CPUs.
The kube-scheduler checks the nodes and finds which one has the required RAM and CPU available.
If enough RAM is available, it launches the container on that node; if that much RAM/CPU is not available, the container stays in the Pending state.

Limit request

You defined the minimum requirement, but not the maximum.

[heap memory]

It is always good practice to set a max limit.
When a pod launches, there are lots of processes running and they may use more resources, so it's a good idea to cap the maximum value.
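The request/limit idea above can be sketched as a pod spec; the pod name and the values here are illustrative, not from the class:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: limit-demo         # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 1h"]
    resources:
      requests:            # minimum: the scheduler uses this to pick a node
        memory: "1Gi"
        cpu: "500m"
      limits:              # maximum: the container is capped at this
        memory: "2Gi"
        cpu: "1"
```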

> kc create deploy

> kc get pods myd-... -o yaml > z.txt
> notepad z.txt
review all the output

go down, look for limit


missed about 5 minutes here,

> kc get limitrange
> kc describe limitrange
>


notepad limitrange_pod.yaml


A limit range is per namespace.
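A minimal sketch of what limitrange_pod.yaml might look like (name and values are assumptions); the LimitRange applies to every container created in its namespace:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range    # hypothetical name
spec:
  limits:
  - default:               # limit applied to containers that specify none
      memory: 512Mi
    defaultRequest:        # request applied to containers that specify none
      memory: 256Mi
    type: Container
```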

> kc get pods



Quota

Namespace

create a user with RBAC
- constrain this user to a particular namespace

- Create the user
- Create the namespace
- Set RBAC for this user: power to work on this particular namespace

Otherwise there is no constraint, and they have full control over
pvc, pv, rc, svc, ...
and can create n number of pods.

What do we do?
We can tell our namespace the max number of pv, pvc, or anything: set a limit,
say pods=3, pvc=2, to limit the resources.

This is done with quotas.
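The pods=3, pvc=2 idea above can be sketched as a ResourceQuota manifest; the name and namespace follow the myq/lw1 examples in these notes, and the values are illustrative:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: myq                # hypothetical name
  namespace: lw1
spec:
  hard:
    pods: "3"                       # at most 3 pods in this namespace
    persistentvolumeclaims: "2"     # at most 2 PVCs
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
```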

> kc get quota

> kc create namespace lw1
> kc get quota --namespace lw1
no resources found...

quota

spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.mem......

> kc apply -f quotas.yaml --namespace lw1

you can create quota from command line too

> kc create --help
look for example
> kc create quota myq --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10

> kc get pvc
> kc get quota
> kc get pods

Metrics Server
--------------

It's very important to monitor your servers.
It's hard for an admin to see what is going on.
On k8s, if a pod fails, it is relaunched.

How much RAM/CPU is used or free, what free resources are available: this is also called metrics.

There used to be a tool called Heapster, but now it's the metrics server (download from GitHub).

Install it and run it.
This pod runs as an agent on every node (runs as a DaemonSet).

> minikube addons enable metrics-server

> kc top pod

it can monitor node or pod


> kc top node
shows you the usage %

This tool does not give you much detail, but you can integrate a cloud metrics agent,
or you can use:
Splunk
New ...
Prometheus/Grafana









Day 40 - AWS - Elastic Container Service (ECS cluster)

AWS - Session 40 Apr 23, 2021

Elastic Container Service (ECS cluster)

1. Start your local RHEL8 machine

App -> OS (App/web server) -> Create image (AMI) -> launch on platform

Image -> AMI -> EC2

cpu/Ram
Hypervisor(Xen)
VM1 VM2 (Shared Resources)

or launch OS on dedicated host

containerization

Image -> Containers

Basic info about Docker
-----------------------

# docker info
# docker images

Docker Hub

# docker run -dit --name z1 vimal13/apache-webserver-php

# docker inspect z1

review the IP
# docker ps
# curl 172.17.0.2/index.php

If you want to connect from outside, it does not work.

You have to expose the port.

# docker run -dit --name z2 -p 8080:80 vimal13/apache-webserver-php

-p means port mapping (publish)

# docker ps

# ifconfig enp0s3
get the IP

What the above docker run command means is that any request coming to <hostIP>:8080 is redirected to <container_ip>:80.

So let's try this in your browser:
http://192.168.10.20:8080

# docker run -dit --name z3 -p 8081:80 vimal13/apache-webserver-php
Every time you launch a new container, you have to change the host port number; otherwise it will fail.

# netstat -tnlp | grep 8080

Two programs can't listen on a single port.

# docker run -dit --name z4 -p 8082 vimal13/apache-webserver-php

Rather than specifying a port number for every container, the Docker folks came up with the idea of assigning the port automatically.

# docker run -dit --name z5 -P vimal13/apache-webserver-php

-P (uppercase) - publish all
It picks a random host port and exposes it.

# docker ps
# docker run -dit --name z6 -P vimal13/apache-webserver-php
# docker ps

# docker run -dit --name z7 -P vimal13/apache-webserver-php

Lets talk about ECS


we have
HW
- SYSTEM - RHEL8 -> Docker Engine -> Containers

AWS will provide the HW (System)

EC2 will provide RAM/CPU


Monday, May 10, 2021

Day1- Introduction to Terraform

👉 What is Terraform used for?
- Terraform is an infrastructure provisioning tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions. It manages infrastructure using HashiCorp Configuration Language (HCL) code (also known as IaC, or Infrastructure as Code) to automate the provisioning of the infrastructure. It uses a very simple syntax and can provision infrastructure across multiple clouds and on-premises data centers. We can safely and efficiently provision/reprovision infrastructure (at VMware, AWS, GCP, etc.) for any configuration change. It is one of the most popular IaC tools available.
Configuration files describe to Terraform the components needed to run a single application or your entire datacenter.


👉 What is Infrastructure as Code?
- Infrastructure as code means defining infrastructure in code which, when executed, creates, builds, or provisions infrastructure (servers, routers, and more) based on the written code. Or you can say IaC is a method of managing and provisioning computer systems in a data center using machine-readable definition files, rather than physical hardware configuration or interactive configuration tools.

Infrastructure as code (IaC) means managing your IT infrastructure using configuration files.

IaC helps to automate the provisioning of infrastructure, allowing your organization to develop, deploy, and scale your cloud applications at reduced cost, faster, and with less risk.
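As a minimal IaC sketch (the AMI id is the placeholder used later in these notes, and the tag value is illustrative), a single file can declare a server, and terraform apply builds it:

```hcl
# main.tf - declare the desired infrastructure; `terraform apply` builds it
provider "aws" {
  region = "ap-south-1"
}

resource "aws_instance" "web" {
  ami           = "ami-01938d8s003ds88ss"   # placeholder AMI id from the notes
  instance_type = "t2.micro"

  tags = {
    Name = "iac-demo"                       # hypothetical tag
  }
}
```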

👉 What's the difference between Terraform and Ansible?
- Terraform is an infrastructure management tool, while Ansible is mostly used for configuration management. Each can do both jobs, but there are small differences between them.
- Terraform is an open-source, declarative tool, while Ansible supports both declarative and procedural configuration.

👉 What is a provider in Terraform?
- "provider" is a keyword used in Terraform which specifies what cloud environment you will be working on. The value you define for the provider decides where your infrastructure will be provisioned. Based on the provider, plugins are downloaded which interact with the remote system. You must declare the provider while writing the code.

The providers (AWS, Azure, GCP, etc.) are used to provision resources.
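A minimal provider block sketch (the region and profile values are illustrative):

```hcl
provider "aws" {
  region  = "ap-south-1"
  profile = "myprofile"   # assumes a named profile created with `aws configure`
}
```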


Thursday, May 6, 2021

day 3 - Terraform - data reference - data manipulation

 Day3 - Terraform  5-06-2021

data reference / data manipulation

data resources in Terraform
In programming language, you define value and print
x=5
y="Hi"

print(x)
print(y)

"resource" is very important. EC2 is one such resource. Please visit the Terraform docs for cloud-specific resources such as aws_instance for EC2.


How do you define a variable in Terraform?

You write a block with a keyword. Yesterday we used the keyword "resource"; today let's try "variable".


variable "x" {
  type = string    # the type of the value is string
  default = "Terraform Training"    # define the default value
}

# keyword

keyword "name" {
}

Here we will use the keyword "output":

output "myvalue" {
  value = x
}

Let's put it together:
$ cat ec2.tf
variable "x" {
  type = string    # the type of the value is string
  default = "Terraform Training"    # define the default value
}

output "myvalue" {
  value = x
}


Let's run it:

> terraform apply

We got an error: Invalid reference

What this means is: "I have no idea what you are typing."

Let's modify and rerun:

  value = "x"    # put it in double quotes

terraform apply

type yes when prompted

You see output myvalue = "x".
x is printed as-is.

You have to reference x as a variable,

so
value = "var.x"

run it again,

We see myvalue = "var.x".

If you put it inside double quotes, it is printed as-is.

x is a reference; Terraform needs to look up its value, so we have to use curly braces:

value = "${var.x}"

$ cat ec2.tf
variable "x" {
  type = string    # the type of the value is string
  default = "Terraform Training"    # define the default value
}

output "myvalue" {
  value = "${var.x}"
}

run it again

Let's modify:

value = "Hi ${var.x} its cool"

this is called string interpolation


If you want to print any variable, you have to tell Terraform where it is defined:
you have to use the keyword prefix var.

The variables you create are user-defined variables, and var.x looks up the data defined for x.

------------------------------

What's the plan?

Launch an OS: the name of the service is EC2;
     the sub-service is instance.
In Terraform we call an instance a resource.

Say we want to attach a disk to the instance. We can get extra storage from AWS; the service name is EBS.

EBS -> volume -> storage -> hard disk (block device)

EBS is a sub-service of EC2.

In AWS, you create an instance in a region.

Let's go to EC2 -> EBS -> Volumes.
When we create a volume:
- it asks for the size of the volume
- it asks for the availability zone where you want to deploy

1. Launch the OS - EC2
2. Create a volume (disk)
3. Attach the volume to EC2

The biggest problem is that you need to create the hard disk in the same availability zone where you created the EC2 instance.

-> When you launch an OS, AWS automatically assigns (by default) which AZ it launches in.


> mkdir ec2; cd ec2
Let's reference yesterday's code:


> notepad terraform.tf
provider "aws" {

region = "ap-south-1"
access_key = ""
secret_key = ""

}

resource "aws_instance" "os1" {    # os1 is the resource name

ami ="ami-01938d8s003ds88ss"
instance_type = "t2.small"
tags = {
  Name = "My first OS New"
    }
}
> terraform apply


------------------
Go to the docs:
EC2 -> aws_instance -> argument reference
------------------


In yesterday's code, we used access_key and secret_key.

Static credentials are not recommended.

use
> aws configure     # to configure a profile  # google "aws create profile"
> aws configure list-profiles


provider "aws" {

region = "ap-south-1"
profile = "myprofile"

}

resource "aws_instance" "os1" {

ami = "ami...."
instance_type = "t2.micro"
tags = {
  Name = "My OS - II"
  }
}

os1 is a variable where Terraform keeps all the info about the resource.

Note: if you create code in a new folder, you have to initialize it so that it downloads the needed plugins:
> terraform init
> terraform plan
> terraform apply


How do we get the availability zone name so that we can attach the volume in the same zone?


All the attribute info is stored in the 'aws_instance.os1' variable.
we can use output function to print the value.

This is how we printed the value.

$ cat ec2.tf
variable "x" {
  type = string    # the type of the value is string
  default = "Terraform Training"    # define the default value
}

output "myvalue" {
  value = "${var.x}"
}


so we can do it

output "os1" {
   value = aws_instance.os1
}

Let's put it together:

provider "aws" {

region = "ap-south-1"
profile = "myprofile"

}

resource "aws_instance" "os1" {

ami = "ami...."
instance_type = "t2.micro"
tags = {
  Name = "My OS - II"
  }
}

output "os1" {
   value = aws_instance.os1
}

> tf apply

You will see it print out everything;
review the output.

You will see the ami,
the availability zone,

the public_ip.

You can see all the attributes and values here without going to AWS.

You can log in to the AWS console and EC2, and verify the IP, AZ, and more.

They are the same.

print the public IP

edit the code and add the entry below

output "my_public_ip_is" {
# output "my_availability_zone_is" {

# availability_zone
# value = aws_instance.os1.availability_zone
value = aws_instance.os1.public_ip
}

> terraform apply

go to document
click on ec2
- Resources
- Data sources

click on data source
click on aws_instance

look for argument reference


Now we have to create a volume and attach it,
but we have to create it in a specific AZ.


go to docs,
Go to EC2 -> Resources and search for "ebs volume":
aws_ebs_volume
Click and look at the example.

resource "aws_ebs_volume" "st1" {

  availability_zone = "us-west-2a"    # review the zone
  size              = 10

  tags = {
    Name = "New Harddisk"
  }
}

We can't hardcode the value; use a reference:

resource "aws_ebs_volume" "st1" {

  availability_zone = aws_instance.os1.availability_zone    # no quotes: it's a reference
  size              = 10

  tags = {
    Name = "New Harddisk"
  }
}


Put it together and run:
> tf apply

Go to AWS EBS; you will see the volume.



-------------------
resource "aws_instance" "os1" {

ami = "ami...."
instance_type = "t2.micro"
tags = {
  Name = "My OS - II"
  }
}

output "my_az_is" {

  value = aws_instance.os1.availability_zone
}

resource "aws_ebs_volume" "st1" {

  availability_zone = aws_instance.os1.availability_zone
  size              = 10

  tags = {
    Name = "New Harddisk"
  }
}

output "o2" {
  value = aws_ebs_volume.st1.id
}


Now, attach.

Your OS and hard disk need to be in the same AZ, same region.

One more resource we are going to add.

Go to the docs and look for the resource which helps attach the volume:

aws_volume_attachment
Review the example.

Go to the argument reference;
you have to specify the device name.

# Step 3
resource "aws_volume_attachment" "ebs_att" {
  device_name = "/dev/sdh"
  volume_id   = aws_ebs_volume.st1.id
  instance_id = aws_instance.os1.id
}


> tf apply

Dynamically create the OS and the volume, and attach them.

Now go to your AWS console -> go to the instance; you will see the instance with the added disk.

Destroy:

destroy the attachment and the volume, shut down the instance, and destroy it.

One click creates (tf apply);
one click destroys the entire infrastructure (tf destroy).





Wednesday, May 5, 2021

Day2 - Terraform - Creating an EC2 instance using terraform

 github.com/zealvora/terraform-beginner-to-advanced-resource

👉 What is the Terraform registry...?
- The Terraform registry enables the distribution of Terraform modules, which are reusable configurations.
In fact, it is the place where all the information about the service providers and modules is stored.
All the provider information is stored in the Terraform registry. We can get examples, code, and documentation details for all supported applications.

👉 What is the role of the terraform init command...?
- The terraform init command is used to initialize the working directory containing Terraform code (or config files); on the basis of that code, it downloads the respective provider plugin, like aws, gcp, etc. This is the first command you run after writing your Terraform configuration code.
To use a provider or module from the registry, simply add it to your configuration and run "terraform init"; Terraform will download anything it needs automatically.

👉 What are providers and resources in Terraform...?
- Providers are cloud platforms/applications that provide Infrastructure-as-a-Service, e.g. AWS and GCP, that are supported by Terraform. They come with plugins which are used to interact with the remote system.
The providers are the ones who provide their services to us.

Resources are the sub-services provided by the cloud providers, such as VPC, EC2, and EBS. Resources are the most important element in Terraform: each resource block describes one or more infrastructure objects. Terraform resources are implemented by provider plugins. The Terraform registry is the main directory of publicly available Terraform providers.

👉 What is the difference between imperative and declarative language...?
- An imperative language is like a procedural language, where the steps play an important role, whereas in a declarative language we focus more on the final output than on its steps.
So you can say an imperative language describes the control flow of the computation and remembers the state, while declarative programming expresses the logic of the computation without tracking the program's state.
That being said, in an imperative style you specify all the steps needed to perform the activity; even if the code was executed earlier, running it again goes through the whole build process. Such code may lack intelligence. In a declarative language, you describe the infrastructure you want and the tool works out what to do. Once you execute the code, it does everything it needs to do, but if you rerun it with no changes, it executes nothing: it is intelligent enough to tell you that the already-executed code has no new changes.

 day2 - Terraform 5-5-2021

TF -> AWS (cloud) -> Provision server or other job

Code
-----
-----
-----
-----

write instruction (program)
- automation

Go to the site below to find the providers TF supports

https://registry.terraform.io

- browse providers
- select aws


go to your windows machine
> mkdir terraform-ws
> cd terraform-ws
> mkdir basic; cd basic
> notepad first.tf
provider "aws" {}

We are telling Terraform we want to do something on AWS.
- It detects aws, auto-detects the provider, and downloads the plugin.
- If you run this program again in the future, you don't have to download it again.


> dir
> terraform init    # you run this command the first time
It automatically downloads the plugin.

> cd .terraform\providers\registry.terraform.io\hashicorp\aws\3.38.0\windows_amd64
You will find the plugin in this place.
> dir

---------------------------------
something to know about aws
- Go to AWS, create an account, and copy the secret key and access key
- AWS works on region
- EC2 - Instance/OS/VM/server (Azure also has a similar name)
  - inside the server, you have sub-services
    (volume, network, load balancer, security, and more)
    (for an OS you need RAM/CPU/HD/network...)
  In Terraform terms, these sub-services are called resources.

- lets go to ec2 dashboard and
- click on instance
- click on launch new instance
- select OS (select Amazon Linux 2) -> copy the ami-id...
- instance type - copy the type name t2.micro
- next configure instance detail, leave default
- next -> tag -> name=os3
- next security (leave default)
- click and launch
- proceed without a key pair (default setting)
These are the basic steps, and we collect them. We have to convert these steps into Terraform code.

Everything you do manually/graphically, you have to translate every single step into code.

How do we know what keywords Terraform uses?
First we collect the required fields, then go to the Terraform docs.

go to aws providers.
click on terraform aws documentation
click on AWS provider

on the right side, you see authentication
- static
you will see an example..


Go to EC2 resources on the left side.
- Go through and look for aws_instance.
- Click on it; you will see
  Resource: aws_instance
It says what it provides; go through the examples.

You are looking for the arguments:
- look for AMI
  - it says ami is a required keyword
  - public_ip_address is optional
  - check which fields are required and which are optional

- go down to tags
  - it's an optional field as well

You can also click on the examples on the right side to find more.

Go to AWS IAM and create a user (give it power-user access).
-
-----------------------------------------------

Back to the basics of TF
> notepad terraform.tf

provider "aws" {

region = "ap-south-1"
access_key = "ABCHDDHHFHIE77lhHHGSLSLDH"
secret_key = "AJKLFJKLJDKJFKDJI3338kjjh"

}

resource "ec2 instance" {

image id ="ami-01938d8s003ds88ss"
instance type = "t2.micro"
tags Name = os4


}
 
-------------------------
now, correct the terms

> notepad terraform.tf
provider "aws" {

region = "ap-south-1"
access_key = ""
secret_key = ""

}

resource "aws_instance" "os1" {    # os1 is the resource name

ami ="ami-01938d8s003ds88ss"
instance_type = "t2.micro"
tags = {
  Name = "My first OS RHEL"
    }
}
 
Note: tags is a map (dictionary) and is defined in key/value format.
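Since tags is a map, it can hold any number of key/value pairs; a sketch with hypothetical extra keys:

```hcl
# tags is a map: multiple key/value pairs are allowed (extra keys are illustrative)
resource "aws_instance" "os1" {
  ami           = "ami-01938d8s003ds88ss"
  instance_type = "t2.micro"

  tags = {
    Name    = "My first OS RHEL"
    Project = "training"          # hypothetical extra tags
    Env     = "dev"
  }
}
```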

Terraform creates and also manages the resources.

state: current/desired state

Now, run this code

> terraform init
> terraform apply    # execute the code

apply checks the code, plans it, and executes.
This will create an instance on AWS.

It will prompt you; just type yes and press enter.

Go to the AWS console; you will see the instance running.



declarative language


Let's go ahead and make a change to the code.

> notepad terraform.tf
provider "aws" {

region = "ap-south-1"
access_key = ""
secret_key = ""

}

resource "aws_instance" "os1" {    # os1 is the resource name

ami ="ami-01938d8s003ds88ss"
instance_type = "t2.micro"
tags = {
  Name = "My first OS New"
    }
}
> terraform apply
@prompt say: yes

It will change the Name tag.

Let's say we want to change t2.micro to another type. Can we? Yes. How?

In the console, go to the instance -> Actions -> change instance type:
it's grayed out.

To change it, you have to shut down the instance first, and then change the instance type.


But with Terraform, let's just change the instance type:


> notepad terraform.tf
provider "aws" {

region = "ap-south-1"
access_key = ""
secret_key = ""

}

resource "aws_instance" "os1" {    # os1 is the resource name

ami ="ami-01938d8s003ds88ss"
instance_type = "t2.small"
tags = {
  Name = "My first OS New"
    }
}
> terraform apply




Updating gitlab-ee and gitlab-runner

 1. Download software
gitlab-ee
gitlab-runner

2. Copy software to repo server and create a repo
# cd /software/repo
createrepo

3. Check for update
# yum --enablerepo=* clean all
# yum search gitlab
# yum check-update

# yum versionlock list
# yum versionlock delete gitlab-ee gitlab-runner
# yum versionlock list

4. Update the software
# yum update -y
# yum versionlock add gitlab-ee gitlab-runner

-----------------------------------------------

$ eval $(ssh-agent)
$ ssh-add ~kamal/.ssh/id_rsa

$ ansible -i host-list all -o -a "rpm -q gitlab-ee" | grep CHANGED

This command lists all the systems which have gitlab-ee installed. You may have servers in test and prod.

$ ansible -i host-list all -o -a "rpm -q gitlab-runner" | grep CHANGED

The runner is a CI/CD program like Jenkins. You will have a couple of servers and need to update them as well.

Once you identify the list of servers, coordinate with developers/operation folks for downtime to implement the change.



Tuesday, May 4, 2021

Killing Zombie Processes

 
Zombie process

A zombie process is a dead process which was not cleaned up properly, so it still has an entry in the process table. It occurs when a child process has finished executing but its exit status has not yet been read by the parent process; the parent still has to collect the exit status of the child. It is good practice to clean up/kill defunct processes.
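As a hedged demo (not from the original post) of how a zombie appears: the parent below never calls wait(), so its exited child lingers as <defunct> for a couple of seconds.

```shell
#!/usr/bin/env bash
# The inner shell starts a short-lived child, then exec-replaces itself with
# `sleep 3`. sleep never calls wait(), so when the child exits it stays in
# the process table in state Z until `sleep 3` itself exits.
bash -c 'sleep 0.1 & exec sleep 3' &
sleep 1
# List any defunct entries currently in the process table
ps -eo stat,ppid,pid,comm | awk '$1 ~ /^Z/'
```

Once the parent (`sleep 3`) exits, init adopts and reaps the zombie, so the entry disappears on its own here; with a long-lived parent you would signal or kill it, as the steps below show.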


1. How to find zombie process?

$ top -b -n1 | grep Z
or
$ ps aux | grep Z
or
$ ps aux | grep def

2. Find parent process of zombie process
$ ps -A -o stat,ppid | egrep '[zZ]' | awk '{print $2}' | uniq | xargs ps -p
or
$ ps axo stat,ppid,pid,comm | grep -w defunct

3. send SIGCHLD process to parent process
$ kill -s SIGCHLD <PPID>
or
$ ps axo stat,ppid,pid,comm | grep -w defunct | awk '{print $2}' | uniq | xargs kill -9
$ top -b -n1 | grep Z
or
kill -9 <PPID>

to kill

$ ps -af | grep def

kill the parent process

$ kill -9 <P-PID>


Git branch show detached HEAD
