Task to do,
1. Install STIG viewer 2.14
2. Install SCC 5.4
3. Install benchmark
Directly copied from
https://www.reddit.com/r/sysadmin/comments/8f10mc/command_sysunconfig_alternative_for_centos_7/
posted by user: ckozler
Pasted here just for reference.
Command sys-unconfig alternative for Centos 7
-------------------------
I just made my own. I hated that they removed it.
#!/usr/bin/env bash
#
# This is the sys_prep script
# It will clear out all non-relevant information for a new VM
#
# 1. Force logs to rotate and clear old.
/usr/sbin/logrotate -f /etc/logrotate.conf
/bin/rm -f /var/log/*-20* /var/log/*.gz
#
# 2. Clear the audit log & wtmp.
/bin/cat /dev/null > /var/log/audit/audit.log
/bin/cat /dev/null > /var/log/wtmp
#
# 3. Remove the udev device rules.
/bin/rm -f /etc/udev/rules.d/70*
#
# 4. Remove the traces of the template MAC address and UUIDs.
/bin/sed -i '/^\(HWADDR\|UUID\|IPADDR\|NETMASK\|GATEWAY\)=/d' /etc/sysconfig/network-scripts/ifcfg-e*
#
# 5. Clean /tmp out.
/bin/rm -rf /tmp/*
/bin/rm -rf /var/tmp/*
#
# 6. Remove the SSH host keys.
/bin/rm -f /etc/ssh/*key*
#
# 7. Remove the root user's shell history.
/bin/rm -f /root/.bash_history
unset HISTFILE
#
# 8. Set hostname to localhost
/bin/sed -i "s/HOSTNAME=.*/HOSTNAME=localhost.localdomain/g" /etc/sysconfig/network
/bin/hostnamectl set-hostname localhost.localdomain
#
# 9. Remove rsyslog.conf remote log server IP.
/bin/sed -i '/1.1.1.1/d' /etc/rsyslog.conf
# 10. Clear out the osad-auth.conf file to stop duplicate IDs
#
rm -v /etc/sysconfig/rhn/osad-auth.conf
rm -v /etc/sysconfig/rhn/systemid
# clean hosts
#hostname_check=$(hostname)
#if ! [[ "${hostname_check}" =~ "local" ]]; then
# cp -v /etc/hosts /etc/hosts.sys_prep
# sed -i "s,$(hostname),,g" /etc/hosts
# sed -i "s,$(hostname -s),,g" /etc/hosts
#fi
rm -v /root/.ssh/known_hosts
#
# 11. Shutdown the VM. Poweron required to scan new HW addresses.
poweroff
#
--------------------------------
Reply by
clearing_sky
-Use Cloud-Init. It should handle all the things you need to do, but a kickstart would probably be a better idea if you are doing a physical deployment.
How to remove BIOS password
HP ProLiant
- Reboot your machine
- When it prompts for the password, enter the password followed by a / (slash), no space.
- A "password deleted" message pops up.
Say your password is BrIGht(!)Side@Edg3;
you enter the password as: BrIGht(!)Side@Edg3/
Day 8 - terraform
$ cat providers.tf
$ cat variables.tf
variable "x" {
default = "t2.micro"
type = string
}
output "01" {
value = var.x
}
> tf apply -var="x=t2.medium"
$ cat aws-main.tf
resource "aws_instance" "web" {
ami = "ami-012...."
instance_type = var.x
}
$ cat variables.tf
# variable "x" {
# default = "t2.micro"
# type = string
# }
variable "x" {}
output "01" {
value = var.x
}
> tf apply -var="x=t2.medium"
in this case, either you pass the value or terraform will prompt for it
don't change the code; pass the variable.
or you can create a config file with key/value pairs.
the name must be exactly terraform.tfvars; it's a fixed file name.
> notepad terraform.tfvars
#x="value"
x="t2.micro"
here, you can come back and change it;
we change the value of x.
> tf apply
if you don't define it, terraform will prompt; since we defined it, it will not prompt but will grab the value from the config file.
> notepad terraform.tfvars
#x="value"
x="t2.large"
> tf apply -var="x=t2.micro"
Let's go to the variable file
$ cat variables.tf
# variable "x" {
# default = "t2.micro"
# type = string
# }
variable "x" {}
variable "y" {
type = bool
}
output "01" {
value = var.y
}
comment out the content of the aws_main.tf file.
> tf apply
it asks you for true or false
boolean is good if you create a condition
if you want to do something, check the condition: if the condition is true, do this; if not, do the else part.
ternary operator syntax:
condition ? value1 : value2
if the condition is true, it returns value1, else value2
Note: true and false must be in lower case
output "01" {
value = var.y ? "Sam" : "Ram"
}
if true it will return Sam else Ram.
> cd google
provider "aws" {
region = "ap-south-1"
profile = "default"
}
provider "google" {
project = "myproj"
region = "asia-south1"
}
$ modify the variable file
> tf apply
> tf plan
webapp -> Testing (QA Team) -> Prod
$ cat aws_main.tf
resource "aws_instance" "web" {
ami = "ami-01..."
instance_type = var.x
count = 5
}
cat variables.tf
append:
variable "istest" {
type = bool
}
change the count value of 5
$ cat aws_main.tf
resource "aws_instance" "web" {
ami = "ami-01..."
instance_type = var.x
# count = 5
count = var.istest ? 0 : 1 # if istest is true, count is 0 and this instance will not run
}
gcp_main.tf
resource "google_compute_instance" "os1" {
name = "os1"
machine_type = var.mtype
zone = "asia-south1-c"
count = var.istest ? 1 : 0
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
}
> tf apply -var="istest=true"
cat variables.tf
append:
variable "istest" {
type = bool
}
variable "azaws" {
default = [ "ap-south-1a", "ap-south-1b", "ap-south-1c" ]
type = list(string)
}
#output "0s2" {
# value = var.azaws
$ cat aws_main.tf
resource "aws_instance" "web" {
ami = "ami-01..."
instance_type = var.x
availability_zone = var.azaws[1]
count = 1
}
> tf apply
map data type
with a list [ "a", "b", "c" ], the system assigns the index
you may want your own keys instead;
say "a" should be looked up by an id
variable "azaws" {
default = [ "ap-south-1a", "ap-south-1b", "ap-south-1c" ]
type = list
}
variable "types" {
type = map
default = { # maps in curly braces
us-est-1 = "t2.nano",
ap-south-1 = "t2.micro",
us-west-1 = "t2.medium"
}
}
# output "os3" {
# value = var.types
# value = var.types["ap-south-1"]
# }
> tf apply
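Tying the two together, a minimal sketch of how the map could pick the instance type for a region. The `lookup()` call and the `region` variable are my additions for illustration, not from the class notes:

```hcl
variable "region" {
  type    = string
  default = "ap-south-1"
}

variable "types" {
  type = map(string)
  default = {
    us-east-1  = "t2.nano"
    ap-south-1 = "t2.micro"
    us-west-1  = "t2.medium"
  }
}

# look up the type for the chosen region; the third argument
# is the fallback if the key is missing from the map
output "picked_type" {
  value = lookup(var.types, var.region, "t2.micro")
}
```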
when your signature becomes autograph you are something ...
In SELinux, Context is set up on a file and you can see it with ls -lZ command.
$ ls -lZ
In the output you can see the context fields: SELinux user (_u), role (_r), type (_t), and the security level (e.g. s0).
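To see what the pieces of a context string mean, here is a small shell sketch that splits a sample context into its fields with awk. The sample value is illustrative, not taken from a real system:

```shell
# a sample SELinux context of the kind `ls -Z` prints (illustrative value)
ctx="unconfined_u:object_r:admin_home_t:s0"

# split the colon-separated context into its four fields
echo "$ctx" | awk -F: '{
    print "user:  " $1   # SELinux user   (_u)
    print "role:  " $2   # role           (_r)
    print "type:  " $3   # type           (_t)
    print "level: " $4   # security level (e.g. s0)
}'
```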
--------------------------
Yum repo set up
- AppSteam | BaseOS | YUM Repository Setup RHEL8
BaseOS - Required Software
# mkdir /rpms
Go to your DVD and copy the AppStream and BaseOS directories, which contain the important packages.
# cd /mnt; ls
# cp -aRv BaseOS AppStream /rpms
# cd /etc/yum.repos.d
# vi appstream.repo
[AppStream-Repo]
name="App Sreadm Repo"
baseurl=file:///rpms/AppStream/
enabled=1
gpgcheck=0
# cat baseos-repo.repo
[BaseOS-Repo]
name="BaseOS Repo"
baseurl=file:///rpms/BaseOS/
enabled=1
gpgcheck=0
# yum clean all
# yum repolist
Register all packages (signed with the GPG key)
# rpm --import RPM-GPG-KEY-redhat-release
What proof of the journey shall I give you, O crowd of the road?
When the destination was reached, the blisters on my feet were gone.
As I walked, the blisters on my feet asked me:
how far away have the travellers set up the gathering?
# IPv6 set up
# nmcli connection show
# ip addr
# nmcli conn modify eno123 ipv6.addresses "<IPv6-address>" ipv6.method manual
# ip a
# nmcli con down eno123
# nmcli con up eno123
# ping6 <IPv-6_address>
# vi /etc/sysconfig/network-scripts/ifcfg-eno123
IPV6INIT="yes"
# nmcli connection modify eno123 ipv6.addresses "<IPv6-address>" ipv6.method manual
# nmcli conn down eno123
# nmcli conn up eno123
# ip addr
# ping6 <ipv-6 address>
Configure MAPS email alerts on SAN switch (Brocade Switch)
- deliver your alerts to your email
Pre-req tasks
- Complete domain server configuration (AD-Account set up)
- Set up the relay server used to forward email from the switch (optional)
> mapsconfig --show
> dnsconfig --add -domain -switch1
> dnsconfig --show
> relayconfig --config -rla_ip -rla_dname
> relayconfig --show # verify
1. Display config info
> mapsconfig --show
2. Set the config
> mapsconfig --emailcfg -address root@everest.local
> mapsconfig --testmail -subject "Test" -message "Test message from `hostname`"
> mapsconfig --actions raslog,email,sw_critical,sw_marginal,sfp_marginal
or
> mapsconfig --action email
> mapsconfig --emailcfg -address
> mapsconfig --show
send test message (email)
> mapsconfig --testmail -subject Test Mail -message Testing
3. Verify the change
> mapsconfig --show
https://www.amazon.jobs/en/principles
We use our Leadership Principles every day, whether we're discussing ideas for new projects or deciding on the best approach to solving a problem. It is just one of the things that makes Amazon peculiar.
Leaders start with the customer and work backwards. They work vigorously to earn and keep customer trust. Although leaders pay attention to competitors, they obsess over customers.
Leaders are owners. They think long term and don’t sacrifice long-term value for short-term results. They act on behalf of the entire company, beyond just their own team. They never say “that’s not my job."
Leaders expect and require innovation and invention from their teams and always find ways to simplify. They are externally aware, look for new ideas from everywhere, and are not limited by “not invented here." As we do new things, we accept that we may be misunderstood for long periods of time.
Leaders are right a lot. They have strong judgment and good instincts. They seek diverse perspectives and work to disconfirm their beliefs.
Leaders are never done learning and always seek to improve themselves. They are curious about new possibilities and act to explore them.
Leaders raise the performance bar with every hire and promotion. They recognize exceptional talent, and willingly move them throughout the organization. Leaders develop leaders and take seriously their role in coaching others. We work on behalf of our people to invent mechanisms for development like Career Choice.
Leaders have relentlessly high standards — many people may think these standards are unreasonably high. Leaders are continually raising the bar and drive their teams to deliver high quality products, services, and processes. Leaders ensure that defects do not get sent down the line and that problems are fixed so they stay fixed.
Thinking small is a self-fulfilling prophecy. Leaders create and communicate a bold direction that inspires results. They think differently and look around corners for ways to serve customers.
Speed matters in business. Many decisions and actions are reversible and do not need extensive study. We value calculated risk taking.
Accomplish more with less. Constraints breed resourcefulness, self-sufficiency, and invention. There are no extra points for growing headcount, budget size, or fixed expense.
Leaders listen attentively, speak candidly, and treat others respectfully. They are vocally self-critical, even when doing so is awkward or embarrassing. Leaders do not believe their or their team’s body odor smells of perfume. They benchmark themselves and their teams against the best.
Leaders operate at all levels, stay connected to the details, audit frequently, and are skeptical when metrics and anecdote differ. No task is beneath them.
Leaders are obligated to respectfully challenge decisions when they disagree, even when doing so is uncomfortable or exhausting. Leaders have conviction and are tenacious. They do not compromise for the sake of social cohesion. Once a decision is determined, they commit wholly.
Leaders focus on the key inputs for their business and deliver them with the right quality and in a timely fashion. Despite setbacks, they rise to the occasion and never settle.
Find files and directories with SUID, SGID, or sticky bit set
# find ./ -type d -perm -1000 -exec ls -ld {} \;
# find ./ -type f \( -perm -2000 -o -perm -4000 \) -exec ls -ld {} \;
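A quick way to sanity-check those predicates without touching the real filesystem is a throwaway directory. All the file names below are made up for the demo:

```shell
# scratch area with one SUID file and one sticky directory
tmp=$(mktemp -d)
touch "$tmp/suid_file" "$tmp/plain_file"
mkdir "$tmp/sticky_dir"
chmod 4755 "$tmp/suid_file"   # set-UID bit
chmod 1777 "$tmp/sticky_dir"  # sticky bit

# same predicates as above: -perm -4000 (SUID), -perm -1000 (sticky)
find "$tmp" -type f -perm -4000   # lists suid_file only
find "$tmp" -type d -perm -1000   # lists sticky_dir only

rm -rf "$tmp"
```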
$ ansible -i hostlist all -o -a "ls -l /etc/resolv.conf" | sort | grep -v "Mar 20 10"
day6-terraform on google - GCE
> gcloud init
create account on gcloud
Instance/VM
compute service - EC2
In your account, first:
1. Create a project.
- create service
- create database service
- deploy web UI/CLI/API
billing and everything is organized per project
1. click on your account and
create a new project (through the web GUI or the CLI)
Name: myterraform
remember to note the ID, which is also the project ID. Choose the name wisely; it's very important.
now you will be in the project.
click on your account and you will see which project you are in.
on the left side you will see menus for the different services, and you can launch them directly from there.
click on Compute Engine -> VM instances
-> Compute Engine API
click on Enable to activate it.
google: gcp region list
global locations
Code:
Goal: Step by step
-------------------
step1
Login: auth: user/pass
code: Key
Project ID: myterraform
region: asia-south1
=========================
go to terraform : search provider: google
- google provider
- authentication
look for credential
> mkdir day5; cd day5
> notepad main.tf
provider "gogole" {
project = "myterraform"
region = "asia-south1"
}
look for authentication part
- we use access key to login and store in a file.
- create service account
- create key and store in a file
IAM and Admin
-> service account -> You will see your existing account.
-> this account is associated some policy or role
-> click on it and click on Keys
-> click on add key - create new key
-> select JSON (by default)
-> Click on Create and will be downloaded on your local PC.
Note: Keep it secret
go to documents/terraform-training/google/gce-key.json
> notepad main.tf
provider "gogole" {
project = "myterraform"
region = "asia-south1"
credentials = "gce-key.json"
}
> terraform init
downloads the plugins for provider
go back to the compute service, and note how to launch it manually
- Compute Engine -> VM instance -> Create a new instance
give os name: os2
vm: os2
region: select Mumbai
instance type - machine family -> by default e2-medium
machineType: e2-medium
boot disk (AMI) - click on Change and select the OS you want
specify boot-disk image name: debian
boot disk type and size: 10G
go to network settings
click on Management / Networking
VPC/Network: default (Network Interface)
convert it into Terraform code.
google for "google service account" and look for an example.
> notepad main.tf
provider "google" {
project = "myterraform"
region = "asia-south1"
credentials = "gce-key.json"
}
resource "google_compute_instance" "os1" {
name = "os1"
machine_type = "e2-medium"
zone = "asia-south1"
}
tags =
look in the documentation for which keywords are optional and which are mandatory
> terraform plan
errors: account_id is required
google for: gcloud cli command
go to Cloud SDK, click on Install SDK, and click on the Cloud SDK installer
after you install, the gcloud command is available.
> gcloud init
helps you to login
select 2
enter project name:
it will configure on your local system.
now, try again
> terraform plan
> notepad main.tf
provider "google" {
project = "myterraform"
region = "asia-south1"
credentials = "gce-key.json"
}
resource "google_compute_instance" "os1" {
name = "os1"
machine_type = "e2-medium"
zone = "asia-south1"
}
> tf plan
errors: need to specify a network interface; a boot disk is required
copy the example code
> notepad main.tf
provider "google" {
project = "myterraform"
region = "asia-south1"
credentials = "gce-key.json"
}
resource "google_compute_instance" "os1" {
name = "os1"
machine_type = "e2-medium"
zone = "asia-south1-c"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
network_interface {
network = "default"
}
}
> tf plan
> tf apply
error: if any, review it
otherwise, the instance should be created.
go to the console and see if you can see the new instance.
> tf destroy --help
destroy specific resource
> tf destroy -target=resource
>
>
How much snow has fallen on the Himalayan peaks,
how much, I cannot say;
how much love I hold for you,
I cannot tear open my heart to show.
when logic ends, magic begins ..
सालको पातको टपरी हुनी
केटा: भाको एउटा जाले रुमाल फूल भरेको छैन,
आ… होएऽऽऽ भाको एउटा जाले रुमाल फूल भरेको छैन,
के लैजान्छौ सानू तिम्ले पिरतीमा बैना, पिरतीमा बैना…
एऽऽऽ हे बरै…
सालको पातको टपरी हुनी, हे बरै नहुनी सल्लैको;
तिम्रो मायाँ कति छ कति, हे बरै नलाउ पछि हल्लैको; - 2
केटी: मुटु कहाँ बैना मार्यो मै सोझीलाई जलाई,
आ… होएऽऽऽ मुटु कहाँ बैना मार्यो मै सोझीलाई जलाई,
कलेजीमा झुपडी देउ रुमाल नदेउ मलाई, रुमाल नदेउ मलाई…
केटी: पानीको रिस उर्ने माछी काँ'होला र सानू,
आ… होएऽऽऽ पानीको रिस उर्ने माछी काँ'होला र सानू,
डालीको रिस ठू्ल्ने गरे मैले त के जानूँ, मैले त के जानूँ…
केटी: छेउमै छ नि लालि फूल प्याउली किन हेर्छौ,
आ… होएऽऽऽ छेउमै छ नि लालि फूल प्याउली किन हेर्छौ,
जुगै काट्ने मन छ मलाई मन किन फेर्छौ, मन किन फेर्छौ…
केटा: आउँदा जाँदा देउरालीमा फूल चराउँछु मैले,
आ… होएऽऽऽ आउँदा जाँदा देउरालीमा फूल चराउँछु मैले,
तिम्लाई सम्झी चढाको फूल ओइलिँदैन कहिल्यै, ओइलिँदैन कहिल्यै…
केटी: जोडी पंक्षी देखेपनी घर मुनिको बोटमा,
आ… होएऽऽऽ जोडी पंक्षी देखेपनी घर मुनिको बोटमा,
तारा खस्दा तिम्रै नाम आउँछ मेरो ओंठमा, आउँछ मेरो ओंठमा…
एऽऽऽ हे बरै…
=======================================
केटा: भाको एउटा जाले रुमाल फूल भरेको छैन,
आ… होएऽऽऽ भाको एउटा जाले रुमाल फूल भरेको छैन,
के लैजान्छौ सानू तिम्ले पिरतीमा बैना, पिरतीमा बैना…
एऽऽऽ हे बरै… सालको पातको टपरी हुनी, हे बरै नहुनी सल्लैको;
तिम्रो मायाँ कति छ कति, हे बरै नलाउ पछि हल्लैको;
हा… हैऽऽऽ सालको पातको टपरी हुनी, हे बरै नहुनी सल्लैको;
तिम्रो मायाँ कति छ कति हे बरै नलाउ पछि हल्लैको ।
तिम्रो मायाँ कति छ कति हे बरै नलाउ पछि हल्लैको ।
केटी: मुटु कहाँ बैना मार्यो मै सोझीलाई जलाई,
आ… होएऽऽऽ मुटु कहाँ बैना मार्यो मै सोझीलाई जलाई,
कलेजीमा झुपडी देउ रुमाल नदेउ मलाई, रुमाल नदेउ मलाई…
एऽऽऽ हे बरै… सालको पातको टपरी हुनी, हे बरै नहुनी सल्लैको;
तिम्रो मायाँ कति छ कति, हे बरै नलाउ पछि हल्लैको;
हा… हैऽऽऽ सालको पातको टपरी हुनी, हे बरै नहुनी सल्लैको;
तिम्रो मायाँ कति छ कति हे बरै नलाउ पछि हल्लैको ।
तिम्रो मायाँ कति छ कति हे बरै नलाउ पछि हल्लैको ।
-------------------------
केटी: पानीको रिस उर्ने माछी काँ'होला र सानू,
आ… होएऽऽऽ पानीको रिस उर्ने माछी काँ'होला र सानू,
डालीको रिस ठू्ल्ने गरे मैले त के जानूँ, मैले त के जानूँ…
एऽऽऽ हे बरै… सालको पातको टपरी हुनी, हे बरै नहुनी सल्लैको;
तिम्रो मायाँ कति छ कति, हे बरै नलाउ पछि हल्लैको;
हा… हैऽऽऽ सालको पातको टपरी हुनी, हे बरै नहुनी सल्लैको;
तिम्रो मायाँ कति छ कति हे बरै नलाउ पछि हल्लैको ।
तिम्रो मायाँ कति छ कति हे बरै नलाउ पछि हल्लैको ।
केटी: छेउमै छ नि लालि फूल प्याउली किन हेर्छौ,
आ… होएऽऽऽ छेउमै छ नि लालि फूल प्याउली किन हेर्छौ,
जुगै काट्ने मन छ मलाई मन किन फेर्छौ, मन किन फेर्छौ…
एऽऽऽ हे बरै… सालको पातको टपरी हुनी, हे बरै नहुनी सल्लैको;
तिम्रो मायाँ कति छ कति, हे बरै नलाउ पछि हल्लैको;
हा… हैऽऽऽ सालको पातको टपरी हुनी, हे बरै नहुनी सल्लैको;
तिम्रो मायाँ कति छ कति हे बरै नलाउ पछि हल्लैको ।
तिम्रो मायाँ कति छ कति हे बरै नलाउ पछि हल्लैको ।
--------------------------
केटा: आउँदा जाँदा देउरालीमा फूल चराउँछु मैले,
आ… होएऽऽऽ आउँदा जाँदा देउरालीमा फूल चराउँछु मैले,
तिम्लाई सम्झी चढाको फूल ओइलिँदैन कहिल्यै, ओइलिँदैन कहिल्यै…
एऽऽऽ हे बरै… सालको पातको टपरी हुनी, हे बरै नहुनी सल्लैको;
तिम्रो मायाँ कति छ कति, हे बरै नलाउ पछि हल्लैको;
हा… हैऽऽऽ सालको पातको टपरी हुनी, हे बरै नहुनी सल्लैको;
तिम्रो मायाँ कति छ कति हे बरै नलाउ पछि हल्लैको ।
तिम्रो मायाँ कति छ कति हे बरै नलाउ पछि हल्लैको ।
केटी: जोडी पंक्षी देखेपनी घर मुनिको बोटमा,
आ… होएऽऽऽ जोडी पंक्षी देखेपनी घर मुनिको बोटमा,
तारा खस्दा तिम्रै नाम आउँछ मेरो ओंठमा, आउँछ मेरो ओंठमा…
एऽऽऽ हे बरै… सालको पातको टपरी हुनी, हे बरै नहुनी सल्लैको;
तिम्रो मायाँ कति छ कति, हे बरै नलाउ पछि हल्लैको;
हा… हैऽऽऽ सालको पातको टपरी हुनी, हे बरै नहुनी सल्लैको;
तिम्रो मायाँ कति छ कति हे बरै नलाउ पछि हल्लैको ।
तिम्रो मायाँ कति छ कति हे बरै नलाउ पछि हल्लैको ।
Day5 - Terraform 5-13-2021
day5 - Terraform - plan, refresh, apply, desired state, code, tfstate, destroy
notepad a.tf
terraform looks for files with the .tf extension and executes them
> notepad web.tf
provider "aws" {
region = "ap-south-1"
profile = "default"
}
resource "aws_instance" "webos1" {
ami = "ami-010aff33ed5991201"
instance_type = "t2.micro"
security_groups = [ "webport-allow" ]
key_name = "terraform_key"
tags = {
Name = "Web Server by TF"
}
}
resource "null_resource" "nullremote1" {
connection {
type = "ssh"
user = "ec2-user"
private_key = file("C:/Users/Vimal Daga/Downloads/terraform_key.pem")
host = aws_instance.webos1.public_ip
}
provisioner "remote-exec" {
inline = [
"sudo yum install httpd -y",
"sudo yum install php -y",
"sudo systemctl start httpd",
"sudo systemctl start httpd"
]
}
}
resource "aws_ebs_volume" "example" {
availability_zone = aws_instance.webos1.availability_zone
size = 1
tags = {
Name = "Web Server HD by TF"
}
}
resource "aws_volume_attachment" "ebs_att" {
device_name = "/dev/sdc"
volume_id = aws_ebs_volume.example.id
instance_id = aws_instance.webos1.id
force_detach = true
}
resource "null_resource" "nullremote2" {
connection {
type = "ssh"
user = "ec2-user"
private_key = file("C:/Users/Vimal Daga/Downloads/terraform_key.pem")
host = aws_instance.webos1.public_ip
}
provisioner "remote-exec" {
inline = [
"sudo mkfs.ext4 /dev/xvdc",
"sudo mount /dev/xvdc /var/www/html",
]
}
}
resource "null_resource" "nullremote4" {
connection {
type = "ssh"
user = "ec2-user"
private_key = file("C:/Users/Vimal Daga/Downloads/terraform_key.pem")
host = aws_instance.webos1.public_ip
}
provisioner "remote-exec" {
inline = [
"sudo yum install git -y",
"sudo git clone https://github.com/vimallinuxworld13/gitphptest.git /var/www/html/web"
]
}
}
resource "null_resource" "nullremote5" {
provisioner "local-exec" {
command = "chrome http://13.232.50.58/web/index.php"
}
}
=====================================
break this file
> notepad provider.tf
provider "aws" {
region: = "ap-south-1"
profile = "default"
}
> notepad ec2.tf
resource "Aws_instance" "webos1" {
ami = "amo .."
instal
tags = {
Name = web
}
> terraform init
it goes through all the files there and downloads the plugins for the providers, such as aws or azure
> attach_block.tf
resource "aws_volume_attachment" "ebs_att" {
device_name = "/dev/sdc"
volume_id = aws_ebs_volume.example.id
instance_id = aws_instance.webos1.id
force_detach = true
}
files are read in alphabetical order, but TF will automatically handle the ordering. This concept is called inferring the resources (Terraform's intelligence).
this means it works out which part to run first and which second.
> terraform plan
> tf apply
when you run this code the first time, it will create terraform.tfstate
there are two states
1. Desired state
2. Current state
1. Desired state
whatever you are looking for/want, you write in code - your desired state
2. Current state
what is there right now, currently running or existing on the system
when you run tf apply, it goes and checks whether the resource already exists. If it's not there, it applies the code.
This concept is called Idempotence.
> tf apply
you will see the message - Infrastructure is up-to-date
if no change is needed.
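Idempotence can be illustrated outside Terraform with a tiny shell sketch. The `apply` function and the state file are made up for the demo; the point is that the action only runs when the current state differs from the desired state:

```shell
# "apply" writes the desired state only if the current state differs
apply() {
  desired="v1"
  state_file="$1"
  if [ -f "$state_file" ] && [ "$(cat "$state_file")" = "$desired" ]; then
    echo "Infrastructure is up-to-date"
  else
    echo "$desired" > "$state_file"
    echo "applied change"
  fi
}

f=$(mktemp -u)        # unused temp file name = no current state yet
apply "$f"            # first run creates the state
apply "$f"            # second run sees no difference
rm -f "$f"
```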
Login to your aws cloud
- check how many instances running
- check on what instance type is running.
first you run plan and then apply (behind the scenes, plan runs when you run apply)
- when you run plan, it goes and logs in to aws, retrieves all the info, and stores it locally in the terraform.tfstate file, which is basically the state of the service.
it stores everything;
open and review the file.
> notepad output.tf
output "myip" {
value = aws_instance.webos1.public_ip
}
> tf apply
you will see the public IP.
open the file terraform.tfstate, search for public_ip, and navigate through it.
Note: if you use Terraform, always use terraform. do not do automation and manual.
it will make a mess..
for any change you have to make, make sure to modify the code.
say one of the ec2 instances has an issue; someone may go to the console and manually change the config, but since it's not updated in the code, you will have a problem.
say, let's go to the aws console and review: the instance you have is of instance type t2.small,
but in your code you have t2.micro.
instance_type = "t2.micro"
current state (manual change): t2.small
desired state (code): t2.micro
> tf apply
our code goes to compare against the current state and it will find the conflict.
before apply, use refresh: it goes to the cloud and refreshes the current state; after that, the local terraform.tfstate file is updated
> tf refresh
> notepad terraform.tfstate
> tf apply
it will change from small to micro
since your code has micro, it will change
either do everything manually, or everything through automation.
Note: never modify the tfstate file manually.
refresh, plan, apply, desired state, code, tfstate
add null resources
> notepad apache.tf
> tf destroy # removes all the resources
it goes and refreshes and updates the tfstate file locally.
> tf apply
- apache
- hard disk
- providers
4 resources are going to be applied.
1. Launch the instance
2. ssh -> null resource -> install php and apache
3. create storage and attach the storage
we have one bug here.
let's destroy our infrastructure again.
> notepad apache.tf
provisioner "remote-exec" {
inline = [
"sudo yum install httpd -y",
"sudo yum install php -y",
"sudo systemctl start httpd",
"sudo systemctl start httpd"
]
}
}
file names are processed in lexical order by default...
apache.tf
resource "null_resource" "nullremote1" {
depends_on = [
aws_volume_attachment.ebs_att
]
.........
}
google for terraform depends_on
meta-argument
one resource depends on another resource.
> tf destroy
> tf apply
validate your code
> terraform validate
gives you the line number where you have the issue
👉 Why Terraform is used for?
- Terraform is an infrastructure provisioning tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions. It manages infrastructure using HashiCorp Configuration Language (HCL) code (also known as IaC, or Infrastructure as Code) to automate the provisioning of the infrastructure. It uses a very simple syntax and can provision infrastructure across multiple clouds and on-premises data centers. We can safely and efficiently provision/re-provision infrastructure (on VMware, AWS, GCP, etc.) for any configuration change. It is one of the most popular tools available for IaC - Infrastructure as Code.
Configuration files describe to Terraform the components needed to run a single application or your entire datacenter.
👉 What is Infrastructure as Code ?
- Infrastructure as Code means defining infrastructure in code which, when executed, creates/builds/provisions infrastructure (servers, routers, and more) based on the written code. Or you can say IaC is a method of managing and provisioning computer systems in a data center using machine-readable definition files, rather than physical hardware configuration or interactive configuration tools.
Infrastructure as Code (IaC) means managing your IT infrastructure using configuration files.
IaC helps automate the provisioning of infrastructure, allowing your organization to develop, deploy, and scale cloud applications at reduced cost, faster, and with less risk.
👉 What's the difference between Terraform and ansible?
- Terraform is an infrastructure management tool, while Ansible is mostly used for configuration management. Each can do both jobs, but there are small differences between them.
- Terraform is an open-source, declarative tool, while Ansible supports both declarative and procedural configuration.
👉 What is provider in Terraform?
- provider is the keyword used in Terraform that specifies which cloud environment you will be working on. The value you define for provider decides where your infrastructure will be provisioned. Based on the provider, plugins are downloaded which interact with the remote system. You must declare the provider while writing the code.
The providers (AWS, Azure, GCP, etc.) are used to provision resources.
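A minimal provider block in modern style, as a sketch. The required_providers section and version constraint are my addition (newer Terraform releases expect them; the notes above use the older bare style):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

# the provider block decides where and how the infrastructure is provisioned;
# the matching plugin is downloaded by `terraform init`
provider "aws" {
  region  = "ap-south-1"
  profile = "default"
}
```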
Day3 - Terraform 5-06-2021
data reference / data manipulation
data resources in terraform
In programming language, you define value and print
x=5
y="Hi"
print(x)
print(y)
resource is very important. EC2 is one of the resource. Please visit terraform doc for cloud specific resources such as aws ec2.
how do you define a variable in terraform?
You write a block with a keyword. Yesterday we used the keyword resource; today let's try variable.
variable "x" {
type = string # value of string is string
default = "Terraform Training" # define the default value.
}
# keyword
keyword "name" {
}
here we will use the keyword output
output "myvalue" {
value = x
}
lets put together
$ cat ec2.tf
variable "x" {
type = string # value of string is string
default = "Terraform Training" # define the default value.
}
output "myvalue" {
value = x
}
lets go for an example
> terraform apply
we got an error: Invalid reference
what this means is "I have no idea what you are typing"
let's modify and rerun
value = "x" # put in double quotes
terraform apply
type yes when prompted
you see output myvalue = "x"
x is printed as-is.
you have to reference x as a variable
so
value = "var.x"
run it again,
we see myvalue = "var.x"
if you put it inside double quotes, it prints as-is.
x is a reference; to look up the value we have to use curly braces.
value = "${var.x}"
$ cat ec2.tf
variable "x" {
  type    = string               # the value's type is string
  default = "Terraform Training" # define the default value
}
output "myvalue" {
  value = "${var.x}"
}
run it again
let's modify
value = "Hi ${var.x} its cool"
this is called string interpolation
if you want to print any variable, you have to specify where it is defined.
you have to use the var. prefix.
variables you create are user-defined variables, and var.x looks up the data defined for x.
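A small sketch pulling these pieces together (note: in Terraform 0.12 and later a bare reference like var.x also works; the ${} form is needed when the variable sits inside a longer string):

```hcl
variable "x" {
  type    = string
  default = "Terraform Training"
}

# bare reference (0.12+ syntax)
output "plain" {
  value = var.x
}

# interpolation inside a larger string
output "interpolated" {
  value = "Hi ${var.x} its cool"
}
```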
------------------------------
what's the plan
launch OS : name of the service is EC2
sub-service: instance
in terraform we call an instance a resource.
let's say we want to attach a disk to the instance; we can get extra storage from aws, and the service name is EBS.
EBS -> volume -> storage -> hard disk (block device)
EBS is a sub-service of EC2.
in AWS, you create an instance in a region
let's go to EC2 -> EBS -> volume
when we create a volume
- it will ask for the size of the volume
- it will ask you the availability zone where you want to deploy.
1. Launch OS - EC2
2. Create a volume (disk)
3. Attach the volume to EC2.
The biggest problem is that you need to create the hard disk in the same AZ/region where you created the EC2 instance.
-> when you launch the OS, AWS automatically assigns the AZ (by default) where it will launch.
> mkdir ec2; cd ec2
let's reference yesterday's code
> notepad terraform.tf
provider "aws" {
  region     = "ap-south-1"
  access_key = ""
  secret_key = ""
}
resource "aws_instance" "os1" {   # os1 is the resource name
  ami           = "ami-01938d8s003ds88ss"
  instance_type = "t2.small"
  tags = {
    Name = "My first OS New"
  }
}
> terraform apply
------------------
go to the doc,
EC2 -> aws_instance -> Argument Reference
------------------
using yesterday's code, we used access_key and secret_key.
static credentials are not recommended.
use
> aws configure   # to configure a profile # google "aws create profile"
> aws configure list-profiles
provider "aws" {
  region  = "ap-south-1"
  profile = "myprofile"
}
resource "aws_instance" "os1" {
  ami           = "ami...."
  instance_type = "t2.micro"
  tags = {
    Name = "My OS - II"
  }
}
os1 is a variable which keeps all the info about the resource's attributes.
note: if you create code in a new folder, you have to initialize it so that it downloads the plugins needed,
> terraform init
> terraform plan
> terraform apply
how do we get the AZ name so that we can attach the volume in the same AZ?
all attribute info is stored in the 'aws_instance.os1' variable.
we can use output function to print the value.
This is how we printed the value.
$ cat ec2.tf
variable "x" {
  type    = string               # the value's type is string
  default = "Terraform Training" # define the default value
}
output "myvalue" {
  value = "${var.x}"
}
so we can do it
output "os1" {
value = aws_instance.os1
}
let's put it together
provider "aws" {
  region  = "ap-south-1"
  profile = "myprofile"
}
resource "aws_instance" "os1" {
  ami           = "ami...."
  instance_type = "t2.micro"
  tags = {
    Name = "My OS - II"
  }
}
output "os1" {
  value = aws_instance.os1
}
> tf apply
you will see it print out everything
review the output.
you will see ami
availability zone
public_ip
you can see all the variables and values here without going to aws.
you can log in to the aws console and EC2; verify the IP, AZ and more info..
they are the same.
print the public IP
edit the code and add the entry below
output "my_public_ip_is" {
  # to print the availability zone instead:
  # value = aws_instance.os1.availability_zone
  value = aws_instance.os1.public_ip
}
> terraform apply
go to document
click on ec2
- Resources
- Data sources
click on data source
click on aws_instance
look for argument reference
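A data source reads information about infrastructure that already exists instead of creating it. A minimal sketch, assuming a pre-existing instance (the instance ID below is a made-up placeholder):

```hcl
# Look up an EC2 instance that was created outside this code.
data "aws_instance" "existing" {
  instance_id = "i-0123456789abcdef0"   # hypothetical instance ID
}

output "existing_az" {
  value = data.aws_instance.existing.availability_zone
}
```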
now, we have to create a volume and attach it.
but we have to create it in a specific AZ.
go to docs,
go to ec2 -> resource and search for ebs volume
aws_ebs_volume
click and look at the example
resource "aws_ebs_volume" "st1" {
  availability_zone = "us-west-2a"   # review the region/AZ
  size = 10
  tags = {
    Name = "New Harddisk"
  }
}
we can't hardcode the value; use a reference instead
resource "aws_ebs_volume" "st1" {
  availability_zone = aws_instance.os1.availability_zone   # follows the instance's AZ
  size = 10
  tags = {
    Name = "New Harddisk"
  }
}
put together and run
> tf apply
go to aws ebs, you will see the volumes
-------------------
resource "aws_instance" "os1" {
  ami           = "ami...."
  instance_type = "t2.micro"
  tags = {
    Name = "My OS - II"
  }
}
output "my_az_is" {
  value = aws_instance.os1.availability_zone
}
resource "aws_ebs_volume" "st1" {
  availability_zone = aws_instance.os1.availability_zone
  size = 10
  tags = {
    Name = "New Harddisk"
  }
}
output "o2" {
value = aws_ebs_volume.st1.id
}
now, attach
your OS and hard disk need to be in the same AZ, same region.
one more resource we are going to add
go to the docs, look for the resource which helps attach the volume
aws_volume_attachment
review the example
go to the Argument Reference
you have to specify the device name
# Step 3
resource "aws_volume_attachment" "ebs_att" {
  device_name = "/dev/sdh"
  volume_id   = aws_ebs_volume.st1.id
  instance_id = aws_instance.os1.id
}
> tf apply
dynamically create the OS and volume, and attach them..
now, go to your aws console -> go to the instance; you will see the instance with the added disk..
destroy
destroy the attachment and volume, shut down the instance.. and destroy it
one click creates everything (tf apply)
one click destroys the entire infrastructure (tf destroy)
github.com/zealvora/terraform-beginner-to-advanced-resource
👉 What is terraform registry...?
- The Terraform Registry enables the distribution of Terraform modules, which are reusable configurations.
In fact, it is the place where all the information about the service providers and modules is stored.
All the provider information is stored in the Terraform Registry. We can get examples and code/documentation details for all supported applications.
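In recent Terraform versions you point at the registry explicitly with a required_providers block; a sketch (the version constraint is only an example):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"   # short for registry.terraform.io/hashicorp/aws
      version = "~> 3.0"          # example version constraint
    }
  }
}
```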
👉 What is the role of terraform init command...?
- The terraform init command is used to initialize the working directory containing terraform written code (or config files), and on the basis of that code, it will download the respective provider plugin like aws, gcp etc. This is the first command you run after writing your terraform configuration code.
To use a provider or module from the registry, simply add it to your configuration and run "terraform init"; Terraform will download anything it needs automatically.
👉 What are providers and resources in terraform...?
- Providers are cloud platforms/applications that provide Infrastructure-as-a-Service, e.g. AWS and GCP, that are supported by Terraform. They come with plugins which are used to interact with the remote system.
The providers are the ones who provide their services to us.
Resources are the sub-services provided by the cloud providers, such as VPC, EC2 and EBS. Resources are the most important element in Terraform; each resource block describes one or more infrastructure objects. Terraform resources are implemented by provider plugins. The Terraform Registry is the main directory of publicly available Terraform providers.
👉 what is the difference between imperative and declarative language...?
- An imperative language is like a procedural language, where the steps play an important role, whereas in a declarative language we focus more on the final output than on the steps.
So you can say an imperative language describes the control flow of the computation and remembers state, while declarative programming expresses the logic of the computation and has no knowledge of program state.
That being said, in an imperative approach you specify all the steps needed to perform the activity. Even if the code was implemented or executed earlier, running it again repeats the whole build process; such code may lack intelligence. In a declarative language, you give some information about the infrastructure and the tool understands what to do. Once you execute the code, it does everything it needs to do, but if you rerun it with no change in the code, it will not execute anything. It is intelligent enough to tell you that the code you already executed has no new changes.
day2 - Terraform 5-5-2021
TF -> AWS (cloud) -> Provision server or other job
Code
-----
-----
-----
-----
write instruction (program)
- automation
Go to the site below to find the providers TF supports
https://registry.terraform.io
- browse providers
- select aws
go to your windows machine
> mkdir terraform-ws
> cd terraform-ws
> mkdir basic; cd basic
> notepad first.tf
provider "aws" {}   #
we are telling terraform, we want to do something on aws.
- terraform detects aws, auto-detects the provider and downloads the plugin.
- if you run this program again in the future, you don't have to download it again.
> dir
> terraform init # you run this command first time.
it will automatically download the plugin
> cd .terraform\providers\registry.terraform.io\hashicorp\aws\3.38.0\windows_amd64
you will find the plugin in this place
> dir
---------------------------------
something to know about aws
- Go to aws, create an account and copy the secret key and access key
- AWS works on region
- EC2 - Instance/OS/VM/server (azure also has a similar service)
- inside the server, you have sub-services
(volume, network, load balancer, security and more)
(for an OS you need RAM/CPU/HD/Network ....)
in terraform terms, these sub-services are called resources.
- lets go to ec2 dashboard and
- click on instance
- click on launch new instance
- select OS (select amazon linux-2 -> copy ami-id...
- instance type - copy the type name t2.micro
- next configure instance detail, leave default
- next -> tag -> name=os3
- next security (leave default)
- click and launch
- proceed without a key pair (default setting)
These are the basic steps, and we collect them. We have to convert these steps into terraform code.
everything you do manually/graphically, you have to copy every single step into code.
How do we know what keywords they use in terraform?
First we collect the required fields, then go to the terraform docs.
go to aws providers.
click on terraform aws documentation
click on AWS provider
on the right side, you see authentication
- static
you will see an example..
go to ec2 resources on left side
- go through and look for aws instance
- click on it, you will see
Resource: aws_instance
read the description and go through the examples...
you are looking for arguments..
- look for AMI
- it says ami is a required keyword
- public_ip_address is optional.
- look for required fields or optional ones.
- go down to tags
- it's an optional field as well
you can also click on example on right side to find more...
go to aws IAM and create a user (give PowerUserAccess).
-----------------------------------------------
back to the basic on TF
> notepad terraform.tf
provider "aws" {
region = "ap-south-1"
access_key = "ABCHDDHHFHIE77lhHHGSLSLDH"
secret_key = "AJKLFJKLJDKJFKDJI3338kjjh"
}
resource "ec2 instance" {
  image id = "ami-01938d8s003ds88ss"
  instance type = "t2.micro"
  tags Name = os4
}
-------------------------
now, correct the terms
> notepad terraform.tf
provider "aws" {
  region     = "ap-south-1"
  access_key = ""
  secret_key = ""
}
resource "aws_instance" "os1" {   # os1 is the resource name
  ami           = "ami-01938d8s003ds88ss"
  instance_type = "t2.micro"
  tags = {
    Name = "My first OS RHEL"
  }
}
Note: tags is a map (dictionary) and is defined in key/value format.
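Because tags is a map, it can hold several key/value pairs at once; a sketch (the extra tags are hypothetical):

```hcl
tags = {
  Name        = "My first OS RHEL"
  Environment = "training"   # hypothetical extra tag
  Owner       = "student"    # hypothetical extra tag
}
```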
terraform creates and also manages the resources.
state: current/desired state
Now, run this code
> terraform init
> terraform apply # executing the code
apply checks the code, plans it and executes it
this will basically create an instance on AWS.
it will prompt, just type yes and press enter
go to aws console, you will see the instance running..
declarative language
let's go ahead and make a change to the code
> notepad terraform.tf
provider "aws" {
  region     = "ap-south-1"
  access_key = ""
  secret_key = ""
}
resource "aws_instance" "os1" {   # os1 is the resource name
  ami           = "ami-01938d8s003ds88ss"
  instance_type = "t2.micro"
  tags = {
    Name = "My first OS New"
  }
}
> terraform apply
@prompt say: yes
it will change the name..
let's say we want to change t2.micro to another type?
yes, how?
in the console: go to instance -> Actions -> Instance settings -> Change instance type
it is grayed out.
to change it, you have to shut down the instance, then change the instance type.
but with terraform, let's just change the type in the code
> notepad terraform.tf
provider "aws" {
  region     = "ap-south-1"
  access_key = ""
  secret_key = ""
}
resource "aws_instance" "os1" {   # os1 is the resource name
  ami           = "ami-01938d8s003ds88ss"
  instance_type = "t2.small"
  tags = {
    Name = "My first OS New"
  }
}
> terraform apply
1. Download software
gitlab-ee
gitlab-runner
2. Copy software to repo server and create a repo
# cd /software/repo
# createrepo .
3. Check for update
# yum --enablerepo=* clean all
# yum search gitlab
# yum check-update
# yum versionlock list
# yum versionlock delete gitlab-ee gitlab-runner
# yum versionlock list
4. Update the software
# yum update -y
# yum versionlock add gitlab-ee gitlab-runner
-----------------------------------------------
$ eval $(ssh-agent)
$ ssh-add ~kamal/.ssh/id_rsa
$ ansible -i host-list all -o -a "rpm -q gitlab-ee" | grep CHANGED
This command lists all the systems which have gitlab-ee installed. You may have servers in test and prod.
$ ansible -i host-list all -o -a "rpm -q gitlab-runner" | grep CHANGED
runner is a CI/CD program like Jenkins. You will have a couple of servers and need to update them as well.
Once you identify the list of servers, coordinate with developers/operation folks for downtime to implement the change.
Zombie process
A zombie process is a dead process which was not cleaned up properly and so still has an entry in the process table. It occurs when a child process has completed execution but its exit status has not been read by the parent process; the parent is still expected to collect the child's exit status. It is good practice to clean up/kill defunct processes.
1. How to find zombie process?
$ top -b1 -n1 | grep Z
or
$ ps aux | grep Z
or
$ ps aux | grep def
2. Find parent process of zombie process
$ ps -A -ostat,ppid | egrep '[zZ]' | awk '{print $2}' | uniq | xargs ps -p
or
$ ps axo stat,ppid,pid,comm | grep -w defunct
3. send a SIGCHLD signal to the parent process
$ kill -s SIGCHLD <PPID>
or
$ ps axo stat,ppid,pid,comm | grep -w defunct | awk '{print $2}' | uniq | xargs kill -9   # careful: kills the parent processes
$ top -b1 -n1 | grep Z
or
kill -9 <PPID>
to kill
$ ps -ef | grep def
kill the parent process
$ kill -9 <P-PID>