Saturday, June 22, 2024

Git branch shows detached HEAD

1. List your branches

$ git branch

* (HEAD detached at f219e03)

  00


2. Run reset --hard

$ git reset --hard


3. Check the log info

$ git log


4. Check out your master branch (in my case, 00)

$ git branch

* (HEAD detached at f219e03)

  00

$ git checkout 00


$ git branch

* 00
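Note: if you made commits while the HEAD was detached and you want to keep them, create a branch from them before switching back (a general git tip, not part of the original steps; the branch name is just a placeholder):

$ git branch keep-detached-work

$ git checkout 00

$ git merge keep-detached-work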


$ cd ../..

$ git status

On branch master

Your branch is up to date with 'origin/master'.


nothing to commit, working tree clean


$ iac state pull

$ lscrypt unseal

$ lscrypt read -d dominos template.yml

templates:

    host: https://exp.local/landscapes/template-bas.git

    username: john

    password: jfe7uhTY-mVPJhtdWV

$ cd landscape-scripts/

$ more env-config.yaml

Wednesday, February 23, 2022

Day2 - Terraform - class notes

2/22/2022


Recap

Terraform Lifecycle


- init

- plan

- apply

- destroy


file: main.tf


we can add info about,

- provider 

- variables

- resources


It maintains the desired state.

How?

- it maintains the terraform.tfstate file.

- By default it is stored on the local machine.

- You should store it in a remote location (such as an S3 bucket or blob storage) - see the backend sketch below.
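As a minimal sketch (the bucket name and key below are placeholders, not values from class), the remote location is declared with a backend block:

terraform {
  backend "s3" {
    bucket = "my-tfstate-bucket"   # placeholder bucket name
    key    = "dev/terraform.tfstate"
    region = "us-west-2"
  }
}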


Terraform  authentication

- aws configure


1. Create 2 AWS instances, t2.micro

ansible-host

ansible-client


2. Login to your system

$ sudo -i

# cd example

# cat backend.tf

# cat main.tf 


we want to separate our instances at the environment level:

dev, test, prod - different environment types


# vi variables.tf

variable "image_id" {
  type = string
}


google

variables in terraform 


# vi main.tf


terraform {
  # ...
  required_version = ">= 0.14.9"
}


provider "aws" {
  profile = "default"
  region  = "us-west-2"
}


resource "aws_instance" "app_server" {
  ami           = var.image_id
  instance_type = "t2.micro"

  tags = {
    name = "demo"
  }
}



# vi dev.tfvars

image_id = "ami-830c94e3"


# cp dev.tfvars test.tfvars

# cp dev.tfvars prod.tfvars


here,

dev.tfvars => variables.tf => main.tf



# vi variables.tf

variable "image_id" {
  type    = string
  default = ""  # define the default value here
}


# vi main.tf


# cp main.tf provider.tf


# vi provider.tf


keep only the terraform and provider blocks;

remove everything else



# tf plan -out dev.plan -var-file dev.tfvars


# vi main.tf

resources "ami_instance" "app_server" {

  ami   = var.image_id

  instance_type = "t2.micro"

  tags = {

      name = var.tag_name

}



# variables.tf

variable "image_id" {
  type    = string
  default = ""
}

variable "tag_name" {
  type    = string
  default = ""  # ....  [8:00]
}



# vi dev.tfvars

image_id = "ami-84..."
tag_name = "example Demo"


# terraform plan -out dev.plan -var-file dev.tfvars



If the AWS CLI is not configured,

go to AWS: user -> security credentials -> delete the old key and create a new one


copy 


# aws configure

access key: *******

secret key: *******


# tf apply dev.plan 


It will create the "example Demo" instance. Log in to the AWS console and check...


check out this url for example ..


https://github.com/qfitsolutions/aws-terraform-course/blob/master/EC2withJenkins/ec2_jenkins.tf


google: for other platforms,

azure terraform example

use: azure cli


login/authentication

- create terraform file


provider "azurerm"


for Google Cloud: use the google provider in the same way ...


You can configure more than one instance in the same config file.



For example, if you want to create an instance in a different region,

get the AMI ID for that specific region (see the sketch below).
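A minimal sketch of one way to do this (the alias, the second AMI variable, and the names are assumptions, not from class): add an aliased provider for the other region and point the extra instance at it.

provider "aws" {
  alias  = "east"              # hypothetical alias
  region = "us-east-1"
}

resource "aws_instance" "app_server_east" {
  provider      = aws.east
  ami           = var.east_image_id   # region-specific AMI, assumed variable
  instance_type = "t2.micro"
}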


# vi variable.tf

# ec2_jenkins.tf


# terraform init


error: invalid quoted type constraints ..


variable "region" {
  type    = string   # remove the double quotes around the type
  default = "us-east-1"
}


read the error carefully. change, try and learn ..


# tf init


warnings sometimes can be ignored ..




# sh abc.sh

# vi abc.sh

#!/bin/bash
yum update -y
yum install httpd.x86_64 -y
service httpd start   # and enable it on boot
echo "<h1> Deployed via terraform </h1>" | sudo tee /var/www/html/index.html

yum install java.. -y
wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key
yum upgrade -y

yum install fontconfig java-11-openjdk -y
yum install jenkins -y
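A script like this is usually handed to the instance at boot via user_data; a minimal sketch (the resource name and variable are assumptions):

resource "aws_instance" "jenkins" {
  ami           = var.image_id
  instance_type = "t2.micro"
  user_data     = file("abc.sh")   # bootstrap script from above
}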



# terraform destroy


==================


Ansible Roles

- Reusable components


on the playbook you declare the roles:


jenkins playbook:

  roles:
    - nginx
    - jenkins
    - java


nexus playbook:

  roles:
    - java
    - nexus
    - nginx


on terraform

we use resources


resources # ec2 ... vpc, security group, s3, eks ...


reusable components


parameters ...



The next level of abstraction is the module ..



A module contains multiple tf files.


main.tf

module:

  vpc

  eks

  ecs


terraform template (abc.tf) => module => ec2_instance.tf



eks.tf

cf.tf
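A rough sketch of how a template calls a module (the source path and inputs here are assumptions, not the course repo):

module "app_server" {
  source        = "./modules/ec2_instance"   # assumed local module path
  ami_id        = var.ami_id
  instance_type = "t2.micro"
}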


-------


# vi main.tf


download the code ..


https://github.com/easyawslearn/terraform-aws-instance-template


$ cat variables.tf

variable "ami_id" {}

variable "region" {}

variable "instance_type" {}

variable "tag" {

default="Testing"

}


$ cat main.tf

provider "aws" {

  region = "${var.region}"

}


resource "aws_instance" "web" {

  ami           = "${var.ami_id}"

  instance_type = "${var.instance_type}"


  tags = {

    Name = "${var.tag}"

  }

}


$ cat output.tf

output "instance_ip" {

  value = ["${aws_instance.web.public_ip}"]

}



# terraform init


# terraform plan


You can get ready-made modules from the Terraform Registry.


google: vpc terraform module (see the sketch below)
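For instance, a minimal sketch of pulling the community VPC module from the registry (the version pin and inputs are assumptions):

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 3.0"        # assumed version constraint

  name = "demo-vpc"
  cidr = "10.0.0.0/16"
}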


----------------


Note: if you have dev.tfvars, test.tfvars, prod.tfvars


the single state file only reflects whatever you applied last, so you can only cleanly destroy the resources from that last tfvars file.


This is the reason you use terraform workspaces ..


# terraform workspace list


# terraform init --reconfigure



# terraform workspace list


# terraform init -migrate-state


# ls -ltr


# rm -rf .terraform

# terraform init


# terraform workspace list

# terraform workspace new dev

# terraform workspace new test

# tf workspace list

# tf workspace select dev

# tf workspace select prod
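Putting it together, a typical flow pairs each workspace with its tfvars file (a sketch, not commands run in class):

# tf workspace select dev

# tf plan -out dev.plan -var-file dev.tfvars

# tf apply dev.plan

Each workspace keeps its own state, so dev/test/prod no longer overwrite each other.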


you can use different providers such as azure, gcp, k8s, docker


Continuous monitoring (CM)

- troubleshooting

- high availability

- infrastructure / application health checks



Monitoring is of 2 types:

- application

    - logs

    - status


- infra:

    CPU, memory, users, open ports ..






Log collection

----------------

n1/d1 (agent), n2/d2, n3/d3  =>  stored data (DB)  =>  dashboard (log analyzer)


ELK / Splunk

Prometheus / Grafana


https://prometheus.io/docs/introduction/overview/



Grafana is the dashboard and Prometheus is the metrics collector (ELK/Splunk are the log stacks).


Tomorrow

--------

ELK stack


Wednesday, February 16, 2022

Day1 - Terraform - class notes

2/16/2022


Infrastructure as Code (IaC)

Terraform


Recap

- Language Construction

- Ansible Tower


google "ansible module list"

cloud module -> creates resources


but why is this tool not used for Infrastructure as Code?


- you are able to create a complete environment. You can create:

  - s3

  - instance

  - lambda function


Infra as code

-------------

IAC allow you to,

- create environment

- create resources

- update resource

- destroy resources


Once created and declared, you can perform the same task multiple times without much extra configuration.

- We want a stable environment

- Desired state (idempotent - if the package is there, do not do anything; if the directory is there, don't delete and recreate it)


Once created, it should not change.


How can we create resources?

- Using aws console -> graphical

- cli -> aws commands => aws s3 create name ...


- cloud-formation

- terraform


AWS APIs [DSL]

----------------


declare methods with params

- method

- params



We will create Resources => APIs => aws/azure/gcp


on ansible

apt:

  name: tree

  state: latest


apt:

  name: tree

  state: absent


If you run the task below repeatedly, it will keep creating new resources. It's hard to manage.

ec2:

  name: test

  type: t2.micro



Configuration Management (ansible)

- works only with software

- cannot work at the hardware level, but can install any software

- cannot be used as a replacement for IaC tools


Infrastructure as Code (IaC - Terraform)

----

1. can create/destroy hardware architecture

2. can install software while bootstrapping servers

3. should not be used as a replacement to CM tools.


Both of them complement each other.



What is Terraform?

- it is an open-source infrastructure-as-code software tool created by HashiCorp. It enables users to define and provision datacenter infrastructure using a high-level configuration language known as HashiCorp Configuration Language (HCL), or optionally JSON.


- Opensource

- MultiCloud support (AWS, Azure, GCP)

- Easy to use

- Maintains desired state


Architecture

-------------


google "terraform architecture" 


====================

rest-API



ec2 => abc

-> next execution should skip

-> already exist.



-> say Jenkins has a job and you want to run it

we create an endpoint URL -> Jenkins job + token; you can trigger it through a Python script, a curl command, or through the browser


curl -i http://hostname:port/job/<job-name>/build?token=423456677


AWS has some resources (APIs)

- terraform calling API to create resources on AWS.


=====================


terraform code


main.tf


Terraform operation

-------------------

4 kinds: terraform lifecycle


init =>   plan  =>  apply => destroy



Developer -> write code (tf) -> plan -> Apply -> Destroy


- init

- plan (what are you going to create: ec2, lambda function -> 2 resources)

- Apply ( whatever plan you selected, it will be sitting on AWS platform)

- Destroy (Once your requirement is completed, you can destroy your resources)


if you have the terraform utility, you can create resources the same way as with ansible.
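In command form, the lifecycle is simply (plan file name assumed, matching the examples later in these notes):

# terraform init

# terraform plan -out eg.plan

# terraform apply eg.plan

# terraform destroy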


Terraform

- local machine

- jenkins agent

- VM

- Docker



LAB

- Create an aws (terraform) instance - t2.micro


google - terraform setup


1. install package

2. Verify the installation


$ terraform --help


# terraform 

command not found


go to the installation docs and get the steps:

# sudo apt-get install -y gnupg software-properties-common curl


# curl -fsSL <url>

# apt-add-repository ...


# update and install terraform

# apt-get update && sudo apt-get install terraform




# which terraform

/usr/bin/terraform


# terraform --help


now, we can get help here with the command


main commands

init

validate

plan

apply and

destroy


other commands

console

fmt

get

graph

import

login/logout

output

show

....



We want to create resource now.


learn.hashicorp.com/terraform


get started ...

To work with AWS, set up the following

1. Install aws cli

    $ apt install awscli

2. Set up account

    $ aws configure # go to aws console and delete the old key and create new access key/secret key

   you have to specify the region and the output type (json)


  # cd .aws; ls -ltr

  # cat credentials

  This is where your credentials are stored.

  # aws s3 ls


3. Install/set up terraform


# mkdir eg; cd eg

# vi main.tf


# providers are defined here

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }
}


provider "aws" {

  profile = "default"

  region = "us-west-2"

}


# resource declaration

resource "aws_instance" "app-server" {
  ami           = "ami-830c94e3"
  instance_type = "t2.micro"

  tags = {
    Name = "ExampleAppServerInstance"
  }
}


# cat ~/.aws/config

[default]

region = us-west-2

output = json


google aws resource instance creation

resource : aws_instance


Once you write your code, run the command below,

# terraform init


- initializing provider plugins

- required plugins pulled.



It will show what it is going to do and what resources it is going to create.

# terraform plan


At the bottom of the output, it suggests using the -out option to save this plan.



# terraform plan -out eg.plan


saving the plan into eg.plan


now, we can apply this plan with terraform


# ls -ltr ; cat eg.plan # it's a binary file, can't read it.

# terraform apply eg.plan


apply complete. resources: 1 added, 0 changed, 0 destroyed

go to aws console and you should be able to see the instance is being created


# ls -ltr

# cat terraform.tfstate


Try again

what happens if you run,

# terraform plan --out eg.plan


refreshing ..

No changes. Your infrastructure matches the configuration.


# tf apply eg.plan

apply complete. Resources: 0 added, 0 changed, 0 destroyed.


do not apply directly; first plan, then apply.

let's modify


# vi main.tf

change instance_type = "t2.micro"

change the tag to "demo"


# terraform plan --out eg.plan

refreshing ..


plan: 0 to add, 1 to change, 0 to destroy


# tf apply eg.plan

not adding or deleting the resource, only changing it


now, lets change the region to us-west-1

# vi main.tf


it will delete the original resource and re-create it in the other region.


# tf plan -out eg.plan

plan: 1 to add, 0 to change, 0 to destroy


# tf apply eg.plan


error: Error launching source instance: InvalidAMIID.NotFound


The reason is, we changed the region at the provider level, but the AMI ID belongs to the old region.


can we declare the 

vi main.tf


go to different region and go to amis and get the id from there.


ami = "ami-0123333"


# tf plan 

# tf apply



# tf destroy # it will destroy all the resources in that specific region.


Note: maintain terraform.tfstate file.


# tf destroy

no resource found..


in this case, you have to go to the other region and delete manually,

or modify the main.tf file (set the region and ami value back) and run:


# tf plan -out eg.plan

# tf apply eg.plan


# tf destroy --auto-approve


# ls -ltr 

terraform.tfstate


this file is on the local system. you need to keep it in a safe location.


if it is lost or modified, resources may be recreated on the next apply.

you'd better store it in a central location.


for that case, we choose storage location like google drive.


s3 or blob


google for backend terraform


select available backends

- local

- remote

- azure

- etcd

- gcs

- http

- s3

...


lets take s3 example


terraform {

  backend "s3" {

    bucket = "mybuck.."  # got to s3 -> create "mybuck.."

    key    = "dev"  # it will create there

    region = "us-west-2" # specify the region

  } 

}



by default it may be publicly accessible. make it private if needed *** verify


# tf plan -out dev.plan

it complains that you have to run tf init.

reason: initial configuration of the required backend "s3"


it is a new backend, so we have to initialize it

# tf init


# tf plan -out dev.plan

plan: 1 to add, 0 to change, 0 to destroy

# tf apply dev.plan


now, the tfstate file will be stored in the s3 bucket.


# tf destroy


defining multiple resources ..


https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance
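A small sketch of defining more than one resource in the same file (the second resource and all names are assumptions):

resource "aws_instance" "app_server" {
  ami           = "ami-830c94e3"
  instance_type = "t2.micro"
}

resource "aws_s3_bucket" "app_bucket" {
  bucket = "my-demo-bucket-12345"   # bucket names must be globally unique
}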




tomorrow,

- what is variables

- tfvars


Q. how to use k8s, docker, azure?

google, read the docs ..


run the following job, read, write and understand ...

https://github.com/qfitsolutions/aws-terraform-course/blob/master/EC2withJenkins/ec2_jenkins.tf


read about what is elk

===============================




Tuesday, February 15, 2022

Day3 - ansible - class notes

2/15/2022
Recap from 
Ansible
 - agentless
 - ssh
 - python
 - ansible
 - yaml
 - playbook -> task/handlers/vars/hosts/roles
 - Roles
Variable Scope
 - vars/default => default
 - group => env variables
 - Host => certs
 - Extra => env type
Nested Groups as part of inventory



google
github ansible jenkins
- get some github link
github.com/qf-devops/ansible-gjenkins
# git clone github.com/qf-devops/ansible-gjenkins
# cd ansible-jenkins
# vi inventory.yml
---
- hosts" "jenkins
....
Dry run
# ansible-playbook -i inventory.yml jenkins.yml --syntax-check
Dry run - test the script for errors.
# ansible-playbook -i inventory.yml jenkins.yml --check



install docker 
- find playbook where docker installation steps are available.
# cd roles/ansible-java8./tasks/
vi main.yml
- name: install oracle java
  with_items: ...
  # define the correct version of java here

install jdk
openjdk-11-jdk-headless
google for 
jenkins install ubuntu ansible

https://www.drpraize.com/2020/07/install-jenkins-using-ansible-playbook_29.html
https://blog.knoldus.com/how-to-install-jenkins-using-ansible-on-ubuntu/
https://github.com/jcalazan/ansible-jenkins
https://medium.com/nerd-for-tech/installing-jenkins-using-an-ansible-playbook-2d99303a235f
https://middlewaretechnologies.in/2021/04/how-to-install-and-configure-jenkins-using-ansible-playbook.html
https://gist.github.com/seancallaway/39281aa0f2ab05b8e79d205f384d0ee2
https://www.techguru.cloud/post/automated-jenkins-installation/
# ap -i ansible-jenkins/inventory.yml jenkins.yml

# ap -i ansible-jenkins/inventory.yml jenkins.yml -e "mygroup=jenkins"
# systemctl start jenkins
# ps -ef | grep -i jenkins
how to get status of jenkins in ansible

# vi jenkins.ini
[jenkins]
<private_IP>
192.168.10.20
[local]
localhost ansible_connection=local
# vi jenkins.ini
# cat jenkins.ini
# cat jenkins.yml
$ ap -i jenkins.ini jenkins.yml -e "mygroup=local"


# vi docker.yml
---
- hosts: local
  tasks:
  - name: install docker
    apt:
      name: docker.io
      state: latest
# ap -i jenkins.ini docker.yml
# docker
# ap -i jenkins.ini docker.yml

Review 
Assignment:
  kubespray -> ansible playbook
Tower
What is Language construction?
if you want to handle, print data, execute multiple time, what you are going to use it?
we will use,
- conditional statements => if-else/switch/case
- iterators (loops): for, while
- variables: int, char, float, or key:value pairs
- functions/methods/blocks: a block of statements, a set of lines we reuse: block of code/repetitive code
- list/array/dict: data handling, collection of data
- exception handling: if something goes wrong, I want a workaround, try {} catch {} statement. ignore and continue ...
- Threads: seq/parallel: like executors. three jobs can run in parallel.
if you want to run multiple request parallelly,
- we use threads
we control our data, request using above programming construct.

Now, lets think as an ansible point of view,
in ansible how condition is used: ansible: if
we will use when 
google
when condition in ansible
tasks:
  - name: Shut down CentOS 6 systems
    command: /sbin/shutdown -t now
    when:
      - ansible_facts['distribution'] == "CentOS"
      - ansible_facts['distribution_major_version'] == "6"
if the two conditions under when match, the command will execute.
block statement:
control multiple modules
-----------------------
tasks:
  - name: Shut down CentOS 6 systems
    command: /sbin/shutdown -t now
    when:
      - ansible_facts['distribution'] == "CentOS"
      - ansible_facts['distribution_major_version'] == "6"
  - name: Install tomcat
    yum:
      name: tomcat
      state: latest
    when:
      - ansible_facts['distribution'] == "CentOS"
      - ansible_facts['distribution_major_version'] == "6"
  - name: Copy abc.txt
    copy:
      src: abc.txt
      dest: /tmp
    when:
      - ansible_facts['distribution'] == "CentOS"
      - ansible_facts['distribution_major_version'] == "6"
this is not good since we are repeating the when conditions.
can we use a block statement?

---
tasks:
  - block:
      - yum:
          name: tomcat
          state: latest
    when:
      - ansible_facts['distribution'] == "CentOS"
      - ansible_facts['distribution_major_version'] == "6"
-------------
  with_items:
  - tomcat
  - tree
  - jenkins
  - maven
------------
- yum:
    name: "{{ item }}"
    state: latest
  with_items:
    - tomcat
    - tree
    - jenkins
    - maven
------------------
even if the first task fails, I want to execute the remaining ones:
ignore_errors: true

tasks:
  - block:
      - name: shutdown centos 6 systems
        command: /sbin/shutdown -t now
        ignore_errors: true
        tags: test  # tag your task, to control the execution
      - name: install packages
        yum:
          name: "{{ item }}"
          state: latest
        with_items:
          - tomcat
          - tree
          - jenkins
          - maven
        tags: dev
      - name: copy abc.txt to centos 6 systems
        copy:
          src: abc.txt
          dest: /tmp
        when:
          - ansible_facts['distribution'] == "CentOS"
          - ansible_facts['distribution_major_version'] == "6"
 
# ap -i hosts abc.yml --tags dev
ansible-playbook -i hosts abc.yml --tags dev  # only tasks with matching tags are executed, so you can run a particular module
-------------
with 
------------


Tower => ami
awx
dashboard
execute
schedule
t2.large machine - vm
clone the repo
git clone https://github.com/vytec-ca/awxsetup/blob/
ap -i inventory ansetup.ml
install nagios
github.com/qf-devops/ansible-nagios-example
ap -i hosts install/nagios.yml

kubespray 
https://kubernetes.io/docs/setup/production-environment/tools/kubespray/
https://docs.google.com/document/d/1_ZQ8XN1dfcaXkBJHPMIlZqoW4_BePbrCU_LW3yvDTAs/edit

Day2 - Ansible class notes

Ansible - class notes - 2/14/2022 by - Ravi 

Recap from last class,

Ansible
-------
Why do they use ansible?
- it's a configuration management tool to save time.
- Remote execution of configuration information
- Agentless (ssh communication), push mechanism.
- modules


google

ansible modules
go to - module index.


-----------------
install new instance
apt update
# apt 

# cat nginx.yaml

- hosts: remote
  vars:
    packagename: "nginx"
  tasks:
  - name: add repo nginx
  - name: install package nginx
    apt:
      name: "{{ packagename }}"
      state: latest
  - name: create a dir tutorial
  - name: copy index.html file
  ....
  handlers:
  - name: restart service nginx


# ansible-playbook -i hosts nginx.yaml

Everything came green, ok, ok..
The reason was - 
it was already done

in this example, we use a variable.

Get the ip and paste on the browser, you will see the page.


ansible roles:
we are going to create multiple roles, and those roles are called through playbook.

tomcat, docker, IAM are separate roles,

$ site.yaml

---
- hosts: remote

  roles:
  - nginx

# how do you declare 

# cd /etc/ansible
# mkdir roles;cd roles
# ansible-galaxy init nginx

lets install tree pkg through ansible.

# cd ../hosts
[remote]
192.168.10.100

# ansible -i hosts localhost -m apt -a "name=tree state=latest"

# tree roles/nginx
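For reference, ansible-galaxy init creates roughly this role skeleton:
roles/nginx/
  defaults/
  files/
  handlers/
  meta/
  tasks/
  templates/
  tests/
  vars/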

# vi hosts
[remote]
192.168.10.100

[local]
localhost

# you can define all tasks in the tasks section  [00:30]

# vi tasks/main.yaml



---
- name: restart nginx
  service: name="{{ packagename }}" state=restarted
# vi tasks


# vi vars/main.yml

---
packagename: nginx


# cd /etc/ansible; ls 
tutorial
index.html
nginx.yml
roles
hosts
ansible.cfg
# mv index.html tutorial roles/nginx/files



# cp nginx.yml site.yml
# cat site.yml
---
- hosts: remote
  roles:
  - nginx


# ansible-playbook -i hosts site.yml

roles under site.yml are called from the roles directory

# ansible-playbook -i hosts site.yml

# cd /etc/ansible; cat site.yml
---
- hosts: remote
  roles:
  - nginx

nginx is going to be under roles directory
cd roles/nginx; ls -ltr
# cd tasks; ls -ltr
main.yml

# cat main.yml

we can use ansible-galaxy to get roles

# vi nginx/vars/main.yml
---
packagename: nginx

# vi nginx/tasks/main.yml

# cd ..
# ansible-playbook -i hosts site.yml

https://github.com/qf-devops/ansible-jenkins

$ cat jenkins.yml
---
- hosts: "webserver"
  roles:
    - ansible-java8-oracle
    - jenkins
    - maven


ansible-galaxy install -r requirements.yaml

-------------------------------
variables/ different types of variables.

how to execute
ansible tower
-------------------------------

variables - vars
go to the roles directory
# cd /etc/ansible/roles; ls -ltr
# cd nginx; ls -ltr
# cd vars

static -> once allocated, the value cannot be changed.
dynamic -> the default value can be overridden, meaning it can be changed.

local variables -> scope is local; you cannot use them in other yaml files
global variables -> you can use them anywhere; available as key-value pairs. Facts are collected from the target node.

# cd /etc/ansible
# ansible -i hosts remote -m setup

review the output. you will see key-value pair.

ansible_nodename: "ip"
ansible_os_family: ""

# cd roles/nginx/default
# cat main.yaml
---
version: 1:0.0

# cd ../tasks
# vi main.yaml

# print the value
---
- name: debug # module name is debug
  debug:
     msg: "vars: {{ version }}"

.......... [00:57:00]


# ansible-playbook -i /etc/ansible/hosts /etc/ansible/site.yml

see the output, goto debug section and you will see the version output.

# vi ../default/main.yml
remove the version line and run the playbook

# ap -i hosts site.yml
error: undefined variable ..
# cd /etc/ansible
# vi ../defaults/main.yml
---
version: 0.0.0

say we have two servers,
webserver1
and webserver2
Where can you declare a variable for both?
vars is a local variable and can't be overridden.

in the inventory section you can define it:

[remote]




# /etc/ansible
# mkdir -p group_vars/remote

# vi main.yml
---
version: 2.0.0v1


# vi /etc/hosts
[1:05:00]

# ap 


# vi /etc/ansible/hosts
[remote]
192.168.10.20 version="1.2.3"
 see the output

# ansible-playbook ..

# /etc/ansible
# mkdir host_vars

# mkdir 192.168.10.20
# vi main.yml
---
version: 3.00v1


# cd /etc/ansible
# ap -i hosts site.yml

view the output, 


you can define variables with -e at the prompt too.

# ap -i hosts site.yml -e "version=4.0.0"

look at the output.


order in which a variable is read (highest priority first):

extra_vars
host_vars
group_vars
default section


extra_vars - > vars 
Note: we can use all types of variables, but when the same variable is defined in multiple places, the extra_vars value is the one that wins.
cd roles/nginx/vars
#
vi main.yml
---
packagename: nginx
dbname: cassandra

vi ../../tasks/main.yaml
---
- name: debug
  debug:
    msg: "vars: {{ version }} {{ dbname }}"

# ap -i hosts site.yml -e "version=4.0.0" -e "dbname=mongod"

it will pick up the value you specify at the command line.

to deploy a certificate per host, use host_vars

for dev, prod, test
extra_vars -> extra has the highest priority, so it is picked up first, overriding values from the inventory file.


# ap -i hosts /etc/ansible/site.yml -e "version=4.0.0" -e "dbname=mongod"


nested variables

in case of aws,
we have multiple datacenter in a region

we have multiple regions

google for default group

how do you handle multiple group?

group variables for groups of group
----------------------------------

[atlanta]
host1
host2

[raleigh]
host2
host3

[southeast:children]
atlanta
raleigh

[southeast:vars]
....

[usa:children]
southeast
northeast
southwest
northwest




all:
  children:
    usa:
....

Thursday, February 10, 2022

Day10 - Ansible intro

 2/10/2022 - class notes

- ansible/terraform


Recap

master

nodes

cronjob

initcontainers

ingress

daemonset

statefulset


---------------------------

eks/aks/openshift


jenkins

docker

k8s


=================

Ansible


configured manually


ssh to host

install

configure

services


10 servers need to install

100 servers


webserver

dbserver

proxy server


1 - servers - 10 minutes

10 - servers -> 30-60 minutes


Avoid manual

- automate


3 nodes


1 loadbalancer


10 more

image

vm

package

file

service


bootstraps


configuration management code



code

remote side execution

feedback/report


Puppet, Chef (pull based architecture)

puppet

- puppet master (holds the code)

- puppet node (install puppet agent, agent pulls the code from server and executes)



  • Request

  • Catalog

  • Report 

You have to maintain the server, which may be expensive. To avoid this kind of overhead, they came up with a push-based architecture.

- simple and clean

- easy to understand


Push model …

Agent less

Python 


Need to develop a python based framework. Write code on python.


Ansible.

- need ssh communication

- push model

- ssh 

- no agent needed.

- develop source code


DSLs - Domain specific language

- derived from the base programming language.

- python

- yaml 


Ansible

- easy to learn

- written in python

- Easy to install and configure

- no need to install ansible on client

- Highly scalable..


How does it work?


Using ansible playbooks, which are written in a very simple language: yaml


Configuration management

Run from the server and the target server is configured automatically.


Architecture

Master

- playbook

- inventories

- Modules

- List of hosts

- Where playbook task


Minimum 2 hosts required. Master/node

1. Ansible host

2. Host



Lets go ahead and create instances.

- Create 2 aws instances. T2-micro or small.

- security group - launch it.

Tag: ansible-host, node01


Login to ansible host


# which python3 - it is available by default

/usr/bin/python3


# which ansible # not available. We have to install it


# apt update/upgrade


# apt install ansible # try to see if you can install



VMS 

Puppet => agent/pull/ruby based

Chef => agent/pull/ruby

Ansible => agent less/push/python

Salt => agent/push/python


Out of these ansible is simple. 

Puppet and Chef are faster and secure.

Salt is also a good tool security-wise.


# ls -l /usr/bin/ansible


Ansible => ad-hoc commands

Ansible-playbook => yaml


1. Maintain inventory file

# hostname -i

Get the ip address - private (in our case)

# cd /etc/ansible; ls -l 


# vi hosts


# ansible -i hosts all -m ping 

Permission denied.


We have to authenticate it. 


Ansible modules list

# ansible -i hosts all -m ping -u root -k

ssh password:


It will prompt you for a password.


But it failed again. Authentication is denied for this user to login remotely.


Generate key

# ssh-keygen

# ls -l .ssh


# copy the public key to the client system into its home_dir/.ssh/authorized_keys
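One convenient way to do this (assuming root is the remote user, as in the ping command above) is ssh-copy-id:

# ssh-copy-id -i ~/.ssh/id_rsa.pub root@<node_private_ip>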


# vi /etc/ansible/ansible.cfg


Enable configuration here. 

# host_key_checking = False

# log_path = /var/log/ansible.log


# ansible -i hosts all -m ping


ansible-doc

—----------------

# ansible -i hosts <groupname or ip> -m apt -a "name=tree state=latest"

# ansible -i hosts all -m apt -a "name=tree state=latest"


No package matching available.


Since its a brand new machine, we have to update.

# ansible -i hosts all -m apt_repository -a "repo=ppa:nginx/stable"


It's going to update the repository. Now run,

# ansible -i hosts all -m apt -a "name=tree state=latest"

Look for the output.


# which tree


Run the same command 2nd time, you get green color. First time, you see yellow color.

The 2nd time, you see changed=false.

If the package is already installed, it does not do anything. This is called idempotence.

The desired state is not changed.

# ansible -i hosts all -m apt -a "name=tree state=absent"

Yellow color

Run it again, you get green color


Run it again,

# ansible -i hosts all -m apt -a "name=tree state=latest"

It will install and shows yellow color.


You can run one command at a time; this is called an ad-hoc command. If you want to run multiple commands, you can't do it this way. How can you run multiple commands?

- by using yaml file.


# cat example.yaml

# cat nginx.yml


Google

How to install nginx server manually on ubuntu?

1. Install nginx pkg

 $ sudo apt update; sudo apt install nginx

2. Create our website

<html></html>


3. Set up virtual hosts

4. Activate virtual host and test the result



# ansible -i hosts all -m apt -a "name=tree state=latest"

# cat nginx.yml

- hosts: remote  # define host group, ip

  tasks:

  - name: add repo

  - name: install package nginx
    apt:
      name: nginx
      state: latest



Vi /etc/ansible/hosts

[remote]

192.168.10.20

192.168.10.21

….




# cat nginx.yml

---

- hosts: remote  # define host group, ip

  tasks:

  - name: add repo nginx
    apt_repository:
      repo: "ppa:nginx/stable"

  - name: install package nginx
    apt:
      name: nginx
      state: latest

  - name: start service nginx if not started
    service:
      name: nginx
      state: started


============================

service(package a, state b) {

return a+b;

}

- name: call the service method

  service:

    package: nginx

    state: started



add(int a, int b) {

return a+b;

}


add a=10, b=20

- name: add method

  add:

   a: 10

   b: 20

==============================

---

# now need to start service

go to service module -> go under examples...


Vi /etc/ansible/hosts

[remote]

192.168.10.20

192.168.10.21

….



# ansible-playbook -i hosts nginx.yml


just observe the output

- remote

- gathering facts

- add repo

- install package nginx

- start service nginx

- play recap


changed=1


get the ip address of the host and paste at the browser, you will see nginx page.



# cat nginx.yml

---

- hosts: remote  # define host group, ip

  tasks:

  - name: add repo nginx
    apt_repository:
      repo: "ppa:nginx/stable"

  - name: install package nginx
    apt:
      name: nginx
      state: latest

  - name: start service nginx if not started
    service:
      name: nginx
      state: started

  - name: create a dir tutorial  # google for the file module, look for examples
    file:
      path: /var/www/tutorial
      state: directory

  - name: copy index.html file
    copy:
      src: index.html
      dest: /var/www/tutorial/index.html

  # we have to create a virtual host

  - name: copy tutorial
    copy:
      src: tutorial
      dest: /var/www/tutorial/tutorial


once the config is updated or modified, we have to restart the service.

we have to specify notify (see the updated playbook below)








# cat nginx.yml

---

- hosts: remote  # define host group, ip

  tasks:

  - name: add repo nginx
    apt_repository:
      repo: "ppa:nginx/stable"

  - name: install package nginx
    apt:
      name: nginx
      state: latest

  - name: start service nginx if not started
    service:
      name: nginx
      state: started

  - name: create a dir tutorial  # google for the file module, look for examples
    file:
      path: /var/www/tutorial
      state: directory

  - name: copy index.html file
    copy:
      src: index.html
      dest: /var/www/tutorial/index.html

  # we have to create a virtual host

  - name: copy tutorial
    copy:
      src: tutorial
      dest: /var/www/tutorial/tutorial
    notify: restart service nginx

  handlers:

  - name: restart service nginx
    service:
      name: nginx
      state: restarted





# cd /etc/ansible

$ vi tutorial



jenkins ubuntu install


convert commands into yaml and try it 


jenkins.io/doc/book/..



# ansible-playbook -i hosts nginx.yml


review the output..


green color: already performed; yellow color: it's performed now.


go to browser

ip:81 => you see the content.


next class ...

- ansible roles, running multiple service 

- terraform, monitoring


