Wednesday, December 30, 2020

Ansible - Dynamic Inventory

 Ansible - Dynamic Inventory - 12/30/2020



CM (Configuration Management) -> Controller Node -> Playbook (code) ----> Managed Nodes

Adding IPs into the inventory

So far we have been manually updating the inventory with new IP addresses.

This approach is called a static inventory.

There are cases where you don't know the IP of the target node. You have to log in to the target node and check the IP,
or the system's IP changes after a reboot.

We are in a dynamic world. We may bring up a server for testing and shut it down afterwards.
When we bring the server up again later, its IP changes.

Say your environment has thousands of servers,
or your servers are built on the cloud - AWS, Google, Azure or any other source - or locally.
You keep getting new IPs, or your instances run in Docker/containers.

The OS can come from any source and the IP keeps changing:
 - VM
 - Cloud -> AWS, GCP, Azure
 - Containers

We want a mechanism where the playbook is applied based on some context rather than a hard-coded IP.

We can make our inventory a little intelligent, or call it dynamic.
What does that mean?
- We will not write IPs manually in the inventory file.
- We can't, anyway, since we don't know them in advance.

When we run a playbook or ad-hoc commands, the inventory should
scan for the current IPs.

What info do you still need to provide?
- SSH access details for the Linux hosts.


We will have a playbook which goes out to AWS and does two things:
1. Install the OS (launch an EC2 instance) - provision a server

2. Configure the web server

In the playbook, you have to define something like:
- hosts: ip
  tasks:
    - configure web server

You can only run this if you know the IP.

And if you know the IP, you still have to add it to the inventory on the control node.


# ansible all --list-hosts

Never use an IP directly in the playbook;
use a group name instead (see the sketch below).
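A minimal sketch (the group name here is illustrative, not from the session) - the play targets whatever hosts currently make up the group, so the playbook never changes when the IPs do:

- hosts: mywebgroup        # group name from the inventory, not an IP
  tasks:
    - name: configure web server
      package:
        name: httpd
        state: present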


os1 - 1.2.3.4
os2 - 1.2.3.5
os3 - 1.2.3.6

horizontal scaling -> adding more hosts


----------------------------
inventory
# cat /etc/ansible/ansible.cfg
# more /root/myhosts
Create a single inventory file and point the config file at it.

You may be using multiple inventory files for different apps, subnets, or any other purpose.



# ansible all --list-hosts

The inventory file extension can be .py, .yaml, or no extension at all.

[root@master mydb]# cat >a
1.1.1.1
[root@master mydb]# cat >b
2.2.2.2
[root@master mydb]# cat >c
3.3.3.3

[root@master mydb]# ls
a  b  c


Update the config file so the inventory setting points to this directory.

# vi /etc/ansible/ansible.cfg


Since Ansible accepts the .py extension, we can write Python code as well.

You could even use a scanning tool like 'nmap' inside such a script to discover hosts.


# ansible all --list-hosts

[root@master mydb]# cat my.py
#!/usr/bin/python3
print("5.5.5.5")


The listing is not displayed properly - the IP from the print shows up, but not as a host entry Ansible understands,
even though the script prints fine when you run it manually.


You have to follow a certain output format (see the sketch below): the script must return JSON when called with --list.
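A minimal sketch of that format (group name and IP are made up): when called with --list, the script prints JSON mapping groups to host lists.

#!/usr/bin/python3
# hypothetical minimal dynamic inventory script
import json
import sys

inventory = {
    "mywebgroup": {"hosts": ["5.5.5.5"]},
    "_meta": {"hostvars": {}}
}

if len(sys.argv) > 1 and sys.argv[1] == "--list":
    print(json.dumps(inventory))
else:
    print(json.dumps({}))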


Check Vimal Daga's github:
github.com/vimallinuxworld12/ansible_dynamic_inventory/master/hosts.py


download it:
# wget <download URL>


Get hosts.py from Vimal's page:
# cp hosts.py mydb
# chmod +x hosts.py
# python3 hosts.py --list


In the exam, they give you a pre-created file; you just need to copy it into place and run it from there.

Now you can list the hosts again:
# ansible all --list-hosts

This time the output is in the proper format, and Ansible understands it.


Get another URL from the Ansible GitHub repo:
https://github.com/ansible/ansible/tree/stable-2.9/contrib/inventory

There is a ready-made script for each provider; just download and use it.

Download the ec2.py file.
# chmod +x ec2.py
Run it manually:
# python2 ec2.py

You need to install the boto library if you don't have it:

# pip3 list
# pip3 install boto
# python2 ec2.py -> it failed again...

# python3 ec2.py --list

The failure might be because of the lower Python version, so run it via the shebang:
# ./ec2.py --list

Still a problem.

# vi ec2.py and change the Python path in the shebang:
#!/usr/bin/python3



Go to aws dashboard


You have to specify:
1. Region info
2. API credentials
3. Login and password info


# vi ec2.py

Update the code with the region / credentials,

or simply export environment variables and you are done (see below).


On dashboard
- IAM -> create a user with power: PowerUserAccess

click, click and click ....

Record your access key and secret key; you can export them as environment variables:

export AWS_ACCESS_KEY_ID='AJDHSJHDSHDDSDD'
export AWS_SECRET_ACCESS_KEY='dsfsdfsdfsdfsdfsdfsd'
export AWS_REGION='us-east-1'

# ansible all --list-hosts



-------------------------
Launch an AWS instance manually.
Tag:
name: mywebos
country: Nepal
DataCenter:    Virginia


There is another file, ec2.ini.

Download it as well.


It gives an error again, pointing at line 172.
Go ahead and comment out that line and run it again.

# ansible all --list-hosts


Always tag the OS instances on the cloud.

Key        value
Country        US
DC        dc2
Tech        web

# ./ec2.py --list

Keep the ec2.ini file in the same location as ec2.py.
Tags are really important when working with Ansible - the dynamic inventory turns them into groups:
# ansible tag_Country_US --list-hosts

# ansible tag_Country_IN --list-hosts

Now, in summary:
1. Launch an OS (instance) using an Ansible playbook, and
   tag it.
2. Configure the dynamic inventory.

3. Write a playbook to configure a web server on the cloud instance.
   hint: use - hosts: tag_Country_US

The dynamic inventory grabs all the IPs in that tag group and the playbook installs the web server on them (see the sketch below).
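A minimal sketch of step 3, assuming the ec2.py inventory exposes a tag_Country_US group (the group name depends on your own tags):

- hosts: tag_Country_US
  become: yes              # assumes the cloud user has sudo power
  tasks:
    - name: install web server
      package:
        name: httpd
        state: present
    - name: start web server
      service:
        name: httpd
        state: started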

ansible-doc -l

Tuesday, December 29, 2020

Ansible - Error / exception handling


Error / exception handling
--------------------------

Loop/Dictionary


Command module
---------------

Google "ansible command module" and look through the command module documentation.

Running code on the same host - localhost:

# ansible localhost --list-hosts

No entry is needed in the inventory for localhost.

# cat mycommand.yaml
- hosts: localhost
  tasks:
  - command: date



- hosts: localhost
  tasks:
  - command: date
    register: x   # store the output in variable x so we can print it

  - debug:
      var: x

# The command module always runs (and always reports "changed")


- hosts: localhost
  tasks:
  - command: date
    register: x   # store the output in variable x so we can print it

  - debug:
      var: x
  - debug:
      msg: "Welcome ...."

# The command module always runs (and always reports "changed")


[root@master Dec29]# cat cmd.yaml
- hosts: localhost
  tasks:
  - command: mkdir /tmp/users
    register: x   # store the output in variable x so we can print it

  - debug:
      var: x
  - debug:
      msg: "Welcome ...."

# The command module always runs
# The command module is not idempotent

[root@master Dec29]# ansible-playbook cmd.yaml

If you run it a second time, the play fails because /tmp/users already exists.
----------------------------------

- hosts: localhost
  tasks:
  - command: mkdir /tmp/users
    register: x          # store the output in variable x so we can print it
    ignore_errors: yes   # on a second run mkdir fails, but the play keeps going

  - debug:
      var: x
  - debug:
      msg: "Welcome ...."

# The command module always runs
# The command module is not idempotent


[root@master Dec29]# ansible-playbook cmd.yaml


---------------------

Check the condition first:

- hosts: localhost
  tasks:
  - command: ls /tmp/users
    register: s          # capture the result of the check
    ignore_errors: yes

  - command: mkdir /tmp/users
    register: x
    ignore_errors: yes

  - debug:
      var: x
  - debug:
      msg: "Welcome ...."

# The command module always runs
# The command module is not idempotent

=====================================

- hosts: localhost
  tasks:
  - command: ls /tmp/users
    register: s          # capture the exit code of the check
    ignore_errors: yes

  - command: mkdir /tmp/users
    register: x
    when: s.rc != 0      # only create the dir if the check failed - this gives us idempotence

  - debug:
      var: x
  - debug:
      msg: "Welcome ...."

# The command module always runs
# The command module is not idempotent, so we add the check ourselves




=========================================



Roles - people create them and upload them to a public repo (Ansible Galaxy).

Google for Ansible Galaxy and browse through it.

They are categorised based on OS, network, cloud ..... and more.

You can also upload your own roles and playbooks.

[root@master Dec29]# cat /etc/passwd | wc -l

| (the pipe) is a facility offered by the shell.


To run this command from Ansible, you have to use the shell module instead of the command module.


- hosts: localhost
  tasks:
# - command: cat /etc/passwd | wc -l
#   The command module would throw an error here: the pipe is a shell facility,
#   so use the shell module instead.
  - shell: cat /etc/passwd | wc -l
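To actually see the count, the same register/debug pattern from above applies (a quick sketch):

- hosts: localhost
  tasks:
  - shell: cat /etc/passwd | wc -l
    register: usercount

  - debug:
      var: usercount.stdout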



How many Galaxy roles do we have installed?
[root@master Dec29]# ansible-galaxy list
# /root/wk20-Roles
- myapache, (unknown version)



ansible-galaxy role -h
ansible-galaxy role search apache



go to ansible-galaxy site and search for apache


# ansible --version

We are on the latest version as of Dec 29, 2020.

[root@master roles]# ansible-galaxy role install geerlingguy.apache
- downloading role 'apache', owned by geerlingguy
- downloading role from https://github.com/geerlingguy/ansible-role-apache/archive/3.1.4.tar.gz
- extracting geerlingguy.apache to /root/wk20-Roles/geerlingguy.apache
- geerlingguy.apache (3.1.4) was installed successfully
[root@master roles]# ls
[root@master roles]# pwd
/etc/ansible/roles
[root@master roles]# ansible-galaxy list
# /root/wk20-Roles
- myapache, (unknown version)
- geerlingguy.apache, 3.1.4
[root@master roles]#

[root@master roles]# cd /root/wk20-Roles
[root@master wk20-Roles]# ls
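Once the role is installed, using it looks the same as using our own role. A sketch, assuming a myweb group in the inventory and roles_path pointing at the directory above:

- hosts: myweb
  become: true
  roles:
    - role: geerlingguy.apache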






Monday, December 28, 2020

Ansible - Privilege Escalation

 
Ansible - Privilege Escalation
- ssh key
- AWS instance as a managed node
- sudo



Linux OS -> root has full access, full power -> privilege.
Root login is often disabled for security reasons.


# useradd sam; passwd sam

You can use this account to log in to your system.

Give the user extra power -> privilege escalation -> by using sudo.



Control Node (CN) -> playbook -> package module -> connects to the TN using SSH.

Target Node (TN) -> no root login; create a user sam and give it full sudo access:
# visudo
sam ALL=(ALL)    ALL


Inventory file
# cat /root/myhosts

[myweb]
worker1 ansible_user=sam ansible_ssh_pass=q ansible_connection=ssh
worker2 ansible_user=sam ansible_ssh_pass=changeme ansible_connection=ssh


# ansible myweb -m package -a "name=httpd state=present"
This command will fail.

The reason: installing a package needs root. The fix is:
- go to visudo and give the sam user extra power, and
- become root when running the command.

# ansible myweb -m command -a id
This command is successful as the user sam.

But we are talking about privilege escalation.

How do we do it?

1. Log in as the other user (sam), but don't run the command as the normal user - become root instead.
# ansible -h | grep become

look for become option.

# ansible 192.168.10.20 -m command -a id --become --become-user root --become-method sudo

# ansible-doc -t become -l

# ansible 192.168.10.20 -m command -a id --become --become-user root --become-method sudo --ask-become-pass


[root@master ~]# ansible worker1 -m package -a "name=httpd state=present" --become --become-user root --become-method sudo --ask-become-pass



Go to the Ansible config file and add the following content:

[privilege_escalation]
become=true
become_method=sudo
become_user=root
become_ask_pass=true   # prompt for the sudo password; with false it will not prompt,
                       # but then you have to edit the sudoers file:
sam ALL=(ALL)    NOPASSWD: ALL


Since these defaults are now in the config file, you no longer have to supply the become options on the command line. These two commands are now equivalent:

ansible worker1 -m command -a id --become --become-user root --become-method sudo --ask-become-pass

ansible worker1 -m command -a id
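The same escalation can also be requested inside a playbook instead of the config file (a sketch, not from the session):

- hosts: myweb
  become: yes
  become_user: root
  become_method: sudo
  tasks:
  - package:
      name: httpd
      state: present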



In the case of EC2 -> we never log in as the root user; we log in as ec2-user.

In your inventory on the control node you can't use user=root, since root login is disabled,
so you have to use a normal user with sudo power:
log in as ec2-user and run the command as root. This is where become is useful.



keys or ssh-key
--------------

SSH to a remote system
- it asks you for a username/password

You create a key pair:
- private key
- public key

Take the public key to the other nodes.

On control node run
$ ssh-keygen

It will generate two keys.
You never share your private key, but you do share your public key.

If the user at the other end accepts your public key and you initiate an SSH connection
to the remote machine, you can log in without a password.

To share your public key:
$ ssh-copy-id sam@remote-host

Now, log in:
$ ssh sam@remote-host

You are logged in without a password.

# cat .ssh/id_rsa -> private key

CN
- create a VM
- Create an inventory file
   IP address -> user -> password or SSH private key (download the key and upload it to your CN)

GOOGLE FOR ANSIBLE INVENTORY FILE

ansible_ssh_private_key_file ...
Review the other inventory options (see the sketch below).
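A minimal sketch of such an inventory entry (host alias, IP and key path are made up):

[myec2]
web1 ansible_host=3.3.3.3 ansible_user=ec2-user ansible_ssh_private_key_file=/root/mykey.pem ansible_connection=ssh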

From your laptop/control node -> log in to the AWS EC2 instance and run Ansible ad-hoc commands.
You don't have to configure the sudoers file since ec2-user already comes with sudo access.


Thursday, December 24, 2020

Ansible - Sample practice questions

 

Ansible Sample Practice questions for EX407/EX294

This is a sample Ansible exam that I’ve created to prepare for EX407. I have not taken the EX407 exam yet.

You can also use it for the new RHCE 8 exam EX294.

As with the real exam, no answers to the sample exam questions will be provided.

Requirements

There are 18 questions in total.

You will need five RHEL 7 (or CentOS 7) virtual machines to be able to successfully complete all questions.

One VM will be configured as an Ansible control node. Other four VMs will be used to apply playbooks to solve the sample exam questions. The following FQDNs will be used throughout the sample exam.

  1. ansible-control.hl.local – Ansible control node
  2. ansible2.hl.local – managed host
  3. ansible3.hl.local – managed host
  4. ansible4.hl.local – managed host
  5. ansible5.hl.local – managed host

There are a couple of requirements that should be met before proceeding further:

  1. ansible-control.hl.local server has passwordless SSH access to all managed servers (using the root user).
  2. ansible5.hl.local server has a 1GB secondary /dev/sdb disk attached.
  3. There are no regular users created on any of the servers.

Tips and Suggestions

I tried to cover as many exam objectives as possible, however, note that there will be no questions related to dynamic inventory.

Some questions may depend on the outcome of others. Please read all questions before proceeding.

Note that the purpose of the sample exam is to test your skills. Please don’t post your playbooks in the comments section.

Sample Exam Questions

Note: you have root access to all five servers.

Task 1: Ansible Installation and Configuration

Install ansible package on the control node (including any dependencies) and configure the following:

  1. Create a regular user automation with the password of devops. Use this user for all sample exam tasks and playbooks, unless you are working on the task #2 that requires creating the automation user on inventory hosts. You have root access to all five servers.
  2. All playbooks and other Ansible configuration that you create for this sample exam should be stored in /home/automation/plays.

Create a configuration file /home/automation/plays/ansible.cfg to meet the following requirements:

  1. The roles path should include /home/automation/plays/roles, as well as any other path that may be required for the course of the sample exam.
  2. The inventory file path is /home/automation/plays/inventory.
  3. Privilege escalation is disabled by default.
  4. Ansible should be able to manage 10 hosts at a single time.
  5. Ansible should connect to all managed nodes using the automation user.

Create an inventory file /home/automation/plays/inventory with the following:

  1. ansible2.hl.local is a member of the proxy host group.
  2. ansible3.hl.local is a member of the webservers host group.
  3. ansible4.hl.local is a member of the webservers host group.
  4. ansible5.hl.local is a member of the database host group.

Task 2: Ad-Hoc Commands

Generate an SSH keypair on the control node. You can perform this step manually.

Write a script /home/automation/plays/adhoc that uses Ansible ad-hoc commands to achieve the following:

  1. User automation is created on all inventory hosts (not the control node).
  2. SSH key (that you generated) is copied to all inventory hosts for the automation user and stored in /home/automation/.ssh/authorized_keys.
  3. The automation user is allowed to elevate privileges on all inventory hosts without having to provide a password.

After running the adhoc script, you should be able to SSH into all inventory hosts using the automation user without password, as well as a run all privileged commands.

Task 3: File Content

Create a playbook /home/automation/plays/motd.yml that runs on all inventory hosts and does the following:

  1. The playbook should replace any existing content of /etc/motd with text. Text depends on the host group.
  2. On hosts in the proxy host group the line should be “Welcome to HAProxy server”.
  3. On hosts in the webservers host group the line should be “Welcome to Apache server”.
  4. On hosts in the database host group the line should be “Welcome to MySQL server”.

Task 4: Configure SSH Server

Create a playbook /home/automation/plays/sshd.yml that runs on all inventory hosts and configures SSHD daemon as follows:

  1. banner is set to /etc/motd
  2. X11Forwarding is disabled
  3. MaxAuthTries is set to 3

Task 5: Ansible Vault

Create Ansible vault file /home/automation/plays/secret.yml. Encryption/decryption password is devops.

Add the following variables to the vault:

  1. user_password with value of devops
  2. database_password with value of devops

Store Ansible vault password in the file /home/automation/plays/vault_key.

Task 6: Users and Groups

You have been provided with the list of users below.

Use /home/automation/plays/vars/user_list.yml file to save this content.

---
users:
  - username: alice
    uid: 1201
  - username: vincent
    uid: 1202
  - username: sandy
    uid: 2201
  - username: patrick
    uid: 2202

Create a playbook /home/automation/plays/users.yml that uses the vault file /home/automation/plays/secret.yml to achieve the following:

  1. Users whose user ID starts with 1 should be created on servers in the webservers host group. User password should be used from the user_password variable.
  2. Users whose user ID starts with 2 should be created on servers in the database host group. User password should be used from the user_password variable.
  3. All users should be members of a supplementary group wheel.
  4. Shell should be set to /bin/bash for all users.
  5. Account passwords should use the SHA512 hash format.
  6. Each user should have an SSH key uploaded (use the SSH key that you created previously, see task #2).

After running the playbook, users should be able to SSH into their respective servers without passwords.

Task 7: Scheduled Tasks

Create a playbook /home/automation/plays/regular_tasks.yml that runs on servers in the proxy host group and does the following:

  1. A root crontab record is created that runs every hour.
  2. The cron job appends the file /var/log/time.log with the output from the date command.

Task 8: Software Repositories

Create a playbook /home/automation/plays/repository.yml that runs on servers in the database host group and does the following:

  1. A YUM repository file is created.
  2. The name of the repository is mysql56-community.
  3. The description of the repository is “MySQL 5.6 YUM Repo”.
  4. Repository baseurl is http://repo.mysql.com/yum/mysql-5.6-community/el/7/x86_64/.
  5. Repository GPG key is at http://repo.mysql.com/RPM-GPG-KEY-mysql.
  6. Repository GPG check is enabled.
  7. Repository is enabled.

Task 9: Create and Work with Roles

Create a role called sample-mysql and store it in /home/automation/plays/roles. The role should satisfy the following requirements:

  1. A primary partition number 1 of size 800MB on device /dev/sdb is created.
  2. An LVM volume group called vg_database is created that uses the primary partition created above.
  3. An LVM logical volume called lv_mysql is created of size 512MB in the volume group vg_database.
  4. An XFS filesystem on the logical volume lv_mysql is created.
  5. Logical volume lv_mysql is permanently mounted on /mnt/mysql_backups.
  6. mysql-community-server package is installed.
  7. Firewall is configured to allow all incoming traffic on MySQL port TCP 3306.
  8. MySQL root user password should be set from the variable database_password (see task #5).
  9. MySQL server should be started and enabled on boot.
  10. MySQL server configuration file is generated from the my.cnf.j2 Jinja2 template with the following content:
[mysqld]
bind_address = {{ ansible_default_ipv4.address }}
skip_name_resolve
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock

symbolic-links=0
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES 

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

Create a playbook /home/automation/plays/mysql.yml that uses the role and runs on hosts in the database host group.

Task 10: Create and Work with Roles (Some More)

Create a role called sample-apache and store it in /home/automation/plays/roles. The role should satisfy the following requirements:

  1. The httpd, mod_ssl and php packages are installed. Apache service is running and enabled on boot.
  2. Firewall is configured to allow all incoming traffic on HTTP port TCP 80 and HTTPS port TCP 443.
  3. Apache service should be restarted every time the file /var/www/html/index.html is modified.
  4. A Jinja2 template file index.html.j2 is used to create the file /var/www/html/index.html with the following content:
The address of the server is: IPV4ADDRESS

IPV4ADDRESS is the IP address of the managed node.

Create a playbook /home/automation/plays/apache.yml that uses the role and runs on hosts in the webservers host group.

Task 11: Download Roles From Ansible Galaxy and Use Them

Use Ansible Galaxy to download and install geerlingguy.haproxy role in /home/automation/plays/roles.

Create a playbook /home/automation/plays/haproxy.yml that runs on servers in the proxy host group and does the following:

  1. Use geerlingguy.haproxy role to load balance request between hosts in the webservers host group.
  2. Use roundrobin load balancing method.
  3. HAProxy backend servers should be configured for HTTP only (port 80).
  4. Firewall is configured to allow all incoming traffic on port TCP 80.

If your playbook works, then doing “curl http://ansible2.hl.local/” should return output from the web server (see task #10). Running the command again should return output from the other web server.

Task 12: Security

Create a playbook /home/automation/plays/selinux.yml that runs on hosts in the webservers host group and does the following:

  1. Uses the selinux RHEL system role.
  2. Enables httpd_can_network_connect SELinux boolean.
  3. The change must survive system reboot.

Task 13: Use Conditionals to Control Play Execution

Create a playbook /home/automation/plays/sysctl.yml that runs on all inventory hosts and does the following:

  1. If a server has more than 2048MB of RAM, then parameter vm.swappiness is set to 10.
  2. If a server has less than 2048MB of RAM, then the following error message is displayed:

Server memory less than 2048MB

Task 14: Use Archiving

Create a playbook /home/automation/plays/archive.yml that runs on hosts in the database host group and does the following:

  1. A file /mnt/mysql_backups/database_list.txt is created that contains the following line: dev,test,qa,prod.
  2. A gzip archive of the file /mnt/mysql_backups/database_list.txt is created and stored in /mnt/mysql_backups/archive.gz.

Task 15: Work with Ansible Facts

Create a playbook /home/automation/plays/facts.yml that runs on hosts in the database host group and does the following:

  1. A custom Ansible fact server_role=mysql is created that can be retrieved from ansible_local.custom.sample_exam when using Ansible setup module.

Task 16: Software Packages

Create a playbook /home/automation/plays/packages.yml that runs on all inventory hosts and does the following:

  1. Installs tcpdump and mailx packages on hosts in the proxy host groups.
  2. Installs lsof and mailx packages on hosts in the database host groups.

Task 17: Services

Create a playbook /home/automation/plays/target.yml that runs on hosts in the webservers host group and does the following:

  1. Sets the default boot target to multi-user.

Task 18. Create and Use Templates to Create Customised Configuration Files

Create a playbook /home/automation/plays/server_list.yml that does the following:

  1. Playbook uses a Jinja2 template server_list.j2 to create a file /etc/server_list.txt on hosts in the database host group.
  2. The file /etc/server_list.txt is owned by the automation user.
  3. File permissions are set to 0600.
  4. SELinux file label should be set to net_conf_t.
  5. The content of the file is a list of FQDNs of all inventory hosts.

After running the playbook, the content of the file /etc/server_list.txt should be the following:

ansible2.hl.local
ansible3.hl.local
ansible4.hl.local
ansible5.hl.local

Note: if the FQDN of any inventory host changes, re-running the playbook should update the file with the new values.

 

Copied from:  https://www.lisenet.com/2019/ansible-sample-exam-for-ex407/

Ansible - Install Kubernetes Cluster with Ansible

 

Install Kubernetes Cluster with Ansible

We are going to install a Kubernetes control plane with two worker nodes using Ansible.

Note that installation of Ansible control node is not covered in this article.

Pre-requisites

This guide is based on Debian Stretch. You can use Ubuntu as well.

  1. Ansible control node with Ansible 2.9 to run playbooks.
  2. 3x Debian Stretch servers with the ansible user created for SSH.
  3. Each server should have 2x CPUs and 2GB of RAM.
  4. /etc/hosts file configured to resolve hostnames.

We are going to use the following ansible.cfg configuration:

[defaults]
inventory  = ./inventory

remote_user = ansible
host_key_checking = False
private_key_file = ./files/id_rsa

[privilege_escalation]
become=False
become_method=sudo
become_user=root
become_ask_pass=False

The content of the inventory file can be seen below:

[k8s-master]
10.11.1.101

[k8s-node]
10.11.1.102
10.11.1.103

[k8s:children]
k8s-master
k8s-node

The Goal

To deploy a specific version of a 3-node Kubernetes cluster (one master and two worker nodes) with Calico networking and Kubernetes Dashboard.

Package Installation

Docker

Configure packet forwarding and install a specific version of Docker so that we can test upgrades later.

---
- name: Install Docker Server
  hosts: k8s
  become: true
  gather_facts: yes
  vars:
    docker_dependencies:
    - ca-certificates
    - gnupg
    - gnupg-agent
    - software-properties-common
    - apt-transport-https
    docker_packages:
    - docker-ce=5:18.09.6~3-0~debian-stretch
    - docker-ce-cli=5:18.09.6~3-0~debian-stretch
    - containerd.io 
    - curl
    docker_url_apt_key: "https://download.docker.com/linux/debian/gpg"
    docker_repository: "deb [arch=amd64] https://download.docker.com/linux/debian stretch stable"
  tasks:
  - name: Debian | Configure Sysctl
    sysctl:
      name: "net.ipv4.ip_forward"
      value: "1"
      state: present

  - name: Debian | Install Prerequisites Packages
    package: name={{ item }} state=present force=yes
    loop: "{{ docker_dependencies }}"

  - name: Debian | Add GPG Keys
    apt_key: 
      url: "{{ docker_url_apt_key }}"

  - name: Debian | Add Repo Source
    apt_repository: 
      repo: "{{ docker_repository }}"
      update_cache: yes

  - name: Debian | Install Specific Version of Docker Packages
    package: name={{ item }} state=present force=yes install_recommends=no
    loop: "{{ docker_packages }}"
    notify:
    - start docker

  - name: Debian | Start and Enable Docker Service
    service:
      name: docker
      state: started
      enabled: yes  

  handlers:
  - name: start docker
    service: name=docker state=started
...

Kubeadm and Kubelet

Install a specific version of kubeadm so that we can test upgrades later.

---
- name: Install Kubernetes Server
  hosts: k8s
  become: true
  gather_facts: yes
  vars:
    k8s_dependencies: 
    - kubernetes-cni=0.6.0-00 
    - kubelet=1.13.1-00
    k8s_packages: 
    - kubeadm=1.13.1-00 
    - kubectl=1.13.1-00 
    k8s_url_apt_key: "https://packages.cloud.google.com/apt/doc/apt-key.gpg"
    k8s_repository: "deb https://apt.kubernetes.io/ kubernetes-xenial main"
  tasks:
  - name: Disable SWAP K8S will not work with swap enabled (1/2)
    command: swapoff -a
    when: ansible_swaptotal_mb > 0

  - name: Debian | Remove SWAP from fstab K8S will not work with swap enabled (2/2)
    mount:
      name: "{{ item }}"
      fstype: swap
      state: absent
    with_items:
    - swap
    - none

  - name: Debian | Add GPG Key
    apt_key:
      url: "{{ k8s_url_apt_key }}"

  - name: Debian | Add Kubernetes Repository
    apt_repository: 
      repo: "{{ k8s_repository }}"
      update_cache: yes

  - name: Debian | Install Dependencies
    package: name={{ item }} state=present force=yes install_recommends=no
    loop: "{{ k8s_dependencies }}"

  - name: Debian | Install Kubernetes Packages
    package: name={{ item }} state=present force=yes install_recommends=no
    loop: "{{ k8s_packages }}"
...

Kubernetes Configuration

Create a Kubernetes control plane and join two other servers as worker nodes. Note the content of the file files/dashboard-adminuser.yaml below:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

Deploy Kubernetes cluster.

---
- name: Deploy Kubernetes Cluster
  hosts: k8s
  gather_facts: yes
  vars:
    k8s_pod_network: "192.168.192.0/18"
    k8s_user: "ansible"
    k8s_user_home: "/home/{{ k8s_user }}"
    k8s_token_file: "join-k8s-command"
    k8s_admin_config: "/etc/kubernetes/admin.conf"
    k8s_dashboard_adminuser_config: "dashboard-adminuser.yaml"
    k8s_kubelet_config: "/etc/kubernetes/kubelet.conf"
    k8s_dashboard_port: "6443"
    k8s_dashboard_url: "https://{{ ansible_default_ipv4.address }}:{{ k8s_dashboard_port }}/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login"
    calico_rbac_url: "https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml"
    calico_rbac_config: "rbac-kdd.yaml"
    calico_net_url: "https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml"
    calico_net_config: "calico.yaml"
    dashboard_url: "https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml"
    dashboard_config: "kubernetes-dashboard.yml"
  tasks:
  - name: Debian | Configure K8S Master Block
    block:
    - name: Debian | Initialise the Kubernetes cluster using kubeadm
      become: true
      command: kubeadm init --pod-network-cidr={{ k8s_pod_network }}
      args:
        creates: "{{ k8s_admin_config }}"

    - name: Debian | Setup kubeconfig for {{ k8s_user }} user
      file:
        path: "{{ k8s_user_home }}/.kube"
        state: directory
        owner: "{{ k8s_user }}"
        group: "{{ k8s_user }}"
        mode: "0750"
    
    - name: Debian | Copy {{ k8s_admin_config }}
      become: true
      copy:
        src: "{{ k8s_admin_config }}"
        dest: "{{ k8s_user_home }}/.kube/config"
        owner: "{{ k8s_user }}"
        group: "{{ k8s_user }}"
        mode: "0640"
        remote_src: yes
    
    - name: Debian | Download {{ calico_rbac_url }}
      get_url:
        url: "{{ calico_rbac_url }}"
        dest: "{{ k8s_user_home }}/{{ calico_rbac_config }}"
        owner: "{{ k8s_user }}"
        group: "{{ k8s_user }}"
        mode: "0640"
    
    - name: Debian | Download {{ calico_net_url }}
      get_url:
        url: "{{ calico_net_url }}"
        dest: "{{ k8s_user_home }}/{{ calico_net_config }}"
        owner: "{{ k8s_user }}"
        group: "{{ k8s_user }}"
        mode: "0640"     

    - name: Debian | Set CALICO_IPV4POOL_CIDR to {{ k8s_pod_network }}
      replace:
        path: "{{ k8s_user_home }}/{{ calico_net_config }}"
        regexp: "192.168.0.0/16"
        replace: "{{ k8s_pod_network }}"
    
    - name: Debian | Download {{ dashboard_url }}
      get_url:
        url: "{{ dashboard_url }}"
        dest: "{{ k8s_user_home }}/{{ dashboard_config }}"
        owner: "{{ k8s_user }}"
        group: "{{ k8s_user }}"
        mode: "0640"     
    
    - name: Debian | Install calico pod network {{ calico_rbac_config }}
      become: false
      command: kubectl apply -f "{{ k8s_user_home }}/{{ calico_rbac_config }}"
    
    - name: Debian | Install calico pod network {{ calico_net_config }}
      become: false
      command: kubectl apply -f "{{ k8s_user_home }}/{{ calico_net_config }}"
    
    - name: Debian | Install K8S dashboard {{ dashboard_config }}
      become: false
      command: kubectl apply -f "{{ k8s_user_home }}/{{ dashboard_config }}"
    
    - name: Debian | Create service account
      become: false
      command: kubectl create serviceaccount dashboard -n default
      ignore_errors: yes
    
    - name: Debian | Create cluster role binding dashboard-admin
      become: false
      command: kubectl create clusterrolebinding dashboard-admin -n default --clusterrole=cluster-admin --serviceaccount=default:dashboard
      ignore_errors: yes
    
    - name: Debian | Create {{ k8s_dashboard_adminuser_config }} for service account
      copy:
        src: "files/{{ k8s_dashboard_adminuser_config }}"
        dest: "{{ k8s_user_home }}/{{ k8s_dashboard_adminuser_config }}"
        owner: "{{ k8s_user }}"
        group: "{{ k8s_user }}"
        mode: "0640"
    
    - name: Debian | Create service account
      become: false
      command: kubectl apply -f "{{ k8s_user_home }}/{{ k8s_dashboard_adminuser_config }}"
      ignore_errors: yes
    
    - name: Debian | Create cluster role binding cluster-system-anonymous
      become: false
      command: kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
      ignore_errors: yes
    
    - name: Debian | Test K8S dashboard and wait for HTTP 200
      uri:
        url: "{{ k8s_dashboard_url }}"
        status_code: 200
        validate_certs: no
      ignore_errors: yes
      register: result_k8s_dashboard_page
      retries: 10
      delay: 6
      until: result_k8s_dashboard_page is succeeded

    - name: Debian | K8S dashboard URL
      debug:
        var: k8s_dashboard_url
    
    - name: Debian | Generate join command
      command: kubeadm token create --print-join-command
      register: join_command
    
    - name: Debian | Copy join command to local file
      become: false
      local_action: copy content="{{ join_command.stdout_lines[0] }}" dest="{{ k8s_token_file }}"
    when: "'k8s-master' in group_names"

  - name: Debian | Configure K8S Node Block
    block:
    - name: Debian | Copy {{ k8s_token_file }} to server location
      copy: 
        src: "{{ k8s_token_file }}"
        dest: "{{ k8s_user_home }}/{{ k8s_token_file }}.sh"
        owner: "{{ k8s_user }}"
        group: "{{ k8s_user }}"
        mode: "0750"

    - name: Debian | Join the node to cluster unless file {{ k8s_kubelet_config }} exists
      become: true
      command: sh "{{ k8s_user_home }}/{{ k8s_token_file }}.sh"
      args:
        creates: "{{ k8s_kubelet_config }}"
    when: "'k8s-node' in group_names"
...
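Assuming the three plays above are saved as docker.yml, k8s.yml and cluster.yml (the filenames are mine, not from the article), they would be applied in order from the Ansible control node:

$ ansible-playbook docker.yml
$ ansible-playbook k8s.yml
$ ansible-playbook cluster.yml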

Verify Cluster Status

Cluster nodes:

$ kubectl get nodes -o wide
NAME          STATUS   ROLES    AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION   CONTAINER-RUNTIME
debian9-101   Ready    master   5m30s   v1.13.1   10.11.1.101   <none>        Debian GNU/Linux 9 (stretch)   4.9.0-13-amd64   docker://18.9.6
debian9-102   Ready    <none>   2m30s   v1.13.1   10.11.1.102   <none>        Debian GNU/Linux 9 (stretch)   4.9.0-13-amd64   docker://18.9.6
debian9-103   Ready    <none>   2m29s   v1.13.1   10.11.1.103   <none>        Debian GNU/Linux 9 (stretch)   4.9.0-13-amd64   docker://18.9.6

Cluster pods:

$ kubectl get pods -n kube-system -o wide
NAME                                   READY   STATUS    RESTARTS   AGE     IP              NODE          NOMINATED NODE   READINESS GATES
calico-node-fghpt                      2/2     Running   0          3m29s   10.11.1.101     debian9-101              
calico-node-stf8j                      2/2     Running   0          77s     10.11.1.103     debian9-103              
calico-node-tqx8r                      2/2     Running   0          78s     10.11.1.102     debian9-102              
coredns-54ff9cd656-52x2n               1/1     Running   0          3m46s   192.168.192.3   debian9-101              
coredns-54ff9cd656-vzg6b               1/1     Running   0          3m47s   192.168.192.2   debian9-101              
etcd-debian9-101                       1/1     Running   0          4m3s    10.11.1.101     debian9-101              
kube-apiserver-debian9-101             1/1     Running   0          3m55s   10.11.1.101     debian9-101              
kube-controller-manager-debian9-101    1/1     Running   0          3m53s   10.11.1.101     debian9-101              
kube-proxy-gc7r2                       1/1     Running   0          78s     10.11.1.103     debian9-103              
kube-proxy-tfvmj                       1/1     Running   0          79s     10.11.1.102     debian9-102              
kube-proxy-xf8pv                       1/1     Running   0          3m48s   10.11.1.101     debian9-101              
kube-scheduler-debian9-101             1/1     Running   0          3m44s   10.11.1.101     debian9-101              
kubernetes-dashboard-57df4db6b-gmmcx   1/1     Running   0          3m25s   192.168.192.4   debian9-101              
 
 
Copied from: https://www.lisenet.com/2020/install-kubernetes-cluster-with-ansible/

Wednesday, December 23, 2020

Ansible - Roles

 Ansible - 12/23/2020
Roles:

Automation -> Playbook ->
1. Roles
2. Galaxy
3. Privilege Escalation (P.E.)


Code
-> To manage it properly, rather than keeping all content in one file, developers break it down into multiple files,
say, for management purposes the program is broken down into multiple files:
File1
File2
File3
File4 .....

Once all the pieces are complete, they are pulled together into one place and the program is run.

This is just a high-level overview.

The small individual files would be modules if we were talking about Python.


We have control node

inventory
[web]
host1
host2

On the control node -> write a playbook
setup.yaml
- hosts: web  # running the play on the web servers
  includes all the tasks
     -----
     -----
     ------
  define vars
     -------
     -----
     -----
  handlers
     -----
     -----
     -----

Our goal is to deploy this on the managed nodes.

We can have separate files:
one file for tasks,
one file for handlers, and so on.

We bundle all of these files together, and that bundle is what we call a role.

- You can create a role that is only used to configure a web server.
- Create another role which is used for security-related tasks.

Roles
--------
WebServs
-web
-sec

DBservers
-db
-sec

Some community members have already created roles and shared them.
These are pre-created roles; we can download and use them.

We can write a playbook and instruct it to apply a role to all web servers or
db servers.

The name of the community site is Ansible Galaxy.

Google for Ansible Galaxy.

The same way, Docker images are shared in a community:
hub.docker.com -> Docker Hub.

Ansible Galaxy is the public place for Ansible roles.


- We create the role on the control node and apply that role to the managed nodes.

Plan
1. Check  your inventory file
2. Configure apache web server..
3.



Configure web server
[root@master wk20-Roles]# cat web.yaml
# cat web.yaml
- hosts: myweb
  vars:
  - p: "httpd"
  - s: "httpd"
  tasks:
  - package:
       name: "{{ p }}"
       state: present
  - service:
      name: "{{ s }}"
      state: started
[root@master wk20-Roles]#



Rather than writing it this way, we can organise it in a better-managed way using a role.


# ansible-galaxy role list
# ansible-galaxy role -h

Let's create a role:
# ansible-galaxy role init myapache
[root@master wk20-Roles]# ansible-galaxy role init myapache
- Role myapache was created successfully

A role is just a directory/folder; here myapache is just a directory.

List the roles available on the system
# ansible-galaxy role list
# /usr/share/ansible/roles
[WARNING]: - the configured path /root/.ansible/roles does not exist.

Note: you have to specify the roles location (--roles-path) if it is not one of the default paths.

[root@master wk20-Roles]# ansible-galaxy role list --roles-path /root/wk20-Roles
# /root/wk20-Roles
- myapache, (unknown version)
# /usr/share/ansible/roles
# /etc/ansible/roles
[WARNING]: - the configured path /root/.ansible/roles does not exist.
[root@master wk20-Roles]#
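Inside the role, the tasks go into myapache/tasks/main.yml. A minimal sketch, reusing the package/service tasks from web.yaml above:

# /root/wk20-Roles/myapache/tasks/main.yml
- package:
    name: "httpd"
    state: present

- service:
    name: "httpd"
    state: started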


Let's go ahead and write some code - the playbook that uses the role.

Role -> myapache ->

[root@master my-wk20]# pwd
/root/my-wk20
[root@master my-wk20]# ls
setup.yaml
[root@master my-wk20]# cat setup.yaml
# configure apache
- hosts: myweb
  roles:
  - role: "myapache"
  #- role: "sec"        # another role; roles is a list, that's why each entry starts with -
[root@master my-wk20]#
# ansible-playbook setup.yaml --roles-path /root/wk20-Roles/
Got an error; check which role-related options ansible-playbook actually has:
[root@master my-wk20]# ansible-playbook -h | grep role


# Check your config file
# vi /etc/ansible/ansible.cfg

[defaults]

# some basic default values...

inventory      = /root/myhosts
host_key_checking=false
#ansible-playbook setup.yaml --roles-path /root/wk20-Roles/
roles_path    = /root/wk20-Roles/


[root@master my-wk20]# ansible-playbook setup.yaml
 

This time, you will see different output.
# ap setup.yaml     (ap = shell alias for ansible-playbook)


# ansible-galaxy role list


Another play
# cat setup.yaml
# configure apache
- hosts: myweb
  roles:
  - role: "myapache"

- hosts: mylb
  roles:
  - role: "mynewrole"

Ansible - Loop

 Ansible - Loop 12-18-2020

Loop

Install software on a host
- hosts: worker1
  tasks:
  - package:
      name: httpd
      state: present

  - package:
      name: php

Installing two pieces of software this way, say on RHEL systems, calls the package module twice and therefore the yum command twice.
It consumes more CPU/memory time - you waste CPU/RAM for nothing.

How can we resolve this kind of situation?
-> Rather than calling the same module multiple times, we call it once and pass a list of software packages.


We supply the software as a list, in order:

[ php, httpd, ... ]

(in older Ansible this list was called "items")

It's a list,

so we need a for loop.


Lets try an example in python

1. We have list of packages
>>> p = [ "httpd", "php", "xyz"]

2. Print them
>>> p
['httpd', 'php', 'xyz']

3. Loop through it
>>> for i in p:
...     print(i)
...
httpd
php
xyz
>>>

You can also assign the values directly:

>>> for i in [1,2,3, 4]:
...     print(i)
...
1
2
3
4


- hosts: worker1
  tasks:
  - package:
      name: "{{ item }}"
      state: present

    loop:
       - "httpd"
       - "php"



Note: older versions use with_items instead of loop.



Define the list in a variable:
# cat myloop.yaml
- hosts: worker1
  vars:
  - x:
     - "httpd"
     - "php"

  tasks:
  - package:
      name: "{{ item }}"
      state: present

    loop: "{{ x }}"

  - debug:
      var: x

# ap



# cat myloop.yaml
- hosts: worker1
  vars:
  - x:
     - "httpd"
     - "php"
  tasks:
    - debug:
        var: x[0]

This prints the first value (index 0).




In Linux systems,
we create a group and associate it with multiple users:

# groupadd mygroup
# useradd -g/-G mygroup user1   (and the same for user2, ...)

# cat /etc/passwd, /etc/group


How do we do this in Ansible?

Look up the user module documentation:
ansible-doc user


# cat user.yaml
- hosts: localhost
  tasks:
  - user:    # ansible module to create user ansible-doc user
      name: "jack"
      password: "password"
      state: present

# ap user.yaml

-> Add user to the group
# ansible-doc user

Go to the groups option (secondary groups) and see the details.


# cat user.yaml
- hosts: localhost
  tasks:
  - user:    # ansible module to create user ansible-doc user
      name: "jack"
      password: "password"
      state: present
      groups: "devops"

[root@master day16]# ansible-playbook user.yaml

How do we pass multiple values (user name, password, group) together?
# cat user.yaml
- hosts: localhost
  vars:
  - u1:
        - "jack1"
        - "redhat"
        - "devops"
# u1 = [ "jack1", "redhat", "devops" ]
#           0         1          2
# We know which index is which, but how does the system know?

  tasks:
  - user:    # ansible module to create user ansible-doc user
      name: "jack"
      password: "password"
      state: present
      groups: "devops"
    loop:


---------------------

# cat user.yaml
- hosts: localhost
  vars:
  - u1:
        - "jack1"
        - "redhat"
        - "devops"
  tasks:
  - user:    # ansible module to create user ansible-doc user
      name: "{{ u1[0] }}"
      password: "{{ u1[1] }}"
      state: present
      groups: "{{ u1[2] }}"
#    loop:


How can we arrange the data better?
Getting the arrangement of a data structure right is not easy.

- hosts: localhost
  vars:
  - u1:
        - "jack1"
        - 1234abc
        - "redhat"
        - "devops"
  tasks:
  - user:       # ansible module to create user ansible-doc user
      name: "{{ u1[0] }}"
      password: "{{ u1[1] }}"
      state: present
      groups: "{{ u1[2] }}"
    loop:

Say you insert an extra value: the indexes shift and the mapping gets messed up.
Here your password would end up as 1234abc rather than redhat.



We are not going to use a plain list any more. Instead of referring to values by index number, we will give each value a name, such as user, password, group.

Instead of the pre-assigned indexes 0, 1, 2, 3 ..., we will use our own keys.



- hosts: localhost
  vars:
  - u1:
        "name": "jack1"
        "gid": 1234
        "password": "redhat"
        "g": "devops"
  tasks:
  - user:
      name: "{{ u1['name'] }}"
      password: "{{ u1['password'] }}"
      state: present
      groups: "{{ u1['g'] }}"

This is called a dictionary or HASH.



- hosts: localhost
  vars:
  - u1:
        "name": "jack1"
#        "gid": 1234
        "password": "redhat"
        "g": "devops"
  tasks:
  - user:
      name: "{{ u1['name'] }}"
      password: "{{ u1['password'] }}"
      state: present
      groups: "{{ u1['g'] }}"




python dictionary
userdb = ["Ram",1111, "sam",2222, "chris", 3333]


userdb = [ ["Ram",1111], ["sam",2222], ["chris", 3333]]
> userdb
> userdb[1]
> userdb[1][1]
> userdb[0][1]
> userdb[2][1]


To retrieve a value, you need to know its position number.






Information for three users in one variable:
- hosts: 127.0.0.1
  vars:
  - userdb:
       - [ "Sam", 1111 ]
       - [ "Bill", 222 ]
       - [ "Cabob", 333 ]



which can also be written in block style as

- hosts: 127.0.0.1
  vars:
  - userdb:
       - - "Sam"
         - 1111
       - - "Bill"
         - 222
       - - "Cabob"
         - 333



or, better, as a list of dictionaries:

- hosts: 127.0.0.1
  vars:
  - userdb:
       - name: "Ram"
         phone: 1111
#      - name: "jack"
#        password: redhat
       - name: "Chris"
         phone: 222
  tasks:
  - debug:
      var: userdb



- hosts: 127.0.0.1
  vars:
  - userdb:
       - name: "Ram"
         phone: 1111
#      - name: "jack"
#        password: redhat
       - name: "Chris"
         phone: 222

  tasks:
  - debug:
      var: userdb[1]



loop is a for loop;
a for loop always iterates over a variable (a list).



A password given in plain text is not supported by the user module - it expects a hashed value.




============================

Encrypt the password

ansible-doc passwd

Convert the clear text into a hash with the password_hash filter:
item.p | password_hash('sha512')
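Putting the pieces together - a loop over a list of dictionaries, hashing the password on the fly. A sketch with made-up users, not from the session:

- hosts: localhost
  vars:
  - userdb:
      - { name: "jack1", p: "redhat", g: "devops" }
      - { name: "jack2", p: "redhat", g: "devops" }
  tasks:
  - user:
      name: "{{ item.name }}"
      password: "{{ item.p | password_hash('sha512') }}"
      groups: "{{ item.g }}"      # assumes the devops group already exists
      state: present
    loop: "{{ userdb }}"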


Privilege escalation - Tuesday's topic.

Know about sudo.


You can also use loops inside a Jinja template.

Jinja loops can only be used in template files.


Next: replace plain variables with Vault-protected ones.


Windows - Windows 2019 server installation to replace old Active directory domain controller

 Provision Windows 2019 on HPE DL380 Gen10 to replace old Active directory domain controller

1. Move/Mount the server in the datacenter
2. Keep inventory record/Label the server with proper classification
3. Select the hostname and IP address for the host
4. Configure ILO mgmt IP/host, if needed certificate for ILO
5. If needed, set BIOS password
6. Build the server and join the domain
7. Add add-on roles
8. Perform other software/tools installation
9. Promote this server to Domain Controller

Tuesday, December 22, 2020

Ansible - Redhat Seminar

 

Redhat ansible - Seminar ..

 Anant -> Redhat -> Seminar ..

Re-writing the strategy to automate an organization's tasks.

(Example: Tesla's charging strategy.)


- Using automation is the new normal;
automation improves accountability and efficiency.


Automation was already there, in different forms, but that automation sat in silos ... (SILOS)

Automated silos -> are still -> silos.


We need enterprise-wide, organization-level automation.

- The individual must benefit first: saving time, ease of doing the job.
- Then the team benefits, and other teams benefit by adopting it. When teams benefit, the whole enterprise benefits.


Automation for everyone
-> Designed around the way people work and the way people work together.

 
How?
Automation happens when a person encounters a problem they never want to solve by hand again.
- You put the steps together, and the job is done in a short time.



Everyone thinks about automation.

We try to minimize the risk by automating...

Enterprise-level automation
-> Different teams within the organization need to talk and communicate to make sure everything is working.

Why Ansible?
Puppet, Chef, Ansible, Terraform, Salt .....

There are many tools to choose from.

- For automation to scale,
- it must be simple to learn,
- easy to learn (simplicity).

Why Ansible is so widely adopted:
Simple, powerful, agentless.
Simple
-> Written in a human-readable language
-> You don't have to have coding skills
-> Tasks execute in order
-> Usable by every team
-> Get productive quickly

Powerful
-> App deployment
-> Configuration management
-> Workflow orchestration
-> Network automation
-> Orchestrate the app lifecycle

Agentless
-> Agentless architecture
-> Uses OpenSSH & WinRM
-> No agents to exploit or update
-> Get started immediately


What can I do using ansible?
- automate the deployment and management of your entire IT footprint

Do this
orchestration
configuration management
application deployment
provisioning
continuous delivery
security and compliance

On these
Firewalls
load Balancer
Applications ....



Technologies you can automate
Cloud
AWS
Azure
Digital Ocean
Google
OpenStack
RackSpace
....

OS
RHEL and Linux
Windows

Virt & Containers
Docker
VMware
RHV
OpenStack
OpenShift

....


ANSIBLE AUTOMATION PLATFORM

Engage ->    Ansible:-        engage users with an automation-focused experience
Scale  ->    Ansible Tower:-  operate and control at scale
Create ->    Ansible Engine:- universal language of automation


Content hub -> playbooks, rules ....
The playbooks are prewritten -> you can leverage them ... you want certified content ...

Modules, playbooks, roles .. certified ...

ROI -> Return on investment


Automated use cases
-----------------------

- Platform automation
-> 150+ Linux modules

Automate everything.

Windows automation
- 90+ Windows modules

ansible.com/cloud

800+ cloud modules -> provisioning, network

30+ cloud platforms -> public/private cloud: Azure, AWS, Google Cloud .. Rackspace ...

Network automation
-> 65+ network platforms - Cisco, Juniper .....

1000+ network modules
15 Galaxy network roles

Security automation
11+ security platforms
800+ security modules


Start small, think big
start with read-only

Crawl, walk, run and fly ...


The world is automating .. those who succeed in automation will win

Automation helps you in any job you do, in any field.



====================================
Learn, sustain, survive

Learn, upgrade
-> Technology
Automation
Kubernetes, Ansible

-> Approval from chain of people

automation, containerization
->  ease

Create your own production-like environment and test in it.


Even if I slip, I may not fall...


So what if you slip in the monsoon mud? Even if you don't fall, at least you got a good shake, didn't you?



Know about Edge Computing -> computing at the edge -> e.g. the computer in a self-driving car.

Monday, December 21, 2020

Ansible - Variable Load ...

1. Dynamic variable loading
We will create a variable file named after the OS family and use the variables in it to configure
our target node.

We do this by naming the variable file after the OS family fact (ansible_facts['os_family']).

1. I have one control node and 2 managed (target) nodes.
a. Have one control node (AWS [Free Tier] or VirtualBox)
b. Have one or 2 managed nodes (AWS or VirtualBox [CentOS/Amazon Linux and Ubuntu/Debian])
Note: Keep your key safe.

2. Config file
# cat /etc/ansible/ansible.cfg
[defaults]
inventory=/root/myhosts
host_key_checking=false

[privilege_escalation]
become=true

3. Inventory file
[web1]
192.168.10.10    ansible_user=ec2-user ansible_ssh_private_key_file=/home/ec2-user/pkey.pem ansible_connection=ssh
192.168.10.12   ansible_user=root ansible_ssh_pass=changeme ansible_connection=ssh

[web2]
192.168.10.20    ansible_user=ec2-user ansible_ssh_private_key_file=/home/ec2-user/pkey.pem ansible_connection=ssh
192.168.10.12    ansible_user=root ansible_ssh_pass=changeme ansible_connection=ssh



[root@master wk16]# cat RedHat.yaml
package_name: httpd
[root@master wk16]# cat Debian.yaml
package_name: apache2
[root@master wk16]# cat myplaybook.yaml
- hosts: all
  vars_files:
    - "{{ ansible_facts['os_family'] }}.yaml"
  tasks:
    - package:
        name: "{{ package_name }}"
        state: present
    - service:
        name: "{{ package_name }}"
        state: started

[root@master wk16]# ansible-playbook myplaybook.yaml


Run the playbook
# ap myplaybook.yaml


4. Open your browser and go to
http://192.168.10.10

You should be able to see the default Apache page.



Thursday, December 17, 2020

Ansible - Ansible Vault, command Idempotence, shell module

 Ansible Vault, command Idempotence, shell module
=====================================================

1. Create your yaml file
We are going to create a file keepitsecret.yaml and we will keep it secret using Vault.
[root@master wk_16]# cat myvault.yaml
- hosts: 127.0.0.1
  vars_files:
    - keepitsecret.yaml
  tasks:
  - name: Sending email using Gmail's smtp services
    mail:
      host: smtp.gmail.com
      port: 587
      username: "{{ u }}"
      password: "{{ p }}"
      to: sam@gmail.com
      subject: Testing email from gmail using ansible
      body: system {{ ansible_host }} has been successfully tested.

2. Create a vault where you will store your username/pw
[root@master wk_16]# ansible-vault create keepitsecret.yaml
New Vault password:
Confirm New Vault password:
u: "sam@gmail.com"
p: "MyPasswordSecret"

3. View the content of the file. You can't read what you stored; it's encrypted.
[root@master wk_16]# cat keepitsecret.yaml
$ANSIBLE_VAULT;1.1;AES256
32346435633239646636626465663162613262623434333664393437316461366565316364396632
6365373834616464333437373134653435386335653165660a326331363163353932373161386362
61316464353339383834666662353230393036313538646563303632393134363165353431336130
3037393363643463650a643762353433663662306630376231363836376464656330346235663964
31656463373832353739303239353032613838333231613464343336656239656535333561663064
3036336665303135313061666234313831626630343066613130
[root@master wk_16]#

Run your playbook
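Since the vars file is encrypted, the playbook has to be run with the vault password, e.g. by prompting for it:

# ansible-playbook myvault.yaml --ask-vault-pass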

I got email alert
Sign-in attempt was blocked
sam@gmail.com
Someone just used your password to try to sign in to your account from a non-Google app. Google blocked them, but you should check what happened. Review your account activity to make sure no one else has access.

Less secure app blocked
Google blocked the app you were trying to use because it doesn't meet our security standards.
Some apps and devices use less secure sign-in technology, which makes your account more vulnerable. You can turn off access for these apps, which we recommend, or turn on access if you want to use them despite the risks. Google will automatically turn this setting OFF if it's not being used.
Learn more
Google for "less secure app access" and
enable less secure apps to access Gmail.

========================================

Idempotence
- If the desired state already exists, do not do anything.
- By design, the command module does not have this idempotence feature.
How do we create it ourselves?
Let's create a sample YAML file:
# cat command.yaml
- hosts: w1
  tasks:
  - package:
      name: "httpd"
      state: present
  - command: "date"
# ap command.yaml
To hide the change message (the one displayed in yellow color),
-> use the "changed_when" directive (it is a task keyword, not a module)
# cat command.yaml
- hosts: w1
  tasks:
  - package:
      name: "httpd"
      state: present
  - command: "date"
    changed_when: false

-----------
The command module always runs:
- hosts: w1
  tasks:
  - package:
      name: "httpd"
      state: present
  - command: "date"
    changed_when: false
   - command: "mkdir /opt/test"

To keep the play running even if the command fails, add ignore_errors:
- hosts: w1
  tasks:
  - package:
      name: "httpd"
      state: present
  - command: "date"
    changed_when: false
  - command: "mkdir /opt/test"
    ignore_errors: yes

How do we make the command module an intelligent one?
Let's put in a condition:
if the directory /opt/test already exists, do not run the mkdir; if not, create it.

- hosts: w1
  tasks:
  - package:
      name: "httpd"
      state: present
  - command: "date"
    changed_when: false
  - command: "mkdir /opt/test"
    ignore_errors: yes
How to check if the directory exists?
# ls -ld /opt/test
If it exists, it lists the directory; if not, it gives you an error.
How does Ansible know this is an error?
Unix systems capture the exit status of the last command, i.e. whether it failed or succeeded.
0 = success
1 or any other number = failure or some other condition
[root@master wk_16]# ls -ld /root
dr-xr-x---. 22 root root 4096 Dec  8 07:50 /root
[root@master wk_16]# echo $?
0
[root@master wk_16]# ls -ld /roo1
ls: cannot access '/roo1': No such file or directory
[root@master wk_16]# echo $?
2
Now, let's see how we can make this task a little more intelligent.
We are talking about idempotence here.
- hosts: w1
  tasks:
  - command: "ls -ld /opt/test"
    register: x          # store the output of ls -ld
    ignore_errors: yes
  - debug:
#      msg: "Testing ..."
      var: x             # you can use debug to print a variable
  - command: "mkdir /opt/test"
    ignore_errors: yes
    when: false

Run the playbook
# ap command.yaml
----------------------------------

- hosts: w1
  tasks:
  - command: "ls -ld /opt/test"
    register: x          # store the output of ls -ld
    ignore_errors: yes
  - debug:
#      msg: "Testing ..."
      var: x
  - command: "mkdir /opt/test"
#    ignore_errors: yes
    when: x.rc != 0
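
As an alternative to the register/when pattern, the command module also accepts a creates argument, which skips the task when the given path already exists. A minimal sketch using the same /opt/test example:

- hosts: w1
  tasks:
  - command: "mkdir /opt/test"
    args:
      creates: /opt/test   # task is skipped if /opt/test already exists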

===============================
The shell module and the command module
Shell-prompt commands, for example counting entries in the user database:
# cat /etc/passwd | wc -l
# cat ad-hock.yaml
- hosts: w1
  tasks:
  - command: "cat /etc/passwd | wc -l"
This throws an error, because the command module does not run through a shell (no pipes, redirection, etc.).
We do have the shell module to run shell-prompt commands; it is a little slower than the command module.
# cat ad-hock.yaml
- hosts: w1
  tasks:
  - shell: "cat /etc/passwd | wc -l"
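
If you also want to see the count that comes back, you can register the result and print it; a small sketch against the same hosts group:

- hosts: w1
  tasks:
  - shell: "cat /etc/passwd | wc -l"
    register: usercount
  - debug:
      var: usercount.stdout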
=======================================================
Ansible vault
-----------------
Send mail from your Gmail account to a friend's email using Ansible.
# cat mymail.yaml
- hosts: 127.0.0.1
  tasks:
  - name: Sending email using Gmail's smtp services
    mail:
      host: smtp.gmail.com
      port: 587
      username: sam@gmail.com
      password: itsasecret
      to: ram@gmail.com
      subject: Testing email from gmail using ansible
      body: system {{ ansible_host }} has been successfully tested.
Run it
# ap mymail.yaml
failed - authentication failed

# cat mymail.yaml
- hosts: 127.0.0.1
  vars:
    u: "sam@gmail.com"
    p: "mypass"
  tasks:
  - name: Sending email using Gmail's smtp services
    mail:
      host: smtp.gmail.com
      port: 587
      username: "{{ u }}"
      password: "{{ p }}"
      to: ram@gmail.com
      subject: Testing email from gmail using ansible
      body: system {{ ansible_host }} has been successfully tested.
Run it
# ap mymail.yaml

failed again

Now, let's move the credentials into a separate secret file:
# cat mysecret.yaml
u: "sam@gmail.com"
p: "mypass"

# cat mymail.yaml
- hosts: 127.0.0.1
  vars_files:
    - mysecret.yaml
  tasks:
  - name: Sending email using Gmail's smtp services
    mail:
      host: smtp.gmail.com
      port: 587
      username: "{{ u }}"
      password: "{{ p }}"
      to: ram@gmail.com
      subject: Testing email from gmail using ansible
      body: system {{ ansible_host }} has been successfully tested.
Run it
# ap mymail.yaml
Say you leave the secret file open, or upload it to the net: someone could get access to your account.
So you would like to secure this file.

Update the secret file with your real password:
# cat mysecret.yaml
u: "sam@gmail.com"
p: "mypass_secret"
Here you include the actual password of your Gmail account.

Now run the code - your play:
# ap mymail.yaml
Still got an authentication error.

There might be an issue on the mail-server side.
Go to your Gmail account (myaccount.google.com) and search for "less secure app access".
Say yes to allow the email-sending feature.
Note that with this setting on, any less secure app can sign in and send email.
Re-run your play:
# ap mymail.yaml

Somehow it is still not working ...
Either way, this play has a security flaw: someone may see the confidential
information at the office or anywhere else, for example if you accidentally
upload the file to the net.


# cat mymail.yaml
- hosts: 127.0.0.1
  vars_files:
    - mysecret.yaml
  tasks:
  - name: Sending email using Gmail's smtp services
    mail:
      host: smtp.gmail.com
      port: 587
      username: "{{ u }}"
      password: "{{ p }}"
      to: ram@gmail.com
      subject: Testing email from gmail using ansible
      body: system {{ ansible_host }} has been successfully tested.


Now, to secure your mysecret.yaml file -> lock it with a key.
Locking it with a key is called a vault.
How do we lock it?
There is a command called ansible-vault.
Get help to see how to encrypt a YAML file and look at the syntax:
# ansible-vault -h
Let's go ahead and create a vault.
# av create mysecure.yaml
Enter your password:
Vault password: EnterYourPassword
Now store your variables:
u: "sam@gmail.com"
p: "password123"
Now this file is encrypted.
# cat mysecure.yaml
You can't read the output.

How do I edit it?
# av edit mysecure.yaml
How do I view it?
# av view mysecure.yaml
This is a better way to manage your password.
Even if this file is shared, your password is kept secret.

# cat mymail.yaml
- hosts: 127.0.0.1
  vars_files:
    - mysecure.yaml
  tasks:
  - name: Sending email using Gmail's smtp services
    mail:
      host: smtp.gmail.com
      port: 587
      username: "{{ u }}"
      password: "{{ p }}"
      to: ram@gmail.com
      subject: Testing email from gmail using ansible
      body: system {{ ansible_host }} has been successfully tested.

# ap mymail.yaml

It prompts for the vault password:
# ap --ask-vault-pass mymail.yaml    # or use --vault-password-file to point at a password file
Enter the vault password, which unlocks the file,
and the code starts executing.

To see the other options, use the help:
# ansible-vault -h
rekey - to change the vault password
edit  - to edit the encrypted file and make changes
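
A few more ansible-vault sub-commands that are handy here (a sketch, using the file names from above):

# ansible-vault encrypt mysecret.yaml    # encrypt an existing plain-text vars file
# ansible-vault rekey mysecure.yaml      # change the vault password
# ansible-vault decrypt mysecure.yaml    # remove the encryption again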


dynamic inventory
------------------------

Wednesday, December 16, 2020

Kubernetes - LAB - Resource Requirements and Limits, Daemonset, Static PODS

 Kubernetes - LAB

OOMKilled - out of memory killed

LAB - Resource Requirements and Limits, Daemonset, Static PODS

1. A pod named 'rabbit' is deployed. Identify the CPU requirements set on the Pod
in the current(default) namespace

-> Run the command 'kubectl describe pod rabbit' and inspect requests.
Under Requests, the CPU is set to 1.

2. Delete the 'rabbit' Pod.
Once deleted, wait for the pod to fully terminate.
-> Run the command 'kubectl delete pod rabbit'.

3. Inspect the pod elephant and identify the status.

OOM killed

4. The status 'OOMKilled' indicates that the pod ran out of memory. Identify the memory limit set on the POD.
Under Requests, memory is set to 5Mi.

5. The elephant runs a process that consume 15Mi of memory. Increase the limit of the elephant pod to 20Mi.
Delete and recreate the pod if required. Do not modify anything other than the required fields.

    Pod Name: elephant
    Image Name: polinux/stress
    Memory Limit: 20Mi

# kc edit pod elephant

Change the value to 20Mi in the limits section and save it (wq!). The edit is rejected because a running pod's resources can't be changed in place, and kubectl saves a temp copy as /tmp/kuber....

Delete the pod and recreate it from that temp file:
# kc delete pod elephant
# kc create -f /tmp/kuber.....
# kc get pod
You will see it in Running state.
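
For reference, the relevant part of the pod spec after the edit looks roughly like this (a sketch; the container name is illustrative, only the resources block matters here):

spec:
  containers:
  - name: mem-stress          # illustrative container name
    image: polinux/stress
    resources:
      requests:
        memory: 5Mi
      limits:
        memory: 20Mi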


6. Delete the 'elephant' Pod.
Once deleted, wait for the pod to fully terminate.
-> Run the command 'kubectl delete pod elephant'.


Daemon-set - LAB
------------------
1. How many DaemonSets are created in the cluster in all namespaces?
Check all namespaces
-> Run the command kubectl get daemonsets --all-namespaces

# kc get daemonset
# kc get daemonsets --all-namespaces
6 of them available
# kc get ds --all-namespaces

2. Which namespace are the DaemonSets created in?

The above command shows it is kube-system.

3. Which of the below is a DaemonSet?
-> Run the command kubectl get all --all-namespaces and identify the types

# kc get all --all-namespaces | more
weave-net

4. On how many nodes are the pods scheduled by the DaemonSet kube-proxy

-> Run the command kubectl describe daemonset kube-proxy --namespace=kube-system
# kc describe daemonset kube-proxy --namespace=kube-system | more

no of nodes scheduled : 2

or

DaemonSet pods are created on all nodes.
# kc get nodes
# kc -n kube-system get pods | grep proxy

We have two pods.
# kc -n kube-system get pods -o wide | grep proxy

You can see the nodes they are scheduled on.

5. What is the image used by the POD deployed by the kube-flannel-ds-amd64 DaemonSet?

# kc -n kube-system describe ds weave-net | grep -i image
You see 2 images in the output.

6. Deploy a DaemonSet for FluentD Logging.
Use the given specifications.

    Name: elasticsearch
    Namespace: kube-system
    Image: k8s.gcr.io/fluentd-elasticsearch:1.20

Look at the Kubernetes documentation.
# kc create deployment elasticsearch --image=k8s.gcr.io/fluentd-elasticsearch:1.20 --dry-run -o yaml > elastic-search.yaml
# vi elastic-search.yaml
Change the kind from Deployment to DaemonSet.

Go to the metadata section and add a new entry:
namespace: kube-system

Delete the replicas field.
Also delete the strategy field, since a DaemonSet does not use a strategy.

Delete any other extra fields that are not required.

# kc apply -f elastic-search.yaml
# kc -n kube-system get ds elasticsearch
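
The edited manifest ends up roughly like this (a sketch; only the fields required by the question are kept, the label name is illustrative):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: elasticsearch
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: elasticsearch        # illustrative label
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: k8s.gcr.io/fluentd-elasticsearch:1.20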


Static PODS - check kubelet.service file to see the config or definition file location
/etc/kubernetes/manifests

-----------------
# docker ps

1. How many static pods exist in this cluster in all namespaces?
-> Run the command kubectl get pods --all-namespaces and look for those with -controlplane appended in the name

# kc get all --all-namespaces

For static pods, look for the node name (master/controlplane or another node name) appended as a suffix to the pod name.

There are 4 of them on the control plane.

or
# kc get pods --all-namespaces
# kc get pods --all-namespaces | grep "\-master"

2. Which of the below components is NOT deployed as a static pod?
kube-proxy (it is deployed as a DaemonSet, not a static pod)

3. On what nodes are the static pods created?
-> Run the kubectl get pods --all-namespaces -o wide

control plane

4. What is the path of the directory holding the static pod definition files?

# ps -ef | grep kubelet

Look for --config:
/var/lib/kubelet/config.yaml
and search for static:
# grep -i static /var/lib/kubelet/config.yaml

You see /etc/kubernetes/manifests (the staticPodPath).

5. How many pod definition files are present in the manifests folder?
-> Count the number of files under /etc/kubernetes/manifests

6. What is the docker image used to deploy the kube-api server as a static pod?
-> Check the image defined in the /etc/kubernetes/manifests/kube-apiserver.yaml manifest file.
# grep -i image /etc/kubernetes/manifests/kube-apiserver.yaml

7. Create a static pod named static-busybox that uses the busybox image and the command sleep 1000

    Name: static-busybox
    Image: busybox


# kc run static-busybox --image=busybox --restart=Never --dry-run -o yaml --command -- sleep 1000 > static-busybox.yaml
# kc get pods
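
Since static pods are picked up from the kubelet's manifest directory (/etc/kubernetes/manifests, as noted above), copy or move static-busybox.yaml there and the kubelet creates the pod automatically. The generated manifest looks roughly like this (a sketch):

apiVersion: v1
kind: Pod
metadata:
  name: static-busybox
spec:
  containers:
  - name: static-busybox
    image: busybox
    command:
    - sleep
    - "1000"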




8. Edit the image on the static pod to use busybox:1.28.4

    Name: static-busybox
    Image: busybox:1.28.4
# vi static-busybox.yaml
change the image version to
image: busybox:1.28.4

kc get pods

9. kc get pods
Check the suffix of the pod name to see which node the static pod runs on,
then look for the kubelet config file on that node.

# kc get pods
# kc get node node01 -o wide

# ssh node01
It failed because node01 is not in the hosts file or DNS.

Copy the internal IP from the 'kc get node node01 -o wide' output.

# ssh IP-address

you can login

# ps -ef | grep kubelet | grep "\--config"

check the --config path
# grep -i static /var/lib/kubelet/config.yaml

Remove the pod definition file...
# rm -rf greenbox.yaml
logout

go to master node and run
# kc get nodes
# kc get pods

10. We just created a new static pod named static-greenbox. Find it and delete it.

    Static pod deleted


Follow the steps above: ssh to node01, find the static pod path from the kubelet config, remove the greenbox pod definition file there, log out, and verify from the master node with 'kc get nodes' and 'kc get pods'.



Multiple Schedulers
--------------------------

1. What is the name of the POD that deploys the default kubernetes scheduler in this environment?
-> Run the command 'kubectl get pods --namespace=kube-system'
# kc -n kube-system get pods

we see name of the scheduler is kube-scheduler-master

2. What is the image used to deploy the kubernetes scheduler?
Inspect the kubernetes scheduler pod and identify the image
-> Run the command 'kubectl describe pod kube-scheduler-controlplane --namespace=kube-system'

# kc -n kube-system describe pod kube-scheduler-controlplane | grep -i image

We see two image lines; the version is v1.19.0.

3. Deploy an additional scheduler to the cluster following the given specification.
Use the manifest file used by kubeadm tool. Use a different port than the one used by the current one.

    Namespace: kube-system
    Name: my-scheduler
    Status: Running
    Custom Scheduler Name

-> Use the file at /etc/kubernetes/manifests/kube-scheduler.yaml to create your own scheduler.

# cd /etc/kubernetes/manifests
# cp kube-scheduler.yaml /root/my-scheduler.yaml

Google for "multiple schedulers", or on the Kubernetes documentation page search for multiple schedulers,
and go to the example section.

Look under the command section and set the leader-elect and scheduler-name flags to:

--leader-elect=false
--scheduler-name=my-scheduler

Go to the metadata section and change the name:
name: my-scheduler

Go to the spec section and, further down, change the name of the container:

name: my-scheduler

once modified, run the command

# kc create -f my-scheduler.yaml
# kc -n kube-system get pods

review the output
my-scheduler is created..
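
For reference, the fields touched in my-scheduler.yaml end up roughly like this (a sketch; any flags not shown are left exactly as they were in the copied kubeadm manifest):

metadata:
  name: my-scheduler
  namespace: kube-system
spec:
  containers:
  - name: my-scheduler
    command:
    - kube-scheduler
    - --leader-elect=false
    - --scheduler-name=my-scheduler
    # ...remaining flags from the original kube-scheduler.yaml stay unchanged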

4. A POD definition file is given. Use it to create a POD with the new custom scheduler.
File is located at /root/nginx-pod.yaml

    Name: nginx
    Uses custom scheduler
    Status: Running


Add a new entry under the spec section of /root/nginx-pod.yaml:

spec:
  schedulerName: my-scheduler

Save the file and run:
# kc create -f nginx-pod.yaml

# kc -n kube-system get pods
# kc describe pod nginx
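
To confirm which scheduler actually placed the pod, check the pod events; the Scheduled event should list my-scheduler as its source (a quick verification step):

# kc get events -o wide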





Kubernetes - Taints and Tolerations and Node Affinity rules

Taints and Tolerations and Node Affinity rules

1. How many Nodes exist on the system?
including the master/controlplane node

-> Run the command 'kubectl get nodes' and count the number of nodes.
# kc get nodes

2. Do any taints exist on node01?
-> Run the command 'kubectl describe node node01' and see the taint property

# kc describe node node01 | grep -i taint

None

3. Create a taint on node01 with key of 'spray', value of 'mortein' and effect of 'NoSchedule'
-> Run the command 'kubectl taint nodes node01 spray=mortein:NoSchedule'.

# kc taint nodes node01 spray=mortein:NoSchedule
# kc describe nodes node01 | grep -i taint

4. Create a new pod with the NGINX image, and Pod name as 'mosquito'

# kc run mosquito --image=nginx
# kc get pods
or
# cat mosquito.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: mosquito
spec:
  containers:
  - image: nginx
    name: mosquito

5. What is the state of the POD?
-> Run the command 'kubectl get pods' and see the state
# kc get pods -o wide
Its on pending state

6. Why do you think the pod is in a pending state?
The mosquito pod cannot tolerate the taint mortein.

7. Create another pod named 'bee' with the NGINX image, which has a toleration set to the taint Mortein

# cat bee.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: bee
spec:
  containers:
  - image: nginx
    name: bee
  tolerations:
    - key: spray
      value: mortein
      effect: NoSchedule
      operator: Equal

# kc create -f bee.yaml

8. Notice the 'bee' pod was scheduled on node node01 despite the taint.

# kc get pod -o wide

because we set toleration on bee pod.

8. Do you see any taints on master/controlplane node?
-> Run the command 'kubectl describe node master/controlplane' and see the taint property

# kc describe node controlplane | grep -i taint
It does have a taint - NoSchedule.

9. Remove the taint on master/controlplane, which currently has the taint effect of NoSchedule
-> Run the command 'kubectl taint nodes master/controlplane node-role.kubernetes.io/master:NoSchedule-'.
# kc taint node controlplane node-role.kubernetes.io/master:NoSchedule-

Remember the '-' at the end; that is what removes the taint.

10. What is the state of the pod 'mosquito' now? Which node is the POD 'mosquito' on now?
-> Run the command 'kubectl get pods'
Running on controlplane


Node Affinity

1. How many Labels exist on node node01?
-> Run the command kubectl describe node node01 and count the number of labels.

5 - go to the Labels section and count them.

2. What is the value set to the label beta.kubernetes.io/arch on node01?
-> Run the command kubectl describe node node01 OR kubectl get node node01 --show-labels and check the value for the label
it is set to amd64

3. Apply a label color=blue to node node01
-> Run the command kubectl label node node01 color=blue.

# kc label node node01 color=blue
# kc describe node node01 | more   or
# kc get node node01 --show-labels

4. Create a new deployment named blue with the nginx image and 6 replicas
-> Run the command kubectl create deployment blue --image=nginx followed by kubectl scale deployment blue --replicas=6
# kc create deployment blue --image=nginx --replicas=6
# kc get pods -o wide

5. Which nodes can the pods for the blue deployment placed on?
-> Check if master/controlplane and node01 have any taints on them that will prevent the pods to be scheduled on them. If there are no taints, the pods can be scheduled on either node.

# kc describe node node01 | grep -i taints
# kc describe node controlplane | grep -i taints

# kc get pods -o wide
On controlplane and node01, since no taints are set on the nodes.
With no taints set, the pods can be placed on either of them.


6. Set Node Affinity to the deployment to place the pods on node01 only

Name: blue
Replicas: 6
Image: nginx
NodeAffinity: requiredDuringSchedulingIgnoredDuringExecution
Key: color
values: blue

Create a yaml file

# cat pod-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue
spec:
  replicas: 6
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: Always
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: color
                operator: In    # NotIn
                values:
                - blue


# kc delete deployment blue
# kc apply -f pod-deployment.yaml

7. Which nodes are the pods placed on now?

-> Run the command kubectl get pods -o wide and see the Node column

All pods are on node01.


8. Create a new deployment named red with the nginx image and 3 replicas, and ensure it gets placed on the master/controlplane node only.

Use the label - node-role.kubernetes.io/master - set on the master/controlplane node.

Name: red
Replicas: 3
Image: nginx
NodeAffinity: requiredDuringSchedulingIgnoredDuringExecution
Key: node-role.kubernetes.io/master
Use the right operator


# cat pod-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: red
spec:
  replicas: 3
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: Always
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: Exists


