Ansible - Dynamic Inventory - 12/30/2020
CM (Configuration Management) -> Controller Node -> Playbook (code) ----> Managed Nodes
Adding IPs to the inventory:
We have been manually updating the inventory with each new IP address.
This approach is called a static inventory.
There are cases where you don't know the IP of the target node in advance; you would have to log in to the target node and check it.
Or the system's IP changes upon reboot.
We live in a dynamic world: we may bring up a server for testing and shut it down afterwards.
When we bring the server up again later, its IP has changed.
Say your environment has thousands of servers,
or your servers are built in the cloud - AWS, Google, Azure, or any other source - or locally.
You keep getting new IPs, or your instances run in Docker/containers.
The OS can come from any source and the IPs keep changing:
- VM
- Cloud -> AWS, GCP, Azure
- Containers
We want a mechanism where the playbook is written against a certain context rather than fixed IPs.
We can make our inventory a little intelligent - call it a dynamic inventory.
What does that mean?
- We will not write IPs manually in the inventory file.
- Since we don't know them, we can't add them to the inventory anyway.
Instead, when we run a playbook or ad-hoc command, a script
scans for the current IPs.
What information does such a script have to provide?
- For Linux hosts, at minimum the IP and SSH access.
We will have a playbook that goes out to AWS and:
1. installs an OS (EC2) - provisions a server
2. configures a web server on it
In the playbook you would have to define:
- hosts: <ip>
  tasks:
    - configure web server
You can only run this if you already know the IP,
and if you know the IP, you have to add it to the inventory on the control node.
# ansible all --list-hosts
Never use an IP directly in the playbook;
rather, use a group name:
os1 - 1.2.3.4
os2 - 1.2.3.5
os3 - 1.2.3.6
horizontal scaling -> adding more hosts
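For example, a minimal sketch of grouping hosts and targeting the group (the group name webservers and the httpd package are illustrative, not from the notes):

# /root/myhosts
[webservers]
1.2.3.4
1.2.3.5
1.2.3.6

# the playbook targets the group, never the IPs
- hosts: webservers
  tasks:
    - name: configure web server
      package:
        name: httpd
        state: present

New IPs added under [webservers] are picked up on the next run without touching the playbook.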
----------------------------
inventory
# cat /etc/ansible/ansible.cfg
# more /root/myhosts
Normally you create a single inventory file and point the config file at it.
You may also be using multiple inventory files - one per app, subnet, or any other purpose.
# ansible all --list-hosts
The extension can be .py, .yaml, or no extension at all.
[root@master mydb]# cat >a
1.1.1.1
[root@master mydb]# cat >b
2.2.2.2
[root@master mydb]# cat >c
3.3.3.3
[root@master mydb]# ls
a b c
Update the Ansible config file to point the inventory at that directory:
# vi /etc/ansible/ansible.cfg
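A minimal sketch of the relevant setting, assuming the files above live in /root/mydb:

[defaults]
inventory = /root/mydb

When the inventory source is a directory, Ansible reads every file inside it - here a, b, and c.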
Since Ansible accepts the .py extension, we can write Python code as well,
e.g. using a scanning tool such as nmap to discover hosts.
# ansible all --list-hosts
[root@master mydb]# cat my.py
#!/usr/bin/python3
print("5.5.5.5")
The display is not right: the IP comes out, but only as printed text, and Ansible cannot parse it as hosts.
If you query the static files manually, they display properly.
A dynamic inventory script has to follow a certain format.
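A minimal sketch of that format, assuming a single ungrouped host (the --list/--host protocol and JSON layout are the convention dynamic inventory scripts follow; the IP is illustrative):

#!/usr/bin/python3
# A dynamic inventory script must print JSON describing groups and hosts.
import json
import sys

inventory = {
    "all": {"hosts": ["5.5.5.5"]},
    "_meta": {"hostvars": {}},   # per-host variables, empty here
}

if len(sys.argv) > 1 and sys.argv[1] == "--list":
    print(json.dumps(inventory))
else:
    # Ansible may also call the script as: ./script --host <name>
    print(json.dumps({}))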
Check Vimal Daga's GitHub:
github.com/vimallinuxworld12/ansible_dynamic_inventory/master/hosts.py
Download it:
# wget <download URL>
Get hosts.py from Vimal's page.
# cp hosts.py mydb
# chmod +x hosts.py
# python3 hosts.py --list
In the exam, they give you a pre-created file; you need to copy it into place and run from there.
Now you can run:
# ansible all --list-hosts
and the hosts come out in good shape, because the script emits the format Ansible understands.
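You can also point Ansible at the script directly with the -i option (the script must be executable):

# ansible -i hosts.py all --list-hosts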
Get another URL from the Ansible GitHub repo:
https://github.com/ansible/ansible/tree/stable-2.9/contrib/inventory
There are ready-made scripts there; just download one and use it.
Download the ec2.py file.
# chmod +x ec2.py
Run it manually:
# python2 ec2.py
You need to install a library called boto if you don't have it:
# pip3 list
# pip3 install boto
# python2 ec2.py -> it failed again...
It might be failing under python2 because of the lower Python version, so try:
# python3 ec2.py --list
# ./ec2.py --list
Still a problem: the script's shebang points at the wrong interpreter.
# vi ec2.py and change the Python interpreter path in the shebang:
#!/usr/bin/python3
Go to the AWS dashboard.
You have to give the script:
1. region info
2. API access
3. login/password info (access key and secret)
# vi ec2.py
You could update the code with the region and credentials,
or you can set them as environment variables and you are done.
On the dashboard:
- IAM -> create a user with power: PowerUserAccess
- click, click and click ....
- record your access key and secret key
Then export them as environment variables:
export AWS_ACCESS_KEY_ID='AJDHSJHDSHDDSDD'
export AWS_SECRET_ACCESS_KEY='dsfsdfsdfsdfsdfsdfsd'
export AWS_REGION='us-east-1'
# ansible all --list-hosts
-------------------------
Launch an AWS instance manually and tag it:
Tags:
  Name: mywebos
  Country: Nepal
  DataCenter: Virginia
There is another file, ec2.ini - download it as well.
It gives an error again, on line 172.
Go ahead, comment out that line, and run it again.
# ansible all --list-hosts
Always tag the OS on the cloud:
Key       Value
Country   US
DC        dc2
Tech      web
# ./ec2.py --list
Keep the ec2.ini file in the same location as ec2.py.
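A minimal sketch of the ec2.ini settings you typically touch (these keys exist in the stock ec2.ini; the values here are illustrative assumptions):

[ec2]
# limit the scan to the region you actually use
regions = us-east-1
regions_exclude =
# which address Ansible should use to connect
destination_variable = public_dns_name
vpc_destination_variable = ip_address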
Tags are really important when working with Ansible; the script exposes each tag as a host group:
# ansible tag_Country_US --list-hosts
# ansible tag_Country_IN --list-hosts
Now, in summary:
1. Launch an OS using an Ansible playbook,
   and tag it.
2. Configure the dynamic inventory.
3. Write a playbook to configure a web server on the instance in the cloud.
   Hint: use hosts: tag_Country_US -
   it grabs all the matching IPs; then install the web server (see the sketch below).
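A minimal sketch of step 3, assuming the tag group above and a Red Hat-family image (the httpd package name is an illustrative assumption):

- hosts: tag_Country_US
  become: yes
  tasks:
    - name: install web server
      package:
        name: httpd
        state: present
    - name: start and enable web server
      service:
        name: httpd
        state: started
        enabled: yes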
# ansible-doc -l
Install Kubernetes Cluster with Ansible
We are going to install a Kubernetes control plane with two worker nodes using Ansible.
Note that installation of the Ansible control node is not covered in this article.
Pre-requisites
This guide is based on Debian Stretch. You can use Ubuntu as well.
- Ansible control node with Ansible 2.9 to run playbooks.
- 3x Debian Stretch servers with the ansible user created for SSH.
- Each server should have 2x CPUs and 2GB of RAM.
- /etc/hosts file configured to resolve hostnames.
We are going to use the following ansible.cfg configuration:

[defaults]
inventory = ./inventory
remote_user = ansible
host_key_checking = False
private_key_file = ./files/id_rsa

[privilege_escalation]
become=False
become_method=sudo
become_user=root
become_ask_pass=False
The content of the inventory file can be seen below:

[k8s-master]
10.11.1.101

[k8s-node]
10.11.1.102
10.11.1.103

[k8s:children]
k8s-master
k8s-node
The Goal
To deploy a specific version of a 3-node Kubernetes cluster (one master and two worker nodes) with Calico networking and Kubernetes Dashboard.
Package Installation
Docker
Configure packet forwarding and install a specific version of Docker so that we can test upgrades later.
---
- name: Install Docker Server
  hosts: k8s
  become: true
  gather_facts: yes
  vars:
    docker_dependencies:
      - ca-certificates
      - gnupg
      - gnupg-agent
      - software-properties-common
      - apt-transport-https
    docker_packages:
      - docker-ce=5:18.09.6~3-0~debian-stretch
      - docker-ce-cli=5:18.09.6~3-0~debian-stretch
      - containerd.io
      - curl
    docker_url_apt_key: "https://download.docker.com/linux/debian/gpg"
    docker_repository: "deb [arch=amd64] https://download.docker.com/linux/debian stretch stable"
  tasks:
  - name: Debian | Configure Sysctl
    sysctl:
      name: "net.ipv4.ip_forward"
      value: "1"
      state: present
  - name: Debian | Install Prerequisites Packages
    package: name={{ item }} state=present force=yes
    loop: "{{ docker_dependencies }}"
  - name: Debian | Add GPG Keys
    apt_key:
      url: "{{ docker_url_apt_key }}"
  - name: Debian | Add Repo Source
    apt_repository:
      repo: "{{ docker_repository }}"
      update_cache: yes
  - name: Debian | Install Specific Version of Docker Packages
    package: name={{ item }} state=present force=yes install_recommends=no
    loop: "{{ docker_packages }}"
    notify:
      - start docker
  - name: Debian | Start and Enable Docker Service
    service:
      name: docker
      state: started
      enabled: yes
  handlers:
  - name: start docker
    service: name=docker state=started
...
Kubeadm and Kubelet
Install a specific version of kubeadm so that we can test upgrades later.
---
- name: Install Kubernetes Server
  hosts: k8s
  become: true
  gather_facts: yes
  vars:
    k8s_dependencies:
      - kubernetes-cni=0.6.0-00
      - kubelet=1.13.1-00
    k8s_packages:
      - kubeadm=1.13.1-00
      - kubectl=1.13.1-00
    k8s_url_apt_key: "https://packages.cloud.google.com/apt/doc/apt-key.gpg"
    k8s_repository: "deb https://apt.kubernetes.io/ kubernetes-xenial main"
  tasks:
  - name: Disable SWAP K8S will not work with swap enabled (1/2)
    command: swapoff -a
    when: ansible_swaptotal_mb > 0
  - name: Debian | Remove SWAP from fstab K8S will not work with swap enabled (2/2)
    mount:
      name: "{{ item }}"
      fstype: swap
      state: absent
    with_items:
      - swap
      - none
  - name: Debian | Add GPG Key
    apt_key:
      url: "{{ k8s_url_apt_key }}"
  - name: Debian | Add Kubernetes Repository
    apt_repository:
      repo: "{{ k8s_repository }}"
      update_cache: yes
  - name: Debian | Install Dependencies
    package: name={{ item }} state=present force=yes install_recommends=no
    loop: "{{ k8s_dependencies }}"
  - name: Debian | Install Kubernetes Packages
    package: name={{ item }} state=present force=yes install_recommends=no
    loop: "{{ k8s_packages }}"
...
Kubernetes Configuration
Create a Kubernetes control plane and join two other servers as worker nodes. Note the content of the file files/dashboard-adminuser.yaml below:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
Deploy Kubernetes cluster.
---
- name: Deploy Kubernetes Cluster
  hosts: k8s
  gather_facts: yes
  vars:
    k8s_pod_network: "192.168.192.0/18"
    k8s_user: "ansible"
    k8s_user_home: "/home/{{ k8s_user }}"
    k8s_token_file: "join-k8s-command"
    k8s_admin_config: "/etc/kubernetes/admin.conf"
    k8s_dashboard_adminuser_config: "dashboard-adminuser.yaml"
    k8s_kubelet_config: "/etc/kubernetes/kubelet.conf"
    k8s_dashboard_port: "6443"
    k8s_dashboard_url: "https://{{ ansible_default_ipv4.address }}:{{ k8s_dashboard_port }}/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login"
    calico_rbac_url: "https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml"
    calico_rbac_config: "rbac-kdd.yaml"
    calico_net_url: "https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml"
    calico_net_config: "calico.yaml"
    dashboard_url: "https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml"
    dashboard_config: "kubernetes-dashboard.yml"
  tasks:
  - name: Debian | Configure K8S Master Block
    block:
    - name: Debian | Initialise the Kubernetes cluster using kubeadm
      become: true
      command: kubeadm init --pod-network-cidr={{ k8s_pod_network }}
      args:
        creates: "{{ k8s_admin_config }}"
    - name: Debian | Setup kubeconfig for {{ k8s_user }} user
      file:
        path: "{{ k8s_user_home }}/.kube"
        state: directory
        owner: "{{ k8s_user }}"
        group: "{{ k8s_user }}"
        mode: "0750"
    - name: Debian | Copy {{ k8s_admin_config }}
      become: true
      copy:
        src: "{{ k8s_admin_config }}"
        dest: "{{ k8s_user_home }}/.kube/config"
        owner: "{{ k8s_user }}"
        group: "{{ k8s_user }}"
        mode: "0640"
        remote_src: yes
    - name: Debian | Download {{ calico_rbac_url }}
      get_url:
        url: "{{ calico_rbac_url }}"
        dest: "{{ k8s_user_home }}/{{ calico_rbac_config }}"
        owner: "{{ k8s_user }}"
        group: "{{ k8s_user }}"
        mode: "0640"
    - name: Debian | Download {{ calico_net_url }}
      get_url:
        url: "{{ calico_net_url }}"
        dest: "{{ k8s_user_home }}/{{ calico_net_config }}"
        owner: "{{ k8s_user }}"
        group: "{{ k8s_user }}"
        mode: "0640"
    - name: Debian | Set CALICO_IPV4POOL_CIDR to {{ k8s_pod_network }}
      replace:
        path: "{{ k8s_user_home }}/{{ calico_net_config }}"
        regexp: "192.168.0.0/16"
        replace: "{{ k8s_pod_network }}"
    - name: Debian | Download {{ dashboard_url }}
      get_url:
        url: "{{ dashboard_url }}"
        dest: "{{ k8s_user_home }}/{{ dashboard_config }}"
        owner: "{{ k8s_user }}"
        group: "{{ k8s_user }}"
        mode: "0640"
    - name: Debian | Install calico pod network {{ calico_rbac_config }}
      become: false
      command: kubectl apply -f "{{ k8s_user_home }}/{{ calico_rbac_config }}"
    - name: Debian | Install calico pod network {{ calico_net_config }}
      become: false
      command: kubectl apply -f "{{ k8s_user_home }}/{{ calico_net_config }}"
    - name: Debian | Install K8S dashboard {{ dashboard_config }}
      become: false
      command: kubectl apply -f "{{ k8s_user_home }}/{{ dashboard_config }}"
    - name: Debian | Create service account
      become: false
      command: kubectl create serviceaccount dashboard -n default
      ignore_errors: yes
    - name: Debian | Create cluster role binding dashboard-admin
      become: false
      command: kubectl create clusterrolebinding dashboard-admin -n default --clusterrole=cluster-admin --serviceaccount=default:dashboard
      ignore_errors: yes
    - name: Debian | Create {{ k8s_dashboard_adminuser_config }} for service account
      copy:
        src: "files/{{ k8s_dashboard_adminuser_config }}"
        dest: "{{ k8s_user_home }}/{{ k8s_dashboard_adminuser_config }}"
        owner: "{{ k8s_user }}"
        group: "{{ k8s_user }}"
        mode: "0640"
    - name: Debian | Create service account
      become: false
      command: kubectl apply -f "{{ k8s_user_home }}/{{ k8s_dashboard_adminuser_config }}"
      ignore_errors: yes
    - name: Debian | Create cluster role binding cluster-system-anonymous
      become: false
      command: kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
      ignore_errors: yes
    - name: Debian | Test K8S dashboard and wait for HTTP 200
      uri:
        url: "{{ k8s_dashboard_url }}"
        status_code: 200
        validate_certs: no
      ignore_errors: yes
      register: result_k8s_dashboard_page
      retries: 10
      delay: 6
      until: result_k8s_dashboard_page is succeeded
    - name: Debian | K8S dashboard URL
      debug:
        var: k8s_dashboard_url
    - name: Debian | Generate join command
      command: kubeadm token create --print-join-command
      register: join_command
    - name: Debian | Copy join command to local file
      become: false
      local_action: copy content="{{ join_command.stdout_lines[0] }}" dest="{{ k8s_token_file }}"
    when: "'k8s-master' in group_names"
  - name: Debian | Configure K8S Node Block
    block:
    - name: Debian | Copy {{ k8s_token_file }} to server location
      copy:
        src: "{{ k8s_token_file }}"
        dest: "{{ k8s_user_home }}/{{ k8s_token_file }}.sh"
        owner: "{{ k8s_user }}"
        group: "{{ k8s_user }}"
        mode: "0750"
    - name: Debian | Join the node to cluster unless file {{ k8s_kubelet_config }} exists
      become: true
      command: sh "{{ k8s_user_home }}/{{ k8s_token_file }}.sh"
      args:
        creates: "{{ k8s_kubelet_config }}"
    when: "'k8s-node' in group_names"
...
Verify Cluster Status
Cluster nodes:
$ kubectl get nodes -o wide
NAME          STATUS   ROLES    AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION   CONTAINER-RUNTIME
debian9-101   Ready    master   5m30s   v1.13.1   10.11.1.101   <none>        Debian GNU/Linux 9 (stretch)   4.9.0-13-amd64   docker://18.9.6
debian9-102   Ready    <none>   2m30s   v1.13.1   10.11.1.102   <none>        Debian GNU/Linux 9 (stretch)   4.9.0-13-amd64   docker://18.9.6
debian9-103   Ready    <none>   2m29s   v1.13.1   10.11.1.103   <none>        Debian GNU/Linux 9 (stretch)   4.9.0-13-amd64   docker://18.9.6
Cluster pods:
$ kubectl get pods -n kube-system -o wide
NAME                                   READY   STATUS    RESTARTS   AGE     IP              NODE          NOMINATED NODE   READINESS GATES
calico-node-fghpt                      2/2     Running   0          3m29s   10.11.1.101     debian9-101   <none>           <none>
calico-node-stf8j                      2/2     Running   0          77s     10.11.1.103     debian9-103   <none>           <none>
calico-node-tqx8r                      2/2     Running   0          78s     10.11.1.102     debian9-102   <none>           <none>
coredns-54ff9cd656-52x2n               1/1     Running   0          3m46s   192.168.192.3   debian9-101   <none>           <none>
coredns-54ff9cd656-vzg6b               1/1     Running   0          3m47s   192.168.192.2   debian9-101   <none>           <none>
etcd-debian9-101                       1/1     Running   0          4m3s    10.11.1.101     debian9-101   <none>           <none>
kube-apiserver-debian9-101             1/1     Running   0          3m55s   10.11.1.101     debian9-101   <none>           <none>
kube-controller-manager-debian9-101    1/1     Running   0          3m53s   10.11.1.101     debian9-101   <none>           <none>
kube-proxy-gc7r2                       1/1     Running   0          78s     10.11.1.103     debian9-103   <none>           <none>
kube-proxy-tfvmj                       1/1     Running   0          79s     10.11.1.102     debian9-102   <none>           <none>
kube-proxy-xf8pv                       1/1     Running   0          3m48s   10.11.1.101     debian9-101   <none>           <none>
kube-scheduler-debian9-101             1/1     Running   0          3m44s   10.11.1.101     debian9-101   <none>           <none>
kubernetes-dashboard-57df4db6b-gmmcx   1/1     Running   0          3m25s   192.168.192.4   debian9-101   <none>           <none>