Thursday, April 29, 2021

Kubectl day 43 - CRI-O, etcd/HPA

 Multi-Node

Today's topics: CRI-O, etcd/HPA

AWS -> Ubuntu instance

t2.medium (2 vCPU / 4 GB RAM)

instances: 3 -> 1 master, 2 worker nodes

tag
name: ubuntu_kube_CRI-O


Attach or create a private key.

How to configure multi-node cluster?

we have a master node and attach worker nodes to it
- by default we would have the Docker engine; we will replace it with CRI-O




On your PC, launch your minikube
> minikube start



Get the document, and follow it.

Copy the IP of your cloud instance, and log in.

- configure the repo (apt)
 - CRI-O software setup
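At the time, the CRI-O install guide pointed at the openSUSE Kubic repositories. The setup looked roughly like this (the OS and VERSION values are my assumption for Ubuntu 20.04 / CRI-O 1.20 - take the exact lines from the class document):

# export OS=xUbuntu_20.04
# export VERSION=1.20
# echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
# echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
# curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | apt-key add -
# curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/Release.key | apt-key add -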

$ apt update
refreshes the package index from the repos

$ apt install -qq -y cri-o cri-o-runc cri-tools

Go to the CRI-O site;
you get packages for different OS flavors.


# systemctl daemon-reload
# systemctl enable --now crio
# systemctl status crio    # our container engine is started

instead of docker, we are using crio here.

The commands are almost the same.

Type cri and press Tab to see the available commands.

# crictl images
# crictl ps
# crictl pull httpd (pulls image)
# crictl images

Technically, the commands are very similar; if you know Docker, you can use crictl.

We don't use CRI-O directly; we ask Kubernetes to manage it.


Set up another repo for k8s where the Kubernetes software lives:
kubelet, kubeadm (and kubectl).
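The standard Kubernetes apt repo setup at the time was the Google one (a sketch - verify against the document):

# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
# apt update
# apt install -qq -y kubelet kubeadm kubectl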

Start the master:
# kubeadm init --apiserver-advertise-address=<master_IP> --pod-network-cidr=192.168.0.0/16

You may have to enable IP forwarding and related kernel settings.

Set up the kernel driver to use (overlay) - follow the document.
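The usual prerequisite block from the kubeadm/CRI-O docs looks like this (the sysctl file name follows the docs' convention - an assumption here):

# modprobe overlay
# modprobe br_netfilter
# cat <<EOF > /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# sysctl --system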



Restart the crio service:
# systemctl daemon-reload
# systemctl enable --now crio

Disable the firewall:
# systemctl disable --now ufw

Now, rerun the kubeadm init --api.... command from above.

your master is ready

Keep the output safe - it contains the join command for the workers.
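The same output also shows the standard steps to point kubectl at the new cluster:

# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config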
2. Now, go to another cloud system (worker node 1)
and log in.

You can put all the commands together in a shell script and run it, or convert them into an Ansible playbook and execute that (see the consolidated script after the worker steps below).


Set up the repo as before so the CRI-O software can be downloaded.

apt install -qq -y cri-o cri-o-runc

systemctl daemon-reload
systemctl enable --now crio

Set up the repo for the Kubernetes software as well.

modprobe overlay

Turn swap off (swapoff -a) and disable the firewall (systemctl disable --now ufw).

kubeadm join ...... (on a worker you run the join command from the master's init output; see below)
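As mentioned above, the worker prep can be consolidated into one script. A sketch built only from the steps above (the repo setup lines are omitted):

#!/bin/bash
# worker-setup.sh - prep a worker node
apt update
apt install -qq -y cri-o cri-o-runc
systemctl daemon-reload
systemctl enable --now crio
modprobe overlay
swapoff -a                     # turn swap off
systemctl disable --now ufw    # disable the firewall
# finally, paste the kubeadm join command from the master here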


@master
kc get pods      # kc = alias for kubectl
# kc get nodes

You see only one master node.
# kc describe nodes ip-172....

Look for the container runtime: CRI-O.

You need an overlay network (e.g., Flannel).

We will use Calico today.

follow the doc ....
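For Calico this is typically a single apply; the manifest URL below is from the Calico docs of that period (an assumption - confirm in the doc):

# kc apply -f https://docs.projectcalico.org/manifests/calico.yaml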


Get the join command from the master, then go to the worker node and paste it.


@worker node
kubeadm join <ip:port> --token .....
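The join command from the init output has this general shape (token and hash come from your own master's output):

# kubeadm join <master_IP>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>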

go to @master and run
# kc get nodes
you see master and worker nodes


# ps aux | grep kubelet
You will see in the output that the container runtime engine is CRI-O.


# crictl ps        # you see lots of containers running
# crictl images    # you see lots of images

# kc create deployment myd --image=httpd
# kc get pods

Everything is the same, but the container is managed by CRI-O.

# kc get pods -o wide
# kc get


Go to the worker node.

# crictl ps
# crictl images

# crictl exec ....
# crictl exec -it <container_id> sh


==============================

etcd/HPA

Service Mesh

kube
 - pod
 - secret
 - quota/limit

Everything is stored in a database.
The name of the database is etcd.

This database runs inside a pod.

> kc get pods -n kube-system

you see etcd pod

etcd is a third-party tool; you can install it on your own too.
- it's a simple database with key=value format

everything is stored in etcd

If you plan to migrate or upgrade, you first back up etcd and then upgrade.
Once the upgrade is done, you restore it.

This is one way to do it.
How to create a backup and restore it:

# kc get pods
# kc get pods -o wide
# kc get nodes


Let's go to etcd.

> kc -n kube-system exec -it etcd-minikube -- sh

you are inside etcd pod

# etcdctl
you will see help

Kubernetes creates etcd with some security (TLS certificates).

Describe the pod:

> kc -n kube-system describe pod etcd-minikube


Go under the etcd section;
you see listen-client-urls ..

You will also see the certificates.

You need:
- the private key
- the server cert
- ca.crt

/var/lib/minikube/

Go inside the pod:
> kc -n kube-system exec -it etcd-minikube -- sh

# cd /var/lib
there is no ls command

Put some data in the database:

# etcdctl put xyz Sam

(this would add the key/value to the database)

It will fail, because this database is secured - you need to authenticate with the certificates.

follow etcd and hpa document.

The flags look like: --cacert=<ca.crt> --cert=<server.crt> --key=<server.key>
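Inside the minikube etcd pod the certs typically live under /var/lib/minikube/certs/etcd/ (an assumed path based on the /var/lib/minikube/ directory noted above - confirm it in the pod description), so the authenticated put looks roughly like:

# etcdctl --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/server.crt --key=/var/lib/minikube/certs/etcd/server.key put xyz Sam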


Take a backup:
# etcdctl <cert-flags> snapshot save /var/tmp/etcd.bk.db


Snapshot status:
# etcdctl <cert-flags> snapshot status /var/tmp/etcd.bk.db
(you have to specify the cert and key here as well)


To restore:
# etcdctl <cert-flags> snapshot restore /var/tmp/etcd.bk.db --data-dir=/var/tmp/etcd.db

it will restore to /var/tmp/etcd.db

Kubernetes is a tool to manage pods, the database, and more.

for migration and upgrade, you make a backup of etcd database

-------------------------------------
HPA -
---

Scaling ->
We have one pod running.
- Load increases: we add a new pod via the ReplicaSet managed by the Deployment - it scales out.
- No load: scale down. (A manual scaling example follows below.)
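For reference, the manual version is a standard kubectl one-liner (<name> is whatever your deployment is called):

> kc scale deployment <name> --replicas=3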

We have to keep monitoring manually, because when traffic comes in we want enough capacity to handle the load.

We don't have a metrics server yet - that's why we have to keep checking ourselves.


> kc get pods -n kube-system

you will see 'metrics-server'


> kc top pods

Our metrics server keeps monitoring our resources.

In Kubernetes, these objects are called resources.

The metrics server keeps checking the pod; if CPU load goes above a threshold (say 90%), something triggers the creation of a new pod.

If CPU stays over 90%, it keeps launching new pods.

But note: you have to cap the maximum number of replicas, say 10 max, otherwise you may run into problems.



We have one of the resources, HPA (HorizontalPodAutoscaler), to do this job.

clean your environment.
# kc get hpa
# kc delete hpa --all
# kc delete all --all

(Google for the horizontal pod autoscaler walkthrough - get the link at kubernetes.io)
auto-scaling

follow the doc

> notepad h1.yaml
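A sketch of what h1.yaml contains, based on the php-apache example from the kubernetes.io walkthrough (image name and resource values are from that doc - verify against it):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-apache
spec:
  selector:
    matchLabels:
      run: php-apache
  replicas: 1
  template:
    metadata:
      labels:
        run: php-apache
    spec:
      containers:
      - name: php-apache
        image: k8s.gcr.io/hpa-example
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 500m
          requests:
            cpu: 200m
---
apiVersion: v1
kind: Service
metadata:
  name: php-apache
  labels:
    run: php-apache
spec:
  ports:
  - port: 80
  selector:
    run: php-apache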

> kc create -f h1.yaml

> kc get pods

Create the horizontal pod autoscaler:

> kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
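The equivalent declarative form (autoscaling/v1, matching the walkthrough) is roughly:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50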

The HPA has been configured.
> kc get hpa

The target shows <unknown> at first - the metrics server hasn't reported yet.

You need to have the metrics server already configured.
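On minikube the metrics server ships as an addon; if kc top pods errors out, it can usually be enabled with:

> minikube addons enable metrics-server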
> kc top pods

Your pod has 0% utilization.
When the load increases, it will launch a new pod.

To increase the load,
go to the 'increase load' section of the doc.
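The walkthrough's load generator is a throwaway busybox pod looping wget against the php-apache service - roughly:

> kubectl run -it --rm load-generator --image=busybox -- /bin/sh
/ # while true; do wget -q -O- http://php-apache; done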

> kc get pods
> kc get hpa
you see new pod

> kc top pods

> kc get hpa
Look at the value under TARGETS.

Stop the load, and gradually the pods will start disappearing.

============
review

RBAC -> RoleBinding

high availability cluster

node affinity rule

================================



