Tuesday, February 23, 2021

Day 21 - Kubernetes - DaemonSet (DS) / Static Pod / Scheduler

k8s - DaemonSet (DS) / Static Pod / Scheduler (multi-scheduler) (2/23/2021)

1. Log in to the AWS console
2. Launch one more EC2 instance (4 nodes total - 1 master, 3 workers)
3. Pick the AMI:
   - My AMIs -> kubernetes_base_setup_image_my-vimal

Name:
Security group: allow all security rules

finish

a. Make one node the master.
The AMI already has the setup pre-configured;
the only thing you have to specify is which node is the master.

Follow the note: Quick setup of Kubernetes Cluster in AWS

Set up master
# kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem --node-name=master
A different CIDR number can also be used.
If you look at the command output, you see static pod manifests for the apiserver, controller manager, scheduler, etc.


# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config

Flannel has its own network..
It uses an overlay network.

# kc apply -f https:...flannel......


Join the worker node with the join command:
# kubeadm join <masterIP>:6443 --token ..... --node-name=...
The token is different on every cluster...

# kubeadm --help

join
# kubeadm join --help

# kubeadm join <MasterIP>:6443 --token

If you get errors:

run the flannel command again, or
generate a new token on the master:
# kubeadm token create --print-join-command

To reset a worker node:
# kubeadm reset

then join again...

Go to the master:
# kc get nodes
You should see the new node here.

Add another node:
log in to the server:
ec2-user@ip

# kubeadm join <masterIP>:<port> --token ... --node-name=worker1


To launch a pod:
the user sends a request to the apiserver -> the apiserver sends the request to the controller manager (CM) -> it controls the ReplicaSet.
The controller manager connects to the scheduler,
which decides which node to deploy to...

The scheduler checks some metrics:
- RAM/CPU
- Rank....
- more uptime, free resources

Based on rules and an internal algorithm, it decides which host to deploy the POD on.
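As a toy illustration of this filter-and-rank idea (this is NOT the real kube-scheduler code; the node names, resource numbers, and scoring formula are made up for the sketch):

```python
# Toy scheduler sketch: filter out nodes that can't fit the pod,
# then rank the rest by free resources and pick the best one.

def rank_nodes(nodes, cpu_req, mem_req):
    """Return nodes that fit the pod, best-scoring first."""
    fits = [n for n in nodes
            if n["free_cpu"] >= cpu_req and n["free_mem"] >= mem_req]
    # Score: prefer the node with the most free resources left over.
    return sorted(fits,
                  key=lambda n: n["free_cpu"] + n["free_mem"],
                  reverse=True)

nodes = [
    {"name": "worker1", "free_cpu": 2, "free_mem": 4},
    {"name": "worker2", "free_cpu": 4, "free_mem": 8},
    {"name": "worker3", "free_cpu": 1, "free_mem": 1},
]

# worker3 is filtered out (not enough memory); worker2 ranks highest.
best = rank_nodes(nodes, cpu_req=1, mem_req=2)[0]["name"]
```

The real scheduler does the same thing in two phases (filtering, then scoring) with many more rules.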

you can also create your own scheduler..

# kc get pods -n kube-system
The scheduler pod is running in kube-system.
# kc describe pods ...scheduler ....

Look for the image..
gcr - Google Container Registry
it is written in Go...

This is the default scheduler..



Or write your own in Go or Python...

Let's see: if the master node goes down, how do you create a pod?
At a worker node, try:
# systemctl status kubelet   (systemctl start kubelet if needed)
Get the kubelet config file and review it;
look for staticPodPath: /etc/kubernetes/manifests
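On a kubeadm cluster the kubelet config file is usually /var/lib/kubelet/config.yaml (the path can vary); the relevant fragment looks roughly like:

```yaml
# /var/lib/kubelet/config.yaml (typical kubeadm location)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
staticPodPath: /etc/kubernetes/manifests
```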

# kc get pods


# cd /etc/kubernetes/manifests

By default, any yaml file to create a POD that is copied here gets launched automatically.

# vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod1
  labels:
    app: mypod1

spec:
  containers:
  - name: webc
    image: httpd

If you remove the file, the pod is deleted. This is a static pod.
# kc edit pod mypod1
Review the file and the "Controlled By" info..


The master node is also like a worker node.

The master node has the controller manager
- the master node has static pods

When you run the kubeadm init command, behind the scenes it runs multiple static PODs.


# kc get pods -n kube-system

Try to remove one:
# kc delete pods kube-scheduler-master -n kube-system

Even if you delete it, it will come back up.
It is not managed through the apiserver; the kubelet recreates it from its manifest...
The master node has the config files at /etc/kubernetes/manifests; ls
If you remove a yaml file from here (rm kube-controller-manager.yaml),
the static pod is gone..

Pods whose names end with the node name (like -master) are static pods.

Without a scheduler, the way to schedule a pod is to go manually to the worker node and launch it there.

# Create the pod yaml on the worker node under the manifests directory.. it will launch automatically..

# vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod1
  labels:
    app: mypod1

spec:
  containers:
  - name: webc
    image: httpd


kubectl always connects to the apiserver.
Who decides where to launch the pod?
We don't have a scheduler running on the master node.

# kc create deploy myd --image=httpd

# kc get deploy
# kc get pods

status: pending
# kc get pod
# kc get pods -n kube-system

You need a scheduler.
# kc describe deploy myd

To have a scheduler again, copy the manifest back:
# cp /tmp/kube-scheduler.yaml /etc/kubernetes/manifests/
# kc get pods -n kube-system

It may take a while to come up.



You can also create it manually:
# kc create -f kube-scheduler.yaml
# kc get pods -n kube-system
# kc delete pods kube-scheduler -n kube-system

Now, all your pending-state pods are in running state.
# kc get pods -o wide
# kc get pods -n kube-system
# kc describe pods <POD>

vi scheduler.yaml

review

-> What if you want to launch a new pod?
The scheduler may launch it on node 1, 2, or 3,
depending on RANK, free resources...

Google for "kubernetes custom scheduler"..


Copy your scheduler code..

# cp kube-scheduler.yaml /root/my-scheduler.yaml

# vi /root/my-scheduler.yaml

Change the name of the scheduler:
name: my-changed-scheduler
namespace: kube-system
image: specify the image..

Why do you need multiple schedulers?
One scheduler does all the pod launching and decision making.

If the node it runs on goes down, that's bad.
So another scheduler should be available...

Which scheduler gets chosen?

In the config file, make one the leader..
one active / one passive,
or
set in both files:

leader-elect=true/false
scheduler-name=myscheduler
leader-elect=false/true


Google "multiple scheduler config"...
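Putting those flags together, the container spec of the copied scheduler manifest could look roughly like this (a sketch only; the image tag and names are placeholders, and note that newer Kubernetes versions dropped --scheduler-name in favor of a scheduler config file):

```yaml
# Sketch of the container spec in /root/my-scheduler.yaml
# (names and image tag are placeholders)
spec:
  containers:
  - name: my-scheduler
    image: k8s.gcr.io/kube-scheduler:v1.20.0   # use your cluster's version
    command:
    - kube-scheduler
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=false          # passive; the default scheduler leads
    - --scheduler-name=my-scheduler
```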

Once you make the change, copy it over to the manifests dir.



# cp my-scheduler.yaml /root/

# cp /root/kube-scheduler.yaml /etc/kubernetes/manifests/

# kc create -f my-scheduler.yaml
This is a new POD and it acts as a scheduler.

We have two schedulers... the default one and the one you created..

Yours is not the leader (leader-elect=false, passive); it just keeps monitoring..
# vi kube-scheduler.yaml
leader-elect=true -> primary/default scheduler

# kc run p1 --image=httpd

pod created
# kc get pods
# kc get pods -o wide
# kc describe pod p1

Review: who launched it -> from where and on which node it was launched..


# kc get pods -n kube-system
# mv kube-scheduler.yaml /tmp
Now your scheduler should become the default.
# kc get pods
# kc run p2 --image=httpd
# kc get pods

It is in pending state..
our scheduler pod has leader-elect=false...
You have to make changes in the pod definition file.

Google "pod definition file scheduler name";
look for multiple schedulers...

spec:
  schedulerName


Launch a pod:
# kc run p3 --image=httpd --dry-run -o yaml > p3.yaml
# vi p3.yaml
Under spec:
spec:
  schedulerName: my-scheduler

# kc apply -f p3.yaml
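The edited p3.yaml would then look roughly like this (trimmed sketch; the dry-run output contains a few more generated fields):

```yaml
# p3.yaml - schedulerName tells Kubernetes which scheduler places this pod
apiVersion: v1
kind: Pod
metadata:
  name: p3
spec:
  schedulerName: my-scheduler   # the custom scheduler created above
  containers:
  - name: p3
    image: httpd
```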


# kc delete pod my-scheduler -n kube-system
# mv my-scheduler.yaml /tmp/
# vi my-scheduler.yaml
Change to leader-elect=true,
since we don't have an HA setup for the master.

# mv my-scheduler.yaml /etc/kubernetes/manifests/
# kc get pods -n kube-system
# kc apply -f my-scheduler.yaml
# kc get pods -n kube-system


For some reason you have to launch it manually here, but it should have run automatically.

# kc get pods
Now you can see p3 is in running state.
# kc describe pod p3

If you are using one master node, use leader-elect=true;
as we saw, it does not work otherwise...

Copy the kube-scheduler.yaml file back under manifests too:
# kc apply -f kube-scheduler.yaml
# kc get pods -n kube-system




Daemonset
-----------
When you launch a pod, you contact the apiserver -> scheduler -> it decides which node to launch on.

Let's say you want to launch 3 replicas on 3 nodes.
There is a possibility all pods land on one node, or 2 on one and 1 on another, or any other combination.
A DaemonSet guarantees one copy on every node.


The worker node has a monitoring agent that keeps monitoring..

kube-proxy is a program that always runs on all the worker nodes..
# kc get pods -n kube-system

You will have one instance per node:
3 nodes, 3 proxies.

You also see flannel created on all nodes.
- flannel is the one that controls the network.


DS is the concept:
one pod per node, distributed across all the nodes.

# kc get rs
proxy and flannel are not running under a ReplicaSet.
# kc get ds -n kube-system

you see flannel and proxy..


You can use a DaemonSet yourself:

# kc create deploy myds --image=httpd --dry-run -o yaml > myds.yaml
# vi myds.yaml

kind: DaemonSet

Remove the replicas and strategy parts:
spec:
  selector:
....
remove the unneeded options...
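After those edits, myds.yaml would look roughly like this (the label names are whatever the dry-run generated; app=myds is assumed here for the sketch):

```yaml
# myds.yaml - kind changed to DaemonSet; replicas and strategy removed
# (a DaemonSet has neither)
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: myds
spec:
  selector:
    matchLabels:
      app: myds
  template:
    metadata:
      labels:
        app: myds
    spec:
      containers:
      - name: httpd
        image: httpd
```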


# kc create -f myds.yaml
# kc get ds
It creates two pods; the master node has a taint and won't allow one there.
# kc get ds
# kc get pods

# kc get pods -o wide

Still in pending state..
no default scheduler is running.
# vi /etc/kubernetes/manifests/kube-scheduler.yaml
# mv my-scheduler.yaml /tmp/
# kc get pods -n kube-system
# kc delete pods kube-scheduler -n kube-system

# If it does not automatically launch, clean up the POD.

Finally you can see it launched:
# kc get pods
# kc describe pods myds-pods

Let's say you want to add a new host/node to the cluster
to scale your cluster further..

As soon as you add the new node, the DaemonSet automatically kicks in.

At the master, run this to get a token:
# kubeadm token create --print-join-command

On the new host:
# kubeadm join <masterIP>:<port> --token ..... --discovery-token-ca-cert-hash ...


# kc get nodes
# kc get ds
# kc get pods -n kube-system


custom scheduler..



