k8s - Node Selector, Node Affinity, Taint/Toleration - Day20
Today's topics
- Node Selector
- Node Affinity
- Taint/Toleration
We will continue from yesterday's session.
We set up a user login using sadmin.kubeconfig.
With it, we can log in to the k8s cluster and take full control of it.
- This kubeconfig is bound to the cluster-admin role -> a namespace/cluster role that gives control of the cluster.
You can copy this file to any other location and use it for authentication.
So, let's copy this file from your PC to the Linux (RedHat) system.
Use scp (WinSCP) or just open the file and copy the content over.
1. Log in to your Linux system and copy the file
Copy it over to /root
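Note: the "kc" used throughout these notes is assumed to be a shell alias for kubectl; if you don't have it, you can define it like this (purely for convenience):
# alias kc=kubectl
# echo "alias kc=kubectl" >> ~/.bashrc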
# kc version
# kc get pods
In both cases, it fails because kubectl can't find a kubeconfig in the default location. Let's try passing it explicitly:
# kc get pods --kubeconfig sadmin.kubeconfig
So when you run any command, you have to pass this option every time.
If you want to avoid that, put the config file under ~/.kube:
# mkdir ~/.kube; cp sadmin.kubeconfig ~/.kube/config
Note: Please make a backup of your existing config file first
if you think it is important..
# kc get pods
Now it returns the expected output.
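As an alternative to copying the file into ~/.kube/config, you could point kubectl at it with the KUBECONFIG environment variable (a minimal sketch; the path below is just an example):
# export KUBECONFIG=/root/sadmin.kubeconfig
# kc get pods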
Remove all resources in the default namespace:
# kc delete all --all
# kc get ns   # check to see if you have any other namespaces
If you do, remove those namespaces too if you are not planning to use them.
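For example, to remove an unused namespace (replace <ns-name> with the actual namespace name):
# kc delete ns <ns-name>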
Pods are normally launched on worker nodes.
To launch a pod on a particular node, that node needs the kubelet program.
This program is available on both the worker and the master nodes.
But if you try to run a pod on the master node, it does not get launched there.
The master node is configured with the kubeadm command.
During that configuration, the master is tagged in such a way that if someone tries to launch a pod on it,
it won't be allowed. Creating pods on the master is blocked.
This tag is called a Taint.
Let's get some node details.
# kc get nodes
# kc describe nodes <master-node>
Look at the output:
- you see the IP address of the master
- Taints: NoSchedule
(the possible taint effects are NoSchedule, PreferNoSchedule and NoExecute)
The scheduler is the component that decides where to launch a POD, i.e. on which node it gets scheduled.
We see the master is tainted, so the master will deny the request if the scheduler tries to schedule a pod there.
The scheduler then contacts the other nodes; if a node is not tainted, the pod will be created on it.
# kc describe node <node1> | grep -i taint
Taints:             <none>
Looking at the output, we can say node1 is a good candidate for creating a POD.
We will see how to untaint the master node later.
How do we taint a worker node?
Let's go ahead and launch a pod.
# mkdir kws; cd kws
To see the details of a resource type, you can get its complete documentation:
# kc explain pod
It's like a help file.
It shows what you need; review the keys. It gives you a link for more info as well.
# kc explain pod --recursive
This shows all possible keywords of a pod file in full detail.
You don't need all of these, but it's good to review them.
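You can also drill into a specific field instead of dumping everything, which is handy for today's topics, for example:
# kc explain pod.spec.nodeSelector
# kc explain pod.spec.tolerations
# kc explain pod.spec.affinity.nodeAffinity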
- We can do lots of things with code..
You can copy and paste the syntax to create a pod.yaml file.
Initially, try it this way:
# kc run mypod --image=vm13/apache-webserver-php
This will launch the pod, but your requirement is to create a yaml file. Use:
# kc run mypod --image=vm13/apache-webserver-php --dry-run=client
A dry run is for testing something: does it fail, is there any syntax error?
It's a quick way to test, but it does not actually create anything.
To convert it into yaml format:
# kc run mypod --image=vm13/apache-webserver-php --dry-run=client -o yaml
You will see the complete yaml file:
[root@master ~]# kc run mypod --image=vm13/apache-webserver-php --dry-run=client -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: mypod
  name: mypod
spec:
  containers:
  - image: vm13/apache-webserver-php
    name: mypod
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@master ~]#
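Instead of copy-pasting the output, you can also redirect it straight into a file and edit it from there, for example:
# kc run mypod --image=vm13/apache-webserver-php --dry-run=client -o yaml > mypod.yaml
# vi mypod.yaml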
Now, modify it down to:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: mypod
  name: mypod
spec:
  containers:
  - image: vm13/apache-webserver-php
    name: mypod
When you apply this file, it will launch the POD.
# kc get po -n kube-system
We have a scheduler pod running here; it takes the request and decides where to launch the POD.
It may send the request to a node where a Taint is enabled; that node may deny it, and the scheduler will
then schedule the pod on another node.
# kc get pods
# kc get nodes
# kc get po -n kube-system
We may have one node running in the US and other nodes in Singapore or Japan.
The US might have a policy under which you can't launch the pod there; maybe the pod is only
allowed to launch in Singapore and not anywhere else in the world.
In this case, you may have to pick the node yourself rather than leaving it to the scheduler...
You may have a pod that needs lots of resources, say 1 TB of RAM, and only one node has that much available.
So the pod must be launched on that particular node, say node2.
There may also be situations where I/O is the highest priority.
That is why you have to give nodes a unique name or tag, so they are easy to manage and select.
When a node joins the cluster, labels are attached to it.
# kc get nodes --show-labels
These labels are created by the system automatically, but we can create our own as well,
so a node can be tagged as web server, machine learning, AI, or based on function or performance.
Then we can tell the scheduler:
go and search for the tag US or AI, and launch the POD there.
Let's say we tag a node.
Setting a label on a node: a label comes as a key=value pair.
# kc label node ip-172....internal region=US
# kc get nodes --show-labels
# kc label node ip-192...internal storage=bigdata
# kc get nodes --show-labels
Now we use these labels with a selector.
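For example, you can use a label as a selector on the command line to list the matching nodes:
# kc get nodes -l region=US
# kc get nodes -l storage=bigdata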
Now, let's write the pod file.
We can add nodeSelector under spec and define the region.
# cat mypod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: mypod
  name: mypod
spec:
  nodeSelector:
    region: US
  containers:
  - image: vimal13/apache-webserver-php
    name: mypod
You can use a ReplicaSet or a Deployment as well; a sketch follows below.
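A minimal Deployment sketch that uses the same nodeSelector might look like this (the name mydeploy is just an example):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeploy              # example name, pick your own
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mydeploy
  template:
    metadata:
      labels:
        app: mydeploy
    spec:
      nodeSelector:
        region: US            # schedule only on nodes labeled region=US
      containers:
      - image: vimal13/apache-webserver-php
        name: mydeploy
Back to our plain pod: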
# kc apply -f mypod.yaml
pod/mypod created
[root@master kws]# kc get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
master Ready master 211d v1.19.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master,kubernetes.io/os=linux,node-role.kubernetes.io/master=
[root@master kws]#
This is how you manually pin a pod to a particular node.
nodeSelector has some limitations,
which is why there is a concept called node affinity.
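As a quick preview of node affinity (covered properly later), the same region=US requirement could be written like this; a minimal sketch, the pod name is just an example:
apiVersion: v1
kind: Pod
metadata:
  name: mypod-affinity        # example name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: region
            operator: In
            values:
            - US
  containers:
  - image: vimal13/apache-webserver-php
    name: mypod-affinity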
Taint and Toleration
--------------------
Let's say you have multiple worker nodes
- you have a multi-node cluster
Our env:
- workernode1 - 100 GB RAM, 1 GB bandwidth
- workernode2
- workernode3
On the master, we say that a certain pod should be launched only on workernode1.
How do we do this?
Worker node2 doesn't have that many resources; the pod needs more RAM/CPU than node2 has, and node2 won't get more
in the near future.
We may have namespaces for multiple teams, say team1, team2, team3.
Let's say team1 launches a pod in ns1.
The scheduler decides to launch this pod on w1,
and keeps sending requests to create pods on w1.
We don't want pod2 and pod3 to be launched on the w1 node.
Taint: NoSchedule
There are three different taint effects: NoSchedule, PreferNoSchedule and NoExecute.
# kc describe node ip-192..... | grep -i taint
Taints:             <none>
This node is not tainted.
Say we think this node is very important and we don't want any new pods deployed on it; we can set a taint:
# kc taint node <node1 or IP> <key>=<value>:<effect>
for example:
# kc taint node <node1> mytype=veryIMP:NoSchedule
Here, mytype is just a key name chosen for reference.
# kc describe node node1 | grep -i taint
Taints:             mytype=veryIMP:NoSchedule
Let's also give it a label:
# kc label node <node1 or IP> region=US
So we have the tag region=US.
# kc describe node node1 | grep Taint
Let's say w1 has the tag region=US and the taint set above.
# cat mypod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: mypod
  name: mypod
spec:
  nodeSelector:
    region: US
  containers:
  - image: vimal13/apache-webserver-php
    name: mypod
What happens in this case, now that the node also has a taint?
# kc apply -f mypod.yaml
You see the pod is in Pending state:
[root@master kws]# kc get pods
NAME    READY   STATUS    RESTARTS   AGE
mypod   0/1     Pending   0          13m
Use describe to troubleshoot:
# kc describe pod mypod
FailedScheduling ...
the node has a taint
Look at the error message carefully.
Now, go to the pod config and change the node selector:
# kc edit pods mypod
Change the region to, say, India:
  region: IN
We get an error:
you can't change some fields of a running pod on the fly; for that kind of change you would use a Deployment.
So remove this pod and launch it again:
# kc delete pod mypod
Edit the yaml file and apply it again:
# kc apply -f mypod.yaml
# cat mypod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: myapp1
  name: myapp1
spec:
  nodeSelector:
    region: US
  containers:
  - image: vimal13/apache-webserver-php
    name: myapp1
# kc apply -f mypod.yaml
# kc get pods
Still pending.
The pod is trying to launch,
but a tainted node only allows pods that are explicitly meant to be launched there.
You have to tell the pod, when launching it, that it is going to run on that tainted node
and that it can tolerate the taint. So you have to make your pod tolerant.
This concept is called toleration.
Intolerant means the node won't allow the pod and keeps it away (or removes it).
# kc get node
# kc describe pod myapp1
Tolerations: look at the values on this line.
You have to change the pod from intolerant to tolerant of the node's taint.
You can edit the pod:
# kc edit pods myapp1
Go to the tolerations: section.
effect: NoExecute
Change it to NoSchedule (so it matches the taint we set on the node).
So you end up with something like:
  tolerations:
  - effect: NoSchedule
    key: mytype
    value: veryIMP
(the key and value are the ones we passed when we tainted the node)
You have now set a toleration on the pod.
These are some of the keywords you can change.
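Putting it together, a pod manifest with a toleration matching the mytype=veryIMP:NoSchedule taint could look like this (a sketch based on the example above):
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: myapp1
  name: myapp1
spec:
  nodeSelector:
    region: US
  tolerations:
  - key: "mytype"
    operator: "Equal"
    value: "veryIMP"
    effect: "NoSchedule"
  containers:
  - image: vimal13/apache-webserver-php
    name: myapp1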
# kc get pods -o wide
Now you can see it running...
You can use the same concept on the master node as well.
# kc get nodes
# kc describe node master | grep -i taint
Taints:             node-role.kubernetes.io/master:NoSchedule
On a kubeadm cluster this is typically the default master taint. To remove (untaint) it, append "-" to the taint:
# kc taint node master node-role.kubernetes.io/master:NoSchedule-
# kc describe node master | grep Taint
# kc get pods -n kube-system
# kc create deployment myd --image=httpd
# kc get nodes -o wide
Pods may now be launched on the master node as well.
# kc scale deployment myd --replicas=10
# kc get pods -o wide
# kc get pods -o wide | grep master
The scheduler can now schedule pods anywhere, including the master.
# kc get pods -o wide
# kc describe node master | less
# kc describe node node1 | less
# kc describe node
# kc taint nodes node1 key1=value1:NoSchedule
# kc taint nodes node1 key1=value1:NoExecute
# kc taint nodes node1 key2=value2:NoSchedule
Taints and tolerations are a way to control how node resources are used.
Taint effect types:
- NoSchedule
- NoExecute
- PreferNoSchedule
we have three nodes
# kc describe node master | grep -i taint
Taints: NoSchedule
# kc describe node w1 | grep -i taint
Taints: <none>
Say we have 10 pods running and we taint the node; what happens to the pods that are already running?
# kc taint node w1 a=b:NoSchedule
Now w1 is tainted.
This means that if we launch any new pod, it will not launch here.
But what happens to the older PODs?
# kc describe node w1 | grep -i Taint
The older pods are still running; no change is made to them.
The node no longer accepts scheduling: launching new pods on it is not allowed.
- Already-running PODs keep running, but no new ones can start there.
# kc describe node w1 | less
# cat mypod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: myapp2
  name: myapp2
spec:
  nodeSelector:
    region: US
  containers:
  - image: vimal13/apache-webserver-php
    name: myapp2
# kc apply -f mypod.yaml
# kc describe pod myapp2
NoExecute
# kc taint node w2 <key>=<value>:<effect>
Google the taint effect types and read up on them:
- NoSchedule
- NoExecute
- PreferNoSchedule
# kc taint node w1 a=b:NoExecute
All existing pods on w1 are terminated (evicted)..
# kc describe node w1 | grep -i taint
# kc get pods -o wide
# kc get pods
Only pods that have a matching toleration are allowed to stay.
- All old pods (without a toleration) will be terminated.
- If the pods were created using a Deployment, its ReplicaSet will relaunch them on another node.
# kc get pods
Tolerations are set on pods.
Taints are set on nodes.
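A related option for the NoExecute effect: a toleration can carry tolerationSeconds, which lets an already-running pod stay on a freshly tainted node for that long before it is evicted. A minimal sketch using the a=b taint from above:
tolerations:
- key: "a"
  operator: "Equal"
  value: "b"
  effect: "NoExecute"
  tolerationSeconds: 300   # evicted 300 seconds after the taint is applied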
next class
Node Affinity
----------------------