Wednesday, February 3, 2021

Kubernetes - namespace - kube-system - CNI - configure another node on AWS as a worker node - Day 14

Class note 

Kubernetes - namespace - kube-system - CNI - configure another node on AWS as a worker node

CNI - Container Network Interface


Bridge network - a single network on one node.
Overlay - two nodes, each with its own personal LAN (a bridge); connecting the bridges across nodes is called an overlay. It is a logical network.
It's a concept - a tunnel behind the scenes, sending packets from one bridge to another.



Namespace

overlay / VxLAN / Flannel / CoreDNS / NS / CNI

Kube-proxy



Yesterday, we deployed Kubernetes on AWS:

Master  - IP
Worker1 - IP
Worker2 - IP

Worker --> Join ---> Master

Storage clusters - Gluster, Hadoop; computing clusters...
Every cluster has a use case to solve.

Our Kubernetes cluster's job is to manage nodes.

User/admin connects to --> Master server
- using their own workspace
- using kubectl to manage

Master
- we configured kubectl

1. Log in to your master server (kc is used as an alias for kubectl throughout these notes):
# kc get pods
# kc get nodes
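
A minimal sketch of setting up that alias, assuming bash:

# echo "alias kc='kubectl'" >> ~/.bashrc
# source ~/.bashrc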

Duty of a worker node:
- The ones who work for us
- They launch containers on their resources (RAM/CPU/OS)

# kc get pods -o wide
will show you the IP, node, and age (uptime) of each pod.

Remove all pod instances:
# kc delete all --all


Configure another node
- Create a server
- Get the IP and log in

Now, join this worker node to the master node.

# yum install docker -y
# add the Kubernetes yum repo
# install kubectl kubelet kubeadm
follow the note (a sketch of these steps is below)
# docker info
# change Docker's cgroup driver from cgroupfs to systemd (cgroups are used to restrict resources)

# systemctl enable kubelet
# systemctl restart docker
# yum install iproute-tc

Update the sysctl settings.
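
A hedged sketch of these prep steps for a CentOS/RHEL-style node (the repo URL is the upstream one in common use at the time of these notes; adjust for your distro and version):

# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
# yum install -y kubelet kubeadm kubectl
# cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# sysctl --system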


Run the kubeadm join command on worker node 3.

If you don't have a token:
# kubeadm token create --print-join-command
to get the join command,
and run it on the worker node.
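
The printed join command looks roughly like this (the IP, token, and hash here are placeholders):

# kubeadm join 172.31.0.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:<hash>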
If it succeeds, go to the master and run:

@master
# kubectl get nodes
You will see the new node available (it may take a minute or so).


When you join, the worker node downloads some Docker images:

# docker images
# docker ps


namespace
------------
What is it?

We create one single Kubernetes cluster with hundreds of worker nodes.
- We have different teams. They have their own requirements and images.
- They want to run pods.
- We could create a hundred different k8s clusters, but the teams may not fully use them.
- So, we create one single cluster and give them access.
- This way we are optimizing our hardware resources.
- We don't want to waste our hardware resources.

Say multiple teams want to use our cluster.
They want to run pods and databases securely.

Here, the biggest challenge is security:
- The same cluster is used by different teams, so they may see each other's pods.
- What k8s did is give each team a separate room (like a room in a hotel).
- Say room 1 for team1, room 2 for team2, and so on.
- How big is the room?
  - The more you pay, the bigger the room you get.
What does paying more mean?
- You are going to put data and pods there --> RAM/CPU - how much you need (Limits and Quotas; see the sketch after this list)
- Provide limited resources to a team, e.g. allocate 10 GB even if the team is only using 3 GB.
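
A minimal sketch of such a quota (namespace name and numbers are hypothetical; the namespace must already exist):

# kc create -f - <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team1-quota
  namespace: team1
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 10Gi
    pods: "20"
EOF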

How do you manage it?

One family is one tenant...

Because of this room concept, each team thinks it's a separate room - "this is my own world."
- Multiple teams work in isolation, separation.
- This is called a multi-tenant facility.


For every account, a separate room (tenant) is created.

In Kubernetes terms, this room is called a namespace.

Pod, Deployment, RS, RC

put all of these resources inside the namespace.

Run the command below
# kc get pods
No resources found in default namespace.

By default, there is a namespace (a room) called default already created.

# kc get namespace
You see some output; these are rooms that Kubernetes uses for its own purposes.


As a user, we always put our resources under default.

But we can create a separate room (namespace) as well.

Without namespaces, it's hard to implement security.


# kubectl create namespace myns
# kc get namespace
# kc get pods

By default it always looks for resources under default:
# kc get pods --namespace myns
You see no resources yet.

# kc get pods -n myns

To create a deployment in a namespace other than default:
# kc create deployment myd1 --image=httpd -n myns
# kc get pods
# kc get pods -n myns

This way you can create and use different namespaces.
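
If you don't want to pass -n every time, you can point the current kubectl context at your namespace (standard kubectl, using the myns namespace from above):

# kc config set-context --current --namespace=myns
# kc get pods
Now this looks in myns by default.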


Delete ns
# kc delete namespace myns

If you remove the room, all data inside will be lost, destroyed.

-------------------------------------------------------

Kubernetes uses the kube-system namespace:

# kc get pods -n kube-system

In this room - Kubernetes' internal room - there are lots of pods running,

such as coredns, etcd, kube-apiserver, kube-controller-manager, kube-flannel, kube-proxy, kube-scheduler... and more.

Inside the Kubernetes master server, lots of services are running...

Look at the kube-system output:
you see coredns    READY 0/1

Describe the details of the coredns pod:

# kc describe pods coredns-... -n kube-system

You see the pod is not running.
You need this DNS facility in a multi-node setup.

# kc get pods

Launch a pod using a deployment:
# kc create deploy myd --image=vimal13/apache-webserver-php

- We are asking it to deploy one pod.
- The pod is launched on a worker node.
- Who decides where to launch it?
- The scheduler decides behind the scenes where to launch.

Note: the scheduler tells kube-controller-manager, which contacts node1 (say) and launches the pod.

Let's, for instance, focus on Docker:
- In the Docker world, the container engine is Docker Engine.
- From an image, you run the container.
- A container is like an operating system, and we install an app on it.
- Why an app? - Someone from outside connects and uses the app.
- The user comes through the network and needs the IP address of the app to connect.
- So, we need an Ethernet card on the system (eth0/1 ...).
    [ OSI layer - layer 2 concept ]
A real physical server has a real NIC card, but a VM has a virtual NIC card.

# docker ps

It is the responsibility of the Docker engine (or container runtime engine) to assign one NIC card and an IP address.
The NIC is not real but virtual. That is called a VETH - virtual Ethernet card.

If you launch a new container, it assigns a new VETH and IP address.

Run on the base OS (node):
# ifconfig -a
Review the output;
in our case, we have one Ethernet card.

# docker run -it --name myos1 ubuntu:14.04
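
Once a container is up, you can list the virtual NICs on the base OS (assuming the iproute2 tools are installed; interface names will differ):

# ip link show type veth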


In one network boundary, or in one LAN, you use a switch
so that you can connect all the physical systems.

But in our case, we need software that creates a switch-like environment,
which is called a bridge. Docker helps us create a switch - in our case, a bridge.

In Linux there is software called linux-bridge, with which you can create routes and switches.

Install the Linux bridge utilities (the brctl tool comes from the bridge-utils package):
# yum install *bridge*
console-bridge


When you install Docker, by default it creates a bridge (a LAN) for you.


# brctl show
# which brctl

When you create a new container, it creates a NIC and attaches it to the switch...

If you create two containers in Docker, they can communicate with each other.

# docker network ls
# docker network inspect bridge
Note: bridge is a switch

In this bridge, no container is connected yet, since we have not launched a container.

Launch one container and rerun the inspect command:

# docker run -it --name os123 ubuntu:14.04
# docker network inspect bridge

Docker goes to the base OS, attaches the container to the bridge, and provides the container a new IP address.

The entire network setup is done by Docker, the Docker engine.

This leads us to a big challenge.

What is the challenge?




# brctl show # shows you switches
cni0 ->

When you join a worker node to the master node, a switch is created on each node,
say a cni0 switch on both nodes.

Go to worker node 1:

# brctl show
You will see the bridge there as well.

On node2:
# brctl show
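
Illustrative brctl show output once a pod is running on the node (the bridge id and interface name will differ):

bridge name     bridge id               STP enabled     interfaces
cni0            8000.0a580a8f0001       no              veth1a2b3c4d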

When you launch a pod:


Go to the master:
# kc create deploy myd --image=vimal13/apache-webserver-php
# kc get deploy
# kc get pod
# kc get pod -o wide

It will create the container, attach an interface, attach it to the switch, and assign an IP.
Now, in the output of the brctl show command, the interface is connected.


On node1:
# docker ps

Once the container is launched on that node:

# brctl show   (you see the interface)
# ifconfig

By their default nature, containers can ping each other if you have multiple containers on the same node.

In the real world, you face some issues.
In this entire setup, you face the following:

Say you have two nodes.
- They are independent.
- You are running a container engine on both.
- A switch is automatically created on both.
- You create a container on node1 and another container on node2.
- Node1 and node2 are different nodes.

Now, from the master node,
- you plan to launch a wordpress pod;
it is launched on node1.

And you decided to launch another container, a database;
the database container lands on node2.

Now, the challenge here is:
when you install the Docker engine, it creates a switch.
- If you launch more containers, the switch's responsibility is to connect all the containers on that node.
- But the two nodes are isolated environments.

- Any container on node1 that wants to connect to a container on the other node can't.


But this is a multi-tier application, and WP and the database can't communicate because of network isolation.
- This kind of connection is not allowed.

If you launch both on the same node, it's possible, but you won't, since the scheduler decides where to launch. You do have the option to force them onto the same node, but you may not want to do that either.


To solve this issue, what do we have to do?
- We have to use one approach ->

How can we connect a container on node1 and the database on another node? To implement this, we need to know
one more network component.

Let's say you have two worker nodes,
Node1 and Node2.

- We need to have physical connectivity between these two systems,
- and real IPs assigned on both systems.

This physical network is called the underlay network.

A (.20)---------Phy n/w-------------B (.40)

As soon as you have network connectivity, a switch cni0 is created on each node (the name can differ); when you create a container, it connects to that switch.
If c1 is wordpress and c2 is the DB, then due to container network isolation, they can't connect.

What we will do is launch a special container.
- Launch a container (c10) and connect it to the switch.
- Now c1 and c10 have connectivity.
- Launch a new container, say c11, on node2,
- so container c2 can connect to c11.
- Now, container c10 is treated as a router/gateway: when a packet from a container comes to it, it takes the packet and
sends it onto the underlay network.
- Since c10 is on the same node, it can connect to all containers there.
- Once the packet is on the underlay network, it can travel to the other endpoint on the network.
- On the other hand, c11 does the same, following the same route/path, so the two sides can communicate.

In the networking world, this container (c10, c11) is very important.

Say the IPs are:
On node1:
c1: 10.0.1.1
c3: 10.0.1.2
c10: 10.0.1.0/24

Node2
c2: 10.0.2.1
c4: 10.0.2.2
c11: 10.0.2.0/24

We will tell containers c10 and c11 to behave like a DHCP server:
- "You are the router/gateway/DHCP server."


c1 -> c10 -> eth0 (A) ---- eth0 (B) ---> c11 ----- c2

Containers c10 and c11 maintain routing tables.


Container c1 wants to reach container c2:
- c1 goes to c10, which has a routing table and passes the packet to eth0 on node A.
- The packet goes over the underlay network to eth0 on B.
- It then reaches c11, which acts like a router/gateway, and the traffic is passed to c2.

Containers c10 and c11 make direct (logical) communication:
- they establish direct connectivity, and this is called an overlay network setup.
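
On each node, this shows up in the routing table. An illustrative sketch matching the example IPs above (flannel names its VxLAN interface flannel.1):

# ip route
10.0.1.0/24 dev cni0 proto kernel scope link
10.0.2.0/24 via 10.0.2.0 dev flannel.1 onlink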

To implement an overlay network setup,
- we have to use a concept called tunneling.
Two of the tunneling options are:
- GRE
- VxLAN

A has a personal LAN and B has its own personal LAN.

Because of the overlay network concept, A and B feel like they are on the same LAN.
This is called Virtual Extensible LAN (VxLAN).

Overlay networking is a concept, and it has two common implementations:
- VxLAN (more in use)
- GRE
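
Just to see the mechanism, this is roughly how you would create a VxLAN tunnel by hand with iproute2 (the VNI 42 and the IPs are placeholders; flannel automates all of this for you):

# ip link add vxlan0 type vxlan id 42 dstport 4789 remote <IP-of-B> dev eth0
# ip addr add 10.0.0.1/24 dev vxlan0
# ip link set vxlan0 up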


Implement a VxLAN tunnel (overlay network)

We will use an app called Flannel. There are many more; google it.

We applied one kube-flannel app yesterday.

You need a flannel pod on each node.
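
The apply step from yesterday was the published flannel manifest, roughly (this was the commonly used URL at the time; check the flannel repo for the current one):

# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml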

@master
# kubectl get pod -n kube-system

You will see the flannel pods running - three pods, one on each node:
# kc get pod -n kube-system -o wide

Without flannel, you cannot have this cross-node communication.

Kubernetes cluster-administration addons:
- flannel
- .....

The container launched by the flannel program acts like a router, gateway, and DHCP server.

# kc get pod -n kube-system
You see coredns is still not running - READY 0/1.

So, we can say flannel is just an app.
- It runs in the form of a container and acts as a router and DHCP server.
- Inside the worker nodes, it provides the IPs.

A conflict can come up:
- When you launch a Kubernetes cluster, you have to tell the master: when you launch a pod, I want the IP to be of this form.
- You define the range at the time of the kubeadm command (see the sketch below).
# kc get pod -o wide
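
For reference, that range is set on the master at cluster-creation time; a sketch of the init command (10.244.0.0/16 is the conventional flannel range):

# kubeadm init --pod-network-cidr=10.244.0.0/16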

But technically, I want flannel to decide what IP to assign.
- The master gives the range, but flannel manages it.
- Look at the config of flannel: an IP range is written there.
- Pods will get IPs in that range.

If the master has one range and flannel has a different one, it will create a problem.
In this case, you have to change flannel's config file so it matches the IP range from the master.


Let's expose the deployment:
# kc expose deploy myd --port=80 --type=NodePort

# kc get svc
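
Illustrative kc get svc output (the ClusterIP and node port will differ):

NAME   TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
myd    NodePort   10.96.12.34   <none>        80:31234/TCP   5s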

# kc get pod -o wide

We see the container is running on node B.
You need to use the IP of node B.
Type in the browser:
http://154.44.344.99:12345

You see the IP of the pod.

# kc get pod -n kube-system

Because of kube-proxy, all pods can communicate.

We have to properly configure flannel and the overlay network to make it work.

The only thing you have to do is change flannel's config:

# vi /var/run/flannel/subnet.env
Look at the flannel network range: 10.244.0.0/16

If you go to all the nodes, master and workers, wherever flannel is running you will see this network.
What do you have to do?
- You have to change it.
- you have to change

How do you change it?
Flannel is a service/program inside the container; you have to manage it through a ConfigMap:
# kc get configmap

A ConfigMap is a database where we keep config:
# kc get configmap -n kube-system

Open this file (the flannel ConfigMap is typically named kube-flannel-cfg):
# kc edit configmap kube-flannel-cfg -n kube-system

It will open the entire config of flannel.
Go to the network section:
net-conf.json

Network: 10.244.0.0/16
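
The relevant stanza of the ConfigMap looks roughly like this (vxlan is the backend in flannel's stock manifest):

  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }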


Change it to the right one from the master -
the one you set with the kubeadm init command (the IP range) when you set up the master.


Get only the flannel pods:
# kc get pod -l app=flannel -n kube-system

# kc delete pod -l app=flannel -n kube-system

The pods will be recreated using the new network setting.

# kc get pod -n kube-system

Check the config file of flannel:

# cat /var/run/flannel/subnet.env

Now the config is changed with the updated network address...

# kc get pod -n kube-system

Now coredns is working fine.

# kc get pod -o wide
Now the pod is running on node B.

Now you can access it from A:
use the IP of A and the port, and you can access it -
even from the master...

This concept explains kube-proxy.

So far we discussed the overlay network, flannel, and kube-proxy.

Overlay is a concept and flannel is a software app;
Calico is another such app.


# kc get deploy -n kube-system
# kc get pods -n kube-system
# kc get rs -n kube-system

# kc describe ds kube-flannel-ds -n kube-system

The flannel pods are controlled by a DaemonSet:

# kc get ds -n kube-system
