Tuesday, February 9, 2021

Kubernetes - identity and access manager (IAM) - RBAC - Day 16

Kubernetes - identity and access manager (IAM) 2/09/2021
Class notes - Day 16

- Role-based access control (RBAC)
  - Roles/Users/Passwords/RoleBindings


Namespace
- used if you want to implement multi-tenancy (tenant -> your client or user)
- one cluster is used by multiple different users.
- while they are using it, we set up the environment (env) in such a way that each user feels the entire cluster belongs to them
- an independent space.

Cluster
- User
- Tenant

When the environment is set up for one user, other users can't see it. This concept is called a namespace.

A cluster is like a big hotel
- it has big rooms, isolated and secure => each room is a namespace
- the cluster => the hotel (managed by the master node)
- room numbers are namespaces
  such as room1, room2 ....
- according to the price the user pays (suite, premium...), the user gets the corresponding features...
- how much you pay determines how much service we give.


Features -> means resources, i.e. whether you can launch resources such as deployment, pod, service, secret ...

In a hotel, the guest gets a key/card - authentication to enter the room.
The key is like a password.

What and how much of the resources you can use is controlled by the master node.
What actions you can perform is decided by the admin on the master node.

An action in Kubernetes is called a verb.

Team1
- working on a web app

You have multiple admins on Team1, say bill, jack, bob.

They can log in to the namespace to deploy pods, svc
- say bill can log in and check services, and see pod status
- but has no power to create or delete..

User        Resources           Verbs
----        ---------           -----
bill        svc, pod            list, get
jack        deploy, pod, svc    list, get, create - almost everything


The user authenticates and is identified,
- then the user's access is checked: what access the user has on which resources.



We have
Team1
- Working on web app (php)

Team2
- working on the database


The system identifies you when you get into it, along with the access you have. This set of access is called a role..
Role => all powers (say admin)
- title => say manager (whatever privileges a manager has, a power)
  - some power is granted.

Every title we give for a position is a role,
and on this title, we define what it contains.

Role => techadmin, and we add resources to it.

(A role is like a job position.)


In a k8s cluster
- we are controlling access with the help of roles
- this is RBAC
  what you can do, what you can't do..

User security is governed by RBAC.

- We create multiple users and multiple roles,
- and we associate a role with a user.
This association is called a RoleBinding.

$ kubectl create --help
review role, rolebinding
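As a minimal sketch of how these fit together (the role name, namespace, and verbs below are made up for illustration; bill is the user from the table above):

$ kubectl create role podreader --verb=get,list --resource=pods,services -n team1
$ kubectl create rolebinding podreader-bill --role=podreader --user=bill -n team1
$ kubectl auth can-i list pods -n team1 --as bill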


============xxxx=============

Now, log in to your master node.
All commands are the same on a single-node or multi-node cluster.

# kc get nodes
shows the status of the cluster
# kc cluster-info

Control plane
- api
- kube-master
- service
.....

# kc create   (press enter)
You will get help, but it does not show you how to create a user/password (key).

So how do you set it up?

Cryptography
- authentication
   - username/password (weak security)
   - user/key (key-based authentication) (strong security)
   - certificate-based authentication
       - secure way
       - ease of management



user/laptop               -----> k8s cluster
- OS (win)
  - Program (key)
    - Kubectl


Two types of key-based authentication:
- Key-based authentication (hard to manage)
- Certificate-based

- Key-based authentication

- Certificate-based
  - it's a file
    - contains
      - name
      - domain
      - email
      - host
      - Country

Transfer this file to the k8s server
- everything needed is available in this file.
- transfer this file from your laptop to the k8s server (CSR)
- the k8s master verifies and stamps (signs) it and returns it back to the user (CRT)

Go to /etc/kubernetes/pki/ on your master
# cd /etc/kubernetes/pki/

You see the ca.crt file.. This is a very important file; if a hacker gets it, they can use it to get access.
ca.key is also a very important file.

- keep the private key secret (keep it to yourself)
- only share the public key.

You provide these two files to your program:
1. private key
2. crt certificate file


IAM
- Identity
- Access

Identity (authentication)
- certificate based

Access
- Role based

We will create a user and provide a certificate (no password or key).

For the certificate, on your laptop:
- create a private key
- create your own certificate request (CSR)
- send the request to the master (CSR)
- the master signs the certificate (CRT)
- the master sends the signed certificate back to you (CRT)

At every step of this process, the artifact is a file.

So: create - send - get signed - get back.


On your local workstation

There are lots of tools available to create certificates, but openssl is a popular one.
Graphical tools are also available.

1. Create a private key
# mkdir /kubews; cd /kubews
# openssl genrsa 1024
This prints the key; to save it to a file:
# openssl genrsa -out mykey.key 1024
(private -> public) asymmetric keys (RSA, DSA)
the public key is for the public
the bigger the key size, the more powerful

Now, create the certificate request:
# openssl req -new -key mykey.key -out mykey.csr
Fill out the form.
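You can also skip the interactive form with -subj. Note that Kubernetes treats the certificate's CN as the username (the subject values here are examples; CN=sam matches the user we register later):

# openssl req -new -key mykey.key -out mykey.csr -subj "/CN=sam/O=team1"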

[root@master kubews]# ls -ltr
-rw-------. 1 root root 887 Feb  9 11:55 mykey.key
-rw-r--r--. 1 root root 615 Feb  9 11:57 mykey.csr

The certificate request is created, and it is PEM-encoded.
# cat mykey.csr
You can decode it.
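For example, to decode and inspect the CSR fields (a standard openssl subcommand):

# openssl req -in mykey.csr -noout -text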

2. Now, you have to send this CSR to the k8s master:
# scp mykey.csr servername:
or cat the file and copy the content to the server.

3. Now, the master will read the file and sign it, since the master also works as the signing authority (CA)..

Copy this file into the /etc/kubernetes/pki directory:
# cp mykey.csr /etc/kubernetes/pki

How to sign? x509:
# openssl x509 -req -in mykey.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out mykey.crt

Go in and read the file.


Copy the crt that you created, go to the local machine, and paste it..


Now, your local laptop contains the signed crt..

This is basically certificate-based authentication...


Set up kubectl on your PC (a rhel8 system).
Google and get the repo link from kubernetes.io,
configure yum and install..

# yum install kubectl
# alias kc='kubectl'

# kc version
It does not know where the master is.


The client should know 3 things about the server:
1. the API-server ip:port
2. the username
3. the password (here, certificate-based instead of a password)

You put all of this together in a kubeconfig file.

# kc config view

The kubeconfig file is the configuration file for the client.

(You could also pass --server and certificate flags on every kc command, but the kubeconfig file saves you from that.)


How to create the kubeconfig file?
Using the command:
# kubectl config -h

set-cluster ..
set-credentials
....

We have lots of options available, or you can get a dummy file from the net and update the records.

On the master server:
# cd /root/.kube

Open the config file, go to the server section, and get the IP and port.

Also go to the users section:
name: kubernetes-admin
and the user certificate ...

This user and certificate have all the powers...

You can use this file as a template...

# kc config -h
Let's create one ourselves..

# cd /root/kubews; kc config --kubeconfig mykey.kubeconfig set-cluster awskubecluster --server IP-of_master
# cd /root/kubews; kc config --kubeconfig mykey.kubeconfig set-cluster awskubecluster --server IP-of_master:6443


How to find the IP?
# ifconfig   (or read the server line in /root/.kube/config on the master)


If you are using AWS, use the public address; the private address does not work..

From the client, you will be connecting to the master using https.

You will need to copy ca.crt from the master to your client and regenerate the config file.
# vi ca.crt (copy the content from your k8s master server)
# cd /root/kubews
# kc config --kubeconfig mykey.kubeconfig set-cluster awskubecluster --server IP-of_master:6443 --certificate-authority=ca.crt

The file is created:
# cat mykey.kubeconfig

You see the details of the cluster.

This is the config resource (client config).

# kc config view
This fails because it reads the default config from root's ~/.kube/config.
# kc config view --kubeconfig mykey.kubeconfig
You will see the output...

It will show the kubernetes master and the port.

Now, run on your client system:
# kc get pods
It should go to the k8s master,
but it fails,
because kc runs against the local system by default.
You have to tell the command to go to your remote server (AWS):
# kc get pods --kubeconfig mykey.kubeconfig

We still need to update the user info:
# vi mykey.kubeconfig, or use the command line:
# kc config -h
set-credentials.

# kc config --kubeconfig mykey.kubeconfig set-credentials sam --client-certificate mykey.crt --client-key mykey.key

- what user name you want to create
- what your client cert is, and its client-key

The conf file is updated:
# cat mykey.kubeconfig


Kubernetes - namespace - kube-system - CNI - configure another node on AWS as a worker node

CNI - Container Network Interface


Bridge network - a single network on one node.
Overlay - two nodes, each with its own personal LAN (bridge); connecting the bridges is called an overlay. A logical one.
It's a concept; there is a tunnel behind the scenes, sending packets from one bridge to another.



Namespace

overlay / VxLAN / Flannel / CoreDNS / NS / CNI

Kube-proxy



Yesterday, we deployed Kubernetes on AWS.

Master  - IP
Worker1 - IP
Worker2 - IP

Worker --> Join ---> Master

storage clusters - gluster, hadoop; computing clusters...
Every cluster has a use case to solve.

Our kubernetes cluster is there to manage nodes.

Users/admins connect to --> the master server
- using their own workstation
- using kubectl to manage

Master
- we configured kubectl

1. Login to your master server
# kc get pods
# kc get nodes

Duty of a worker node
- the one who works for us
- they launch containers using their own resources (RAM/CPU/OS)

# kc get pods -o wide
will show you IP, node, age (uptime)

Remove all pod instances:
# kc delete all --all


Configure another node
- create a server
- get the IP and log in

Now, join this worker node to the master node.

# yum install docker -y
# get the repo config from kubernetes
# install kubectl kubelet kubeadm
(follow the notes)
# docker info
# change the cgroup driver to systemd (cgroups are used to restrict resources)

# systemctl enable kubelet
# systemctl restart docker
# yum install iproute-tc

Update the sysctl info, as sketched below.
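These are the typical bridge/forwarding settings from the kubeadm install docs (the file name k8s.conf is just a convention):

# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# sysctl --system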


Run the kubeadm join command on worker node 3.

If you don't have the token:
# kubeadm token create --print-join-command
to get the join command,
and run it on the worker node.
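The join command has roughly this shape (the IP, token, and hash are placeholders, not real values):

# kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>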
If it succeeds, go to the master and run:

@master
kubectl get nodes
You will see the new node available (it may take a minute or so).


When you join, the worker node downloads some docker images:

# docker images
# docker ps


namespace
------------
What is it?

We create one single kubernetes cluster with hundreds of worker nodes.
- We have different teams. They have their own requirements, images.
- They want to run pods..
- We could create a hundred different k8s clusters, but the teams may not fully use them.
- So, we create one single cluster and give all the teams access to it.
- This way we are optimizing our hardware resources.
- We don't want to waste our hardware resources.

Say multiple teams want to use our cluster.
They want to run pods and secure databases.

Here, the biggest challenge is security.
- The same cluster is used by different teams, so they might see each other's pods.
- What k8s did is give each team a separate room (like a room in a hotel).
- Say room1 for team1, room2 for team2, and so on.
- How big is the room?
  - The more you pay, the bigger the size you get.
What does "pay more" mean?
 - You are going to put data and pods --> RAM/CPU - however much you need (Limits and Quotas).
 - We provide limited resources to a team, e.g., a 10 GB allocation even if the team is only using 3 GB (see the quota sketch below).
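As a minimal sketch of limits and quotas (the namespace name and the numbers are made up for illustration):

# kc create namespace team1
# kc create quota team1-quota --hard=cpu=4,memory=10G,pods=20 -n team1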

How do you manage it?

One family is one tenant ...

Because of this room concept, each team thinks it has a separate room: "this is my own world".
- Multiple teams work in isolation, separation.
- This is called a multi-tenant facility.


For every account, they create a separate room .. (tenant)

In kubernetes terms, this room is called a namespace.

Pod, Deployment, RS, RC -
put all of these resources inside the namespace.

Run the command below:
# kc get pods
No resources found in default namespace.

There is a namespace (a room) called default created for you.

# kc get namespace
You see some output; these include rooms that kubernetes uses for its own purposes.


As a user, we put our resources under default unless we say otherwise.

But we can create a separate room (namespace) as well.

Without namespaces, it's hard to implement security.


# kubectl create namespace myns
# kc get namespace
# kc get pods

By default it always looks for resources under default.
# kc get pods --namespace myns
You see no resources.

# kc get pods -n myns

I want to create it in a non-default ns:
# kc create deployment myd1 --image=httpd -n myns
# kc get pods
# kc get pods -n myns

This way you can work with different namespaces.
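If you get tired of typing -n on every command, you can change the default namespace of your current context (a standard kubectl config option):

# kc config set-context --current --namespace=myns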


Delete the ns:
# kc delete namespace myns

If you remove the room, all data inside will be lost, destroyed.

-------------------------------------------------------

Kubernetes uses kube-system.

# kc get pods -n kube-system

In this room, kubernetes' own internal room, it has lots of pods running,

such as coredns, etcd, kube-apiserver, kube-controller-manager, kube-flannel, kube-proxy, the scheduler.... and more..

Inside the kubernetes master server, lots of services are running...

Look at the kube-system output:
you see coredns    ready 0/1

Describe the details of the coredns pod:

# kc describe pods coredns-<suffix> -n kube-system

You see the pod is not running;
you need this DNS facility in a multi-node setup.
you need this facility in multi node facility

# kc get pods

Launch a pod using a deployment:
# kc create deploy myd --image=vimal13/apache-webserver-php

- We are asking to deploy one pod.
- The pod is launched on a worker node.
- Who decides where to launch it?
- The scheduler decides behind the scenes where to launch.

Note: the scheduler picks the node; the control plane then contacts (say) node1 and launches the POD there.

Let's, for instance, focus on docker..
- In the docker world, the container engine is the docker engine.
- From an image, you run the container.
- A container is like an operating system, and we install an app in it.
- Why an app? - someone from outside connects and uses the app.
- The user comes through the network, and the user needs the IP address of the app to connect.
- So, we need an ethernet card on the system. (eth0/1 ... )
    [ OSI layer - layer 2 concept ]
A real physical server has a real NIC card, but a VM has a virtual NIC card.

# docker ps

It is the responsibility of the docker engine (or container runtime engine) to assign one NIC card and an IP address.
The NIC is not real but virtual. That is called a veth - virtual ethernet card.

If they launch a new container, they assign a new veth and IP address.

Run on the base OS (node):
# ifconfig -a
Review the output;
in our case, we have one ethernet card.

# docker run -it --name myos1 ubuntu:14.04


In one network boundary, or in one LAN, you use a switch
so that you can connect all the physical systems.

But in our case, we need software that creates a switch-like environment,
which is called a bridge. Docker will help us create a switch, in our case a bridge.

In linux there is software called linux-bridge, with which you can create routers and switches.

Install linux-bridge:
# yum install *bridge*
console-bridge


When you install docker, by default it creates a bridge (a LAN) for you.


# brctl show
# which brctl

When you create a new container, it creates a NIC and attaches it to the switch...

If you create two containers in docker, they can communicate with each other.

# docker network ls
# docker network inspect bridge
Note: the bridge is a switch.

In this bridge, no container is connected yet, since we have not launched any.

Launch one container and rerun the inspect command:

# docker run -it --name os123 ubuntu:14.04

Docker goes to the base OS, attaches the container to the bridge, and provides the container with a new IP address.

Run the inspect command again.


The entire network setup is done by docker, i.e. the docker engine.

This leads us to a big challenge.

What is the challenge?




# brctl show    # shows you the switches
cni0 ->

When you join a worker node with the master node, one switch gets created on each node,
say the cni0 switch on both nodes.

Go to worker node1

# brctl show
You will see the bridge there as well.

On node2:
# brctl show

When you launch a POD:


Go to the master:
# kc create deploy myd --image=vimal13/apache-webserver-php
# kc get deploy
# kc get pod
# kc get pod -o wide

It will create the container, attach an interface, attach it to the switch, and assign an IP.
Now, in the output of the brctl show command, the interface is connected.


On node1:
# docker ps

Once the container is launched on the worker:

# brctl show    (you see the interface)
# ifconfig

By default, if you have multiple containers on the same node, they can ping each other.

In the real world, you face some issues.
In this entire setup, you face --

Say you have two nodes.
- They are independent.
- You run a container engine on both.
- A switch is automatically created on both.
- You create a container on node1 and another container on node2.
- Node1 and node2 are different nodes.

Now, from the master node,
- you plan to launch a wordpress pod;
it is launched on node1.

And you decide to launch another container, a database;
the database container lands on node2.

Now, the challenge here is:
when the docker engine is installed, it creates a switch,
- and if you launch more containers, the switch's responsibility is to connect all of those containers as well.
- But each node's switch is an isolated environment.

- If any container on node1 wants to connect to a container on the other node, it can't.


But this is a multi-tier application, and wp and the database can't communicate because of the network isolation.
- This kind of connection is not allowed.

If you launch both on the same node, it's possible, but the scheduler normally decides where to launch. You do have the option to force them onto the same node, but you usually wouldn't do that either.


To solve this issue, what do we have to do?
- We have to use one approach ->

How can we connect a container on node1 and the database on another node? To implement this, we need to know
one more network component.

Let's say you have two worker nodes,
Node1 and Node2.

- We need to have physical connectivity between these two systems,
- with real IPs assigned on both systems.

This physical network is called the underlay network.

A (.20)---------Phy n/w-------------B (.40)

As soon as you have network connectivity, each node creates a switch, cni0 (the name can differ); when you create a container, it connects to the switch.
If c1 is wordpress and c2 is the DB, you can't connect them due to the per-node container network.

What we will do is launch a special container.
- Launch a container (c10) on node1 and connect it to the switch.
- Now c1 and c10 have connectivity.
- Launch a new container, say c11, on node2.
- So container c2 can connect to c11.
- Now, container c10 is treated as a router/gateway: when a packet from a container comes to it, it takes the packet and
sends it out over the underlay network.
- Since c10 is on the same node, it can connect to all the containers there.
- Once the packet is on the underlay network, it can travel to the other endpoint on the network.
- On the other hand, c11 also does the same, follows the same route/path, and so they can communicate.

In the networking world, this container (c10, c11) is very important.

Say the IPs are:
On node1
c1: 10.0.1.1
c3: 10.0.1.2
c10: manages 10.0.1.0/24

Node2
c2: 10.0.2.1
c4: 10.0.2.2
c11: manages 10.0.2.0/24

We will tell container c10 or c11 to behave like a dhcp server:
- "you are the router/gateway/dhcp server."


c1 -> c10 -> eth0 (A) ---- eth0 (B) ---> c11 ---- c2

Containers c10 and c11 create routing tables..


Container c1 wants to reach container c2:
- c1 goes to c10, which has the routing table and passes the packet to eth0 on node A.
- Now the packet goes to eth0 on node B (over the underlay network).
- It then goes to c11, which acts as a router/gateway, and the traffic is passed to c2.

Containers c10 and c11 communicate directly.
- They establish direct connectivity, which is called the overlay network setup.

To implement the overlay network setup,
- we have to use a concept called tunneling.
Some of the tunneling protocols are:
- GRE
- VxLAN

A has a personal LAN and B has its own personal LAN.

Because of the overlay network concept, A and B feel like they are on the same LAN.
This is called Virtual eXtensible LAN (VxLAN).

The overlay network is a concept, and it has two common implementations:
- VxLAN (more in use)
- GRE


Implement a VxLAN tunnel (overlay network)

We will use an app called Flannel. There are many more; google it.

We applied the kube-flannel app yesterday..

You need a flannel pod on each node.

@master
# kubectl get pod -n kube-system

You will see the flannel pods running: three pods, one on each node.
# kc get pod -n kube-system -o wide

Without flannel, you cannot get this communication.

kubernetes cluster-administration addons:
- flannel
- .....

The containers you launch from the flannel program act like the router, gateway, dhcp.

# kc get pod -n kube-system
You see coredns is still not running: 0/1 ready.

So, we can say flannel is just an app.
- It runs in the form of a container and acts as the router, dhcp.
- Inside each worker node, it provides the IPs.

A conflict comes up:
- When you launch the kubernetes cluster, you have to tell the master: when you launch a POD, I want the IP to be of this form.
- You define the range at the time of the kubeadm command (see the sketch below).
# kc get pod -o wide
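For reference, with flannel the pod network range is typically given to kubeadm at init time like this (10.244.0.0/16 is flannel's stock default):

# kubeadm init --pod-network-cidr=10.244.0.0/16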

But technically, I want flannel to decide which IPs to hand out.
- The master defines the IP range, but flannel manages it.
- Look at the config of flannel; an IP range is written in it.
- PODs will get IPs in that range.

If the master has one range and flannel has a different one, it will create a problem.
In that case, you have to change flannel's config file to match the IP range from the master.

Let's expose the deployment:
# kc expose deploy myd --port=80 --type=NodePort

# kc get svc

# kc get pod -o wide

We see the container is running on node B.
You need to use the IP of node B;
type in the browser:
http://154.44.344.99:12345

You reach the pod's app..

# kc get pod -n kube-system

Because of kube-proxy, all the pods can communicate.

We have to properly configure flannel and the overlay network to make it work.

The only thing you have to do is change the flannel config file.

# vi /var/run/flannel/subnet.env
Look at the flannel network range: 10.244.0.0/16

If you go to all the nodes, master and workers, wherever flannel is running, you will see this network.
What do you have to do?
- You have to change it.

How to change it?
Flannel is a service/program inside a container; you have to manage it through a configmap.
# kc get configmap

A configmap is a database where we keep config.
# kc get configmap -n kube-system

Open the flannel configmap (named kube-flannel-cfg in the stock manifest; verify with the command above):
# kc edit configmap kube-flannel-cfg -n kube-system

It will open the entire config of flannel.
Go to the network section, net-conf.json, where the Network range is defined - see the sketch below.
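The net-conf.json section looks roughly like this (shape taken from flannel's stock manifest; your backend type may differ):

net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }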


You change it to the right range from the master -

the IP address range you gave to kubeadm init when you set up the master.


Get only the flannel pods:
# kc get pod -l app=flannel -n kube-system

# kc delete pod -l app=flannel -n kube-system

The pods will be recreated with the new network setting.

# kc get pod -n kube-system

Check flannel's config file:

# cat /var/run/flannel/subnet.env

Now the config is changed, with the updated network address...

# kc get pod -n kube-system

Now coredns is working fine.

# kc get pod -o wide
Now, the pod is running on node B.

Now, you can access it from A:
use the IP of A and the port, and you can access it,
even from the master ...


This behavior is what kube-proxy provides.

So far we discussed the overlay network, flannel, and kube-proxy.

Overlay is a concept, and flannel is a piece of software - an app;
calico is another such app.



# kc get deploy -n kube-system
# kc get pods -n kube-system
# kc get rs -n kube-system
# kc describe ds kube-flannel-ds -n kube-system

Flannel is controlled by a DaemonSet.

# kc get ds -n kube-system



Back to the kubeconfig setup - view the updated file:
# cat mykey.kubeconfig

The certificate and key are now also included.

# kc config view --kubeconfig mykey.kubeconfig
# kc get pods --kubeconfig mykey.kubeconfig
We still get the error,

pointing to localhost.
Here we come to the discussion of context.

Without a context, kc does not work.

# kc config -h
Set the context..
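As a preview, the context setup will look roughly like this (reusing the cluster and user names from above; the context name mycontext is made up):

# kc config --kubeconfig mykey.kubeconfig set-context mycontext --cluster=awskubecluster --user=sam
# kc config --kubeconfig mykey.kubeconfig use-context mycontext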


continue tomorrow ....



  Git branch show detached HEAD 1. List your branch $ git branch * (HEAD detached at f219e03)   00 2. Run re-set hard $ git reset --hard 3. ...