Wednesday, June 16, 2021

K8S - playing with kubernetes

Worker node
- Container runtime
- kubelet - interacts with the container runtime and the node; starts the pod with a container inside
- kube-proxy - forwards requests to the right pods

Master node manages the worker nodes
- schedules pods
- monitors them
- reschedules when needed


API-Server - cluster gateway; handles authentication
- validates requests
- every query/request to the cluster goes through the api-server

scheduler
- say you want to start a new pod
- intelligent enough to start the right resource on the right node:
  - checks available resources, picks the least busy node, schedules the pod.

Controller Manager
- detects cluster state changes (e.g., a node dies) and reschedules pods
- restarts crashed pods

etcd
- keeps the state/changes stored as key/value pairs
- it is a database of cluster resources
- keeps the cluster state; cluster health lives in the etcd cluster
- application data is not stored here; it only stores cluster state data.
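A quick way to see these components actually running (works on minikube and kubeadm-style clusters; exact pod names vary by setup):
$ kubectl get pods -n kube-system
you should see etcd, kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy pods listed.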

In a multi-master setup, requests to the api-server are load balanced.


Cluster setup (example)
-> masters - 2
-> worker nodes - 3

To add a new master/node server
- get a new server
- install all master/worker node components
- join the cluster
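On a kubeadm-managed cluster, joining looks roughly like this (the token and hash below are placeholders):
# on an existing master, print a fresh join command
$ kubeadm token create --print-join-command
# run the printed command on the new worker node
$ kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>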


Minikube and kubectl

minikube
- normally you have master node(s)
- and worker nodes

to test locally, setting up the whole infrastructure is very difficult and time consuming,
so you can use minikube instead
- uses a virtual machine (minikube creates it, e.g. in VirtualBox)
- the node runs inside that VM
- 1-node cluster (master and worker processes on the same node)
- used for testing purposes


what is kubectl?
- command line tool for k8s cluster

Minikube runs the master processes, which means
- it has the API-server (enables interaction with the cluster)
- use kubectl to interact
  - add/delete components
    create, destroy pods

kubectl works for minikube, cloud, or any type of cluster.
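kubectl decides which cluster to talk to via contexts in its kubeconfig file, for eg,
$ kubectl config get-contexts          # list the clusters kubectl knows about
$ kubectl config use-context minikube  # point kubectl at the minikube cluster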

Installation
- download minikube from the kubernetes.io site - just google it
- Download VirtualBox and install it

for mac
- brew update
- brew install hyperkit

- brew install minikube
kubectl is also installed as a dependency.

Run the commands with no arguments to see their usage output:
$ kubectl
$ minikube

you can do everything here on your own system.

start the cluster
$ minikube start

specify the VM driver
$ minikube start --vm-driver=hyperkit

or --vm-driver=virtualbox

minikube cluster is set up.

Note: from here on, kc is an alias for kubectl
$ alias kc=kubectl

$ kc get nodes
gets the state of the nodes

the node is ready with the master role
$ minikube status
shows the status

you see kubelet, apiserver running

$ kubectl version


Once you have minikube and kubectl installed, you can run and practice k8s.
Some kubectl commands
Create and debug pods in minikube cluster


CRUD commands
$ kc create deployment <name> --image=<image>    -> Create deployment
$ kc edit deployment <name>    -> Edit deployment
$ kc delete deployment <name>    -> Delete the deployment

Status of different k8s components
$ kc get nodes|pod|services|replicaset|deployment

Debugging pods
$ kc logs <pod_name>    -> Log to console
$ kc exec -it <pod_name> -- /bin/bash    # get interactive pod terminal



$ kc get nodes
$ kc get pods
$ kc get services

create pod
$ kc create -h
look for available commands

Note: a pod is the smallest unit in a k8s cluster.
we will not directly create pods; instead we will use a deployment - an abstraction layer on top of pods.
so, we will create a deployment as follows,
$ kc create deployment NAME --image=name-of-image [--dry-run] [options]

--image => the container image to use
$ kc create deployment nginx-depl --image=nginx

$ kc create deployment mydep --image=nginx
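Tip: the [--dry-run] option from the syntax above lets you preview the object without creating it, and with -o yaml it doubles as a config-file generator (on newer kubectl; older versions take a bare --dry-run):
$ kc create deployment nginx-depl --image=nginx --dry-run=client -o yaml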

$ kc get deployment
$ kc get pods
you get pod names prefixed with the deployment name

a deployment has all the information needed to create pods, so you can call a deployment a blueprint for creating pods
- the most basic configuration a deployment needs to create a pod is a name and the image to use.


There is another layer between deployment and POD, which is automatically managed by the kubernetes deployment, called ReplicaSet.

$ kc get replicaset

you will see the name of the replicaset and other info.

ReplicaSet basically manages the replicas (copies) of a POD.

You don't have to manage or delete or update replicaset. You will directly work with deployment.

the above command creates one pod (one replica). If you want more replicas (copies of the instance), you can specify that with additional options.
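One such option is the standard scale command, eg,
$ kc scale deployment nginx-depl --replicas=3
more commonly, though, you set replicas in the deployment's config file, as shown later.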


Layers of abstraction

- Deployment manages a replicaSet
- ReplicaSet manages a POD and
- Pod is an abstraction of container

everything below deployment should be managed by kubernetes. You don't have to manage it.

You edit the image on the deployment, not on the pod. for eg,
$ kc edit deployment nginx-depl

You will see an auto-generated configuration file with default values.

go down to spec and look under containers; you will see the image

you can change the version here
  image: nginx:1.19

and save the change

$ kc get pods
now, you see the old pod terminating and a new one being created with the new image
$ kc get replicaset

you will see the new replicaset has one pod (current) and the old one has 0 pods.
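You can also watch/verify this rolling update with the standard rollout commands:
$ kc rollout status deployment/nginx-depl    # waits until the new pods are ready
$ kc rollout history deployment/nginx-depl   # lists past revisions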

Debugging POD
$ kc logs <name of pod>
$ kc create deployment mongo-dep1 --image=mongo
$ kc get pod
$ kc logs mongo-dep...

$ kc describe pod <pod_name>


check the messages (Events) at the bottom of the output for status
$ kc get pod

$ kc logs <pod-name>

if the container has a problem, you will see it in the logs

$ kc get pod

debugging
$ kc exec - gets a terminal inside the container
$ kc exec -it <pod-name> -- /bin/bash

-it means interactive terminal
you get a terminal of the mongodb container.

you can test and check logs
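for eg, inside the mongo container you might run (illustrative commands, assuming the standard mongo image):
$ env | grep MONGO    # check the env variables passed to the container
$ ls /data/db         # mongo's default data directory
$ exit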


Delete pod

$ kc get deployment
$ kc get pod

To delete
$ kc delete deployment <name of deployment>

$ kc get pods

you see the pods terminating
$ kc get replicaset

everything underneath the deployment is gone.


$ kc create deployment <name> --image=<image> option1 option2 ...

you can supply more values on the command line, but it is better to work with a config file.


Using a k8s configuration file, you put all the needed information in one file and execute that.

to execute, you use the apply command

$ kc apply -f <name of config file>.yaml

$ vi nginx-deploy.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
# specification for deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
# specification (blueprint) for POD
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19
        ports:
        - containerPort: 80    # binding port

Apply the configuration
$ kc apply -f nginx-deploy.yaml

to change, edit the file and change the replicas.

$ vi nginx-deploy.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
# specification for deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
# specification (blueprint) for POD
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19
        ports:
        - containerPort: 80    # binding port


$ kc apply -f nginx-deploy.yaml
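Tip: before re-applying, you can preview what would change with the standard diff subcommand:
$ kc diff -f nginx-deploy.yaml
after the apply, kc get pods should show three pods.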


in summary, what we covered so far,

Create (C), Read (R), Update (U), Delete (D) -> CRUD commands

Deployment created
$ kc create deployment <name-of-deployment> --image=<image>

Edit deployment
$ kc edit deployment <name of deployment>

Delete deployment
$ kc delete deployment <name of deployment>

Get Kubernetes components
$ kc get nodes | pod | services | replicaset | deployment

Debugging PODs
Log to console
$ kc logs <Name of pod>

Get terminal of running container
$ kc exec -it <pod_name> -- /bin/bash

Using config file to create pods

Create and apply configuration
$ kc apply -f <config.file.yaml>

Delete with config file
$ kc delete -f <config-file.yaml>

Troubleshooting
$ kc describe deployment <deployment-name>
$ kc describe pod <my-pod>

=================================


Introduction to YAML files in kubernetes

Overview
- There are three parts to a config file
- Connecting deployments to services to pods


The first two lines tell you what you want to create.

kind: Deployment # here we are creating a deployment. first letter is uppercase.
kind: Service    # here, we are creating a service

apiVersion: v1 or apps/v1    # the api version differs per kind; you have to look up the right version for each.
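Two standard kubectl commands help you find the right kind and apiVersion:
$ kc api-resources       # lists every kind with its APIVERSION column
$ kc explain deployment  # shows the version and field docs for one kind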

1. metadata of the component you are creating.
name of the component,
for eg,
metadata:
  name: nginx-deployment

2. Specification
- each component's configuration file has a spec section.
  whatever configuration that component needs, you specify it here.
some examples,

spec:
  replicas: 2
  selector: ...
  template: ...

or
spec:
  selector: ...
  ports: ...

Note: The attributes of spec are specific to the kind (Deployment, Pod, Service, ...) you are creating.
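You can see this per-kind difference directly with kubectl explain, for eg,
$ kc explain deployment.spec  # replicas, selector, template, ...
$ kc explain service.spec     # selector, ports, type, ...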

3. Status
- it is automatically generated and added by kubernetes

What k8s does is compare the desired state (your spec) with the actual status of the component.

For eg, let's say there are 4 replicas running and your desired state is 2. k8s compares this info; if it doesn't match, k8s knows something is wrong and tries to fix it, so it will terminate 2 out of the 4 when you apply it.

When you apply the configuration, k8s adds the status of your deployment and updates the state continuously.

If you change the replicas from 2 to 5, it will again check the status against the specification, then take corrective action.
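An illustrative (not exhaustive) status block that k8s adds to a deployment:
status:
  replicas: 2
  readyReplicas: 2
  updatedReplicas: 2
  availableReplicas: 2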

How/where does k8s get the status data?
- k8s has a database called etcd.
- the master node keeps the cluster data. Basically, etcd holds the current status of any k8s component as key/value pairs.


======================================

Format of config file
- it's a YAML file.
- a human-friendly data serialization standard for programming languages.
- syntax: strict indentation

use a "YAML validator" to validate your file.

- Make a habit of storing the config file with your application code, or in its own git repo.


Layers of abstraction
- Deployment manages a ReplicaSet
- ReplicaSet manages Pods
- Pod is an abstraction of a container

In effect, the deployment manages the pods for you.


template
- template holds the information about the POD.
- You have a specification for the deployment, and
  inside it, a specification for the pod (the template).
  that specification covers the name of the container, the image to use, and the ports to open.


Connecting components (Labels, selectors and PODs)
- how is the connection established?
  by using labels and selectors.

Note: the metadata part contains the labels and
      the specification part contains the selector

In metadata, you give the component (deployment, pod, ...) labels as key/value pairs.

for eg,
labels:
  app: nginx

this label will stick to the component.

So we give the POD its label through the template,
so the label -> app: nginx

  template:
    metadata:
      labels:
        app: nginx

we tell the deployment to match all pods with the label app: nginx to create the connection.

spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx

This way, the deployment knows which pods belong to which deployment.

As we know, the deployment has its own label, app: nginx.

These labels (on the deployment and its pods) are used by the service's selector.

In the specification of the service, we define a selector, which basically makes the connection between the service and the deployment or its pods.

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80    # must match the containerPort of the pod

the service must know which pods are registered with it. This connection is made through the selector matching the pods' labels.


Ports in Service and Pod

eg, nginx service, DB service - each defines its own port (where the service listens) and targetPort (the pod port it forwards to)

[root@master week1]# kc apply -f nginx-service.yaml
[root@master week1]# kc get service


How can we validate that the service has the right pods to forward requests to? We can do it using,
$ kc describe service nginx-service


In the output, look at the selector, the targetPort, and the Endpoints.

Endpoints are the IP addresses and ports of the pods that the service must forward requests to.
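You can also list the endpoints object directly (a standard resource k8s maintains per service):
$ kc get endpoints nginx-service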

how do we know these are the right IP addresses of the pods?

'kc get pod' alone does not give you IP info.

to get more info (more columns, including the pod IP address):
$ kc get pod -o wide

so we can confirm the service has the right IP addresses.


Now, let's discuss the third part (status), which k8s automatically generates.

the way you can see it is through 'kc get deployment' with yaml output:

[root@master week1]# kc get deployment nginx-deployment -o yaml

or save it to a file.

when you view it, you will see lots of info has been added.

[root@master week1]# kc get deployment nginx-deployment -o yaml > nginx-deployment-detail.yaml
$ vi nginx-deployment-detail.yaml

you will see status section right below spec. it is very helpful while debugging.

you also see other information added to the metadata section,
such as creationTimestamp, uid and more


how do you reuse/copy this yaml file?
you have to remove the auto-generated content first (the status section, plus metadata fields like creationTimestamp and uid).

you can delete the deployment using the config file.



# kc delete -f nginx-deployment.yaml
# kc delete -f nginx-service.yaml


==========================================

Complete application setup with k8s components
- we will be deploying two applications:
mongoDB and mongo-express.

we will create
- 2 deployments/PODs
- 2 services
- 1 configMap
- 1 secret

How to create it?
1. First we will create the mongoDB POD.
2. In order to talk to the pod, we need a service. we will create an internal service, which basically means no external requests are allowed to the pod; only components inside the same cluster can talk to it.
3. After that we will create the mongo-express deployment, which needs two things:
- first, the database URL of mongoDB, so mongo-express can connect to it.
- second, the credentials: the username and password of the database, so it can authenticate to the DB.

The way we pass this info to the mongo-express deployment is through environment variables in its deployment configuration file, because that's how the application is configured.

So we will create a ConfigMap that contains the database URL. Then we will create a Secret that contains the credentials, and we will reference both inside the deployment.yaml.

We will make mongo-express accessible through the browser. In order to do that, we will create an external service that allows external requests to talk to the POD (see the minikube tip after this list). The URL will be,
- IP address of the Node
- Port of the external service.
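Minikube tip (assuming the external service ends up named mongo-express-service):
$ minikube service mongo-express-service
this assigns the external service a reachable URL and opens it in the browser.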


We will create,
- MongoDB
- Mongo Express

- Internal Service

- ConfigMap (DB URL)

- Deployment.yaml
  - env variables

- Secret
  - DB user
  - DB pswd

- External service

-----------------------------------------
Request flow

1. Request comes from the browser
2. It goes through the external service (mongo-express), which forwards it to
3. the mongo-express POD.
4. The POD then connects to the internal service of MongoDB, which is basically the DB URL.
5. The request is forwarded to the MongoDB POD, where it authenticates using the credentials.


Mongo Express browser -> Mongo Express External service -> Mongo express -> MongoDB internal service -> MongoDB


Now, let's go ahead and work on the LAB.

1. Let's start the minikube cluster
$ minikube start

$ kc get all


apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    app: mongodb    # making connection to POD
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb    # making connection
  template:        # definition (or blueprint) for POD
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo


so, we have the pod info
- name: mongodb
  image: mongo

go to docker hub and search for mongo


now, we will add more config info

$ cat mongo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo
    # specify ports and env variables
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          value:
        - name: MONGO_INITDB_ROOT_PASSWORD
          value:

the deployment config file is checked into a repo,
- so we will not write the admin user/password directly in the configuration.

instead, we will create a secret and reference its values

Now, save your deployment file.

before we apply the config, we will create the secret.
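A sketch of where this is headed (the names mongodb-secret, mongo-root-username and mongo-root-password are illustrative). Secret values must be base64-encoded:
$ echo -n 'username' | base64

mongodb-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret
type: Opaque
data:
  mongo-root-username: dXNlcm5hbWU=    # base64 of 'username'
  mongo-root-password: cGFzc3dvcmQ=    # base64 of 'password'

the env section of the deployment would then reference it via the standard secretKeyRef:
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-username
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-password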





get info from the URL,
https://hub.docker.com/_/mongo





https://www.youtube.com/watch?v=X48VuDVv0do
1:22:35