Friday, March 19, 2021

Kubernetes - Multiple containers in one POD

Class notes
Today's Agenda
- StatefulSets
   - Headless service
   - Persistent volume claim template (volumeClaimTemplates)
   - init container
 
- Multiple containers in one POD
  - POD design pattern
 

After this class, create a blog/article about the topic.
Create your own personal project based on today's topic.

CKA?

Multi Container Pod

What we have learned so far:
we have an app -> image -> deploy to container (k8s)

As of now, we have single-container pods.

There are some use cases where we need more than one container inside a pod.

Each container is launched from a different image.
So you have a pod with two different containers from different images.

If something goes bad, it will recreate the same pod with both containers.

This type of pod is a multi-container pod.

Why do we need it?
Let's say,
- we have one web site (a webapp written in PHP)
- We deploy this app on a web server (Apache)
- We need an OS, say Linux.
- We have one service running Apache and serving the website.

This was the traditional approach. A system admin keeps tracking the web site (monitoring), applying patches, in fact maintaining it:
- Monitoring resource utilization
- CPU/memory utilization

Some of the tools available:
- Splunk
- ELK
- EFK
- Datadog

Every tool has an agent program:
- This program monitors the web server.
- If you want to monitor the web server, you have to install the agent on the same OS.
- It is an agent program which keeps track of the website.
- So, we will be installing and adding one more program on the same system.

Now, we have one system (OS) and we have:
- Web server installed and set up
- Monitoring tool (agent) also installed

Now, let's say the monitoring somehow fails; we have to troubleshoot and fix the issue.
In some cases it's hard to fix, and we may have to completely rebuild. In that case, we want both programs running.

But let's think of it this way: say this whole OS component is a POD.
- Inside the POD, we have the web server.
- The monitoring tool is also there.

Same thing: say the pod fails, or the monitoring tool is not working.
We want both programs to be running all the time.

So, our POD design:

POD
- C1
- C2

We have a POD and two containers.

This is called a multi-container pod, and the design is called sidecar.

Write a yaml code



Please note:
a container is nothing but a process. It's not a whole OS.


apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myweb-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      env: prod
    matchExpressions:
      - { key: team, operator: In, values: [ team1, team2 ] }
      - { key: tier, operator: In, values: [ frontend ] }
  template:
    metadata:
      name: "webpod4"
      labels:
        env: prod
        team: team1
        tier: frontend

    spec:
      containers:
      - name: "webc1"
        image: "vimal13/apache-webserver-php"
      - name: "webc2"
        # the image for the 2nd container was left blank in the notes;
        # any long-running image works here, e.g.:
        image: "busybox"
        command: ["sh", "-c", "sleep 3600"]




Let's say we have a web server:
- it generates and keeps logs
   /data/tmp/IP/page
   Say the time format is: Jan 23:10:15

There is a monitoring program, say Splunk:
- it keeps checking the logs on the web server,
- if there is any log, it copies the log to a central server.
- The log collection server may save the log in a different format,
  say the time is 11:10:15.
- The agent program reformats or filters the data.

The 2nd container is helping the 1st container.

This design pattern is called adapter.



Ambassador design
We have two containers.

From use case to use case, we have different design patterns.

We have containers 1 and 2.
- We want container 1 to go to container 2, get some data, and process it within.
- This design is called sidecar.

We have two containers.
- Container 1 collects the data and does some filtering.
- Container 2 goes to container 1, extracts the data, and filters it.
- The collected data is sent to centralized storage (load).
- This is an ETL type of design; it is called adapter.

We have two containers.
- One collects data from the other but forwards it to a central location.
- The pod acts like a proxy; this is called the ambassador design.
- It is also called the proxy design.

So, we discussed three types of design patterns.


We have two containers.

container1
- we have a volume
- we have PVC storage
- we keep logs on the volume

container2
- container 2 does not have to go to c1;
- rather than going to container 1, container 2 will share the same storage and read the data from the PVC.
- This is going to be a shared volume: one writes the data, the other reads the data.

POD
- C1
- C2

c1 write --> PVC  <-- read c2
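The demo below uses emptyDir for simplicity; to back the shared volume with an actual PVC, as in the diagram, the volume entry would reference a claim instead (the claim name here is an assumption, and the PVC must already exist):

```yaml
spec:
  volumes:
  - name: logs
    persistentVolumeClaim:
      claimName: shared-pvc   # assumption: a pre-created PVC
```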




apiVersion: v1
kind: Pod
metadata:
  name: adapter-pod
  labels:
    app: adapter-app
spec:
  volumes:
  - name: logs
    emptyDir: {}
  containers:
  - name: app-container
    image: alpine
    command: ["/bin/sh"]
    args: ["-c", "while true; do date >> /var/log/app.log; sleep 2; done"]
    volumeMounts:
    - name: logs
      mountPath: /var/log
  # the rest of the file was cut off in the notes; a common completion is a
  # 2nd (adapter) container that reformats app.log into out.log:
  - name: log-adapter
    image: alpine
    command: ["/bin/sh"]
    args: ["-c", "tail -f /var/log/app.log | sed -e 's/^/processed: /' > /var/log/out.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log


=====================


apiVersion: v1
kind: Pod
metadata:
  name: mc1
  labels:
    app: adapter-app
spec:
  volumes:
  - name: html
    emptyDir: {}
  containers:
  - name: 1st
    image: nginx
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  - name: 2nd
    image: debian
    volumeMounts:
    - name: html
      mountPath: /html
    # the 2nd container was left incomplete in the notes; this command keeps
    # writing the date into the shared page served by the 1st container
    command: ["/bin/sh", "-c"]
    args: ["while true; do date >> /html/index.html; sleep 1; done"]
complete the yaml file ..



> kc apply -f pod-storage.yaml
> kc get pod
> kc describe pod <pod_name>

We see two containers from two images.
If you look at the mounts, both have shared /html.

This is a shared volume.

This is also an example of sidecar.

The design is sidecar, but how they communicate is a shared volume for getting data here and there.

Log in to the POD:
> kc exec -it mc1 -- bash

Since we have two containers running,
by default it goes to the first container.

To log in to a specific container, use -c. For the 1st container:
> kc exec -it mc1 -c 1st -- bash

Log in to the 2nd container:
> kc exec -it mc1 -c 2nd -- bash


Shared volume

c1 ------------------ c2
We used shared storage.

We can also use shared memory:
one program keeps storing data in RAM, and another container goes there and reads the data.

IPC -> inter-process communication

- Two containers want to communicate.
- They can use one part of memory: shared memory (IPC).

Use case:
- One program (c1) keeps generating data and another reads the data.
- So, we can have a message queue (MQ) type of environment.

The data-creating program is called the producer, and
the reading program is called the consumer.

Shared memory can be implemented using IPC.
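As a plain-Python illustration (outside Kubernetes) of the producer/consumer idea over shared memory, here is a minimal sketch; the segment size and message are arbitrary:

```python
from multiprocessing import Process, shared_memory

def producer(name: str) -> None:
    # Attach to the existing shared segment by name and write a message.
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[:5] = b"hello"
    shm.close()

def consume() -> bytes:
    # Create a small shared segment; the producer process attaches to it by name.
    shm = shared_memory.SharedMemory(create=True, size=16)
    p = Process(target=producer, args=(shm.name,))
    p.start()
    p.join()
    data = bytes(shm.buf[:5])  # consumer side: read what the producer wrote
    shm.close()
    shm.unlink()
    return data

if __name__ == "__main__":
    print(consume())
```

The two processes stand in for the two containers of the pod: they share no variables, only the named memory segment.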

This example is also called a sidecar design.

Demo



$ cat multi-ipc.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mc2
spec:
  containers:
  - name: 1st
    image: allingeek/ch6_ipc
    command: ["./ipc", "-producer"]
  - name: 2nd
    image: allingeek/ch6_ipc
    command: ["./ipc", "-consumer"]
  restartPolicy: Never


- you can add more functionality

> kc apply -f multi-ipc.yaml

> kc get pods
You see 2/2.

- They created random data.

The messages are on the console, which means the data is in memory.


Lifecycle:
RestartPolicy
- Always (the default)
- Never
- OnFailure

Say you want to download something, and once the job is done, do not restart it again.

With OnFailure, when the program exits with an error, the pod will restart.

> kc describe pod mc2

> kc logs mc2 -c 1st

you will see streams of data.

> kc logs mc2 -c 2nd
you will also see consumed data...

----------------------

c1 wants to communicate with c2.
What are the methods?
- hard disk
- memory
- network

K8s assigns an IP to the POD, not to the containers.

If two programs want to communicate over the network and they are local, we use the local IP (localhost) and a port,
so they can communicate with the help of the network.


Webapp -> app server (Tomcat/JBoss)
Let's say this program is running on port 8080.

We will have a web server (port 80) which will directly communicate with the webapp.
- The client connects to the web server.

We will put the webapp and web server within one pod,
so we call them c1 (webapp) and c2 (webserver).

So: client connects to --> webserver --> webapp

The client connects to c2, and c2 gets data from c1.
They share the same network namespace.

$ cat multi-network.yaml
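The file contents weren't captured in the notes; a plausible sketch matching the description (image names are assumptions; the pod name mc3 matches the expose command used later):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mc3
spec:
  containers:
  - name: webapp            # c1: app server listening on 8080
    image: tomcat           # assumption
    ports:
    - containerPort: 8080
  - name: webserver         # c2: reaches c1 over localhost:8080
    image: nginx            # assumption
    ports:
    - containerPort: 80
```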


Pay attention to the port numbers.


> kc apply -f multi-network.yaml
> kc get pods

We see two containers.
Let's expose the POD, not the container.

The pod has two ports; we will expose port 80.

> kc expose pod mc3 --type=NodePort --port=80
NodePort connects with the outside world.

So, the pod is launched and the service is exposed.

> kc get pods
> kc get svc

Get the IP of the master system:
192.168.99.100:31468
The data is coming from your container.

So, we have 3 ways containers can communicate with each other:

- volume
- memory
- network

And the use cases / design patterns:
- sidecar
- adapter
- ambassador

$ cat multi-sidecar.yaml

> kc create -f multi-sidecar.yaml
> kc get pods
Go inside the pod, or expose the pod;
you can see what data they have.

$ cat multi-adapter.yaml
> kc apply -f multi-adapter.yaml
> kc get pods
> kc exec -it adapter-pod -c app-container -- sh

$ cat /var/log/app.log

> kc exec -it adapter-pod -c log-adapter -- sh
$ cat /var/log/out.log


=====================

initContainer
----------------

We have a pod with two containers, c1 and c2.

c1 is creating data (producer) and c2 is using it (consumer).

Which do you start first?
- Do we start the producer first or the consumer?
- Depending on your situation, we have to decide.

- First run the producer container, and once all its programs are running, then start the consumer program.
- Or the consumer container runs first, then starts c1 and gets the data.
- Starting the 1st while the 2nd keeps waiting is not good either.

initContainer
- Start the 1st program; it runs everything and completes successfully.
- c2 keeps waiting until c1 is fully available.

You have to define the initContainers: keyword in the yaml file.

initContainers:
- name: 2nd
  image: allingeek/ch6_ipc
  command: [./ipc ....]

....


Let's take a look at a WordPress example.

We launch WordPress in one pod and MySQL in another POD.

We write a yaml file to create these two pods.
[helm]

One pod may take 1 second to launch and another may take 15 seconds.

What we want:
- launch 2 pods:
p1 (wp) and p2 (mysql)

Start p2 first and then start p1.
To solve this, we can use an initContainer.

The wp program does not start until mysql is ready.

An initContainer runs inside the pod, before the pod's app containers start.


So, we can launch a pod for the frontend (wp)
and another POD for the database (mysql).

We have a load balancer (ClusterIP load balancer).

Pod1
- c1
  - wordpress
- c2
  - monitoring (init)

Pod2
- c1
  - mysql

wp -> LB -> mysql

c2 in pod1 keeps checking whether the mysql container is ready.
Once it finds it ready, c2 in pod1 is automatically terminated.
Upon successful termination, the wordpress container starts.

$ cat initContainer.yaml
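The file contents weren't captured in the notes; the standard example from the Kubernetes docs matches what's described here (pod myapp-pod with two init containers, each waiting on a service):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done']
  - name: init-mydb
    image: busybox:1.28
    command: ['sh', '-c', 'until nslookup mydb; do echo waiting for mydb; sleep 2; done']
```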

> kc apply -f initContainer.yaml
> kc get pods

You will see 0/1 Init:0/2 =>
This means two of the pod's containers are init containers.
Neither of them is completed; they keep checking the services.
$ kc get svc
Not running yet.

> kc describe pod myapp-pod

We see the 1st one is running:
- it keeps on checking whether the service is running or not.

The 3rd container is also waiting...

Once all the init containers complete, the remaining containers run in parallel.

For that, we have to create the services.
$ cat init-service.yaml
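The notes don't show this file; following the standard Kubernetes docs init-container example, it would define the two services the init containers wait on (service names and ports are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
---
apiVersion: v1
kind: Service
metadata:
  name: mydb
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9377
```

Once these services exist, the DNS lookups succeed and the init containers complete.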

> kc apply -f init-service.yaml
> kc get svc
> kc get pods
> kc describe pod <pod_name>


So far, we discussed:
- multi-container pods
- communication
- shared volume/memory/network
- design patterns: sidecar, ambassador, adapter
- init containers

=======================================


