Wednesday, February 2, 2022

Day 5 - Docker, Swarm, k8s intro

 2/02/2022 - class notes

Topics: Docker service, network - overlay
Recap from yesterday
If you are running multiple containers, how do you connect them?
https://github.com/fgitsolutions/docker
$ cat docker-compose.yml
version: '3'
services:
  web:
    build: .
    ports:
     - "5000:5000"
  redis:
    image: "redis:alpine"
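A compose file like the one above is typically brought up as follows (a sketch; assumes docker-compose is installed and you run it from the directory containing docker-compose.yml):

```shell
# Build the web image and start both containers in the background
docker-compose up -d
# List the running services
docker-compose ps
# Stop and remove the containers and their network
docker-compose down
```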

- Discussed the Dockerfile
- Docker-compose
  - multiple containers
- Single host/standalone host
- If you run too many containers on a single host, you will run out of memory
- Distribute the containers among nodes
We need a cluster managed by:
 - Manager
 - Worker nodes
Who is going to provide this kind of facility?
 - Docker SWARM
 - K8s
 - Mesos
Swarm architecture
docker swarm init
docker swarm join
Google and see the Swarm internal architecture.
- Internally, the Swarm manager has multiple components:
 - API -> accepts commands and creates service objects
 - Orchestrator
 - Allocator
 - Dispatcher
 - Scheduler
Worker Node
 - worker 
 - executor
LAB
Login to your VM
# ssh swarm-manager
$ docker node ls
$ docker swarm init  # initialize
 initialized: the current node is now a manager.
You are the manager.
@worker node
$ ssh swarm-worker
$ sudo -i
$ docker swarm join --token <token> <manager-ip>:2377
Go back to master and run
$ docker node ls
The manager is the leader and the other is a worker node.
You can execute your program
docker service => cluster equivalent of docker run
docker stack => cluster equivalent of docker-compose
You can execute a workload as a job or as a service.
# docker service
read the output
$ docker service create --help
# docker service create --name=n1 -p 80:80 nginx
name -> n1
-p -> port
image-> nginx
List it now,
# docker service ls
# docker service ps n1
scale
# docker service scale n1=3
It created 3 replicas.
# docker service ps n1
Requests to the service are load balanced across the replicas.
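Replicas can also be declared in a stack file instead of running docker service scale by hand; a minimal sketch (the image is a placeholder, and the deploy: section is only honored by docker stack deploy, not by plain docker-compose):

```yaml
version: '3'
services:
  web:
    image: nginx        # assumption: any prebuilt image works here
    ports:
     - "80:80"
    deploy:
      replicas: 3       # swarm keeps 3 tasks of this service running
```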
# docker inspect <ID>
it is going to use overlay network
# docker network ls
see the ID and the driver type.
It creates tunnels between all the nodes.
VPC or VNet peering?
- Enables communication from one network to another; establishes communication between private networks.
# docker inspect ingress
Look under Peers; you will see two IPs.
Peers means the two can communicate; it is like creating a tunnel between these two nodes.
The manager and worker nodes can transmit traffic over it.
google "vpc peering"
google for "overlay network"
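Besides the built-in ingress network, you can create your own overlay network and attach services to it; a sketch, assuming the commands run on a swarm manager:

```shell
# Create a user-defined overlay network
docker network create -d overlay mynet
# Attach a new service to it (name n2 is a placeholder)
docker service create --name n2 --network mynet -p 8080:80 nginx
# Inspect to see the driver type and the peers
docker network inspect mynet
```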

# docker swarm --help
# docker swarm join-token
# docker swarm join-token manager
# docker swarm join-token worker
# docker service ls
# docker service ps n1
How to run multiple containers
# cat docker/docker-compose.yml # refer the example from yesterday


$ cat swarm-compose.yml
version: '3'
services:
  web:
    build: .
    ports:
     - "5000:5000"
  redis:
    image: "redis:alpine"

# git pull  # inside the docker repo cloned earlier
# docker service --help
Review the help output.
# docker service rm n1
# docker stack services mystack
# docker stack
# docker stack rm mystack
If you want to deploy:
Our instance type is t2.large.
# cat ci.yaml
It will create multiple containers.

# docker stack deploy -c stack-ci.yml cistack
A stack is a group of multiple services.
List all services from the cistack stack:
# docker stack services cistack
If you keep creating services, we may run out of resources.
# docker logs <id>
# docker stack services cistack
# docker ps -a

# docker stack deploy -c stack-ci.yaml cistack
Remove
# docker stack rm cistack
# docker stack services cistack
everything is gone.
# docker stack services cistack
# docker logs <container_id>
Getting an error for logs since the containers are gone.
# docker stack rm cistack
# docker stack deploy -c stack-dc.yml mystack
# docker stack services <stack> or docker stack ls
# docker stack services mystack


version: '3'
services:
  web:
    image: 'devopsjuly22017/web:latest'
..

# vi stack-dc.yaml
# docker stack deploy -c stack-dc.yml mystack
# docker stack services mystack
Docker by itself is not production ready, by its nature.
k8s is production ready. Swarm can be used in production, but it lacks a lot of the features that k8s offers.

Read about "ECS cluster in AWS" - AWS's implementation of Docker containers (microservices).
go to aws
search for ecs
-> get started

k8s ->
google and read about k8s features.
Will discuss architecture and internal components.
google k8s architecture
----------------------------------------------------
user (cli) -> api -> k8s master -> Node1|node2|node3|..|Noden  --> Image Registry

you are going to use,
k8s-client
k8s-master
k8s-nodes
k8s features
- Open-source system for automating deployment, scaling, and management of containerized applications
- Automated rollouts and rollbacks - you created 3 replicas at version 1.1 and need to update to 1.2. How do you do it? k8s rolls the update out one replica at a time for high availability; if something goes bad, it rolls back.
- Storage orchestration - automatically mounts a storage system of your choice, such as AWS, GCP, NFS, iSCSI, Gluster, Ceph, Cinder, or Flocker
- automatic bin packing
- service discovery and load balancing
- secret and configuration management.
- horizontal scaling [ up and down your app]
- Batch execution - run jobs
- IPv4/IPv6 dual-stack
- Self-healing - restarts containers that fail, at both the node level and the container level.
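Container-level self-healing is configured with probes; a minimal hedged sketch of a liveness probe (the name, image, and path are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo            # hypothetical name
spec:
  containers:
  - name: web
    image: nginx              # assumption: any HTTP server image
    livenessProbe:
      httpGet:
        path: /               # kubelet restarts the container if this check fails
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
```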
Deployment types
- blue green 



Kubernetes components
--------------------
Control plane
kube-apiserver
- The API server is the component of the k8s control plane that exposes the k8s API. The API server is the front end for the k8s control plane; it's like a front desk.
- etcd - key-value backing store for all cluster data (a NoSQL-style database). It maintains the state of your cluster: what is connected, etc.
- kube-scheduler -> when a request comes in, say to create a container, it checks which node is free and which node has the capacity to run it, and places the workload accordingly.
kube-controller-manager (internal component)
  - controllers take decisions; logically, each controller is a separate process
some types of controllers
- Node controller - checks to see whether nodes go down; checks the health of nodes.
- Job controller - watches for Job objects and creates pods to run the jobs to completion (self-healing).
- Endpoints controller
- Service account and token controllers

@aws, create auto scaling group

cloud-controller-manager
- used with cloud service providers
- embeds cloud-specific controller logic
- node controller
- route controller
- service controller
The control plane therefore has 5 components, starting with the API server.

Node components
on node side,
1. kubelet
- an agent runs on each node in the cluster. makes sure that containers are running in a pod.
kubelet does not manage containers which were not created by kubernetes.
2. kube-proxy
- kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the k8s Service concept.
- It maintains network rules on nodes. These network rules allow network communication to your pods from network sessions inside or outside of your cluster.
- It uses the operating system packet filtering layer if there is one and it is available; otherwise, kube-proxy forwards the traffic itself.
3. Container runtime
- container runtime is the software that is responsible for running containers.
- k8s supports containerd, CRI-O, Docker, rkt, and other implementations.

apart from these components, we have addons
- DNS
- WebUI (dash board)
- Container resource monitoring
- Cluster level logging

There are two flavors:
- Open source
- Enterprise :- ~PaaS
k8s setup
google, kubernetes set up

Create a minimum of 2 servers
1. Install kubeadm, kubelet, and kubectl
2. Initialize: run kubeadm init <args> - initializes the master and provides the join command.
we are going to use two types of commands
1. client command - kubectl
2. Admin : kubeadm (master)
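The two command types map onto the setup steps above; a sketch of the bootstrap flow (the token, hash, and IP are placeholders that kubeadm init prints for you):

```shell
# On the master (admin command)
kubeadm init --pod-network-cidr=10.244.0.0/16   # assumption: CIDR depends on the network add-on you pick
# On each worker, paste the join command that kubeadm init printed
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
# From the client (client command), verify the nodes joined
kubectl get nodes
```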

https://github.com/qfitsolutions/k8s
Know a little about the OpenShift service as well.
https://docs.google.com/document/d/1_ZQ8XN1dfcaXkBJHPMIlZqoW4_BePbrCU_LW3yvDTAs/edit

cluster contains
- master
- nodes
Tomorrow's topic
- set up k8s
- PODS
- Write yml
- Dashboard
- Deployment
- Service



================================

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ais-ifm
  labels:
    app: nice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ais
      tier: web
  template:
    metadata:
      labels:
        app: ais
        tier: web
    spec:
      containers:
      - name: tomcat
        image: tomcat:latest
        ports:
        - containerPort: 8080
        resources:
          limits:  
            cpu: 1000m
            memory: 600Mi             
          requests: 
            cpu: 500m
            memory: 300Mi
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        resources:
          limits:  
            cpu: 400m
            memory: 200Mi             
          requests: 
            cpu: 200m
            memory: 100Mi
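The Deployment above would typically be applied and checked like this (a sketch; assumes the manifest is saved as deployment.yaml):

```shell
kubectl apply -f deployment.yaml
kubectl get deployments
kubectl get pods -l app=ais,tier=web   # the selector labels from the manifest
kubectl describe deployment ais-ifm
```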
