Thursday, February 3, 2022

Day 6 - Kubernetes in depth

2/03/2022 - class note
Docker, Docker compose
Recap
Orchestration Frame
Clusters
Swarm
K8s
Master 
Nodes (pool of worker nodes)

Master node
API server
Controller Manager
Scheduler
ETCD
kubeadm
Worker Node
- kubelet
- Kube-proxy
- PODs

How to configure k8s using kubeadm?
- Create 3 VMs per cluster (1 master + 2 worker nodes)
- 1 VM -> client (it can be your PC/Laptop)
How to configure?
- Login to the AWS console
- Select your region
- You need an Elastic IP
- Go to Instances to see if you have existing instances
- Go to Services -> EC2
- Select Ubuntu t2.micro (two instances). If you want to run Jenkins or another heavier program, use t2.medium.
Go to kubernetes.io and search for kubeadm
- look at the installation page
- look at the requirements: you need at least 2 GB RAM and 2 CPUs, so use t2.medium for the master
- add storage and a security group; enable SSH
Your 2 instances are ready. Create the other 2 as follows:
k8s-master - t2.medium
k8s-node01 - t2.micro
k8s-node02 - t2.micro
k8s-client (treat this like a client PC, jump host, bastion host)
Login to each instance: node01, node02, master, and client.
Now, we have to install the required utilities on the master and nodes.
follow this guide,
https://github.com/qfitsolutions/k8s/blob/master/k8s-master
CNI - container network interface.
On the master and both nodes, run the following commands:
apt-get update && apt-get install -y apt-transport-https && \
apt install -y docker.io && \
     systemctl start docker && \
     systemctl enable docker && \
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# note: the heredoc EOF must be alone on its line, so the && chain is split around it
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update && \
apt-get install -y kubelet kubeadm kubectl kubernetes-cni
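A quick sanity check after the install (a minimal sketch; run it on the master and both nodes) confirms all four binaries landed on the PATH:

```shell
# Sanity check after the install step: are all four binaries on PATH?
for bin in docker kubelet kubeadm kubectl; do
  if command -v "$bin" >/dev/null 2>&1; then
    echo "$bin: installed"
  else
    echo "$bin: NOT FOUND"
  fi
done
```

If anything prints NOT FOUND, re-run the corresponding apt step before moving on.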

Enable the systemd cgroup driver, otherwise kubelet will not start. Run this on each machine and restart Docker.
cat <<EOF >/etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
systemctl status docker
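If Docker fails to restart after this change, the usual cause is malformed JSON in daemon.json. A quick validation sketch (writing to /tmp here so it can be tried anywhere; the real file is /etc/docker/daemon.json):

```shell
# Validate the cgroup-driver config before restarting Docker.
# /tmp path is for illustration; the real file is /etc/docker/daemon.json.
cat <<EOF >/tmp/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
python3 -m json.tool /tmp/daemon.json >/dev/null && echo "daemon.json: valid JSON"
```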

# kubeadm
You will see the help output with the available commands.
Look at the init option - it bootstraps the k8s control plane.
join - to join an existing cluster
token - manage tokens

Now, you have to initialize the cluster:
kubeadm init --ignore-preflight-errors all
or
sudo kubeadm init --control-plane-endpoint "PUBLIC_IP:PORT" --ignore-preflight-errors all
Review all the output.
Your control node is initialized successfully.
You can join any number of worker nodes. Copy the join command and execute it on the worker nodes.
Static pods - the kubelet directly manages the control-plane components (kube-apiserver, etcd, kube-controller-manager, kube-scheduler) as static pods; kube-proxy runs separately on each node.
You have to configure kubeconfig by running the commands below:

mkdir -p $HOME/.kube && \
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config && \
sudo chown $(id -u):$(id -g) $HOME/.kube/config
cat .kube/config 
You will see very important info, including the API server endpoint:
server: https://<ip>:6443
If the IP is public, you can reach it from anywhere.
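The server line can be pulled out of the kubeconfig with a one-liner. A sketch against a sample file (the IP is just an example; on the master, point it at $HOME/.kube/config instead):

```shell
# Sketch: extract the API server endpoint from a kubeconfig.
# Sample file for illustration; on the master use $HOME/.kube/config.
cat >/tmp/sample-kubeconfig <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://172.31.26.24:6443
  name: kubernetes
EOF
grep 'server:' /tmp/sample-kubeconfig | awk '{print $2}'
# -> https://172.31.26.24:6443
```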
kubectl is a client command.
# kubectl get node
check status: NotReady
Role: control plane.
---------------------------------
You also have to deploy a pod network.
Go to the kubernetes.io installation page and see add-ons, networking and network policy.
There are lots of options:
- Calico
- Flannel
- Weave Net - we will use it. Click on the link.
  It can be installed on a CNI-enabled k8s cluster. We used that option, so we should be able to use it.
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
- service account created
- authorization, role binding
- daemonset - runs a background agent on every node, similar to how kubelet runs everywhere
now run kubectl command
# kubectl get node 
you will see status: ready
# kubectl get pod
what is a pod?
- the smallest deployable unit in k8s; it wraps one or more containers.
k8s divides cluster resources into multiple namespaces. A namespace is a logical split of resources.
# kubectl get pod --all-namespaces
coredns
etcd-ip
kube-apiserver-ip
kube-controller-manager-ip
kube-proxy-2

Now, let's go to the node systems and run the join command.
#Note: get kubeadm join command from k8s master and execute like below command:
#kubeadm join 172.31.26.24:6443 --token hpnfgz.52pq3e95hrsz68c6 --discovery-token-ca-cert-hash sha256:92f783e806fb2b0bd36c2847d276847e78a14e07f86256cdbb4f3d79b9618df8
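If you lose the join command, run `kubeadm token create --print-join-command` on the master to regenerate it. The --discovery-token-ca-cert-hash value is a SHA-256 over the cluster CA public key; the sketch below shows that derivation using a throwaway self-signed cert in place of /etc/kubernetes/pki/ca.crt:

```shell
# Recover a lost join command on the master with:
#   kubeadm token create --print-join-command
#
# The discovery-token-ca-cert-hash is sha256 over the CA public key.
# A throwaway self-signed cert stands in for /etc/kubernetes/pki/ca.crt here.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key \
  -out /tmp/ca.crt -subj "/CN=kubernetes" -days 1 2>/dev/null
openssl x509 -pubkey -in /tmp/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* /sha256:/'
# -> sha256:<64 hex characters>
```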

Read the output carefully.
You will see a message that the node has joined the cluster.
go to master node and run
# kubectl get node
Now, go to 2nd worker node and execute the join command. once done, go to master node and run
# kubectl get node
you will see one master and 2 nodes.
see under Roles.
# cat .kube/config 
Copy the config file output and paste it under the home dir on the client node.
apt-get update && apt-get install -y apt-transport-https && \
apt install -y docker.io && \
     systemctl start docker && \
     systemctl enable docker && \
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# note: the heredoc EOF must be alone on its line, so the && chain is split around it
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update && \
apt-get install -y kubectl kubernetes-cni
cat <<EOF >/etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
systemctl status docker

# mkdir .kube
$ vi .kube/config

k8s-client -> k8s-master -> k8s-node01 | k8s-node02
PC/Laptop -> k8s-master -> k8s-node01 | k8s-node02
google
"aws kubeconfig update"
aws eks update-kubeconfig --name example (cluster name)

Go to your PC and create .kube/config under your home dir,
then paste the content.
# kc get node
You should get output. If it hangs, go to the config, put the master's public IP on the server line,
and try again:
$ kc get node
invalid certificate
The cert is generated based on the internal IP, so you may get a cert error. If everything is good, you get the node list.

Can we have multiple master nodes in k8s?
- Yes, and you can also split the components across different nodes.
- Create a high-availability cluster with kubeadm.
- High-availability etcd, and so on.
next, how to deploy application on k8s
break time...
We want to deploy a pod.
To start a pod, use the kubectl run command.
kubectl -> kc
kubectl run
kc expose
# kubectl
you will see detail help output
look under basic commands
create -
expose -
run - 
set - 
Lets look at these two commands: expose and run
expose - take a replication controller, service, deployment or pod and expose it as a new k8s service
run - run a particular image on the cluster

# kc run --help
See the usage.
Review the output and go through the examples.
# kubectl run n1 --image=nginx:latest --port=80
pod name = n1
image = nginx
port -> 80
# kc get pods
Status: ContainerCreating
===================================
Q. Can we host a private registry in our on-prem env?
Go to hub.docker.com,
search for "registry", and just use it.
===================================
# kc describe pod n1
How can you access it? Let's say the pod is deployed to node02.

# kubectl expose --help
read the help output

# kubectl expose pod n1  --port=80 --target-port=80 --name=n1service --type=NodePort
--port=80 is the service port
--target-port=80 is the container (nginx) port
n1service is the name we gave the service.
What is service?
A service represents a logical set of pods and acts as a gateway, allowing (client) pods to send requests to the service without needing to keep track of which physical pods actually make up the service. So basically the service will redirect the consumer to an actual working pod.

service types,
ClusterIP -> cluster level
NodePort -> node level
LoadBalancer -> uses an external (public) IP
# kubectl get svc
# kc describe svc n1service
Type: NodePort
selector: run=n1
ClusterIP -> private IP; for internal services such as a database, use the local IP.
NodePort -> for a web site you need a public IP, so NodePort is used. A node port is like a bridge port; it enables port forwarding from the node to the pod.
LB -> you can use a LoadBalancer for a public IP.

google for "static ip in service gke"
Get the node IP and use it in the browser; you will be able to access the nginx default page.
http://ip:32703
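The random port (32703 here) comes from the NodePort range 30000-32767. If you want a predictable port, it can be pinned in the service spec; a sketch (the value 30080 is just an example, and the selector assumes the run=n1 label that kubectl run sets):

```yaml
# Sketch: pin the node port instead of letting k8s pick a random one.
apiVersion: v1
kind: Service
metadata:
  name: n1service
spec:
  type: NodePort
  selector:
    run: n1           # kubectl run labels the pod with run=n1
  ports:
    - protocol: TCP
      port: 80        # service port
      targetPort: 80  # container (nginx) port
      nodePort: 30080 # must be inside 30000-32767
```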

Kubernetes dashboard,
github.com/kubernetes/dashboard
kc describe svc n1service

Rather than running commands, we can convert the command-line options to a YAML file.
$ mkdir feb22
open it on vs code
FEB22
 - k8s-example
    pod.yml
kc describe svc n1service
kubectl expose pod n1  --port=80 --target-port=80 --name=n1service --type=NodePort

Let's convert these two commands into YAML.
what we need is
apiVersion: v1
kind: Pod # what we want, pod
metadata:
  name: n1
  labels:
    app: n1
    env: dev
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80

---
apiVersion: v1
kind: Service # what we want, service
metadata:
  name: n1service
  labels:
    app: n1
    env: dev
  namespace: dev
spec:
  type: NodePort
  selector:
    app: n1
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80




@k8s.io site, search for service.

# kc get ns
you see default namespace.
You want your own.
# kc create ns dev
dev namespace is created.
# kc delete ns dev
you can create namespace using yaml file.
google for namespace and how to create it.

---
apiVersion: v1
kind: Namespace
metadata:
  name: dev
 
The "-" character means a list item in YAML.
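A quick YAML reference sketch, using the same fields as the file we are building: "key: value" is a mapping entry, "-" starts a list item, and indentation defines nesting:

```yaml
# "-" starts a list item; "key: value" is a mapping entry.
metadata:            # mapping
  labels:            # nested mapping
    app: n1
spec:
  ports:             # mapping key whose value is a list
    - protocol: TCP  # one list item, itself a mapping
      port: 80
```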
$ vi nginx.yaml
#kubectl run n1 --image=nginx:latest --port=80
#kubectl expose pod n1 --port=80 --target-port=80 --name=n1service --type=NodePort
apiVersion: v1
kind: Namespace
metadata:
 name: dev
---
apiVersion: v1
kind: Pod
metadata:
 name: n1
 labels:
   app: n1
   env: dev
 namespace: dev
spec:
 containers:
 - name: nginx
   image: nginx:latest
   ports:
   - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
 name: n1service
 labels:
   app: n1
   env: dev
 namespace: dev
spec:
 type: NodePort
 selector:
   app: n1
 ports:
   - protocol: TCP
     port: 80
     targetPort: 80

# kc apply -f nginx.yaml
# kc get pod -n dev
Next week: Helm.
Read YAML files, write YAML files.
https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml
k8s
terraform
ansible


---------------------------------

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ais-ifm
  labels:
    app: nice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ais
      tier: web
  template:
    metadata:
      labels:
        app: ais
        tier: web
    spec:
      containers:
      - name: tomcat
        image: tomcat:latest
        ports:
        - containerPort: 8080
        resources:
          limits:  
            cpu: 1000m
            memory: 600Mi             
          requests: 
            cpu: 500m
            memory: 300Mi
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        resources:
          limits:  
            cpu: 400m
            memory: 200Mi             
          requests: 
            cpu: 200m
            memory: 100Mi
            
            
            
---


apiVersion: v1
kind: Service
metadata:
  name: ais-service
  labels:
    app: ais
spec:
  ports:
   - name: tomcat
     port: 8080
     targetPort: 8080
   - name: nginx
     port: 80
     targetPort: 80
  type: NodePort
  selector:
    app: ais
    tier: web
    
