Monday, March 15, 2021

Kubernetes - ReplicationController, ReplicaSets

 k8s

Pod-ReplicationController-Replicaset - 2/27/2021

POD
You can use an editor such as PyCharm from jetbrains.com to write the YAML files.
Create a POD using yaml file

1. Log in to your master node and set a short alias for kubectl
# alias kc=kubectl

# A Kubernetes definition file has four root-level properties:
# apiVersion, kind, metadata, spec

# cat pod-def.yaml

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
    - name: nginx-container
      image: nginx

Create a pod using the yaml file
# kc get pods
# kc create -f pod-def.yaml

The pod is created
# kc get pods
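
For reference, the second 'kc get pods' should show something like this (illustrative; the AGE will differ on your system):

NAME        READY   STATUS    RESTARTS   AGE
myapp-pod   1/1     Running   0          20s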


==============================

Kubernetes controllers
- Controllers are the brain behind Kubernetes.
- They monitor Kubernetes objects and respond accordingly.

Replication Controller
- When a pod goes bad, we want a new pod to be started in its place.
- An RC allows you to run multiple instances of a pod.
- Even if you have a single pod, you can still use an RC. If that pod fails, the RC will bring up a new one.
- It helps with load balancing and scaling.
- You create the pods, and if demand increases, you can scale the RC to add new pods.

There are two similar terms
- Replication Controller (RC)
- ReplicaSet (RS)

RC is the older technology and RS is the newer one. They serve the same purpose, but the ReplicaSet is the recommended replacement and comes with new features.

Let's create the yaml file

# cat rc-def.yaml
apiVersion: v1
kind: ReplicationController
metadata:       # metadata for the RC itself
  name: myapp-rc
  labels:
    app: myapp
    type: front-end

spec:           # important part: it defines what is inside the object we are creating, i.e. what the RC creates
  template:     # the POD template the RC reuses whenever it needs to create a pod
    metadata:   # this metadata is for the POD
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:       # POD spec section
      containers:
        - name: nginx-controller
          image: nginx

  replicas: 3

We added a new property called replicas: 3


> kc create -f rc-def.yaml
> kc get replicationcontroller
> kc get pods    # see the pods created by the RC
The pod names start with the RC name.
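
Illustrative output (the random suffixes in the pod names will differ):

> kc get pods
NAME             READY   STATUS    RESTARTS   AGE
myapp-rc-4lk5d   1/1     Running   0          30s
myapp-rc-9sx2p   1/1     Running   0          30s
myapp-rc-kq7mn   1/1     Running   0          30s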


Now, let's look at the ReplicaSet.

The apiVersion is different here.

# cat repset-def.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-rs
  labels:
    app: myapp
    type: front-end
spec:
  template:
    metadata:      # this metadata is for the POD
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:          # POD spec section
      containers:
        - name: nginx-controller
          image: nginx

  replicas: 3
  selector:        # a ReplicaSet requires a selector. It identifies which PODs fall under the ReplicaSet.
                   # A ReplicaSet can also manage PODs that were NOT created as part of the ReplicaSet creation:
                   # PODs created before the ReplicaSet that match the labels specified under the selector
                   # will also be managed by it.
                   # The major difference between RC and RS is the selector: it is optional in the RC
                   # definition file, but in the RS it must be defined, under matchLabels.
    matchLabels:   # matchLabels matches the labels specified under it to the labels on the POD.
      type: front-end

> kc create -f repset-def.yaml
> kc get replicaset
> kc get pods

Labels and selectors
- Let's say you have already deployed three pods.
- We would like to create an RS or RC to manage these pods, i.e. to ensure they are available all the time.
- In this situation, you can use a ReplicaSet to manage these pods, but an RC cannot do it.
- If the pods are not created yet, the RS will create them.
- The role of the RS is to monitor the pods and, in case any of them fails, create new ones.
- The RS is in fact a process that monitors the pods.

How does the RS know which pods to monitor?
- This is where labelling comes in handy.
- We can provide labels as a filter for the ReplicaSet.
- Under the selector section, you specify the matchLabels filter and provide the same label that you used while creating the POD (see the sketch below).
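
A minimal sketch of how the two sides line up (the label key/value here is just an example):

# In the POD definition (or POD template)
metadata:
  labels:
    type: front-end

# In the ReplicaSet definition
spec:
  selector:
    matchLabels:
      type: front-end    # must be identical to the label on the POD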

Under the RS spec, we have three sections:
- template
- replicas
- selector

If PODs with matching labels already exist, the RS will not create new ones on top of them, since the desired count is already met.
But in case one of those PODs fails and has to be re-created, you still need to include the template definition section so the RS knows how to build the replacement POD.

To scale up, change replicas to 6 in the definition file and run
> kc replace -f replicaset-def.yaml
Or, without editing the file:
> kc scale --replicas=6 -f replicaset-def.yaml
or
> kc scale --replicas=6 replicaset myapp-replicaset    # (ReplicaSet name)

# Note: just editing the definition file does not automatically apply the change; you still have to run the command.

Command summary
> kc create -f replicaset-def.yaml
> kc get replicaset                # list ReplicaSets
> kc delete replicaset myapp-replicaset    # deletes the ReplicaSet and its underlying PODs
> kc replace -f replicaset-def.yaml    # update the ReplicaSet from the file
> kc scale --replicas=6 -f replicaset-def.yaml    # increase the replicas without modifying the file

Note that using the file as input to the scale command does not update the number of replicas stored in the file; the live ReplicaSet is scaled, but the file keeps the old value.
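
For example, assuming a ReplicaSet named myapp-replicaset as in the commands above (illustrative output):

> kc scale --replicas=6 replicaset myapp-replicaset
replicaset.apps/myapp-replicaset scaled
> kc get replicaset
NAME               DESIRED   CURRENT   READY   AGE
myapp-replicaset   6         6         6       5m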

Lab
----------
1. How many PODs exist on the system in the current(default) namespace?

controlplane $ alias kc=kubectl
controlplane $ kc get po
No resources found in default namespace.
controlplane $

Hints: Run the command 'kubectl get pods' and count the number of pods.

2. How many ReplicaSets exist on the system in the current(default) namespace?

controlplane $ kc get rs
NAME              DESIRED   CURRENT   READY   AGE
new-replica-set   4         4         0       20s
controlplane $

hint: Run the command 'kubectl get replicaset'.

3. How many PODs are DESIRED in the new-replica-set?

The output under the DESIRED column is 4.

hints: Run the command 'kubectl get replicaset' and look at the count under the 'Desired' column

4. What is the image used to create the pods in the new-replica-set?

controlplane $ kc describe rs new-replica-set
Name:         new-replica-set
Namespace:    default
Selector:     name=busybox-pod
Labels:       <none>
Annotations:  <none>
Replicas:     4 current / 4 desired
Pods Status:  0 Running / 4 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  name=busybox-pod
  Containers:
   busybox-container:
    Image:      busybox777
    Port:       <none>
    Host Port:  <none>
    Command:
      sh
      -c
      echo Hello Kubernetes! && sleep 3600
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age    From                   Message
  ----    ------            ----   ----                   -------
  Normal  SuccessfulCreate  2m57s  replicaset-controller  Created pod: new-replica-set-wd8xj
  Normal  SuccessfulCreate  2m57s  replicaset-controller  Created pod: new-replica-set-7hlrp
  Normal  SuccessfulCreate  2m57s  replicaset-controller  Created pod: new-replica-set-5hppz
  Normal  SuccessfulCreate  2m57s  replicaset-controller  Created pod: new-replica-set-9lq6n
controlplane $

The output shows image= busybox777

hints: Run the command 'kubectl describe replicaset' and look under the containers section.
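
An alternative that is not part of the lab, but handy: pull just the image out of the ReplicaSet with a jsonpath query.

controlplane $ kc get rs new-replica-set -o jsonpath='{.spec.template.spec.containers[0].image}'
busybox777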

5. How many PODs are READY in the new-replica-set?
-> From the describe output above, Pods Status shows 0 Running.
or
controlplane $ kc get rs
NAME              DESIRED   CURRENT   READY   AGE
new-replica-set   4         4         0       6m7s

hints: Run the command 'kubectl get replicaset' and look at the count under the 'Ready' column


6. Why do you think the PODs are not ready?

The image 'busybox777' does not exist, so the image pull fails and the PODs never become ready (ErrImagePull / ImagePullBackOff).

hints: Run the command 'kubectl describe pods' and look under the events section.
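
The events in 'kubectl describe pods' would look roughly like this (illustrative, columns trimmed; exact wording depends on the container runtime):

  Warning  Failed   ...  kubelet  Failed to pull image "busybox777": not found
  Warning  Failed   ...  kubelet  Error: ErrImagePull
  Normal   BackOff  ...  kubelet  Back-off pulling image "busybox777"
  Warning  Failed   ...  kubelet  Error: ImagePullBackOff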

7. Delete any one of the 4 PODs
controlplane $ kc delete po new-replica-set-5hppz
pod "new-replica-set-5hppz" deleted
controlplane $ kc get po
NAME                    READY   STATUS             RESTARTS   AGE
new-replica-set-6hjj9   0/1     ErrImagePull       0          4s
new-replica-set-7hlrp   0/1     ImagePullBackOff   0          9m31s
new-replica-set-9lq6n   0/1     ImagePullBackOff   0          9m31s
new-replica-set-wd8xj   0/1     ImagePullBackOff   0          9m31s
controlplane $ kc get rs
NAME              DESIRED   CURRENT   READY   AGE
new-replica-set   4         4         0       9m40s
controlplane $ kc describe rs

hints: Run the command 'kubectl delete pod '

8. How many PODs exist now?
Still 4; the ReplicaSet immediately created a replacement POD (new-replica-set-6hjj9, age 4s in the output above).

Hints: How many PODs exist now?


9. Why are there still 4 PODs, even after you deleted one?
- The ReplicaSet ensures that the desired number of PODs is always running.

10. Create a ReplicaSet using the 'replicaset-definition-1.yaml' file located at /root/

There is an issue with the file, so fix it.

controlplane $ cat replicaset-definition-1.yaml
apiVersion: v1
kind: ReplicaSet
metadata:
  name: replicaset-1
spec:
  replicas: 2
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: nginx
controlplane $

controlplane $ kc create -f replicaset-definition-1.yaml
error: unable to recognize "replicaset-definition-1.yaml": no matches for kind "ReplicaSet" in version "v1"
controlplane $
controlplane $ cat replicaset-definition-1.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset-1
spec:
  replicas: 2
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: nginx
controlplane $ kc create -f replicaset-definition-1.yaml
replicaset.apps/replicaset-1 created

hints: The value for 'apiVersion' is incorrect. Find the correct apiVersion for ReplicaSet. Run: kubectl explain replicaset | grep VERSION
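
For reference, that explain command should print something like (illustrative):

controlplane $ kubectl explain replicaset | grep VERSION
VERSION:  apps/v1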


11. Fix the issue in the replicaset-definition-2.yaml file and create a ReplicaSet using it.
This file is located at /root/

Name: replicaset-2

run the command to find the error
controlplane $ kc create -f replicaset-definition-2.yaml
The ReplicaSet "replicaset-2" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"tier":"nginx"}: `selector` does not match template `labels`
controlplane $

controlplane $ cat replicaset-definition-2.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset-2
spec:
  replicas: 2
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
controlplane $


Hint: The values for labels on lines 9 and 13 should match.

controlplane $ vi replicaset-definition-2.yaml
controlplane $ cat replicaset-definition-2.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset-2
spec:
  replicas: 2
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: nginx
controlplane $
controlplane $ kc create -f replicaset-definition-2.yaml
replicaset.apps/replicaset-2 created
controlplane $ kc get rs
NAME              DESIRED   CURRENT   READY   AGE
new-replica-set   4         4         0       23m
replicaset-1      2         2         2       7m38s
replicaset-2      2         2         2       6s
controlplane $

12. Delete the two newly created ReplicaSets - replicaset-1 and replicaset-2

    Delete: replicaset-2
    Delete: replicaset-1

Solution
controlplane $ kc delete rs  replicaset-1
replicaset.apps "replicaset-1" deleted
controlplane $ kc delete rs  replicaset-2
replicaset.apps "replicaset-2" deleted
controlplane $ kc get rs
NAME              DESIRED   CURRENT   READY   AGE
new-replica-set   4         4         0       24m
controlplane $

Hint: Run the command 'kubectl delete replicaset '

13. Fix the original replica set 'new-replica-set' to use the correct 'busybox' image

Either delete and re-create the ReplicaSet, or update the existing ReplicaSet and then delete all PODs, so new ones with the correct image will be created.

Replicas: 4

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  creationTimestamp: "2021-03-16T00:30:18Z"
  generation: 1
  name: new-replica-set
  namespace: default
  resourceVersion: "2711"
  selfLink: /apis/apps/v1/namespaces/default/replicasets/new-replica-set
  uid: aea7ec6f-58d9-432e-81d9-4455b07248a5
spec:
  replicas: 4
  selector:
    matchLabels:
      name: busybox-pod
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: busybox-pod
    spec:
      containers:
      - command:
        - sh
        - -c
        - echo Hello Kubernetes! && sleep 3600
        image: busybox777
        imagePullPolicy: Always
        name: busybox-container
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler

Change the image from busybox777 to busybox and save the file.

controlplane $ kc edit rs new-replica-set
replicaset.apps/new-replica-set edited
controlplane $ kc get po
NAME                    READY   STATUS             RESTARTS   AGE
new-replica-set-6hjj9   0/1     ImagePullBackOff   0          21m
new-replica-set-7hlrp   0/1     ImagePullBackOff   0          31m
new-replica-set-9lq6n   0/1     ImagePullBackOff   0          31m
new-replica-set-wd8xj   0/1     ImagePullBackOff   0          31m
controlplane $ for i in 6hjj9 7hlrp 9lq6n wd8xj                    
> do
> kc delete po new-replica-set-$i
> done
pod "new-replica-set-6hjj9" deleted
pod "new-replica-set-7hlrp" deleted
pod "new-replica-set-9lq6n" deleted
pod "new-replica-set-wd8xj" deleted
controlplane $ kc get po
NAME                    READY   STATUS    RESTARTS   AGE
new-replica-set-jk7rh   1/1     Running   0          24s
new-replica-set-lt7jj   1/1     Running   0          6s
new-replica-set-srpn2   1/1     Running   0          12s
new-replica-set-wr97b   1/1     Running   0          8s
controlplane $


hints: Run the command 'kubectl edit replicaset new-replica-set', modify the image name and then save the file.
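
An alternative to the edit-and-delete loop above (not used in this transcript, but standard kubectl): update the image in place with 'kubectl set image', then delete the old pods by their label so the ReplicaSet recreates them with the corrected image.

controlplane $ kc set image rs/new-replica-set busybox-container=busybox
controlplane $ kc delete po -l name=busybox-pod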

14. Scale the ReplicaSet to 5 PODs

Use 'kubectl scale' command or edit the replicaset using 'kubectl edit replicaset'
    Replicas: 5

controlplane $ kc get po
NAME                    READY   STATUS    RESTARTS   AGE
new-replica-set-jk7rh   1/1     Running   0          2m3s
new-replica-set-lt7jj   1/1     Running   0          105s
new-replica-set-srpn2   1/1     Running   0          111s
new-replica-set-wr97b   1/1     Running   0          107s
controlplane $ kc get rs
NAME              DESIRED   CURRENT   READY   AGE
new-replica-set   4         4         4       35m


controlplane $ kc edit rs new-replica-set
replicaset.apps/new-replica-set edited
controlplane $ kc get po
NAME                    READY   STATUS              RESTARTS   AGE
new-replica-set-6rzjn   0/1     ContainerCreating   0          3s
new-replica-set-76tpj   1/1     Running             0          3s
new-replica-set-jk7rh   1/1     Running             0          4m3s
new-replica-set-lt7jj   1/1     Running             0          3m45s
new-replica-set-srpn2   1/1     Running             0          3m51s
new-replica-set-wr97b   1/1     Running             0          3m47s
controlplane $

controlplane $ kc edit rs new-replica-set
replicaset.apps/new-replica-set edited
controlplane $ kc get po
NAME                    READY   STATUS        RESTARTS   AGE
new-replica-set-6rzjn   1/1     Terminating   0          107s
new-replica-set-76tpj   1/1     Running       0          107s
new-replica-set-jk7rh   1/1     Running       0          5m47s
new-replica-set-lt7jj   1/1     Running       0          5m29s
new-replica-set-srpn2   1/1     Running       0          5m35s
new-replica-set-wr97b   1/1     Running       0          5m31s

hints: Run the command 'kubectl edit replicaset new-replica-set', modify the replicas and then save the file.
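
Instead of editing, the same result can be achieved with the scale command (illustrative output):

controlplane $ kc scale rs new-replica-set --replicas=5
replicaset.apps/new-replica-set scaled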


15. Now scale the ReplicaSet down to 2 PODs

Use 'kubectl scale' command or edit the replicaset using 'kubectl edit replicaset'


    Replicas: 2

Hint: Run the command 'kubectl edit replicaset new-replica-set', modify the replicas and then save the file.

$ kc scale replicaset --replicas=2  new-replica-set
$ kc get pods
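
Illustrative result after scaling down (which two pods survive will vary):

$ kc get rs
NAME              DESIRED   CURRENT   READY   AGE
new-replica-set   2         2         2       40m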




This example comes from Udemy. Please purchase the course to get full access; it is one of the best courses.
https://www.udemy.com/course/certified-kubernetes-administrator-with-practice-tests/l

 
