Monday, February 1, 2021

Kubernetes - Dynamic storage - storage class - Day 12

minikube start --wait=false
alias kc='kubectl'
A PV (PersistentVolume) is backed by a plugin that connects you to the actual storage (cloud, NFS, etc.).
For minikube:
$ kc get sc
NAME                 PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard (default)   k8s.io/minikube-hostpath   Delete          Immediate           false                  4m9s
Here the provisioner is k8s.io/minikube-hostpath.
standard -> PV -> worker node
provisioner -> the program that creates your storage.
We have provisioners for:
aws - EBS
google cloud - block storage (persistent disk)
openstack - cinder
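For example, a StorageClass for the in-tree AWS EBS provisioner could look like this (a sketch; the ebs-sc name and gp2 type are only illustrative):

cat ebs-sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc                 # illustrative name
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2                    # EBS volume type
reclaimPolicy: Delete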

kc describe sc standard

cat mypvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
#  storageClassName: standard   # by default it is standard
  accessModes:
  -  ReadWriteOnce
  resources:
    requests:
      storage: 2Gi


kc apply -f mypvc.yaml
kc get pvc
kc get pv
kc get pvc
kc describe sc standard
See the ReclaimPolicy -> Delete.
This is the default behavior for dynamically provisioned volumes.
It is good if you use storage from the cloud: the volume is removed when the claim is deleted, so you do not keep paying for unused storage.
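If you want to keep the data when the claim is deleted, you can switch a PV's reclaim policy to Retain (the PV name below is a placeholder):
kc patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'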

cat mypv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv
spec:
  storageClassName: ""
  capacity:
    storage: 1Gi
  accessModes:
  -  ReadWriteOnce
  hostPath:
    path: "/mydata"

kc get pv
kc get pvc


For application needs: RAM/CPU (also called compute units)
- we use the container engine to manage these system resources; storage is handled separately through PVs/PVCs.

We will create a PVC and create a PV from NFS.
In this example, we will configure an NFS server and use that storage for a pod.
1. Configure the NFS server
You can also use the EFS service from AWS or a similar cloud service.
# mkdir /mynfs
Now, share this directory:
# vi /etc/exports
/mynfs 192.168.99.100(rw,no_root_squash)   # use * for all hosts or a specific IP; no_root_squash means do not map root to nobody (keep root access)
Start server
# systemctl start nfs-server
verify the share
# exportfs -v 
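You can also list the exports with showmount (assuming the NFS utilities are installed):
# showmount -e localhost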

Now your NFS share is available.
# ping 192.168.20.120    (nfs server)
Now, the client needs to mount the share to use this storage.
@client system, run the commands below to verify that you can access it.
Create a mount point and mount:
# mkdir /opt/share
# mount 192.168.20.120:/mynfs /opt/share
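To make the mount survive a reboot, add it to /etc/fstab (a standard NFS fstab entry):
# vi /etc/fstab
192.168.20.120:/mynfs  /opt/share  nfs  defaults  0 0
# mount -a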
Note: verify your NFS version if you encounter a problem.
If you encounter any problem, go to your NFS server and review the error messages:
# cat /var/log/messages
You will see messages related to NFS.
Note the errors: ports, permissions, things like that.
# exportfs -v
Review the output options:
secure ...
In our case, the problem was the secure option (it requires client requests from ports below 1024), so add insecure:
# vi /etc/exports
/mynfs 192.168.99.100(rw,no_root_squash,insecure)
# exportfs -r
# exportfs -v
Your NFS server is now running in an insecure way: a bad option for production, but good for a lab.


Now we have storage available.
Go back to Kubernetes.

kc delete pvc --all
kc get pv
> kc get pvc
kc apply -f mypvc.yaml

open your pv file
# cat mypv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv
spec:
  storageClassName: ""
  capacity:
    storage: 1Gi
  accessModes:
  -  ReadWriteOnce
  nfs:
    server: "192.168.20.120"   # the NFS server configured above
    path: "/mynfs"             # the directory exported by the server, not the client mount point
The NFS mount needs the IP address of the server and the exported path.
kc get pv
kc get pvc
You will see it Pending at first; wait a while and the status changes to Bound.
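If it stays Pending, describe the claim to see the events explaining why:
> kc describe pvc mypvc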


cat mypvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  storageClassName: ""   # empty string: bind to the manually created PV instead of dynamic provisioning
  accessModes:
  -  ReadWriteOnce
  resources:
    requests:
      storage: 1Gi       # must fit within the 1Gi PV capacity


kc apply -f deployment.yaml
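The deployment.yaml applied above is not shown in the notes; a minimal sketch that matches the commands that follow (deployment name myd, a web server with document root /var/www/html, the mypvc claim; the httpd image is an assumption) could be:

cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myd
  template:
    metadata:
      labels:
        app: myd
    spec:
      containers:
      - name: web
        image: httpd                   # assumption; the instructor's image serves /var/www/html
        volumeMounts:
        - name: webdata
          mountPath: /var/www/html     # where the pod sees the NFS data
      volumes:
      - name: webdata
        persistentVolumeClaim:
          claimName: mypvc             # the claim created earlier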
> kc get pod
> kc get pv,pvc
> kc exec -it myd-ddddd -- bash
# cd /var/www/html
You will see that the NFS share is mounted here.
# cat >index.html
nfs server data tested
Curl the pod; if you see this content, NFS is implemented:
# curl <pod-ip>
Go to the NFS server and check the shared directory:
# cd /mynfs
# cat index.html

Good; back to k8s.
> kc delete pod --all
Your pod is deleted, but the new pod comes up with the old data, since the data comes from NFS.
Now, what happens if more than one pod is running?
> kc get pods
kc expose deploy myd --port=80 --type=NodePort
> kc get svc
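On minikube you can also ask for the service URL directly (the service name myd comes from the expose command above):
> minikube service myd --url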
Go to another server and run:
> curl 192.168.99.100:32351
What happens when you have more pods?
> kc get deploy
> kc get rs
> kc scale deploy myd --replicas=3
> kc get rs
> kc get pods
There is no error so far.
You will notice that all the pods get the same data.
Now, log in to one pod and change the content:
> kc exec -it myd--- -- bash
# cat >>/var/www/html/index.html
Added new content, to verify the change
Curl all the pods and you will get the same data.
The storage source is the same, so a change made from one pod is visible everywhere.
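A quick way to curl every pod IP (the app=myd label is an assumption carried over from the deployment sketch above):
> for ip in $(kc get pods -l app=myd -o jsonpath='{.items[*].status.podIP}'); do curl -s $ip; echo; done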

> kc get pv
You see the access mode, e.g. RWO.
RO - read-only (ROX = ReadOnlyMany)
RW - read-write
  - RWO  (ReadWriteOnce: mounted read-write by a single node; all pods on that node can use it)
  - RWX  (ReadWriteMany: mounted read-write by many nodes)
Note:
RWO means read-write on one node only; to share a volume read-write across nodes you need RWX, which NFS supports.

cat pv-for-nfs.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv
spec:
  storageClassName: ""
  capacity:
    storage: 1Gi
  accessModes:
  -  ReadWriteOnce
  nfs:
    server: "192.168.20.120"   # assumption: the same NFS server as above
    path: "/mynfs"
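Apply it and check that the PV registers, same as before:
> kc apply -f pv-for-nfs.yaml
> kc get pv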


 
