Kubernetes - Storage - LAB - Day 11 -> 1/28/2021
k8s -> Deployment -> Image (app) -> Launch POD
k8s
- Deployment
- Image
- Launch POD (APP)
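The same flow as a quick one-liner, for context (just a sketch - the lab below builds the deployment from a YAML file instead):
> kc create deployment myd --image=vimal13/webserver-apache-php --replicas=3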
[root@master jan26]# more mydeploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myweb-deploy
spec:
  # type: recreate
  replicas: 3
  selector:
    matchLabels:
      env: prod
    matchExpressions:
    - { key: team, operator: In, values: [ team1, team2 ] }
    - { key: tier, operator: In, values: [ frontend ] }
  template:
    metadata:
      name: "webpod4"
      labels:
        env: prod
        team: team1
        tier: frontend
    spec:
      containers:
      - name: "webc1"
        image: "vimal13/webserver-apache-php"
> kc get deploy
> kc get po
> kc apply -f mydeploy.yaml
> kc get deploy
> kc expose deploy myd --port=80 --type=NodePort
> kc get svc
> kc exec -it myd-.... -- bash
curl http://192.168.56.20:30518
hello world !!!
# cd /var/www/html
# vi index.php
I made a change to this file.
> kc get po
> kc delete pod myd-....
Since the pod was launched via a deployment, it will be re-launched automatically.
But if you log in to the new pod, your data is gone.
# cat /var/www/html/index.php
So, we have to manage the storage on a permanent basis.
kubernetes -> persistent storage
Persistent volume
-----------------
> kc get po
> kc exec -it myd-.... -- bash
# cd /var/www/html
The mount path inside the pod (/var/www/html) has to be backed by permanent storage:
PV - PersistentVolume
-------------------------------------------
$ cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata: # give a name to your resource
  name: mypvc
spec:
  storageClassName: "" # leave empty for static; a real value is only needed for dynamic storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Storage access modes
- ReadWrite (RW)
  - RWO - ReadWriteOnce -> read/write by a single node
  - RWX - ReadWriteMany -> read/write by many nodes (multi-node cluster env)
- ReadOnly (RO)
  - ROX - ReadOnlyMany -> read-only by many nodes
PVC
- Static PV
- Dynamic PV
Storage Class
static PV -> we don't use a storage class (see the sketch below)
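The difference is visible in a single field of the claim (a sketch; "standard" is an assumed StorageClass name - check yours with kc get sc):
$ cat dynamic-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mydynpvc
spec:
  storageClassName: standard # non-empty -> dynamic provisioning; "" -> static PV
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi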
Back to the static path:
> kc apply -f pvc.yaml
> kc get pod
> kc get pvc
> kc describe pvc mypvc
You see the status is Pending..
What happened here?
This is where the job of the Kubernetes admin/storage team comes in...
They go to the main storage (cloud, NAS, or local storage):
- they create a block of, say, 10 GB, with access mode rw
- they create that box and provide it to the claim (bound/binding)
After that, the status changes from Pending to Bound.
Here an admin had to go and create the box/volume manually - this is called a static PV.
k8s current architecture
-> single machine
-> VM (Minikube)
-> Master node
-> Worker node
So, we are going to create a dir /mydir on the worker node and assign it to the POD.
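Create the directory first (a sketch; on a single-VM minikube setup the "worker" is the minikube VM itself, reachable with minikube ssh):
> minikube ssh
$ sudo mkdir -p /mydir
$ exit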
$ cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv
spec:
  storageClassName: ""
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath: # volume source: the /mydir we created on the worker node
    path: /mydir
> kc get pv
> kc get pvc
> notepad pv.yaml
> kc apply -f pv.yaml
> kc get pvc
> kc get pv
You see it is Bound..
Look under the CLAIM column.. (sample output below)
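The output will look roughly like this (values illustrative):
> kc get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM           STORAGECLASS   AGE
mypv   10Gi       RWO            Retain           Bound    default/mypvc                  1m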
Open deployment file
---------------------
# more mydeploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myweb-deploy
spec:
  # type: recreate
  replicas: 3
  selector:
    matchLabels:
      env: prod
    matchExpressions:
    - { key: team, operator: In, values: [ team1, team2 ] }
    - { key: tier, operator: In, values: [ frontend ] }
  template:
    metadata:
      name: "webpod4"
      labels:
        env: prod
        team: team1
        tier: frontend
    spec:
      volumes: # defined at the pod level of the definition file
      - name: mypod-pvc
        persistentVolumeClaim: # specify which PVC the volume comes from
          claimName: mypvc
      # you may have other PVCs as well:
      # - name: mypod-pvc2
      #   persistentVolumeClaim:
      #     claimName: mypvc2
      containers:
      - name: "webc1"
        image: "vimal13/webserver-apache-php"
        volumeMounts:
        - mountPath: /var/www/html # the mountpoint where you want to mount
          name: mypod-pvc # must match the volume name above
      # you can have other containers as well, possibly using a different PVC:
      # - name: "webc2"
      #   image: "httpd"
      #   volumeMounts:
      #   - mountPath: /var/www/html1
      #     name: mypod-pvc2
Cleaned-up version of the file:
# more mydeploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myweb-deploy
spec:
  # type: recreate
  replicas: 3
  selector:
    matchLabels:
      env: prod
  template:
    metadata:
      name: "webpod4"
      labels:
        env: prod
        team: team1
        tier: frontend
    spec: # ----------------------------- spec for volume
      volumes:
      - name: mypod-pvc
        persistentVolumeClaim:
          claimName: mypvc
      containers:
      - name: "webc1"
        image: "vimal13/webserver-apache-php"
        volumeMounts:
        - mountPath: /var/www/html
          name: mypod-pvc
> kc apply -f mydeploy.yaml
> kc describe deploy myweb-deploy
check under Mounts
> kc get svc
> curl 192.168.99.100:30534/index.html
Now you don't see any output.
The reason is that the new (empty) volume replaced the old data under /var/www/html.
From now on, everything you store on the PVC will be preserved,
because the data is actually stored on the host (node) the volume comes from.
> kc get pods
> kc exec -it myd-.... -- bash
# cat > /var/www/html/index.html
New updated file
Now curl again from the client side and you will see the latest data:
> kc get svc
> curl 192.168.99.100:30534/index.html
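You can also confirm where the data actually lives on the node (a sketch, assuming the hostPath /mydir from pv.yaml):
> minikube ssh
$ ls /mydir
index.html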
> kc get pvc
> kc get pv
> kc get pod
> kc delete pvc mypvc
It will not be deleted, because it is still in use by a POD (see the finalizer check below).
To delete the PVC, delete the pod; and since the deployment re-creates pods, first delete the deployment.
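While a pod still uses the claim, Kubernetes blocks the delete with a protection finalizer (quick check; output abridged):
> kc describe pvc mypvc | grep -i finalizers
Finalizers:    [kubernetes.io/pvc-protection]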
> kc delete deployment myweb-deploy # everything will be deleted
deployment.apps "myweb-deploy" deleted
> kc get pvc
The PVC is deleted as well.
> kc get pv
Now the status shows Released; earlier, while it was in use, it was in the Bound state.
Reclaim policy
- Recycle -> the volume is scrubbed and can be re-used
- Retain -> the data is kept; the volume can't be re-used until an admin reclaims it manually
Status
- Released
- Available
- Bound
Claim
- empties once the volume is recycled
How to change the reclaim policy?
Edit the PV on the fly - dynamically:
> kc edit pv mypv
Go all the way down
and change the value under persistentVolumeReclaimPolicy to Recycle.
You will see the policy is now Recycle, the status changes to Available, and the claim is free...
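The same change as a one-liner, if you don't want to open the editor (a sketch):
> kc patch pv mypv -p '{"spec":{"persistentVolumeReclaimPolicy":"Recycle"}}'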
In this process, we had to create the PV manually -
an admin has to keep checking manually whether a PVC request is waiting.
This is where dynamic provisioning comes into play, using a Storage Class (sketch below).
With the Delete reclaim policy, the dynamically provisioned volume is removed for you - no manual cleanup needed.
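A minimal StorageClass sketch (provisioner assumed for a minikube lab; minikube ships a similar default class named "standard"):
$ cat sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mysc
provisioner: k8s.io/minikube-hostpath # assumed provisioner for minikube
reclaimPolicy: Delete # the "delete" policy mentioned above
A PVC that sets storageClassName: mysc then gets its PV created automatically.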
1. Create the PVC
2. Create the PV (it attaches/binds)
3. Edit the pod definition file
4. Create the pod
5. Use the storage
6. Modify the file contents
7. Delete the pod
8. Recreate the pod (use a deployment - it automatically recreates the pod on deletion; recap below)
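The whole lab as a command recap (file names from this post; pod names abridged):
> kc apply -f pvc.yaml          # 1. create the PVC (Pending)
> kc apply -f pv.yaml           # 2. create the PV - it binds to the claim
> kc apply -f mydeploy.yaml     # 3+4. deployment with the volume -> pods launch
> kc exec -it myd-.... -- bash  # 5+6. use the storage, modify files
> kc delete pod myd-....        # 7+8. the deployment recreates the pod, data survives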