Thursday, February 25, 2021

Day 23 - Kubernetes - ConfigMap, CoreDNS

ConfigMap, CoreDNS  2-25-2021
-------------------------------
 Class Notes
DO-180 - OpenShift
------------------


We have a k8s cluster with 1 master and 3 nodes.

Clean up your old env resources:
# kc delete all --all


Let's say we need to set up an Apache web service.
-> We need an OS -> install httpd

Any software - Jenkins, Prometheus, Hadoop, or anything else -
comes with its own configuration file, and that is how we configure the software/service.
Say you have httpd using its default port 80 and you need to change it to something else:
you go to the config file and change it.

1. OS -> Linux
2. Software -> httpd
3. Edit the config file, change the port to 85
4. Log in to every OS and change it.

Rather than that, you create a new pod image and change it there.
Let's say you have port 80 on the first pod,
and there is a request to change it.
Two ways to handle this use case:
1. Log in to every server (pod) and change it
2. Create a centralized location


Let's say we use storage (a volume):
- create centralized storage
- change whatever you need to change here
- you manage all config files in this location
- this storage is a central location to manage configuration files
- the resource that maps this storage into pods is called a ConfigMap



On the pod side,
you mount the config from that storage into the pod.


You can store it on a PVC too, but that is not dynamic.

To change the port from 80 to 8080 on all pods,
you change the config file in the central storage; all pods pick up the change when they restart.

Secrets work the same way as a ConfigMap, but hold sensitive data (passwords, tokens, keys); values are stored base64-encoded.
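As a quick aside, a minimal sketch of creating and inspecting a Secret (the name mysecret and the password value are made up for illustration):

# kc create secret generic mysecret --from-literal=password=redhat
# kc get secrets
# kc describe secret mysecret   # values are hidden here; kc get secret mysecret -o yaml shows the base64 data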


ConfigMap (CM)
Log in to your master node or base computer.
# kc get cm

Some are already created by default by the system.

# kc create deploy myd --image=vimal12/apache-webserver-php
# kc get pods

Inside this image, Apache is already configured. The config file is fixed and static;
the port is 80.
There is no centralized config file set up.

# kc exec -it myd.... -- bash

you are on pod
# rpm -q httpd
# ps aux
# cd /etc/httpd/conf.d; ls
# netstat -tnlp
When you change the config file, you have to restart httpd.



We don't want to manage this config file inside each pod. We want to manage the conf file in a volume, through a resource called a ConfigMap.

On your base OS, master node, or user computer:
# vi web.conf

Listen 85

Every other application, such as nginx, HAProxy, or Jenkins, has its own keywords for this.

# kc create -h
See: configmap NAME --from-file

# kc create configmap mywebconf --from-file=web.conf

The web.conf file is on our PC.
Now the ConfigMap has been created from it.

# kc get cm

# kc describe cm mywebconf

If you'd like to change it, just edit the data:
# kc edit cm mywebconf

You can change and modify the data here.
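For reference, the stored object looks roughly like this - the file name becomes the key under data:

# kc get cm mywebconf -o yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mywebconf
data:
  web.conf: |
    Listen 85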


So we have a CM called mywebconf, and the file we stored in it is web.conf.

Now we have to link this ConfigMap to the pods.

We may have hundreds or thousands of pods - how do we manage them?

We make use of the Deployment.


For httpd, the config location is /etc/httpd/conf.d.

Now we have to map the ConfigMap to this location:

CM -> /etc/httpd/conf.d

We have to mount it.
How?

Google "kubernetes configmap volume"

and grab an example from kubernetes.io:

volumes:
  configMap:

Note: keep the same indentation.
# kc get deployment
# kc edit deploy myd

Add it under spec.template.spec:
 volumes:
 - name: config-volume
   configMap:
     name: mywebconf

This way you can add more ConfigMap volumes as well.

Under containers:, add the volumeMounts part:

   containers:
     ...
     volumeMounts:
     - name: config-volume
       mountPath: /etc/httpd/conf.d/

(The volumeMounts name must match the volume name defined above.)

You made the change in the Deployment spec.

Once you save it, the Deployment re-triggers a rollout automatically.
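Putting the two pieces together, the relevant part of the Deployment spec looks roughly like this (the container name below is an assumption; use whatever name your Deployment generated):

spec:
  template:
    spec:
      containers:
      - name: apache-webserver-php        # assumed container name
        image: vimal12/apache-webserver-php
        volumeMounts:
        - name: config-volume             # must match the volume name below
          mountPath: /etc/httpd/conf.d/
      volumes:
      - name: config-volume
        configMap:
          name: mywebconf                 # the ConfigMap we created

One caveat: mounting a ConfigMap over /etc/httpd/conf.d/ replaces the directory contents, so only the files from the ConfigMap are visible there.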
 

# kc get deploy
# kc get pods
# kc exec -it myd-... -- bash

# netstat -tnlp

you see the port
# cd /etc/httpd/conf.d
# cat web.conf

you see the config file web.conf that you created...


Say you have to change the port to 8081.

You do it by editing your ConfigMap:

# kc edit cm mywebconf
Change the port here.


You can add DocumentRoot and other values as well.

# kc get pods
# kc delete pods --all

# kc get pods
You get new pods.
# kc exec -it myd-.... -- bash

# netstat -tnlp

back to master
# kc get pods


# kc describe deployment myd

Inside the container we are mounting something from the volume config-volume, and that volume comes from the mywebconf ConfigMap.

An env value can be set directly in the env section, or come from a ConfigMap or a Secret.
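For example, a minimal sketch of pulling an environment variable out of a ConfigMap (myenvcm, its key port, and the variable name HTTPD_PORT are made up for illustration; httpd itself does not read this variable):

# kc create configmap myenvcm --from-literal=port=8080

and in the container spec:

    containers:
    - name: myapp
      image: httpd
      env:
      - name: HTTPD_PORT
        valueFrom:
          configMapKeyRef:
            name: myenvcm    # the ConfigMap
            key: port        # the key inside it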

Flannel POD
# kc get pods -n kube-system


CoreDNS
- Computers mostly communicate by IP.

HostA                 HostB
IP: 192.168.10.20     IP: 192.168.10.30

Rather than communicating by IP, we create a central database and
map IPs to hosts:

Database (DNS server)
192.168.10.20 | hosta

HostA should know where the DNS server is, so we have to make hosta a DNS client:
we tell hosta the IP of its nameserver (DNS server).

For example, when we type google.com in the browser, the request quickly goes to the DNS server, which maps the name to an IP and delivers it.

How do you create your own DNS server?
There are lots of tools available; BIND is one of them.

DNS software:
- BIND
- dnsmasq
- CoreDNS

CoreDNS
- open-source software
- DNS and service discovery server
- comes with plugins
- you can keep adding plugins and keep enhancing it
- plugins are the way to extend it

you can configure your own dns server

Note: This is an independent piece of software, but very useful with Kubernetes.


Let's open two Linux instances and log in to them.

Plan: create a DNS server
- it has a database
- it maintains IP address and host info

DNS server: 192.168.10.20    john
DNS client: 192.168.10.30    mary

The easy solution to map host to IP and IP to host is to
add entries to the /etc/hosts file - a local database.
But that becomes a problem when the number of systems grows, so go to coredns.io and download CoreDNS.
It takes you to GitHub;
get the linux amd64 build.




DNS runs on port 53/udp

# tar -xzvf coredns_1.8.3_linux_amd64.tgz

# netstat -tnlp | grep 53
# netstat -unlp | grep 53

Create a file
# vi /etc/mydb
192.168.10.20    john    
192.168.10.30    mary
1.2.3.4        abc.hello.com
2.3.4.5        abc

Every line is a record: record 1, record 2, and so on.

Now, what do we have to do?

There is a plugin with the capability to read IPs and hostnames.
Search for
"coredns hosts plugin".
It has its own keywords;
you have to create a conf file for it.

Every server has its conf file, just like Apache does.

In this conf file we tell CoreDNS to read the database, using the
hosts plugin:
# vi mycoredns.conf

. {
   hosts /etc/mydb
}

The hosts plugin has the capability to read this database.
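The hosts plugin also takes options; a sketch with a TTL and fallthrough (real options of the plugin, illustrative values):

. {
   hosts /etc/mydb {
       ttl 60         # cache records for 60 seconds
       fallthrough    # pass unmatched queries to the next plugin
   }
   log                # log every query
}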

# ./coredns --conf mycoredns.conf

Hit Enter; if everything is good, your DNS server is started.

It keeps running in the foreground until you stop it.

Either run it in the background or open a new terminal and keep working.
# netstat -ulnp | grep 53     # for udp
You will see port 53, so your DNS server is running.
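You can also query the new server directly with dig (assuming bind-utils/dnsutils is installed):

# dig @192.168.10.20 john +short
192.168.10.20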

Verify

Go to your client machine (mary):
# ping john
It says it doesn't know the host.

We have to configure the DNS client. For that:
# vi /etc/resolv.conf
nameserver    192.168.10.20


Your new DNS server is added.
Now ping it; it simply works:
# ping john


# nslookup john
# nslookup mary


There is a keyword "search" in /etc/resolv.conf:
with it, you don't have to type the full domain.
You tell the resolver to add the domain at the end of the name.
In the search section, add the domain:
search hello.com
# nslookup abc
You don't have to type .hello.com.

So, on the client side, the DNS client setup is:
$ cat /etc/resolv.conf
search hello.com
nameserver  192.168.10.20

Now stop your DNS server by pressing CTRL+C.

-------------------------

Now let's go back to the Kubernetes part.

In Kubernetes you have lots of pods.

When you launch an application, you launch pods; for one application you run multiple instances of the pod.
You put these pods behind a load balancer service.

Clients connect to the load balancer, and the load is distributed to the pods.

We have the load balancer IP, and the client should know it.
They can use ping or curl to verify.
The IP assigned to the load balancer is dynamic: if the load balancer is deleted and recreated, the IP can change, but the name should stay permanent.
So give a proper name to the IP.

client can use host name rather than ip.
in future if ip changes, you don't have to change fqdn.
if ip changes,  you can go to dns database and change it.

# cat /etc/mydb

4.3.4.4 abc.hello.com

So the client never has to remember which IP the load balancer has.

Now, create a database where IPs and hostnames are kept.

We have a static database as of now.
Let's say our load balancer has the IP .50, and it's fixed in the database.

What if something on the k8s side kept checking with DNS?
Say the IP changes from .50 to .60, but the record is not yet updated:
- we need that program to contact DNS and update the record on the fly.

This concept is called service discovery - DNS service discovery.

For this, CoreDNS has a plugin for Kubernetes.

Go to coredns.io and look under plugins;
there are lots of plugins for other services too.

The plugin is kubernetes - the keyword is literally "kubernetes".

But we already have it:

# kc get nodes
# kc get pods
# kc get pods -n kube-system
We have two CoreDNS pods because of the failover setup.
Everything is configured for us; it's all available by default in a k8s cluster.

We already have the CoreDNS service running, along with its pods.

The pods are managed by a ReplicaSet, and the ReplicaSet is managed by a Deployment:

# kc get deploy -n kube-system

We have:

Load Balancer (LB)
- DNS pod
- DNS pod

Clients connect to the LB and get names resolved.

Let's see the service:
# kc get svc -n kube-system

We have a ClusterIP and the port: 53.
Technically, DNS is already configured: the IP of the DNS server is that CLUSTER-IP.
Anyone who wants to use the DNS server has to use this IP and port number.
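The output looks roughly like this (the CLUSTER-IP varies per cluster; the service is still named kube-dns for historical reasons, even though CoreDNS serves it):

NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   30d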

Behind the scenes, we have two DNS server pods running.
We can verify with the describe command.

# kc describe deploy coredns -n kube-system

Look for the Pod Template.
You see which image is used: k8s.gcr.io/coredns:1.7.0

Args:
  -conf
  /etc/coredns/Corefile

So CoreDNS reads its configuration from the Corefile in the /etc/coredns directory.

And that comes from a ConfigMap.

Check the Mounts section, a little further down:
/etc/coredns from config-volume (ro)

(In our standalone demo we used mycoredns.conf and read the config directly from a file;
here it comes from a ConfigMap mounted as a read-only volume.)

# kc get cm -n kube-system
What is the content of the coredns ConfigMap? Let's see:
# kc describe cm coredns -n kube-system

Compare it with our own mycoredns.conf:
# cat mycoredns.conf
. {
   hosts /etc/mydb
}

There we had just one plugin.

Go to the Corefile section;
you will see lots of plugins:
loop
reload
loadbalance
forward

One of the plugins is forward.

Kubernetes also makes every pod a DNS client: it automatically updates each pod's /etc/resolv.conf
to point at this DNS service.

The kubernetes plugin does the discovery: service discovery for the cluster.local domain of the Kubernetes cluster.
In Kubernetes, DNS is already configured, and the database is created dynamically.
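For reference, the default Corefile on a kubeadm cluster looks roughly like this (details vary by version):

.:53 {
    errors
    health {
       lameduck 5s
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
       pods insecure
       fallthrough in-addr.arpa ip6.arpa
       ttl 30
    }
    prometheus :9153
    forward . /etc/resolv.conf {
       max_concurrent 1000
    }
    cache 30
    loop
    reload
    loadbalance
}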

Let's say we have namespaces:
tech
- two pods
- one load balancer
- service name: myd1
hr
- two pods
- one load balancer
- service name: myd2

# alias kc=kubectl
# kc get pods
# kc delete all --all    # delete all old resources
Create the new namespaces:
# kc create ns tech
# kc create ns hr
# kc create deployment myd1 --image=httpd -n tech --replicas=2

This deployment is created inside the tech namespace.
# kc get pods -n tech

Expose the deployment:
# kc expose deploy myd1 --port=80 -n tech
We exposed the deployment in the tech namespace.

# kc get svc -n tech

# kc create deployment myd2 --image=httpd -n hr
# kc expose deploy myd2 --port=80  -n hr

We have services myd1 (tech) and myd2 (hr).

Clients always connect to the LB.

To get the load balancer IPs, run:
# kc get svc -n tech
# kc get svc -n hr

We are using the k8s plugin.
Whenever you launch a service, it dynamically creates the record in the Kubernetes DNS database.

Whatever service name you give, exactly that hostname appears in the DNS database.

The service name normally comes from the deployment name.

If the IP changes, the record is updated dynamically:

it keeps doing service discovery.

How to verify:
# kc get svc -n kube-system

Whenever a pod is launched, Kubernetes makes it a DNS client.

Go into any of the pods:

# kc get pods -n tech

# kc -n tech exec -it myd1-... -- bash

# cat /etc/resolv.conf

The nameserver is set automatically. Every pod knows what its DNS server is.
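Inside a pod in the tech namespace it looks roughly like this (the nameserver is the kube-dns ClusterIP, so values vary per cluster):

search tech.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.96.0.10
options ndots:5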

# curl http://myd1

myd1 is the name of the service, and that is exactly the hostname.

# curl http://myd2

You get an error.

# kc get pods -n tech
# kc get svc -n tech

# kc -n tech exec -it myd1-...<pod> -- bash

We are in the same namespace.
Record the ClusterIP.

Now try with the IP:
# curl http://<clusterIP>

# curl http://myd1

DNS is configured, so the name works too.

Now, from this namespace, you want to connect to the hr namespace:
# kc get svc -n hr
Record the ClusterIP of hr and try to connect.

# curl http://<cluster Ip of hr>

output is displayed
but with hostname?
# curl http://myd2

It fails.
When you use the short name, it fails.
Check /etc/resolv.conf on the pod:

Under search, you see tech.svc.cluster.local has been added.
But when you try with the fully qualified name:
# curl http://myd1.tech.svc.cluster.local

the page is displayed.
What does this mean?

This is basically service discovery.

The name of the load balancer, fully qualified: the cluster domain is cluster.local;
since it's a service, add "svc";
the service is myd2.

This is the naming convention:
<service name>.<namespace>.svc.<cluster domain>

In this case, if you type:
# curl http://myd2.hr.svc.cluster.local

you can connect.

Either way works: you can use the IP or the fully qualified name.
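Thanks to the search list in the pod's /etc/resolv.conf, all of these forms reach the myd1 service from a pod in the tech namespace:

# curl http://myd1
# curl http://myd1.tech
# curl http://myd1.tech.svc
# curl http://myd1.tech.svc.cluster.local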

Everything is pre-created automatically.

Why does it work this way?
It is written in the config file.

# curl http://myd1.tech

# cat /etc/resolv.conf
# curl http://myd2.hr.svc.cluster.local
works as well.
If you give a short name, the resolver adds the domain part.

It is written in the ConfigMap:
# kc get cm -n kube-system


See the description of this ConfigMap:

# kc describe cm coredns -n kube-system

Go below the Corefile, under the ready plugin.
You see:
kubernetes cluster.local
It keeps watching this domain:
for any name that comes under this domain, it will do service discovery.

Finally: everything is configured for you; you don't have to configure it manually.

If you want to connect one pod to another pod,
do not use the pod's IP address; expose it behind a service and use the load balancer (service) name.


There is another plugin:

the rewrite plugin:

rewrite name myd.example.com myd.default.svc.cluster.local

example.com here acts as a private DNS domain; you can create your own domain name and set it up this way.
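A sketch of how such a rule could be dropped into the cluster Corefile (edit it with kc edit cm coredns -n kube-system; myd.example.com is an illustrative name):

.:53 {
    rewrite name myd.example.com myd.default.svc.cluster.local
    kubernetes cluster.local in-addr.arpa ip6.arpa
    forward . /etc/resolv.conf
}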


               
