Sunday, February 28, 2021

SQS Simple Queue Service - 2-28-2021

Class Notes


Tightly coupled


A (Program) --> data/info ---> B (Program)
tightly coupled (sync)

A --> data/info ---> C

A --> SYNC --> C

tightly coupled

Create a new instance

MQ - message queue

So A sends the message to MQ, and then the MQ system sends the message to program B.
MQ is a concept that helps in this kind of scenario.

A wants to communicate with B and C:
A -> B
A -> C

Here A is called the producer - it generates the message.
B and C are consumers. There can be many other consumers, such as other databases.

Once the message is produced, it can be sent to multiple destinations.


A Message Queue is a program which has persistent storage, and messages are stored there as a queue.

This middle program sits between two different programs - that's why it's called middleware.

This kind of setup is called decoupled.
The reason is that A and B are not synced directly.

If B is down, A still works, capturing messages and sending them to the centralized storage (the MQ server).

Instead of an MQ, can we put a DB server here?

- It's not a good practice.

When a message is generated, the data is stored on a DB service such as a MySQL DB.
Since we already have a DB server, why would you want to store it again?
And on top of that, it's really slow.

We have to use a program which stores data and also forwards the message almost instantly.

So, we want a program between A and B which can store, receive, and forward millions of records.

MQ is a program where they get the message, store it in their storage, process it, and send it to the target location.

We use these messages for some purpose.

AWS has a product: SQS.
- Messages can be retained for up to 14 days (the default is 4 days).
- You can change the retention time.


Amazon example: as soon as you buy any product, you get a message. The invoice may take time - at that moment, maybe the mail server was down,
or there might be lots of transactions going on.


MQ is a middleware which we put in between two programs.
- They can be microservices.

MQ products
- RabbitMQ
- Apache ActiveMQ
- Kafka

Managed MQ services
  - auto scaling
  - security

One of the managed services from AWS is SQS.
You have two programs A and B, and the information is stored in between these two programs.

ProgramA ->   MQ (DB)   ---> ProgramB

Program A
- Producer of message

MQ-DB
- Queues the messages
- Stores the messages on persistent storage

Program B
- Consumer
- It keeps checking the database (MQ) for any new message.
- This checking process is called POLL
- It then downloads, uses, and deletes the message (a small code sketch follows below)
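A minimal sketch of this poll -> use -> delete cycle with boto3 (the queue name myq1 matches the queue created in the steps below; everything else is standard boto3):

import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="myq1")["QueueUrl"]   # myq1 is the queue created below

# POLL: long-poll for up to 10 messages, waiting up to 10 seconds
resp = sqs.receive_message(QueueUrl=queue_url,
                           MaxNumberOfMessages=10,
                           WaitTimeSeconds=10)

for msg in resp.get("Messages", []):
    print("processing:", msg["Body"])                          # USE the message
    sqs.delete_message(QueueUrl=queue_url,                     # DELETE it so it is not redelivered
                       ReceiptHandle=msg["ReceiptHandle"])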

Go to AWS console
- Go to SQS
Check how it works
- Click on create queue
 Note: think of the queue as a database

- You get two options
Standard
FIFO

select standard (default)

Name: myq1

Configuration
Leave default for now.

visibility timeout: 30 seconds
message retention: 4 days
max message size: 256 KB

This program is meant to exchange messages between Program A and Program B.
These messages are very small in size - probably half a page.
- The small size keeps storage small and delivery fast.
- It takes a long time to deliver if the size is bigger.

Access Policy - who is going to use it
- Only the queue owner

[Note: You can integrate with LAMBDA]


Encryption
-------------
Do you want to encrypt the in-transit data (data while transferring)?
Do you want to encrypt data at rest (storage)? Disabled by default.

If you want to encrypt, you need keys.

For now, disable it.

Dead-letter queue - disabled by default

As of now, we haven't changed anything except the name.

now, click on create queue.


Click on Monitoring
You see metrics from CloudWatch:

- Age of oldest message
- Messages received
- Messages deleted
...

This kind of service is used by developers.

They write Program A and Program B.
They write code in A (say, in Java) that sends a message to MQ and then on to B.
For Program B, they write another program which pulls the info from MQ.

You have to test whether MQ is working properly or not.

Go to your queue
click on the name of your queue
- send and receive message

Send message:
write some message and click Send message.

This message goes to the message queue.

The message is now sitting on the MQ.
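The same send can be done from code - a minimal boto3 sketch (the queue URL lookup assumes the myq1 queue created above):

import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="myq1")["QueueUrl"]

# Producer side: send one small message body to the queue
sqs.send_message(QueueUrl=queue_url, MessageBody="hello from program A")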

Now, B has to come and fetch it.

Go down on the page and you will see Receive messages.

You will see Messages available.

This part is the consumer section. You still don't have any message received.

Now, go to the queue page and refresh the page.
You see the message.
Go to Messages,
and go down and see if you have the message.

In a real setup, for Program B you can write a Python program which goes to MQ and pulls the messages.

Click the message; you see the body of the message.

If you go to the queue and refresh:
- You still see the message.
- Our consumer only downloaded and used it, but the message is not deleted.
- If you click Poll for messages again, you will see the same message again.
- The reason is that the same message is still on the MQ.


Now, go back to the message queue.
You see the message retention period is 4 days.

The consumer can use it for 4 days.
If you lower the retention time to, say, 1 sec, and your consumer system is down, it may not receive the message.

Do not change it.

The idea is that the consumer has permission to delete the message once it no longer needs it.

MQ does not delete the message until it reaches the retention period.

Google for "AWS SQS delete message".

There is an API:
delete_message

To delete,
go to received message
select the message and click on delete.

Now, you see Messages available = 0
- Even if you poll for messages, you will not get any.



Visibility timeout
- It's a common architecture:
- A client connects to the front end of the producer program,
- the producer program creates and sends a message to MQ,
- then Program B uses it.


consumer  -> LB ->  systems A|B|C


Consumer -> Public LB -> Systems A|B|C ----> MQ ---> Private LB -----> sys 1|2|3
(the request comes in from the public side)

Say a message is generated on A, goes to MQ, then to the private LB, and then to, say, sys1.

Systems 1, 2 and 3 are consumers.
They keep polling for messages.
If all consumer systems poll the message, each of them could end up with its own copy of the same message.

But we don't want this.
What we want is:
until the message is processed by one consumer, we don't want the others to download it.

In that case, we want this message to be invisible for, say, 30 sec. The message will still be there, but the others can't see it.

This is the visibility timeout.

In this case, say system1 is accessing the message, but for the others it is invisible.

Say system1 downloads the message and takes 100 sec to process it; after 30 sec the message becomes visible again and system2 may download it too.
So you have to know how your polling and processing programs behave, and based on that you set the visibility timeout.
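A small sketch of how a consumer can control this per receive (the timeout values here are just examples):

import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="myq1")["QueueUrl"]

# Hide the received message from other consumers for 120 seconds instead of the queue default
resp = sqs.receive_message(QueueUrl=queue_url,
                           MaxNumberOfMessages=1,
                           VisibilityTimeout=120)

for msg in resp.get("Messages", []):
    # If processing turns out to take longer, extend the timeout for this message
    sqs.change_message_visibility(QueueUrl=queue_url,
                                  ReceiptHandle=msg["ReceiptHandle"],
                                  VisibilityTimeout=300)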


Let's go to our MQ - myq1:
Send and receive messages.
Send a message,
then go to Receive messages and click Poll.
Once one consumer has received it, it can't be downloaded again for 30 sec.

So the first consumer should use and delete the message within those 30 sec.


MQ - timing is very important.

If load increases:
- Load Balancer
- Auto Scaling

If you need more RAM/CPU -> add more nodes.
If more and more messages are coming in than the consumer side can handle,
check the MQ server and see how many messages are sitting there.

- Set up a metric on CloudWatch: if more than, say, 500 messages arrive per second, automatically launch a new system on the consumer side (see the sketch below).

- If fewer messages are coming, remove nodes.
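A minimal sketch of reading the queue depth that such a scaling decision would be based on (the threshold is just an example; the CloudWatch/auto-scaling wiring itself is not shown):

import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="myq1")["QueueUrl"]

# ApproximateNumberOfMessages = messages currently available in the queue
attrs = sqs.get_queue_attributes(QueueUrl=queue_url,
                                 AttributeNames=["ApproximateNumberOfMessages"])
backlog = int(attrs["Attributes"]["ApproximateNumberOfMessages"])

if backlog > 500:
    print("backlog high - scale out the consumer side")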


Go to your message queue:
- Retention period
- Visibility timeout
are important.

Producer - MQ - Consumer
- The producer creates the message and stores it on MQ.
- The consumer will poll the message, use it, and delete it.
1. Poll
2. Process
3. Delete
What happens if they forget to delete?

After the consumer receives the message, it stays invisible for 30 sec.
All nodes keep polling.

Say the first message is generated at:

8:15:05    - msg1 delivered
8:15:36    - msg1 delivered again

Every 30 seconds, the same message is delivered again,
so effectively a loop has been created.

You have to terminate this loop.

SQS admins may see it, but they can't do anything with a customer's message.

What you can do is: after a certain number of receives, tag this message and say it is dead,
and move it to a Dead Letter Queue (DLQ).
The message keeps being stored on a queue, but it is not deleted.

Go to your queue and click on it:
you can see the dead-letter queue setting;
it's below Encryption.
You can enable or disable it.

You have to create a queue first - the dead-letter queue.

Go to your queues and click Create queue.
name: DLQ
config:
visibility timeout: 30
message retention: 14 days   # keep it for a longer time so developers can look into it
max message size: 256 KB


Click on Create queue.

Now, any message stuck on myq1 is forwarded to the DLQ queue.

Go to myq1 and enable the dead-letter queue:

max receives: 3

So on our main queue, we implemented a DLQ.
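The same DLQ wiring can also be done from code - a hedged boto3 sketch (the queue names and maxReceiveCount of 3 mirror the console steps above):

import json
import boto3

sqs = boto3.client("sqs")
main_url = sqs.get_queue_url(QueueName="myq1")["QueueUrl"]
dlq_url = sqs.get_queue_url(QueueName="DLQ")["QueueUrl"]

# The redrive policy needs the DLQ's ARN, not its URL
dlq_arn = sqs.get_queue_attributes(QueueUrl=dlq_url,
                                   AttributeNames=["QueueArn"])["Attributes"]["QueueArn"]

# After 3 failed receives, SQS moves the message to the DLQ
sqs.set_queue_attributes(QueueUrl=main_url,
                         Attributes={"RedrivePolicy": json.dumps({
                             "deadLetterTargetArn": dlq_arn,
                             "maxReceiveCount": "3"})})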


Go and send a message.

message body: testing for dead

And click on Send.

Go to polling;
it starts polling.

You will see Messages available: you see the value changing.

Go to the queue, and see the available messages.

After 3 receive attempts, the message goes to the dead-letter queue that you created.


Dead-letter queue messages are not delivered to consumers.
This is so you can analyze why the message ended up on the dead-letter queue.



There is a message option for Delivery delay:
- when a message comes to me, I want to hold it for a certain time and then deliver it.

By default delivery delay: 0
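A minimal sketch of a delayed send - the per-message DelaySeconds (0-900) overrides the queue-level default; the 60 here is just an example:

import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="myq1")["QueueUrl"]

# Hold this message for 60 seconds before it becomes available to consumers
sqs.send_message(QueueUrl=queue_url,
                 MessageBody="delayed hello",
                 DelaySeconds=60)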

OpenID protocol




Queue service + Lambda

Lambda Function
----------------------

Let's say we have an S3 bucket:
- we set an event trigger,
  so as soon as we put an object there, it contacts the Lambda function (invocation).

S3 (bucket) -> triggers Lambda function (invocation) -> after success -> store on MQ -> C1 (consumer polls the message)

If the invocation fails, that can be captured as well (failure destination).

You can build this complete pipeline.

S3 bucket (source)  ---------------> Destination (consumer)

Lets see how we can do it.

Go to Lambda on AWS.
Open a new tab for S3 as well.

- On Lambda: create function

function name: function
Runtime: Python 3.8

Create.


Go to S3:
create a bucket, and create a trigger.

Go to bucket -> Properties -> set up event notification:
you can create it here.

Or go to Lambda:
add trigger
S3 -> bucket name

event type:
PUT
and create it from here.

Whether you create it from Lambda or from S3, it is the same thing.

import json
def lambda_handler(event, context):
    # TODO implement
    print("This is a lambda test for SQS")

Go to Lambda and click on the event/trigger.


CloudWatch is where the log info is stored.

Go to CloudWatch and go to the Logs section - you won't see anything yet.


Open your trigger and add a destination:

destination configuration
source



Or go to SQS and create a new queue: lambdaQ.
Here the producer is Lambda.

Go to the destination configuration of your Lambda (or S3) and set the destination.

Note: you need an IAM role, but the system will automatically create one for you.

S3 -> Trigger -> Lambda -> SQS
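Instead of (or in addition to) the destination configuration, the handler itself could forward the S3 event details to the queue. A hedged sketch, assuming a queue named lambdaQ; this is an illustration, not what the console destination feature generates:

import json
import boto3

sqs = boto3.client("sqs")

def lambda_handler(event, context):
    queue_url = sqs.get_queue_url(QueueName="lambdaQ")["QueueUrl"]
    # S3 PUT events arrive under event["Records"]
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        sqs.send_message(QueueUrl=queue_url,
                         MessageBody=json.dumps({"bucket": bucket, "key": key}))
    return {"statusCode": 200}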




Now, add some file to S3.

So we have a PUT-request trigger.

Now go to the Monitoring tab on Lambda,
click on "View logs in CloudWatch".

Refresh CloudWatch and you will see one log stream file.

You see the message here.


Go to your SQS queue (the Lambda destination queue); you will see the message.
Go down and poll the message.

Go to the body of the message and review the message.


Create a Lambda function:

name: f1
Python 3.8

Create function.

Click on the function.


print("Triggered from SQS as consumer")

In this case you have to create an IAM role:
the SQS trigger needs the capability to invoke Lambda, and Lambda needs permission to read from the queue.
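A minimal sketch, assuming the queue is wired up as a Lambda trigger, of how the consumer handler sees the messages (each SQS record's body arrives as a string):

def lambda_handler(event, context):
    # When SQS triggers Lambda, the batch of messages is under event["Records"]
    for record in event.get("Records", []):
        print("Triggered from SQS as consumer:", record["body"])
    # Returning normally tells Lambda the batch succeeded, so the messages are deleted
    return {"statusCode": 200}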


Purge
- removes all the messages in the queue (the queue itself stays).

Delete queue
- deletes the message queue itself.


FIFO
- You want your messages processed in order.
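A hedged sketch of sending to a FIFO queue (the queue name myq2.fifo and the group ID are assumptions; FIFO queue names must end in .fifo):

import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="myq2.fifo")["QueueUrl"]

# Messages with the same MessageGroupId are delivered strictly in order
sqs.send_message(QueueUrl=queue_url,
                 MessageBody="order line 1",
                 MessageGroupId="order-123",
                 MessageDeduplicationId="order-123-line-1")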


Saturday, February 27, 2021

Day 25 - Kubernetes - Deployment

 k8s - Deployment - 2/27/2021

How do you deploy your application in a prod env?

We have to deploy multiple web servers.
- A new application build comes in.
- How do you upgrade?
- You may not want to update all at once. One at a time - (rolling update).
- What if you encounter a problem on one of the instances and you have to roll back?
- If you want to make multiple changes to your env - say application version, scaling, resource allocation, etc. - you don't want each change applied immediately as the command runs. Instead, you pause your env, make the changes, and resume, so the changes are applied together.

All of these can be done through a Deployment.
- Each container is encapsulated in a POD.
- Multiple PODs are deployed using an RS or RC.

A Deployment sits on top of the RS or RC and allows us to upgrade the underlying instances seamlessly:
- using rolling updates
- undo changes
- pause/resume as required


How to create a deployment?

- Create a deployment definition file.
- The content of the definition file is similar to an RS.
- The only change is kind: Deployment

# cat dep-def.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-dep
  labels:
    app: myapp
    type: front-end

spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
      - name: nginx-controller
        image: nginx

  replicas: 3
  selector:
    matchLabels:
      type: front-end

> kc create -f dep-def.yaml
> kc get deployments

Deployment automatically creates replicaSets
> kc get rs

You see the RS named after the deployment.

> kc get pods
You see that the pod names start with the deployment name.

Get all deployments, RSs and PODs:
> kc get all


Visit the link for more detail
https://kubernetes.io/docs/reference/kubectl/conventions/

Create an nginx POD
# kc run nginx --image=nginx

Generate a POD manifest YAML file (-o yaml) without creating it (--dry-run)
# kc run nginx --image=nginx --dry-run=client -o yaml

create deployment
# kc create deployment nginx --image=nginx

Create a deployment def file without creating it (--dry-run)
# kc create deployment mydep --image=nginx --dry-run=client -o yaml

generate with replicas
# kc create deployment mydep --image=nginx --replicas=3 --dry-run=client -o yaml > dep-def.yaml

# kc apply -f dep-def.yaml


 

Day 24 - Kubernetes - Pod-ReplicationController-Replicaset

 Pod-ReplicationController-Replicaset - 2/26/2021

POD
Please use PyCharm from jetbrains.com
Create a POD using a yaml file

1. Login to your master node
alias kc='kubectl'
# A kubernetes definition file has four root-level properties:
# apiVersion, kind, metadata, spec

# cat pod-def.yaml

apiVersion: v1

kind: Pod

metadata:
    name: myapp-pod
    labels:
      app: myapp

spec:
  containers:
    - name: nginx-container
      image: nginx
    

Create a pod with yaml file
# kc get pods
# kc create -f pod-def.yaml

pod is created
# kc get pods


==============================

kubernetes controllers
- Controllers are the brains behind Kubernetes.
- They monitor kubernetes objects and respond accordingly.

Replication Controller
- When your pod goes bad, we want to run a new pod.
- RC allows you to run multiple instances.
- Even if you have a single pod you can still use an RC. If this pod fails, it will bring up a new one.
- It helps with load balancing and scaling.
- You can create pods, and if demand increases, it adds new pods automatically.

There are two terms:
- Replication Controller (RC)
- Replica Set (RS)

RC is old and RS is new. They serve the same purpose, but RS is the latest and comes with new features.

Let's create the yaml file:

# cat rc-def.yaml
apiVersion: v1
kind: ReplicationController
metadata:          # metadata for the RC
  name: myapp-rc
  labels:
    app: myapp
    type: front-end

spec:              # important part: defines what is inside the object we are creating (what the RC creates)
  template:        # the POD template to reuse
    metadata:      # this metadata is for the POD
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:          # POD spec
      containers:
      - name: nginx-controller
        image: nginx

  replicas: 3

We added a new property called replicas: 3.


> kc create -f rc-def.yaml
> kc get replicationcontroller
> kc get pods    # see the pods created by the RC
Pod names start with the RC name.


Now, let's see the ReplicaSet.

apiVersion is different here.

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-rs
  labels:
    app: myapp
    type: front-end
spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
      - name: nginx-controller
        image: nginx

  replicas: 3     # with an RS, replicas needs a selector
  selector:       # the selector identifies which pods fall under the ReplicaSet. A ReplicaSet can also manage PODs
                  # that were not created as part of the ReplicaSet creation: pods created before the ReplicaSet
                  # that match the labels specified under the selector will also be managed by it.
                  # The major difference between RC and RS is the selector: it is optional in an RC config file,
                  # but for an RS it has to be defined under matchLabels.
    matchLabels:  # matchLabels matches the label specified under it to the label on the POD
      type: front-end


> kc create -f repset-def.yaml
> kc get replicaset
> kc get pods

Labels and selectors
- Let's say you deployed three pods.
- We'd like to create an RS or RC to manage these pods, to ensure they are available all the time.
- In this situation, you can use a ReplicaSet to manage these pods, but an RC can not do it.
- If they are not created yet, the RS will create them.
- The role of the RS is to monitor the pods and, in case they fail, create them.
- An RS is in fact a process that monitors pods.

How does the RS know which pods to monitor?
- This is where labeling comes in handy.
- This is where we provide labels as a filter for the replica set.
- Under the selector section, you specify the matchLabels filter and provide the same label that you used while creating the POD.

Under an RS, we have three new sections under spec:
- template
- replicas
- selector

The RS does not create new PODs for matching labels that already exist.
But in case one of those pods fails and needs to be recreated, you still need to include the template definition part.

To scale the pods, change replicas (e.g. to 6) in the file and run:
> kc replace -f replicaset-def.yaml
or:
> kc scale --replicas=6 -f replicaset-def.yaml
> kc scale --replicas=6 replicaset myapp-replicaset

# Note: scaling with the scale command does not update the definition file automatically.

Command summary
> kc create -f replicaset-def.yaml
> kc get replicaset            # list ReplicaSets
> kc delete replicaset myapp-replicaset    # deletes the RS and its PODs
> kc replace -f replicaset-definition.yaml    # update the replicaset
> kc scale --replicas=6 -f replicaset-def.yaml    # increase the replicas without modifying the file






This example comes from Udemy. Please purchase it to get full access. One of the best courses.
https://www.udemy.com/course/certified-kubernetes-administrator-with-practice-tests/l



Python - If statement eg1

Python: If

print("hello world")
hat = 4
print(hat)

food = 3.456
green = 5
flower = 33

sum = green + food
difference = green - food
multiply = green * food
division = green / food

print(sum)

age = 15
# elif branches are checked in order, so test the narrower range (>14) before the broader one (<18)
if age > 18:
    print("you are allowed to drive")
elif age > 14:
    print("go to driving school")
elif age < 18:
    print("wait a couple more years")
else:
    print("you are not allowed to drive")

Friday, February 26, 2021

Ansible - Package (software) install on remote host

 Let's install the puppet agent on all hosts using ansible

1. Install client software

$ ansible -i myhosts all -m yum -a "name=puppet-agent-6.21... state=present" -b -K

2. Verify
$ ansible -i myhosts all -o -a "rpm -q puppet-agent" | grep CHANG|sort

Python - Collections - list

 Class  notes: 2/26/2021

Python Collections:

- When we want to treat some data as a group, it is not good to store the data in individual variables.
- We need to use a collection to store the data as a group.
- For e.g., a university offers different courses to its students. So, all the courses should be stored as a group in a collection.

Note: a list is mutable, which allows us to change its contents.
Say in a restaurant, they can remove one item from their menu.
Say fish is not available: they can remove it from the menu for a short time, and
they may add some non-veg item. So this way the contents can be changed (see the small sketch below).
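A tiny sketch of that menu example (the item names are just illustrations):

menu = ["rice", "fish", "salad"]

menu.remove("fish")        # fish is not available today, take it off the menu
menu.append("paneer")      # add a new item

print(menu)                # ['rice', 'salad', 'paneer']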

- In python, we have following collection types
list
tuple
set
frozenset
dict


List
* A list can be used to store a group of elements in a sequence.
* A list can store both homogeneous and heterogeneous elements.
   What does that mean?
   -> homogeneous -> same type
   -> heterogeneous -> different types
* A list is represented with [] (square braces);
  within the square braces, we separate elements with a comma ,
* A list is a mutable object.
* A list can store duplicate elements.
* Each element in a list has a positive & negative index.

Important points:
sequence
mutable
duplicates


For e.g.:
phone_numbers=[1234, 2345, 3423, 2123]
This is a list of phone numbers - a list of homogeneous elements, because these are all integers.
airlines=["AI", 10, "JA", 6]
If you look at this, there are two strings and two integers,
so this is a heterogeneous list, because the elements are not all of the same type.
You can have float values as well.

We can also create an empty list. For e.g.:
sample=[ ]  # empty list
lst1=[ ]
lst2=[10, 'x', 20.15, True]

We have a list containing 4 elements.
This is a list with a known size and known elements.



Let's look at another e.g.:

lst3=[None]*5
None is a placeholder: the list is created with 5 elements, but we don't know their values yet; we will assign them later.
We create a list with a known size and unknown elements.

Length of a list:
airlines=["AI","JA","LT", "VA"]
print(len(airlines))
output: 4
The output is the number of elements.

courses=['Python', 'Data Science', 'Big Data']
index:            0         1              2
negative index:  -3        -2             -1

Do not put a comma after the last element.

print(courses[1])  => output is Data Science
print(courses[3])  => output is an "error"
because the list has 3 items with indexes 0, 1, and 2.

print(courses[-3]) => output? It's Python,
because negative indexing starts with -1.



Python - Index and strings

 

Index and strings



Functions: be comfortable with around 50 functions.
Built-in functions:
len()
type()
id()
print()
input()

Find the length of a string:
str1 = 'Core Python'
print(len(str1))
You get output: 11.
The space is also counted.
This prints the total length of the string.

Open IDLE.
String operators:
+  -> concatenation
*  -> repetition
==
<
>
>>> s1="Core"
>>> s2="Python"
>>> s3=s1+s2
>>> print(s3)
CorePython
>>> s3= s1 + s2
>>> print(s3)
CorePython
>>> s1="Python "
>>> s2="Core"
>>> print(s3)
CorePython
>>> s3=s2+s1
>>> print(s3)
CorePython 
>>> s3=s1+s2
>>> print(s3)
Python Core
>>> 
>>> s="Python"
>>> t=s*3
>>> print(t)
PythonPythonPython
>>> str1="Python"
>>> str2="python"
>>> print(str1==str2)   
False
>>> 
uppercase comes before lower case
>>> print(str1<str2) 
True
Its going to compare in dictionary order. Capital letters come before small letters
>>> user1="guest"  
>>> user2="gUest"   
>>> print(user1<user2)   
False
Slice a string
-----------------
Slice -> a piece or a part of a string.
slice has a syntax
stringname[contains Three part]
stringname[start:stop:step]
you have stringname square braces, followed by start, and stop and then step
step default value is 1.
you will get value start to stop-1 value
lets say start is 0 and stop is 9
so you will get from index 0 to 8
how do we get part of string? - slicing
For eg,
str1='Core Python'
We have our string as
C o r e   P y t h o n
0 1 2 3 4 5 6 7 8 9 10
-10 -9 -8 -7 -6 -5 -4 -3 -2 -1  -> negative index
print(str1[0:4]) => what output you expect from this 
we start from 0 and go upto 4 so we have 4-1=3
so we will get 0 to 3 => Core
print(str1[0:9:2]) what's the output?  => Cr yh
start index is 0
gives output to 0 to 8
but the skip is 2 characters
so you get in index
0 2 4 6 8 (increment by 2)
so the output is Cr yh
print(str1[::]) -> what's the output?
We didn't set start, stop, or step.
By default the step size is 1.
It's going to be Core Python.
print(str1[:4:]) -> what's the output?




Most real-world data is in string form;
string manipulation next class.

Thursday, February 25, 2021

Day23- Kubernetes - Config Map Core DNS

 Config map, CoreDNS  2-25-2021
-------------------------------
 Class Notes
DO-180 - OpenShift
------------------


We have a k8s cluster with 1 master and 3 nodes.

Clean your old env resources:
# kc delete all --all


Let's say we need to set up the Apache web service:
-> We need an OS -> install httpd

Any software - Jenkins, Prometheus, Hadoop, or anything else -
-> they come with their own config file, and we configure the software/service there.
Say you have httpd, which uses the default port of 80, and you need to change it to something else:
you go to the config file and change it.

1. OS -> Linux
2. Software -> httpd
3. Edit config file, change port to 85
4. Login to every OS and change it.

Rather than that, you create a new pod image and change it there.
Let's say you have port 80 on the first pod,
and there is a request to change it.
Two ways to handle this use case:
1. Login to every server (pod) and change it
2. Create a centralized location


Let's say storage (a volume):
- create centralized storage
- change whatever you need to change here
- you manage all config files in this location
- this storage is a central location to manage configuration files
- this resource mapping from storage to pods is called a configmap



On the pod side,
you mount the storage from this central storage into the pod.


You can store configs on a PVC too, but it is not dynamic.

To change the port from 80 to 8080 on all pods,
you make the change in the stored config file, and all pods are restarted with the changed port.

secrets


config map (CM)
Login to your master node or base computer.
# kc get cm

Some are already created by default by the system.

# kc create deploy myd --image=vimal12/apache-webserver-php
# kc get pods

Inside this image, Apache is configured. The config file is fixed and static;
the port is 80.
There is no centralized config file set up.

# kc exec -it myd.... -- bash

You are on the pod:
# rpm -q httpd
# ps -aux
# cd /etc/httpd/conf.d; ls
# netstat -tnlp
When you change the config file, you have to restart.



We don't want to manage this config file inside the image. We want to manage the conf file in a volume, or in a resource called a config map.

@your base OS / master node / user computer:
# vi web.conf

Listen 85

Every other application such as nginx, HAProxy, Jenkins has its own keywords.

# kc create -h
see: configmap NAME --from-file

# kc create configmap mywebconf --from-file=web.conf

The web.conf file is on our PC.
Now the config map has been created.

# kc get cm

# kc describe cm mywebconf

If you'd like to change it, just edit and change the data:
# kc edit cm mywebconf

You can change and modify the data here.


So we have a CM called mywebconf, and the file we stored in it is web.conf.

Now, we have to link this config map to the pod.

We have hundreds or thousands of pods - how do we manage it?

We will make use of a Deployment.


For httpd, the config location is /etc/httpd/conf.d.

Now, we have to map the config map to this location:
map
CM -> /etc/httpd/conf.d

We have to mount it.
How?

Google "kubernetes configmap volume"

Get an example from the kubernetes docs:

volumes:
  - configMap: ...

Note: keep the same indentation.
# kc get deployment
# kc edit deploy myd

Change it under the pod template spec:
  volumes:
  - name: myvolume
    configMap:
      name: mywebconf

This way you can add more config maps as well.

Under containers:
add the volumeMounts part:

  containers:
  - ...
    volumeMounts:
    - name: myvolume
      mountPath: /etc/httpd/conf.d/

You made this change in the deployment.

Once you save it, the rollout is retriggered.

# kc get deploy
# kc get pods
# kc exec -it myd-... -- bash

# netstat -tnlp

you see the port
# cd /etc/httpd/conf.d
# cat web.conf

you see the config file web.conf that you created...


say you have to change port to 8081

by editing your config map

# kc edit cm mywebconf
change it here as well


you can add documentroot and other values as well

# kc get pods
# kc delete pods --all

# kc get pods
You get new pods.
# kc exec -it myd-.... -- bash

# netstat -tnlp

Back to master:
# kc get pods


# kc describe deployment myd

Inside the container we are mounting something from the volume myvolume, and that volume is coming from the mywebconf config map.

You can also supply values as environment variables - from the env section, a config map, or a secret.

Flannel POD
# kc get pods -n kube-system


CoreDNS
- Computers mostly communicate using IPs.

HostA                 HostB
IP: 192.168.10.20     192.168.10.30

Rather than communicating with IPs, we create a central database and
map the IPs to hosts.

Database (DNS server)
192.168.10.20 | hosta

hosta should know where the DNS server is, so we have to make hosta a DNS client.
We tell hosta that its nameserver (DNS server) is at the DNS server's IP.

For e.g., when we type google.com in the browser, it quickly goes to the DNS server, which maps the name to an IP and delivers it.

How do you create your own DNS server?
There are lots of tools available; bind is one of them.

DNS software:
- bind
- dnsmasq
- CoreDNS

CoreDNS
- open-source software
- DNS and service discovery server
- comes with plugins
- you can keep adding plugins and keep enhancing it
- a plugin is the way to enhance it

you can configure your own dns server

Note: This is an independent piece of software, but it is very helpful to Kubernetes.


Let's open two Linux instances and login to them.

Plan: create a DNS server
- it has a database
- it maintains IP address and host info

DNS server: 192.168.10.20    john
DNS client: 192.168.10.30    mary

The easy solution to map host to IP and IP to host is to
add entries to the /etc/hosts file (a local database),
but that becomes a problem when the number of systems grows. So go to coredns.io and download CoreDNS;
it takes you to GitHub.
Get linux amd64.




DNS runs on port 53/udp.

# tar -xzvf coredns_1.8.3_linux_amd64.tgz

# netstat -tnlp | grep 53
# netstat -unlp | grep 53

Create a file:
# vi /etc/mydb
192.168.10.20    john
192.168.10.30    mary
1.2.3.4          abc.hello.com
2.3.4.5          abc

Every line is a record.

Now, what we have to do:

there is a plugin that has the capability to read IPs and host names.
Search for:
coredns hosts plugin
There are keywords;
you have to create its conf file.

Every server has its own conf file, such as the Apache conf file.

In this conf file, we tell the hosts plugin to read our database:
# vi mycoredns.conf

. {
   hosts /etc/mydb
}

The hosts plugin has the capability to read the database.

# ./coredns --conf mycoredns.conf

Hit enter; if everything is good, your DNS server has started.

As long as this process is running, your DNS server is running.

You either run it in the background, or open a new terminal and start working:
# netstat -ulnp | grep 53     # for udp
You will see port 53, so your DNS server is running.

Verify:

Go to your client machine (mary):
# ping john      # ping the master by name
It says unknown host.

We have to configure the DNS client. For that we have to change this:
# vi /etc/resolv.conf
nameserver    192.168.10.20


Your new DNS server is added.
Now ping it - it simply works:
# ping john


# nslookup john
# nslookup mary


There is a keyword "search" in /etc/resolv.conf:
you don't have to type the domain.
You tell the resolver to append the domain to the end of short names.
In your search section, you can add the domain:
search sitabas.com
# nslookup abc
You don't have to type .domain.com.

So, on the client side, the DNS client is set up:
$ cat /etc/resolv.conf
search domain.com
nameserver  1.2.3.4

Now, cancel your dns server by pressing CTRL+C

-------------------------

Now let's go back to the Kubernetes part.

In Kubernetes you have lots of pods.

When you launch an application, you launch a pod; when you launch one application you run multiple instances of the pod.
You put these pods behind a load balancer service.

Clients connect to the load balancer, and the load is distributed to the PODs.

We have the load balancer IP, and the client should know it.
They can use ping, curl to verify.
The IP we get for the LoadBalancer is dynamic; if your load balancer is deleted and recreated, the IP changes, so we want something permanent:
give a proper name to the IP.

The client can use the host name rather than the IP.
In the future, if the IP changes, you don't have to change the FQDN;
if the IP changes, you just go to the DNS database and change it there.

# cat /etc/mydb

4.3.4.4 abc.hello.com

So the client never has to remember what IP the load balancer has.

Now, create a database where IPs and hostnames are kept.

We have a static database as of now.
Let's say our load balancer has an IP of .50 and it's fixed in the database.

What if we had something that keeps checking Kubernetes and the DNS:
say the IP changed from .50 to .60 but the DNS record is not yet updated -
we need a program to contact the DNS and update the record on the fly.

This concept is called service discovery - DNS service discovery.

For this we have CoreDNS, which has a plugin for Kubernetes.

Go to coredns.io and look under plugins:
they have lots of plugins for other services.

The plugin is kubernetes... the keyword is also kubernetes.

But we already have it:

# kc get nodes
# kc get pods
# kc get pods -n kube-system
We have two pods with coredns because of the failover setup.
Everything is configured for us; it is all available by default in a k8s cluster.

We already have the coredns service running, and it runs as pods.

# kc get nodes
# kc get pods
# kc get pods -n kube-system
We have pods managed by a ReplicaSet, and the ReplicaSet is managed by a Deployment.

# kc get deploy -n kube-system

We have:

Load Balancer (LB)
- DNS
- DNS

The client connects to the LB and can get the IP address.

Let's see the service:
# kc get svc -n kube-system

We have a ClusterIP and the port: 53.
Technically you can say you already have DNS configured; the IP of the DNS server is the CLUSTER-IP.
Anyone who wants to use the DNS server has to use this IP and port number.

Behind the scenes, we have two DNS servers running.
We can verify with the describe command:

# kc describe deploy coredns -n kube-system

Look for the pod template.
You see which image is used: k8s.gcr.io/coredns:1.7.0

Args:
  -conf
  /etc/coredns/Corefile

They are reading their configuration from the Corefile in the /etc/coredns directory.

We have a config map.

Check the Mounts: a little below, config-volume (read-only, ro).
Earlier we used mycoredns.conf and read the config directly from that file;
here it comes from a config map:
Mounts:
/etc/coredns from config-volume (ro)

# kc get cm -n kube-system
What is the content of the coredns config map? Let's see:
# kc describe cm coredns -n kube-system

Compare it with our own mycoredns.conf:
# cat mycoredns.conf
. {
   hosts /etc/mydb
}

We had one plugin.

Go to the Corefile section:
you will see lots of plugins:
loop
reload
loadbalance
forward

One of the plugins is forward.

For any client (pod) that is going to use this DNS, /etc/resolv.conf is updated:

it is automatically updated so that the pod becomes a DNS client.

CoreDNS will do discovery for Kubernetes - discovery for the kubernetes cluster.local domain.
In Kubernetes, DNS is already configured and the DB is dynamically created.

Let's say we have namespaces:
tech
- load two pods
- one load balancer
- Service name: myd1
HR
- load two pods
- one load balancer
- Service name: myd2

# alias kc=kubectl
# kc get pods
# kc delete all --all    # delete all old resources
Create new namespaces:
# kc create ns tech
# kc create ns hr
# kc create deployment myd1 --image=httpd -n tech --replicas=2

This deployment is done inside the ns tech:
# kc get pods -n tech

Expose the deployment:
# kc expose deploy myd1 --port=80 -n tech
We exposed the deployment in namespace tech.

# kc get svc -n tech

# kc create deployment myd2 --image=httpd -n hr
# kc expose deploy myd2 --port=80 -n hr

We have services myd1 (tech) and myd2 (hr).

The client always connects to the LB.

Get the load balancer IPs by running:
# kc get svc -n tech
# kc get svc -n hr

We are using the kubernetes plugin of CoreDNS:
when you launch a service, Kubernetes dynamically creates a record in the DNS database.

Whatever service name you give, exactly that hostname is created in the database - DNS.

The service name normally comes from the deployment name.

If the IP changes, the record is dynamically updated...

They keep doing service discovery.

How to verify:
# kc get svc -n kube-system

When you launch any pod, they make every pod a DNS client.

If you go into any of the pods:

# kc get pods -n tech

# kc exec -it myd1-ddd.. -n tech -- bash

# cat /etc/resolv.conf

The nameserver is automatically updated. All pods know what their DNS is.

# curl http://myd1

myd1 is the name of the service, and that is exactly the hostname.

# curl http://myd2

You get an error.

# kc get pods -n tech
# kc get svc -n tech

# kc -n tech exec -it myd1-...<pod> -- bash

Same namespace:
record the ClusterIP.

Now, try with the IP:
# curl http://<clusterIP>

# curl http://myd1

DNS is also configured.

Now, from this namespace you want to connect to the hr namespace:
# kc get svc -n hr
Record the ClusterIP of hr and try to connect:

# curl http://<cluster IP of hr>

The output is displayed.
But with the hostname?
# curl http://myd2

It fails.
When you use the short name, it fails.
Check /etc/resolv.conf on the POD:

under search you see tech.svc.cluster.local is added.
But when you try with:
# curl http://myd1.tech.svc.cluster.local

the page is displayed.
What does this mean?

This is basically service discovery.

The fully qualified name is built as:
<service name>.<namespace>.svc.<cluster domain>
- the cluster domain is cluster.local
- since it's a service, add svc
- the service here is myd2

This is the naming convention.

In this case, if you type:
# curl http://myd2.hr.svc.cluster.local

you can connect.

Either you can use the IP or the fully qualified name.

Everything is pre-created automatically.

Why does it work this way?
It is written in the config file.

# curl http://myd1.tech.

# cat /etc/resolv.conf
# curl http://myd2.hr.svc.cluster.local
works as well.
If you give a short name, they add the domain part.

It is written in the config map:
# kc get cm -n kube-system


See the description of this config map:

# kc describe cm coredns -n kube-system

Go down in the Corefile, below the ready plugin,
and you see:
kubernetes cluster.local
"Keep looking for this domain; for any name that comes under this domain, I will do service discovery."

Finally: everything is configured for you automatically; you don't have to configure it manually.

If you want to connect one pod to another pod,
do not use the IP address of the POD; use the load balancer (service) IP or name exposed for it.


There is another plugin:

the rewrite plugin:

rewrite name myd.example.com myd.default.svc.cluster.local

example.com here is a private DNS domain; you can create your own domain name and set it up.

               

Python - Class Notes

 

>>> x=2345
>>> print(type(x))
<class 'int'>
>>> y=3.14
>>> print(type(y))
<class 'float'>
>>> z=1.5e2
>>> type(type(z))
<class 'type'>
>>>


The complex data type is used in mathematical programming, e.g. machine learning programming (ML).
ML - an image is captured, analyzed, and recognized.


A complex number has a real and an imaginary part:


c = (real part) + (imaginary part)j

c = 2+4j
d = 6-2j
e = -4j
f = 6+1j

j = square root of -1
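A quick interactive sketch with complex numbers:

>>> c = 2+4j
>>> print(c.real, c.imag)
2.0 4.0
>>> print(type(c))
<class 'complex'>
>>> print(c + (6-2j))
(8+2j)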


STR type

In Python everything is an object;
int, str - everything is a class.
We use built-in functions.


Advanced Python
----------------
Topics
- Collections
- Functions
- Modules & Packages - Organizing packages
- OOPS - Object oriented programming
- Exception handling (Error)
- Regular Expression in Python
- Working with files (txt, csv)
- Working with Date and Time
- Multithreading
- Database connectivity
- GUI applications
- Python for datascience (Numpy, Pandas, .. )


Collections
list
tuple
set
frozenset
dict

----------------------

A ticket # is assigned to each passenger's ticket.


t1=2345
t2=2346
t3=4564

Each ticket is assigned to an individual variable.
This is not a good way to assign the values:
we want to treat these data as a group,
so we store them together in a collection -
these data belong to the same group.

Treat these ticket numbers as a group,
say as a collection:
ticket_numbers=[2345, 2346, 4564]

Store data in a collection so we can perform operations on the whole group.

Data stored in individual variables is stored at different memory locations,
not at contiguous locations, which causes performance problems.

Say:
t1=12
t2=23
t3=34
What happens in memory? They are stored at different locations;
it has to look at a different location for each variable, and performance will be slow.

Now say:
ticket_numbers=[12, 23, 34]
What happens in memory?
In memory, these are stored as a collection, at contiguous memory locations.

 

 


We can do searching,

Does a list belong to mutable or immutable objects?
-> mutable object
Is it a sequence type?
- Yes

A list is a collection type and stores a related group of data together,
say the ticket numbers of passengers flying together.

We treat all these data as a group.

Let's say a university is offering some courses to its students:
- all courses should be treated as a group.

Treat data as a group.

course1 = "Math"
course2 = "Science"
course3 = "English"

courses = ["Math", "Science", "English"]

ticket_numbers = [ 12, 23, 34 ]

List
- a list is a mutable object.

You have to decide what type of collection you are going to use based on what that collection offers.

Choose list as the collection type
when your requirement is that you have to modify the content of the object at any time. For e.g., ticket numbers:

a passenger cancels ticket 23, and I want to remove this entry from the group of data. This object should allow me to modify it.
What kind of object allows you to modify its content?
- a mutable object
list, set - allow you to change

It should allow you to access the data and also to modify it.

Choose the collection based on what operations you want to perform on the data.

A list is a sequence type (and a mutable object);
it allows you to modify based on the index.

ticket_numbers = [ 12, 23, 24 ]
index               0   1   2
You want to remove the element at index 1 (the second element), as shown below.
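A tiny sketch of that removal:

ticket_numbers = [12, 23, 24]

ticket_numbers.pop(1)        # remove the element at index 1 (the cancelled ticket 23)
print(ticket_numbers)        # [12, 24]

# the same thing by value instead of index:
# ticket_numbers.remove(23)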
 
  Git branch show detached HEAD 1. List your branch $ git branch * (HEAD detached at f219e03)   00 2. Run re-set hard $ git reset --hard 3. ...