Tuesday, February 1, 2022

Day 3 - Docker training notes, persistent volumes (PV)

Day 3 - docker info

Containerization
Physical machine vs Virtual Machine vs Container
PM vs VM
PM - Virtualization is enabled at the hardware level.
VM - Virtualization is enabled at the OS level
Container
     - Lightweight image
     - Isolates the software configuration
     - Easy to ship
     - Dynamic memory
     - Faster boot-up
     - Simple networking
Docker
- Client
- Engine
- Registry

-----------------------
apt install awscli -y
# which aws
# aws configure
access key:
IAM user -> Security credentials ->
get the access key
-----------------------
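`aws configure` stores what you type in two plain-text files under `~/.aws/`. A typical result (the key values below are placeholders, not real credentials):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id     = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# ~/.aws/config
[default]
region = us-west-1
output = json
```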
We created a VM and installed the Docker software.


Use a public registry - Docker Hub, which you can subscribe to for free.
On AWS, you can configure a private registry using ECR:
- go to ECR -> Getting started .. next, next

Docker tag - tag the image (version) and push it to the registry


docker tag ...
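The general shape is `docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]`. A sketch that builds an ECR-style target name from its parts (the account ID, region, and repo below are made-up placeholders); the docker commands are echoed rather than executed, so this runs even on a machine without Docker:

```shell
# Hypothetical account/region/repo -- substitute your own values.
ACCOUNT_ID=123456789012
REGION=us-west-1
REPO=avcv

# ECR registry naming convention: <account>.dkr.ecr.<region>.amazonaws.com/<repo>
TARGET="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO}:latest"

# Echoed instead of executed, so the sketch is runnable without Docker:
echo "docker tag nginx:latest ${TARGET}"
echo "docker push ${TARGET}"
```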

container registry ..

aws configure
aws ecr get-login-password --region us-west-1 | docker login --username AWS --password-stdin 3763627288.dkr.ecr.us-west-1.amazonaws.com
# docker tag d13.... 3626464.dc.de.....
# docker push 98283.dkr.ecr.us-west-1.amazonaws.com/avcv:latest

go to ecr -> repository -> avcv 
apt install awscli -y
---------------------
run docker as jobs
$ docker run <imageid> command
A job runs at a particular time.
A service keeps on running:
$ docker run image-id
@azure
-> all services 
devops.mar1.2019@gmail.com
container registry -> create container registry

-> create a virtual machine
- create -> virtual machine
- subscription
- resource group
virtual machine name
security: std
image:
size:
administrator account:
authentication type:
username: docker
pw: ***
inbound port rules:
allow selected ports: 22
leave default
monitoring disable
get the ip
$ ssh docker@IP 
$ apt update
$ systemctl status docker
# docker run -itd --name=n1 nginx:latest
pulls the image (if not already local) and starts the container.
# docker ps
service is running and port is listening.
# docker exec -it n1 bash
you are on the container with bash shell
# curl -i http://localhost:80
Here, you started a container n1, your service is started and port is listening at 80.
Now, we want to access at the docker level. How do you do that?
By default, Docker assigns one IP address to this container, on the bridge network.
# exit
# docker inspect n1
inspect the metadata of the container. Everything displayed is about n1.
Check under Networks: you will see bridge; review the IPAddress.
# docker network ls
# docker network inspect bridge
# docker stop n1
# docker run -itd --name=dummy alpine
# docker start n1
# curl -i http://172.17.0.2:80
It does not connect because the IP got changed.
# docker network inspect bridge
The IP got changed: n1 got a different IP since it was stopped (dummy took its old address).
We cannot rely on this IP address; customers may not be able to connect since it keeps changing.
How do you resolve (overcome) this kind of situation?
- You can't rely on this kind of address:
http://172.17.0.2:80
http://localhost:80
Expose the port outside.
Instead of relying on the container IP, we rely on the host port: port mapping (port forwarding)...
How does it work?
The container has a port inside, and the host has a port number outside,
such as 80 -> 8080.
By mapping each container to a different host port,
you can create n number of containers.
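Why a second container can't reuse the same host port: two processes cannot bind the same port. A docker-free sketch of that rule, using python3 sockets to stand in for the two containers (port 18080 is an arbitrary example):

```shell
# Two binds on one host port: the second fails with EADDRINUSE --
# the same reason a second `docker run -p 80:80 ...` errors out.
MSG=$(python3 - <<'EOF'
import socket
s1 = socket.socket()
s1.bind(("127.0.0.1", 18080))      # first "container" claims host port 18080
try:
    s2 = socket.socket()
    s2.bind(("127.0.0.1", 18080))  # second "container" asks for the same port
    print("second bind succeeded (unexpected)")
except OSError:
    print("port is already in use")
finally:
    s1.close()
EOF
)
echo "$MSG"
```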
Let's do the magic:
# docker stop n1
# docker rm n1
# docker run -itd -p 80:80 --name=n1 nginx:latest
# curl http://localhost:80
you will get output.
if you try
# docker run -itd -p 80:80 --name=n2 nginx:latest
error: port is already in use.
# docker rm n2
# docker run -itd -p 82:80 --name=n2 nginx:latest
# curl localhost:82
# docker run -itd -P --name=n3 nginx:latest
-P publishes the exposed port on a random host port.

# docker ps
# curl -i http://localhost:43567

How do I connect from my PC to the container?
- Instead of the container IP, we use the public IP address of the host machine and the mapped port:
13.145.123.55:81
Nothing displays? Why?
You have to open the port number in the firewall.
go to aws -> security group
or 
@azure,
VM -> Networking -> Network security group -> add inbound rule
destination port range: *

now go back to the browser and paste the ip address.
you should see the output on the browser.

============
There is another type of network - host.
Per host, if you want only a single nginx container, you can use the 'host' type of network.
# docker run -itd --network=host --name=n1 nginx
With the host network, the container shares the host's network stack, so no port mapping is needed.
# docker ps
# curl http://localhost:80
At the browser
http:IP:80
# docker run -itd --network=host --name=n2 nginx
# docker ps -a
# docker logs n2
The second container exits because port 80 is already taken on the host's network stack.
You can't create another 'host' network,
but you can create additional bridge networks:
# docker network
# docker network create mynw
# docker ps
# docker network ls
In what scenario do you use host networking?
- When you run a single container on the host.
# docker run -itd --network=mynw nginx
# docker network inspect mynw
You will see a different IP range (e.g. 172.18.0.0/16 instead of the default 172.17.0.0/16).

# docker network
you will see the output (help).
You can create, connect, disconnect, and remove networks.
By default, containers use the bridge network.

# docker network create \
    --driver=bridge \
    --subnet=172.20.0.0/16 \
    mynw2
(the subnet here is just an example)

# docker run -itd -p 8080:8080 --name=jen1 jenkins/jenkins:lts
@docker hub,
search for jenkins

@the browser
IP:8080
# docker ps
# docker logs jen1
login to jenkins
ip:8080
user: docker
pw: ***
start using jenkins
create a job and run a sample job..
build now, and others; install plugins
let's assume we just configured Jenkins and everything is done..
go back to command prompt, and run,
# docker rm -f jen1
everything is gone
lets try again 
Lets try,
# docker run -itd -p 8080:8080 --name=jen1 jenkins/jenkins:lts
You lost all configuration information. 
We should be able to store the configuration. So how do we make the data safe and secure?
Store the data securely:
anything we configure, save it.
- store your data in storage -> on the VM, or EBS (Elastic Block Store) on AWS, or Blob storage on Azure..

@azure -> search, storage -> we will use it..
# docker run -itd -p 8080:8080 --name=jen1 jenkins/jenkins:lts
A container cannot store data permanently by itself.
google "docker volume"
# docker volume ls
you see volumes but when you remove containers, these will be removed.
Lets work on volume
create a directory and store some content
# mkdir test; cd test
# cat > file1
from host machine
Now, mount it to container
# docker run -itd -v /root/test:/app --name=v1 ubuntu
-v -> volume (bind mount: host-dir:container-dir)
/app -> is the mount point inside the container
# docker exec -it v1 bash
# cd /app; cat > file2
from container
# cat file1
you see the host's content ("from host machine")
# exit
# cat file2
now, on the host, you can see the container's file
# docker exec -it v1 bash
# exit
# docker rm -f v1
# ls 
you will still see the data: removing the container does not remove the bind-mounted host directory
-----------------------
Let's try another example:
# mkdir jenk; chmod 777 jenk
# docker rm -f jen1
# docker run -itd -p 8080:8080 -v /root/jenk:/var/jenkins_home --user=jenkins --name=jen1 jenkins/jenkins:lts

/var/jenkins_home - this is the storage info for jenkins. all configures, plugins will be stored here.
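A rough idea of what ends up inside the mounted directory after the first start (exact contents vary by Jenkins version):

```
jenk/
├── config.xml                 # global Jenkins configuration
├── jobs/                      # one subdirectory per job (incl. its config.xml)
├── plugins/                   # installed plugins (*.jpi)
├── secrets/
│   └── initialAdminPassword   # first-login password
└── users/                     # Jenkins user accounts
```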
# docker ps
if everything is created properly,
# ls -ltr jenk
# cd jenk/secrets
# cat *password*
get the pw and go to the browser:
http://IP:8080
login and all plugins will be loaded
create a simple job
item name: test
pipeline
hello world
build now
done..
go back to command line
# docker stop jen1
# docker rm jen1
refresh the browser: the service is gone (but the data persisted in /root/jenk)
docker run -itd -p 8080:8080 -v /root/jenk:/var/jenkins_home --user=jenkins --name=jen1 jenkins/jenkins:lts
login, you will be able to see everything.
your job is there..
As of now, we added a physical (host-directory) volume.
How do you do it dynamically?
You can do it dynamically with Docker-managed volumes:
# docker volume create --help
# docker volume create myvol
# docker volume inspect myvol
review the output.
attach
# docker run -itd -v myvol:/app ubuntu
# docker ps
# docker exec -it <container id> bash
# df -h
# cd /app; touch best
# exit
# ls /var/lib/docker/volumes/myvol/_data
best is there: named volumes live under Docker's data root on the host
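`docker volume inspect myvol` reports where the volume lives on the host. Typical output (the timestamp varies):

```json
[
    {
        "CreatedAt": "2022-02-01T00:00:00Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/myvol/_data",
        "Name": "myvol",
        "Options": {},
        "Scope": "local"
    }
]
```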

create docker volume with elastic block storage
# docker volume create ebs....
Download a volume plugin for AWS EBS.
https://docs.docker.com/storage/volumes

clean all containers
# docker ps -a -q
all container id will be listed
# docker rm -vf $(docker ps -a -q)
# docker ps -a -q
Everything is gone.
# docker images -q
# docker rmi -f $(docker images -q)
All images are also deleted.
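The cleanup pattern works through command substitution: `docker ps -a -q` prints one container ID per line, and `$(...)` splices that list into `docker rm`'s arguments. The same shape with a docker-free stand-in function:

```shell
# Stand-in for `docker ps -a -q`: two fake container IDs, one per line.
ids() { printf '%s\n' a1b2c3 d4e5f6; }

# $(...) substitutes the output; the shell word-splits it into one
# argument per ID -- the same way `docker rm -vf $(docker ps -a -q)`
# receives every container ID at once.
ARGS=$(ids)
echo rm -vf $ARGS
```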

As of now, we covered:
- jobs
- services
- access (port mapping)
- networks
- persistence
next session:
how to build an image without manual effort.
Dockerfile
docker-compose
docker swarm
@azure
go to resource group
and delete
@aws
go to ec2 instance and stop the instance

===================================
