Sunday, April 11, 2021

Day 38 - AWS - CloudWatch Logs / SNS / Alarms / Metrics

AWS - CloudWatch Logs - 4-11-2021
Class note

Today's topic
CloudWatch / SNS / Alarms / Metrics

You have multiple servers running.
To understand why a problem happened, to analyze a certain issue, or to derive some metrics, we use logs.
User logs, the system boot log, or any other logs can be analyzed.

Logs are stored on local systems and collected on a central logging system (log server), also called a remote log server.

Web servers, database servers, and app servers generate logs; other network or system devices also generate logs.

A single pane of glass view is created to view the logs.

One of the popular tools for this is Splunk.

We will be discussing CloudWatch.


Go to the AWS console and create an EC2 instance.
- create a web server
- Log location: /var/log/httpd/access_log
- We will push this log to CloudWatch.

Choose Amazon Linux -> 1 instance, tag name: web1
Allow all traffic -> create a key pair and launch.

We will send the log to the cloud.
- The client system is configured in such a way that it pushes its logs to the log server.
- We will install an agent program which collects the logs from the local system and sends them to the log server.
- CloudWatch can collect logs from different sources such as systems, network devices, or other cloud providers' systems.


Now, login to your instance
- Configure the Apache web server

# yum install httpd -y
# systemctl start httpd

# cd /var/www/html
# cat >index.html
welcome
# cat >hello.html
Test page

Log location
# cd /var/log/httpd
There is no log here yet; once a user connects, a log will be recorded.

Go to the browser and connect to the web server,
then come back and check the files:
access_log
error_log

Check the real-time logs:
# tail -f access_log
# tail -f error_log

Try some page that does not exist:

http://myip/testbest.html

Go to the error log and check; you will see this incident recorded.
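
If you prefer the terminal, you can generate the same traffic with curl (a small sketch, assuming the default port 80 and the pages created above; testbest.html intentionally does not exist):

# curl http://localhost/index.html        # adds a 200 entry to access_log
# curl http://localhost/testbest.html     # adds a 404 entry; also check error_log
# tail -n 5 /var/log/httpd/access_log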

Now, lets go to cloud watch
- Alarm
- Logs
- Metrics

Go to Logs -> Log groups
Create log group
name: weblg
retention setting: 1 month (after 1 month, the data will be removed, so plan accordingly)
or select Never expire

Click on Create to create the log group.

Now, you have to create a log stream.
Click on the log group and create a log stream.
log stream: webls

Note: they will charge you for this service.
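
As a side note, the same log group and stream can also be created from the AWS CLI (a minimal sketch, assuming the weblg/webls names above and a CLI with credentials configured):

$ aws logs create-log-group   --log-group-name weblg --region us-west-1
$ aws logs put-retention-policy --log-group-name weblg --retention-in-days 30 --region us-west-1
$ aws logs create-log-stream  --log-group-name weblg --log-stream-name webls --region us-west-1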

You have to install the agent on the EC2 instance.

Since we are using Amazon Linux and yum is already configured, we can install it directly;
for other platforms, search for the CloudWatch agent software.
# yum install awslogs

Config location

# cd /etc/awslogs
# vi awscli.conf

[plugins]
cwlogs = cwlogs
[default]
region = us-west-1        # the region you want to send logs to

Configure another file.

In CloudWatch you might have multiple log groups; that's why you have to specify which one to use.

# vi awslogs.conf

Review the example settings:

log_stream_name
log_group_name

We created our log group as weblg.
log_stream_name = webls

By default it is
log_stream_name = {instance_id}
but we change it to
log_stream_name = webls


Data in the log is time-series data.

# cat /var/log/httpd/access_log

review the log file format

Review the file awslogs.conf for the log format and other settings:
# cat awslogs.conf
[webblockname]
datetime_format = %b %d %H:%M:%S
file = /var/log/httpd/access_log
buffer_duration = 5000
log_stream_name = webls
initial_position = start_of_file
log_group_name = weblg

Start your log service
# systemctl restart awslogs
# systemctl enable awslogs

Go to CloudWatch -> log group -> log stream.

The log is not here yet.

Go back to the agent side and check.

Now, let's troubleshoot by checking the agent's own log:
# cd /var/log
# more awslogs.log
You see the file content.
You see an error: unable to locate credentials.

We know EC2 is one service and CloudWatch is a different service.

Now, we have to create a role which allows the EC2 instance to write to CloudWatch.
We have to attach the policy to the role.

Go to the AWS dashboard
IAM -> click on Roles -> Create role
Click on EC2, then Next: Permissions

In the filter, search for CloudWatchLogs.
There is a policy: CloudWatchLogsFullAccess.

Role name: ec2-cwlog

[On the EC2 instance: Actions -> Modify IAM role]

Now attach the role (with its policy) to the instance.
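
For reference, the same role setup can be done from the CLI; a rough sketch, assuming the ec2-cwlog name above (the trust policy file and instance ID are placeholders you would fill in):

$ cat > ec2-trust.json <<'EOF'
{ "Version": "2012-10-17",
  "Statement": [ { "Effect": "Allow",
                   "Principal": { "Service": "ec2.amazonaws.com" },
                   "Action": "sts:AssumeRole" } ] }
EOF
$ aws iam create-role --role-name ec2-cwlog --assume-role-policy-document file://ec2-trust.json
$ aws iam attach-role-policy --role-name ec2-cwlog \
      --policy-arn arn:aws:iam::aws:policy/CloudWatchLogsFullAccess
$ aws iam create-instance-profile --instance-profile-name ec2-cwlog
$ aws iam add-role-to-instance-profile --instance-profile-name ec2-cwlog --role-name ec2-cwlog
$ aws ec2 associate-iam-instance-profile --instance-id <your-instance-id> \
      --iam-instance-profile Name=ec2-cwlog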

Restart logs
# systemctl restart awslogsd
# cat /var/log/awslogs.log
We no longer see any errors related to credentials/access.

This means it is working; no error is seen.
To confirm, go to Logs -> Log groups -> Log streams
and refresh.
- webls
Click on webls.
You can see all the logs here.
Whatever is written to access_log on the instance shows up here, exactly as in the file.

The entire log is here. This is what we want.
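
You can also confirm from the CLI that events are arriving (a small check, assuming the weblg/webls names used above):

$ aws logs get-log-events --log-group-name weblg --log-stream-name webls \
      --limit 10 --region us-west-1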

Now, our log is centralized.
Go to Log groups
-> Actions -> Export data to Amazon S3 to keep the logs for a longer time.

Athena and Glue can be used to analyze the exported data.

Let's do it.
Click on Export
and send it to S3.

Choose the S3 bucket.
Stream prefix: mylogweb

Export.


It failed.

Hint: this is a permissions issue between CloudWatch Logs and the S3 bucket.
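
One common cause: the destination bucket needs a policy that lets the CloudWatch Logs service write to it. A rough sketch of such a policy (the bucket name is a placeholder, the region is assumed to match us-west-1 used above, and this is an assumed diagnosis, not confirmed for this particular run):

$ cat > export-policy.json <<'EOF'
{ "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow",
      "Principal": { "Service": "logs.us-west-1.amazonaws.com" },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::<your-bucket>" },
    { "Effect": "Allow",
      "Principal": { "Service": "logs.us-west-1.amazonaws.com" },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::<your-bucket>/*",
      "Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" } } } ] }
EOF
$ aws s3api put-bucket-policy --bucket <your-bucket> --policy file://export-policy.json

After applying the policy, retry the export from the console.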


CloudWatch -> Metrics -> Alarms

Now, go to CloudWatch -> CloudWatch Logs.

Open your instance and go to Monitoring:
- you will see CPU, status check failed,
- disk read/write, network info, and more.

CloudWatch monitors all of these.

Monitoring (Metrics)
- CPU
- N/W
- Block


Go to CloudWatch and click on Metrics on the left side.

Select your region;
you will see what they are monitoring.

Click on All metrics.
You will see the services with metrics:
EBS
EC2

Click on EC2.
They monitor on a per-instance basis.

Check under Metric name.


What is the status check 2/2?
- system check
- instance check

Before they provide you any service,
- they check the hardware and the instance.
Once both are good, they provide you the instance.

When the hardware is down/failed, the system status check fails.
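
Both checks can also be seen from the CLI (a quick look, assuming the instance ID is substituted):

$ aws ec2 describe-instance-status --instance-ids <your-instance-id> \
      --query 'InstanceStatuses[].{System:SystemStatus.Status,Instance:InstanceStatus.Status}'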


Go to CloudWatch -> Metrics.

CloudWatch checks CPU, memory, and other information.

Create an alarm if the usage % is more than 80%.

Go to Alarms -> set the billing alarm.

SNS -> Simple Notification Service

You attach it to CloudWatch to send notifications.


CloudWatch -> Metrics -> create a graph.
Go to Actions ->
Add to dashboard, so you can check it every day.

Give a name to your dashboard: mydashboard
Customize the widget title.

Select metrics:
select EC2 ->
metric name ->
select disk and whatever else you want to monitor.

Create widget.

Note: The dashboard can be viewed from any region, but the logs/metrics themselves are region specific.
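
A dashboard can also be created from the CLI; here is a minimal single-widget sketch (the instance ID, the CPUUtilization metric, and the widget layout are illustrative assumptions, not exactly what was clicked in class):

$ cat > dash.json <<'EOF'
{ "widgets": [
    { "type": "metric", "x": 0, "y": 0, "width": 12, "height": 6,
      "properties": {
        "metrics": [ [ "AWS/EC2", "CPUUtilization", "InstanceId", "<your-instance-id>" ] ],
        "period": 300, "stat": "Average", "region": "us-west-1",
        "title": "web1 CPU" } } ] }
EOF
$ aws cloudwatch put-dashboard --dashboard-name mydashboard --dashboard-body file://dash.json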



Plan"
service (Cloud Watch)    ---------> SNS -------> send email to -----------> Gamil
You have to subscribe to receive the email. That way, sns can send notification.


1. Create the SNS service
- go to the AWS dashboard
- click on Topics
  (there is another option, Subscriptions)
- create a topic
  - FIFO (first in, first out)
  - Standard - select Standard

email :
click on create

Now, click on Subscribe
(click on Subscriptions)
protocol: email
endpoint: your Gmail, vimal.linuxworld@gmail.com
Click on Create.

You see status: pending.
Now, go to the topic -> you see who has subscribed.

Now, go to your email and approve the subscription:

open the link and confirm.

Go back to your SNS page and refresh; now it has been subscribed.

Click on Topic -> Subscriptions.
This topic will send notifications to all subscribers.
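
The same topic and email subscription can also be created from the CLI (the topic name is an assumption; the email must still be confirmed from the inbox):

$ aws sns create-topic --name mytopic --region us-west-1
$ aws sns subscribe --topic-arn arn:aws:sns:us-west-1:<account-id>:mytopic \
      --protocol email --notification-endpoint vimal.linuxworld@gmail.com
$ aws sns list-subscriptions-by-topic --topic-arn arn:aws:sns:us-west-1:<account-id>:mytopic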

Make sure you are in the right region.

- Create alarm -> set the metric
- Select EC2 -> metrics
  NetworkPacketsOut

You can set it to keep checking at a 1-minute interval.
Statistic - Average

Write a condition:
if traffic is more than the threshold value of 600, trigger the alarm.

Click Next.
Configure actions:
- select "In alarm"
- select an SNS topic
- select the existing SNS topic


Click on "Send notification to": select an email list.



In summary:
CloudWatch -> Alarms -> Create alarm
- specify metric and conditions
- configure actions
- add name and description
- preview and create
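
The same alarm can be expressed in one CLI call; a sketch, assuming the NetworkPacketsOut metric, the 600 threshold, a 1-minute period, and the SNS topic created earlier (the topic ARN and instance ID are placeholders):

$ aws cloudwatch put-metric-alarm --alarm-name myalarmforwebform \
      --namespace AWS/EC2 --metric-name NetworkPacketsOut \
      --dimensions Name=InstanceId,Value=<your-instance-id> \
      --statistic Average --period 60 --evaluation-periods 1 \
      --threshold 600 --comparison-operator GreaterThanThreshold \
      --alarm-actions arn:aws:sns:us-west-1:<account-id>:mytopic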








Go to CloudWatch
-> click on Alarms -> Create alarm
name: myalarmforwebform

Go and try to access your page a couple of times;
an alarm is generated in CloudWatch.

Go to CloudWatch -> Alarms -> myalarmforwebform


Custom metrics
--------------------
Create your own metrics.

When something happens, create an alarm;
e.g., when some value = a given value, say 500, create one.

Click on Logs -> Log groups
-> open the log group
-> Metric filters

At the top you see Create metric filter.

Go to Metrics
-> click on the metric filter.
In the test pattern,
- select every message under (websp).


Go to Logs -> Log groups.

weblg -> create a metric filter
metric namespace: myns

Select what you want to match;
name: mycustommetricfor403


1. Create the metric filter
2. Create an alarm
Go to Log groups -> click on the log group -> click on Create alarm
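
For reference, a metric filter like this can also be created from the CLI; a sketch, assuming the weblg group above, an Apache-style space-delimited access log, and that we count HTTP 403 responses (the field names in the pattern are just labels):

$ aws logs put-metric-filter --log-group-name weblg \
      --filter-name mycustommetricfor403 \
      --filter-pattern '[ip, user, username, timestamp, request, status_code=403, bytes]' \
      --metric-transformations metricName=mycustommetricfor403,metricNamespace=myns,metricValue=1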


Athena also has the capability to get logs from CloudWatch as well.

Go to Athena -> select a different data source
-> Connect to data source.

