SQS Simple Queue Service - 2-28-2021
Class Notes
Tightly coupled
A (Program) --> data/info ---> B (Program)
- tightly coupled (sync)
If A also wants to talk to C:
A --> data/info ---> C
A --> SYNC --> C
- also tightly coupled
For every new destination you have to create a new direct connection (a new instance).
MQ - queue
So A sends the message to MQ, and then the MQ system sends the message to program B.
MQ is a concept that helps in this kind of scenario.
Say A wants to communicate to B and C:
A -> B
A -> C
Here A is called the producer; it generates the message.
B and C are consumers. There can be many other consumers, such as other databases.
Once the message is produced, it can be sent to multiple destinations.
A Message Queue is a program which has persistent storage, and messages are stored there as a queue.
This program sits in the middle of two different programs, which is why it's called middleware.
This kind of setup is called decoupled.
The reason is that A and B are not synchronized directly.
If B is down, A keeps working, capturing messages and sending them to the centralized storage (the MQ server).
Instead of an MQ, can we put a DB server here?
- It's not a good practice.
When a message is generated, the data would be stored on a DB service such as MySQL.
Since we already have a DB server for application data, why would you want to store it again?
And on top of that, it's really slow.
We need a program which stores the data and also forwards the message with no delay.
So, we want a program between A and B which can store, receive, and forward millions of records.
MQ is a program which gets messages, stores them in its own storage, processes them, and sends them to the target location.
We use these messages for some purpose.
AWS has a product for this: SQS.
- Messages can be retained for up to 14 days (the default is 4 days).
- You can change the retention time.
Amazon example: as soon as you buy any product, you get a confirmation message. The invoice may take time; maybe the mail server was down at that moment,
or there were lots of transactions going on.
MQ is middleware which we put in between two programs.
- They can be microservices.
MQ
- RabbitMQ
- Apache ActiveMQ
- Kafka
Managed MQ services
- auto scaling
- security
One of the managed MQ services from AWS is SQS.
You have two programs A and B, and information is stored in between these two programs.
ProgramA -> MQ (DB) ---> ProgramB
Program A
- Producer of message
MQ-DB
- Queue message
- Stores the message on persistent storage
Program B
- Consumer
- It keeps checking the database (MQ) for new messages.
- This checking process is called POLLing.
- It then downloads, uses, and deletes the message.
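The producer side of this flow can be sketched with boto3 (assumed installed and configured; the helper name and message body are made up for illustration):

```python
# Sketch of program A, the producer: it hands a message to the queue and
# gets back a message id. The SQS client is passed in by the caller.
def send_order_message(sqs, queue_url, body):
    """Program A: produce a message and hand it over to the queue."""
    response = sqs.send_message(QueueUrl=queue_url, MessageBody=body)
    return response["MessageId"]

# Real usage (requires boto3 and AWS credentials):
#   import boto3
#   sqs = boto3.client("sqs")
#   queue_url = sqs.get_queue_url(QueueName="myq1")["QueueUrl"]
#   send_order_message(sqs, queue_url, "order #1 placed")
```

The consumer side (poll, process, delete) is covered further down.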
Go to AWS console
- Go to SQS
Check how it works
- Click on create queue
Note: think of the queue as a database.
- You get two options
Standard
FIFO
select standard (default)
Name: myq1
Configuration
Leave default for now.
Visibility timeout: 30 sec
Message retention period: 4 days
Max message size: 256 KB
This queue is meant to exchange messages between program A and program B.
These messages are very small in size, probably half a page of text.
- The small size means less storage and faster delivery.
- Bigger messages take longer to deliver.
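The same queue settings can be created with boto3 instead of the console. A minimal sketch, assuming boto3; note that the attribute values are strings and the times are in seconds, so the console's "4 days" becomes 4 * 24 * 3600:

```python
# Sketch: the console defaults (30 sec visibility, 4 days retention,
# 256 KB max size) expressed as SQS queue attributes.
def queue_attributes(visibility_sec=30, retention_days=4, max_kb=256):
    return {
        "VisibilityTimeout": str(visibility_sec),
        "MessageRetentionPeriod": str(retention_days * 24 * 3600),
        "MaximumMessageSize": str(max_kb * 1024),
    }

# Real usage (requires boto3 and AWS credentials):
#   import boto3
#   sqs = boto3.client("sqs")
#   sqs.create_queue(QueueName="myq1", Attributes=queue_attributes())
```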
Access Policy - who is going to use it
- Only the queue owner
[Note: You can integrate with LAMBDA]
Encryption
-------------
Do you want to encrypt the in-transit data (data while transferring)?
Do you want to encrypt data at rest (in storage)? Disabled by default.
If you want to encrypt, you need keys.
For now, leave it disabled.
Dead letter queue - disable by default
As of now, we haven't changed anything except the name.
now, click on create queue.
Click on Monitoring
You see metrics from CloudWatch:
- Approximate age of oldest message
- Message received
- Message Deleted
...
This kind of service is used by developers.
They write program A and program B.
On A they write code (say, in Java) that sends the message to MQ, and from there it goes to B.
On the B side they write another program which pulls the info from MQ.
You have to test whether MQ is working properly or not.
Go to your queue
click on the name of your queue
- send and receive message
send message
Write some message and click Send message.
This message goes to the message queue;
the message is now sitting on MQ.
Now, B (the consumer) has to come and pick it up.
Go down on the page and you will see Receive messages.
Under Messages available you will see the count.
This part is the consumer section; no message has been received yet.
Now, go to queue page and refresh the page
You see the message.
go to messages
and go down to see if you have a message.
In real setups, program B would be some code, say a Python program, which goes to MQ and pulls the messages.
Click the message, you see the body of the message.
If you go to the queue and refresh:
- You still see the message.
- Our consumer only downloaded and used it; the message was not deleted.
- Close the message view and click Poll for messages again; you will see the same message again.
- The reason is that the same message is still on MQ.
Now, go back to message queue,
You see the message retention period is 4 days.
The consumer can pick the message up for up to 4 days.
If you lower the retention time to, say, 1 sec, and your consumer system is down, it may never receive the message.
Do not change it.
The idea is that the consumer has the responsibility to delete the message once it is done with it.
MQ does not delete the message until it reaches the retention period.
Google for "AWS SQS delete message".
There is an API
delete_message
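The delete_message API the notes point to can be sketched like this (a sketch, assuming boto3; the helper name is our own). Note that deleting needs the ReceiptHandle that came back from the receive call, not the MessageId:

```python
# Sketch: deleting a message a consumer has already received. The
# "message" dict is one entry from receive_message's "Messages" list.
def delete_received(sqs, queue_url, message):
    sqs.delete_message(QueueUrl=queue_url,
                       ReceiptHandle=message["ReceiptHandle"])
```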
To delete,
go to received message
select the message and click on delete.
Now you see Messages available = 0.
- Even if you poll, you will not get any message.
Visibility timeout
- It's a common architecture:
- a client connects to the front end of the producer program,
- the producer program creates and sends a message to MQ,
- and the consumer side uses it.
Client -> Public LB -> Systems A|B|C ---> MQ ---> Private LB ---> sys 1|2|3
Say a message is generated on A, goes to MQ, then through the private LB to, say, sys1.
Systems 1, 2 and 3 are consumers.
They keep polling for messages.
If all consumers poll at the same time, each of them would get a copy of the same message,
but we don't want this.
What we want is:
until the message is processed by one consumer, we don't want the others to download it.
In that case, we make the message invisible for, say, 30 sec. The message is still there, but nobody else can see it.
This is the visibility timeout.
So while system1 is handling the message, it is invisible to the others.
But say system1 takes 100 sec to process it: after 30 sec the message becomes visible again, and system2 may download it a second time.
So you have to know how long your polling and processing program takes, and set the visibility timeout based on that.
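The behavior above can be shown with a toy in-memory model (plain Python, not real SQS): once one consumer receives a message, it stays in the queue but is hidden from everyone until the timeout expires.

```python
import time

# Toy model of the visibility timeout. Each message carries the time
# until which it is invisible; receive() only returns visible messages
# and hides whatever it returns for visibility_timeout seconds.
class ToyQueue:
    def __init__(self, visibility_timeout):
        self.visibility_timeout = visibility_timeout
        self.messages = []          # list of [body, invisible_until]

    def send(self, body):
        self.messages.append([body, 0.0])

    def receive(self, now=None):
        now = time.monotonic() if now is None else now
        for msg in self.messages:
            if msg[1] <= now:                        # visible?
                msg[1] = now + self.visibility_timeout
                return msg[0]
        return None                                  # nothing visible right now

q = ToyQueue(visibility_timeout=30)
q.send("msg1")
print(q.receive(now=0))    # system1 gets "msg1"
print(q.receive(now=10))   # system2 polls 10 sec later: None (invisible)
print(q.receive(now=31))   # no delete within 30 sec: "msg1" again!
```

The last line is exactly the double-processing problem: if the consumer is slower than the timeout and never deletes, the same message comes back.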
Let's go to our queue, myq1:
Send and receive messages,
send a message,
then go to Receive messages and click Poll.
Once one consumer has received it, the others can't download it for 30 sec.
So system1 should use and delete the message within those 30 sec.
With MQ, timing is very important.
If load increases:
- Load Balancer
- Auto Scaling
If you need more RAM/CPU -> add more nodes.
If more and more messages are coming in on the consumer side,
check the MQ server and see how many messages are sitting there.
- Set up a CloudWatch metric: if more than, say, 500 messages are coming in per second, automatically launch a new system on the consumer side.
- If fewer messages are coming, remove nodes.
Go to your message queue
- Retention period
- Visibility timeout
are important.
Producer - MQ - Consumer
- The producer creates the message and stores it on MQ.
- The consumer polls the message, uses it, and deletes it.
1. Poll
2. Process
3. Delete
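The three steps can be sketched as a consumer loop against a boto3-style SQS client (a sketch; the function name is our own, and the client and queue URL are supplied by the caller):

```python
# Sketch of one pass of the consumer loop: 1. poll, 2. process, 3. delete.
def consume_once(sqs, queue_url, process):
    resp = sqs.receive_message(QueueUrl=queue_url,
                               MaxNumberOfMessages=10,
                               WaitTimeSeconds=20)       # long poll
    handled = 0
    for msg in resp.get("Messages", []):
        process(msg["Body"])                             # 2. process
        # 3. delete; otherwise the message becomes visible again after
        # the visibility timeout and gets processed a second time.
        sqs.delete_message(QueueUrl=queue_url,
                           ReceiptHandle=msg["ReceiptHandle"])
        handled += 1
    return handled
```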
What happens if the consumer forgets to delete?
After the consumer processes the message, the message is invisible for 30 sec.
All nodes keep polling.
Say the first delivery happens at:
8:15:05 -> msg1
8:15:36 -> msg1 again
Every ~30 seconds the same message is delivered again,
so effectively a loop has been created.
You have to terminate this loop.
SQS admins may see it, but they can't do anything with a customer's message.
What you can do is: after a certain number of attempts, tag this message as dead
and move it to a Dead Letter Queue (DLQ).
The message stops circulating; it is stored on that queue but not deleted.
Go to your queue and click on it.
You can see the Dead-letter queue setting;
it is right under Encryption.
You can enable or disable it.
To enable it, you first have to create another queue: the dead-letter queue.
Go to your queues and click Create queue:
Name: DLQ
Config:
Visibility timeout: 30
Message retention: 14 days  # keep it longer so developers have time to look into it
Max message size: 256 KB
Click Create queue.
Now, any message stuck on myq1 will be forwarded to the DLQ queue.
Go to myq1 and enable the dead-letter queue:
Maximum receives: 3
So on our main queue we have implemented a DLQ.
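The same wiring can be sketched with boto3 (a sketch; queue names from the notes, helper name our own). The key detail is that RedrivePolicy is a JSON string attribute on the main queue:

```python
import json

# Sketch: after maxReceiveCount failed receives, SQS automatically
# moves the message to the dead-letter queue named by its ARN.
def redrive_policy(dlq_arn, max_receive=3):
    return {"RedrivePolicy": json.dumps({
        "deadLetterTargetArn": dlq_arn,
        "maxReceiveCount": str(max_receive),
    })}

# Real usage (requires boto3 and AWS credentials):
#   sqs.create_queue(QueueName="DLQ")
#   dlq_arn = sqs.get_queue_attributes(
#       QueueUrl=dlq_url,
#       AttributeNames=["QueueArn"])["Attributes"]["QueueArn"]
#   sqs.set_queue_attributes(QueueUrl=myq1_url,
#                            Attributes=redrive_policy(dlq_arn))
```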
Go and send a message:
Message body: testing for dead
and click Send.
Go to polling;
it starts polling.
You will see Messages available; the value keeps changing.
Go to the queue and see the available messages.
After 3 receive attempts, the message goes to the dead letter queue that you created.
Dead letter queue messages are not delivered to consumers.
This is so you can analyze why the message ended up on the dead-letter queue.
There is a message option for Delivery delay:
- when a message comes in, hold it for a certain time before delivering it.
By default, Delivery delay: 0.
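Delivery delay can also be set per message when sending, via the DelaySeconds parameter (0 to 900 seconds). A sketch, assuming boto3; the helper name and range check are our own:

```python
# Sketch: SQS holds the message for delay_sec seconds before any
# consumer can see it.
def send_delayed(sqs, queue_url, body, delay_sec=0):
    if not 0 <= delay_sec <= 900:
        raise ValueError("DelaySeconds must be between 0 and 900")
    return sqs.send_message(QueueUrl=queue_url,
                            MessageBody=body,
                            DelaySeconds=delay_sec)
```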
(Side note: OpenID protocol)
Queue service + Lambda
Lambda Function
----------------------
Let's say we have an S3 bucket.
- We set an event trigger:
as soon as we put an object there, it contacts the Lambda function (an invocation).
S3 (bucket) -> triggers Lambda function (invocation) -> after success -> store on MQ -> C1 (consumer polls the message)
If the invocation fails, the failure can be routed to a failure destination instead.
You can build this complete pipeline.
S3 bucket (source) ---------------> Destination (consumer)
Let's see how we can do it.
Go to Lambda on AWS.
Open a new tab for S3 as well.
- On Lambda: Create function
Function name: function
Runtime: Python 3.8
Create.
Go to S3:
create a bucket, and create a trigger:
go to the bucket -> Properties -> set up an event notification;
you can create it here,
or go to Lambda,
Add trigger:
S3 -> bucket name,
Event type:
PUT,
and create it from here.
Whether you create it from Lambda or from S3, it's the same thing.
import json

def lambda_handler(event, context):
    # TODO implement
    print("This is a lambda test for SQS")
Go to Lambda and fire an event;
the log info is stored in CloudWatch.
Go to CloudWatch's log section; before any invocation, you won't see anything there yet.
Open your trigger and add a destination:
Destination configuration:
Source,
or go to SQS and create a new queue, lambdaQ.
Now our producer is the Lambda.
Go to the destination configuration of your Lambda and set the destination to the queue.
Note: You need an IAM role for this, but the system will automatically create one for you.
s3 -> Trigger -> Lambda -> SQS
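The Lambda in the middle of this pipeline can be sketched like this (a sketch: in a real Lambda the client would be boto3.client("sqs") created at module load and the queue URL would come from configuration; here both are injected as extra parameters, which is our own testability assumption):

```python
import json

# Sketch: S3's put event invokes this handler, which forwards the
# bucket/key of each uploaded object to SQS as a JSON message.
def lambda_handler(event, context, sqs=None, queue_url=None):
    for record in event.get("Records", []):
        body = json.dumps({
            "bucket": record["s3"]["bucket"]["name"],
            "key": record["s3"]["object"]["key"],
        })
        sqs.send_message(QueueUrl=queue_url, MessageBody=body)
    return {"statusCode": 200}
```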
Now, add a file on S3
(we have a PUT-request trigger).
Now go to the Monitoring tab on Lambda,
click View logs in CloudWatch,
refresh CloudWatch and you will see one log stream file;
you see the message there.
Go to your SQS queue and you will see the message.
Go down and poll the message,
then open the body of the message and review it.
Create another Lambda function:
Name: f1
Python 3.8
Create function.
Click on the function:
print("Triggered from SQS as consumer")
In this case you have to create a role:
SQS needs the capability to invoke Lambda (and Lambda needs permission to read from the queue).
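The consumer Lambda (f1) can be sketched like this: when SQS triggers Lambda, the polled messages arrive in event["Records"], each with a "body" field. The return value shape is our own choice; the event shape is SQS's:

```python
# Sketch of the consumer Lambda. If the handler returns without
# raising, Lambda deletes the processed messages from the queue for you.
def lambda_handler(event, context):
    bodies = [record["body"] for record in event.get("Records", [])]
    for body in bodies:
        print("Triggered from SQS as consumer:", body)
    return {"processed": len(bodies)}
```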
Purge
- empties all the messages out of a queue.
Delete
- deletes the message queue itself.
FIFO
- You want your messages processed in order.
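Sending to a FIFO queue differs slightly from Standard. A sketch, assuming boto3: the queue name must end in ".fifo", ordering is guaranteed per MessageGroupId, and a deduplication id is required unless content-based deduplication is enabled on the queue (helper name and example ids are our own):

```python
# Sketch: producing to a FIFO queue; messages with the same group id
# are delivered in order.
def send_fifo(sqs, queue_url, body, group_id, dedup_id):
    return sqs.send_message(QueueUrl=queue_url,
                            MessageBody=body,
                            MessageGroupId=group_id,
                            MessageDeduplicationId=dedup_id)
```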