Request Queuing Implementation

Not applicable

Help needed on this,

We want to implement a request queuing process in Apigee. Below are the requirements:

1. Suppose 5 requests are being processed per minute. When a 6th request comes in, it should be held in a queue and processed once one of the 5 in-flight requests completes (see the sketch after this list).

2. How many of the requests that come to the proxy can be kept in the queue and processed later?

3. We are following the article below on the Spike Arrest buffer size. Does it fit our requirement or not?

https://github.com/apigee-127/volos/tree/master/spikearrest/common

4. If not, what is the way to do this in Apigee?
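For reference, the behaviour described in point 1 (hold the 6th request until one of the 5 in-flight requests finishes) is essentially a concurrency-limited queue rather than a rate limit. The sketch below is purely illustrative of that behaviour, implemented outside the proxy layer in a Node-style service; the names `MAX_CONCURRENT`, `handleRequest`, and `callBackend` are hypothetical placeholders, not Apigee features.

```typescript
// Illustrative only: a concurrency-limited queue, not an Apigee feature.
// MAX_CONCURRENT and callBackend are hypothetical placeholders.
const MAX_CONCURRENT = 5;

let inFlight = 0;
const waiting: Array<() => void> = [];

// Acquire a slot; if all 5 slots are busy, the caller waits in the queue.
function acquire(): Promise<void> {
  if (inFlight < MAX_CONCURRENT) {
    inFlight++;
    return Promise.resolve();
  }
  return new Promise(resolve => waiting.push(() => resolve()));
}

// Release a slot and wake the next queued caller, if any.
function release(): void {
  const next = waiting.shift();
  if (next) {
    next(); // hand the slot directly to the next waiter
  } else {
    inFlight--;
  }
}

// Wrap a backend call so the 6th concurrent request is held, not rejected.
async function handleRequest(payload: unknown): Promise<unknown> {
  await acquire();
  try {
    return await callBackend(payload); // hypothetical backend call
  } finally {
    release();
  }
}

// Hypothetical backend call, included only to keep the sketch self-contained.
async function callBackend(payload: unknown): Promise<unknown> {
  return { ok: true, payload };
}
```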


Not applicable

Spike Arrest does not put incoming messages in a queue for later processing; it simply rejects them once the limit is crossed.

Can you check if this topic helps?

Thanks for the quick response.

Actually, we are not working with a JMS queue. We want to set a request limit, and once the limit is crossed we would like to hold the excess requests somewhere, perhaps a queue (NOT a JMS queue) or a buffer, so the same requests can be processed later. Could you please help with this request-processing approach?

adas
Participant V

@suresh Please note that the Apigee Edge runtime layer is an HTTP proxy layer. It doesn't have any queueing mechanism for processing incoming HTTP requests. Spike Arrest works on the basis of in-memory counters which get incremented every time a new request is received. Once the request count hits the limit, all subsequent requests within that interval are dropped; nothing is queued.
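To make the difference concrete, counter-based throttling of the kind described above rejects the overflow rather than holding it. The sketch below is only an illustration of that behaviour, not Apigee's actual implementation; the `LIMIT`, `WINDOW_MS`, and `allowRequest` names and values are assumptions.

```typescript
// Illustration of counter-based throttling: over-limit requests are rejected,
// never queued. The limit and window below are assumed values, not Apigee's
// internal implementation.
const LIMIT = 5;
const WINDOW_MS = 60_000;

let windowStart = Date.now();
let count = 0;

// Returns true if the request may proceed, false if it should be rejected
// (e.g. with HTTP 429). Note there is no queue: a rejected request is simply
// dropped and the client must retry.
function allowRequest(now: number = Date.now()): boolean {
  if (now - windowStart >= WINDOW_MS) {
    windowStart = now; // start a new interval
    count = 0;
  }
  if (count < LIMIT) {
    count++;
    return true;
  }
  return false; // limit hit: reject, do not enqueue
}
```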

The Apigee routing layer, which is built using nginx, can apply some queue-based handling of requests based on the number of concurrent connections or concurrent requests, but it is not something you can keep tuning to your requirements. It is more of an out-of-the-box mechanism that nginx employs to manage bursts of requests. But the larger point is that an HTTP proxy is not supposed to enforce or employ any queueing mechanism.