Limiting concurrent requests per token


Hello,

We need to limit the number of concurrent requests sent per customer (token) to our API. How can we implement this via Apigee? The concurrent rate policy does not provide a method to limit per customer.

thanks,

Yaniv

7 REPLIES

If your requirement is to throttle the number of requests processed by an API proxy and sent to a backend, use the Spike Arrest policy.

e.g.:

<SpikeArrest async="false" continueOnError="false" enabled="true" name="Spike-Arrest-1">
    <DisplayName>Spike Arrest</DisplayName>
    <!-- Smooth traffic to at most 30 requests per second -->
    <Rate>30ps</Rate>
    <!-- Apply the rate per client; the referenced variable should hold the customer's token -->
    <Identifier ref="YOUR_TOKEN_ATTRIBUTE_HEADER"/>
</SpikeArrest>


Can you elaborate more on your usecase ?


Spike Arrest provides request rate limiting; however, we are looking to limit the number of requests that are currently being processed in the backend for a specific token.

For example:

We would like to allocate 2 simultaneous requests to user A, so there will be no more than 2 connections open to the backend for that user at any time.

Sorry, your question "We need to limit the number of concurrent requests sent per customer (token) to our API." made me think that you wanted to limit the request rate to APIs at the Apigee layer. Thanks for the clarification. However, I don't think there is a straightforward way to implement this requirement.


The Quota policy is the best bet in this case. It lets you allocate a different quota for the number of requests per second allowed for each user/org.

The way to do this would be to create an app for each user/org and add a custom attribute to each app holding that org's request limit. Then, in the Quota policy, reference these attributes to enforce the limit for each user/org.
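For illustration, a Quota policy along these lines might look like the following. The custom attribute name (`quota.limit`), the VerifyAPIKey policy name (`Verify-API-Key`), and the location of the consumer key are all assumptions; adjust them to match your proxy.

```xml
<Quota async="false" continueOnError="false" enabled="true" name="Quota-Per-App">
    <DisplayName>Quota Per App</DisplayName>
    <!-- countRef reads the per-app limit from an assumed app custom attribute
         named "quota.limit", populated by a VerifyAPIKey policy named
         "Verify-API-Key"; count="100" is the fallback if it is unset -->
    <Allow countRef="verifyapikey.Verify-API-Key.quota.limit" count="100"/>
    <Interval>1</Interval>
    <TimeUnit>second</TimeUnit>
    <!-- Count the quota separately per consumer; assumed to arrive as a query param -->
    <Identifier ref="request.queryparam.apikey"/>
    <!-- Share the counter across message processors and enforce it synchronously -->
    <Distributed>true</Distributed>
    <Synchronous>true</Synchronous>
</Quota>
```

Note that this still limits the request *rate* per interval, not the number of requests simultaneously in flight.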


The Quota policy does not meet our requirement since each request can have a different processing time, and if a user sends all their requests at the same time (while still staying within the quota) they can exceed the number of concurrent requests allowed on the backend.

For example: if we allocate 1 concurrent request to a token, the client should not be able to send another request until the first request has been completed.

Another example: suppose we allocate 2 concurrent requests in our backend for each user, and 2 requests from the same token (user) enter Apigee:

1. Request that will be processed for 5 seconds

2. Request that will be processed for 30 seconds

If a third request from the same token enters Apigee while both requests above (#1 and #2) are still being processed by the backend, it should be rejected.

Hi @Yaniv Yemini, I don't think there is a built-in policy that achieves exactly what you want.
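If a best-effort approximation is acceptable, one workaround is to keep a per-token in-flight counter in the environment cache: read and increment it on the request path, reject with a fault once the limit is reached, and decrement it in the response flow (including fault responses). This is only a sketch; the cache resource, policy names, and header are all hypothetical, and because the read-then-write is not atomic, two simultaneous requests can race past the limit.

```xml
<!-- 1. Request flow: read the in-flight counter for this token -->
<LookupCache async="false" continueOnError="false" enabled="true" name="LC-Read-Inflight">
    <CacheResource>inflight-cache</CacheResource>  <!-- assumed cache resource -->
    <Scope>Exclusive</Scope>
    <CacheKey><KeyFragment ref="request.header.token"/></CacheKey>
    <AssignTo>inflight.count</AssignTo>
</LookupCache>

<!-- 2. A JavaScript step then computes two flow variables:
        inflight.next = (inflight.count or 0) + 1
        inflight.over = "true" when inflight.next exceeds the allowed limit -->

<!-- 3. Reject when the limit is already reached
        (attach with Condition: inflight.over = "true") -->
<RaiseFault async="false" continueOnError="false" enabled="true" name="RF-Too-Many">
    <FaultResponse>
        <Set>
            <StatusCode>429</StatusCode>
            <ReasonPhrase>Too Many Concurrent Requests</ReasonPhrase>
        </Set>
    </FaultResponse>
</RaiseFault>

<!-- 4. Otherwise store the incremented counter; the expiry bounds how long
        a leaked counter lingers if a decrement is ever missed -->
<PopulateCache async="false" continueOnError="false" enabled="true" name="PC-Write-Inflight">
    <CacheResource>inflight-cache</CacheResource>
    <Scope>Exclusive</Scope>
    <CacheKey><KeyFragment ref="request.header.token"/></CacheKey>
    <ExpirySettings><TimeoutInSec>60</TimeoutInSec></ExpirySettings>
    <Source>inflight.next</Source>
</PopulateCache>

<!-- 5. Response flow (and fault rules): write back the decremented count
        with another PopulateCache in the same way -->
```

Treat the limit as advisory only; if you need a hard guarantee, it would have to be enforced in the backend itself.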