How does Concurrent Rate Limit policy work?


I have two questions from a customer regarding the Concurrent Rate Limit policy.

1. Why is the policy designed to protect the backend service only?

I understand that this policy is attached to the Target Endpoint, as described at

http://apigee.com/docs/api-services/content/rate-limiting

What is the reason it cannot be placed on the Proxy Endpoint like the Quota policy and Spike Arrest policy? In other words, don't we also need to protect Apigee Edge itself with a limit on simultaneous connections?

2. How does it count simultaneous connections?

Does this policy count simultaneous connections to the same backend service across all the API proxies in an org, and is that total compared against the 'count' attribute defined in any one of the policies in those API proxies?

If not, what is the scope over which the number of connections is tallied?


4 REPLIES


#1

Quota and Spike Arrest work at the application layer on requests, whereas Concurrent Rate Limit works at the connection/network layer. I believe Edge has a system-level property where you can apply a similar setting, but that doesn't make sense to expose as a policy for each resource. However, you can control the concurrency on a particular resource using a load balancer such as HAProxy if that is really needed.

#2

The ConcurrentRatelimit policy lets the application control the number of concurrent connections/requests made to the target (API proxy to target) at a given point of execution. The connection count can be enforced either per message processor or distributed across pods/regions. In the distributed case, the connection count/limit information is stored and accessed in Cassandra as counters.
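
As a rough illustration (the policy name and the count/ttl values below are made up, not taken from any real proxy), the policy definition might look something like this:

<ConcurrentRatelimit name="CRL-LimitBackendConnections">
  <!-- Illustrative values: maximum concurrent connections allowed to the target;
       ttl (in seconds) frees a counter slot if a response never comes back -->
  <AllowConnections count="200" ttl="5"/>
  <!-- true: the counter is shared across message processors (backed by Cassandra);
       false: each message processor keeps its own local counter -->
  <Distributed>true</Distributed>
  <!-- Identifies the target endpoint whose connections are counted -->
  <TargetIdentifier name="default"/>
</ConcurrentRatelimit>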

Thank you very much for the answers.

Regarding #2, how can we configure the connection counter to work at the per-message-processor level or distributed across pods/regions? Is there an attribute or system property for that?


Hi @Toshihiro Shibamoto,

Please refer to http://apigee.com/docs/api-services/reference/concurrent-rate-limit-policy for more information.

<AllowConnections> is used to configure the count, and <Distributed> enables the distributed rate limit.
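
To make that concrete, here is a rough sketch of how such a policy might be attached on the Target Endpoint (the policy, endpoint, and backend names are only illustrative). As far as I remember from the reference, it should be attached to the target request flow, the response flow, and the DefaultFaultRule so that the counter is also decremented when the target call fails:

<TargetEndpoint name="default">
  <DefaultFaultRule name="DefaultFaultRule">
    <!-- Decrement the counter even when the target call errors out -->
    <Step><Name>CRL-LimitBackendConnections</Name></Step>
    <AlwaysEnforce>true</AlwaysEnforce>
  </DefaultFaultRule>
  <PreFlow name="PreFlow">
    <Request>
      <!-- Count the connection before calling the backend -->
      <Step><Name>CRL-LimitBackendConnections</Name></Step>
    </Request>
  </PreFlow>
  <PostFlow name="PostFlow">
    <Response>
      <!-- Release the connection slot after the backend responds -->
      <Step><Name>CRL-LimitBackendConnections</Name></Step>
    </Response>
  </PostFlow>
  <HTTPTargetConnection>
    <!-- Example backend URL -->
    <URL>http://backend.example.com</URL>
  </HTTPTargetConnection>
</TargetEndpoint>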

I understood how it's configured. Thank you so much.