Parallel requests (same instant) on Apigee, with configured Spike Arrest

Hi everyone.

Reading Apigee's documentation, specifically the part explaining how the Spike Arrest policy is used, I came across the following issue:

If I set a Spike Arrest policy on my API proxy with a Rate (let's say 100 per second) and an Identifier (a user-identifying header, such as their personal ID), the way that policy works is that it allows only ONE request every 10ms (1000 / 100 = 10).
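For reference, here is a minimal sketch of the kind of policy I mean (the policy name and the user-id header are just placeholders for this example):

<SpikeArrest continueOnError="false" enabled="true" name="SA-PerUser">
  <DisplayName>SA-PerUser</DisplayName>
  <!-- Count requests separately per user; the "user-id" header is only an example -->
  <Identifier ref="request.header.user-id"/>
  <!-- 100 requests per second, which the policy smooths into small time intervals -->
  <Rate>100ps</Rate>
</SpikeArrest>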

But what if, in my solution, for performance/reuse reasons, I want to make parallel requests to my API proxy from that same customer (thus having the same <Identifier>), requests that will always arrive at the same instant/millisecond? Apigee lets only the first one through and blocks the second.

How do I achieve my business needs while utilizing the perks of that policy? I ended up concluding that, the way it works, it does not allow solutions that rely on parallel processing, but that doesn't make much sense to me, since it's a limiting factor and many architectures nowadays embrace asynchronous/parallel processing.

Thank you!

 


Spike Arrest is intended to smooth traffic, i.e. consumption of your APIs. With that said, if you're using a per-second rate, it should have 100ms buckets, not 10ms. One thing to check is that you have UseEffectiveCount set to true, so your Message Processors don't have the rate divided amongst them and use a shared counter instead.
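Roughly like this (just a sketch; keep whatever policy name and identifier you already use):

<SpikeArrest continueOnError="false" enabled="true" name="SA-PerUser">
  <Identifier ref="request.header.user-id"/>
  <Rate>100ps</Rate>
  <!-- Share one counter across Message Processors instead of splitting the rate between them -->
  <UseEffectiveCount>true</UseEffectiveCount>
</SpikeArrest>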

I didn't understand the 100ms-bucket calculation... In a scenario where I have two MPs and a Rate of 100, with UseEffectiveCount = true, it would be the following:

 

# of MPs: 2
Value of <Rate>: 100
Effective Rate per MP: 50
Aggregate Limit: 100

That is, each MP would have a bucket of 20ms (1000ms / 50 = 20ms per allowed request).

So, in the theoretical scenario where two parallel requests from the same user (Identifier) are received by the same MP within the same 20ms window, one of them would still be blocked, which doesn't let the Spike Arrest policy and parallel requests coexist properly (I might be completely wrong though, and I appreciate your response).

"How do I achieve my business needs while utilizing the perks of that policy?"

What exact perk are you referring to? The SpikeArrest policy is intended to protect the "target system" behind Apigee from spikes/traffic surges. 

Also, when using UseEffectiveCount = true, the policy will not smooth traffic but rate-limit it, as described in the docs. Have you tried that?

You could also consider using the Quota policy; have you looked into it? The Quota policy is usually recommended for rate-limiting the client consumers (while Spike Arrest is meant to protect target endpoints from surges).
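Roughly something like this (the policy name, identifier header, and numbers are only an illustration, not a recommendation):

<Quota continueOnError="false" enabled="true" name="Quota-PerUser">
  <!-- Same per-user identifier idea as in Spike Arrest -->
  <Identifier ref="request.header.user-id"/>
  <!-- Example: 6000 requests per user per minute -->
  <Allow count="6000"/>
  <Interval>1</Interval>
  <TimeUnit>minute</TimeUnit>
  <!-- Share the counter across Message Processors -->
  <Distributed>true</Distributed>
  <Synchronous>false</Synchronous>
</Quota>

Since Quota only counts requests against an interval instead of smoothing them into per-millisecond slots, two simultaneous requests from the same user would both be counted rather than one of them being rejected just for arriving at the same instant.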

Hi Pablo! Thanks for your response. The perk I was referring to is the exact one you mentioned: protecting a target system from spikes, while still allowing the scenario where two (or more) requests from the same user, received by the same MP at the same millisecond, for example, are accepted.

 

Also, yes, I have been using UseEffectiveCount = true and have considered the Quota policy as well, though I don't think it applies to my very specific scenario. Thanks for the help!