Spike Arrest - Identifying and denying duplicate requests

Hi,

We have a scenario where, in a span of 1-2 seconds, we get a spike of incoming requests to our API that are essentially duplicates (we define a duplicate as a request carrying the same correlationId header).

I tried to fix this by creating a Spike Arrest policy that uses request.header.correlationId as the identifier, with:

<Rate>1ps</Rate>
<UseEffectiveCount>true</UseEffectiveCount>

But this is not working as expected: by the time the Spike Arrest policy arrests the spike, about 10-11 duplicate requests have already gone through. (I even removed the identifier from the policy to make it more generic, but the behaviour is the same.)

What we want here is an effective policy that does not allow more than 1 request per second, based on a certain identifier.
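For reference, the configuration described above, written out as a complete policy, might look like the following (the policy name is illustrative; the elements are standard Apigee SpikeArrest syntax):

```xml
<SpikeArrest name="SA-DedupeByCorrelationId">
  <!-- Count requests per correlationId header value -->
  <Identifier ref="request.header.correlationId"/>
  <!-- Allow at most 1 request per second per identifier -->
  <Rate>1ps</Rate>
  <!-- Attempt to share the count across message processors -->
  <UseEffectiveCount>true</UseEffectiveCount>
</SpikeArrest>
```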

If someone can suggest anything, it will be greatly appreciated.

5 REPLIES

Not applicable

As you want to avoid duplicates, I would suggest trying 1pm with the correlationId as the identifier. This should restrict all duplicate correlation ids.

1pm won't help us, as we want the user to be able to re-send the request (to retry) after 30 seconds. What we need here is per-second-level control. You could argue for 2pm here, but the main issue is the bombardment of multiple duplicate requests within a single second.

You are seeing this because the spike arrest counters are not synchronized across the message processors (MPs).

Thanks, I suspected that as well. But does it mean that such a requirement can't be fulfilled in a scenario where we have multiple MPs? What's the point of having a "per-second rate + UseEffectiveCount" feature then?

Theoretically we can achieve it, but, again, network latency and similar factors come into the picture.
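The behaviour described in the replies (each MP enforcing the rate with its own counter, synchronized only asynchronously) can be sketched with a toy model. This is not Apigee's actual implementation, just an illustration of why a burst spread across N MPs can let roughly N requests through before any counter synchronization happens:

```python
# Toy model (not Apigee internals): each message processor (MP) keeps its
# own spike-arrest counter for the current second, and counters are only
# synchronized asynchronously. During a sub-second burst, each MP admits
# up to its local per-second allowance independently.

from itertools import cycle

def simulate_burst(num_mps, num_requests, rate_per_sec=1):
    # Per-MP count of requests admitted in the current second.
    local_counts = [0] * num_mps
    admitted = 0
    mp_iter = cycle(range(num_mps))  # round-robin load balancing
    for _ in range(num_requests):    # all requests arrive within one second
        mp = next(mp_iter)
        if local_counts[mp] < rate_per_sec:
            local_counts[mp] += 1
            admitted += 1            # duplicate slips through on this MP
        # else: this MP's local counter rejects the request
    return admitted

# A burst of 12 duplicates across 4 unsynchronized MPs admits 4, not 1.
print(simulate_burst(num_mps=4, num_requests=12))  # 4
```

With a single MP the model admits exactly 1 request per second, which is the behaviour the original question expects; the gap between the two runs is the cross-MP synchronization lag the replies point to.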