Spike Arrest (Message Processor (MP) Issue)

Hi,

I have a scenario where I have dynamic (auto-scaling) Message Processors and I want to set UseEffectiveCount to true. In this situation the distribution of requests across MPs will be uneven, so the effective count fluctuates and some requests are rejected even before the specified rate has actually been reached.

Is there anything we can do to prevent this?
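For reference, the policy is roughly the following (a minimal sketch; the policy name is illustrative and the rate is just an example value):

<SpikeArrest name="SA-Throttle">
  <!-- allow 300 requests per minute in total -->
  <Rate>300pm</Rate>
  <!-- distribute the configured rate across the currently active message processors -->
  <UseEffectiveCount>true</UseEffectiveCount>
</SpikeArrest>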


As requests will be picked up by the message processors at random, there's nothing you can do to stop this from happening.

What value are you using for your rate? E.g. if you're using a per-second rate, switching to a per-minute rate would give more flexibility for your use case, but on the flip side you may end up allowing more requests than wanted.
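For example, the two rate formats look like this (illustrative values; the intervals are the approximate smoothing the runtime applies):

<!-- per-second rate: smoothed into intervals of roughly 1/rate seconds -->
<Rate>10ps</Rate>

<!-- per-minute rate: smoothed into intervals of roughly 60/rate seconds -->
<Rate>300pm</Rate>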

Actually, we were trying 300tpm and we are getting 30-35 more requests. So we just wanted to know if there is something we can do to be more accurate on this.

 

Which version of Apigee are you using?

What type of identifier have you configured with the policy?

What do you mean when you say you are getting 30-35 more requests? Do you mean there are 30-35 more requests per minute than expected? 

I mean that the limit was set to 300tpm in the policy, with the value being extracted from the KVM. When we sent 600tpm of traffic, it should have passed only 300 requests since the limit is set to 300, but it allowed more than that. So we just wanted to know if we can reduce this extra count.

So far, no identifiers are being used.
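For context, the rate is read from the KVM into a flow variable and referenced by the policy, roughly like this (a sketch; the map name, key, and variable names here are made up):

<KeyValueMapOperations name="KVM-Get-Rate" mapIdentifier="ratelimit-settings">
  <Scope>environment</Scope>
  <Get assignTo="kvm.spikearrest.rate">
    <Key><Parameter>spike-arrest-rate</Parameter></Key>
  </Get>
</KeyValueMapOperations>

<SpikeArrest name="SA-Throttle">
  <!-- use the KVM value, falling back to 300pm if the variable is empty -->
  <Rate ref="kvm.spikearrest.rate">300pm</Rate>
  <UseEffectiveCount>true</UseEffectiveCount>
</SpikeArrest>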

When you tried 600tpm, I'm not sure if you mean you changed the policy to 600 or if you tried to send 600tpm to your API.

Keep in mind, Spike Arrest is intended to act as a throttle that prevents your API from attempting to process 300 requests in one go; instead it spreads the 300 out over the minute. For example, a 300pm rate is enforced as roughly one allowed request every 200 ms (60 s / 300), rather than as a counter of 300 that can be used up at any point within the minute.

It could probably help to set an identifier that's relevant to your API consumer use case, so you're not counting all requests to the API proxy against the same counter.
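For example, throttling per client app rather than per proxy could look like this (a sketch; it assumes a VerifyAPIKey or VerifyAccessToken step has already populated the client_id flow variable):

<SpikeArrest name="SA-Per-Client">
  <Rate>300pm</Rate>
  <UseEffectiveCount>true</UseEffectiveCount>
  <!-- keep a separate counter per consumer, keyed on client_id -->
  <Identifier ref="client_id"/>
</SpikeArrest>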

I totally understand your point. What I am trying to say is that we defined the policy limit as 300tpm, and we are just testing it to see what will happen if it receives 600tpm or some other unexpected traffic.

We found that the policy was allowing a few more requests than expected.

We are not focused on spreading the requests evenly across the minute; we are only focused on the number of 2XX requests every minute.