SpikeArrest policy - are requests queued?


"If you specify a rate limit of 100 calls per second, only 1 call every 1/100 second (10 ms) will be allowed on the message processor. A second call within 10 ms will be rejected."

(https://docs.apigee.com/api-platform/develop/comparing-quota-spike-arrest-and-concurrent-rate-limit-policies)

I read this quote as meaning that no queuing of requests is performed behind the scenes. Is that correct?
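To make my reading concrete, here is how I picture the documented behavior. A minimal sketch of the "minimum spacing" interpretation, not Apigee's actual implementation:

```python
import time

class SpikeArrestSketch:
    """Illustrative model of the documented behavior: a rate of
    100 calls/second becomes a minimum gap of 10 ms between calls.
    Calls arriving inside the gap are rejected, never queued."""

    def __init__(self, rate_per_second: float):
        self.min_interval = 1.0 / rate_per_second  # 100/s -> 0.01 s
        self.last_allowed = float("-inf")

    def allow(self) -> bool:
        now = time.monotonic()
        if now - self.last_allowed >= self.min_interval:
            self.last_allowed = now
            return True   # request proceeds to the backend
        return False      # rejected with an error (a SpikeArrestViolation fault)

gate = SpikeArrestSketch(rate_per_second=100)
print(gate.allow())  # True  - first call is allowed
print(gate.allow())  # False - second call within 10 ms is rejected
```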

ACCEPTED SOLUTION

@Raffael2020, if you are asking if the requests are queued and reprocessed in the next time window, then NO. The requests are just dropped with an error. There is no queueing of requests.


8 REPLIES


Is it possible to implement request queuing with Apigee? That is, requests would be routed through the proxy either at a specified frequency or as fast as possible, instead of being dropped.

If not, is this on the roadmap? I would guess it would be quite useful for applications that expect bursts of requests, for example when an Apigee proxy serves as an IoT endpoint.
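To be concrete, the behavior I am asking about would look something like this if built outside of Apigee: a buffer that absorbs bursts and forwards requests at a fixed pace instead of dropping them. A minimal sketch; `forward_to_backend` is a hypothetical stand-in for the real proxy call:

```python
import queue
import threading
import time

def forward_to_backend(request: str) -> None:
    """Hypothetical stand-in for forwarding a request to the backend."""
    print(f"forwarding {request}")

# The buffer absorbs bursts; only a completely full buffer would drop.
buffer: "queue.Queue[str]" = queue.Queue(maxsize=10_000)

def drain(rate_per_second: float) -> None:
    """Forward buffered requests at a fixed pace instead of dropping them."""
    interval = 1.0 / rate_per_second
    while True:
        request = buffer.get()        # blocks until a request is available
        forward_to_backend(request)
        time.sleep(interval)          # pace the backend at the target rate

threading.Thread(target=drain, args=(100,), daemon=True).start()

# Producers enqueue instead of hitting the backend directly.
for i in range(5):
    buffer.put(f"request-{i}")
time.sleep(0.1)  # give the drain thread time to forward the burst
```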

Apigee is capable of processing the requests at a faster rate. However, the idea behind spike arrest is to prevent a flood of requests from, say, bots, and so protect the backend service from being brought down. Secondly, queuing the requests and reprocessing them might not help either. Imagine a bot sending 100 requests per second. If you have spike arrest implemented with queuing, you will still process those fictitious requests, and the actual requests from a genuine consumer might still be blocked in the queue.

I do not know whether this feature is on the roadmap; these are just my thoughts on it.
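To put that concern in numbers: if the queue drains at the protected rate and a bot alone fills that rate, a genuine request only ever sees bot traffic ahead of it, so queuing merely converts rejections into ever-growing delay. A back-of-envelope sketch:

```python
drain_rate = 100          # requests/second the backend is protected to
bot_rate = 100            # requests/second the bot keeps submitting
queued_ahead = 1_000      # bot requests already sitting in the queue

# Arrivals match the drain rate, so the backlog never shrinks; a genuine
# request must wait for everything queued ahead of it.
wait_seconds = queued_ahead / drain_rate
print(f"genuine request waits {wait_seconds:.0f} s")  # 10 s

# If the bot submits even slightly faster than the drain rate, the
# backlog - and with it the wait - grows without bound.
```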

> Apigee is capable of processing the requests at a faster rate.

No doubt about that. But not necessarily the backend. A queue might help buffer requests.

> Imagine a bot sending 100 requests per second. If you have spike arrest implemented with queuing, you will still process those fictitious requests, and the actual requests from a genuine consumer might still be blocked in the queue.

But SpikeArrest won't help with that either, since it blocks not only the bot's requests but also those of genuine consumers.

The scenario I had in mind was IoT sensors sending data to an API. Say 100,000 devices send data every five minutes; on average that is 333 requests per second. Now suppose the backend can handle 1,000 requests per second, so you set the spike arrest to that. Due to correlation/synchronization effects, though, that limit might still be exceeded on a regular basis. The question is whether Apigee can provide a solution that does not drop requests, or drops them only in extreme cases.
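For a sense of scale, the worst case in this scenario, all devices synchronizing into the same second, is easy to size with the numbers above (a back-of-envelope sketch):

```python
devices = 100_000
report_interval_s = 5 * 60     # each device reports every five minutes
backend_capacity = 1_000       # requests/second the backend can handle

average_rate = devices / report_interval_s
print(f"average load: {average_rate:.0f} req/s")     # 333 req/s

# Fully synchronized worst case: all devices fire in the same second.
backlog = devices - backend_capacity                 # requests to buffer
drain_time = backlog / backend_capacity              # seconds to catch up
print(f"buffer ~{backlog:,} requests, drained in ~{drain_time:.0f} s")
# buffer ~99,000 requests, drained in ~99 s
```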

Or is that scenario simply not a proper use case for Apigee, so that a custom solution using IoT/queuing services in GCP or AWS would have to be applied instead?
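On GCP, such a custom solution would typically mean devices publishing to Cloud Pub/Sub and a worker draining the subscription at a controlled pace toward the backend. A minimal sketch, assuming placeholder project/subscription names and a hypothetical `send_to_backend` forwarder:

```python
import time
from google.cloud import pubsub_v1  # pip install google-cloud-pubsub

PROJECT = "my-project"            # placeholder project ID
SUBSCRIPTION = "sensor-readings"  # placeholder subscription name

def send_to_backend(payload: bytes) -> None:
    """Hypothetical forwarder that calls the actual backend API."""
    ...

subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path(PROJECT, SUBSCRIPTION)

def callback(message: pubsub_v1.subscriber.message.Message) -> None:
    send_to_backend(message.data)
    message.ack()
    time.sleep(0.001)  # crude pacing toward ~1000 req/s per worker

# Flow control caps in-flight messages, so the backend is never offered
# more than it can absorb; bursts wait in Pub/Sub instead of being dropped.
flow = pubsub_v1.types.FlowControl(max_messages=100)
future = subscriber.subscribe(sub_path, callback=callback, flow_control=flow)
future.result()  # block forever, processing as messages arrive
```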

@Raffael2020, I agree that both genuine and fictitious requests will be dropped with spike arrest, although the intention here is to protect the backend.

Right now, it seems like a custom solution would be needed.

> Right now, it seems like a custom solution would be needed.

Is there a limit to how many requests per time span Apigee can handle? Or is the sky the limit as long as the backend can keep up?

There is nothing specifically documented as a message-processing limit for Apigee. It depends on how much the backend can handle.

That is not really answering the question, though 🙂 Just because no limit is documented doesn't mean the system can handle anything; there must be some SLA, or at least experience with customers. Would you be comfortable having a million requests per second unleashed onto some Apigee proxy?