Circuit Breaker to block failed calls

We have a requirement in our Apigee platform to block calls from a consumer if those calls keep failing. Our quota policy protects the backend, but it does not protect the Apigee platform itself. If a consumer sends a large number of failed calls to Apigee, the memory space used by analytics fills up, which ultimately brings down the analytics components. We are trying to create a circuit breaker that blocks a consumer from sending any calls to Apigee for a certain time if a large number of their calls fail. Is there anything we can do to implement this through a load balancer, or in some other way?



Have you added a Statistics Collector policy that sends large custom properties to Postgres? Any request that reaches the message processor is pushed to Qpid, so analytics collection cannot simply be stopped as you ask. To mitigate the issue, you can increase the disk size of the Postgres server or purge the Postgres database more frequently. Also have a look at the Postgres logs; if they are the cause of the space issue, you can rotate them and move old logs to different storage.
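For the log rotation suggestion, a hypothetical logrotate rule might look like the following (the log path is an assumption; adjust it to your installation):

```
# Hypothetical logrotate rule for Postgres logs -- the path below is an
# assumption, not a confirmed Apigee install path.
/opt/apigee/var/log/apigee-postgresql/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}
```

Compressed rotated logs can then be copied to cheaper storage on a schedule so they stop competing with the Postgres data directory for disk space.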

We are trying to create a circuit breaker that blocks a consumer from sending any calls to Apigee for a certain time if a large number of their calls fail.

There are multiple steps you can take.

If the consumer is simply launching a denial-of-service attack, you can put a bot blocker or WAF such as Cloud Armor, Imperva, or Akamai in front of Apigee.

If the consumer is a non-malicious but buggy client, for example one repeatedly retrying to get a token, you can require a proof-of-work or a similar challenge before serving it.
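As an illustration of the proof-of-work idea (a toy sketch, not an Apigee feature; the DIFFICULTY value and hash scheme are assumptions), the server hands out a random challenge, and the client must spend CPU finding a nonce before its request is accepted, which naturally throttles a buggy client that retries in a tight loop:

```python
import hashlib
import os

# Toy proof-of-work: the client must find a nonce such that
# SHA-256(challenge + nonce) starts with DIFFICULTY zero hex digits.
# DIFFICULTY is an illustrative value, not a recommended setting.
DIFFICULTY = 4

def issue_challenge():
    """Server side: hand out a random challenge string."""
    return os.urandom(8).hex()

def solve(challenge):
    """Client side: brute-force a nonce that satisfies the difficulty."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * DIFFICULTY):
            return nonce
        nonce += 1

def verify(challenge, nonce):
    """Server side: one cheap hash to check the client's work."""
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * DIFFICULTY)
```

Verification costs the server one hash, while solving costs the client thousands on average, so the asymmetry punishes tight retry loops without blocking well-behaved clients.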

the memory space used by analytics fills up, which ultimately brings down the analytics components.

That is a separate problem which you need to address. You'll need to scale the data collection portions of your Apigee deployment in order to handle unexpected loads.
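To make the original requirement concrete, here is a minimal sketch of the kind of per-consumer circuit breaker the question describes (the thresholds FAILURE_THRESHOLD, WINDOW, and COOLDOWN are illustrative assumptions, not Apigee settings); in practice this logic would sit in front of Apigee, for example in the load balancer tier or an edge service:

```python
import time
from collections import defaultdict

# Illustrative thresholds -- tune these for your traffic.
FAILURE_THRESHOLD = 5   # failures inside WINDOW that trip the breaker
WINDOW = 60             # seconds over which failures are counted
COOLDOWN = 300          # seconds a tripped consumer stays blocked

_failures = defaultdict(list)   # consumer_id -> failure timestamps
_blocked_until = {}             # consumer_id -> time the block expires

def record_failure(consumer_id, now=None):
    """Record one failed call; trip the breaker if the threshold is hit."""
    now = time.time() if now is None else now
    timestamps = _failures[consumer_id]
    timestamps.append(now)
    # Keep only failures inside the sliding window.
    _failures[consumer_id] = [t for t in timestamps if now - t <= WINDOW]
    if len(_failures[consumer_id]) >= FAILURE_THRESHOLD:
        _blocked_until[consumer_id] = now + COOLDOWN

def is_blocked(consumer_id, now=None):
    """Check whether this consumer should be rejected outright."""
    now = time.time() if now is None else now
    return _blocked_until.get(consumer_id, 0) > now
```

Rejecting a blocked consumer before the call reaches the message processor is what keeps the failed traffic out of the analytics pipeline.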