Soft Rate Limiting in Apigee X

Is there a way in Apigee X to apply a soft rate limit (an un-enforced limit)? Basically, when the rate limit is reached, requests are not denied; instead, some form of error log or notification (like an email) is generated internally. This is to ensure that an admin/provider is aware of such traffic from a particular app developer.

Solved
2 ACCEPTED SOLUTIONS

In Apigee Edge, you can use the Quota policy with a continueOnError='true'. This will set context variables indicating how many calls have been consumed, and how many are remaining. You can then examine the context variables and take some action when the "calls available" goes to zero or negative.

In Apigee X / hybrid, you can do the same thing.
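As a rough sketch of that idea (the policy name, counts, identifier variable, and step name below are all illustrative placeholders, not from the original post), a "soft" Quota plus a conditional follow-up step might look like:

```xml
<!-- A Quota that counts but never blocks: continueOnError="true" means a
     quota violation does not raise a fault. Name/counts are placeholders. -->
<Quota name="Q-SoftLimit" continueOnError="true">
  <Identifier ref="client_id"/>
  <Allow count="1000"/>
  <Interval>1</Interval>
  <TimeUnit>day</TimeUnit>
</Quota>
```

```xml
<!-- Later in the flow: run a notification step only once the allotment is gone.
     The Quota policy exposes ratelimit.{policy-name}.available.count, among
     other context variables. The step name here is hypothetical. -->
<Step>
  <Name>FC-RecordQuotaExceeded</Name>
  <Condition>ratelimit.Q-SoftLimit.available.count LesserThanOrEquals 0</Condition>
</Step>
```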

The trick is, what action will the proxy take? You could send a message out to a Pub/Sub topic, set a flag in a database, etc. But then you'd need some other mechanism to "reduce" all those notifications to, say, one email per day, to avoid noise.

Imagine the scenario: the client exceeds their quota allotment by 1pm on a given day. The logic in the proxy detects that and takes some action. The client continues using the APIs, the proxy continues to notice this, and it logs an event each time. That could be hundreds or thousands of events over the course of the day. You don't want hundreds of emails going out; ideally you'd reduce all of that to a single notification: "on June 18th, client X exceeded the quota". How you do that reduction is sort of up to you. Apigee isn't set up to reduce these kinds of events into a single notice.

One way you COULD do it, now that I'm thinking about it: the logic in the API proxy could use PopulateCache to "set a flag" recording that the quota has been exceeded for that particular client/consumer. Use the <ExpirySettings>/<TimeOfDay> element to keep the cached item until "midnight", or whenever is appropriate. Then, just BEFORE midnight, run a job that goes through and READS all those cache entries. If a cache entry exists, that means the client exceeded quota during the day, and a notice should get sent out.
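A sketch of that flag-setting policy (the cache resource name, key prefix, `client_id`, and the `flagValue` source variable are my own placeholders; `flagValue` would need to be assigned earlier, e.g. via AssignMessage):

```xml
<!-- Sets a per-client "quota exceeded" flag that lives until 23:59.
     Resource name, key parts, and source variable are illustrative. -->
<PopulateCache name="PC-QuotaExceededFlag">
  <CacheResource>quota-flags</CacheResource>
  <CacheKey>
    <Prefix>quota-exceeded</Prefix>
    <KeyFragment ref="client_id"/>
  </CacheKey>
  <ExpirySettings>
    <TimeOfDay>23:59:00</TimeOfDay>
  </ExpirySettings>
  <Source>flagValue</Source>
</PopulateCache>
```

For the monthly case mentioned below, you'd swap <TimeOfDay> for an <ExpiryDate> element set to the last day of the month.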

There is also an <ExpiryDate> option. So if your quota is monthly, you could set the cache to expire on the last day of the month, and run the same "scan job" at the end of the month.

This will work regardless of scale-up/scale-down events in the cluster, because the cache is backed by L2 storage. Even if pods/MPs come and go, the cached entries should persist until the configured expiry.

It would be up to you to design the scan job so that it checks every client ID.


Instead of using rate limiting in Apigee, I think you could find a solution using Cloud Monitoring in the Google Cloud console. Try creating an alert based on the "Apigee proxy (v2)" request count metric. The alert can then be configured to send an email notification, for example.
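As a rough sketch of such an alert (the threshold, aggregation window, project, and notification channel below are placeholders, and the exact filter should be verified against your project's metrics), a policy file for `gcloud alpha monitoring policies create --policy-from-file=policy.yaml` might look like:

```yaml
# Hypothetical alert policy on the Apigee proxy (v2) request count metric.
# Threshold, duration, aggregation, and channel ID are placeholders.
displayName: "Soft quota breach - proxy request count"
combiner: OR
conditions:
  - displayName: "Requests above soft limit"
    conditionThreshold:
      filter: >
        metric.type="apigee.googleapis.com/proxyv2/request_count"
        AND resource.type="apigee.googleapis.com/Proxy"
      comparison: COMPARISON_GT
      thresholdValue: 1000
      duration: 0s
      aggregations:
        - alignmentPeriod: 86400s
          perSeriesAligner: ALIGN_SUM
notificationChannels:
  - projects/MY_PROJECT/notificationChannels/CHANNEL_ID
```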




Thanks @dchiesa1! To your first point, is there no management/analytics API that stores the per-app API counter information? Is it possible to fetch that information from an external app, say once a day, and send out a notification?


There's no management API that stores this. The information is available only within the context of a request that uses the Quota policy. So what you *could* do is invoke an API that wraps a Quota policy using the same identifier, and then read the context variables. But there's no built-in way to facilitate that.


Thanks @dknezic! This sounds like a great approach. I haven't had a chance to play around with Cloud Monitoring yet; I will give this a try. Thanks again!