Quota policy exceed count value

Hi Team,

I have a requirement to send the number of requests made so far in the final response. For this, I am using ratelimit.{policy_name}.used.count. Upon a quota violation I also need to send the count of requests, so I was planning to use ratelimit.{policy_name}.used.count + ratelimit.{policy_name}.exceed.count. However, I observed that ratelimit.{policy_name}.exceed.count stays at 1 for all subsequent requests.
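For reference, here is a rough sketch of how I surface these values in the response; the quota policy name MyQuota, the AssignMessage policy name, and the header names are just placeholders for illustration:

<AssignMessage async="false" continueOnError="false" enabled="true" name="AM-QuotaHeaders">
    <DisplayName>AM-QuotaHeaders</DisplayName>
    <Set>
        <Headers>
            <!-- Copy the read-only quota flow variables into response headers -->
            <Header name="X-Quota-Used">{ratelimit.MyQuota.used.count}</Header>
            <Header name="X-Quota-Exceeded">{ratelimit.MyQuota.exceed.count}</Header>
        </Headers>
    </Set>
    <IgnoreUnresolvedVariables>true</IgnoreUnresolvedVariables>
    <AssignTo createNew="false" transport="http" type="response"/>
</AssignMessage>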

Is there any restriction in Apigee whereby, once a QuotaViolation occurs, the exceed count is no longer maintained?

Kindly guide me on how to fetch these values, given that the quota flow variables are read-only.

Thanks in advance

@Anil Sagar @Dino @Floyd Jones Please guide us, as this issue is causing a delay in SIT.

@Madhuri Sridharan, please post the Quota policy you are using. Most probably it's due to having more than one message processor. If you set the counter to Distributed and Synchronous, you should see exact values. See the 4MV4D videos below, which explain the same.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Quota async="false" continueOnError="false" enabled="true" name="NBNQuota" type="calendar">
    <DisplayName>NBNQuota</DisplayName>
    <Properties/>
    <Identifier ref="app.name"/>
    <Allow countRef="app.NBNLimit"/>
    <StartTime>2017-2-23 12:00:00</StartTime>
    <Interval>5</Interval>
    <TimeUnit>minute</TimeUnit>
    <Distributed>true</Distributed>
    <Synchronous>true</Synchronous>
</Quota>

I want the exceed count value to increment on each QuotaViolation as well. How do I achieve this?

@Madhuri Sridharan, your policy looks fine. It's a bug. A similar issue was reported earlier here, and @Floyd Jones also mentioned the same. I am able to reproduce the issue. Let me follow up with the Engineering Team - APIRT-3910 (Jira ticket). I will keep you posted.

In the meantime, can you open a support ticket with Apigee Edge? You can escalate it by raising the ticket priority so that the Apigee team can join a call and bring it to the attention of engineering. Keep us posted on any updates.

@Anil Sagar, could you please suggest some other solution for this?

You can use a KVM to store the exceeded count value and update it from a fault rule using a KVM policy. It's resource intensive, though. You also need to clear the KVM value if you don't want to track the total exceeded count; you can reset it to zero on a 200 response. Keep us posted. A rough sketch is below.
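For example, a minimal sketch only; the map name quotaCounters, the variable custom.exceed.count, and the policy names here are assumptions you would replace with your own. You would populate custom.exceed.count yourself (for example with a JavaScript policy) before the Put step:

<!-- Fault rule in the proxy endpoint, triggered when the Quota policy raises a fault -->
<FaultRules>
    <FaultRule name="QuotaViolationRule">
        <Step>
            <!-- Your own policy that sets custom.exceed.count to the incremented value -->
            <Name>IncrementExceedCount</Name>
        </Step>
        <Step>
            <Name>KVM-Put-ExceedCount</Name>
        </Step>
        <Condition>(fault.name = "QuotaViolation")</Condition>
    </FaultRule>
</FaultRules>

<!-- KVM policy that writes the incremented count back to the map, keyed by app name -->
<KeyValueMapOperations async="false" continueOnError="false" enabled="true" name="KVM-Put-ExceedCount" mapIdentifier="quotaCounters">
    <Scope>environment</Scope>
    <Put override="true">
        <Key>
            <Parameter ref="app.name"/>
        </Key>
        <Value ref="custom.exceed.count"/>
    </Put>
</KeyValueMapOperations>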

@Anil Sagar I tried a custom counter using the Put operation in a KVM policy, and it was working fine. But when multiple requests are triggered at the same time, there is a lag between the Get and the Put, which is why the exact count cannot be maintained.

That is, when two requests are triggered at the same time, the first request gets the current value from the KVM, increments it, and then puts it back into the KVM. Before this completes, the second request's Get executes and fetches the old counter value. How do I deal with this, given that in a production environment any number of requests may arrive concurrently?
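To illustrate, the fault rule steps I attached look roughly like this (the policy names are just illustrative):

<!-- Steps executed when the QuotaViolation fault fires -->
<Step>
    <Name>KVM-Get-Counter</Name>      <!-- reads the current count into a variable -->
</Step>
<Step>
    <Name>JS-Increment-Counter</Name> <!-- adds 1 to the value that was read -->
</Step>
<Step>
    <Name>KVM-Put-Counter</Name>      <!-- writes the incremented value back -->
</Step>
<!-- If a second request runs its Get before this Put completes,
     it reads the stale value and one increment is lost. -->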

@Madhuri Sridharan, that's true. It's due to the way Cassandra (where the counters are stored) works. I am not sure of any other way. Did you get a chance to follow up with support, since this is causing a delay in SIT?

Thanks @Anil Sagar. Will check. Meanwhile, if you come across a solution for this, please share.

Sure will do @Madhuri Sridharan

@Madhuri Sridharan,

It looks like this is expected behaviour as of today. Please find the engineering team's response below:

Both the flow variables "ratelimit.exceed.count" and "ratelimit.total.exceed.count" will always show the total number of rejected requests as 1 (after the rejection phase kicks in). These variables are maintained for backward compatibility. In the current design, the exceeded count cannot be maintained; maintaining it would affect the functionality of the quota reset flow.

We will keep you posted if there are any updates.

Thanks @Anil Sagar. Can you let me know how to configure the time zone for the quota refresh? I want the refresh to happen at the configured <StartTime> in the AEST/AEDT time zone.

@Madhuri Sridharan, please post this as a new question. Thank you.