Concurrency rate limit using header attributes


Hi - We have a requirement where, based on a header value, we want to restrict the number of concurrent connections to the target service. Is it possible? I see that this is possible in the Rate Limit policy but not in the Concurrent Rate Limit policy!

Thanks,

Siddharth


@Siddharth patnaik, interesting requirement.

As far as I know, it's by design, and I believe there is a reason behind it. Here's my thinking on the same:

With the Rate Limit (Quota) policy it's possible because quotas can be applied per identifier, such as the developer app, the product, or any other flow variable.
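For contrast, here is a minimal Quota policy sketch, assuming a hypothetical header named "backend" and illustrative count/interval values; the <Identifier> element is what makes per-identifier limits possible:

<Quota async="false" continueOnError="false" enabled="true" name="Quota-Per-Backend">
    <DisplayName>Quota Per Backend</DisplayName>
    <!-- Allow 100 calls per minute per identifier value (assumed numbers). -->
    <Allow count="100"/>
    <Interval>1</Interval>
    <TimeUnit>minute</TimeUnit>
    <!-- Each distinct value of the assumed "backend" header gets its own counter. -->
    <Identifier ref="request.header.backend"/>
</Quota>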

The Concurrent Rate Limit policy is more about protecting the target server by throttling inbound connections, and it doesn't support an identifier. If setting the limit from a header value were allowed, one API call could block all other API calls by setting the number of concurrent connections to 1.

As of today, I believe it's by design. Hope it helps.

Update: The number of allowed connections doesn't support a reference variable. But you can leverage multiple Concurrent Rate Limit policies with policy conditions and the <TargetIdentifier> XML element in the policy to throttle inbound connections based on the header, or any other flow variable, at runtime. Check the detailed answer here that explains how it works. Thank you @Siddharth patnaik for asking this question. Hope your query is resolved. Keep us posted if anything remains.
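For illustration, a minimal sketch of two such policies; the policy names and TTL values are assumptions, while the counts of 2 and 3 match the use case discussed later in this thread:

<!-- Allow at most 2 concurrent connections for backend-1 traffic. -->
<ConcurrentRatelimit async="true" continueOnError="false" enabled="true" name="CRL-Backend1">
    <DisplayName>CRL-Backend1</DisplayName>
    <AllowConnections>
        <Count>2</Count>
        <!-- Interval, in seconds, after which the connection counter resets. -->
        <TTL>5</TTL>
    </AllowConnections>
    <Distributed>true</Distributed>
    <!-- A distinct identifier name keeps this counter separate from backend-2's. -->
    <TargetIdentifier name="backend-1"/>
</ConcurrentRatelimit>

<!-- Allow at most 3 concurrent connections for backend-2 traffic. -->
<ConcurrentRatelimit async="true" continueOnError="false" enabled="true" name="CRL-Backend2">
    <DisplayName>CRL-Backend2</DisplayName>
    <AllowConnections>
        <Count>3</Count>
        <TTL>5</TTL>
    </AllowConnections>
    <Distributed>true</Distributed>
    <TargetIdentifier name="backend-2"/>
</ConcurrentRatelimit>

Because the <TargetIdentifier> names differ, each policy maintains its own connection counter.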

@Siddharth patnaik, Apigee's controlled area is between the Apigee proxy and the target endpoint. Interaction between the target endpoint and other backends is beyond Apigee's control. I don't think Apigee can do anything here; you have to handle it in your backend implementation.

Hmm... why do you think so? With the concurrency control policy, Apigee will allow only 2 concurrent connections to the app server (for backend-1) and 3 concurrent connections to the app server (for backend-2). The app server need not know about this logic. That's exactly what we want!

@Siddharth patnaik, if you can create two different target endpoints, you can route the requests based on the header and set individual Concurrent Rate Limit policies for each target endpoint. Does that help?

No, it's not possible to create two target endpoints. We are a multitenant server, and from our app server we make connections to different backend systems.

I do not understand why this won't work.

Let's take an example:

The proxy gets request 1 for backend-1. It passes it to the app server, which makes a call to Appserver1.

The proxy gets request 2 for backend-1. It passes it to the app server, which makes a call to Appserver1.

The proxy gets request 3 for backend-1. Since requests 1 & 2 are still in progress, the concurrent rate limit policy kicks in and blocks this request!

The proxy gets request 4 for backend-2. It passes it to the app server, which makes a call to Appserver2.

Note: From the app server, calls to the backend servers are REST API calls executed from an HTTPConnectionPool.

So basically, what we want is to block more than 2 concurrent requests for backend-1 from being allowed from the proxy to the app server.

Does that make sense?

@Siddharth patnaik

Thank you for the additional details.

How about having two different Concurrent Rate Limit policies with different "TargetIdentifier" names in the policy and conditionally executing the policies based on the request header? I guess that should work, though I haven't tried it. Does that help?
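A rough sketch of what that could look like on the target endpoint's request flow, assuming a hypothetical header named "backend" and the policy names CRL-Backend1/CRL-Backend2 from the sketch above. The same conditional steps would also need to be attached on the response and in the DefaultFaultRule, as the example later in this thread shows, so that the counters are released:

<PreFlow name="PreFlow">
    <Request>
        <!-- Count backend-1 traffic against its own concurrency limit. -->
        <Step>
            <Name>CRL-Backend1</Name>
            <Condition>request.header.backend = "backend-1"</Condition>
        </Step>
        <!-- Count backend-2 traffic against a separate limit. -->
        <Step>
            <Name>CRL-Backend2</Name>
            <Condition>request.header.backend = "backend-2"</Condition>
        </Step>
    </Request>
    <Response/>
</PreFlow>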

@Siddharth patnaik, I have updated my answer; I hope your query is resolved. Keep us posted if anything remains. Check here for the video that explains the same.

adas

@Siddharth patnaik I am trying to understand your question a little better. Are you asking if the value for the Concurrent Rate Limit policy can be set based on a header, or if you can apply the Concurrent Rate Limit policy itself based on a header? If your question is the latter, then it's easily doable using conditions. Here's a small example:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<TargetEndpoint name="static">
    <Description/>
    <FaultRules/>
    <Flows/>
    <!-- Attached on the response so the connection counter is decremented
         when the target responds. -->
    <PostFlow name="PostFlow">
        <Request/>
        <Response>
            <Step>
                <Name>NonDistributedCRL</Name>
                <Condition>request.queryparam._test = "local_crl"</Condition>
            </Step>
        </Response>
    </PostFlow>
    <!-- Attached on the request to count the outbound connection. -->
    <PreFlow name="PreFlow">
        <Request>
            <Step>
                <Name>NonDistributedCRL</Name>
                <Condition>request.queryparam._test = "local_crl"</Condition>
            </Step>
        </Request>
        <Response/>
    </PreFlow>
    <HTTPTargetConnection>
        <Properties/>
        <URL>http://mocktarget.e2e.apigee.net</URL>
    </HTTPTargetConnection>
    <!-- Attached in the DefaultFaultRule so the counter is also released
         when the target call fails. -->
    <DefaultFaultRule name="DefaultFaultRule">
        <AlwaysEnforce>true</AlwaysEnforce>
        <Step>
            <Name>NonDistributedCRL</Name>
            <Condition>request.queryparam._test = "local_crl"</Condition>
        </Step>
    </DefaultFaultRule>
</TargetEndpoint>

In this case, all you are doing is invoking NonDistributedCRL (the concurrent rate limit policy) based on a condition that refers to a query parameter called "_test".
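The NonDistributedCRL policy itself isn't shown above; a minimal sketch of what it might look like, where the count and TTL values are assumptions:

<ConcurrentRatelimit async="true" continueOnError="false" enabled="true" name="NonDistributedCRL">
    <DisplayName>NonDistributedCRL</DisplayName>
    <AllowConnections>
        <!-- Assumed limit for this sketch. -->
        <Count>2</Count>
        <TTL>5</TTL>
    </AllowConnections>
    <!-- Non-distributed: each message processor enforces the count independently,
         rather than sharing one counter across the cluster. -->
    <Distributed>false</Distributed>
    <TargetIdentifier name="default"/>
</ConcurrentRatelimit>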

Hi - Thanks for the response. Let me describe the use case in more detail:

[Attached diagram: 3758-throttling.jpg]

From App server, we are allowed to make 2 concurrent connections to backend-1

From App server, we are allowed to make 3 concurrent connections to backend-2

In the request, we have a header value that tells us which backend to connect to for the respective request.

Is the Concurrent Rate Limit policy, with the condition you described above, the right solution? Once you confirm, we will try it out.

Hope the use case is clear now. Let me know in case you need more information.

Thanks,

Siddharth

@Siddharth patnaik, what do you mean by "App Server" in the above diagram? When you say backend-1 and backend-2, are they different target endpoints that you configure in Apigee Edge? Or are they different servers behind a load balancer?

The app server is configured as the target endpoint in Apigee. From the app server, the interaction happens with the backend servers. Hope it's clear!