Response cache policy for multiple target endpoints in multiple environments


We have three environments (prod, test, and sandbox) in Apigee, and each environment supports multiple target endpoints. The conditions for the target endpoints are defined in RouteRules after the default PostFlow, e.g.:

<RouteRule name="xyz">
 <Condition>request.header.host = "abc.com"</Condition>
 <TargetEndpoint>XYZ</TargetEndpoint>
</RouteRule>

Now, to add response caching for multiple target endpoints, I added cache policies for the different endpoints to the default PreFlow (following the Apigee tutorial doc):

<PreFlow>
 <Request>
  <Step>
   <Name>Verify-API-Key-1</Name>
  </Step>
  <Step>
   <Name>Remove-Query-Param</Name>
  </Step>
  <Step>
   <Name>cache_x</Name>
  </Step>
  <Step>
   <Name>cache_y</Name>
  </Step>
  <Step>
   <Name>cache_z</Name>
  </Step>
 </Request>
 <Response/>
</PreFlow>

But this is not working as intended. Tracing API calls shows that APIs with different target endpoints are not always fetching from the right cache.

Any suggestions on adding multiple cache policies to a single proxy that is deployed in multiple environments and has different target endpoints?


@nabilarahman Can you provide some additional info?

What are you using as the cache key in the response cache policy you defined? Are you including the host header in the key? Try adding request.header.host to the key fragments if you haven't.

Can you provide your trace file or the API proxy export to check?

What is the use case where you are trying to use multiple caches? If you do need multiple caches, set appropriate cache keys in each one.

Thanks! I've been using request.uri as the key. I'll try adding request.header.host.

So, my understanding from this Apigee tutorial video is that we need to attach the cache policy to a specific target endpoint. Since we have multiple target endpoints, I created multiple caches. Is that not the right approach?

Thanks for the configuration cut/paste. That's helpful.

I'd like more information

Can you explain your intent with "cache_x", "cache_y", and "cache_z"? Why are there three separate cache steps? What ARE those cache steps? LookupCache? PopulateCache? What is the configuration of each of these steps? What cache keys do you use? What data are you caching there? Why do you have three of them?

You said

apis with different target endpoints are not fetching from the right cache always.

What makes you say that? What did you observe that leads you to this conclusion? What did you expect to see?

Thanks for your response. cache_x, cache_y, cache_z, etc. were added when I used the Edge UI +Step button in the default PreFlow to add Response Cache policies for the x, y, z target endpoints. I followed the instructions here - https://docs.apigee.com/api-platform/reference/policies/response-cache-policy
I didn't do any configuration on those steps.

The key used is - <KeyFragment ref="request.uri" type="string"/>

I used ResponseCache since for some of our APIs the backend data is updated only periodically.

There are multiple caches since my understanding from reading the doc is that we need to attach a cache policy to a specific target endpoint. Since we have multiple target endpoints, I created multiple caches.

In our prod environment, there are two target endpoints, A and B. I added a response cache policy, cache_a, to target endpoint A. Then I ran a trace to check whether responses were faster. It was working fine for target A, but API calls to B were also going through cache_a and getting faster responses, even though I hadn't applied any cache policy to target B yet.

@nabilarahman You mentioned: My understanding from this Apigee tutorial video is that we need to attach the cache policy to a specific target endpoint

That is correct. You can add the same cache policy to all the target endpoints you have. That way a single cache holds the information from all the backend targets, and you can look up from one single cache instead of three.

You don't need multiple caches just because you have multiple target endpoints. If you set appropriate cache keys for the lookup (similar to your RouteRules), one cache should be sufficient.

Again, my answer is generic and without a complete understanding of your use case.
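As a sketch (the policy name and expiry value here are illustrative assumptions, not your actual config), a single shared ResponseCache policy keyed on both the host header and the URI could look like:

<ResponseCache name="cache_all_targets">
 <CacheKey>
  <Prefix/>
  <KeyFragment ref="request.header.host" type="string"/>
  <KeyFragment ref="request.uri" type="string"/>
 </CacheKey>
 <ExpirySettings>
  <TimeoutInSec>300</TimeoutInSec>
 </ExpirySettings>
</ResponseCache>

Because request.header.host is part of the key, responses for different hosts (and hence different RouteRule targets) land under distinct cache entries even though they share one cache.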

Thanks for your reply.

I tried your suggestion and defined the key as follows -

<CacheKey>
 <Prefix/>
 <KeyFragment ref="request.uri" type="string"/>
 <KeyFragment ref="request.header.host" type="string"/>
</CacheKey>

Then I tried removing the multiple caches and applying the same one to two target endpoints.

But when I try to deploy, I get the following error:

"Response cache step definition cache_sandbox can not be attached more than once in the response path"

@nabilarahman - Yes I hit the same error too, and this one is a new learning for me. Thanks to you.

Attached is an example implementation for what you are looking for. Hope this helps.

responsecacheformultipletargets-rev3-2018-12-12.zip

@Dino-at-Google - Any specific reason why we can't attach the same response cache policy to multiple target endpoints?

That's just how the product works.

Upon further review, this looks like an unnecessary restriction. A bug.

You've got a reasonable workaround for now, as shown by nabilarahman: use two ResponseCache policies.
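In other words, the workaround is two ResponseCache policies that are identical except for their names (the names below are illustrative), each attached to one target endpoint:

<!-- attached in TargetEndpoint A -->
<ResponseCache name="cache_a">
 <CacheKey>
  <KeyFragment ref="request.header.host" type="string"/>
  <KeyFragment ref="request.uri" type="string"/>
 </CacheKey>
</ResponseCache>

<!-- attached in TargetEndpoint B: same configuration, different policy name -->
<ResponseCache name="cache_b">
 <CacheKey>
  <KeyFragment ref="request.header.host" type="string"/>
  <KeyFragment ref="request.uri" type="string"/>
 </CacheKey>
</ResponseCache>

The duplication is only in the policy definitions; the deploy-time restriction is on attaching one named step twice, not on two policies having the same body.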

Thanks @Dino-at-Google. I was made aware of this restriction - thanks to @nabilarahman's post.

@Nagashree B Thanks a lot! I think the solution you sent is quite similar to what I have right now, except for <KeyFragment ref="request.header.host" type="string"/> part. I'll add that to each cache policy and see how that works.

@Nagashree B I implemented the caches in the proxies the same way as your example, but I'm facing an issue I saw before. For the sandbox environment I have two target endpoints, sandbox and staging, and two response cache policies, cache_sandbox and cache_staging, one for each. I traced API calls for both targets; the sandbox one seems to be working fine (the trace shows it's going to cache_sandbox), but API calls to the staging endpoint are also hitting cache_sandbox instead of cache_staging. Do you think it could be a flaw on Apigee's side? I added the trace file in the previous comment.

Show the complete cache policy config for both policies, please.

configs.zip

thanks!

@nabilarahman I don't think there is a flaw in the product; it's the way the policies are included. In the code I posted previously, the two cache policies were included one after the other. Per the order of execution in the flow, it will first check cache1 and then proceed to cache2. If the entry is found in cache1, cache2 will not be checked. So, if you have the same key entries in both caches, the entry can be found in cache1 itself.

If you know which cache has the entry based on the host header, you can add a condition to the cache step in the proxy PreFlow and look up from that specific cache only, similar to the code below (trace attached - trace-1544817756620.txt):

API proxy code: responsecacheformultipletargets-rev3-2018-12-14.zip

<Step>
 <Name>Target1_Response_Cache</Name>
 <Condition>request.header.route = "1"</Condition>
</Step>
<Step>
 <Name>Target2_Response_Cache</Name>
 <Condition>request.header.route = "2"</Condition>
</Step>

trace-1544817756620.txt - The trace file wasn't uploaded for some reason, although it shows the link.

Thanks @Nagashree B, your suggestion was helpful. The cache policies seem to be working right 🙂