Cache expiration of less than 180 seconds, how do you handle that?

Hi all, we found that Apigee is planning to enforce a minimum cache expiry time of 180 seconds.

We have several use cases where we are caching current data (e.g. traffic conditions, Twitter feeds) where we feel that 1 minute is a maximum. And because some of these are high-volume APIs, caching seems a smart thing to do. However, three minutes feels so not 2019 when it comes to providing current and timely information. Our current setting of 60 seconds works perfectly fine, by the way!

Wondering how your organization feels about 3 minutes as a minimum cache expiry time. Does that feel like an acceptable “delay” in this information age?

And how do you solve these use cases?

Thanks in advance, Bas.


we found that Apigee is planning to enforce a minimum cache expiry time of 180 seconds.

I was unaware that the documented product limits included a minimum cache expiry time of 180 seconds.

three minutes feels so not 2019 when it comes to providing current and timely information.

Your perspective seems very reasonable to me.

I'm also interested in the views from other organizations. For those of you reading my response, please take the time to offer your own opinions in separate responses.


I don't believe that limit is enforced now, and I know of no plans to implement enforcement as you describe. But my knowledge is no guarantee; given the stated product "limit", it is certainly possible that the product will start to "enforce" it in the future. I suggest that you contact your Apigee sales team to request the opportunity to provide your feedback on this matter directly to the Apigee product team.

Having said that, you could work around an enforced 180s minimum by adding extra logic to your API proxy.

The PopulateCache policy would have to store the actual data as well as the current time (the time of PUT). The LookupCache policy would then extract the time of PUT, and a subsequent policy would discard the cached result if the time of PUT was not within the last 60s. Bonus points if that logic also performs an InvalidateCache to remove the stale item. All of this could be encapsulated in a Shared Flow; a rough sketch follows below.
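
Here is a minimal sketch of what that could look like, assuming a cache resource named "mycache"; the policy names and flow variables (cache.entry, cached.payload, cache.stale) are purely illustrative, and I'm using the inline <Source> form of the JavaScript policy just to keep everything in one place:

```xml
<!-- Response path: prepend the time of PUT to the payload before caching.
     system.timestamp is epoch milliseconds. -->
<AssignMessage name="AM-StampEntry">
  <AssignVariable>
    <Name>cache.entry</Name>
    <Template>{system.timestamp}|{response.content}</Template>
  </AssignVariable>
</AssignMessage>

<PopulateCache name="PC-StoreEntry">
  <CacheResource>mycache</CacheResource>
  <CacheKey><KeyFragment ref="request.uri"/></CacheKey>
  <!-- Satisfies the enforced minimum; the real 60s freshness rule is checked below -->
  <ExpirySettings><TimeoutInSec>180</TimeoutInSec></ExpirySettings>
  <Source>cache.entry</Source>
</PopulateCache>

<!-- Request path: look up the entry, then discard it if the time of PUT
     is more than 60 seconds ago. -->
<LookupCache name="LC-LookupEntry">
  <CacheResource>mycache</CacheResource>
  <CacheKey><KeyFragment ref="request.uri"/></CacheKey>
  <AssignTo>cache.entry</AssignTo>
</LookupCache>

<Javascript name="JS-CheckFreshness" timeLimit="200">
  <Source>
    var entry = context.getVariable('cache.entry');
    if (entry) {
      var sep = entry.indexOf('|');
      var age = Date.now() - parseInt(entry.substring(0, sep), 10);
      if (age >= 60 * 1000) {
        // Stale by our own 60s rule: treat it as a cache miss.
        context.setVariable('cache.entry', null);
        context.setVariable('cache.stale', true);
      } else {
        context.setVariable('cached.payload', entry.substring(sep + 1));
      }
    }
  </Source>
</Javascript>
```

The flow would then return cached.payload when it is populated and otherwise proceed to the target; for the bonus points, an InvalidateCache policy with a Condition on the (again, illustrative) cache.stale variable could actively remove the stale entry.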

A simpler alternative would be to introduce a "time bucket" fragment into a compound cache key. This can be done with the "system.time.minute" context variable. Then cache with that compound key, as sketched below.
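
For example (again assuming a cache resource named "mycache" and illustrative policy names), both the LookupCache and the PopulateCache would build the key the same way:

```xml
<!-- The current minute of the hour is part of the key, so entries written
     in one minute are effectively invisible in the next. -->
<LookupCache name="LC-LookupBucketed">
  <CacheResource>mycache</CacheResource>
  <CacheKey>
    <KeyFragment ref="request.uri"/>
    <KeyFragment ref="system.time.minute"/>
  </CacheKey>
  <AssignTo>cached.payload</AssignTo>
</LookupCache>

<PopulateCache name="PC-StoreBucketed">
  <CacheResource>mycache</CacheResource>
  <CacheKey>
    <KeyFragment ref="request.uri"/>
    <KeyFragment ref="system.time.minute"/>
  </CacheKey>
  <!-- Keep a TTL anyway so abandoned buckets eventually get evicted -->
  <ExpirySettings><TimeoutInSec>180</TimeoutInSec></ExpirySettings>
  <Source>response.content</Source>
</PopulateCache>
```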

If the entry is stored and retrieved in the first minute of the hour, you will get the value from the first minute. If the entry is stored in the 2nd minute of the hour, you will always get the value from the 2nd minute. The semantics of this approach are different from the above - for example, an item cached at the 59th second of a minute would be treated as "out of cache" if you tried to retrieve it in the 3rd second of the following minute, which means your cache would not be as effective - but it would be much simpler to implement.

I hope you never have to go to the trouble of doing either of these things. Dirty hacks is what they are. It's so much trouble; the cache should just behave as a cache is intended to. It would be a sad state of affairs if you needed to resort to extra work to get around the problem.

@dane knezic FYI

Hi Dino, thanks for your comment.

This page: https://docs.apigee.com/api-platform/reference/limits gives a nice overview and states that enforcement is "planned". This seems to be the same limit as the OAuth access token expiration (using the same caching mechanism?). So everyone who is currently using an expiry shorter than 180 seconds should be aware of this (and it also applies to the OAuth access token expiry, by the way).

And again, thanks for the possible workarounds; we are currently investigating the different options. But as you also mentioned… these all feel like dirty hacks that might be worse than whatever these limits are trying to prevent. So we don't really want to go down that route, and the alternative - putting another gateway with proper caching in front of Apigee - sounds even more ridiculous.

So the main reason for my post was to gather support for the view that 180 seconds is far too long as a minimum expiry time, and I already have one supporter, so thanks for that 🙂

Hi @bas eertink

Note that certain regularly accessed Apigee entities are automatically cached with the 180-second minimum right now - including tokens, apps, products, and developers, and their custom attributes - which means changes to these entities might not be picked up right away. See details at https://docs.apigee.com/api-platform/cache/cache-internals#inmemoryandpersistentcachelevels. (And I agree that 180 seconds for all caching would be too long; I would have no problem using the "dirty hack" you were discussing to make sure my caching strategy was correct for me.)