How can we set a cache resource and cache entry to be infinite, i.e. never expire?

@Dino-at-Google @Anil Sagar @ Google

My requirement is to keep the cache resource and cache entry for an infinite time. I have set the expiry time (TimeoutInSec) to -1 in PopulateCache, but the entry is getting cleared automatically almost every 5 minutes. I have no clue why this is happening. Below is the XML.

PopulateCache:

<?xml version="1.0" encoding="UTF-8"?>
<PopulateCache enabled="true" continueOnError="false" async="false" name="SaveJWEInCache">
   <CacheResource>cache_details</CacheResource>
   <Source>cacheInput</Source>
   <Scope>Global</Scope>
   <CacheKey>
      <KeyFragment ref="developer.app.id" />
      <KeyFragment ref="botInstance" />
   </CacheKey>
   <ExpirySettings>
      <TimeoutInSec ref="expiry_token" />
   </ExpirySettings>
</PopulateCache>

LookupCache:

<?xml version="1.0" encoding="UTF-8"?>
<LookupCache enabled="true" continueOnError="false" async="false" name="RetrieveJWEFromCache">
   <CacheResource>cache_details</CacheResource>
   <AssignTo>botDetailsJWE</AssignTo>
   <Scope>Global</Scope>
   <CacheKey>
      <KeyFragment ref="developer.app.id" />
      <KeyFragment ref="botInstance" />
   </CacheKey>
</LookupCache>

cache_details is the cache resource whose expiration I have set to -1.

The expiry_token variable referenced in TimeoutInSec is set to -1; I am picking that value up from a KVM.
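For reference, the KVM lookup that feeds expiry_token looks roughly like this (the map and key names are simplified here):

<KeyValueMapOperations name="GetExpiryFromKVM" mapIdentifier="cache_config">
   <Scope>environment</Scope>
   <!-- Reads the configured TTL value (-1) into the expiry_token variable -->
   <Get assignTo="expiry_token">
      <Key>
         <Parameter>token_expiry</Parameter>
      </Key>
   </Get>
</KeyValueMapOperations>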

I also tried 0 instead of -1, but no luck.

I have also tried setting a date instead of using TimeoutInSec, but it still doesn't work:

<ExpiryDate>12-01-2999</ExpiryDate>

Could anyone please help me understand why this is happening, or am I doing something wrong here?

Thanks in advance.

Solved
1 ACCEPTED SOLUTION

I have set the expiry time (TimeoutInSec) to -1 in PopulateCache, but it is getting cleared almost every 5 minutes automatically. I have no clue why it is happening.

PopulateCache does not guarantee that the cache entry that you insert will remain in cache for the duration of the TTL you specify. That's not how this cache works.

The documentation says the ExpirySettings element "specifies when a cache entry should expire." There is no further information in the reference documentation about what "should expire" means, sadly. But let me try to clarify.

In Apigee, you can scope cache entries to Global, Application, Proxy, Target, Exclusive... and the cache keys are composite and dynamic. With all of these options, it's possible to have many instances of the PopulateCache policy executing concurrently, writing entries. But the cache is not infinite. The cache is a service backed by memory resources, the same memory the system uses for other purposes, like heap, I/O buffering, JIT compiling, and so on. So there is competition within the system for machine memory.

If you write an item into the cache with a TTL of 60s and there is no competition or memory pressure, the cache will expire the entry after 60 seconds. Under those conditions, a LookupCache that executes within 60s using the same key will return the entry, whereas a LookupCache that executes at 61s will return null (a cache miss). But there is no guarantee that the entry will persist for the entire 60s. Imagine a scenario in which there is 1 GB of cache memory, and during those 60 seconds, across the organization (or more than one organization, if you have more than one org on your MPs!), enough executions of PopulateCache run to write 2 GB of entries into the cache. In that case, the cache service will "eject" (not expire) older entries on a FIFO basis, without respecting the TTL, in order to make room for the newer entries.
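One practical consequence: a flow should never assume the entry is still present. Here is a sketch of a defensive flow fragment using the policy names from your question (GenerateJWE is a placeholder for whatever step produces the JWE; the cachehit variable is populated automatically by LookupCache):

<Flow name="GetBotDetails">
   <Request>
      <Step>
         <Name>RetrieveJWEFromCache</Name>
      </Step>
      <!-- If the entry expired or was ejected, rebuild it and write it back -->
      <Step>
         <Name>GenerateJWE</Name>
         <Condition>lookupcache.RetrieveJWEFromCache.cachehit == false</Condition>
      </Step>
      <Step>
         <Name>SaveJWEInCache</Name>
         <Condition>lookupcache.RetrieveJWEFromCache.cachehit == false</Condition>
      </Step>
   </Request>
</Flow>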

This effect is exacerbated in Apigee eval organizations, in which a single cloud-based server handles many, many organizations. I can execute a proxy that runs PopulateCache with a TTL of 60s, and the cache entry may live for just a second or two before being ejected. (very sad face) This is a consequence of Apigee providing a service that densely packs eval organizations onto servers: In eval/trial organizations, the cache is generally under-resourced and delivers sub-optimal behavior.

If you're evaluating the behavior of the Apigee cache in a trial/eval org, and seeing the "early ejection" phenomenon, I'm sorry for the inconvenience. I know how frustrating that can be. For now there is no good workaround, except to move to a commercial (paid) organization.

If you are seeing earlier-than-expected ejection in a commercial org, then you need to examine your usage. How large are the cache entries? What else are you caching on the same machines? Do you have multiple orgs? Do you have non-prod orgs with load tests that use the cache? And so on.

BTW, if you are using OPDK, you can examine cache behavior very closely; there are JMX beans for that. If you are using the cloud, you don't get that visibility.

In summary, you should consider the TTL to be the maximum lifetime of the cache entry, not a guarantee of its exact lifetime. And, as Debora suggested, if you want persistent data ("infinite expiry"), you should rely on a persistent store (like the KVM), not on an ephemeral cache.
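A minimal sketch of that KVM approach, assuming an encrypted map named jwe_store (the map name and policy names are illustrative, and the map must be created ahead of time):

<!-- Store the JWE persistently, right after it is generated -->
<KeyValueMapOperations name="KVM-Put-JWE" mapIdentifier="jwe_store">
   <Scope>environment</Scope>
   <Put override="true">
      <Key>
         <Parameter ref="developer.app.id"/>
         <Parameter ref="botInstance"/>
      </Key>
      <Value ref="cacheInput"/>
   </Put>
</KeyValueMapOperations>

<!-- Retrieve it later; the entry persists until you delete it -->
<KeyValueMapOperations name="KVM-Get-JWE" mapIdentifier="jwe_store">
   <Scope>environment</Scope>
   <ExpiryTimeInSecs>300</ExpiryTimeInSecs>
   <Get assignTo="botDetailsJWE">
      <Key>
         <Parameter ref="developer.app.id"/>
         <Parameter ref="botInstance"/>
      </Key>
   </Get>
</KeyValueMapOperations>

KVM entries never expire on their own, which is effectively the "infinite" behavior you're after; ExpiryTimeInSecs only controls how long a retrieved value is cached on the message processor before it is re-read from the store.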


3 REPLIES

Not applicable

Can you try with the setting below?

<ExpirySettings>
   <ExpiryDate ref="date_variable">expiration_date</ExpiryDate>
</ExpirySettings>
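If you go this route, date_variable must hold a date in mm-dd-yyyy format. For example, you could set it with an AssignMessage step like this (policy name and date value are illustrative):

<AssignMessage name="AM-SetExpiryDate">
   <AssignVariable>
      <Name>date_variable</Name>
      <Value>12-31-2999</Value>
   </AssignVariable>
</AssignMessage>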

If you don't want your entries to ever expire, a KVM would be a better option.
