Intermittent cache issue in Node.js proxy

Dear All,

We are facing an intermittent cache issue in a Node.js proxy. We have two APIs, as below.

app.post('/api1', ..);

app.post('/api2', ..);

Some data is cached in api1 and the same data is retrieved in api2. Sometimes it works fine and sometimes it fails with the error message "Invalid cached data". The behavior is really sporadic. We are using Apigee cloud for our enterprise.

API2 is called immediately after API1. Most of the time, the first API2 call after API1 fails, but after 2-3 calls API2 gives a successful result. Whether a given call succeeds or errors is not predictable.

But if I wait for some time after the API1 call, then most of the time API2 is successful. Our code is as below:

var access = require('apigee-access'); // cache comes from the apigee-access module

var cache = access.getCache('projcache', { resource: 'projname_cache', scope: 'global' });

// in api1
cache.put('userdata__' + <inputvalue>, user, 60);

// in api2
cache.get('userdata__' + input.email, function (err, data) {

    if (err) ......; // This is returning error - Invalid cached data

    .......

});

Is it because there are multiple MPs and the cache is not distributed across them (just guessing)? Any help would be appreciated.

Thanks in advance!

Joy


Not applicable

To others who are seeing this issue, here is my interpretation of, and fix for, the problem:

After some testing, I believe the 'Invalid cached data' error is a bug in the underlying Apigee Node cache module. My takeaway is that it basically translates to 'I have not finished propagating the cache yet.'

My workaround was to retry cache.get() after roughly 100 ms, which seems to alleviate the issue. After one or two retries the error goes away and the cached value appears. It is an ugly fix, but it works.
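A minimal sketch of that retry workaround, assuming the cache object from the question (anything exposing get(key, callback) works); the helper name getWithRetry and the delayMs parameter are our own, not part of the Apigee API:

```javascript
// Retry cache.get() a few times before giving up, since
// 'Invalid cached data' appears to mean "not propagated yet".
function getWithRetry(cache, key, retries, delayMs, callback) {
  cache.get(key, function (err, data) {
    if (err && retries > 0) {
      // Wait a little and try the same key again.
      setTimeout(function () {
        getWithRetry(cache, key, retries - 1, delayMs, callback);
      }, delayMs);
    } else {
      callback(err, data);
    }
  });
}

// In api2, with the key format from the question:
// getWithRetry(cache, 'userdata__' + input.email, 3, 100, function (err, data) { ... });
```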

@Jon Buonaccorsi @Joydeep Paul

It is possible that the initial requests to access the cache fail because the cache has not yet replicated across the MPs, so your request for the cached entry lands on a Message Processor which doesn't have the cache yet.

Once enough time has elapsed, the cache ends up replicated on all MPs and all your API calls work well.

When I have dealt with distributed caches in the past (not just Apigee), it is often a decent fallback to try the real source of the data if it is not found in the cache. If you build that logic, you will have some redundant processing (until the cache has replicated), but you might find the client experience more acceptable.
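The fallback described above might look like this sketch. fetchFromSource is a hypothetical function standing in for your backend lookup; the cache is assumed to expose get(key, cb) and put(key, value, ttl) as in the question:

```javascript
// On a cache error or miss, fall back to the source of truth
// and best-effort re-populate the cache for later callers.
function getUser(cache, key, fetchFromSource, callback) {
  cache.get(key, function (err, data) {
    if (!err && data) {
      return callback(null, data); // cache hit
    }
    // Cache error or miss: go to the real source of the data.
    fetchFromSource(key, function (srcErr, user) {
      if (srcErr) return callback(srcErr);
      cache.put(key, user, 60); // re-populate with a 60 s TTL
      callback(null, user);
    });
  });
}
```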

Not applicable

I agree @rmishra, but I believe calling cache.get() is what triggers the distribution of the cache in some cases.

For example: if you wait 10 minutes after a successful cache.put() and call cache.get() on a valid key, it may initially respond with 'Invalid cached data', but calling cache.get() again immediately will frequently return the value successfully.

Wow, I did not know that. That is an odd implementation choice, but one which might have been made to prevent redundant serialization and propagation of cached data.

But if that is true, maybe your workaround could be to call cache.get() immediately after setting the cache, invoking the two in a promise chain. Basically, your cache-setter API should not return until cache.get() has been invoked.
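A sketch of that put-then-get chain, assuming cache.put() accepts a trailing callback after the TTL (an assumption about the apigee-access API) and cache.get(key, cb) as in the question; the wrapper name putThenGet is our own:

```javascript
// Write the value, then read it straight back, since the read
// is (reportedly) what kicks off propagation to the other MPs.
// The caller's API should only respond once this promise settles.
function putThenGet(cache, key, value, ttlSeconds) {
  return new Promise(function (resolve, reject) {
    cache.put(key, value, ttlSeconds, function (putErr) {
      if (putErr) return reject(putErr);
      cache.get(key, function (getErr, data) {
        if (getErr) return reject(getErr);
        resolve(data);
      });
    });
  });
}

// In api1:
// putThenGet(cache, 'userdata__' + key, user, 60)
//   .then(function () { /* respond to the client */ })
//   .catch(function (err) { /* handle the error */ });
```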

We tried this as well, but found that you still need to call get() two or three times before the 'Invalid cached data' error is replaced by the valid cached data.

Even calling get() within put()'s callback occasionally shows us the 'Invalid cached data' error.