Apigee caching issue

Hi @dchiesa1 ,

We have LookupCache, AssignMessage (to set a variable), and PopulateCache policies, in that order.

When a new request comes in, it looks up the cache; if there is no entry, the next two policies are executed under this condition:

<Condition>(lookupcache.Lookup-Cache-1.cachehit == false)</Condition> 
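The flow above can be sketched as a proxy endpoint configuration. This is a minimal, hypothetical sketch: the policy names (other than Lookup-Cache-1) are assumptions, not taken from the actual proxy.

```xml
<!-- Hypothetical sketch of the request flow; policy names other than
     Lookup-Cache-1 are assumptions. -->
<PreFlow>
  <Request>
    <Step>
      <Name>Lookup-Cache-1</Name>
    </Step>
    <Step>
      <!-- Runs only on a cache miss -->
      <Condition>lookupcache.Lookup-Cache-1.cachehit == false</Condition>
      <Name>Assign-Message-1</Name>
    </Step>
    <Step>
      <Condition>lookupcache.Lookup-Cache-1.cachehit == false</Condition>
      <Name>Populate-Cache-1</Name>
    </Step>
  </Request>
</PreFlow>
```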


When the cache is populated, the value is stored in the L1 (in-memory) cache of the Message Processor that received the request.

So, will the MP that received the request send the cached value to the L2 persistent cache immediately, so that other MPs receiving subsequent requests can use it? (Consider a heavy load-testing scenario.)

When a cache entry is invalidated or needs to be renewed because of its expiry time:

Both Message Processors have an L1 cache.
Message Processor 1 receives a request, and the cache value needs to be renewed. It updates the value in its L1 cache. Its other duty is to broadcast to the other Message Processor (2) to update its cache.
If this broadcast fails, Message Processor 2 has a stale cached value in L1. Since the L1 entry still exists, it won't look up the value in L2.
In this case, the request will fail in my scenario. How can I check whether this broadcast is failing?

Even if the broadcast between Message Processors fails, will the cached value still be sent to L2?

Thank you in advance!

 


Will it send the cached value to the L2 persistent cache immediately, so that other MPs receiving subsequent requests can use it? (Consider a heavy load-testing scenario.)

Yes, and Yes.

But I cannot describe further details. The Apigee Edge implementation differs from the Apigee X implementation, and you didn't mention which you are using. In any case, these implementation details are not documented and are not part of the "supported interface" of Apigee.

The goal is that the L2 cache allows all MPs to share the cached value.

Beyond this information, what problem are you solving?

I am using Apigee Edge.

If this broadcast fails, Message Processor 2 has a stale cached value in L1. Since the L1 entry still exists, it won't look up the value in L2.

In this case, the request will fail in my scenario, correct?

How can I check whether this broadcast is failing?

Even if the broadcast between Message Processors fails, will the cached value still be sent to L2?

I have a token that needs to be cached. Intermittently, an expired token is sent.

I don't know how to check whether the broadcast is failing. You may want to contact Apigee support for that.

I have a token that needs to be cached. Intermittently, an expired token is sent.

Can you reproduce this?  

There can be a race condition, of course, under high concurrent load. The L1 value may not propagate into L2, and then into L1 on the other MP, between transactions. This does not mean the broadcast (synchronization) is failing, only that it is a race condition in a distributed system.

The fix for that is not to update the token AFTER it has expired, but to update it BEFORE it expires, so that even the "old" token remains valid, at least for the duration of the cache propagation.

Suppose the token expires at time T1. Refresh it at T1 minus 60 seconds, then cache the new token. Even if the cache propagation takes 35 seconds, you still have two good tokens.
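One way to implement that refresh-before-expiry suggestion is to set the cache TTL shorter than the token lifetime, so the cache entry expires, and forces a refresh, while the old token is still valid. A hedged sketch, assuming a token lifetime of 3600 seconds and hypothetical names for the cache resource, key fragment, and source variable:

```xml
<!-- Hypothetical PopulateCache sketch: the cache resource, key fragment,
     and source variable are assumptions. The token itself is valid for
     3600s, but the cache entry expires after 3540s, so a cache miss
     triggers a refresh 60s before the old token expires. -->
<PopulateCache name="Populate-Cache-1">
  <CacheResource>token-cache</CacheResource>
  <Source>oauth_token</Source>
  <CacheKey>
    <KeyFragment>access_token</KeyFragment>
  </CacheKey>
  <ExpirySettings>
    <TimeoutInSec>3540</TimeoutInSec>
  </ExpirySettings>
</PopulateCache>
```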

 

Where are the cache resources stored in Message Processors and Cassandra, path?

@dchiesa1 can you please help?

can you please help?

In my original response, I asked a specific question, and you did not answer. I also made some specific suggestions, and I didn't see you respond to those either, confirming that you tried them or rejecting them (with a reason). Then you asked for help. Did you actually SEE my prior response? Did you choose to completely ignore it?

Where are the cache resources stored in Message Processors and Cassandra, path?

I don't understand what you wrote here; it does not seem to be a valid sentence.

*Commenting for the community*

Hi @dchiesa1 ,

Before I posted this question, as a workaround, I had reduced the cache time, and the API calls were working fine.

Later, it was found that the issue was with the syncing of the Management Servers.

Apologies and thanks for the support, @dchiesa1!!