Cache flow from L1 to L2

Hello everyone,

I have gone through the documentation on cache internals and how Apigee handles caching.

Can someone please help me understand these:

1. L1 is in-memory, and the cache size is based on a percentage of available system resources. What is that percentage? I want to check whether it suits our needs.

2. Is this shared cache distributed? Can it be in-memory and distributed at the same time?

3. I understand the shared cache to be in-memory and unique to the environment, and that there is also an L2 cache. If a request comes to node 1 and populates the shared cache, will that cache entry be moved to the L2 cache so that the second node can access it? Does replication of cache entries happen only when L1 fills up, or for every cache entry?

Thanks in advance.

Regards,

Raghav

Solved
1 ACCEPTED SOLUTION

Hi @Raghavendra -

I wouldn't make too much of a differentiation between "shared cache" and "named cache". Think of shared cache as just something simple that's there when you don't care much about managing cache. For any real cache usage, we recommend creating a named cache. It's cleaner, more predictable, and you can manage it. It's also easy to configure. That said...
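For example, a named cache is typically used with a PopulateCache/LookupCache policy pair. The sketch below is illustrative only: the cache resource name (MyNamedCache), policy names, key fragment, and variable names are placeholders, not something from this thread, and MyNamedCache would first have to be created as a cache resource in the environment:

```xml
<!-- PopulateCache: writes a value into the named cache resource.
     All names here are illustrative placeholders. -->
<PopulateCache name="PC-StoreResponse">
  <CacheResource>MyNamedCache</CacheResource>
  <CacheKey>
    <KeyFragment ref="request.queryparam.id"/>
  </CacheKey>
  <ExpirySettings>
    <TimeoutInSec>300</TimeoutInSec>
  </ExpirySettings>
  <Source>response.content</Source>
</PopulateCache>

<!-- LookupCache: reads the entry back into a flow variable.
     On a lookup, Edge checks L1 first and falls back to L2 on a miss. -->
<LookupCache name="LC-ReadResponse">
  <CacheResource>MyNamedCache</CacheResource>
  <CacheKey>
    <KeyFragment ref="request.queryparam.id"/>
  </CacheKey>
  <AssignTo>cachedResponse</AssignTo>
</LookupCache>
```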

In the internals topic you mention, the section on How policies use the cache describes what you're asking in questions 2 and 3. The info should apply to shared and named cache.

In a nutshell, L1 and L2 get populated at the same time, so the only time L2 would have something that L1 doesn't is when system resources eject an in-memory value. But even then, Edge checks for a value in L2 if it doesn't find it in L1.

Also, a message processor (MP) that gets a new or updated L1 entry broadcasts to the other MPs. So L1 is distributed between MPs, and since L2 is shared by all MPs, there's no need for distribution.

Question 1 is tough to answer, even if you knew the exact amount of memory allocation, because resource usage always changes. That said, because L2 (Cassandra datastore) cache is always available, access to cached data isn't an issue even if L1 has a lot of churn.

Hope that helps.

Thanks Jones, that helps.