How to configure L2 cache size limit and to clear L2 cache only?


Hi Team,

A customer is asking about L2 cache configuration for Apigee Edge Private Cloud, which doesn't seem to be documented.

1. Is it possible to configure the maximum size limit for L2 Cache?

In the cache.properties file on the Message Processor (MP) there are these properties:

second.level.cache.enabled=true
second.level.cache.element.max.size.in.bytes=524288
Is there also a config setting for limiting the total number of cache elements?
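
(For reference, this is roughly how we check the current values on a Message Processor; the conf path below is our assumption for this OPDK version, so please verify it on the node.)

# path is an assumption; verify on your MP node
grep "second.level.cache" /opt/apigee/edge-message-processor/conf/cache.properties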


2. Is it possible to clear L2 cache only without touching L1 cache data?

Looking at these Community posts:

https://community.apigee.com/questions/3813/needed-info-about-caching.html
https://community.apigee.com/questions/28997/how-to-find-out-if-l2-cache-is-disabled-in-onpremi.html

the Management API and the Edge UI appear to clear both L1 and L2 at the same time.

If so, can we clear only the L2 cache entries in Cassandra, using the following data?

column family name - cache_entries
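
(For reference, the Management API call in question looks roughly like this; the host, credentials, org, env and cache names are placeholders.)

curl -u admin@example.com:password -X POST \
  "http://MGMT_HOST:8080/v1/organizations/{org}/environments/{env}/caches/{cache_name}/entries?action=clear"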


Regards,
Toshi


@Toshihiro Shibamoto - why would you need to clear L2 alone? Can you elaborate?

@Sai Saran Vaidyanathan

Thank you for replying to this question. The customer says they are using more and more APIs that rely on the L1/L2 cache, and the disk space used by the L2 cache keeps increasing.
So they want to reclaim that space by clearing the L2 cache, but maybe we misunderstand how the L2 cache relates to disk usage.
Please advise us on the best way to reduce the disk space used by the L2 cache.
Do we need to delete the cache entries for that, rather than clearing the contents manually? And when would they be recreated?


Hi @Toshihiro Shibamoto

A few important points to understand before we make any change:

1) You can use the second.level.cache.enabled setting on the MP to stop using the L2 cache. The cache data is then stored only in memory on the MP (L1), which is not persistent; in some cases the values get removed before the TTL is met, because the in-memory cache on the MP is limited in size and the cache data is not distributed across MPs. (A hedged config sketch for both settings follows after point 2.)

2) second.level.cache.element.max.size.in.bytes defaults to 524288 bytes (512 KB); if a cache value is larger than that, it is skipped in L2 but is still available in L1.
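
(A minimal sketch of how these could be overridden on a Message Processor, assuming the OPDK 4.16.x properties-file token mechanism and paths; verify both against the "How to configure Edge" documentation for your version before changing anything.)

# /opt/apigee/customer/application/message-processor.properties  -- path and token prefix are assumptions
conf_cache_second.level.cache.enabled=false
conf_cache_second.level.cache.element.max.size.in.bytes=102400
# then restart the Message Processor for the change to take effect
/opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart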

Based on the problem you explained,

Use option 1 to skip L2 entirely, but make sure you are okay with the behavior in the absence of L2; this is fine if the cache is being used as just a response cache, otherwise you can't predict the behavior.

Or you can reduce the max size of each cache element that gets stored in L2 to 100 KB or less (the point above will still apply), or use the SkipCachePopulation element in the ResponseCache policy to avoid caching large values entirely (see the sketch below).
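
(A minimal ResponseCache sketch showing SkipCachePopulation; the policy name, cache resource and the skip condition are assumptions for illustration only, so adapt them to your proxy.)

<ResponseCache name="Cache-Responses">
  <CacheResource>myCache</CacheResource>
  <CacheKey>
    <KeyFragment ref="request.uri"/>
  </CacheKey>
  <!-- skip writing the entry to the cache when the condition is true, e.g. for large payloads -->
  <SkipCachePopulation>response.header.Content-Length > 102400</SkipCachePopulation>
  <ExpirySettings>
    <TimeoutInSec>300</TimeoutInSec>
  </ExpirySettings>
</ResponseCache>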

Regarding the disk space issues, are you sure they are caused by the cache keyspace?

What are the expiry settings in your use case?

The default gc_grace for all the keyspaces is 10 days. (Tombstones are created when data is deleted or expires; they are eventually removed when compaction runs, and they can only be dropped once the 10-day gc_grace period has passed.)
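
(A hedged sketch of how this could be inspected and acted on; the keyspace name cache and the table cache_entries are assumptions, so confirm them in your cluster, and check with support before running a major compaction on a production node.)

# show gc_grace_seconds and compaction settings for the table (names are assumptions)
echo "DESCRIBE TABLE cache.cache_entries;" | cqlsh
# trigger a manual compaction on that table; use with care on production nodes
nodetool compact cache cache_entries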

Please work with the support team to see whether we need to reduce the gc_grace period, run more frequent compactions, or change the compression_options; the right approach ultimately depends on the problem you are facing.

Hi @Maruti Chand

Thank you very much for the response with useful information and advice.
The original problem the customer sees is that some Cassandra/ZooKeeper nodes consume a large amount of disk space. By observation they found that, after upgrading from OPDK 15.07 to 16.01, there are symlinks made from the old /opt/apigee4/data/{component} directories to the new /opt/apigee/data/{component} directories, which can impact disk usage if they are not necessary.

So the first question is whether there are any {component} directories under /opt/apigee4/data/ that are no longer needed after upgrading from OPDK 15.0x to 16.0x.
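
(A hedged sketch of how one might check this; the paths follow the upgrade layout described above, and the findings should still be confirmed with support before removing anything.)

# show which entries are symlinks and which are real directories still holding data
ls -l /opt/apigee4/data /opt/apigee/data
# compare the space actually consumed per component (the symlinks themselves take no space)
du -sh /opt/apigee4/data/* /opt/apigee/data/* 2>/dev/null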

They also found that running 'nodetool repair -pr' on a Cassandra node sometimes causes disk usage to keep growing, and it drops only when they stop and restart the node.
The usage sometimes hits 100% of the disk, which needs to be avoided by some means.
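
(A hedged sketch of commands that could help observe what happens during and after the repair; these are standard Cassandra/OS commands, the data directory path is an assumption, and the interpretation of the output should be confirmed with support.)

# watch disk usage of the Cassandra data directory while the repair runs (path is an assumption)
du -sh /opt/apigee/data/apigee-cassandra
df -h
# check whether compactions triggered by the repair are still pending
nodetool compactionstats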

Is there any relation between the growing L2 cache usage, the 'nodetool repair -pr' operation, and the increase in disk space? And could you give us advice on how to reduce the disk usage?