Need help understanding the performance impact of increased caching of API responses per client ID in the disk-level L2 cache


We are planning to enable disk-level L2 caching of API responses keyed per client ID, and we assume the number of cache entries will increase significantly.

We need help understanding the challenges and effects of this approach. Example questions (a rough sizing sketch follows the list):

1) Performance impact of reading from the disk-level cache

2) Number of connections to the disk / number of requests to the disk-level cache

3) Effect of this approach under a peak traffic requirement of 10 times current traffic.
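
As background for sizing this, here is a rough back-of-envelope sketch in Python. Every figure in it (client count, URI count, payload size, peak RPS, hit ratio) is a hypothetical placeholder to be replaced with your own numbers; it only illustrates why per-client-ID keys multiply the entry count, and why at 10x traffic it is the miss rate, not the entry count, that the slower cache tier and the backend feel.

```python
# Back-of-envelope estimate of cache growth when the response cache is
# keyed per client ID. All figures below are hypothetical placeholders.

num_clients = 5_000      # distinct client IDs hitting the proxy
cacheable_uris = 200     # distinct cacheable resource paths
avg_response_kb = 8      # average cached response size in KB

# Keying by client ID gives each client its own copy of each response.
entries = num_clients * cacheable_uris
footprint_gb = entries * avg_response_kb / (1024 * 1024)

print(f"cache entries:      {entries:,}")
print(f"approx. footprint:  {footprint_gb:.1f} GB")

# At 10x peak traffic the entry count is still bounded by clients x URIs,
# but the miss rate (and therefore the load on the slower cache tier and
# the backend) scales with request rate and hit ratio.
peak_rps = 2_000                     # hypothetical peak requests/sec
hit_ratio = 0.80                     # hypothetical overall cache hit ratio
misses_per_sec = peak_rps * (1 - hit_ratio)
print(f"misses at peak:     ~{misses_per_sec:.0f} req/sec")
```

The key point the numbers bring out: per-client keys mean each cached response is reusable by only one client, so hit ratios tend to drop and more lookups fall through to the slower tier.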

--SF903198--

2 REPLIES


Hi @Arjav Goswami,

L1 is in-memory and L2 is persistent (Cassandra). There is no disk-level cache.

Please refer to this link http://apigee.com/docs/api-services/content/cache-... which covers the internals of how it is implemented.

I think you should modify your answer slightly: while there is no "disk-level cache", disk and network will come into play if you reach the Cassandra persistence layer.
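
To make that concrete, here is a minimal sketch of a generic two-tier lookup: an in-memory L1 backed by a persistent L2. This is not Apigee's internal implementation; the class names and the 5 ms delay are illustrative assumptions. It only shows where the extra latency, and the extra requests to the persistence layer, come from on an L1 miss.

```python
import time
from collections import OrderedDict

class TwoTierCache:
    """Illustrative L1 (in-memory) + L2 (persistent store) lookup path.

    Not how Apigee implements its cache; it only shows why an L1 miss
    adds a network/disk round trip by falling through to L2.
    """

    def __init__(self, l1_capacity, l2_store):
        self.l1 = OrderedDict()      # small, per-node, in-memory LRU
        self.l1_capacity = l1_capacity
        self.l2 = l2_store           # e.g. a Cassandra-backed store

    def get(self, key):
        if key in self.l1:           # L1 hit: memory-speed, no I/O
            self.l1.move_to_end(key)
            return self.l1[key]
        value = self.l2.get(key)     # L1 miss: network round trip + disk
        if value is not None:
            self._put_l1(key, value) # promote into L1 for next time
        return value

    def _put_l1(self, key, value):
        self.l1[key] = value
        self.l1.move_to_end(key)
        if len(self.l1) > self.l1_capacity:
            self.l1.popitem(last=False)  # evict least recently used entry

class FakePersistentStore(dict):
    """Stand-in for the persistent tier that simulates slow lookups."""
    def get(self, key):
        time.sleep(0.005)            # pretend 5 ms of network + disk
        return super().get(key)

cache = TwoTierCache(l1_capacity=1000, l2_store=FakePersistentStore())
cache.l2["client42:/v1/orders"] = b"...cached response..."
print(cache.get("client42:/v1/orders"))  # first call pays the L2 round trip
print(cache.get("client42:/v1/orders"))  # second call is an L1 hit
```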

This is really important: to be HIPAA or PCI compliant, this stuff can't live at rest on disk, so it must stay in memory only. Which makes me want to ask a question!