Unreliable KVM values

I am setting a KVM value through the KVM operations policy in a proxy with a PUT entry:
<Put override="true">
    <Key>
        <Parameter>some_value</Parameter>
    </Key>
    <Value ref="some_value"/>
</Put>
The cache expiry is set to the maximum value:
<ExpiryTimeInSecs>2147483647</ExpiryTimeInSecs>
The next time I request this value, I get the proper result.
On iteration 2, I add a new value to the same key with the same PUT entry, which should invalidate the cache and set the new value. But then the GET request in KVM operations returns 2 different values - the old one and the new one - in random order. It looks like separate caches on 2 different instances grouped behind a load balancer: one instance returns the old value, and the other returns the already-updated value. However, in the settings (GET https://apigee.googleapis.com/v1/{parent=organizations/*}/instances) I see only one instance. What can it be? The environment is eval with internal access.
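For reference, a simplified sketch of the Get entry used to read the value back (the `assignTo` variable name is a placeholder, not from the original post):

```xml
<Get assignTo="kvm.retrieved_value">
    <Key>
        <Parameter>some_value</Parameter>
    </Key>
</Get>
```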

6 REPLIES

KVM sync delay? In what region(s) did you deploy your Apigee X?

This is not a delay, because the value keeps changing between old and new - I just receive the old value some of the time. The region is us-central1, and another test is in europe-west1-b.

By sync delay, I meant that the old value is not replaced in the old region, only in the new one. Since this is a SaaS, you might need to contact support.

We tested the solution on an eval environment, which has only 1 instance in one region, but the KVM values are still flapping. Do you know what the sync period is for the local L1 cache update? I thought a PUT entry should invalidate the cache on all instances immediately.

I thought a PUT entry should invalidate the cache on all instances immediately.

This is the expected behavior.

Keep in mind there is an ExpiryTimeInSecs for Put as well as for Get. 

In my tests, this is the behavior I see. If you are not seeing this, then maybe you should contact Apigee support.


Also, I suggest that you do not use 2147483647 as the expiry time. That seems extreme. Surely you can afford an I/O to read the KVM value once every 300 seconds or so? I don't know what the behavior will be at the extremes. I am unsure of the edge cases: what the maximum value for ExpiryTimeInSecs is, and what happens when an MP node gets created or restarted while the cache lifetime is so long. I suggest you perform your tests again, using a new key and a more realistic value for Expiry.

When I tried this, I used an ExpiryTimeInSecs of 120 for the Put and an ExpiryTimeInSecs of 1 for the Get, and I saw consistent behavior - no "flapping".
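As a sketch of that setup (policy names, the map identifier, and flow variable names are placeholders), two separate KeyValueMapOperations policies each carry their own ExpiryTimeInSecs:

```xml
<!-- Writer policy: its cache entry for the written value lives 120 seconds -->
<KeyValueMapOperations name="KVM-Put-Value" mapIdentifier="my_map">
    <Scope>environment</Scope>
    <ExpiryTimeInSecs>120</ExpiryTimeInSecs>
    <Put override="true">
        <Key>
            <Parameter>some_value</Parameter>
        </Key>
        <Value ref="some_value"/>
    </Put>
</KeyValueMapOperations>

<!-- Reader policy: re-reads from the KVM store once its 1-second cache expires -->
<KeyValueMapOperations name="KVM-Get-Value" mapIdentifier="my_map">
    <Scope>environment</Scope>
    <ExpiryTimeInSecs>1</ExpiryTimeInSecs>
    <Get assignTo="kvm.some_value">
        <Key>
            <Parameter>some_value</Parameter>
        </Key>
    </Get>
</KeyValueMapOperations>
```

With a short expiry on the reader, a stale L1 cache entry on any message processor is refreshed within a second of the Put.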

What is the difference between ExpiryTimeInSecs for PUT and GET requests? How does it work? In my understanding, when you create a record in the KVM, a TTL is generated for each record at the same time. How does the GET ExpiryTimeInSecs work?
Thank you for the reply. It would be great to have more explanation of the TTL in PUT and GET entries.
We used the maximum number for the cache because of the amount of stored data. For each request we fetch 60 keys, which leads to an almost 400 ms delay without the cache. The key values do not change often, so we decided to set the cache to the maximum value. With the cache, the delay for KVM extraction is only 30 ms.
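One way to keep the per-request cost down is to read many keys in a single policy execution: a KeyValueMapOperations policy can contain multiple Get elements, so 60 keys need not mean 60 policies. A sketch, assuming a map named `settings` and placeholder key/variable names:

```xml
<!-- All key names, the map identifier, and assignTo variables here are
     illustrative placeholders, not from the original post. -->
<KeyValueMapOperations name="KVM-Get-Batch" mapIdentifier="settings">
    <Scope>environment</Scope>
    <!-- A moderate expiry still avoids a store read on most requests -->
    <ExpiryTimeInSecs>300</ExpiryTimeInSecs>
    <Get assignTo="config.feature_a">
        <Key>
            <Parameter>feature_a</Parameter>
        </Key>
    </Get>
    <Get assignTo="config.feature_b">
        <Key>
            <Parameter>feature_b</Parameter>
        </Key>
    </Get>
</KeyValueMapOperations>
```

With a realistic expiry such as 300 seconds, the store is read at most once per cache period, so the latency stays close to the cached figure without pinning values for the lifetime of the node.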