Cache latency between Message Processors

We have a scenario where we populate the cache in one API proxy and then read it in another, using Global scope. It appears that for a reasonable percentage of requests (about 30% over a month) the read is failing. Whilst we haven't got to the bottom of it, I'm wondering if it could be because the read request hits a different MP from the write, and the gap between the requests is small enough that the cached data hasn't yet been persisted / shared with the other MPs.
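For context, the write/read pair looks roughly like this. This is a sketch with hypothetical policy names, cache keys, and flow variables, assuming the standard PopulateCache / LookupCache policies with Global scope:

```xml
<!-- API proxy 1: write side (hypothetical names throughout) -->
<PopulateCache name="PC-StoreTokenData">
  <CacheKey>
    <KeyFragment ref="request.queryparam.token"/>
  </CacheKey>
  <Scope>Global</Scope>
  <ExpirySettings>
    <TimeoutInSec>3600</TimeoutInSec>
  </ExpirySettings>
  <Source>flow.tokenData</Source>
</PopulateCache>

<!-- API proxy 2: read side -->
<LookupCache name="LC-ReadTokenData">
  <CacheKey>
    <KeyFragment ref="request.queryparam.token"/>
  </CacheKey>
  <Scope>Global</Scope>
  <AssignTo>flow.tokenData</AssignTo>
</LookupCache>
```

If the LookupCache runs on a different MP before the entry written by PopulateCache has propagated, the read sees a miss even though the write succeeded.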

Anyone got any thoughts / experience of this?


Hello,

We are seeing the same problem. We populate the cache with a token we generate, along with some other data, and we pass the token back to the client to reuse on subsequent calls.

The client uses this token to make further requests, sometimes in parallel. It seems that when two requests arrive with the same token, occasionally one of them hits the cache and the other one misses, and our logic depends on retrieving the data from the cache.

Is anyone else seeing this issue? Can we do something about it?
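One thing we are considering as a mitigation is a short, bounded retry on a cache miss, to ride out any cross-MP replication lag. A minimal sketch in Python, with a hypothetical `cache_get` callable standing in for the cache lookup (returns `None` on a miss):

```python
import time


def get_with_retry(cache_get, key, attempts=3, delay_s=0.05):
    """Retry a cache read a few times before treating it as a real miss.

    `cache_get` is a hypothetical callable: cache_get(key) -> value or None.
    A small delay between attempts gives the cache entry time to propagate
    to the Message Processor handling this request.
    """
    for i in range(attempts):
        value = cache_get(key)
        if value is not None:
            return value
        if i < attempts - 1:
            time.sleep(delay_s)
    return None  # genuine miss after all attempts
```

This masks latency rather than removing it, and it adds up to `attempts * delay_s` to the worst-case response time, so the numbers need tuning against how fast the replication actually is.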

thanks

Hi Nicola

Do you have more information regarding how soon after the cache is populated you're sending the subsequent requests?

Hi dane,

Yes, the requests are sent a few milliseconds after the token is obtained from the first request. They come from a web application, so they follow the first response very quickly because they are part of the initial page load.

In this scenario we use the token as the cache key. We are now thinking about using the token's own attributes to store the data we need to retrieve when a token is sent to us, which should help reduce this problem.
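That approach removes the cache from the read path entirely: the data travels inside the token, protected by a signature, so any MP can recover it without a lookup. A self-contained sketch of the idea using an HMAC-signed payload (the secret, claim names, and token format here are all hypothetical, not Apigee-specific):

```python
import base64
import hashlib
import hmac
import json

SECRET = b"replace-with-a-real-shared-secret"  # hypothetical secret


def mint_token(claims: dict) -> str:
    """Embed the data the later calls need directly in the token,
    so no cross-MP cache read is required to recover it."""
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig


def read_token(token: str) -> dict:
    """Verify the signature and recover the claims without any cache lookup."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid token signature")
    return json.loads(base64.urlsafe_b64decode(payload))
```

The trade-off is that the token grows with the data it carries and the claims can't be revoked or updated server-side without extra machinery, but parallel requests can no longer race a cache write.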