Overriding the SpikeArrest value for a client, when the endpoint-to-value mapping is kept in a KVM

We are using the SpikeArrest policy with:
 - the identifier set to the ClientID (fetched from request.header.client_id)
 - the value referenced from a variable (we keep the endpoint-to-value mapping in a KVM)
 
This way, the SpikeArrest value for an endpoint can be changed just by modifying the corresponding entry in the KVM, if required.
 
The SpikeArrest policy thus works per endpoint, per ClientID.
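 
For illustration, the JS callout that sets the rate could look roughly like the sketch below. All variable names, and the "TPS" to "ps" conversion, are placeholders rather than our exact implementation:

// Illustrative sketch only: a KeyValueMapOperations Get is assumed to have
// placed the endpoint's entry (e.g. "10TPS") into the flow variable
// "kvm.spike.value", and the SpikeArrest <Rate> references "flow.spike.rate".
var clientId = context.getVariable('request.header.client_id'); // SpikeArrest <Identifier>
var kvmValue = context.getVariable('kvm.spike.value');          // e.g. "10TPS"

if (kvmValue) {
  // Assumes values are stored as "<number>TPS" in the KVM and converts them
  // to the "<number>ps" format that SpikeArrest expects.
  context.setVariable('flow.spike.rate', kvmValue.replace('TPS', 'ps'));
}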
 
Now we have a use case where we might need to override the SpikeArrest value for some clients, and only for a very few endpoints.
 
Say we have the following entries in the KVM:
 
endpoint_1 = "10TPS"
endpoint_2 = "20TPS"
 
The above works for all clients. If we need to increase the value of endpoint_1 to 100TPS for, say, client_A, we thought of adding one additional entry to the KVM:
 
endpoint_1 = "10TPS"
endpoint_2 = "20TPS"
client_A.endpoint_1 = "100TPS"
 
In the JavaScript policy used to set the value for the SpikeArrest policy, we will first check for "clientID.endpoint".
So for client_A, we will check for client_A.endpoint_1; if it is found (here "100TPS"), we will use that value for the SpikeArrest policy.
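 
As a rough sketch of that check (variable names are placeholders; two KVM Gets, one for the client-specific key and one for the plain endpoint key, are assumed to have run before this callout):

// Illustrative only: "kvm.spike.client.value" holds the result of the
// "client_A.endpoint_1" lookup (null if no override exists) and
// "kvm.spike.value" holds the result of the plain "endpoint_1" lookup.
var overrideValue = context.getVariable('kvm.spike.client.value');  // e.g. "100TPS"
var defaultValue  = context.getVariable('kvm.spike.value');         // e.g. "10TPS"

var effectiveValue = overrideValue || defaultValue;
if (effectiveValue) {
  context.setVariable('flow.spike.rate', effectiveValue.replace('TPS', 'ps'));
}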
 
But this has performance implications for every endpoint-and-client combination other than client_A.endpoint_1, because if a key is NOT found in the KVM cache, a lookup is made against Cassandra to search for it. So the KeyValueMapOperations policy will hit Cassandra on every API call except those for client_A.endpoint_1 (which will be found in the KVM cache).
 
Any suggestions on how we can override the value for a particular endpoint, for a particular ClientID, without impacting performance?
 

One way we could think of is using a PropertySet (at the environment level) and keeping the overridden values in the PropertySet, like:

client_A.endpoint_1 = "100TPS"

In the JS policy where we set the value for SpikeArrest, we first check for the value in the flow variables, like:

propertyset.[property_set_name].client_A.endpoint_1

If it is found, we use it as the SpikeArrest value; if NOT found, we fall back to searching for it in the KVM. This way we can override it.
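
A sketch of that fallback logic (assuming a property set named "spike_overrides"; the endpoint derived from proxy.pathsuffix and the other variable names are purely illustrative):

// Illustrative only: property set values are exposed as flow variables of
// the form propertyset.<property_set_name>.<key>.
var clientId = context.getVariable('request.header.client_id');          // e.g. "client_A"
var endpoint = context.getVariable('proxy.pathsuffix').replace('/', ''); // e.g. "endpoint_1"

// 1) Look for a client-specific override in the property set.
var overrideValue = context.getVariable('propertyset.spike_overrides.' + clientId + '.' + endpoint);

// 2) Fall back to the per-endpoint value already fetched from the KVM.
var effectiveValue = overrideValue || context.getVariable('kvm.spike.value');
if (effectiveValue) {
  context.setVariable('flow.spike.rate', effectiveValue.replace('TPS', 'ps'));
}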

If we want to go with this approach, are there limitations of PropertySets that could impact the implementation/performance?

What are the limits?
- Total number of PropertySets that we can have in an environment. Is it 100?
- Total number of records that a PropertySet can have, or is there a limitation on the size of a PropertySet?

Also, do we see any performance issue if we have a large number of records in the PropertySet, given that they will always be loaded but used for only a limited number of endpoints? E.g. we will have, say, 5000 endpoints and may have to override a limited number of endpoints for a limited number of clients, maybe around 500 entries in the PropertySet, which will always be loaded and added as flow variables.

Any suggestion for a better design to solve our requirement?