Feature Toggle (via Launch Darkly) integration with Apigee

We have a requirement to integrate our Apigee proxies and shared flows with our internal Launch Darkly service. Any guidance or recommendations on how to integrate? Has anyone else done this?

1 ACCEPTED SOLUTION

I don't know that much about Launch Darkly, only a little.  Enough to be helpful maybe? 

Lots of Apigee customers use a "maintenance mode" flag for their APIs. In the simplest case, there is an entry in the Apigee KeyValueMap (KVM), and the API proxy (or a SharedFlow) includes a KeyValueMapOperations policy that performs a GET on the map, retrieves the maintenance-mode flag, and then "short circuits" the Apigee flow based on the state of that flag. Super simple. It might look like this in the proxy flow:

<PreFlow>
  <Request>
    <Step>
      <Name>KVM-Get-Maint-Mode</Name>
    </Step>
    <Step>
      <Condition>maintenance-mode = "true"</Condition>
      <Name>RF-Maintenance-Mode</Name>
    </Step>
    ...
  </Request>
</PreFlow>

 

In the above, the Condition checks the value the prior KVM policy retrieved (a string, since KVM values are strings). And RF-Maintenance-Mode is a RaiseFault policy that returns a static 400 response indicating that the service is offline.

The KVM policy in Apigee is clever enough to use a cache, and you can specify the cache settings within the policy configuration. The result is that your APIs can check for maintenance mode by reading the KVM while requiring actual I/O only once every 5 minutes or so, which means latency will be very good for almost all API invocations. Flip the bit in the KVM, and you get different behavior in your API proxy within 5 minutes.
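To illustrate (this is a sketch, not from the original post), a cached KVM read might look like the following. The map name settings, the key maintenance-mode, and the 300-second expiry are placeholder choices:

```xml
<KeyValueMapOperations name="KVM-Get-Maint-Mode" mapIdentifier="settings">
  <!-- cache the retrieved value; re-read the KVM at most once per 300 seconds -->
  <ExpiryTimeInSecs>300</ExpiryTimeInSecs>
  <Scope>environment</Scope>
  <Get assignTo="maintenance-mode">
    <Key>
      <Parameter>maintenance-mode</Parameter>
    </Key>
  </Get>
</KeyValueMapOperations>
```

The assignTo variable is what the Condition in the flow above would examine.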

You could do the same thing for other flags, feature flags if you like, by using the KVM in your Apigee proxy. If you have 4 or 5 feature flags, you won't want to read the KVM 4 or 5 times. Instead, store a single blob for your flags, something like a JSON hash, so that one KVM GET retrieves all the flags in one read. Your proxy would then need to parse the JSON to extract the flags of interest. One way to do that is AssignMessage / AssignVariable with the jsonPath static function. Another way is ExtractVariables with a series of JSONPath elements. A third way is a JavaScript policy that walks through the JSON hash. Regardless, the outcome of any of these options is one or more context variables accessible in your API proxy that tell it whether to enable or disable a feature or capability in the API proxy.
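As an example of the first option (a sketch; the variable flags_json holding the KVM blob and the flag names newCheckout / betaSearch are hypothetical), AssignMessage with the jsonPath message-template function could pull individual flags into context variables:

```xml
<AssignMessage name="AM-Parse-Flags">
  <!-- flags_json is assumed to hold the JSON hash retrieved from the KVM -->
  <AssignVariable>
    <Name>flag.newCheckout</Name>
    <Template>{jsonPath($.newCheckout,flags_json)}</Template>
  </AssignVariable>
  <AssignVariable>
    <Name>flag.betaSearch</Name>
    <Template>{jsonPath($.betaSearch,flags_json)}</Template>
  </AssignVariable>
</AssignMessage>
```

A Condition on a later Step could then check, for example, flag.newCheckout = "true".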

You could do something very similar with Launch Darkly. LD has a REST API, which means you can call it from an Apigee API proxy via ServiceCallout. You would have to embed the server-side LD key in your Apigee API proxy (maybe in a properties file, maybe in the KVM), perform the feature query to LD, retrieve and parse the result, and then use the output context variables in a Condition element around other steps in the flow.
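A ServiceCallout for that query might be shaped like this. This is a sketch only: the path, project and flag keys are placeholders, and you should consult the LaunchDarkly REST API documentation for the actual endpoint; private.ld-api-key is assumed to have been loaded earlier from the KVM or a properties file:

```xml
<ServiceCallout name="SC-LD-Feature-Query">
  <Request variable="ldRequest">
    <Set>
      <Headers>
        <!-- the LD key, retrieved earlier into a private variable -->
        <Header name="Authorization">{private.ld-api-key}</Header>
      </Headers>
      <Verb>GET</Verb>
      <!-- placeholder path; check the LD REST API docs for the real one -->
      <Path>/api/v2/flags/my-project/my-flag</Path>
    </Set>
  </Request>
  <Response>ldResponse</Response>
  <HTTPTargetConnection>
    <URL>https://app.launchdarkly.com</URL>
  </HTTPTargetConnection>
</ServiceCallout>
```

The JSON in ldResponse.content would then be parsed the same way as the KVM blob described above.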

But unlike KeyValueMapOperations, ServiceCallout cannot automatically wrap a cache around the request. (It is an interesting feature idea, though!) So to make this perform well, you would want to wrap the ServiceCallout in a pair of policies: LookupCache / PopulateCache. (Find a screencast overview of these policies here.) The rest of the logic would be the same as what I described above, in which the Apigee KVM is the store of feature flags.
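The wrapping might be sketched like this (policy names, the cache resource flag-cache, and the 300-second timeout are placeholder choices). The flow invokes the callout and the cache write only on a cache miss:

```xml
<PreFlow>
  <Request>
    <Step>
      <Name>LC-Lookup-Flags</Name>
    </Step>
    <Step>
      <!-- call LaunchDarkly only when the cache lookup missed -->
      <Condition>lookupcache.LC-Lookup-Flags.cachehit = false</Condition>
      <Name>SC-LD-Feature-Query</Name>
    </Step>
    <Step>
      <Condition>lookupcache.LC-Lookup-Flags.cachehit = false</Condition>
      <Name>PC-Populate-Flags</Name>
    </Step>
    ...
  </Request>
</PreFlow>
```

And the two cache policies, reading and writing the same cache key:

```xml
<LookupCache name="LC-Lookup-Flags">
  <CacheResource>flag-cache</CacheResource>
  <CacheKey><KeyFragment>ld-flags</KeyFragment></CacheKey>
  <AssignTo>ld-flags-json</AssignTo>
</LookupCache>

<PopulateCache name="PC-Populate-Flags">
  <CacheResource>flag-cache</CacheResource>
  <CacheKey><KeyFragment>ld-flags</KeyFragment></CacheKey>
  <ExpirySettings>
    <TimeoutInSec>300</TimeoutInSec>
  </ExpirySettings>
  <Source>ldResponse.content</Source>
</PopulateCache>
```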

Does this answer your question?


Very helpful.  You have answered a lot of my questions.  A lot to consider.  Thanks again.