I have had this conversation with several of my peers: which approach should we take to store credentials and other sensitive configuration that the runtime API needs to access, in a way that is both secure and performant? The words "secure" and "performant" are important, since we want both.
The two top choices are clearly the KVM and the Apigee Vault (secure store service). Let's discuss each and try to come up with the more viable solution based on the use case.
The KVM way -
KVMs have been used in many, many implementations to store environment-related configuration such as backend URLs, usernames, passwords, etc.
Pros -
1) Out-of-the-box policy to GET, PUT, or DELETE a KVM resource (the KeyValueMapOperations policy) - see the policy sketch after these lists.
2) Performant: KVM resources are (or at least were) known to be cached for some time, which can be super performant if almost every API call performs a KVM operation.
3) You can have one place to store all your configuration, making it super easy for operations to maintain.
4) Can be scoped at the organization, environment, API proxy, or policy level.
Cons -
1) Content is NOT encrypted at rest; it is stored as plain text. (However, it should be a simple step to encrypt the data yourself before storing it and decrypt it in the proxy.)
2) If not used correctly - say, if it is used to store tremendously large values - it can have an impact on runtime traffic.
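As promised above, here is a minimal sketch of the out-of-the-box retrieval using the KeyValueMapOperations policy; the map name app-config and the key backend-url are hypothetical:

    <KeyValueMapOperations name="KVM-Get-Config" mapIdentifier="app-config">
      <Scope>environment</Scope>
      <!-- Read the value stored under "backend-url" into a flow variable -->
      <Get assignTo="config.backend.url">
        <Key>
          <Parameter>backend-url</Parameter>
        </Key>
      </Get>
    </KeyValueMapOperations>

The retrieved value lands in the flow variable config.backend.url, where later policies can reference it.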
The Vault way -
Pros -
1) Data is encrypted and not stored as plain text!!
Cons -
1) No out-of-the-box policy to retrieve the data - it needs the apigee.getVault() method, which is only accessible through Node.js (see the sketch after this list).
2) Configuration ends up stored in different places - you might not want all of your configuration in the Vault, and would end up storing some of it in config.json or a KVM, which can be difficult to maintain.
3) Not necessarily a con, but unlike KVMs, Vaults can only be scoped at the organization or environment level.
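A minimal sketch of the Node.js retrieval mentioned in con 1, using the apigee-access module; the vault name credentials and the key backend-password are hypothetical:

    var apigee = require('apigee-access');

    // Open the environment-scoped vault ('organization' is the other option)
    var vault = apigee.getVault('credentials', 'environment');

    // Vault values can only be read at runtime; writes go through the management API
    vault.get('backend-password', function (err, value) {
      if (err) {
        console.error('Vault lookup failed: ' + err);
        return;
      }
      // Use the secret, e.g. to authenticate the backend call
      console.log('Retrieved a secret of length ' + value.length);
    });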
While Vault is more difficult to use, it's almost certainly the best way to store sensitive data such as credentials. Two facts make me believe this: 1) Vault is encrypted, while other potential methods are not, and 2) practically speaking, information stored in Vault can only be *retrieved* by a runtime proxy, making it more difficult for unauthorized personnel to gain access, whereas the KVM can be accessed at runtime *and* via the management API. Non-sensitive data can be stored in many other places (the KVM and custom attributes on the API Product, Developer, or Developer Application come to mind most often), but credentials really should be encrypted when stored, and access should be controlled as closely as possible.
Even though Vaults are scoped to org or environment level, you can have many entries at each level. This gives you a roughly equivalent level of flexibility to that provided by proxy-scoped KVM entries.
@Chris von See Very valid point. However, my concern is this - since the only method of retrieving the Vault variables is through a Node.js implementation, and currently Node.js can only be implemented as a target and not as a callout, this might impact my analytics.
Say I have to grab the credentials for almost every API call hitting a target. How will I get correct analytics with what you are suggesting? @David Allen
@Vinit Mehta You're correct in saying that target latency metrics won't be accurate, because the actual target call may not be made from the endpoint defined in the TargetEndpoint. Other analytics - call counts, overall resource request latency, error codes, etc. - are still valid.
Calling your target endpoint from Node.js might be one way to address the point you raise.
@Chris von See Just to clarify - when you say other analytics are still valid, is that assuming you are calling the actual target from Node.js as well?
No, what I mean is that even if you call the real "target" via a ServiceCallout policy after the script target is called, your overall request latency numbers (but not the target latency numbers) will still be accurate. I thought perhaps calling the target from Node.js after retrieving the creds from Vault might give you more accurate target latency stats as well.
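A minimal sketch of that suggestion - fetching the secret from the Vault and then calling the real target from the same Node.js script - reusing the hypothetical vault and key names from above; the backend host and basic-auth user are also hypothetical:

    var apigee = require('apigee-access');
    var https = require('https');
    var express = require('express');
    var app = express();

    var vault = apigee.getVault('credentials', 'environment');

    app.all('*', function (req, res) {
      // Fetch the secret, then make the real backend call from the same script
      vault.get('backend-password', function (err, password) {
        if (err) {
          res.status(500).send('Vault lookup failed');
          return;
        }
        https.get({
          host: 'backend.example.com',     // hypothetical backend
          path: req.url,
          auth: 'serviceuser:' + password  // basic auth with the Vault secret
        }, function (backendRes) {
          res.status(backendRes.statusCode);
          backendRes.pipe(res);
        });
      });
    });

    app.listen(3000);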
Vault is definitely the way to go. There's an enhancement request to enable access to vault storage from the regular proxy flow. IMO that's required, so hopefully the primary con will go away.
Just curious, is this still an enhancement request or is work already completed/underway to create a Vault-Access sort of policy?
Yes. Work is underway. Please stay tuned.
Nice write up, @Vinit Mehta.
I believe KVMs can be used safely, but it is strongly recommended to lock them down via RBAC and to encrypt the data that is put in them, decrypting it in the proxy (a sketch follows below). I feel that there is a cost to using Vault at this time, potentially forcing users to use Node.js when they aren't otherwise using it.
That being said, as soon as Vault is available via policy, it will surely be the preferred way.
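A minimal sketch of that encrypt-before-store approach, assuming AES-256-CBC, a key provisioned out of band, and a KVM value stored as "<iv-hex>:<ciphertext-hex>"; all names here are hypothetical:

    var crypto = require('crypto');

    // Decrypt a KVM value stored as "<iv-hex>:<ciphertext-hex>".
    // The 32-byte key must be provisioned out of band - never keep it in the KVM itself.
    function decryptKvmValue(stored, keyHex) {
      var parts = stored.split(':');
      var iv = Buffer.from(parts[0], 'hex');
      var ciphertext = Buffer.from(parts[1], 'hex');
      var decipher = crypto.createDecipheriv('aes-256-cbc', Buffer.from(keyHex, 'hex'), iv);
      return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString('utf8');
    }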
I don't think we can pass any security review if we store plain-text passwords in Cassandra (C*). We also need to think about Private Cloud here, as the original article didn't make any distinction.
Another approach is to create a new proxy with a Node.js target whose only job is to get the data from the Vault. The original proxy then calls it and caches the data for x hours. Not the best approach, but it seems good enough until we get policy access to the Vault - see the sketch below. This option obviously requires protection around this vault-access API.
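A minimal sketch of the caching half of that approach, using the standard LookupCache/PopulateCache policies; the cache resource and key names are hypothetical, and the ServiceCallout to the vault-access proxy (not shown) is assumed to put its response in calloutResponse:

    <LookupCache name="LC-Credentials">
      <CacheResource>credentials-cache</CacheResource>
      <Scope>Exclusive</Scope>
      <AssignTo>cached.credentials</AssignTo>
      <CacheKey>
        <KeyFragment>vault-credentials</KeyFragment>
      </CacheKey>
    </LookupCache>

    <!-- On a cache miss, call the vault-access proxy, then store the result -->
    <PopulateCache name="PC-Credentials">
      <CacheResource>credentials-cache</CacheResource>
      <Scope>Exclusive</Scope>
      <Source>calloutResponse.content</Source>
      <CacheKey>
        <KeyFragment>vault-credentials</KeyFragment>
      </CacheKey>
      <ExpirySettings>
        <TimeoutInSec>3600</TimeoutInSec>
      </ExpirySettings>
    </PopulateCache>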
What difference, if any, is there in auditing/logging capability between these methods? How granular is the logging around access to the KVM via the management APIs - e.g. in the case of key-change actions? Does audit logging differ by user type?
@mchalmers - currently we only audit-log create/update/delete actions on developers, users, organisations, API products, API proxies, and apps. So KVM and Vault get/set operations are not audit logged.
Hello all, I wanted to add an important note: encrypted KVMs are here. Details are in our documentation: http://docs.apigee.com/api-services/reference/key-value-map-operations-policy. You now have an option for encrypted data without having to use Node.js.
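A minimal sketch of retrieving a value from an encrypted KVM with that policy; per the linked documentation, encrypted values must be assigned to a variable with the "private." prefix. The map and key names are hypothetical:

    <KeyValueMapOperations name="KVM-Get-Secret" mapIdentifier="secure-config">
      <Scope>environment</Scope>
      <!-- Encrypted values must go into a "private."-prefixed variable -->
      <Get assignTo="private.backend.password">
        <Key>
          <Parameter>backend-password</Parameter>
        </Key>
      </Get>
    </KeyValueMapOperations>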