What is the recommended way to validate an Apigee token by a producer hosted on another cloud provider?

Hi,

There is a use case where a third-party API provider/producer is hosted on AWS, and we want to expose those APIs via Apigee to consumers/customers. I am looking for a recommended and secure way for the producer to validate the Apigee token coming from the consumer, beyond checking iss and exp. Since the token is sent over the internet, we want the producer to be able to verify that the token really came from Apigee.

Producer - AWS
Consumer - any external user
Apigee - GCP

 


There are a couple of ways to do this.

One is to use Apigee as the GCP-hosted gateway: clients connect to the Apigee gateway, Apigee validates tokens, and Apigee then proxies to the AWS-hosted system. Usually you want Cross-Cloud Interconnect to make this happen.

From the way you asked the question, I guess you had not imagined doing that. Some people discount this approach without giving it a thought, on the idea that "there will be too much latency if there is a cross-cloud call". I caution you against jumping to that conclusion. Cross-cloud calls are pretty fast, because the AWS and GCP datacenters are usually very close to each other and have very big, fat pipes connecting them, and usually there is an order of magnitude less latency in that hop than in the upstream call itself. For example: 10ms in the GCP-to-other-cloud hop, 80ms from the client to GCP, and 100ms spent in the upstream service reading the database and so on. The extra 10ms is just not going to matter.

If using a GCP-based proxy for an AWS-hosted service is not acceptable, there are other options. Apigee has a GenerateJWTAccessToken operation on the OAuthV2 policy. So you could have your clients call into Apigee to get a token, then, with that token, repeatedly call into the AWS-hosted service directly. You said it was "3rd party", so I am not sure if it is feasible to change its behavior, but the AWS-hosted service would need to be modified to validate the signature, expiry, issuer, and so on, on that token. (I am not sure if GenerateJWTAccessToken allows you to influence the aud claim.) The AWS service can verify the signature with the public key corresponding to the private key you used to generate the token. To make that work you might need to publish that key from the Apigee side via JWKS.
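To make that last part concrete, here is a minimal sketch of what the validation could look like on the AWS side. It is not Apigee-specific code; it assumes the token is signed with RS256 and that you publish the public key at a JWKS endpoint exposed from Apigee. The URL, issuer value, and function name are hypothetical placeholders, and it uses the PyJWT library.

```python
# Minimal sketch, not Apigee-specific: validate an Apigee-issued JWT on the AWS
# side with PyJWT, assuming RS256 signing and a JWKS endpoint published from
# Apigee. The URL and issuer below are hypothetical placeholders.
import jwt
from jwt import PyJWKClient

JWKS_URL = "https://api.example.com/oauth/jwks"   # hypothetical JWKS endpoint
EXPECTED_ISSUER = "https://api.example.com"       # whatever iss you set when generating the token

# Create the client once; it fetches the JWK set and reuses it across calls.
_jwks_client = PyJWKClient(JWKS_URL)

def validate_apigee_token(token: str) -> dict:
    # Pick the signing key whose "kid" matches the token header.
    signing_key = _jwks_client.get_signing_key_from_jwt(token)
    # Verify signature, expiry, and issuer in one call; raises on any failure.
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        issuer=EXPECTED_ISSUER,
        options={
            "require": ["exp", "iss"],
            # If your token does carry an aud claim, pass audience=... to
            # jwt.decode instead of disabling this check.
            "verify_aud": False,
        },
    )
```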

That will work, and it's pretty simple. But you'd miss some Apigee benefits, namely analytics and traffic management. For the first, analytics: if the client doesn't call Apigee, but instead calls the AWS service directly, then Apigee is unable to track how many calls the client makes, so there's no way to get Apigee analytics (traffic rates, error rates, latency, etc.) on a per-client, per-app, or per-developer basis. Which is kind of a bummer. For the second: you wouldn't be able to do rate limiting, metering, and so on. And you don't get to take advantage of alerting and Advanced API Security, all the stuff that layers on top of the analytics data stream.

To solve that, you COULD configure your AWS-based service to call Apigee on every inbound call, passing along the token it receives. (Again, you would need to be able to modify the AWS-hosted 3rd party service to do this.) This would allow Apigee to validate the token, store the analytics record, and perform rate limiting for you. You would get traffic data, but not error rates or latency, because Apigee is not inline; it's off to the side. To make this work you would need a thin loopback proxy on the Apigee side that just validates the token, and that's it.
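A sketch of that relay, with assumptions of my own: the hostname and /token-check path below are hypothetical, and the Apigee loopback proxy behind them would just run VerifyAccessToken (or VerifyJWT) and return a status code.

```python
# Minimal sketch: the AWS-hosted service relays the inbound bearer token to a
# thin "validate only" proxy on the Apigee side. Hostname and path are
# hypothetical placeholders.
import requests

APIGEE_VALIDATOR_URL = "https://api.example.com/token-check"  # hypothetical loopback proxy

def is_token_valid(authorization_header: str) -> bool:
    resp = requests.get(
        APIGEE_VALIDATOR_URL,
        headers={"Authorization": authorization_header},  # e.g. "Bearer eyJ..."
        timeout=2,  # fail fast; this extra hop runs on every inbound request
    )
    # 200 means the token checked out; 401/403 (or 429 if quota applies) otherwise.
    return resp.status_code == 200
```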

There's another approach, and that is to use the Envoy adapter for Apigee. With this option, you deploy Envoy on the AWS side and use the Apigee ext_authz extension. In this case, the client calls into the Envoy proxy, which is hosted in AWS somewhere (EC2, I guess), and that proxy eventually calls into the AWS-hosted 3rd party service. The effect is basically what I described above, except now it's the PROXY calling Apigee to validate tokens. It also collects real analytics data, including latency, and it caches things, so it doesn't need to call Apigee for each request. So it performs much better.

In summary, there are three ways: 

  1. Apigee X with Cross-Cloud Interconnect
  2. custom token validation on the AWS side, plus an optional callout for analytics
  3. the Apigee Envoy adapter hosted on AWS

I would recommend 1 or 3, and 2 only as a last resort. Basically you're writing your own Envoy adapter with option 2; I described it above only to provide a basis for understanding what Envoy and the Apigee adapter do.

PS: there is a bonus fourth approach: re-host your AWS-based service in GCP!

@dchiesa1 

We will be using GCP ILB, Egress, and SWP to connect to AWS, which falls under your Option #1. Along with that, I am also aligning with your Option #2, exposing a JWKS endpoint.

Question: just going with Option #1, is it secure enough to send the token over the internet?

 

With option 1, the internet will be "secure enough", if you trust TLS. TLS zero-day attacks are rare, but not unheard of.