Combine multiple access policies

Hi,

I'm trying to understand how to set up policies to allow multiple ways of authenticating with a proxy.

In short, we have 3 different scenarios.

1. Service-to-service calls from GCP Cloud Run, GKE, Workflows, etc. In this case I would like to send the service account JWT, validate it, and if possible extract the roles to check that the SA may invoke the proxy.

2. Calls from non-GCP-hosted services. In this case I'd like to use OAuth2 with app credentials from the Portal.

3. User-generated calls from internal apps where the user is authenticated via Azure AD SSO. In this case I'd like to send an Azure AD JWT along, have it validated, and extract roles/claims to check whether the user may invoke the proxy. In some cases users in department A may only read while users in department B may update, so I would like to check this at the endpoint or URI level.

Is this possible? Or is it a bad approach? 


Sure, it's possible!

OAuthV2 is a good approach; it sounds like you are happy with that, and it makes sense to me.

You have three different token issuers. There are two kinds of JWT, and then an additional opaque token, so three different "kinds" of token, corresponding to the three different token issuers (Azure, GCP, and Apigee).

You could introduce logic into all of your various API proxies to examine the token type and then validate it conditionally, depending on the kind of token the client sent in. After verifying that your proxy trusts the token (it's valid, issued by a trusted issuer, etc.), you'd need an authorization check: given that the token is valid, is it valid for the resource and action requested? To implement this, one could imagine a sharedflow that checks for the three different token types, does the "parallel" validation, and then the authorization. Then just call out to that SharedFlow from each proxy that needs it.
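
To make that concrete, here is a rough sketch of what such a SharedFlow definition could look like. All of the policy names (DecodeJWT-Peek, VerifyJWT-Azure, VerifyJWT-GCP, OAuthV2-VerifyAccessToken) are made up for illustration, and the issuer strings, conditions, and flow variable names would need to be adjusted to your tenants and to the exact variables your DecodeJWT policy populates:

 <!-- sharedflows/default.xml : sketch only, policy names are hypothetical -->
 <SharedFlow name="default">

   <!-- Peek at the bearer credential if it looks like a JWT (three dot-separated parts). -->
   <Step>
     <Name>DecodeJWT-Peek</Name>
     <Condition>request.header.authorization ~~ "Bearer [^.]+\.[^.]+\.[^.]+"</Condition>
   </Step>

   <!-- Azure AD-issued JWT: verify signature, issuer, audience. -->
   <Step>
     <Name>VerifyJWT-Azure</Name>
     <Condition>jwt.DecodeJWT-Peek.decoded.claim.iss ~~ "https://login.microsoftonline.com/.*"</Condition>
   </Step>

   <!-- Google-signed ID token for a GCP service account. -->
   <Step>
     <Name>VerifyJWT-GCP</Name>
     <Condition>jwt.DecodeJWT-Peek.decoded.claim.iss = "https://accounts.google.com"</Condition>
   </Step>

   <!-- Otherwise, treat it as an opaque Apigee-issued token. -->
   <Step>
     <Name>OAuthV2-VerifyAccessToken</Name>
     <Condition>!(request.header.authorization ~~ "Bearer [^.]+\.[^.]+\.[^.]+")</Condition>
   </Step>

   <!-- A final step (not shown) would check roles/claims against the requested resource. -->
 </SharedFlow>
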

That will work. You will need to ensure an Apigee-issued (or Apigee-recognized) client ID is somehow attached to the externally-generated token. The client ID is important because it's the thing that gives you API Product behavior, including the API Product operation authorization checks (client A can send GET to /api1/foo but cannot send POST to that URL, client B can send GET or POST, etc.). As long as you have that client ID associated with each token, no problem. This usually means synchronizing client IDs across systems - for example, Azure AD (new name: Entra ID) must carry the same client ID as Apigee. You can do that by importing credentials into Apigee.
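
For example, if the externally-issued token carries that synchronized client ID in a claim (Azure AD app tokens typically carry it in azp or appid), one way to get the API Product and operation checks is to feed that claim into a VerifyAPIKey policy after the JWT has been verified. This is only a sketch; the policy name and the claim used here are assumptions:

 <!-- Sketch: use the client ID claim from the verified external token as the consumer key,
      so API Product / operation authorization applies. Names are hypothetical. -->
 <VerifyAPIKey name="VAK-ClientIdFromClaim">
   <APIKey ref="jwt.VerifyJWT-Azure.decoded.claim.azp"/>
 </VerifyAPIKey>
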

Possibly a better approach, maybe more extensible and easier to maintain, would be to implement an RFC 8693-style token exchange. What does this mean?

First, some background. If you know OAuth V2, you are aware that the original specification (RFC 6749, dating to 2012!) specified a set of "grant types", basically different ways clients might request tokens. These are: authorization code, password, client credentials, and implicit. (The implicit grant type has since been "dis-recommended", one might say, deprecated.) And the result of any successful request-for-token was... an access token. But the OAuth specification was open enough to say that there might be other grant types. RFC 7523 described one - using a signed JWT in place of client credentials to request an access token. And RFC 7522 described using a SAML token as credentials to request an access token.

RFC 8693 introduces a new grant type, urn:ietf:params:oauth:grant-type:token-exchange, to generalize the idea of exchanging one kind of token for another. A client sends in a token issued by "some external system", and the token dispensary evaluates and validates that token before issuing a "native" access token. A request for a token, according to RFC 8693, can look something like this:

 POST /dispensary/token HTTP/1.1
 Host: tokendispensary.example.com
 Content-Type: application/x-www-form-urlencoded

 grant_type=urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Atoken-exchange
 &resource=https%3A%2F%2Fapi.example.com%2Fapi
 &subject_token=EXTERNALLY_ISSUED_TOKEN_HERE
 &subject_token_type=TYPE_OF_EXTERNALLY_ISSUED_TOKEN

That looks... pretty close to the pattern for all other OAuthV2 grant types. So it should be easy for client developers to implement.

Notice there is a subject_token_type in the request-for-token. The spec says it can take values like those described in section 3, such as urn:ietf:params:oauth:token-type:access_token or urn:ietf:params:oauth:token-type:saml2, and it also says "Other URIs MAY be used to indicate other token types." Which means you can adapt this pattern to your specific needs. You can do what you want!

Supposing you have a token-exchange endpoint, you can define it to accept the three different types of tokens from your three different types of clients. In all cases, after validation, that token-exchange endpoint issues a "native" token, native to your Apigee organization. Although the clients that already use Apigee client credentials are already getting a native token and won't need the exchange endpoint, so maybe you support two kinds of subject_token_type: GCP and Azure. Attached to the native token the proxy generates, you can have any custom claim (attribute) you want - scopes, audience, subject, etc. A sketch of how that endpoint could route requests follows below.
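
Here is roughly what the request flow of such a token-exchange proxy could look like, routing on subject_token_type. The proxy path, policy names, and the custom token-type URIs are all made up for illustration; RFC 8693 explicitly allows you to define your own URIs:

 <!-- Sketch of a conditional flow in the token-exchange proxy. Names are hypothetical. -->
 <Flow name="token-exchange">
   <Condition>(proxy.pathsuffix MatchesPath "/token") and (request.verb = "POST")</Condition>
   <Request>
     <!-- Reject anything that is not a token-exchange request. -->
     <Step>
       <Name>RF-InvalidGrantType</Name>
       <Condition>request.formparam.grant_type != "urn:ietf:params:oauth:grant-type:token-exchange"</Condition>
     </Step>

     <!-- Validate the subject_token according to its declared type. -->
     <Step>
       <Name>VerifyJWT-GCP</Name>
       <Condition>request.formparam.subject_token_type = "urn:example:token-type:gcp-sa-jwt"</Condition>
     </Step>
     <Step>
       <Name>VerifyJWT-Azure</Name>
       <Condition>request.formparam.subject_token_type = "urn:example:token-type:azuread-jwt"</Condition>
     </Step>

     <!-- Then: look up the client, and generate the native Apigee token (see the sketch further below). -->
   </Request>
 </Flow>
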

Then, in your business APIs, you don't need to introduce three different kinds of token validation. You just have one kind of validation - OAuthV2/VerifyAccessToken. And then you can do the normal authorization checks - either via API Product operations, or via that plus some other external authorization service (like maybe an OPA endpoint). You still may want this in a sharedflow, but it's much simpler.
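
In each business proxy (or in that much simpler sharedflow), the validation step collapses to something like this; the policy name is arbitrary:

 <!-- Single validation step in every business API: verify the Apigee-native token. -->
 <OAuthV2 name="OAuthV2-VerifyAccessToken">
   <Operation>VerifyAccessToken</Operation>
 </OAuthV2>
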

The token-exchange pattern just shifts the validation of the various kinds of token to the token dispensary endpoint. The validation in the business APIs is pretty vanilla, just regular Apigee token validation. The benefit is that all the variation is in the token dispensary, and you standardize the attributes and claims on the "native" token. If you want to introduce a new credential type, do so; those changes affect only the token dispensary, not the various business APIs. And the performance will be better too. So to me, this token-exchange pattern is attractive.

You still need to consider the client ID - when a client sends an externally-issued token to the exchange endpoint and asks for a new access token, you still need a client ID somewhere in that request in order for Apigee to issue the right kind of access token. Again, you can synchronize credentials - for example, in the GCP Service Account case, you could use the ID of the service account as the client ID. Just import that into an app in Apigee, and then the token dispensary can issue a token for that client ID.

OK, one question you may be asking: the OAuthV2/GenerateAccessToken policy doesn't support a grant type of urn:ietf:params:oauth:grant-type:token-exchange. So how would Apigee issue a token via an RFC 8693-style exchange?

What I would do is just use the client_credentials grant type in the policy configuration. That means you'd need the form param "grant_type" to hold "client_credentials", and the Authorization header to contain the basic-auth-encoded client_id and secret of the app. Keep in mind that you can contrive this - you don't need the client to SEND that header and form param, but those things must be present before calling OAuthV2/GenerateAccessToken. So use an AssignMessage before the OAuthV2 policy to set things up. Where do you get the client_id and secret? Well, the client_id, as I said, needs to be in that inbound request somewhere - either in the original token, or in a form param, or somewhere else. You don't need the secret in the inbound request; the token-dispensing proxy can look that up with GetOAuthV2Info (or VerifyAPIKey) using the client_id as the credential. Then you can use AssignMessage to set the client_id and the secret into the Authorization header, and set grant_type to client_credentials.
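
Putting that together, here is a rough sketch of the policy sequence inside the token-exchange proxy, after the external subject_token has already been verified. The policy names, the variable holding the looked-up client_id, and the claim copied into the custom attribute are all assumptions, not a definitive implementation - check the flow variable names in a trace for your setup:

 <!-- 1) Look up the app credential using the client_id associated with the external token.
      "external_client_id" is an assumed variable set earlier (e.g. from a token claim).
      VerifyAPIKey populates verifyapikey.<policyname>.client_secret among its variables. -->
 <VerifyAPIKey name="VAK-LookupClient">
   <APIKey ref="external_client_id"/>
 </VerifyAPIKey>

 <!-- 2) Contrive the inputs OAuthV2/GenerateAccessToken expects:
      a Basic Authorization header and grant_type=client_credentials. -->
 <AssignMessage name="AM-PrepareTokenRequest">
   <AssignVariable>
     <Name>credential_pair</Name>
     <Template>{verifyapikey.VAK-LookupClient.client_id}:{verifyapikey.VAK-LookupClient.client_secret}</Template>
   </AssignVariable>
   <AssignVariable>
     <Name>encoded_credentials</Name>
     <Template>{encodeBase64(credential_pair)}</Template>
   </AssignVariable>
   <Set>
     <Headers>
       <Header name="Authorization">Basic {encoded_credentials}</Header>
     </Headers>
     <FormParams>
       <FormParam name="grant_type">client_credentials</FormParam>
     </FormParams>
   </Set>
   <AssignTo createNew="false" transport="http" type="request"/>
 </AssignMessage>

 <!-- 3) Issue the native Apigee token, copying interesting claims as custom attributes. -->
 <OAuthV2 name="OAuthV2-GenerateNativeToken">
   <Operation>GenerateAccessToken</Operation>
   <SupportedGrantTypes>
     <GrantType>client_credentials</GrantType>
   </SupportedGrantTypes>
   <ExpiresIn>1800000</ExpiresIn> <!-- 30 minutes, in milliseconds -->
   <Attributes>
     <!-- e.g. carry roles extracted from the external token; claim name is an assumption -->
     <Attribute name="roles" ref="jwt.VerifyJWT-Azure.decoded.claim.roles" display="true"/>
   </Attributes>
   <GenerateResponse enabled="true"/>
 </OAuthV2>
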

I hope all of this makes sense. I am not sure if anyone has constructed an example proxy demonstrating RFC 8693. I couldn't find one. But I have one for RFC 7523, which is pretty similar. You could start with that and implement what you need for 8693.
