How to secure the proxy that dispenses OAuth access tokens?

Hello everybody,

We would like to use an OAuth proxy for authentication purposes, with the client_credentials grant type.

This proxy will be used by all entities/organizations on our Apigee platform.

How should I securely configure the proxy?

What are the best practices in terms of policies to apply to our OAuth proxy?

Thanks in advance

 

1 ACCEPTED SOLUTION

It's good that you're asking this question and giving consideration to how to secure that proxy. A proxy that dispenses access tokens is like a login endpoint on a web UI. You need to protect it against all the same threats, and manage it with the same care.

Where should we start?  How about at the edge?  If the token-dispensing endpoint is exposed to the public network, then you probably want a WAF in front of the endpoint, to mitigate DoS attacks and perform rate limiting. Basic edge security. In the Google Cloud portfolio you can use Cloud Armor for that purpose. Of course there are numerous other options from other companies. If the token endpoint is not exposed to the public internet, then the WAF may not be necessary.
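Within the proxy itself, you can add a coarse rate limit as a backstop to whatever the WAF does, using a SpikeArrest policy attached early in the request flow of the token proxy. Here is a minimal sketch; the policy name, the 50-per-second rate, and the choice of keying on the client_id form parameter are all placeholders you would tune for your own traffic.

<SpikeArrest name="SA-ProtectTokenEndpoint">
  <!-- Illustrative rate; size it from your observed peak traffic -->
  <Rate>50ps</Rate>
  <!-- Optionally smooth per-client rather than globally (assumes client_id arrives as a form parameter) -->
  <Identifier ref="request.formparam.client_id"/>
</SpikeArrest>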

In either case, whether the endpoint is publicly accessible or not, you need to protect that endpoint with TLS.  And you should ensure that it uses a current modern TLS - TLS 1.2 or 1.3 at least, with a restricted set of cipher suites.  Really I dislike even saying "TLS 1.2 or 1.3" - you should bias to TLS 1.3.  This is just standard good practice. What Mozilla recommends for securing browser traffic ("Use TLS 1.3")  is good advice for any client.  If you dispense tokens over a known insecure version of TLS or a known insecure cipher suite, then basically you're exposing yourself to handing out access tokens to malicious actors. 
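On OPDK (Edge for Private Cloud), the TLS version and cipher suites for the token endpoint are configured on the virtual host that fronts the proxy. Here is a sketch of the relevant fragment, with the host alias, keystore reference, key alias, and cipher list all as placeholders; whether TLS 1.3 is accepted depends on the router version in your installation, so verify that against your environment before relying on it.

<VirtualHost name="secure">
  <HostAliases>
    <HostAlias>api.example.com</HostAlias>
  </HostAliases>
  <Port>443</Port>
  <SSLInfo>
    <Enabled>true</Enabled>
    <!-- Keystore reference and key alias are placeholders -->
    <KeyStore>ref://mykeystoreref</KeyStore>
    <KeyAlias>myserverkey</KeyAlias>
    <!-- Restrict to modern protocol versions and a narrow cipher list -->
    <Protocols>
      <Protocol>TLSv1.2</Protocol>
    </Protocols>
    <Ciphers>
      <Cipher>TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384</Cipher>
      <Cipher>TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256</Cipher>
    </Ciphers>
  </SSLInfo>
</VirtualHost>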

Some people go further than using a WAF, and also include bot detection and scoring. A CAPTCHA (for example reCAPTCHA) can help your system score clients, to decide whether to dispense a token based on whether the caller "looks like a bot" or not. When people hear "reCAPTCHA", often they think of the older CAPTCHA user experience, like typing in a code you see in distorted graphics on a web UI, or clicking all the photos with traffic signals. But those are older experiences, circa 2013. The new reCAPTCHA experience is UI-free. Basically reCAPTCHA just scores the caller, based on fingerprinting the user-agent and profiling the IP address. There's no UI at all. How it works: the code on the client side contacts the reCAPTCHA server to get a token, then when the client wants to request an OAuth token from Apigee, it sends its reCAPTCHA token along with the Apigee-dispensed client credentials. Apigee can redeem the score for the reCAPTCHA token, determine if the caller seems to be a bot, and refuse to grant a token if so. Otherwise, just dispense a token according to the normal OAuth flow. There are other bot-scoring and mitigation approaches beyond reCAPTCHA. This extra layer is worth exploring if you have a high-value token endpoint.
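Inside the token proxy, one way to redeem that score is a ServiceCallout to the reCAPTCHA siteverify endpoint before the OAuth policy runs. A sketch, assuming the client sends its reCAPTCHA token in a form parameter named recaptcha_token and that your site secret has been loaded into a private variable from an encrypted KVM; both of those names are assumptions, not anything fixed by Apigee.

<ServiceCallout name="SC-VerifyRecaptchaToken">
  <Request variable="recaptchaRequest">
    <Set>
      <Verb>POST</Verb>
      <FormParams>
        <!-- Site secret, loaded earlier from an encrypted KVM (assumed variable name) -->
        <FormParam name="secret">{private.recaptcha_secret}</FormParam>
        <!-- reCAPTCHA token sent by the client (assumed form parameter name) -->
        <FormParam name="response">{request.formparam.recaptcha_token}</FormParam>
      </FormParams>
    </Set>
  </Request>
  <Response>recaptchaResponse</Response>
  <HTTPTargetConnection>
    <URL>https://www.google.com/recaptcha/api/siteverify</URL>
  </HTTPTargetConnection>
</ServiceCallout>

You would follow this with an ExtractVariables policy (JSONPayload) to pull the score out of recaptchaResponse, and a RaiseFault policy conditioned on that score to refuse the token request when the caller looks like a bot.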

Logging and monitoring are an important part of the security posture. You need visibility into traffic rates, error rates, and so on, on the token-dispensing endpoint, and also on the endpoints for all the other APIs. And you need alerting on anomalous behavior - aggregate rate spikes, or spikes from specific clients, or from specific regions. Apigee Enterprise gives you the monitoring and logging you need. You can augment that with alerting from other SIEM systems.
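On OPDK you can feed per-request records for the token endpoint into your SIEM with a MessageLogging policy, typically attached to the PostClientFlow so it runs even on error responses. A sketch; the message fields, host, and port are placeholders for your own syslog collector.

<MessageLogging name="ML-LogTokenRequests">
  <Syslog>
    <!-- Which client asked for a token, from where, and with what result -->
    <Message>tokenRequest org={organization.name} env={environment.name} client_id={request.formparam.client_id} ip={client.ip} status={response.status.code}</Message>
    <!-- Placeholder syslog collector -->
    <Host>siem.example.internal</Host>
    <Port>514</Port>
    <Protocol>TCP</Protocol>
  </Syslog>
</MessageLogging>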

You also need to consider the process through which you dispense the client credentials that can be used against this endpoint. You can think of "credential dispensing" as a "supply chain" issue in securing the token endpoint. If the credentials are not securely distributed - for example if you send credentials in email - then securing the token-dispensing endpoint won't matter much. It does no good to put a lock on your front door (the token-dispensing endpoint) if you hand out keys to anyone who asks for them (dispensing credentials). For credential generation and dispensing, you probably want to use a developer portal. Of course that needs to be protected with TLS, and you need the correct sign-on to that portal, probably SAML powered by some trusted identity provider. As you probably know, the Apigee platform supports a couple of different options for the developer portal capability.

You also need to take care that you don't dispense credentials freely. For example, maybe you want to limit each developer to 3 client id/secret pairs, maximum. Some companies put expiry on these id/secret pairs. The SolarWinds incident earlier this year illustrates why you might want to do that. Suppose you securely dispensed credentials to a partner, but then the partner does not securely store those credentials, allowing a bad actor to get them. Sometimes those creds are compromised well before they are used - they sit on a shelf on the dark web, and then months later, someone purchases those credentials and starts using them. If you force rotation of credentials, you can limit your exposure to this sort of risk. Apigee can enforce expiry of credentials, too.

Some systems go a little further in securing their token endpoint. Rather than using the client_credentials grant type, in which a secret is passed over the network from client to server, some systems rely on public/private key pairs, to ensure non-repudiation. The token endpoint for googleapis.com is one such example. Rather than using grant_type=client_credentials as described in IETF RFC 6749 (the RFC that defined OAuth 2.0), this approach uses the grant_type of urn:ietf:params:oauth:grant-type:jwt-bearer as described in IETF RFC 7523. The client self-signs a JWT and sends *that* over the network. Your Apigee-based access-token-dispensing endpoint then verifies the JWT (signature, expiry, subject and so on) before granting the token. A while back I produced a screencast describing this, along with an example API proxy; it is still available and relevant. Again you must consider supply-chain issues here. How does the client get the private key?  What is the key strength?  The Apigee developer portal, coupled with the SubtleCrypto API in modern browsers, can help with that, too.
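In Apigee, the verification step can be a VerifyJWT policy placed ahead of the OAuthV2 policy that grants the token. A sketch, assuming the client posts the self-signed JWT in the standard assertion form parameter and that you have already looked up that client's public key into a flow variable; the variable name and the issuer/audience values here are placeholders.

<VerifyJWT name="VJ-VerifyClientAssertion">
  <Algorithm>RS256</Algorithm>
  <!-- RFC 7523 clients send the signed JWT as the 'assertion' form parameter -->
  <Source>request.formparam.assertion</Source>
  <PublicKey>
    <!-- PEM public key previously fetched for this client (placeholder variable) -->
    <Value ref="flow.client_public_key"/>
  </PublicKey>
  <!-- Claims to enforce; values are illustrative -->
  <Issuer>the-client-id</Issuer>
  <Audience>https://api.example.com/token</Audience>
</VerifyJWT>

Expiry is checked as part of verification; if the JWT checks out, you proceed to generate the access token as usual.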

You also need to consider the lifetime of the access token itself. It should be brief, maybe 15 minutes or 30 minutes, or 60 minutes.  24 hours or longer is probably too long. The lifetime you need depends on the use-cases for the clients. 
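The lifetime is set directly on the OAuthV2 policy that generates the token. A minimal sketch for a 30-minute lifetime with the client_credentials grant type; note that ExpiresIn is expressed in milliseconds.

<OAuthV2 name="OA-GenerateAccessToken">
  <Operation>GenerateAccessToken</Operation>
  <!-- 30 minutes, in milliseconds -->
  <ExpiresIn>1800000</ExpiresIn>
  <SupportedGrantTypes>
    <GrantType>client_credentials</GrantType>
  </SupportedGrantTypes>
  <!-- Where to read the grant_type the client sent -->
  <GrantType>request.formparam.grant_type</GrantType>
  <GenerateResponse enabled="true"/>
</OAuthV2>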

You also need the right processes. A runbook. Consider whether and how your organization would react if you saw a spike in the use of a particular set of client credentials, indicating leaked creds. How would you get alerted if this were happening?  Who gets alerted?  How do you turn off (revoke) the credentials? How do you dispense new credentials to the authorized client?  And so on.  Security is not just about technology; you need the right processes for using the technology.

Lots of things to think about!  



Thank you @dchiesa1 for your quick and detailed response.

This was a question on which I needed clarification. We use Apigee OPDK with different planets (intranet and internet) shared by many organizations. So our proxy will serve both internal and external clients.

I also have some further questions, by the way.

Do I need to apply specific threat-protection policies, or policies that help protect against the OWASP API Security Top 10, to my OAuth proxy, given that it does not call a backend? What policies do you generally recommend attaching to an OAuth proxy?

We use the Drupal-based developer portal to show the client_id and client_secret.

Is it possible with Apigee OPDK to configure rotation and expiration for API keys?

 

Thank you