Creating gcloud print access token programmatically

 

Hello,

We need to integrate Apigee hybrid with our CI/CD pipeline. The JS scripts for deployment are ready, but we need to obtain an access token, store it in a variable, and access it at runtime. Please guide us on how to do that.

Solved
1 ACCEPTED SOLUTION

When you automate configuration tasks for Apigee hybrid, your script will directly or indirectly invoke requests on the endpoint apigee.googleapis.com. In all cases, the requests must be authenticated with an OAuth 2.0 (bearer) access token. If you were using a curl command, it would look like this:

 

curl -i -H "Authorization: Bearer $TOKEN" https://apigee.googleapis.com/v1/organizations/ORGNAME ...

 

The access token is required for any endpoint on googleapis.com. What I said above is true for apigee.googleapis.com, and also storage.googleapis.com, bigquery.googleapis.com, logging.googleapis.com, and so on. This is just standard Google Cloud stuff. Even with Apigee hybrid, in which the gateways can run outside of Google Cloud (say, in AWS EKS), the control plane is in Google Cloud, and you configure Apigee hybrid by interacting with the control plane. You need that access token to authorize the call.

There are three ways to obtain an access token that can then authorize calls to APIs on googleapis.com:

  1. Via an interactive user-login session. You can do this with gcloud auth print-access-token if you have previously "logged in" via gcloud auth login (link). Generally, gcloud auth login results in an interactive experience, in which the default browser opens a page that asks for authentication. After successful authentication, the gcloud command keeps a cache of the results of the login, including the just-acquired access token and a refresh token. So when you invoke gcloud auth print-access-token, gcloud either prints the cached token if it has not expired, or refreshes that token and prints the new one.
  2. Via a service account key file. The token-dispensing URI (https://oauth2.googleapis.com/token) accepts an inbound POST call as a request for a token. If the inbound call is correctly formatted, you get an access token in response. The inbound POST must be form-encoded (content-type = application/x-www-form-urlencoded), and the body must be like grant_type=${grant_type}&assertion=${assertion}, in which:

    • grant_type is the value urn:ietf:params:oauth:grant-type:jwt-bearer
    • assertion is a signed JWT! It is a token, but not an access token. Think of it as a request-for-access-token token. The JWT must be created with a specific payload, like this:

        {
              iss   : client_email,
              aud   : token_uri,
              iat   : nowInSeconds,
              exp   : nowInSeconds + 60,
              scope : requiredScopes
        }
      

      ...where the client_email and token_uri are taken from the downloaded service account key file, nowInSeconds is self-explanatory, and requiredScopes is replaced with one or more of the GCP scopes. Often the latter is simply https://www.googleapis.com/auth/cloud-platform. The whole thing is signed via RS256, using a private key that has been created for that service account.

      In response, you get an access token.

      You can actually do the same with gcloud auth activate-service-account (link), passing the service account credentials file. And you can do it using the Google Cloud SDK libraries in various programming languages: Golang, Java, JavaScript, .NET, and so on. (A shell sketch of this raw token exchange appears just after this list.)

  3. By sending a GET request to the metadata endpoint for GCP Compute Engine and simply asking for a token. This is really simple; the request looks like this:

    curl "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" \
    -H "Metadata-Flavor: Google"

    This is described here. The catch is that this works if and only if the command is run from a Google Compute Engine instance. It gets a token for the service account used by the GCE instance. You do not need to create or download a service account key file for this to work. This call won't work if you try invoking that endpoint from your laptop, or from a build server that runs outside of GCP.
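
To make option 2 concrete, here is a minimal shell sketch of the key-file-to-token exchange described above. It assumes bash with curl, jq, and openssl available; the key file path and scope are placeholders, not values from your environment:

    # Read the fields the assertion needs from the downloaded key file.
    KEY_FILE="sa-key.json"                                   # placeholder path
    SCOPE="https://www.googleapis.com/auth/cloud-platform"

    b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }

    CLIENT_EMAIL="$(jq -r .client_email "$KEY_FILE")"
    TOKEN_URI="$(jq -r .token_uri "$KEY_FILE")"
    NOW="$(date +%s)"

    # Build and sign the assertion JWT (RS256) with the key file's private key.
    HEADER="$(printf '{"alg":"RS256","typ":"JWT"}' | b64url)"
    CLAIMS="$(printf '{"iss":"%s","aud":"%s","iat":%s,"exp":%s,"scope":"%s"}' \
        "$CLIENT_EMAIL" "$TOKEN_URI" "$NOW" "$((NOW + 60))" "$SCOPE" | b64url)"
    SIG="$(printf '%s.%s' "$HEADER" "$CLAIMS" \
        | openssl dgst -sha256 -sign <(jq -r .private_key "$KEY_FILE") | b64url)"

    # POST the signed JWT to the token URI and keep the access token in a variable.
    TOKEN="$(curl -s -X POST "$TOKEN_URI" \
        --data-urlencode "grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer" \
        --data-urlencode "assertion=$HEADER.$CLAIMS.$SIG" \
        | jq -r .access_token)"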

In your case, it sounds like you want an access token to allow your deployment scripts to work. That's not an interactive user, so you will use a service account for that purpose. That means either case 2 or 3 from above. If case 3, it's super easy, but remember that your build/deployment script must be running on GCE. If case 2, then here is an example for generating a token in JavaScript/NodeJS. And HERE is a .NET program that does something similar.
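
For the "store it in a variable and use it at runtime" part of the question, a pipeline step might look roughly like the sketch below (assuming curl and jq are available in the pipeline image; the key file path and the ORG variable are placeholders):

    # Case 2 via gcloud: activate the service account, then capture a token.
    gcloud auth activate-service-account --key-file="sa-key.json"    # placeholder path
    TOKEN="$(gcloud auth print-access-token)"

    # Case 3, only from inside a GCE instance: ask the metadata server instead.
    TOKEN="$(curl -s -H "Metadata-Flavor: Google" \
        "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" \
        | jq -r .access_token)"

    # Either way, use the variable in subsequent calls against the control plane.
    curl -s -H "Authorization: Bearer $TOKEN" \
        "https://apigee.googleapis.com/v1/organizations/$ORG/apis"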

EDIT

There actually is a fourth way to obtain a token that is usable against googleapis.com: Workload Identity Federation. WIF follows IETF RFC 8693, the OAuth 2.0 token exchange specification (https://tools.ietf.org/html/rfc8693), which describes a standard way to exchange one kind of token for another.

 
How it works:
  1. You set up a “security token service” configured for your own use. You configure it to trust your own identity provider, something like AWS, Azure AD, on-premises ADFS, or Okta, which you run/operate.
  2. At runtime, your tool submits a credential obtained from your identity provider to the Security Token Service.
  3. The STS verifies the identity on the credential, and then returns an access token good for use against GCP APIs.
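
As a rough sketch only (the project number, pool, and provider IDs are placeholders, and real setups typically go through a client library or a generated credential configuration file rather than raw curl), the exchange against Google's STS endpoint looks something like this:

    # RFC 8693 token exchange against Google's STS; $EXTERNAL_TOKEN is a credential
    # from your own identity provider (for example, an OIDC ID token).
    curl -s -X POST "https://sts.googleapis.com/v1/token" \
        --data-urlencode "grant_type=urn:ietf:params:oauth:grant-type:token-exchange" \
        --data-urlencode "audience=//iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/providers/PROVIDER_ID" \
        --data-urlencode "subject_token_type=urn:ietf:params:oauth:token-type:jwt" \
        --data-urlencode "subject_token=$EXTERNAL_TOKEN" \
        --data-urlencode "requested_token_type=urn:ietf:params:oauth:token-type:access_token" \
        --data-urlencode "scope=https://www.googleapis.com/auth/cloud-platform"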

 


2 REPLIES


Awesome answer Dino - 

I wanted to add this tool I've recently become aware of that can help with command-line OAuth2 tokens:

https://github.com/google/oauth2l
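
For example (a hedged sketch; check the oauth2l README for the current flag names, which have changed between versions):

    # Fetch an access token using a service account key file.
    oauth2l fetch --credentials sa-key.json --scope cloud-platform

    # Or emit a ready-to-use Authorization header.
    oauth2l header --credentials sa-key.json --scope cloud-platform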

Hope that helps! Chad