Dynamically set Service Account for an API Proxy - Apigee X

Hello,

I'm developing an API proxy that publishes messages to a Pub/Sub topic, but we actually have these topics split across different GCP projects; therefore, I need to use a different service account for each project.

[attached screenshot: Pub/Sub topics spread across multiple GCP projects]

I have managed to publish messages using the 'Publish Message' policy with a single service account, but I would like to know if there is a way to set this service account dynamically.
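For reference, a 'Publish Message' policy pointed at a single topic looks roughly like this (a sketch; the project name, topic name, and Source variable are just placeholders):

<PublishMessage name='PM-Publish-Order'>
  <!-- the flow variable whose content gets published as the message payload -->
  <Source>{request.content}</Source>
  <CloudPubSub>
    <!-- the topic lives in one specific GCP project -->
    <Topic>projects/my-project-a/topics/orders</Topic>
  </CloudPubSub>
</PublishMessage>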

P.S.: Using the Pub/Sub REST API is not an option, since it gives a permission error with the same credentials that work in the API proxy.

Solved
1 ACCEPTED SOLUTION

No, you cannot dynamically assign the Service Account that the proxy uses for outbound connections. You can specify only one, at deployment time. This is the SA that will be used for outbound calls that have a configuration stanza like this:

 

    <Authentication>
      <GoogleAccessToken>
        <Scopes>
          <Scope>https://www.googleapis.com/auth/cloud-platform</Scope>
        </Scopes>
      </GoogleAccessToken>
    </Authentication>

 

P.S.: Using the Pub/Sub REST API is not an option, since it gives a permission error with the same credentials that work in the API proxy.

I think you've made an assumption here that is not valid. You CAN use the Pub/Sub REST API from within an Apigee API proxy. You can connect to pubsub.googleapis.com via a ServiceCallout. The trick is that you need to specify the right authorization header in the ServiceCallout configuration. One option relies on the GoogleAccessToken element shown above, and looks like this (works on Apigee X only; note that the message data must be base64-encoded):

 

<ServiceCallout name='SC-PubSub'>
  <Request variable='myrequestvariable'>
    <Set>
      <Verb>POST</Verb>
      <Payload contentType='application/json'>{
  "messages": [
    {
      "data": "{base64-encoded-message-data}",
      "attributes": {
        "name1": "value1",
        "name2": "value2"
      }
    }
  ]
}
</Payload>
    </Set>
  </Request>

  <Response>pubsubResponse</Response>

  <HTTPTargetConnection>

    <!-- this will use the Service Account you specified at deployment time -->
    <Authentication>
      <GoogleAccessToken>
        <Scopes>
          <Scope>https://www.googleapis.com/auth/pubsub</Scope>
        </Scopes>
      </GoogleAccessToken>
    </Authentication>

    <SSLInfo>
        <Enabled>true</Enabled>
        <IgnoreValidationErrors>false</IgnoreValidationErrors>
    </SSLInfo>
    <Properties>
      <Property name='success.codes'>2xx, 3xx</Property>
    </Properties>
    <URL>https://pubsub.googleapis.com/v1/projects/{gcp-project}/topics/{pubsub-topic}:publish</URL>
  </HTTPTargetConnection>
</ServiceCallout>

 

This will call out to Pub/Sub using a token for the SA you specified at deployment time. At runtime, Apigee obtains a token for that SA and injects it into the Authorization header of the outbound request.

You told me that the SA that you specified at deployment time isn't the right one. No problem. You can "manually" specify the Authz header. It looks like this:

 

<ServiceCallout name='SC-PubSub-2'>
  <Request variable='myrequestvariable'>
    <Set>
      <Verb>POST</Verb>
      <Headers>
        <Header name='Authorization'>Bearer {generated-token}</Header>
      </Headers>
      <Payload contentType='application/json'>{
  "messages": [
    {
      "data": "{base64-encoded-message-data}",
      "attributes": {
        "name1": "value1",
        "name2": "value2"
      }
    }
  ]
}
</Payload>
    </Set>
  </Request>

  <Response>pubsubResponse</Response>

  <HTTPTargetConnection>
    <SSLInfo>
        <Enabled>true</Enabled>
        <IgnoreValidationErrors>false</IgnoreValidationErrors>
    </SSLInfo>
    <Properties>
      <Property name='success.codes'>2xx, 3xx</Property>
    </Properties>
    <URL>https://pubsub.googleapis.com/v1/projects/{gcp-project}/topics/{pubsub-topic}:publish</URL>
  </HTTPTargetConnection>
</ServiceCallout>

 

At this point, you just need to generate a token for "the right service account". And you can do THAT within the API proxy too. One way to do it is to POST to the generateAccessToken endpoint of the IAM Credentials API. This accepts the email of a service account and gives you back an OAuth access token for that service account, which is what the Pub/Sub API expects. You would authenticate to the generateAccessToken endpoint with... the SA that you specified at the time you deployed the proxy. This is a method by which one SA, let's call it SA1, can impersonate another SA, let's call it SA2, even if SA1 and SA2 are in distinct GCP projects.

Of course, you need to have configured the proper permissions to do that. This article explains how you would set that up. In short, you must grant permissions to SA1 that allow it to impersonate each different SA2. To do that, grant the role roles/iam.serviceAccountTokenCreator to SA1 on each SA2.

Then you need to actually get the token for SA2. To do that, use a ServiceCallout, like this:

 

<ServiceCallout name='SC-IAM'>
  <Request variable='iamRequest'>
    <Set>
      <Verb>POST</Verb>
      <Payload contentType='application/json'>{
  "scope": ["https://www.googleapis.com/auth/pubsub"]
}
</Payload>
    </Set>
  </Request>

  <Response>iamResponse</Response>

  <HTTPTargetConnection>

    <!-- this will use the Service Account you specified at deployment time -->
    <Authentication>
      <GoogleAccessToken>
        <Scopes>
          <Scope>https://www.googleapis.com/auth/cloud-platform</Scope>
        </Scopes>
      </GoogleAccessToken>
    </Authentication>

    <SSLInfo>
      <Enabled>true</Enabled>
      <IgnoreValidationErrors>false</IgnoreValidationErrors>
    </SSLInfo>
    <Properties>
      <Property name='success.codes'>2xx, 3xx</Property>
    </Properties>
    <URL>https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/{sa2_account_email}:generateAccessToken</URL>
  </HTTPTargetConnection>
</ServiceCallout>

 

Following that, you need to extract the token (the accessToken field) from the JSON response, and then use it in the second ServiceCallout, the one pointing to the Pub/Sub endpoint.
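For example, an ExtractVariables policy along these lines would do it; this is a sketch, and the prefix/variable name (sa2.token) is just a placeholder that you would reference in SC-PubSub-2 in place of {generated-token}:

<ExtractVariables name='EV-SA2-Token'>
  <!-- read the response message produced by SC-IAM -->
  <Source>iamResponse</Source>
  <JSONPayload>
    <Variable name='token'>
      <JSONPath>$.accessToken</JSONPath>
    </Variable>
  </JSONPayload>
  <VariablePrefix>sa2</VariablePrefix>
</ExtractVariables>

With that in place, the header in the second callout becomes <Header name='Authorization'>Bearer {sa2.token}</Header>.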

Does this make sense?

If I were doing this, I would wrap the generateAccessToken operation in a cache, so that the proxy re-uses the generated token for SA2. These tokens have a lifetime of 1 hour by default, so using the cache should provide some nice performance benefits.
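A rough sketch of that, using LookupCache and PopulateCache (the cache resource name 'token-cache', the key fragments, and the sa2.token variable are all illustrative):

<!-- before SC-IAM: check whether a token for this SA2 is already cached -->
<LookupCache name='LC-SA2-Token'>
  <CacheResource>token-cache</CacheResource>
  <Scope>Exclusive</Scope>
  <CacheKey>
    <KeyFragment>sa2-token</KeyFragment>
    <KeyFragment ref='sa2_account_email'/>
  </CacheKey>
  <AssignTo>sa2.token</AssignTo>
</LookupCache>

<!-- after SC-IAM and the ExtractVariables step: cache the freshly generated token -->
<PopulateCache name='PC-SA2-Token'>
  <CacheResource>token-cache</CacheResource>
  <Scope>Exclusive</Scope>
  <CacheKey>
    <KeyFragment>sa2-token</KeyFragment>
    <KeyFragment ref='sa2_account_email'/>
  </CacheKey>
  <ExpirySettings>
    <!-- a bit shorter than the 1-hour token lifetime -->
    <TimeoutInSec>3000</TimeoutInSec>
  </ExpirySettings>
  <Source>sa2.token</Source>
</PopulateCache>

You would then attach SC-IAM, the ExtractVariables step, and PC-SA2-Token with a condition like lookupcache.LC-SA2-Token.cachehit = false, so the IAM call happens only when there is no cached token.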


2 REPLIES


Hi @dchiesa1 , 

I was initially following a similar approach, "manually" specifying the Authz header.
When I said that the Pub/Sub REST API was not an option, it was due to a problem with the GCP project, which has since been sorted out by Google support.
Anyway, this solution will do the job 🙂

Thank you