How the ServiceCallout policy works


Hi,

I am new to Apigee and am learning it.

As per the documentation, I have created two proxies: one is a token-generating proxy and the other is an OAuth-protected API proxy.

So far it is good and working fine.

Now I want to add a ServiceCallout policy to the OAuth-protected API proxy, which should make a call to the token-generating proxy, get the response, parse out the access token, and insert it into my API's response message.

Could you help me learn this, and could you also share a sample proxy for it?

Regards,

ramakrishna

Solved
1 ACCEPTED SOLUTION

I'm sorry - I don't understand what you want to do. I understand what you wrote, but it doesn't make sense to do it that way.

You wrote:

I want to add a ServiceCallout policy to the OAuth-protected API proxy, which should make a call to the token-generating proxy and get the response...

That does not make sense.

Let me take a step back and review what's going on. In Apigee Edge, the basic metaphor is the "API proxy". The API proxy is the thing that gets deployed into Edge, and it includes instructions (policies, conditions, targets, and so on) for what to do when a request arrives. Each API proxy exposes one or more endpoints, but for simplicity, let's assume for now that every API proxy has one proxy endpoint. This is a base URL at which the proxy "listens". (Not quite accurate. The proxy doesn't "listen." In reality, Apigee Edge "listens" and activates your proxy logic when an appropriate request arrives. But "the proxy listens" is close enough.)
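To make that concrete, a proxy endpoint is described by a small piece of configuration, roughly like the following sketch. The endpoint name, base path, and virtual host here are placeholders, not anything taken from your proxies:

    <!-- Minimal sketch of a ProxyEndpoint definition; names and base path are placeholders. -->
    <ProxyEndpoint name="default">
      <HTTPProxyConnection>
        <!-- Edge routes requests that arrive at this base path to this endpoint's flows. -->
        <BasePath>/hello</BasePath>
        <VirtualHost>secure</VirtualHost>
      </HTTPProxyConnection>
      <!-- Request/response flows, with their policies and conditions, would go here. -->
      <RouteRule name="default">
        <TargetEndpoint>default</TargetEndpoint>
      </RouteRule>
    </ProxyEndpoint>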

The typical scenario, in which an endpoint is protected by OAuth 2.0 token validation, requires two endpoints. For example:

URL                     Purpose
http://domain/oauth2    token-dispensing endpoint
http://domain/hello     "protected" endpoint
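
On the token-dispensing endpoint, the token is typically issued by an OAuthV2 policy with the GenerateAccessToken operation. Here is a minimal sketch; the policy name, grant type, and expiry are illustrative values, not taken from your proxy:

    <!-- Attached to the /oauth2 token-dispensing endpoint; name and values are illustrative. -->
    <OAuthV2 name="OAuthV2-GenerateToken">
      <Operation>GenerateAccessToken</Operation>
      <SupportedGrantTypes>
        <GrantType>client_credentials</GrantType>
      </SupportedGrantTypes>
      <!-- Token lifetime in milliseconds; 30 minutes here. -->
      <ExpiresIn>1800000</ExpiresIn>
      <!-- Return the newly issued token directly in the response. -->
      <GenerateResponse enabled="true"/>
    </OAuthV2>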

The typical pattern of interaction at runtime is like this: the client app makes an API request to the token-dispensing endpoint to acquire a token. This happens once. Then the client app can make many API requests to the "protected" endpoint, passing the token. The client can make as many requests as it likes, until the token expires. Typical token lifetimes might be 30 minutes or 60 minutes. The lifetime is under the control of the token-issuing endpoint.

OK. Are we clear on all that?

What you are suggesting is for one API proxy endpoint to call into another. That is reasonable - not typical, but reasonable. But specifically, you are imagining the "protected" endpoint calling out to the other endpoint to acquire a token. That does not make sense.

That's not how the model typically works.

We must maintain a separation of concerns.

  • The protected endpoint validates tokens. It should not generate tokens, or invoke other proxies to generate tokens. (A sketch of the validation policy follows this list.)
  • The token-issuing endpoint generates (aka "issues") tokens. It should not validate tokens.
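
To make the first point concrete: on the protected endpoint, validation usually amounts to a single OAuthV2 policy with the VerifyAccessToken operation attached to the request flow; by default it expects the token as a Bearer token in the Authorization header. A minimal sketch (the policy name is illustrative):

    <!-- Attached to the request flow of the protected endpoint; the name is illustrative. -->
    <OAuthV2 name="OAuthV2-VerifyToken">
      <Operation>VerifyAccessToken</Operation>
    </OAuthV2>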

If you still think you need the protected endpoint to call out to generate a new token, can you please explain why? In more detail: why do you think you should cross concerns and enable the protected endpoint to also generate tokens, by indirectly invoking the token-issuing endpoint?
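
Separately, since your title asks how the ServiceCallout policy works: in general, ServiceCallout lets a proxy flow call some other service and hold that service's response in a message variable for later policies to read. A minimal sketch of its shape follows; the URL and variable names are placeholders, and again, I would not use it to call your own token endpoint:

    <!-- General shape of a ServiceCallout; the URL and variable names are placeholders. -->
    <ServiceCallout name="SC-Example">
      <Request clearPayload="true" variable="myCalloutRequest">
        <Set>
          <Verb>GET</Verb>
        </Set>
      </Request>
      <!-- The callout's response is stored in this message variable. -->
      <Response>calloutResponse</Response>
      <HTTPTargetConnection>
        <URL>http://example.com/some/backend</URL>
      </HTTPTargetConnection>
    </ServiceCallout>

A later ExtractVariables or AssignMessage policy can then read calloutResponse.content to pull values out of that response.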



@Dino It is clear now. Thanks for your detailed explanation.