Return data from shared flow to client

Hi 

Is it possible to have a shared flow that sends a request to some endpoint, waits for the response, and returns the response to the API proxy, which then returns it to the client?

  1. The API proxy accepts the request
  2. The shared flow sends a request to http.test.com
  3. The shared flow gets the response and passes it back to the API proxy
  4. The API proxy returns the result to the client

The shared flow can use a ServiceCallout policy.

The policy has a Response element, which makes the callout synchronous and captures the response data. Within the shared flow you can read that response variable and set its content on the proxy's "response.content" variable to return it to the client.
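As a rough sketch, the ServiceCallout plus the step that copies its response back to the client might look like the following. The policy names, the `calloutResponse` variable name, and the URL are illustrative placeholders, not from this thread:

```xml
<!-- In the shared flow: synchronous callout; the response is stored in calloutResponse -->
<ServiceCallout name="SC-CallBackend">
  <Request variable="backendRequest">
    <Set>
      <Verb>GET</Verb>
    </Set>
  </Request>
  <Response>calloutResponse</Response>
  <HTTPTargetConnection>
    <URL>http://http.test.com</URL>
  </HTTPTargetConnection>
</ServiceCallout>

<!-- Later in the flow: copy the callout's payload onto the proxy response -->
<AssignMessage name="AM-SetClientResponse">
  <AssignTo createNew="false" type="response"/>
  <Set>
    <Payload contentType="application/json">{calloutResponse.content}</Payload>
  </Set>
</AssignMessage>
```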

Thanks

Yes, you can do what you describe, as Gana explained.

That said, NORMALLY this is not the best use of a SharedFlow. When you design your API proxy to connect to some other endpoint, you would normally use the Target Endpoint. That is the thing in Apigee that is intended to call out to an endpoint (like http.test.com). You don't need a shared flow for that. The target endpoint will, as you said, "send a request to some endpoint, and wait for the response". If you add nothing else to your API proxy, Apigee will return the response it received from the target to the original client.

You can alter this behavior by attaching policies to the response flow. For example, if you wanted to add or remove headers, so that the client receives something slightly different from what the target sent back to Apigee, you could do that with a policy in the response flow. Or if you wanted to manipulate the payload in some manner (adding a JSON field, or converting from XML to JSON), those are things you could accomplish by adding policies to the response flow of the API proxy.
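For reference, a minimal TargetEndpoint doing exactly this is just a few lines; the endpoint name here is the conventional default, and the URL is the one from the question:

```xml
<TargetEndpoint name="default">
  <!-- Policies attached to the response flow here would run on the
       target's response before it is returned to the client -->
  <HTTPTargetConnection>
    <URL>http://http.test.com</URL>
  </HTTPTargetConnection>
</TargetEndpoint>
```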

I have 2 API proxies which expose endpoints with almost identical behaviour; let's imagine we have endpointA and endpointB. When a client hits endpointA, it should send a request to http.test.com and return the result. When a client hits endpointB, it should do the same: send a request to http.test.com and return the response to the client. In order not to duplicate the logic for sending the request to http.test.com, I would implement a shared flow and attach it to each API proxy. In the future I will have more API proxies which will expose endpoints that communicate with http.test.com. I thought a shared flow was exactly what I need.

@ganadurai @dchiesa1 

In order not to duplicate logic ... I would implement a shared flow and attach it to each API proxy.


Yes, avoiding duplication is exactly the purpose of the SharedFlow concept.

But what is the logic related to "invoking the upstream"? If it is as simple as "invoke this URL", then the duplication we are discussing is copying that URL into 2 distinct proxies. That's not very extensive duplication, and if I were the chief architect of the system you are imagining, I probably wouldn't worry about repeating a URL in 2 places.

Your proposal is to embed the call to that target in a SharedFlow, and then invoke the SharedFlow (via a FlowCallout policy) from 2 distinct proxies. So you wouldn't be "reducing duplication" but merely trading one kind of duplication for another: instead of a duplicated target URL, you would have a duplicated FlowCallout. And by doing it with a SharedFlow, you would explicitly be avoiding the main thing in Apigee that is designed to invoke upstream systems, the TargetEndpoint, and you'd miss the analytics associated with it.
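To illustrate the tradeoff: each proxy would carry a FlowCallout like the one below instead of a target URL. The policy and shared-flow names are hypothetical:

```xml
<!-- Attached in each proxy's flow; invokes the shared flow that calls the backend -->
<FlowCallout name="FC-CallBackendFlow">
  <SharedFlowBundle>call-backend-flow</SharedFlowBundle>
</FlowCallout>
```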

But maybe there is something else, some other requirement you have not yet articulated, that suggests a shared flow is still a better idea. Maybe there is some post-processing of the response that you need to do.


@vladcherniavsky wrote:

In future I will have more API proxies which will expose endpoints which will communicate with http.test.com.


The duplication tradeoff I described above, either a target or a FlowCallout,  will still apply. 

If you have 3 or 4 or 12 or 20 proxies that all call the same endpoint, and all need the same post-processing, then... what I would do is create an internal API proxy, and then use a LocalTargetConnection from each of those 3 or 4 or 12 or 20 proxies into that "target proxy".  
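Each of those calling proxies would then use a target endpoint like the following sketch, where the internal proxy's name and its proxy-endpoint name are placeholders for whatever you deploy:

```xml
<TargetEndpoint name="default">
  <!-- Routes to another proxy in the same org/environment without a network hop -->
  <LocalTargetConnection>
    <APIProxy>internal-backend-proxy</APIProxy>
    <ProxyEndpoint>default</ProxyEndpoint>
  </LocalTargetConnection>
</TargetEndpoint>
```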

 

I don’t see any sophisticated logic being added for the proxies which are supposed to communicate with the upstream. The one thing I know will be there is a policy for getting the API key from the KVM, since the upstream is not public.
If I go with a TargetEndpoint, then that will be duplicated in every single API proxy.
If I go with a shared flow, it looks like there will be no duplication apart from attaching the shared flow to each API proxy.
An internal API proxy looks like a good option too.
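For context, fetching the upstream API key from a KVM is typically done with a KeyValueMapOperations policy; a sketch, with the map name, key name, and target variable all as placeholders:

```xml
<!-- Reads the stored upstream key into a private.* variable (kept out of debug output) -->
<KeyValueMapOperations name="KVM-GetUpstreamKey" mapIdentifier="backend-secrets">
  <Scope>environment</Scope>
  <Get assignTo="private.backend.apikey">
    <Key>
      <Parameter>upstream-api-key</Parameter>
    </Key>
  </Get>
</KeyValueMapOperations>
```

If the KVM lookup lives inside the internal "target proxy", the calling proxies don't need to duplicate it at all.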

Thanks @dchiesa1