Limit the number of concurrent HTTP connections (requests) made from JavaScript

Hello,

The API proxy has to split an incoming JSON payload and make a number of HTTP requests to the back-end (a sort of batch API). Additionally, there is a need to limit the number of simultaneous HTTP connections to that back-end.

This has been implemented in a JavaScript policy in the following way.

First, iterate up to the defined connection-limit value, calling httpClient.send with an onComplete callback. In onComplete, recursively call the same httpClient.send with the same callback.

function onComplete(res, err) {
	//... process the response
	//... prepare the next request
	sendRequest(req);
}

function sendRequest(req) {
	// pass the callback by reference; do not invoke it here
	httpClient.send(req, onComplete);
}

for (var i = 0; i < poolSize; i++) {
	//... prepare the initial request
	sendRequest(req);
}
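Made self-contained, the pattern above amounts to a small work queue that keeps at most poolSize requests in flight. The mockSend function below is an assumption standing in for httpClient.send so the sketch can run outside Apigee; in the real policy, the httpClient invokes the callback itself when the back-end responds:

```javascript
// Work items to process (stand-ins for the per-element back-end requests).
var queue = ['r1', 'r2', 'r3', 'r4', 'r5', 'r6', 'r7'];
var poolSize = 3;      // concurrent-connection limit
var inFlight = 0;
var maxObserved = 0;   // peak concurrency seen, for demonstration
var completed = [];

// Mock standing in for httpClient.send(req, callback). It queues the
// callback; drainOne() simulates one response arriving. (Rhino has no
// setTimeout, so the real httpClient fires the callback on response.)
var pending = [];
function mockSend(req, callback) {
  pending.push(function () { callback('response for ' + req, null); });
}
function drainOne() {
  if (pending.length) pending.shift()();
}

function onComplete(res, err) {
  inFlight--;
  completed.push(res);
  var next = queue.shift();            // pull the next work item, if any
  if (next !== undefined) sendRequest(next);
}

function sendRequest(req) {
  inFlight++;
  if (inFlight > maxObserved) maxObserved = inFlight;
  mockSend(req, onComplete);           // pass the callback by reference
}

// Prime the pool: start at most poolSize requests.
for (var i = 0; i < poolSize && queue.length > 0; i++) {
  sendRequest(queue.shift());
}

// Drive the mock until all work is done.
while (pending.length) drainOne();
```

Each completion pulls exactly one new item from the queue, so concurrency never exceeds poolSize regardless of how many items the batch contains.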

A few questions in regard to that:

1) Are there better (more optimal, more efficient) ways to achieve the same? I failed to get promise/await or generator/yield code even to deploy.

2) For higher numbers, the JavaScript policy returns the error "too many callback calls". Is it possible to overcome this anyhow?

Any suggestions/comments are appreciated.

Thank you

1 ACCEPTED SOLUTION

How many calls are you planning to make? 2 or 3 would work fine. 10-60, probably not a good idea. At what point did you find "too many callback calls"?

Apigee's JS callout is ideally suited for performing small scriptable logic, modifying request and response payloads (filtering fields, etc.), or dynamically computing headers. The product includes an httpClient for making basic outbound requests. But it is not intended to be a general-purpose JS hosting platform. It is based on Rhino, a JS interpreter built on the JVM. It is not V8, and it does not include all of the features that have been added to JS since ES6: it lacks Promises, and it does not include setTimeout or setInterval.

If I were solving this for the general case, I would use something like App Engine or Cloud Run to host my JavaScript logic. This gives you the ability to use generators/yield, or to use promises, and so on.


3 REPLIES


Thank you for the response

From testing, it looks like 10 is the limit. More than 10 callbacks cause the issue.

This is used for a batching service which splits a JSON array and processes each element separately. The incoming payload is something like

[{"orderId":"12345"}, {"orderId": "23456"}]

Assuming my connection limit on the back-end is 8 concurrent requests, I can process up to 80 orders in 1 request.
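The split-and-validate step described here can be a few lines in the same JavaScript policy. The cap value, variable names, and inline payload below are illustrative assumptions; in the actual policy the payload would come from the request message (e.g. context.getVariable('request.content') in the Apigee JS object model):

```javascript
// Illustrative sketch: parse the inbound batch, reject oversized
// payloads, and split the array into one back-end request body
// per element.
var MAX_BATCH = 80;  // e.g. 8 concurrent connections x 10 callback depth

// In the real policy: JSON.parse(context.getVariable('request.content'))
var payload = JSON.parse('[{"orderId":"12345"},{"orderId":"23456"}]');

if (!Array.isArray(payload)) {
  throw new Error('expected a JSON array');
}
if (payload.length > MAX_BATCH) {
  throw new Error('batch too large: ' + payload.length + ' > ' + MAX_BATCH);
}

// One request body per element for the back-end calls.
var requestBodies = payload.map(function (item) {
  return JSON.stringify(item);
});
```

Rejecting oversized batches up front keeps the callback depth within the limit observed above before any back-end call is made.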

I think I can live with that rather than introduce extra layers/components (Cloud Run). Validation can easily be set up on the inbound to reject requests bigger than 80 objects. It already gives me a significant performance increase compared with one-to-one processing. And it looks to me like quite a "tasty" use case for Apigee: it can be developed quickly, leveraging out-of-the-box policies, with only the actual requests handled in JS.

Yes. Something like what you have proposed will work for the small case.

I fiddled around and came up with this, which seems to work.
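The code referenced here isn't reproduced in the thread. As a sketch of one alternative approach (not the original code) that sidesteps callback chaining entirely: send requests in waves of poolSize and use the exchange object that httpClient.send returns (waitForComplete/isSuccess/getResponse in the Apigee JS object model). The mock httpClient below is an assumption so the sketch runs outside Apigee:

```javascript
// Sketch only. In a real Apigee JS callout, httpClient.send(req) returns
// an exchange with waitForComplete()/isSuccess()/getResponse(); this mock
// stands in for it so the sketch is self-contained.
var httpClient = {
  send: function (req) {
    return {
      waitForComplete: function () {},
      isSuccess: function () { return true; },
      getResponse: function () { return { content: 'ok:' + req }; }
    };
  }
};

var requests = ['a', 'b', 'c', 'd', 'e'];  // per-element back-end requests
var poolSize = 2;
var results = [];

// Send in waves of at most poolSize, then wait for each wave to finish.
for (var start = 0; start < requests.length; start += poolSize) {
  var wave = [];
  for (var j = start; j < start + poolSize && j < requests.length; j++) {
    wave.push(httpClient.send(requests[j]));  // fire the request
  }
  for (var k = 0; k < wave.length; k++) {
    wave[k].waitForComplete();                // block until done
    if (wave[k].isSuccess()) {
      results.push(wave[k].getResponse().content);
    }
  }
}
```

The trade-off: each wave waits for its slowest request before the next wave starts, so this is somewhat less efficient than the callback pool, but it makes no chained callbacks at all and so cannot hit the callback-depth limit.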