File Transfer via HTTP - Apigee On-premise - Guidelines for file size, whether a good idea or not?

What are the best practices for file transfer via HTTP?
Solved
1 ACCEPTED SOLUTION

Generally a bad idea unless the file size is small.

A better idea, if you have larger files (images, archives), is to use Apigee as the control plane to generate a signed URL: a single-use, time-limited URL. Apigee would then send THAT to the client, and the client could contact the storage system directly, like Google Cloud Storage, AWS S3, or Azure Storage.

From my friend Steve:

the client sends the request to retrieve a large stream to Apigee. Apigee authenticates it, maybe applies rate limits, and so on, and then generates a signed URL and redirects to the storage server. The client follows the redirect to a server that serves up the stream directly. This allows the backend stream/media platform to scale horizontally, and it enables features like resuming the stream if the connection gets dropped mid-transfer.

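To make the "generate a signed URL" step concrete, here is a minimal sketch using the google-cloud-storage Java client. The bucket and object names are hypothetical, and the proxy would hand the resulting URL back to the client in a 302 Location header:

```java
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import java.net.URL;
import java.util.concurrent.TimeUnit;

public class SignedUrlSketch {
  public static void main(String[] args) {
    // Uses application-default credentials; signing requires a credential
    // that can sign, e.g. a service account key.
    Storage storage = StorageOptions.getDefaultInstance().getService();

    // Hypothetical bucket and object names.
    BlobInfo blob = BlobInfo.newBuilder("my-media-bucket", "archives/big-file.zip").build();

    // A V4 signed URL, valid for 10 minutes. Anyone holding this URL can
    // GET the object directly from GCS until it expires.
    URL signedUrl = storage.signUrl(blob, 10, TimeUnit.MINUTES,
        Storage.SignUrlOption.withV4Signature());

    System.out.println(signedUrl);
  }
}
```

The expiry is what makes this safe to hand out: once the 10 minutes elapse, GCS refuses the request, so a leaked URL has a bounded blast radius.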

The key question is "what is small, and what is large?" If you look at the service limits for Apigee, the largest file you should expect to transmit through the proxy is 10 MB (check the limits doc). But anything file-like that is over 1 MB is probably too much data.

Google Cloud Storage supports the pattern I described out of the box, with "Signed URLs". But you can implement this on your own backend streaming or storage-serving system without too much trouble. The pattern is the same.
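If you do roll your own, the usual shape is an HMAC over the path plus an expiry timestamp: Apigee signs, the backend recomputes the HMAC and rejects anything expired or tampered with. Everything below (parameter names, the shared secret, the helper class) is an assumed sketch, not an Apigee or GCS API:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class HomegrownSigner {
  // Hypothetical shared secret between Apigee (the signer) and the storage
  // backend (the verifier). In practice, pull this from an encrypted KVM or
  // a secret manager, not a literal.
  private static final byte[] SECRET = "change-me".getBytes(StandardCharsets.UTF_8);

  // Signer side: HMAC over the path and expiry, appended as query params.
  static String sign(String path, long expiresEpochSeconds) throws Exception {
    return path + "?expires=" + expiresEpochSeconds
        + "&sig=" + hmac(path + "\n" + expiresEpochSeconds);
  }

  // Verifier side (backend): recompute the HMAC and check the expiry.
  static boolean verify(String path, long expiresEpochSeconds, String sig) throws Exception {
    boolean fresh = System.currentTimeMillis() / 1000 < expiresEpochSeconds;
    byte[] expected = hmac(path + "\n" + expiresEpochSeconds).getBytes(StandardCharsets.UTF_8);
    // Constant-time comparison to avoid leaking signature bytes via timing.
    return fresh && MessageDigest.isEqual(expected, sig.getBytes(StandardCharsets.UTF_8));
  }

  private static String hmac(String payload) throws Exception {
    Mac mac = Mac.getInstance("HmacSHA256");
    mac.init(new SecretKeySpec(SECRET, "HmacSHA256"));
    return Base64.getUrlEncoder().withoutPadding()
        .encodeToString(mac.doFinal(payload.getBytes(StandardCharsets.UTF_8)));
  }

  public static void main(String[] args) throws Exception {
    long expiry = System.currentTimeMillis() / 1000 + 600; // valid for 10 minutes
    System.out.println(sign("/media/big-file.zip", expiry));
  }
}
```

Apigee signs, the backend verifies, and neither needs to talk to the other at request time; that is the whole point of the pattern.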

BTW here is an Apigee proxy that shows how to implement this pattern with GCS as the backend:

https://github.com/DinoChiesa/ApigeeEdge-Java-GoogleUrlSigner


2 REPLIES


Not applicable

File transfer over HTTP doesn't sound like a good idea. FTP is fine for that. Apigee should not be used for file transfer.