How can one determine the required memory for a Message Processor?

Hi @dchiesa1 ,

Apigee is receiving request payloads larger than the default limit the Message Processor (MP) is configured to process (10 MB), and as a result I am receiving:

"413 Request Entity Too Large: {"fault":{"faultstring":"Body buffer overflow","detail":{"errorcode":"protocol.http.TooBigBody"}}}"

Apigee streaming and Signed URLs are not options for me. I want to increase

HTTPRequest.body.buffer.limit=10m

to a greater value. I want to be able to calculate how much memory my MP server needs in order to support a payload size of, say, 30 MB. How can I calculate this?
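
For reference, the change I am planning on each Message Processor looks something like the following (this is based on the usual OPDK property-override convention; the exact token name and file may differ in my version, so treat it as a sketch):

In /opt/apigee/customer/application/message-processor.properties:

conf_http_HTTPRequest.body.buffer.limit=30m

followed by a restart of the edge-message-processor component.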

Also, can you please outline the pros and cons of splitting the large payload into smaller chunks and sending them to Apigee in multiple requests, versus increasing Apigee's default limits?


Hi @dchiesa1 ,

Please let me know if you need more details.

Sorry, I don't know the answers here. I am not an expert on OPDK or managing Apigee infrastructure. Actually I'm a big fan of using the managed version of Apigee, which is called "Apigee X".  One big reason: I think Apigee X frees you from worrying about the size of MP machines, memory consumption, CPU utilization, I/O saturation, and all the other machine-oriented issues associated with managing your own systems. 

There may be other people here in the community who can engage on the question of how to size an RMP (Router + Message Processor) machine.

BUT, 
there may be no good concrete answer.

I am aware that the OPDK documentation provides recommendations on where to start for machine sizing.  And after that, determining the optimal size of the machine needed (memory, cpu, i/o)  is basically an empirical exercise. Run it, measure it, see what results you get.  It's possible you could spend LOTS of time optimizing this, which could be expensive in terms of labor.  That may make sense if you have many many machines. If you save 4% across 10,000 machines, the savings can really add up.  But if you have 20 machines in your cluster, ... in some cases optimizing the hardware is not worth your time. Over-provision and then monitor it.  Done.  
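
If you want a crude starting point before you measure, a naive back-of-envelope estimate (my assumptions, not an official formula) would be something like:

extra heap ≈ peak concurrent buffered requests × max payload size × 2 (if both request and response get buffered)
e.g. 100 concurrent requests × 30 MB × 2 ≈ 6 GB of heap on top of your current baseline

Then bump the MP's JVM heap accordingly (on OPDK that is typically the bin_setenv_max_mem token in message-processor.properties, if I recall correctly) and watch GC behavior under a realistic load test. The 100-concurrent figure and the factor of 2 are placeholders; your actual traffic and proxy behavior determine the real numbers.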

Good luck with that. 

As for splitting large requests into smaller ones: why not use a storage endpoint, like Google Cloud Storage, with a signed URL pattern (e.g., a pre-authenticated, single-use URL) to allow uploads of very large size? It's clearer and cleaner, and a proven pattern. If I were an architect on an API-based system, I would not want to spend my time thinking about how to accept requests larger than 10 MB in chunks. So many different failure scenarios, so many different issues with clients. That's just my opinion.
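
A minimal sketch of that pattern, using the google-cloud-storage Python client (the bucket and object names below are placeholders I made up): your API hands the client a short-lived PUT URL, and the client uploads the large payload directly to Cloud Storage, so the big bytes never transit the MP at all.

from datetime import timedelta
from google.cloud import storage

def make_upload_url(bucket_name: str, object_name: str) -> str:
    # Issue a V4 signed URL that permits a single PUT upload for 15 minutes.
    client = storage.Client()
    blob = client.bucket(bucket_name).blob(object_name)
    return blob.generate_signed_url(
        version="v4",
        expiration=timedelta(minutes=15),
        method="PUT",
        content_type="application/octet-stream",
    )

# Example: the API (or a backend it calls) returns this URL in a small JSON response;
# the client then PUTs the 30 MB payload straight to the bucket.
print(make_upload_url("example-upload-bucket", "uploads/large-payload.bin"))

The nice part of that design is that the buffer limit on the MP stops being your problem entirely.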

Good luck on this too, Rohit!