Apigee Edge - Auto Scaling in OPDK Environment?

I understand that Apigee Edge in the public cloud handles auto scaling automatically. On OPDK, it can vary based on the implementation and infrastructure.

With Ansible automation and monitoring tools, I believe we can achieve auto scaling in the private cloud as well by leveraging the Apigee Edge Management APIs.
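For example, I am thinking of something like polling the Management API that lists the servers registered in a pod (the management server address, credentials, pod, and region below are just placeholders):

    curl -u sysAdminEmail:password \
      "http://MS_IP:8080/v1/servers?pod=gateway&region=dc-1"

Ansible playbooks could then act on that inventory plus load metrics to add or remove Routers and Message Processors.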

Has anyone tried the same? Any learnings to share with the community?


2 REPLIES

Former Community Member
Not applicable

Apigee does not provide tools for autoscaling in OPDK. However, if the infrastructure permits it, certain components of OPDK can be placed under autoscaling policies. The most likely candidates for autoscaling are the Router and the Message Processor. Here is an implementation that runs on GCP: https://github.com/apigee/edge-gcp

Not applicable

Edge provides all the necessary building blocks to implement auto-scaling by leveraging the capabilities offered by IaaS providers such as GCP, AWS and Azure.

The software makes it easy to add and remove components and to manage their logical association with Pods and Environments.

We recommend focusing the autoscaling initiative on Routers and Message Processors, while performing more traditional capacity planning for the rest of the components.

Routers and Message Processors are two of the three key runtime components. Message Processors are where APIs are actually executed, and they are the first components that may need to scale out/in as API volume increases or decreases.

The instructions for adding components are described here:

http://docs.apigee.com/private-cloud/latest/scaling-edge-private-cloud 

Autoscaling Routers and Message Processors requires:

  • Planning to determine the thresholds that trigger infrastructure autoscaling.
  • The ability to trigger events that provision Edge software (Routers and Message Processors) during scale-out. Most IaaS providers offer hooks and mechanisms for you to do that.
  • Recognizing that autoscaling, even as a solution, is not a silver bullet. A proactive understanding of business events, sales, or any other event that may trigger sharp spikes in traffic is important.
    • Autoscaling will react to the spike, but depending on the provider and provisioning time, you may not scale out fast enough; some pre-warming of infrastructure or adjustments to the minimum number of VMs/nodes of a component type may be required.
  • A local Apigee RPM repository can expedite software provisioning during autoscaling.
  • Having images prepared with bootstrap, the apigee-setup utility, any monitoring software needed, etc., can expedite software provisioning during autoscaling.
  • Provisioning during autoscaling should be limited to running setup.sh -p [ r | mp | rmp ] -f <response file> and adding Message Processors to environments (see the scale-out sketch after this list).
  • You must remember not just to provision Message Processors but also to add them to the applicable environments.
  • As you autoscale Routers, they must be added to the load balancer.
  • As you autoscale Message Processors, a communication path from the newly added VMs to the backend systems must be in place.
  • Do not forget to add the new Routers and Message Processors to your monitoring system.
  • When scaling in, remember to reverse the process described above: deregister Message Processors from Environments, and deregister both Routers and Message Processors from the Gateway Pod and the system (/v1/servers). See the scale-in sketch below.
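To make the scale-out step concrete, here is a rough sketch of what an autoscaling hook might run on a new Message Processor VM. The management server address, credentials, org, and environment names are placeholders, and the response file is assumed to be pre-built for your topology:

    # Install the Message Processor profile using a pre-built response file
    /opt/apigee/apigee-setup/bin/setup.sh -p mp -f /tmp/response_file

    # Look up the UUID the new Message Processor registered with
    MP_UUID=$(curl -s http://localhost:8082/v1/servers/self/uuid)

    # Associate the new Message Processor with each environment it should serve
    curl -u sysAdminEmail:password -X POST \
      "http://MS_IP:8080/v1/o/myorg/e/prod/servers" \
      -d "action=add&uuid=$MP_UUID"

New Routers would additionally need to be registered with your load balancer, which is specific to your IaaS provider.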
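And a corresponding scale-in sketch, again with placeholder host names, credentials, org/environment, region, and pod values; the exact calls may vary by Edge version, so check the remove-server procedure in the Private Cloud operations guide:

    # Remove the Message Processor from each environment it serves
    curl -u sysAdminEmail:password -X POST \
      "http://MS_IP:8080/v1/o/myorg/e/prod/servers" \
      -d "action=remove&uuid=$MP_UUID"

    # Deregister the server from the gateway pod
    curl -u sysAdminEmail:password -X POST "http://MS_IP:8080/v1/servers" \
      -d "type=message-processor&region=dc-1&pod=gateway&uuid=$MP_UUID&action=remove"

    # Finally, delete the server entry from the system
    curl -u sysAdminEmail:password -X DELETE "http://MS_IP:8080/v1/servers/$MP_UUID"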