Maximum TPS Apigee X supports

Hi,

What is the maximum TPS supported by Apigee X:

per Proxy,

per Endpoint,

per Environment,

per Environment Group,

per instance,

per Org

 

Thanks & Regards

Amit


Hi!

Apigee X has been designed with scalability and resiliency in mind. Our engineering team has leveraged the best of what Google can offer to handle the most demanding traffic patterns, so that customers from the retail, media, and financial services industries can entrust Apigee with their critical, high-value traffic. Apigee has been tested by some of our customers to scale up to millions of transactions per second.

Every customer is different, and their target backends may have different scalability requirements beyond high transaction volume. Other relevant performance metrics include the number of concurrent connections, latency (p99, p95, etc.), and so on. I encourage you to work with our support team if you expect to have very high scalability requirements.

Thanks a lot for the reply!

It seems my question left some gaps. Let me rephrase it.

As per my understanding, Apigee X creates different pods behind the scenes to handle the load.

I want to understand how much load a single instance can support.

How is the load (i.e., the pods) distributed? Is it per org, per environment group, per environment, per proxy, or per API endpoint?

It would be great to get some baseline numbers for what a single instance can support and how different APIs will be handled. I am trying to understand how I should segregate the APIs with the highest load.

 

Thanks & Regards

Amit

The internal implementation of the Apigee X runtime (pods, namespaces, etc.) is not a documented part of the Apigee X product. We don't say "this is how it is implemented internally," and of course the ops/eng team may choose to change the implementation at any time.

But we do say that environments scale elastically. A proxy gets deployed to an environment, and if you ramp up load on that proxy, more cloud resources will be dedicated to handling that load.

It would be great to get some baseline numbers for what a single instance can support and how different APIs will be handled.

As stated previously, you can easily handle tens or hundreds of thousands of transactions per second. If you want millions, it may require some special configuration, and we'd like to talk to you about that. If you want to establish a baseline, you are welcome to conduct your own performance evaluation by sending many concurrent requests to your Apigee instance and measuring what you observe.
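
To give a flavor of what such an evaluation might look like, here is a minimal sketch (not an official Apigee tool) that fires concurrent requests and reports throughput plus p95/p99 latency. The proxy URL, concurrency, and request count are hypothetical placeholders you would replace with your own values:

```python
# Minimal load-test sketch: send concurrent requests to an Apigee proxy
# and report throughput and latency percentiles. The URL, concurrency,
# and request count below are hypothetical placeholders -- tune them
# (and use a proper load-testing tool) for any serious evaluation.
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

import requests  # pip install requests

PROXY_URL = "https://example-env-group.example.com/my-proxy/resource"  # hypothetical
CONCURRENCY = 50
TOTAL_REQUESTS = 5000

def call_proxy(_):
    """Issue one request and return its latency in milliseconds."""
    start = time.perf_counter()
    resp = requests.get(PROXY_URL, timeout=10)
    resp.raise_for_status()
    return (time.perf_counter() - start) * 1000.0

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = list(pool.map(call_proxy, range(TOTAL_REQUESTS)))
elapsed = time.perf_counter() - start

latencies.sort()
p95 = latencies[int(0.95 * len(latencies)) - 1]
p99 = latencies[int(0.99 * len(latencies)) - 1]
print(f"throughput: {TOTAL_REQUESTS / elapsed:.1f} req/s")
print(f"latency p50/p95/p99 (ms): "
      f"{statistics.median(latencies):.1f} / {p95:.1f} / {p99:.1f}")
```

Run it from a machine close to your Apigee instance, and ramp the concurrency and request counts gradually so you can see where throughput and latency start to diverge.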

I am trying to understand how I should segregate the APIs with the highest load.

Probably you need not worry about this. Are you actively experiencing a performance obstacle? If not, I'd say don't worry about it. You do not need to segregate your APIs. Just configure them and put load on them; Apigee takes care of the rest. It's a managed service: you don't need to manage the resources, and you don't need to concern yourself with how Apigee manages them. The resources are simply available to you. There's an SLA, and it's not bounded by transaction rate.

There ARE things you should concern yourself with when considering performance: the locality of the systems participating in the distributed transaction, the size of the payload, the latency of the upstream (backend) system, and network capacity. Those are all under your control. In my decade of experience, I have observed that those factors are much more important in determining the practical performance characteristics of your managed Apigee proxies.
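
As a small illustration of how much payload size (and, by extension, upstream latency) can dominate what you measure end to end, here is another hedged sketch, again with a hypothetical proxy URL, that compares median round-trip latency for different request sizes:

```python
# Quick sketch: compare observed latency for small vs. large request payloads
# against the same proxy. PROXY_URL is a hypothetical placeholder; in practice
# payload size, upstream latency, and locality tend to dominate these numbers.
import time
import statistics

import requests  # pip install requests

PROXY_URL = "https://example-env-group.example.com/my-proxy/resource"  # hypothetical

def median_latency_ms(payload_bytes, samples=50):
    """Return the median round-trip latency (ms) for a payload of the given size."""
    body = b"x" * payload_bytes
    latencies = []
    for _ in range(samples):
        start = time.perf_counter()
        requests.post(PROXY_URL, data=body, timeout=10)
        latencies.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(latencies)

for size in (1_000, 100_000, 1_000_000):  # 1 KB, 100 KB, 1 MB
    print(f"{size:>9} bytes -> median latency {median_latency_ms(size):.1f} ms")
```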