Sizing of on-premises infrastructure


Hi All,

We are in the process of estimating the infrastructure required to deploy Apigee on premises. The required topology will have a primary datacentre, where the majority of API requests will be serviced, and 8 regional datacentres that will also need to support API requests.

The deployment models in the Apigee documentation have too large a footprint to support this kind of model, and I was hoping for guidance on alternative multi-region deployment models, or alternatively on how we could go about reducing the required infrastructure footprint.

Regards

Nisch


Yes, I think you're imagining a multi-datacentre installation of Apigee Edge, with redundancy and failover among all those datacentres. Often that is too heavyweight: too much infrastructure to support a relatively modest load of requests.

Some possibilities for you:

  1. Use a central Edge + distributed micro-gateway approach.
    You can deploy one 5-node system in the primary datacentre to handle the majority of the API calls that arrive there.
    Then, you can deploy one or more lightweight Edge micro-gateways in each of the subsidiary datacentres. These micro-gateways run co-located with your service, so there is no need for any incremental VMs in the distributed datacentres. If you had budgeted for 3 nodes in those datacentres to support your services, there's no need to expand that number.
  2. Use a central Edge and proxy requests across the WAN.
    In this scenario, you still have the 5-node install of Apigee Edge in the main datacentre, and nothing at any of the subsidiary datacentres. API requests destined for those regions would need to traverse the WAN into the main datacentre before being proxied back out to the remote backends. This data path may or may not be acceptable to you.
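To make option 1 concrete, bringing up a co-located micro-gateway with the `edgemicro` CLI looks roughly like the sketch below. The org name `myorg`, environment `prod`, and admin address are placeholders; for an on-premises (Private Cloud) Edge, the `edgemicro private configure` variant takes additional flags pointing at your own router and management server.

```shell
# Edge Microgateway is a Node.js application; install it on the same
# node that hosts the service implementation.
npm install -g edgemicro

# One-time initialization creates the default config directory.
edgemicro init

# Point the micro-gateway at the central Edge installation. For a
# private-cloud (on-premises) Edge, use "edgemicro private configure"
# and supply the runtime (-r) and management-server (-m) URLs instead.
edgemicro configure -o myorg -e prod -u admin@example.com

# Start the gateway using the key/secret printed by the configure step.
edgemicro start -o myorg -e prod -k <key> -s <secret>
```

The key point for sizing: everything above runs as a single Node.js process on the existing service node, which is why no incremental VMs are needed in the regional datacentres.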

@Dino thank you for your feedback.

Since posting the question, we have also debated the merits of having a fully redundant failover capability in each region, and deemed it probably too much of an investment considering the anticipated volumes.

We are leaning towards having micro-gateway instances in each of the non-primary regions.

Some follow up questions I'm hoping for assistance on:

1. Is there a recommended baseline size for the micro-gateway (in terms of number of cores, RAM, and storage for each server)? I ask because I did not see any in the micro-gateway documentation.

2. Which of the Apigee Edge components (Management UI, Cassandra, ZooKeeper, Qpid Server, Router, Message Processor, etc.) are used by the micro-gateway? My understanding is that analytics (Qpid and Postgres) are not part of it, and that it works with the Edge instance to upload analytics, but are there any other differences in components?

3. You mention 3 instances. Due to the number of regions, we were planning to go with 2 in each region for failover; do you foresee issues with this?

Apologies for the number of questions.

Regards,

Nisch

  1. The Edge Micro-gateway can be deployed on any server node. I don't think we (Google/Apigee) document a minimum server size to support the micro-gateway. We designed it to run on the same node as the actual service implementation. The thinking was: if you have a 2-core, single-CPU server hosting your service, you should be able to run the micro-gateway on the same machine without any additional memory or CPU.

    Now, for systems that are already performance constrained, that may not be true. You may need to provision a separate node. But that is a rare circumstance in my experience. Usually there is a little extra "room" on a server to run an agent like the micro-gateway.
  2. The micro-gateway depends on a (usually remote) Apigee Edge. If you are managing the Edge installation yourself (not using Edge SaaS), that means you need everything in the Edge installation. The micro-gateway communicates with an API proxy in the Edge cloud, and the data sent in those messages eventually gets propagated to the Analytics subsystem (Qpid, Postgres). So you need "all of Edge" somewhere.
  3. Two of what in each region? Microgateways? Yes, that sounds appropriate.
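As an illustration of that dependency, the per-org/env config file the micro-gateway reads (typically `~/.edgemicro/<org>-<env>-config.yaml`) shows that the only server-side pieces it talks to are HTTP endpoints exposed by the central Edge installation. The hostnames and values below are placeholders, not a recommended configuration:

```yaml
edge_config:
  # Bootstrap and public-key endpoints served by the central Edge
  # installation; replace these hostnames with your own routers.
  bootstrap: https://edge.example.com/edgemicro/bootstrap/organization/myorg/environment/prod
  jwt_public_key: https://edge.example.com/edgemicro-auth/publicKey
edgemicro:
  port: 8000
  max_connections: 1000
  logging:
    level: error
    dir: /var/tmp
  plugins:
    sequence:
      - oauth
```

There is no local Cassandra, ZooKeeper, Router, or Message Processor on the micro-gateway node; all of those live only in the central Edge install.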

@dino. Thanks. This definitely helps clear things up.

One other question, relating to disaster recovery. The plan is to go with the 12-node deployment of Edge across two datacentres to provide DR for the primary instance.

From my very limited understanding of how the Edge and micro-gateway instances work together, I would think that this setup would also sufficiently provide DR for the regional micro-gateway deployments, since all of their configuration is obtained from Edge?

Regards,

Nisch