Single vs. Multiple Installations of Edge Within the Enterprise


All,

We're about to start our API journey, so we're relative novices in that sense. I've gone over the threads on single vs. many API teams and liked the idea of having "Many API Teams" create their own proxies, for the reasons cited in those threads.

Extending that idea to single vs. multiple Edge installations within the enterprise:

We're mulling over the option of running multiple distinct instances (separate installations) of Apigee Edge within the enterprise. Each installation would be managed, used and operated by a specific department in a dedicated manner, possibly running in different data centers (DCs). The reason is that each team wants full ownership of its platform (its own toy to play with) to run its API workload.

So we'd like to know: is this a good idea?

The things that come to mind immediately are fragmented visibility of Enterprise APIs, repeated operational tasks, inconsistencies in standards. If someone can provide detailed pros and cons of single vs. multiple installations, that would be really helpful.

Thanks- AD

Solved
1 ACCEPTED SOLUTION

fragmented visibility of Enterprise APIs, repeated operational tasks, inconsistencies in standards.

You've got it. And couple that with multiple distinct namespaces for keys, and distinct schemas.

I have seen multiple distinct installations of Apigee Edge to facilitate load testing, including testing of upgrade procedures of Apigee Edge itself.

But for production use, probably the best approach is to have a single cluster with multiple distinct organizations and environments. Ideally you have 2 organizations: 1 for production use, and 1 for non-prod use to facilitate the SDLC of API proxies. In the non-prod organization you can include multiple environments, one for each stage in the corp SDLC, as well as a sandbox env for each team that wants an independent sandbox.
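
To make that layout concrete, here is a minimal sketch driven through the Edge management API. Everything in it is hypothetical: the management-server URL, the credentials, and the org/environment names. It also assumes Edge for Private Cloud, where environments can be created via the API; virtual hosts and message-processor associations are omitted.

    # Illustrative sketch only: URL, credentials and org/env names are hypothetical.
    # Assumes Edge for Private Cloud, where environments can be created via the
    # management API; virtual hosts and message-processor setup are omitted.
    import requests

    MGMT = "https://mgmt.example.com/v1"            # hypothetical management server
    AUTH = ("sysadmin@example.com", "secret")       # hypothetical sysadmin credentials

    LAYOUT = {
        "acme-prod":    ["prod"],                                   # humans locked out
        "acme-nonprod": ["dev", "test", "uat",                      # one env per SDLC stage
                         "sandbox-payments", "sandbox-retail"],     # per-team sandboxes
    }

    for org, envs in LAYOUT.items():
        for env in envs:
            r = requests.post(f"{MGMT}/organizations/{org}/environments",
                              json={"name": env}, auth=AUTH)
            print(org, env, r.status_code)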

People (humans) have access to the non-prod org, and can create proxies, deploy and undeploy as they see fit. Ideally only the CI/CD pipeline (non-human) has the authorization to deploy things into the production organization.
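
As a rough illustration of that last point, the CI/CD job could import the proxy bundle and deploy it through the management API, along these lines. The org, environment, proxy name and credentials are hypothetical, and error handling is minimal:

    # Hedged sketch of a CI/CD deploy step; names, URL and credentials are hypothetical.
    import requests

    MGMT = "https://mgmt.example.com/v1"
    AUTH = ("ci-bot@example.com", "secret")     # only the pipeline holds prod credentials
    ORG, ENV, PROXY = "acme-prod", "prod", "orders-v1"

    # 1. Import the proxy bundle built earlier in the pipeline.
    with open("orders-v1.zip", "rb") as bundle:
        imp = requests.post(f"{MGMT}/organizations/{ORG}/apis",
                            params={"action": "import", "name": PROXY},
                            files={"file": bundle}, auth=AUTH)
    imp.raise_for_status()
    revision = imp.json()["revision"]

    # 2. Deploy that revision to the production environment.
    dep = requests.post(f"{MGMT}/organizations/{ORG}/environments/{ENV}"
                        f"/apis/{PROXY}/revisions/{revision}/deployments",
                        params={"override": "true"}, auth=AUTH)
    dep.raise_for_status()
    print("deployed", PROXY, "revision", revision)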

It takes more effort to set up a common Apigee Edge cluster to support the enterprise across multiple data centers, but if Edge will be used throughout, that initial effort will pay off.

@Christin @Diego Zuluaga @dyonan or @Dom Couldwell may wish to weigh in.


3 REPLIES


Not applicable

We went with the "logical separation" approach for our private-cloud Apigee, for many of the same reasons cited by Dino. The coordinate space we chose was one organization per DevOps team (enabling self-administration) and one environment/virtual host per continuous-deployment pipeline stage. A key problem was presenting one API URL per environment across all orgs. We solved that by requiring each API to use a basepath beginning with the org name, and by having an iRule in the F5 load balancer add the correct Host header (including the organization name) so the request would get routed to the correct virtual host.
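
For illustration only, the routing decision that iRule makes could be sketched in Python roughly as follows; the real implementation is TCL on the F5, and the org names and hostnames below are made up:

    # Rough sketch of the routing logic; the actual implementation is an F5 iRule (TCL).
    # Org names and virtual-host hostnames are hypothetical.
    ORG_HOSTS = {
        "payments": "payments-test.apigee.example.com",   # org -> env virtual host
        "retail":   "retail-test.apigee.example.com",
    }

    def host_for_request(path):
        """Derive the Host header from a basepath like /payments/orders/v1."""
        org = path.lstrip("/").split("/")[0]
        return ORG_HOSTS.get(org)      # None means no matching org: reject the request

    assert host_for_request("/payments/orders/v1") == "payments-test.apigee.example.com"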

It wasn't all perfect. The URLs shown in the UI are incorrect, because Apigee constructs them with the org name. We also weren't happy with the Developer Channel, which has to have one instance per org.

dcouldwell
Participant III

+1 to having one production org across the enterprise.

You will likely have some assets that are common and that you'd like to keep consistent, e.g. an Identity API. Reuse is also promoted when all assets are within one org, as is innovation: if one department creates an innovative API-backed product, it should be as easy as possible to share that asset.

When it comes to 'I don't want anyone else breaking my stuff', that can be accomplished in non-prod using RBAC, and in both production and non-prod by promoting automation, thus minimising the need for manual access and reducing the chances of mistakes.
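
As a hedged illustration of the RBAC point: Edge lets you define custom user roles, scope their permissions, and attach users to them via the management API. A sketch might look like the one below; the role name, org, user, permission path and payload shapes are assumptions to check against the Edge docs for your version:

    # Hedged sketch: a per-team role in the non-prod org. Names are hypothetical and
    # the permission payload shape should be verified against your Edge version.
    import requests

    MGMT = "https://mgmt.example.com/v1"
    AUTH = ("orgadmin@example.com", "secret")
    ORG, ROLE = "acme-nonprod", "payments-team"

    # 1. Create the custom role.
    requests.post(f"{MGMT}/organizations/{ORG}/userroles",
                  json={"role": [{"name": ROLE}]}, auth=AUTH)

    # 2. Grant the role access to the team's own proxies only (hypothetical path).
    requests.post(f"{MGMT}/organizations/{ORG}/userroles/{ROLE}/permissions",
                  json={"path": "/applications/payments-*",
                        "permissions": ["get", "put", "delete"]}, auth=AUTH)

    # 3. Add a developer to the role.
    requests.post(f"{MGMT}/organizations/{ORG}/userroles/{ROLE}/users",
                  params={"id": "dev@example.com"}, auth=AUTH)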

Some enterprises even go so far as to completely remove UI and management API access for API developers, which further reduces this risk, although it also takes away some of Edge's useful features for API developers, e.g. Trace.

A final thought is to consider having three layers of orgs: dev, non-prod and prod.

Dev can be a playground for people to try things out and assess platform capability, as open as possible and potentially rebuilt every night to maintain a stable base.
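
As one possible way to do that nightly rebuild (a hedged sketch, not a prescription), a scheduled job could undeploy and delete every proxy in the dev org via the management API; the org name, URL and credentials here are hypothetical:

    # Hedged sketch of a nightly dev-org reset: undeploy and delete every proxy.
    # Org, URL and credentials are hypothetical; run only against a throwaway org.
    import requests

    MGMT = "https://mgmt.example.com/v1"
    AUTH = ("ops-bot@example.com", "secret")
    ORG = "acme-dev"

    for proxy in requests.get(f"{MGMT}/organizations/{ORG}/apis", auth=AUTH).json():
        # Undeploy any deployed revisions first.
        deps = requests.get(f"{MGMT}/organizations/{ORG}/apis/{proxy}/deployments",
                            auth=AUTH).json()
        for env in deps.get("environment", []):
            for rev in env.get("revision", []):
                requests.delete(f"{MGMT}/organizations/{ORG}/environments/{env['name']}"
                                f"/apis/{proxy}/revisions/{rev['name']}/deployments",
                                auth=AUTH)
        # Then delete the proxy itself (all revisions).
        requests.delete(f"{MGMT}/organizations/{ORG}/apis/{proxy}", auth=AUTH)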

Non-prod would have more gated access, but would still promote collaboration and automation, and would have RBAC in place if more controls were required.

Prod access would be heavily restricted, with the majority of activity orchestrated by the CI/CD process.

The key point I would make is that you don't want to stifle innovation, reuse and collaboration by enforcing unnecessary separation.