Deploying APIs to multiple environments dynamically

Hi,

I have a requirement to route requests to different target endpoints and load-balance across them in different environments.

Each environment has multiple sub-environments (e.g. UAT has UAT1, UAT2, etc.).

These APIs have to be deployed to the higher environments using Jenkins. The number of target servers in each environment is large (approx. 18), and it's not advisable to create all the target servers in every environment (QA, UAT).

For Jenkins to successfully deploy the APIs, all the target servers (approx. 54 in this case) need to be created in all the environments; otherwise the Jenkins deployment will fail.

Is there a cleaner approach to route the API to the specific environment and deploy it using Jenkins, without creating this large number of target servers in all the environments?

Can this scenario be handled at the deployment stage?

Thanks


"Is there a cleaner approach to route the API to the specific environment and deploy it using Jenkins, without creating this large number of target servers in all the environments?"

Mmm, could you clarify what exactly you mean by "cleaner"? Why is it "not advisable" to create the target servers in the UAT environments? What problem are you solving?

You are aware of the administrative API for Apigee Edge, I guess. If you build scripts that use the API to create target servers, the effort to create 18 target servers is probably about 5% higher than the effort required to create a single target server. In other words, it ought to be easy.
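To illustrate, here is a minimal Node.js sketch of that kind of script. The management-API path shown is the Edge admin API's target-server endpoint; everything else (org/env names, host names, the naming scheme, the port) is an illustrative assumption, not something prescribed by the thread.

```javascript
// Sketch: bulk-generate Apigee Edge TargetServer definitions, then the
// management-API calls a deploy script would issue. The endpoint is
//   POST /v1/organizations/{org}/environments/{env}/targetservers
// All concrete names below are placeholders.

function targetServerPayload(name, host, port) {
  // One Edge TargetServer definition
  return { name: name, host: host, port: port, isEnabled: true };
}

function payloadsForEnv(envName, hosts, port) {
  // e.g. envName = "uat1", hosts = ["host1.example.com", ...]
  return hosts.map(function (h, i) {
    return targetServerPayload(envName + "-ts-" + (i + 1), h, port);
  });
}

function curlCommands(org, env, payloads, token) {
  // Dry run: print the curl commands instead of calling the API directly
  return payloads.map(function (p) {
    return "curl -X POST -H 'Authorization: Bearer " + token + "'" +
      " -H 'Content-Type: application/json'" +
      " https://api.enterprise.apigee.com/v1/organizations/" + org +
      "/environments/" + env + "/targetservers" +
      " -d '" + JSON.stringify(p) + "'";
  });
}
```

Loop that over your 18 hosts per sub-environment and the one-vs-eighteen effort difference largely disappears.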

So I don't understand what you mean by "not advisable". It's probably not the effort involved. What problem specifically do you have with this approach?

There are ways to spread load across multiple backend systems without using the TargetServer artifact.

  • Use a virtual IP which is configured externally, e.g. in an external Netscaler or F5 device, or a software load balancer like nginx or haproxy that runs outside of Apigee Edge.
  • Use a roll-it-yourself URL chooser inside the Apigee Edge proxy. You could do this using the Key-Value Map to store the set of 18 URLs, and then some JavaScript to randomly select one of the URLs for each request. You could also add weights to the URLs, making it a weighted-random selection.
  • Maybe something else I'm not thinking of.
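The second option above might look like the following sketch. In a real proxy the URL list would come from a KeyValueMapOperations lookup; here it is inlined, and all hostnames and weights are illustrative.

```javascript
// Roll-it-yourself weighted-random URL chooser, per the second bullet above.
// entries: [{ url: "...", weight: n }, ...]; rand: number in [0, 1)

function pickWeighted(entries, rand) {
  var total = entries.reduce(function (s, e) { return s + e.weight; }, 0);
  var r = rand * total;
  for (var i = 0; i < entries.length; i++) {
    r -= entries[i].weight;
    if (r < 0) return entries[i].url;
  }
  return entries[entries.length - 1].url; // guard against float rounding
}

// Illustrative KVM contents: host1 gets 3x the traffic of host2
var targets = [
  { url: "https://host1.example.com", weight: 3 },
  { url: "https://host2.example.com", weight: 1 }
];

// Inside an Apigee JavaScript policy you would then do something like:
// context.setVariable("target.url",
//     pickWeighted(targets, Math.random()) +
//     context.getVariable("proxy.pathsuffix"));
```

Set all weights equal and you get plain random load balancing.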

But whether these ideas are good for you, I can't be sure, because I don't know what problem we're specifically trying to solve.

Completely agree that by creating scripts against the administrative API I can create target servers for each environment. But the route rule still has to be hard-coded within the API proxy to route to the target servers based on some condition (e.g. an environment header or a custom attribute).

With this approach you have to create all the target servers in every environment (i.e. 54 target servers in each of DEV, SIT, UAT), which is not required, since the SIT target servers will not be used in UAT and vice versa.

Another approach, which I have currently implemented, is to create target servers specific to an environment, so that only 18 are created in each environment. I was looking for a way to make this routing happen dynamically, instead of creating target servers and hard-coding the route rule to route to the correct target server.

I have also managed to dynamically point to the intended target server using a JavaScript callout, but JavaScript doesn't have OOB capability to load balance among the servers.

If load balancing can be achieved with the JavaScript approach, then the problem is solved.

The problem to solve is to remove the hard-coding of the route rule within the API and manage the routing dynamically.

Hope this clarifies the problem statement.

Thanks

Hi @Ramnath

Not sure I completely follow the use case. My understanding: depending on some condition, and for a given environment, say DEV, the proxy should send the request to one of the many DEV target servers? For example:

api-dev.example.com/employees/123?env=DEV1 should send the request to the DEV1 target server, and

api-dev.example.com/employees/123?env=DEV2 should send the request to the DEV2 target server, and so on...

Is the above correct? If yes, why not try one of the below?

Can you not configure the target server info in a KVM as entries, retrieve it with the KVM policy, and set the value as target.url using JavaScript? You can design the entry keys so that they are easily queryable with the KVM policy. In this case, the KVM keys would be DEV1 and DEV2, and their corresponding entries would be the target host details.
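As a rough sketch of that idea: in the proxy, the KVM entries would be fetched by a KeyValueMapOperations policy into flow variables; here they are a plain map, and the key names and hostnames are illustrative only.

```javascript
// Sketch: resolve the backend from an env key, as a KVM lookup would.

var kvmEntries = {  // illustrative KVM contents: env key -> backend URL
  DEV1: "https://dev1-backend.example.com",
  DEV2: "https://dev2-backend.example.com"
};

function resolveTarget(envKey) {
  var url = kvmEntries[envKey];
  if (!url) throw new Error("No target configured for " + envKey);
  return url;
}

// In the Apigee JavaScript policy, something like:
// var env = context.getVariable("request.queryparam.env");
// context.setVariable("target.url",
//     resolveTarget(env) + context.getVariable("proxy.pathsuffix"));
```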

or

Build another proxy for target-configuration discovery. This proxy would take some input (like env, etc.) and respond with the target server information. You can then have the main proxy call this proxy (via Service Callout / proxy chaining) and set the response as target.url using a JavaScript policy.

I am not sure what you meant by load balancing. It would be nice if you could give an example or provide a simple diagram of how the request flows and how the different target servers come into action.

Hi @Sai Saran Vaidyanathan

The above solutions would work fine when there is a single node per environment.

Consider the below configuration

DEV1 - host1, host2, ... hostn

DEV2 - host3, host4, ... hostn

QA1 - qahost1, qahost2

QA2 - qahost3, qahost4

Similar setup for UAT too. The approach of creating KVM entries as environment/hostname key-value pairs will work fine in this case if I implement a custom load-balancing strategy. The goal is to have a single, dynamic piece of code which serves all the available environments by routing to the proper target host and also load balancing between them (between host1 and host2 for DEV1, qahost1 and qahost2 for QA1).
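Putting the two requirements together, a minimal sketch might look like this: one KVM-style map from sub-environment to its host list, plus a random pick across that list as the custom load-balancing strategy. The host names mirror the example configuration above; the map would really live in a KVM, and scheme/port handling is omitted.

```javascript
// Sketch: per-sub-environment host lists with a random load-balancing pick.

var envHosts = {  // illustrative KVM contents: sub-env -> host list
  DEV1: ["host1", "host2"],
  DEV2: ["host3", "host4"],
  QA1:  ["qahost1", "qahost2"],
  QA2:  ["qahost3", "qahost4"]
};

function pickHost(envKey, rand) {
  // rand: number in [0, 1), e.g. Math.random()
  var hosts = envHosts[envKey];
  if (!hosts || hosts.length === 0) {
    throw new Error("Unknown environment " + envKey);
  }
  return hosts[Math.floor(rand * hosts.length)];
}

// In the Apigee JavaScript policy:
// context.setVariable("target.url",
//     "https://" + pickHost(env, Math.random()) +
//     context.getVariable("proxy.pathsuffix"));
```

A weighted or round-robin pick (the latter would need a counter kept in the Edge cache, since JavaScript policy executions don't share state) could be swapped in for the random choice.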

Thanks