Auto-deploy proxies through Kubernetes/OpenShift

Hi guys, I have seen the API Management and Kubernetes webinar and would like to know more about managing Kubernetes APIs and automation with Apigee Edge.

I have a similar use case where I will deploy Java/Node.js apps on OpenShift (which uses Kubernetes and Docker), and each deployment should automatically create a proxy in Apigee.

Are the proxies/files/demo shown in the webinar available for external use? If so, can anyone please share them here? Is there a GitHub repo?

@Mukundha Madhavan @Bala Kasiviswanathan @Dino @Anil Sagar

Any help/info on this is appreciated. @Dino-at-Google


I hope someone from Apigee gives you the code references you need. While I do not have access to the proxies/files shown in the video, I can suggest a possible approach you could use instead if you don't get the source code samples:

  1. Create a Docker image which has the capability to create a proxy in Apigee. Let us call this image "apigee-proxy-registrator".
    • This Docker image can basically package a Node.js command-line application (look at https://www.npmjs.com/package/commander).
    • The command-line application could use openapi2apigee (or something custom) to generate a proxy bundle from a template. Your template could reference shared flows/global policies.
    • It would use apigeetool to deploy the generated proxy bundle to Apigee (see the sketch after this list).
    • It would take the following arguments (similar to the video):
      • Credentials (to register the proxy) - sourced from a Kubernetes Secret or a Vault implementation
      • Environment Name
      • Organization Name
      • Proxy Name
      • Management Server URL

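To make step 1 more concrete, here is a minimal sketch of what the registrator's entry point could look like. This is not from the webinar - the flag names, environment variables, and the ./proxy-bundle path are my assumptions, and the apigeetool arguments should be double-checked against apigeetool deployproxy --help:

```javascript
#!/usr/bin/env node
// Hypothetical entry point for the "apigee-proxy-registrator" image.
// Assumes a proxy bundle has already been generated (e.g. with openapi2apigee)
// into ./proxy-bundle, and shells out to the apigeetool CLI to deploy it.
const { program } = require('commander'); // commander v7+ style import
const { execFileSync } = require('child_process');

program
  .requiredOption('--org <org>', 'Apigee organization name')
  .requiredOption('--env <env>', 'Apigee environment name')
  .requiredOption('--proxy <name>', 'API proxy name')
  .requiredOption('--mgmt-url <url>', 'management server URL')
  .parse(process.argv);

const opts = program.opts();

// Credentials come from a Kubernetes Secret (or Vault) via env vars,
// never baked into the image.
execFileSync('apigeetool', [
  'deployproxy',
  '--baseuri', opts.mgmtUrl,
  '--username', process.env.APIGEE_USERNAME,
  '--password', process.env.APIGEE_PASSWORD,
  '--organization', opts.org,
  '--environments', opts.env,
  '--api', opts.proxy,
  '--directory', './proxy-bundle',
], { stdio: 'inherit' });
```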
2. Use the above Docker image as an init-container (not a sidecar) and define your Kubernetes service manifest.

Note: If you are using a service mesh which supports automated injection, use that to automatically inject your "apigee-proxy-registrator" image as an init-container on admission. If you don't use a service mesh, then you would have to write a couple of admission controllers (mutating and validating). This extra enhancement allows you to automatically inject your init-container into all target APIs which need a proxy in Apigee.
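If you do go the admission-controller route, the mutating webhook service itself is still yours to build; the snippet below only sketches how such a webhook might be registered so it fires on new Deployments in opted-in namespaces. Every name, namespace, and label here is a placeholder:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: apigee-proxy-registrator-injector
webhooks:
  - name: inject.registrator.example.com    # placeholder; must be fully qualified
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Ignore                    # don't block deployments if the injector is down
    clientConfig:
      service:
        namespace: platform                  # placeholder namespace running your injector
        name: registrator-injector
        path: /mutate
      # caBundle: <base64-encoded CA for the injector's serving certificate>
    rules:
      - operations: ["CREATE"]
        apiGroups: ["apps"]
        apiVersions: ["v1"]
        resources: ["deployments"]
    namespaceSelector:
      matchLabels:
        apigee-proxy: enabled                # opt-in label for namespaces that want injection
```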

3. Define your Kubernetes service manifest. Your Service type will depend on the location of your Apigee installation and your Kubernetes installation. If you have a private Apigee cloud and a private Kubernetes cluster, you should be able to define your Service type as ClusterIP and define an nginx ingress controller to route to your services. The configuration of this step will vary depending on your installation of both products (Apigee and Kubernetes). Your Apigee proxy will use this service/ingress endpoint as the target endpoint. You could build these endpoints based on conventions, so your proxy can derive them automatically - it's up to your imagination.
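As an illustration of the convention idea, a plain ClusterIP Service like the sketch below gives the registrator a predictable target of the form http://<app>.<namespace>.svc.cluster.local (or an ingress host derived the same way). All names and ports are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: orders-api        # placeholder app name; also drives the proxy's target URL
  namespace: team-a       # placeholder namespace
spec:
  type: ClusterIP
  selector:
    app: orders-api
  ports:
    - port: 80            # what the proxy/ingress calls
      targetPort: 8080    # what the container listens on
```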

4. Define your Kubernetes deployment manifest, including your application container image and also your init-container (if you haven't enabled auto-injection).
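Here is a sketch of such a deployment manifest, pairing the application container with the hypothetical registrator init-container. Image names, args, and the Secret are assumptions; note that the init-container runs on every pod start (and for every replica), so the registration it performs should be idempotent:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-api
  namespace: team-a
spec:
  replicas: 2
  selector:
    matchLabels:
      app: orders-api
  template:
    metadata:
      labels:
        app: orders-api
    spec:
      initContainers:
        - name: apigee-proxy-registrator
          image: registry.example.com/apigee-proxy-registrator:latest  # placeholder registry
          args: ["--org", "my-org", "--env", "test",
                 "--proxy", "orders-api",
                 "--mgmt-url", "https://api.enterprise.apigee.com"]
          envFrom:
            - secretRef:
                name: apigee-credentials   # supplies APIGEE_USERNAME / APIGEE_PASSWORD
      containers:
        - name: orders-api
          image: registry.example.com/orders-api:latest                # your application image
          ports:
            - containerPort: 8080
```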

5. Deploy your Kubernetes services/deployments as you normally would - this is regular Kubernetes.

As you can see, I have glossed over a lot of details, but we can discuss them as your adventure unfolds. Most of your development will be in the "apigee-proxy-registrator" image; the rest is just Kubernetes configuration and manifests.

Good Luck

Hi @rmishra, thanks a ton for providing a detailed approach. I will definitely give it a try.

In the meantime, I have a workaround; please have a look and let me know if it is a good approach.

OpenShift internally uses Kubernetes; all similar pods are load balanced via a Service layer, which can be linked to a Route for external access.

  1. I have an OpenShift Jenkins pipeline which pulls the Java/Node.js code from Git, runs all the tests, deploys the application (pod/pods) to OpenShift, and creates a Route to expose the app.
  2. This Route can be pre-defined in OpenShift (a sample Route manifest follows this list), such as:
    1. http(s)://<application-name>-<project>.<default-domain-suffix>
  3. I am thinking of adding another step to my pipeline, which will use the Apigee Maven plugin to deploy a proxy to Apigee; in the config.json >> targets value, I would hardcode this Route URL.
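For reference, a Route following that naming convention could also be declared explicitly like this (host, service name, and port are placeholders, and the TLS settings depend on your cluster):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp
spec:
  host: myapp-myproject.apps.example.com   # <application-name>-<project>.<default-domain-suffix>
  to:
    kind: Service
    name: myapp
  port:
    targetPort: 8080
  tls:
    termination: edge
```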

Step 3 is completely related to the Apigee Maven plugin; I am only replacing the target value in the config with a pre-defined OpenShift Route.
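For illustration, that targets entry might look roughly like the snippet below, which follows the pattern used in the apigee-deploy-maven-plugin samples (the exact schema and xpath depend on your plugin version and proxy bundle, so treat this as a sketch):

```json
{
  "configurations": [
    {
      "name": "test",
      "proxies": [],
      "targets": [
        {
          "name": "default.xml",
          "tokens": [
            {
              "xpath": "/TargetEndpoint/HTTPTargetConnection/URL",
              "value": "https://myapp-myproject.apps.example.com"
            }
          ]
        }
      ]
    }
  ]
}
```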

Does it make sense? If not, please let me know and I will try to elaborate.

@Siddharth Barahalikar

Yes, it does. And it will work as long as Apigee Edge Message Processors have a network line of sight to Kubernetes Services.

You may also need to worry about the "atomicity" of the deployment process - is that a concern? What happens if the proxy deployment succeeds but the target API deployment fails? Should you roll back the Apigee deployment?

I would recommend you deploy the target API first, ensure it is deployed, deploy the proxy bundle next, and finally activate the bundle. Your deployment should be considered successful only when a heartbeat call through the proxy to the target API comes back with an HTTP 200 OK.

I use a pretty similar approach with Amazon ECS (Container Service) and Apigee.

P.S. If you use Helm charts, you will be able to define routes as environment variables for both the Maven plugin and your service route.
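For example, a single (hypothetical) value in values.yaml could feed both the Route host and the proxy target, so the two never drift apart:

```yaml
# values.yaml (names are placeholders)
route:
  host: myapp-myproject.apps.example.com

apigee:
  org: my-org
  env: test
  proxyName: myapp

# In a chart template, the same value can then be injected wherever it is needed, e.g.:
#   env:
#     - name: TARGET_ROUTE
#       value: "https://{{ .Values.route.host }}"
```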