An External HTTPS Load Balancer with Cloud Armor to go with every Apigee Deployment Option

In a previous post we explained three different options for configuring ingress to an Apigee hybrid runtime environment. The most interesting and powerful of these three options was the External HTTPS Load Balancer (L7 XLB). In addition to traditional load balancing tasks, the L7 XLB also enables you to leverage a number of interesting additional features such as Cloud Armor security policies and Cloud CDN content caching.

The goal of this article is to give an overview of the necessary infrastructure components for configuring an external HTTPS Load Balancer to work with Apigee X, hybrid and Edge, as well as to illustrate how these approaches can be combined to transparently migrate between different deployment options.

Apigee X

With Apigee X the customer is responsible for routing traffic to the private ingress endpoint that is provided through the peered network connection to the Apigee tenant project. To enable external access to the API runtime you can create an L7 XLB that terminates TLS at the edge and sends encrypted traffic to the private Apigee runtime. Because the L7 XLB is (at the time of writing this article) not able to route traffic directly to an endpoint that is located in a peered network, we need to provide a lightweight middle proxy. The middle proxy then transparently forwards traffic to the Apigee runtime.
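As a rough sketch of the required plumbing, the L7 XLB in front of the middle proxy could be wired up as follows. All resource names, zones and domains are placeholders, and the managed instance group of middle-proxy VMs (apigee-x-mig) is assumed to already exist as described in the official documentation:

# Health check and backend service for the middle-proxy MIG (hypothetical names throughout).
gcloud compute health-checks create https apigee-x-hc --port=443

gcloud compute backend-services create apigee-x-backend \
  --global --protocol=HTTPS --health-checks=apigee-x-hc

gcloud compute backend-services add-backend apigee-x-backend \
  --global \
  --instance-group=apigee-x-mig \
  --instance-group-zone=us-central1-a

# Google-managed certificate for the external hostname.
gcloud compute ssl-certificates create apigee-x-cert \
  --domains=api.example.com

# URL map, HTTPS proxy and forwarding rule that complete the L7 XLB.
gcloud compute url-maps create apigee-x-map --default-service=apigee-x-backend

gcloud compute target-https-proxies create apigee-x-proxy \
  --url-map=apigee-x-map --ssl-certificates=apigee-x-cert

gcloud compute forwarding-rules create apigee-x-fr \
  --global --target-https-proxy=apigee-x-proxy --ports=443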

The configuration of the middle proxy as well as the L7 load balancer is documented in the official Apigee documentation and is part of the configuration wizard. The Apigee DevRel X provisioning script also allows you to create an L7 XLB with Google-managed certificates and a publicly reachable hostname to get started quickly.

It is also important to remember that the internal routing within Apigee relies on Server Name Indication (SNI) to route traffic to the correct environment group and, ultimately, the environment's message processor. You therefore have to ensure that the hostname used for the L7 XLB matches one of the hostnames that are attached to your Apigee environment group.
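To double-check which hostnames are attached to an environment group, you can query the Apigee management API (the organization and group names below are placeholders):

# List the hostnames attached to an environment group.
TOKEN=$(gcloud auth print-access-token)
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://apigee.googleapis.com/v1/organizations/my-org/envgroups/my-envgroup"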

Apigee hybrid (GKE on GCP)

With Apigee hybrid the networking architecture is simpler and more automated, as the runtime is already in the customer’s VPC network and reachable by the L7 XLB. As described in the previous blog post you need to make sure that your ASM ingress is capable of serving the non-SNI traffic from the load balancer and the health check by adding additional Kubernetes resources to your Apigee hybrid cluster. Network endpoint groups (NEGs) are created automatically when you add the correct NEG annotations to the ingress service. This way the NEG can be used directly as a backend for your L7 XLB.
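For illustration, assuming the ASM ingress gateway runs as the usual istio-ingressgateway service (the service and namespace names are assumptions, not prescribed by this article), the NEG annotation could be added like this:

# Ask GKE to create a standalone NEG for port 443 of the ingress service.
kubectl annotate service istio-ingressgateway -n istio-system \
  'cloud.google.com/neg={"exposed_ports": {"443":{}}}'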

Apigee Edge

Even though Apigee Edge is a fully managed SaaS offering that does not have native GCP networking integration, it can still leverage an L7 XLB to gain access to the security features of Cloud Armor or the global CDN capabilities of Cloud CDN to improve performance. It is important to note that while Apigee Edge does not charge for networking traffic, having an L7 XLB in front of it means that you will have to pay for egress traffic (traffic sent to Apigee Edge and traffic sent to the client). For Apigee X and hybrid, egress traffic occurs regardless of whether you use an L7 XLB or not.
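As a minimal sketch of attaching a Cloud Armor policy to such a load balancer (the policy name, the backend service name and the sample CIDR are placeholders; the backend service is assumed to exist, e.g. as created in the next step):

# Create a security policy and a rule that denies a sample IP range.
gcloud compute security-policies create apigee-armor-policy

gcloud compute security-policies rules create 1000 \
  --security-policy=apigee-armor-policy \
  --src-ip-ranges="198.51.100.0/24" \
  --action=deny-403

# Attach the policy to the backend service behind the L7 XLB.
gcloud compute backend-services update apigee-edge-backend \
  --global --security-policy=apigee-armor-policy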

To add an L7 XLB in front of Apigee Edge we use a load balancer backend of type internet network endpoint group (NEG) that uses the existing fully qualified domain name (FQDN) of your Apigee Edge virtual host.
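A sketch of this internet NEG setup could look as follows (resource names are placeholders, and ORG-ENV.apigee.net stands in for your actual Edge virtual host FQDN):

# Global internet NEG that points at the Edge virtual host FQDN.
gcloud compute network-endpoint-groups create apigee-edge-neg \
  --global --network-endpoint-type=internet-fqdn-port

gcloud compute network-endpoint-groups update apigee-edge-neg \
  --global --add-endpoint="fqdn=ORG-ENV.apigee.net,port=443"

# Backend service that uses the internet NEG (internet NEGs do not support health checks).
gcloud compute backend-services create apigee-edge-backend \
  --global --protocol=HTTPS

gcloud compute backend-services add-backend apigee-edge-backend \
  --global \
  --network-endpoint-group=apigee-edge-neg \
  --global-network-endpoint-group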

You then have to create a new virtual host entry for the hostname that points to the L7 XLB IP and add it to your API proxies.

The downside of this approach is that the L7 XLB and Cloud Armor could be circumvented by clients that know the hostname of the existing Apigee virtual host. To limit this exposure the internet NEG would have to be replaced with a middle proxy that can provide mTLS-based client authentication towards the Apigee virtual host.

Bonus: Load Balancing Apigee X and Edge for Migrating API Proxies

By merging the approaches above we can also transparently migrate API traffic between different Apigee deployment options. In this section we focus on Apigee X and Apigee Edge and how you can use the L7 XLB to split and ultimately migrate traffic from Apigee Edge to X. The same approach applies to any other combination of deployment options. To be able to load balance between multiple Apigee deployments you have to create an L7 XLB with two backend services. The first backend service uses the managed instance group of middle proxies that direct traffic to the peered Apigee X runtime. The second backend uses the internet NEG that points at the FQDN of your Apigee Edge virtual host. On the Apigee side, this setup requires that we configure the same hostname for our Apigee X environment group as well as for the Apigee Edge virtual host.

At the load balancer you can then make routing decisions based on hostnames and paths. In this example any proxies under the /legacy-app/* prefix are routed to Apigee Edge while any unmatched path is routed to the middle proxy and the new Apigee X runtime.
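Expressed as a URL map, such a routing rule might look roughly like this (the backend service names and hostname are placeholders, assuming the Apigee X and Edge backend services from the previous sections already exist):

# Route /legacy-app/* to Apigee Edge; everything else goes to Apigee X.
gcloud compute url-maps create apigee-migration-map \
  --default-service=apigee-x-backend

gcloud compute url-maps add-path-matcher apigee-migration-map \
  --path-matcher-name=migration \
  --default-service=apigee-x-backend \
  --path-rules="/legacy-app/*=apigee-edge-backend" \
  --new-hosts=api.example.com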

This setup obviously only covers the routing aspect of migrating API proxies and does not cover other important considerations such as credential management and other API resources that potentially need to be managed across the two deployment options.

Comments
dchiesa1
Staff

Daniel, #ThisIsTheContentWeHaveBeenWaitingFor

CodyK
Bronze 3

Great post @strebel! Do you know if there is a way, with Cloud Armor, to restrict access to the Apigee X instance IP when using Private Service Connect? For example, ensuring traffic is routed through the ILB or GLB? It’s easily done when using the MIG as a proxy, but I’m wondering whether it can be done without it? Thank you

OliverJacob12
Bronze 1

Do you know if there is a way, with Cloud Armor, to restrict access to the Apigee X instance IP when using Private Service Connect? For example, ensuring traffic is routed through the ILB or GLB? It’s easily done when using the MIG as a proxy, but can it be done without it?

CodyK
Bronze 3

@OliverJacob12 

The article says: "Because the L7 XLB is (at the time of writing this article) not able to route traffic directly to an endpoint that is located in a peered network."

That’s not exactly true anymore with Private Service Connect. Apigee X provides a service attachment, for which we create a regional network endpoint group of type Private Service Connect (not to be confused with the internet NEGs used for Apigee Edge).
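For reference, creating such a PSC NEG could look roughly like this (the region, network, subnet and service attachment path are all placeholders):

# Regional PSC NEG that targets the Apigee X service attachment.
gcloud compute network-endpoint-groups create apigee-psc-neg \
  --region=us-central1 \
  --network-endpoint-type=private-service-connect \
  --psc-target-service=projects/TENANT_PROJECT/regions/us-central1/serviceAttachments/apigee-sa \
  --network=my-vpc --subnet=my-subnet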

When deploying an Apigee instance we whitelist which GCP projects can connect via PSC. But that still doesn’t prevent someone from reaching the internal Apigee IP. I suppose you could probably always block all traffic to that IP and it should still flow through the service attachment. You can ultimately only get mTLS to wherever it’s terminated before a managed Apigee instance, such as the MIG, or there is a preview feature of the Layer 7 load balancers supporting mTLS natively. This is one reason why I’ve been looking at the Apigee Adapter for Envoy, because I think it solves a lot of those security concerns.

strebel
Staff

@CodyK good questions. 

Your mTLS comment is valid. You'd do the client cert verification at the LB or MIG and then optionally forward the original client certificate in an x-header to do some additional checks in an Apigee policy.

On your other question: Cloud Armor can't be used to restrict access to the instance IP. Another idea to consider: with PSC you no longer have to peer with a routable VPC. If you are concerned about access to the Apigee instance from within your own subnets, you could peer Apigee with a standalone VPC and use PSC to explicitly control the approved inbound and outbound connections (with authorization at the project level).


CodyK
Bronze 3

Thanks @strebel

That’s an interesting idea. I never considered creating a separate VPC.

We are still evaluating whether PSC is a good option given the lack of active health checks. It is preferred because it removes the operational overhead of using a MIG as a proxy. But if we continue with PSC, it’s definitely a good option to consider.

wahidOne
Bronze 1

Hi @strebel,

How does Cloud Armor protect a domain outside GCP? Can I use an HTTPS LB and route the domain in its host rules?

strebel
Staff

Hi @wahidOne,

You could use the HTTPS LB together with an internet NEG to use Cloud Armor with domains outside of GCP.

For an overview of internet NEGs please see here: https://cloud.google.com/load-balancing/docs/negs/internet-neg-concepts

Note that clients could still try to directly connect to the underlying FQDN target so you'd have to ensure these requests are rejected according to your needs.
