Apigee hybrid ingress - Three different options to expose your ingress gateway on GKE

[NOTE for users of Apigee hybrid 1.8+] The Apigee-managed ingress gateway introduced in Apigee hybrid 1.8 brings breaking changes for the approach described in this article. In particular, ingress configuration via Istio resources is no longer supported. Please refer to the most recent Apigee hybrid documentation on this topic.

This article follows our previous post that explained the internal routing behavior within an Apigee hybrid cluster and how the routing decisions are controlled by Apigee Custom Resources. This time around we want to shift our focus to what happens before a request reaches the ASM ingress gateway and outline three different exposure options.

An important decision to make when designing your Apigee hybrid architecture is from where the APIs should be reachable: should anyone in the world be able to reach the hybrid cluster, do we have geographically distributed deployments and want to route clients to the nearest hybrid cluster, or do we need to permit only consumers from within the same private network? In this article we describe the different options to implement each of these scenarios using the managed Google Cloud Load Balancing products. For a general decision guide on all available load balancing options in Google Cloud, please see this page.

Whilst the concern of exposing the ASM ingress service is the same for all deployment options, this article describes the necessary configuration steps for GKE on Google Cloud. Other deployment environments will require slight tweaks to the vendor-specific configuration parameters. The most recent Apigee hybrid version at the time of writing is 1.4.2, and all references to Apigee-specific configuration entries were tested with 1.4.x.

“Default”: Exposure via External Network Load Balancer

The installation instructions for Apigee hybrid recommend creating a Kubernetes service of type LoadBalancer with an external IP for the ingress gateway, as follows:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        service:
          type: LoadBalancer
          loadBalancerIP: static_ip
          ports:
          - name: status-port
            port: 15021 # for ASM 1.7.x and above, else 15020
            targetPort: 15021 # for ASM 1.7.x and above, else 15020
          - name: http2
            port: 80
            targetPort: 8080
          - name: https
            port: 443
            targetPort: 8443

GKE takes this service specification and creates an external L4 network load balancer. This is a regional load balancer that passes TCP/UDP traffic through to the ASM ingress pods without terminating TLS; TLS termination happens on the Istio ingress pods.
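
Once GKE has provisioned the load balancer, you can verify the setup from the command line. The hostname and IP below are placeholders for one of your environment group hostnames and the reserved static IP:

kubectl get service istio-ingressgateway -n istio-system

curl https://my-api.example.com/httpbin/v0/anything -I \
  --resolve my-api.example.com:443:203.0.113.10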

This approach is the right choice if you want to allow traffic from outside the VPC and do not need the L7 capabilities described in the next section.

External HTTP(S) Load Balancer

Similar to the network load balancer described above, an external HTTP(S) load balancer can also be used to front the Apigee ingress service and allow external consumption of the APIs in your Apigee cluster. In contrast to the network load balancer, the HTTP(S) load balancer is a global load balancer and can balance traffic across deployments in multiple regions. Additionally, the HTTP(S) load balancer operates on OSI layer 7 and can therefore make more intelligent routing decisions based on the host or path, e.g. to route only a subset of incoming traffic to your Apigee hybrid cluster.

Lastly, the HTTP(S) load balancer can be combined with Cloud Armor to add an additional layer of security and protect your APIs against a range of DDoS and web attacks.
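
As a rough sketch of how this can be wired up on GKE: a Cloud Armor security policy can be attached to the load balancer backend via a BackendConfig resource that the ingress service references through a cloud.google.com/backend-config annotation. The resource and policy names below are assumptions for illustration:

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: apigee-backend-config
  namespace: istio-system
spec:
  securityPolicy:
    name: apigee-armor-policy # a pre-created Cloud Armor policy (placeholder)

The istio-ingressgateway service described below would then carry the annotation cloud.google.com/backend-config: '{"default": "apigee-backend-config"}'.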

Configuring an external HTTP(S) load balancer is slightly more involved than a simple network load balancer, so we will walk through the required steps one by one.

First, the ingress gateway service no longer needs to be directly publicly accessible, so we configure it as type ClusterIP and add the GKE-specific annotations for network endpoint groups (NEGs) and the HTTPS backend protocol:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        serviceAnnotations:
          cloud.google.com/app-protocols: '{"https":"HTTPS"}'
          cloud.google.com/neg: '{"ingress": true}'
        service:
          type: ClusterIP
          ports:
          - name: status-port
            port: 15021 # for ASM 1.7.x and above, else 15020
            targetPort: 15021 # for ASM 1.7.x and above, else 15020
          - name: http2
            port: 80
            targetPort: 8080
          - name: https
            port: 443
            targetPort: 8443

Second, we create a TLS Kubernetes secret that holds the private key and certificate used by the external load balancer. Note that this certificate will be used for all environment groups within your Apigee cluster and should therefore cover all the required hostnames.
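
If you do not yet have a certificate and key at hand, a self-signed pair can be generated for testing; the wildcard domain is a placeholder, and self-signed certificates should not be used in production:

openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=*.example.com" \
  -keyout privkey.pem -out fullchain.pem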

kubectl create secret tls apigee-tls --key privkey.pem --cert fullchain.pem -n istio-system

We also need to create a global static IP address to be used by the external load balancer:

gcloud compute addresses create apigee-global --global
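
The reserved address can be looked up afterwards, e.g. to create the DNS records for your environment group hostnames:

gcloud compute addresses describe apigee-global --global --format='value(address)'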

To create the external HTTP(S) load balancer, we now create an Ingress resource that references the external IP as well as the Kubernetes secret that holds the TLS credentials:

apiVersion: extensions/v1beta1 # removed in Kubernetes 1.22; use networking.k8s.io/v1 (with the updated backend syntax) on 1.19+
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.allow-http: "false"
    kubernetes.io/ingress.global-static-ip-name: apigee-global
  name: xlb-apigee-ingress
  namespace: istio-system
spec:
  backend:
    serviceName: istio-ingressgateway
    servicePort: 443
  tls:
  - secretName: apigee-tls
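
We apply the manifest as usual; the file name is just an assumption:

kubectl apply -f xlb-apigee-ingress.yaml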

At this point we have an external HTTPS load balancer that points to the istio-ingressgateway service. However, the load balancer backend service is not yet recognized as healthy, because the default Apigee Istio ingress relies on Server Name Indication (SNI), which is not supported by the HTTP(S) load balancer health checks at the time of writing.
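
The backend health can be inspected with gcloud. The backend service name is auto-generated by the GKE ingress controller, so the name in the second command is a placeholder; pick the real one from the list output:

gcloud compute backend-services list

gcloud compute backend-services get-health k8s1-placeholder-istio-system-istio-ingressgateway-443 --global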

If we try to call an API endpoint on the load balancer, we get a 502 Bad Gateway error because no healthy backend was identified:

curl https://my-api.example.com/httpbin/v0/anything -I

-- OUTPUT --

HTTP/2 502 
content-type: text/html; charset=UTF-8
referrer-policy: no-referrer
content-length: 332
date: Thu, 11 Mar 2021 07:15:27 GMT
alt-svc: clear

To allow the health checks to succeed, we have to create an additional Istio gateway that does not depend on SNI, as well as a virtual service that rewrites the health check URI and routes it to the ingress gateway's health endpoint.

apiVersion: apigee.cloud.google.com/v1alpha1
kind: ApigeeRoute
metadata:
  name: wildcard-gateway-apigee
  namespace: apigee
spec:
  enableNonSniClient: true
  hostnames:
  - '*'
  ports:
  - number: 443
    protocol: HTTPS
    tls:
      credentialName: apigee-tls
      minProtocolVersion: TLS_AUTO
      mode: SIMPLE
  selector:
    app: istio-ingressgateway

---

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: health
  namespace: apigee
spec:
  gateways:
  - wildcard-gateway-apigee
  hosts:
  - '*'
  http:
  - match:
    - headers:
        user-agent:
          prefix: GoogleHC
      method:
        exact: GET
      uri:
        exact: /
    rewrite:
      authority: istio-ingressgateway.istio-system.svc.cluster.local:15021
      uri: /healthz/ready
    route:
    - destination:
        host: istio-ingressgateway.istio-system.svc.cluster.local
        port:
          number: 15021 # for ASM 1.7.x and above, else 15020
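
We save both manifests to a file and apply them; the file name is just an assumption:

kubectl apply -f wildcard-gateway-apigee.yaml

kubectl get apigeeroutes,virtualservices -n apigee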

After a few minutes the backend of the load balancer will show as healthy, but our API call still returns an error. This time it is a 404 Not Found:

curl https://my-api.example.com/httpbin/v0/anything -I

-- OUTPUT --

HTTP/2 404 
date: Thu, 11 Mar 2021 07:23:21 GMT
server: istio-envoy
via: 1.1 google
alt-svc: clear

This happens because our non-SNI gateway, which accepts the incoming traffic from the load balancer, is not yet connected to the Apigee-managed virtual services. To connect them, we attach an "additionalGateways" entry to all of the "virtualhosts" entries in our overrides YAML and re-apply it using apigeectl.

virtualhosts:
  - name: test
    sslCertPath: ./fullchain.crt
    sslKeyPath: ./test.key
    additionalGateways: ["wildcard-gateway-apigee"]
  - name: prod
    sslCertPath: ./fullchain.crt
    sslKeyPath: ./test.key
    additionalGateways: ["wildcard-gateway-apigee"]
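
The corresponding apigeectl invocation looks as follows; the overrides file path is an assumption based on the default installation layout:

apigeectl apply -f overrides/overrides.yaml --settings virtualhosts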

Once the changes are propagated to the Apigee virtual services, traffic is correctly routed to the Apigee runtime pods and our call succeeds. Note that even though we now bind the Istio virtual services to the wildcard gateway, they are still able to make routing decisions based on the requested hostname.

curl https://my-api.example.com/httpbin/v0/anything -I

-- OUTPUT --

HTTP/2 200 
date: Thu, 11 Mar 2021 07:25:18 GMT
content-type: application/json
content-length: 913
server: istio-envoy
access-control-allow-origin: *
access-control-allow-credentials: true
via: 1.1 google
alt-svc: clear

The same mechanism that we used to expose a single Apigee cluster can optionally also be used to extend our setup with an additional Apigee cluster in a different region, using the external HTTP(S) load balancer to balance traffic between the hybrid clusters. For this we configure a multi-region Apigee setup that connects the hybrid clusters via a shared Cassandra ring, and create an external HTTPS load balancer whose backend service contains one network endpoint group per cluster that should be load balanced.
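
As a rough sketch, attaching the second cluster's network endpoint group to the existing backend service could look like this; the backend service name, NEG name, and zone are placeholders:

gcloud compute backend-services add-backend k8s1-placeholder-backend \
  --global \
  --network-endpoint-group=k8s1-placeholder-neg-region2 \
  --network-endpoint-group-zone=europe-west1-b \
  --balancing-mode=RATE \
  --max-rate-per-endpoint=100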

Once the backends are added, traffic is distributed in an active-active fashion and reaches the best Apigee region based on availability and latency.

Internal Network Load Balancer

Sometimes the ingress of an Apigee hybrid cluster should not be reachable from outside the private network; in that case we create an internal network load balancer instead.

Instead of reserving a global external IP address, we reserve an internal IP address that can be used by the load balancer:

gcloud compute addresses create apigee-ingress-ip --region "$REGION" --subnet default --purpose SHARED_LOADBALANCER_VIP
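
The assigned address can be captured for substitution as $INGRESS_IP in the overlay below:

INGRESS_IP=$(gcloud compute addresses describe apigee-ingress-ip \
  --region "$REGION" --format='value(address)')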

Then we annotate the LoadBalancer service for the Istio ingress gateway to be of type internal:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        serviceAnnotations:
          networking.gke.io/load-balancer-type: internal
        service:
          type: LoadBalancer
          loadBalancerIP: $INGRESS_IP
          ports:
          - name: status-port
            port: 15021 # for ASM 1.7.x and above, else 15020
            targetPort: 15021 # for ASM 1.7.x and above, else 15020
          - name: http2
            port: 80
            targetPort: 8080
          - name: https
            port: 443
            targetPort: 8443

If you now list the services in your istio-system namespace, you will see an “external” IP address, just as you would when creating an external load balancer. This time, however, “external” only means external from the cluster's perspective; the IP is still internal to the VPC, so the APIs can only be reached by consumers within the same VPC.
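
To verify, look up the assigned IP and call an API from a VM inside the same VPC; the hostname and internal IP below are placeholders:

kubectl get service istio-ingressgateway -n istio-system

curl https://my-api.example.com/httpbin/v0/anything -I \
  --resolve my-api.example.com:443:10.128.0.42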
