Apigee X Network Connectivity using Private Service Connect (PSC)

Overview

Apigee is Google Cloud’s differentiated API Management solution. With Apigee, developers can focus on product and application features rather than implementing common concerns like security and traffic management, which can be delegated to and configured at the Apigee API Management layer. The API traffic flow between clients and Apigee is referred to as Northbound, and the API traffic flow between Apigee and target backends is referred to as Southbound. Apigee comes in two flavours: Apigee X (a Google-managed solution) and Apigee Hybrid (the customer manages the Apigee Runtime Plane and Google manages the Apigee Control Plane).

Private Service Connect (PSC) is Google Cloud’s secure service-oriented networking construct that allows access to Google APIs and services, and managed services in other VPC networks. One can even have private consumption of services, using IP addresses, across VPC networks that belong to different groups, teams, projects, or organizations. Refer to the official documentation for more information.

This article talks about some common Apigee X Networking Patterns which use Private Service Connect.

Background

When an Apigee X instance is created, the instance’s ingress is exposed through a private IP address associated with an L4 Internal Load Balancer (ILB). For a multi-region setup, Apigee X instances can be deployed in the desired regions; note that each region can contain at most one Apigee X instance. Until the launch of PSC, a Global External Load Balancer (GXLB) along with regional Managed Instance Group(s) (carrying an iptables definition) was required to forward external traffic to the respective regional Apigee L4 ILBs.

Let’s consider a case where a GXLB + MIG(s) are used for Northbound connectivity. Apigee X uses Service Networking, which creates a peering between the Service Networking Host Project and the customer VPC where the GXLB and MIG(s) exist. Because VPC peering is non-transitive, Apigee X can only reach target backends that are one hop away, i.e. hosted in the peered VPC. This is a challenge, as not all target backends may reside in the peered VPC. Figure 1 shows the non-transitive VPC Peering setup. Though it is possible to create additional peerings, Load Balancers, and Managed Instance Groups to achieve private connectivity across a chain of VPCs, this requires complex configuration and significant maintenance.

Figure 1: Non-transitive VPC Peering

PSC helps address these concerns by providing easy connectivity between Apigee X and clients/backends which are hosted in different VPCs (that could even belong to different GCP Projects or GCP Organizations). Let’s understand some PSC concepts before getting into the network patterns.

Producer and Consumer Networks

A Producer Network (a VPC) defines a service and exposes it for consumption, while a Consumer Network (a different VPC than the Producer Network VPC) consumes the service that is exposed by Producer Network. Figure 2 shows the Consumer and Producer Network interaction.

Figure 2: Producer & Consumer Model

Endpoint and Service Attachments

The Producer Network uses a construct called a PSC Service Attachment (SA) to expose its service (one Service Attachment per service) through a unique URI; the SA refers to the service’s Load Balancer forwarding rule. Each Load Balancer can be referenced by only a single SA; one cannot configure multiple SAs that use the same Load Balancer. In the case of Apigee X, the Service Attachment URI for connecting to the Apigee Instance is automatically created during the Apigee Instance creation process.
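As an illustration, if the Apigee instance is managed in Terraform, the auto-created Service Attachment URI is exported by the instance resource and can be surfaced as an output. This is a sketch assuming an instance resource named google_apigee_instance.apigee_instance; the service_attachment attribute comes from the google provider’s google_apigee_instance resource.

     # Sketch: surface the auto-created Service Attachment URI of an
     # Apigee instance (the resource name is a placeholder for illustration).
     output "apigee_service_attachment" {
       value = google_apigee_instance.apigee_instance.service_attachment
     }

This URI is what a consumer network later references when creating its PSC NEG or forwarding rule.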

PSC enables service consumers to connect to service producers using a construct called a PSC Endpoint Attachment (EA). A consumer needs to create a forwarding rule with an IP address that directs all traffic to the producer service; the target of the forwarding rule is the SA exposed by the producer. In the case of Apigee, an Endpoint Attachment needs to be created explicitly for every Service Attachment; there is a one-to-one mapping between an EA and an SA. Figure 3 shows how a consumer can utilize an EA to reach a specific target service in the producer through an SA.
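In Terraform, an Apigee Endpoint Attachment can be sketched with the google provider’s google_apigee_endpoint_attachment resource. The placeholders follow the same angle-bracket convention as the snippets later in this article and are assumptions to be replaced with real values.

     # Sketch: create an Apigee Endpoint Attachment pointing at a
     # producer Service Attachment (placeholder values to be replaced).
     resource "google_apigee_endpoint_attachment" "endpoint_attachment" {
       org_id                 = <Apigee Organization ID>
       endpoint_attachment_id = <Endpoint Attachment Name>
       location               = <Region>
       service_attachment     = <Producer Service Attachment URI>
     }

Apigee exposes a host for the created EA, which API proxies can then use in their target endpoint URLs.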

Figure 3: Private Service Connect

PSC Network Endpoint Group (NEG)

A PSC Network Endpoint Group (NEG) specifies a group of services, running across networks and projects, as a backend for a Google Cloud load balancer. It is a single endpoint that resolves to either a Google-managed regional API endpoint or a managed service published using PSC.

In the context of Apigee X Northbound networking, the network where the Apigee X runtime resides acts as the producer, while the network hosting the GXLB that exposes APIs to external clients and/or the network(s) where internal clients reside act as the consumer network(s). Conversely, in the context of Apigee X Southbound networking, the network where the Apigee X runtime resides acts as the consumer and the network where the target applications reside acts as the producer network.

Use-Cases

Some common Northbound networking use-cases where PSC can be utilized are stated below.

  • Exposing APIs to external clients outside GCP through Global or Regional Load Balancers in different GCP Projects/Organizations.
  • Exposing APIs to internal client applications in multiple GCP Projects/Organizations (hosted in different VPC networks than the one peered with Apigee X).
  • Substituting the Managed Instance Groups (MIGs) used for proxying traffic from the Global Load Balancer to Apigee X.
  • Meeting compliance requirements that do not allow VPC peering.

Some common Southbound networking use cases where PSC can be utilized are stated below.

  • Reaching a target application in a different VPC in the same or a different GCP Project/Organization.
  • Privately connecting Apigee X to target services running across VPC networks.
  • Avoiding complex infrastructure with self-managed components, such as virtual machines, to reach target applications.

Sample Network Architecture

Figure 4 shows an end-to-end network architecture for Apigee X using PSC for Northbound and Southbound flows.

Figure 4: Apigee X Northbound and Southbound with PSC

 

Up and Running with PSC for Apigee X: End-to-End Sample Automation

The high-level steps for using PSC Northbound with a GXLB are stated below.

  1. Create Apigee X Instance(s). Service Attachment(s) are created as part of Apigee Instance creation, in the same region, in the Google-managed space where Apigee is hosted.
  2. Create a Global External Load Balancer (GXLB).
  3. Create a PSC Network Endpoint Group (NEG) for connecting to the Service Attachment. For a multi-region setup, a PSC NEG is required per region, pointing to the respective regional Service Attachment.
  4. Attach the PSC NEG as a backend to the GXLB.
  5. Apigee API Proxies can now be accessed via the GXLB.

The high-level steps for using PSC Southbound to connect to target backends hosted in a different VPC are stated below.

  1. Create a PSC Service Attachment (this requires creating a NAT subnet) in the VPC network where the target services are deployed.
  2. Create one PSC Endpoint Attachment in the Apigee X VPC for each PSC Service Attachment; note that EAs and SAs are regional resources.
  3. Apigee X can now access the target services via the Endpoint Attachment; API Proxies need to define the HTTP Target Connection using the Endpoint Attachment(s).
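The producer-side Service Attachment from step 1 can also be sketched directly with the google provider’s google_compute_service_attachment resource rather than a module. This is a hedged sketch; the connection preference and the proxy-protocol setting are assumptions to adjust for your environment.

     # Sketch: expose a backend service's internal forwarding rule as a
     # PSC Service Attachment (placeholder values to be replaced).
     resource "google_compute_service_attachment" "backend_service_attachment" {
       name                  = <Service Attachment Name>
       project               = <Target Application Project ID>
       region                = <Region>
       enable_proxy_protocol = false
       connection_preference = "ACCEPT_AUTOMATIC"
       nat_subnets           = [<PSC NAT Subnet ID>]
       target_service        = <Internal Load Balancer Forwarding Rule ID>
     }

connection_preference can instead be set to "ACCEPT_MANUAL" with an explicit consumer accept list if connections should not be auto-accepted.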

Useful reference scripts are available in the Apigee DevRel GitHub Repository to get started with using Terraform.

Note: These scripts provide a good reference point but will require changes to adapt them to your use-case(s).

Let us understand Northbound and Southbound sample scripts from the above-mentioned repository.

  1. We would need at least two GCP Projects to try this pattern. The project module can be used to create GCP Projects as required. An existing GCP Project can also be used by setting the project_create parameter to false. For the paid version, Apigee subscription entitlements will need to be tied to the GCP Project (Customer GCP Project - 1, as shown in Figure 4).
    module "project" {
     source          = "github.com/terraform-google-modules/cloud-foundation-fabric//modules/project?ref=v16.0.0"
     name            = <Project ID>
     parent          = <Parent Folder/Organization ID>
     billing_account = <Billing Account ID>
     project_create  = <true/false>
     services = [
       "apigee.googleapis.com",
       "cloudkms.googleapis.com",
       "compute.googleapis.com",
       "servicenetworking.googleapis.com"
     ]
    }
  2. The vpc module will create a VPC, and the necessary Apigee ranges will be added as per the input. An existing VPC can also be used by setting an additional parameter vpc_create to false in the source module.
    module "vpc" {
     source     = "github.com/terraform-google-modules/cloud-foundation-fabric//modules/net-vpc?ref=v16.0.0"
     project_id = <Project ID>
     name       = <VPC Network Name>
     psa_config = {
       ranges = {
         apigee-range         = <Peering Range>
         apigee-support-range = <Support Range>
       }
       routes = null
     }
    }
  3. The nip-development-hostname module under the Northbound scripts can be used to generate Google-managed certificates and hostnames under nip.io, if the setup is for testing purposes. If custom domains are to be used, the following Terraform resources can be created instead.
    resource "google_compute_global_address" "external_address" {
     name         = <External Address Name>
     project      = <Project ID>
     address_type = "EXTERNAL"
    }
     
    resource "google_compute_managed_ssl_certificate" "google_cert" {
     project = <Project ID>
     name    = <Certificate Name>
     managed {
       domains = <List of domains>
     }
    }
  4. The apigee-x-core module under the Northbound scripts can be used to create the Apigee Organization, Apigee Instance(s), Apigee Environment(s), Apigee Environment Group(s), KeyRing(s), Service Account, and all the desired Apigee Attachments.
    module "apigee-x-core" {
     source              = "../../modules/apigee-x-core"
     project_id          = <Project ID>
     ax_region           = <Analytics Region>
     apigee_instances    = <Map of Apigee Instances>
     apigee_environments = <List of Environments>
     apigee_envgroups    = <Map of Environment Groups with Environments & Hostnames>
     network = <VPC Network ID>
    }
  5. The psc-ingress-vpc module under the Northbound scripts can be used to create the VPC that will host the Global Load Balancer and the PSC NEG(s).
    module "psc-ingress-vpc" {
     source                  = "github.com/terraform-google-modules/cloud-foundation-fabric//modules/net-vpc?ref=v16.0.0"
     project_id              = <Project ID>
     name                    = <PSC Ingress VPC Name>
     auto_create_subnetworks = false
     subnets                 = <List of subnets for exposing Apigee via PSC>
    }
  6. The next step is to create the PSC NEG using the google_compute_region_network_endpoint_group resource from the Northbound scripts.
    resource "google_compute_region_network_endpoint_group" "psc_neg" {
     project               = <Project ID>
     for_each              = <Map of Apigee Instances>
     name                  = "psc-neg-${each.value.region}"
     region                = each.value.region
     network               = <PSC Ingress VPC ID>
     subnetwork            = <Subnet URL to which all network endpoints belong>
     network_endpoint_type = "PRIVATE_SERVICE_CONNECT"
     psc_target_service    = <Target Service URL of PSC producer Service Attachment>
     lifecycle {
       create_before_destroy = true
     }
    }
  7. Create the L7 Global External Load Balancer using the nb-psc-l7xlb module from the Northbound scripts.
    module "nb-psc-l7xlb" {
     source                  = "../../modules/nb-psc-l7xlb"
     project_id              = <Project ID>
     name                    = <PSC XLB Name>
     network                 = <PSC Ingress VPC ID>
     psc_service_attachments = <Map of region to service attachment ID>
     ssl_certificate         = <SSL certificate ID created in previous steps>
     external_ip             = <External Address created in previous steps>
     psc_negs                = <List of PSC NEG IDs to be used as backends>
    }
  8. The next step is to create a VPC for the target backend applications using the backend-vpc module from the Southbound scripts. An existing VPC can also be used by setting an additional parameter vpc_create to false in the source module.
    module "backend-vpc" {
       source     = "github.com/terraform-google-modules/cloud-foundation-fabric//modules/net-vpc?ref=v16.0.0"
       project_id = <Project ID>
       name       = <Target Application VPC Name>
       subnets = <List of subnets for Target Applications>
    }
  9. Create a sample target backend application using the backend-example module from the Southbound scripts. This step can be skipped if an application already exists.
    module "backend-example" {
       source     = "../../modules/development-backend"
       project_id = <Target Application Project ID>
       name       = <Target Application Name>
       network    = <Target Application VPC ID>
       subnet     = <Target Application VPC subnets>
       region     = <Target Application Region>
    }
  10. Create a subnet for the PSC SA (NAT subnet) using the google_compute_subnetwork resource from the Southbound scripts.
    resource "google_compute_subnetwork" "psc_nat_subnet" {
       name          = <PSC Subnet Name>
       project       = <Target Application Project ID>
       region        = <Region>
       network       = <Target Application VPC ID>
       ip_cidr_range = <PSC Subnet IP CIDR>
       purpose       = "PRIVATE_SERVICE_CONNECT"
    }
  11. The next step is to create the SA and EA using the southbound-psc module from the Southbound scripts.
     module "southbound-psc" {
       source              = "../../modules/sb-psc-attachment"
       project_id          = <Target Application Project ID>
       name                = <PSC Name>
       region              = <Region>
       apigee_organization = <Apigee Organization ID>
        nat_subnets         = <PSC NAT Subnet ID created in the previous step>
       target_service      = <Target Service for service attachment i.e. forwarding rule>
       depends_on = [
         module.apigee-x-core.instance_endpoints
       ]
     }
  12. Create firewall rules using the google_compute_firewall resource from the Southbound scripts.
     resource "google_compute_firewall" "allow_psc_nat_to_backend" {
       name          = <Firewall Rule Name>
       project       = <Target Application Project ID>
       network       = <Target Application VPC ID>
       source_ranges = <List of PSC Subnet IP CIDR Ranges>
       target_tags   = <List of Target Application Tags>
       allow {
         protocol = "tcp"
         ports    = ["80", "443"]
       }
      }

Once all the Terraform resources have been created, you get an end-to-end automated Apigee X Northbound and Southbound PSC setup.

Limitations

This section describes some of the limitations when using PSC with Apigee.

Northbound PSC Limitations

  • Support for PSC is not available in the provisioning wizard; it has to be configured via the CLI or Terraform only.
  • Global External HTTP(S) Load Balancer (Classic) is not supported.
  • For each Apigee Instance, the number of customer projects that can connect via PSC NEGs is 15.
  • For each Apigee Instance, the number of PSC NEGs customers can create in the same project to connect to Apigee is 10.

Refer to the Northbound limitations for more up-to-date information.

Southbound PSC Limitations

  • In an Apigee Organization, only one Endpoint Attachment is allowed for a given Service Attachment.
  • Service Attachments and Endpoint Attachments must be in the same region.

Refer to the Southbound limitations for more up-to-date information.

Acknowledgements

Special thanks to @strebel for authoring the Terraform modules.

Version history
Last update: 11-30-2022 02:18 AM