This article was co-authored by Daniel Strebel and Joel Gauci
[NOTE for users of Apigee hybrid 1.8+] The Apigee-managed ingress gateway of Apigee hybrid introduced breaking changes to the approach described in this article. In particular, ingress configuration via Istio resources is no longer supported. Please refer to the most recent Apigee hybrid documentation on this topic.
This article aims to shed some light on how incoming API requests are routed internally within Apigee hybrid. We will explain how Apigee routes traffic to the correct environment message processor pods and how you can control this behaviour using Kubernetes Custom Resources and the apigeectl overrides functionality. Even though all the necessary routing components are generated automatically by Apigee tooling, as described in the official installation documentation, the lower-level concepts described in this article should help you design more sophisticated topologies and troubleshoot connectivity issues in your deployments.
This section presents an overview of an ingress path and the main components of the Apigee runtime plane, which are part of this journey. It also describes some different configuration options that are available when installing Apigee hybrid.
Did you know that the Apigee hybrid runtime plane is composed of different Custom Resources (CR), which play a critical role in the operation and functioning of the runtime plane?
These Custom Resources are extensions of the Kubernetes API. They are installed by creating Custom Resource Definitions (CRDs). Regardless of how they are installed, the new resources are referred to as Custom Resources to distinguish them from built-in Kubernetes resources (such as Pods). If you want to learn more about CRDs in Kubernetes, here is the link to the official Kubernetes documentation.
The Apigee hybrid runtime defines the following CRDs, used in the apigee namespace:
These CRDs are managed by an Apigee controller manager that is part of the runtime and that is installed in the apigee-system namespace during the initialization phase of the runtime installation.
You can see the pod of the apigee controller manager using the following command:
kubectl get pods -n apigee-system
Coming back to the overview of the ingress path, here is a picture that puts the different components of the runtime in perspective:
Here, we highlight the main components involved in the ingress path.
The different components defined in the istio-system namespace are provisioned during the Anthos Service Mesh (ASM) installation (cf. Install ASM).
The only exceptions are the cryptographic objects (private key and certificate) used by the Istio Gateway, which are created as a Kubernetes opaque secret during the Apigee runtime installation process.
The name of this secret is a concatenation of the Apigee organization name ($ORG in the previous picture) and the environment group name set on this organization ($ENV_GROUP in the previous picture).
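As an illustration of this naming convention, here is a small sketch with hypothetical organization and environment group names:

```shell
# Hypothetical organization and environment group names:
ORG="my-org"
ENV_GROUP="test-group"

# The ingress secret name is the concatenation of the two:
SECRET_NAME="${ORG}-${ENV_GROUP}"
echo "${SECRET_NAME}"   # -> my-org-test-group

# Once the runtime is installed, the secret can be inspected with
# (requires a cluster, hence commented out):
# kubectl get secret -n istio-system "${SECRET_NAME}"
```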
The private key and certificate are defined in the Apigee hybrid overrides (YAML) configuration file.
You have two options to create these crypto objects:
Please refer to the following community article if you want to know more about this topic: Free, trusted SSL Certificates for Apigee hybrid ingress on GKE.
Here is a table, based on the previous picture, that sums up the apigee- and istio-system-namespaced components of the hybrid runtime and the Apigee CR responsible for their creation:
| Apigee hybrid component | K8s Resource Name | Apigee CR |
| --- | --- | --- |
| Gateway | $ORG-$ENV_GROUP-$id | ApigeeRoute |
| VirtualService | $ORG-$ENV_GROUP-$id | ApigeeRoute |
| DestinationRule | apigee-runtime-$ORG-$ENV-$id | ApigeeDeployment |
| Service | apigee-runtime-$ORG-$ENV-$id | ApigeeDeployment |
| ReplicaSet | apigee-runtime-$ORG-$ENV-$id-$version-$code | ApigeeDeployment |
| ApigeeDeployment | apigee-runtime-$ORG-$ENV-$id | ApigeeEnvironment |
| Secret (for ingress gateway) | $ORG-$ENV_GROUP or $ORG-$ENV_GROUP-cacert | N/A |
It is important to note that the runtime pod (at the bottom of the previous picture) is created by a ReplicaSet and exposed by a Kubernetes service. This runtime pod contains the Apigee Message Processor (MP), which executes policies: security, mediation, traffic management and extensions.
In the next chapter, we describe how the different Apigee CRs are created, as well as the ingress gateway’s secret.
In this section, we describe the different options to configure the Apigee hybrid runtime.
First, we present the Apigee hybrid runtime configuration file (overrides.yaml) and we discuss the apigeectl command and its different outputs regarding the components of the ingress path.
If you have already installed Apigee hybrid, you may have already asked yourself this question: “why is the Apigee hybrid runtime configuration file named overrides.yaml?”
apigeectl is a command-line interface (CLI) for installing and managing Apigee hybrid in a Kubernetes cluster. For information on downloading and installing apigeectl, see Download and install apigeectl.
Once apigeectl has been installed, here are the different files and directories that you can see from the root (the example below is based on version 1.3.4 of apigeectl):
The default Apigee hybrid runtime configuration is defined in the ./config/values.yaml file.
The configuration you want to apply to your runtime cluster “overrides” a subset of these default values, which is the reason for the name of the configuration file: overrides.yaml.
The list of all of the configuration properties that you can use to customize the runtime plane of your Apigee hybrid deployment is presented in the Configuration property reference doc.
Regarding the ingress path presented in this article, here are the properties of the overrides.yaml file that come into play in the creation of the various Kubernetes components of the runtime:
As an example, here is an extract of a simple overrides.yaml file. It emphasizes how virtual hosts and environments can be configured:
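The extract itself is not reproduced here, so below is a minimal sketch of what such a file could look like. The envgroup (test-group) and environment (test) match the example discussed in this article; the hostname and file paths are hypothetical placeholders, and the virtualhosts[].hostAliases property name is assumed from the overrides format of that era (sslCertPath and sslKeyPath are the properties listed later in this article):

```shell
# Write a minimal overrides.yaml sketch. Hostname and file paths are
# hypothetical; envgroup (test-group) and environment (test) match the
# example in this article.
cat > overrides.yaml <<'EOF'
envs:
  - name: test

virtualhosts:
  - name: test-group
    hostAliases:
      - "api.example.com"
    sslCertPath: ./certs/server.crt
    sslKeyPath: ./certs/server.key
EOF

# Sanity check: the envgroup name must match the one defined on the
# Apigee management plane.
grep -q "name: test-group" overrides.yaml && echo "envgroup configured"
```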
At least one hostname must be configured for each environment group (envgroup). As envgroups are defined on the Apigee management plane, the envgroup’s configuration must be done through the User Interface (UI) or the Apigee API.
In this example, the envgroup’s name is test-group. Cryptographic objects (certificate and key) are configured for each envgroup and are related to the hostname used by the Istio ingress gateway.
Envgroups are associated with at least one environment (test in this example).
In the Apigee UI, you can see the envgroup and its hostname(s), as well as the environment(s) associated with this envgroup, as shown in the following picture:
In the example above the characteristics of the envgroup are:
Should you need a detailed explanation and examples of how to use envgroups and environments, please refer to the Apigee documentation on Environments and Environment Groups.
In this section, we present the apigeectl CLI and the Kubernetes resources that are created during the initialization and configuration phases of the Apigee hybrid runtime installation.
Before the initialization phase, it is necessary to install two types of resources on the target cluster:
The installation of these resources is detailed in the Apigee hybrid documentation.
The output of the ASM installation consists of the following components in the istio-system namespace:
The downloading and installation of the apigeectl CLI is presented here.
The command used to initialize the Apigee hybrid runtime operates in two steps:
If your kubectl version is 1.17 or older, use the following dry run command for initialization:
apigeectl init -f overrides.yaml --dry-run=true
If your kubectl version is 1.18 or newer, use the following dry run command for initialization:
apigeectl init -f overrides.yaml --dry-run=client
If there are no errors, execute the init command as follows:
apigeectl init -f overrides.yaml
The apigeectl CLI uses the apigee-operators plugin files in order to install the following components:
Details of each CRD can be found in the apigee-operators.yaml file present in the apigeectl tool:
$APIGEECTL_HOME/plugins/apigee-operators/apigee-operators.yaml
...where APIGEECTL_HOME is the home directory of an apigeectl installation.

Once the init phase has completed, you should see the “Apigee controller manager” and “Apigee resources install” pods when executing the following command:
kubectl get pods -n apigee-system
While the controller manager pod must be in the “Running” state, the other pod must be “Completed”, as it is referenced by a Kubernetes Job (apigee-resources-install) whose aim is to install the different types of resources described above.
The next step is the installation of the runtime components, among them the different Apigee Custom Resources.
As for the init phase, we proceed in two steps.
If your kubectl version is 1.17 or older, use the following dry run command for the apply phase:
apigeectl apply -f overrides.yaml --dry-run=true
If your kubectl version is 1.18 or newer, use the following dry run command for the apply phase:
apigeectl apply -f overrides.yaml --dry-run=client
If there are no errors, execute the apply command as follows:
apigeectl apply -f overrides.yaml
To check the status of the deployment, run the following command:
apigeectl check-ready -f overrides.yaml
Please refer to the Apigee hybrid runtime installation documentation for more details.

Once all the pods are in the “Running” or “Completed” state, the installation of the Apigee hybrid runtime components is complete.
In this section, we focus on the runtime components related to the ingress path, which are created during the final configuration phase.
Based on the values of the configuration properties of your Apigee hybrid runtime (overrides.yaml), here are the different CRs created:
| Which configuration property (overrides.yaml)? | Which CR is created? | Purpose of the CR |
| --- | --- | --- |
| virtualhosts[].sslCertPath and virtualhosts[].sslKeyPath, or virtualhosts[].sslSecret | ApigeeRouteConfig | References the secret used by the Istio Gateway for each hostname of an envgroup |
| envs[] | ApigeeEnvironment | Creates an ApigeeDeployment CR for each environment-scoped resource: runtime(*), udca, synchronizer |
| virtualhosts[].name | ApigeeRoute | Contains hostnames and routing information; used to create the Istio Gateway and VirtualService |
(*): Regarding the ingress path, the ApigeeDeployment CR is responsible for creating the following components in the apigee namespace:
Istio Gateway and VirtualService resources are created in the apigee namespace based on the ApigeeRoute CR. Gateways and VirtualServices are scoped to an envgroup of an Apigee organization.
The Gateways are applied to the Envoy proxy running on a pod with the label app: istio-ingressgateway. Their specification describes the port (443) that should be exposed, the protocol to use (HTTPS), the TLS credential name and the hostnames.
The VirtualService contains the routing information based on the basePath property of the different API proxies. This resource is automatically updated when new proxies are deployed to the environment.
Private keys and certificates used on the ingress gateway are stored in a Kubernetes opaque secret. These cryptographic objects are defined in the overrides.yaml file and are transformed into a secret during the Apigee configuration step, through the virtualhosts.yaml template file.
This file is present in the apigeectl tool:
$APIGEECTL_HOME/templates/virtualhosts.yaml
...where APIGEECTL_HOME is the home directory of an apigeectl installation.
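To experiment locally, for example before wiring real certificates into sslCertPath and sslKeyPath, you can generate a throwaway self-signed pair. The hostname in the CN is a hypothetical placeholder; for production deployments, use certificates from a trusted CA, as discussed in the community article referenced earlier:

```shell
# Generate a throwaway self-signed key and certificate for a test
# hostname (hypothetical CN). Do not use self-signed material in
# production.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout server.key -out server.crt -days 1 \
  -subj "/CN=api.example.com"

# The subject should contain the configured hostname:
openssl x509 -in server.crt -noout -subject
```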
There are two runtime components that play an active role in the deployment process of an API proxy:
This means there are at least as many synchronizers as there are environments defined in an organization, and there is at least one watcher per organization.
Let’s see what the exact roles of the synchronizer and the watcher are...
API proxy deployments are two-stage processes in Apigee hybrid:
The ApigeeRoute CR is then able to generate or modify the Istio Gateway (new envgroup) and VirtualService (new API proxy or basePath). As the API proxy configuration has already been deployed to the MP/runtime pods, the full ingress path is now operational.
The purpose of this section is to step through the end-to-end routing path for an incoming request. We will look at a correctly configured setup in this section; if you are interested in troubleshooting possible routing errors, please check the subsequent section. The scenario assumes a setup where we have deployed an Apigee hybrid runtime for an environment “env1” that is part of an environment group “envgroup1”.
The environment group has a hostname configured as “api.envgroup1.example.com”. We also have an API proxy deployed in “env1” with a base path of “/my-proxy/v1”.
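The scenario can be captured as a few shell variables; concatenating the hostname, the base path and a resource yields the URL the client will call:

```shell
# Scenario parameters (from the setup described above):
ENV="env1"
ENV_GROUP="envgroup1"
HOSTNAME="api.envgroup1.example.com"
BASE_PATH="/my-proxy/v1"

# The full request URL is hostname + base path + resource:
echo "https://${HOSTNAME}${BASE_PATH}/something"
```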
1. Client request
The request path starts with a client making a request against the API proxy running on Apigee hybrid. In our example, this could look something like this:
curl https://api.envgroup1.example.com/my-proxy/v1/something
2. Resolve server IP via DNS
The client application resolves the hostname api.envgroup1.example.com to an IP address. This IP address corresponds to the load balancer of the ingress service of the runtime cluster.
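Before the DNS record exists (or to bypass it while testing), curl's --resolve option can map the hostname to the load balancer IP directly. A sketch, with a hypothetical IP address:

```shell
# Hypothetical values; substitute your envgroup hostname and the
# external IP of the ingress load balancer, e.g. obtained with:
#   kubectl get svc -n istio-system -l app=istio-ingressgateway
HOSTNAME="api.envgroup1.example.com"
INGRESS_IP="203.0.113.10"

# --resolve pins the hostname to the IP without touching DNS:
RESOLVE="${HOSTNAME}:443:${INGRESS_IP}"
echo curl --resolve "${RESOLVE}" "https://${HOSTNAME}/my-proxy/v1/something"
```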
3. TLS handshake
The client app performs a TLS handshake with the ASM ingress. The ASM ingress has the TLS credentials for each environment group hostname configured via the Gateway object (this resource is automatically generated by Apigee). You can see the Gateway configuration via the Kubernetes API:
kubectl get gateway -n apigee -o yaml
This should list something like the following configuration, which points to the Kubernetes secret containing the TLS credentials:
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  …
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - hosts:
    - api.envgroup1.example.com
    port:
      name: apigee-https-443
      number: 443
      protocol: HTTPS
    tls:
      credentialName: xxx-kvm-envgroup1
      mode: SIMPLE
4. ASM Ingress routing
Once the request reaches the ASM ingress, the traffic is decrypted and re-encrypted to be sent to the runtime pods for environment “env1”. The routing information for this is contained in the VirtualService resource in the apigee namespace.
This resource is also automatically generated for every environment group via the apigeectl tool and the user-provided overrides, as described above. The VirtualService resource is also automatically updated when you deploy proxies to an environment that is part of the environment group.
kubectl get virtualservice -n apigee -o yaml
The VirtualService references the Gateway resource from above and contains the routing rules to direct the traffic to the correct destination. The hosts array again specifies the hostname, to match only requests for a specific environment group. The uri matches are used to match the base paths of a proxy so that all traffic for this base path is routed to the correct environment.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  …
spec:
  gateways:
  - ...-envgroup1-f8181c8
  hosts:
  - api.envgroup1.example.com
  http:
  - match:
    - uri:
        regex: /my-proxy/v1(/[^/]+)*/?
    route:
    - destination:
        host: apigee-runtime-...--env1-11b180d.apigee.svc.cluster.local
        port:
          number: 8443
        subset: v140-hvszw
      weight: 100
    timeout: 300s
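You can sanity check the generated uri regex locally. Envoy's safe regex matching is anchored to the full request path, so ^ and $ are added below to make a grep -E check equivalent:

```shell
# Regex generated for a proxy with base path /my-proxy/v1 (from the
# VirtualService above); anchors added because Envoy matches the full
# path, while grep matches substrings by default.
REGEX='^/my-proxy/v1(/[^/]+)*/?$'

matches() {
  if echo "$1" | grep -Eq "$REGEX"; then echo "match"; else echo "no match"; fi
}

matches "/my-proxy/v1/something"   # -> match
matches "/my-proxy/v1"             # -> match (bare base path)
matches "/other-proxy/v1/foo"      # -> no match
```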
Looking at the destination entry you will see that the destination is defined by a specific subset identifier.
This is again automatically managed for you; it is used to identify different versions of the Apigee runtime and helps with rolling updates.
You can inspect the destination rule for a specific runtime by running this command:
kubectl get destinationrules apigee-runtime-DR_NAME -n apigee -o yaml
This should contain something like this:
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  …
spec:
  host: apigee-runtime-...--env1-11b180d.apigee.svc.cluster.local
  subsets:
  - labels:
      com.apigee.apigeedeployment: apigee-runtime-...--env1-11b180d
      com.apigee.revision: v140-hvszw
      com.apigee.version: v140
    name: v140-hvszw
  trafficPolicy:
    tls:
      mode: SIMPLE
Troubleshooting
Scenario 1: A proxy is deployed to an Apigee environment but is not reachable via the ingress hostname/IP.
Details about how to access Envoy access logs of the Apigee hybrid runtime are presented in this community article.
curl -H "Authorization: Bearer $TOKEN" https://apigee.googleapis.com/v1/organizations/my-org/envgroups/envgroup1/attachments
curl -H "Authorization: Bearer $TOKEN" https://apigee.googleapis.com/v1/organizations/my-org/envgroups/envgroup1
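The envgroup details returned by the second call include a hostnames field. A quick check against a hypothetical, truncated response (the real response contains additional fields):

```shell
# Hypothetical (truncated) response from the envgroup API call above:
RESPONSE='{"name": "envgroup1", "hostnames": ["api.envgroup1.example.com"]}'

# Verify that the hostname you are calling is listed for the envgroup:
if echo "$RESPONSE" | grep -q '"api.envgroup1.example.com"'; then
  echo "hostname configured"
else
  echo "hostname missing"
fi
```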
If not, set the correct hostname(s) using the Apigee API or the UI.
kubectl get virtualservice -n apigee -o yaml
...and look for something like this:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  …
spec:
  gateways:
  - ...-envgroup1-f8181c8
  hosts:
  - api.envgroup1.example.com
  http:
  - match:
    - uri:
        regex: /my-proxy/v1(/[^/]+)*/?
    route:
    - destination:
        host: apigee-runtime-...--env1-11b180d.apigee.svc.cluster.local
        port:
          number: 8443
        subset: v140-hvszw
      weight: 100
    timeout: 300s
kubectl exec -it $(kubectl get pods -n apigee -l org=${ORG},env=${ENV},app=apigee-runtime --output=jsonpath='{.items[0].metadata.name}') -n apigee -- curl -k https://localhost:8443/httpbin/v0/anything
If this fails, the proxy was most likely not deployed properly. Try to undeploy and redeploy the proxy from the Apigee API or the UI.
Scenario 2: TLS credentials missing or invalid
When calling the API, we get a TLS error. A curl against the API produces an error message that looks like this:
LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to api.envgroup1.example.com
This usually means that the certificate referenced by the Gateway is missing or invalid. Check the secrets for your gateways with the following command:
kubectl get gateway -n apigee -o jsonpath='{range .items[*].spec.servers[*]} {.tls.credentialName}{"\t"}{.hosts[0]}{"\n"}{end}'
This lists the TLS credential secrets together with the hostnames they are used for:
xxx-envgroup1 api.envgroup1.example.com
To validate that all referenced secrets exist, you can run the following:
for SECRET_REF in $(kubectl get gateway -n apigee -o jsonpath='{range .items[*].spec.servers[*]}{.tls.credentialName}{" "}{end}'); do kubectl get secret -n istio-system $SECRET_REF; done;
If the secret exists, check if it is valid:
kubectl get secret -n istio-system [SECRET_NAME] --template={{.data.cert}} | base64 -d | openssl x509 -text
A useful command to see which certificate is returned by the ingress is the following, based on openssl:
openssl s_client -connect $(kubectl get svc -l app=istio-ingressgateway -o custom-columns=:status.loadBalancer.ingress[0].ip -n istio-system):443
We demonstrated the routing components of Apigee hybrid and how they can be configured using the apigeectl overrides functionality. Apigee users are advised not to create any of the intermediary Kubernetes resources themselves and to stick to the official apigeectl overrides process whenever possible. Furthermore, the Apigee-generated routing resources, such as the Istio custom resources, are managed by the Apigee controllers and do not allow manual modifications: any edit is automatically overridden in order to prevent config drift that would make it impossible to incorporate things like newly deployed API proxies. The underlying concepts introduced in this article help in understanding the internal routing processes and can be useful when troubleshooting connectivity issues.