Securing your internal APIs with Apigee Edge


Got "internal APIs"?

We often come across situations where the API provider has 2 sets of APIs:

  • "public APIs" that expose capabilities to be consumed by "partner developers" and the apps they build. They're accessible through public networks, and could be part of an "open" API offering, or a "closed" (restricted) one, in which only a specific group of developers is approved for access.
  • internal APIs that are consumed by the systems that expose the public APIs, for data, authentication, etc. Normally, these APIs need not be accessible through public networks.

Pretty typical. Usually, API providers don't want to expose the internal APIs to the outside world. They want to restrict access so that only the systems implementing the public APIs can call them.

There are multiple ways to implement this model. One easy approach is to protect the internal APIs using the various security policies provided by Apigee Edge: VerifyAPIKey, AccessControl, OAuthV2/VerifyAccessToken (typically with the client_credentials grant). This ensures only authorized clients can call these internal APIs. But it also means that the so-called "internal" APIs are exposed to the public internet; only the policy flow protects the APIs from being accessed publicly.

Is there a better way to restrict access to these APIs so that only the publicly exposed APIs can call them?

In particular, is it possible to host some APIs on Apigee Edge so that they cannot be called externally? YES, this is possible using Apigee Edge's API Management solution. We can use something called "proxy chaining" to implement this model.

Let's consider a simple use-case:

I have an API proxy, "mycompany-public", which is exposed publicly and provides some data to the caller. This data is fetched from another API proxy deployed in Edge itself; let's call it "mycompany-internal". I would like to restrict access to the "mycompany-internal" proxy so that no outside system can call it, but an API proxy running in Edge, for example the "mycompany-public" proxy, will be able to call it and fetch data from it.

Here's how I can implement this:

1. Create a new virtualhost called "internal" which has a contrived hostAlias, say internal.mycompany.com. This alias should not be resolvable in DNS; it's just a placeholder. Apart from this, I have the regular "default" and "secure" virtualhosts which come by default with any Edge organization+environment.
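For reference, the virtualhost definition might look roughly like the sketch below; you'd create it via the Edge management API (a POST to the environment's /virtualhosts resource). The port is an assumption for your environment:

```xml
<!-- Sketch of an "internal" virtualhost; the alias is deliberately un-resolvable -->
<VirtualHost name="internal">
    <HostAliases>
        <HostAlias>internal.mycompany.com</HostAlias>
    </HostAliases>
    <Port>443</Port>
</VirtualHost>
```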

2. Deploy the "mycompany-internal" proxy to the "internal" virtualhost only. This means no one from outside can call this proxy, since the hostAlias doesn't resolve in DNS. (And remember, the hostAlias is an internal artifact in Edge; it is hidden from the outside world.)
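Concretely, this just means the internal proxy's ProxyEndpoint lists only the "internal" virtualhost in its HTTPProxyConnection. A minimal sketch, assuming the /v1/internal base path used in the curl calls later:

```xml
<ProxyEndpoint name="default">
    <HTTPProxyConnection>
        <BasePath>/v1/internal</BasePath>
        <!-- bind only to the non-resolvable "internal" virtualhost -->
        <VirtualHost>internal</VirtualHost>
    </HTTPProxyConnection>
    <RouteRule name="noroute"/>
</ProxyEndpoint>
```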

3. Deploy the regular proxy "mycompany-public" to the secure virtualhost so that it's available for consumption in the public domain over TLS. Optionally use a "white-labeled" hostname for this, like api.mycompany.com; this would need to be a real DNS name with its own cert. On the "mycompany-public" API, use appropriate security policies like OAuthV2/VerifyAccessToken, VerifyAPIKey, etc. for protection.
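For instance, a minimal VerifyAPIKey policy on the public proxy might look like this; the query parameter name is an assumption, use whatever your apps actually send:

```xml
<!-- Assumes the consumer key arrives in the "apikey" query parameter -->
<VerifyAPIKey name="Verify-API-Key">
    <DisplayName>Verify-API-Key</DisplayName>
    <APIKey ref="request.queryparam.apikey"/>
</VerifyAPIKey>
```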

4. Within the "mycompany-public" API, use a ServiceCallout policy to call into the "mycompany-internal" apiproxy. Like this:

<ServiceCallout name="ServiceCallout">
    <DisplayName>ServiceCallout</DisplayName>
    <Request clearPayload="true" variable="myRequest">
        <IgnoreUnresolvedVariables>false</IgnoreUnresolvedVariables>
    </Request>
    <Response>calloutResponse</Response>
    <!-- LocalTargetConnection : chain to a proxy instead of HTTP URL -->
    <LocalTargetConnection>
        <APIProxy>mycompany-internal</APIProxy>
        <ProxyEndpoint>default</ProxyEndpoint>
    </LocalTargetConnection>
</ServiceCallout>

Notice we are calling the mycompany-internal proxy using the APIProxy name and ProxyEndpoint elements instead of a complete URL. This allows us to call this API even though it is not exposed to the outside.

For the sake of illustration, I have used a simple AccessControl policy in the mycompany-internal proxy to show that it's actually getting invoked when you call the mycompany-public proxy. The AccessControl policy looks as follows:

<AccessControl name="IP-Whitelist">
    <DisplayName>IP-Whitelist</DisplayName>
    <IPRules noRuleMatchAction="DENY">
        <MatchRule action="ALLOW">
            <SourceAddress mask="32">10.10.10.20</SourceAddress>
        </MatchRule>
        <MatchRule action="DENY">
            <SourceAddress mask="24">10.10.10.20</SourceAddress>
        </MatchRule>
    </IPRules>
</AccessControl>

In the mycompany-public proxy, before making the service callout, I use an AssignMessage policy to create a request object that injects the x-forwarded-for header the ACL policy above has whitelisted. Instead of hardcoding the IP, you can also read it from a configuration store, such as the key-value map in Apigee Edge.

<AssignMessage name="AssignMessage-1">
    <DisplayName>AssignMessage-1</DisplayName>
    <Add>
        <Headers>
            <Header name="x-forwarded-for">10.10.10.20</Header>
        </Headers>
        <QueryParams/>
        <FormParams/>
    </Add>
    <IgnoreUnresolvedVariables>true</IgnoreUnresolvedVariables>
    <AssignTo createNew="true" transport="http" type="request">myRequest</AssignTo>
</AssignMessage>
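If you'd rather not hardcode the IP, a KeyValueMapOperations policy can fetch it into a flow variable first. A sketch, where the map name ("internal-config"), the key ("allowed-ip"), and the variable name are all assumptions:

```xml
<!-- Reads the allowed IP from an environment-scoped key-value map -->
<KeyValueMapOperations name="KVM-Get-AllowedIP" mapIdentifier="internal-config">
    <Scope>environment</Scope>
    <Get assignTo="config.allowed.ip">
        <Key>
            <Parameter>allowed-ip</Parameter>
        </Key>
    </Get>
</KeyValueMapOperations>
```

The AssignMessage header would then reference the variable, as in <Header name="x-forwarded-for">{config.allowed.ip}</Header>.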

Now let's see what happens when I make calls to these proxies. I am also attaching the sample proxies so that you can try this out against your own Apigee Edge orgs and see it working in real time.

1. Hitting the mycompany-internal proxy from outside:

curl -v http://api.mycompany.com/v1/internal
>
< HTTP/1.1 404 Not Found
< Date: Fri, 17 Jun 2016 03:47:51 GMT
< Content-Type: application/json
< Content-Length: 168
< Connection: keep-alive
< Server: Apigee Router
<
* Connection #0 to host api.mycompany.com left intact
{"fault":{"faultstring":"Unable to identify proxy for host: default and url: \/v1\/internal","detail":{"errorcode":"messaging.adaptors.http.flow.ApplicationNotFound"}}}

2. Hitting the mycompany-public proxy from public IP:

curl -v http://api.mycompany.com/v1/public
>
< HTTP/1.1 200 OK
< Date: Fri, 17 Jun 2016 03:50:16 GMT
< Content-Type: text/plain; charset=utf-8
< Content-Length: 13
< Connection: keep-alive
< ETag: W/"d-GHB1ZrJKk/wdVTdB/jgBsw"
< X-Powered-By: Apigee
< Server: Apigee Router
<
* Connection #0 to host api.mycompany.com left intact
Hello, Guest!

Now let's say somehow an unauthorized person, let's call her Mallory, has learned the internal host alias that the internal proxy is deployed on. In Apigee Edge, a request carrying that alias in its Host header will normally be routed to the virtualhost with that alias. So, Mallory tries making a call to that proxy:

3. Hitting the mycompany-internal proxy using the host header:

curl -v http://api.mycompany.com/v1/internal -H 'host: internal.mycompany.com'
>
< HTTP/1.1 403 Forbidden
< Date: Fri, 17 Jun 2016 03:58:02 GMT
< Content-Type: application/json
< Content-Length: 122
< Connection: keep-alive
< Server: Apigee Router
<
* Connection #0 to host api.mycompany.com left intact
{"fault":{"faultstring":"Access Denied for client ip : 10.10.10.1","detail":{"errorcode":"accesscontrol.IPDeniedAccess"}}}

Even in this case, as you can see, the internal proxy is protected by the IP-Whitelist policy, since the allowed IP is something only the other proxy knows.

I hope you enjoyed this simple demonstration showing how to protect your internal APIs using Apigee Edge. Please find the sample API proxies attached. I'd love to hear your feedback on this approach. Useful?

apigee-internal.zip

apigee-public.zip

Comments
WILLIT51
New Member

Great stuff, Arghya! This is very helpful!

When configuring the vhost, would you typically configure this as HTTPS or HTTP? In general I want all of my API traffic to be over HTTPS, but I'm thinking HTTP might be sufficient in this case assuming that these requests would never actually leave the local host or at least not the data center. Is this a correct assumption, or does it go back out to the internet to come back in?

anilsr
Staff

@WILLIT51 ,

Great question. Next time, please feel free to post this as a new question for better visibility.

Regarding your query,

If you’re building any sort of web application, use HTTPS!

It doesn’t matter what sort of application or service you’re building, if it’s not using HTTPS, you are doing it wrong.

While network security matters, so does transit encryption!

If an attacker is able to gain access to any of your internal services all HTTP traffic will be intercepted and read, regardless of how ‘secure’ your network may be. This is a very bad thing. This is why HTTPS is critically important on both public AND private networks.

ozanseymen
Staff

Hi @arghya das

I have a feeling that I am misunderstanding something here so would be great if you can help me out.

We are assigning a non-existent domain name to the internal APIs in order to hide them from internal/external requests. However, as you demonstrated later on, there is actually a way to get to them using the Host header. So I am questioning why go to all the trouble of setting dummy domain names as host aliases in the first place.

Later on, the solution uses an IP address (the X-Forwarded-For header) to really block external access to those APIs - which seems like a perfect solution, but we know how X-Forwarded-For headers can be manipulated on the request or by proxy servers in the middle. So I am questioning whether IP whitelisting covers all attack points. Definitely interested to hear about your experiences.

How about using application-level security (rather than network-level) to solve this problem, e.g. API products granting OAuth tokens access to the public APIs only, but then using client credentials (or simply internal API keys) for inter-API communication?

Not applicable

Ideally, do enterprises have two sets of APIs - internal and public - or one set of APIs controlled by roles and permissions?

What is your recommendation ?

dchiesa1
Staff

This is an older question, still outstanding. Still interesting.

We are assigning a non-existent domain name to the internal APIs in order to hide them from internal/external requests. However, as you demonstrated later on, there is actually a way to get to them using the Host header. So I am questioning why go to all the trouble of setting dummy domain names as host aliases in the first place.

The non-existent domain name needs to be a secret. You can use whatever you like. The vhost requires a domain name, so you MUST supply one. That's why you use a domain name.

Later on, the solution uses an IP address (the X-Forwarded-For header) to really block external access to those APIs - which seems like a perfect solution, but we know how X-Forwarded-For headers can be manipulated on the request or by proxy servers in the middle. So I am questioning whether IP whitelisting covers all attack points. Definitely interested to hear about your experiences.

Yes, XFF can be manipulated. I think the AccessControl here was employed primarily as an illustration. The key to the security of the thing is the domain name.

How about using application-level security (rather than network-level) to solve this problem, e.g. API products granting OAuth tokens access to the public APIs only, but then using client credentials (or simply internal API keys) for inter-API communication?

Yes, that would work. You'd need to provision all those artifacts to make your option work. As an alternative, you could use a 2-way TLS vhost which has an empty truststore, or a truststore with no root CA in it. That would prevent any client request getting through the router/vhost layer to the MP/apiproxy. But in my opinion it's extra effort, not needed.
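For completeness, a two-way-TLS virtualhost of the kind described above might be sketched like this; the keystore/truststore reference names are placeholders, and the trick is that the truststore trusts no CA, so no client certificate can ever validate:

```xml
<!-- Sketch: mTLS vhost whose empty truststore rejects every client cert -->
<VirtualHost name="internal-mtls">
    <HostAliases>
        <HostAlias>internal.mycompany.com</HostAlias>
    </HostAliases>
    <Port>443</Port>
    <SSLInfo>
        <Enabled>true</Enabled>
        <!-- require a client certificate... -->
        <ClientAuthEnabled>true</ClientAuthEnabled>
        <KeyStore>ref://internal-keystore-ref</KeyStore>
        <KeyAlias>internal-key</KeyAlias>
        <!-- ...validated against a truststore containing no trusted CA -->
        <TrustStore>ref://empty-truststore-ref</TrustStore>
    </SSLInfo>
</VirtualHost>
```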

Version history
Last update:
‎06-16-2016 09:01 PM