Can't create static Redis connection in Java Callout

Hi there.

I have a requirement to develop a JWT-based session component, where the token contains only a session ID and the session data resides in Redis. I have to retrieve this data on every request to add it to the request payload, and also persist new session data based on the backend response.
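For concreteness, pulling the payload out of such a token needs no library. This is a minimal sketch, assuming the session ID lives in a hypothetical "sid" claim; in a real proxy the signature would be verified first (e.g. by Apigee's VerifyJWT policy), which is omitted here:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtPayload {
    // Decode the payload segment of a compact JWT (header.payload.signature).
    // NOTE: this does NOT verify the signature; it only extracts the claims JSON.
    public static String decodePayload(String jwt) {
        String[] parts = jwt.split("\\.");
        if (parts.length < 2) {
            throw new IllegalArgumentException("not a compact JWT");
        }
        byte[] json = Base64.getUrlDecoder().decode(parts[1]);
        return new String(json, StandardCharsets.UTF_8);
    }
}
```

The decoded JSON would then be parsed for the session-ID claim ("sid" here is an assumption; the actual claim name depends on how the token is issued).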

To do that, I chose to use a Shared Flow with a Java Callout that connects to Redis, sets this data on a context variable (for the request), and writes some context variables back to the Redis instance (for the response).

I tried to use two Java Redis clients (Jedis and Lettuce, the latter being the more recommended one because it is thread-safe without connection pooling), but I can't seem to instantiate this connection statically (in a class attribute initializer or even a Java static initializer block).

Every time I try one of those options, I get an error at deploy time (The revision is deployed, but traffic cannot flow). I also tried creating the connection in a @PostConstruct method but, as the documentation notes, the class isn't instantiated on every call, so sometimes I get a NullPointerException because the connection was never initialized.
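One pattern that at least moves the work out of deploy-time class loading is to defer connection creation to the first execute() call, via an atomic lazy holder. This is only a sketch with a generic factory standing in for the Jedis/Lettuce constructor, and Apigee's permission restrictions may still reject the client library the first time the factory actually runs:

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// Defers expensive setup (e.g. opening a Redis connection) until first use,
// so nothing runs in a static initializer at class-load/deploy time.
public class LazyConnectionHolder<T> {
    private final AtomicReference<T> ref = new AtomicReference<>();
    private final Supplier<T> factory;

    public LazyConnectionHolder(Supplier<T> factory) {
        this.factory = factory;
    }

    public T get() {
        T existing = ref.get();
        if (existing != null) return existing;
        // Two threads may race here and both invoke the factory;
        // compareAndSet guarantees only one instance is ever published.
        T created = factory.get();
        return ref.compareAndSet(null, created) ? created : ref.get();
    }
}
```

In the callout this would be used as something like `private static final LazyConnectionHolder<Jedis> CONN = new LazyConnectionHolder<>(() -> new Jedis(host, port));` with `CONN.get()` called inside execute() — the field initializer itself is now cheap and cannot fail at deploy.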

What could be causing this traffic cannot flow error? I've seen @Dino say a couple of times that in most cases it's recommended (for separation of concerns) to create an application layer for this, but, if possible, I'd rather not develop an application layer because of the added latency (an extra network hop). Should I anyway?

1 ACCEPTED SOLUTION

I don't know the specific reason that you get "The revision is deployed, but traffic cannot flow."

But often, when you see this error with a proxy that includes a Java callout, it means the Java class cannot be instantiated. Apigee restricts what a Java class can do; for example a customer-provided Java class cannot read the local filesystem, and a Java class cannot perform reflection. There are many other restrictions.

I don't know Jedis or Lettuce, but it seems likely that these client libraries are trying to do something that violates the permission restrictions in Apigee.

There is no way to work around this, sadly. You cannot relax the restrictions in Apigee SaaS, and you cannot ask Apigee support to relax those restrictions for you. If you have Apigee OPDK, you can change the permissions on your own Apigee; check the docs. The rest of my remarks apply to either OPDK or Apigee SaaS. (Even if you CAN unrestrict the permissions on a Java callout, I don't think you SHOULD do that).

Last - I don't understand the JWT + Session ID + Redis architecture, precisely. But, maybe consider whether you REALLY need the API Proxy to somehow persist something like a session ID into Redis directly, and update it more or less continuously, from within an API Proxy.

"the session data resides in Redis. I have to retrieve this data on every request to add it to the request payload, and also persist new session data based on the backend response."

Maintaining a session from within the API Proxy is a violation of the REST model, and also it is outside the norm of what we have designed Apigee to do. It sounds like you're trying to smush application requirements into the API Proxy layer. Maybe you're doing it wrong. Is there a chance to re-consider your approach?

If you don't wish to re-consider, then I suggest that you stand up a separate application to handle the work - something that you control fully. Google App Engine is a good host for Java applications. Your App Engine instance can run in the same datacenter as your Apigee proxy, which means that, while you might suspect a remote service will incur significant additional latency, that may not be the case. Google has really solid networks in the datacenters, and in some cases the network hop costs you 1-2ms. This number is not zero, but it is probably dwarfed by any I/O that is performed by the API Proxy.
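As a sketch of what such a separate session service could look like, here is a minimal HTTP endpoint with a ConcurrentHashMap standing in for the real Redis instance, and a made-up /session/{id} path; the proxy would then call it with a ServiceCallout policy. This is an illustration of the shape of the service, not a production implementation:

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ConcurrentHashMap;

public class SessionService {
    // In-memory stand-in for the Redis instance (hypothetical).
    static final ConcurrentHashMap<String, String> store = new ConcurrentHashMap<>();

    public static HttpServer start(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/session/", exchange -> {
            String id = exchange.getRequestURI().getPath().substring("/session/".length());
            if ("PUT".equals(exchange.getRequestMethod())) {
                // Persist session data sent by the proxy after the backend response.
                String body = new String(exchange.getRequestBody().readAllBytes(),
                                         StandardCharsets.UTF_8);
                store.put(id, body);
                respond(exchange, 204, "");
            } else {
                // Look up session data to enrich the request payload.
                String data = store.get(id);
                if (data == null) respond(exchange, 404, "");
                else respond(exchange, 200, data);
            }
        });
        server.start();
        return server;
    }

    static void respond(HttpExchange ex, int status, String body) throws IOException {
        byte[] b = body.getBytes(StandardCharsets.UTF_8);
        ex.sendResponseHeaders(status, b.length == 0 ? -1 : b.length);
        try (OutputStream os = ex.getResponseBody()) { os.write(b); }
    }
}
```

A real version would replace the map with a Jedis/Lettuce client, and the service would own all connection-pooling concerns, keeping them out of the proxy entirely.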

These two suggestions - reconsider the plan, or stand up a separate service to connect to Redis, might amount to the same thing.


3 REPLIES


I agree with everything you presented about design, separation of concerns, and the REST model; it just turns out that these are security requirements (avoiding traffic of sensitive data between client and server, even over HTTPS). The Apigee instance I'm working on is new; we are setting it up as OPDK at one of the top-20 financial services companies / banks worldwide, with very high TPS, and I'm really concerned that any undesired delay will break my gateway and consequently make the entire bank unavailable.

Would creating an application layer and deploying it on bare metal, in the same network/backbone, as close as possible to the Apigee on-premises deployment, solve my problem satisfactorily?

Thank you!

Yes - an application in the same network will be the best option. Of course, we cannot say what the performance of such a system will be until you test it; empirical evaluation of performance is the only sure thing. But we can say that this would be optimal.