jetty + gzip + heroku returns gzipped data without Content-Encoding header


I have a very basic test app that supports gzip encoding: https://ecd0-hello.herokuapp.com/

I also have a pass-through proxy with no policies here: http://ecd-test.apigee.net/hello

The result is that my response is gzipped as expected, but the "Content-Encoding: gzip" header is dropped, causing it to be malformed.

I'm not sure what exactly is going on here. It works via Apigee if I don't use heroku, don't enable gzip on the origin, or use tomcat instead of jetty. The origin seems to be working properly in all cases.

Anyone have any hints?

Solved
1 ACCEPTED SOLUTION

Not applicable

Hi Eric,

Yes, your origin is working fine, but we have a known issue for your situation. Here's your response headers.

< HTTP/1.1 200 OK
< Connection: close
< Date: Fri, 01 May 2015 18:21:10 GMT
< Content-Type: application/json;charset=UTF-8
< Content-Encoding: gzip
< Vary: Accept-Encoding, User-Agent
* Server Jetty(9.2.9.v20150224) is not blacklisted
< Server: Jetty(9.2.9.v20150224)
< Via: 1.1 vegur

Notice that there is no Content-Length or Transfer-Encoding header, but there is a Connection: close header. The HTTP/1.1 spec says this is fine, i.e. "...the transfer-length of the body may be determined by the server closing the connection..." However, we do not handle this situation well at the moment; we do have a fix for it in our backlog.

As a workaround, you could:

  • Update your back-end to send either the Transfer-Encoding or Content-Length header.
  • You may be able to add an AssignMessage policy in the response flow that adds the Content-Encoding: gzip header to the response from the proxy (based upon whether the Accept-Encoding header was passed into the proxy by the client).

Hope this helps.

-Dave


5 REPLIES


Has this been fixed in one of the OPDK releases? We just ran into this with one of our APIs. We're on version 4.15.07.00.

Hello,

I am running into the same problem on on-prem version 4.17.05.01.

For example when calling http://petstore.swagger.io/v2/swagger.json

Thanks,

Joris

This could be a bug. I also see that the "Content-Encoding: gzip" header is being dropped in the response from Apigee.

But if you don't specify the "Accept-Encoding" header in your request, it works fine. (see the attached screenshot).


I think tools like Postman add a default "Accept-Encoding: gzip, deflate" header, and that creates the problem. I used curl on your endpoint and it works fine.


I would not recommend adding the gzip header to the response by the way. This does not work in this instance.

Basically, if you have a gzip response coming back from the target API and the 'Content-Encoding: gzip' header is stripped in this connection-close case, then you still have gzipped content coming back but no header to signify it.

If you add the header in Apigee on the response, the system is a bit too clever in this situation. It believes that you want to compress the response, so it compresses the already-compressed stream. A receiving application cannot handle this automatically: it unpacks the stream once, expects JSON or XML, and still gets a binary gzipped response.
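Both failure modes can be reproduced offline. This is a small standard-library-only sketch of what the client ends up seeing; the payload is made up for illustration.

```python
import gzip
import json

# What the origin sends: JSON, gzip-compressed once.
payload = json.dumps({"status": "ok"}).encode("utf-8")
body = gzip.compress(payload)

# Case 1: Content-Encoding header stripped. The client treats the
# bytes as plain JSON text, and parsing fails.
try:
    json.loads(body)
    parsed_raw = True
except (UnicodeDecodeError, json.JSONDecodeError):
    parsed_raw = False  # gzip bytes are not valid JSON text

# Case 2: header re-added in the proxy, so the proxy compresses the
# already-compressed body a second time.
double = gzip.compress(body)
unpacked_once = gzip.decompress(double)  # client decompresses exactly once...
# ...and is left holding gzip bytes (magic number 1f 8b), not JSON.
still_gzipped = unpacked_once[:2] == b"\x1f\x8b"
```

Decompressing a second time would recover the JSON, but no standard HTTP client does that automatically.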

The only ways that we found to work around this are:

1. Set the target not to gzip even if the client supports it.

2. If an Accept-Encoding: gzip header is on the request, remove it and note that in a flow variable, do uncompressed communication with the target API, and then add the Content-Encoding: gzip header to the response if the client requested it. This will then compress the stream to the client.

3. Change the target API so that it does not close the connection immediately and sends a Content-Length or uses Transfer-Encoding: chunked.
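Option 2 could be sketched as a pair of AssignMessage policies. The policy and flow-variable names here are made up; the element syntax (AssignVariable, Remove, Set, AssignTo) follows standard Apigee policy conventions, and the step conditions are assumptions about how you'd wire this up.

```xml
<!-- Request flow, with a step condition such as:
     request.header.Accept-Encoding ~~ ".*gzip.*" -->
<AssignMessage name="AM-StripAcceptEncoding">
  <AssignVariable>
    <!-- Remember that the client asked for gzip -->
    <Name>flow.client.wantsGzip</Name>
    <Value>true</Value>
  </AssignVariable>
  <Remove>
    <Headers>
      <!-- Force the target to respond uncompressed -->
      <Header name="Accept-Encoding"/>
    </Headers>
  </Remove>
  <AssignTo createNew="false" transport="http" type="request"/>
</AssignMessage>

<!-- Response flow, with a step condition such as:
     flow.client.wantsGzip = "true" -->
<AssignMessage name="AM-CompressResponse">
  <Set>
    <Headers>
      <!-- Setting this header makes Apigee compress the outbound stream -->
      <Header name="Content-Encoding">gzip</Header>
    </Headers>
  </Set>
  <AssignTo createNew="false" transport="http" type="response"/>
</AssignMessage>
```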