Invalid session errors at the client end

The following error is seen at the client:

Error: Transaction API_BP_COTS failed for Time:2015-06-29 11:49:04|Vuser:19|ItNo:126|Parameters:Account-000|Error:Invalid Session

The following errors are seen in the Router logs:

2015-06-29 11:21:03,212 org: env: Apigee-Timer-0 ERROR CONNECTION-REAPER - ConnectionReaper$ReaperTask.run() : Exception while running reap task 
java.util.ConcurrentModificationException: null 
at java.util.AbstractList$Itr.checkForComodification(AbstractList.java:372) ~[na:1.6.0_45] 
at java.util.AbstractList$Itr.next(AbstractList.java:343) ~[na:1.6.0_45] 
at com.apigee.proxy.container.netty.ConnectionReaper$ReaperTask.safeRun(ConnectionReaper.java:69) ~[message-router-api-1.0.0.jar:na] 
at com.apigee.proxy.container.netty.ConnectionReaper$ReaperTask.run(ConnectionReaper.java:57) ~[message-router-api-1.0.0.jar:na] 
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) [na:1.6.0_45] 
at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317) [na:1.6.0_45] 
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150) [na:1.6.0_45] 
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98) [na:1.6.0_45] 
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180) [na:1.6.0_45] 
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204) [na:1.6.0_45] 
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) [na:1.6.0_45] 
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) [na:1.6.0_45] 
at java.lang.Thread.run(Thread.java:662) [na:1.6.0_45] 
2015-06-29 11:23:03,261 org: env: Apigee-Timer-3 ERROR CONNECTION-REAPER - ConnectionReaper$ReaperTask.run() : Exception while running reap task 
java.util.ConcurrentModificationException: null 
at java.util.AbstractList$Itr.checkForComodification(AbstractList.java:372) ~[na:1.6.0_45] 
at java.util.AbstractList$Itr.next(AbstractList.java:343) ~[na:1.6.0_45] 
at com.apigee.proxy.container.netty.ConnectionReaper$ReaperTask.safeRun(ConnectionReaper.java:69) ~[message-router-api-1.0.0.jar:na] 
at com.apigee.proxy.container.netty.ConnectionReaper$ReaperTask.run(ConnectionReaper.java:57) ~[message-router-api-1.0.0.jar:na] 
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) [na:1.6.0_45] 
at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317) [na:1.6.0_45] 
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150) [na:1.6.0_45] 
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98) [na:1.6.0_45] 
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180) [na:1.6.0_45] 
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204) [na:1.6.0_45] 
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) [na:1.6.0_45] 
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) [na:1.6.0_45] 
at java.lang.Thread.run(Thread.java:662) [na:1.6.0_45] 
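For background (this sketch is mine, not part of the original thread, and is not Apigee code): a ConcurrentModificationException like the one in the trace above is thrown by Java's fail-fast iterators when a list is structurally modified while it is being iterated, for example when a connection is closed by another thread while the reaper task is walking the connection list. A minimal, self-contained reproduction:

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;

public class CmeDemo {
    // Returns true if mutating a list while iterating it throws CME.
    static boolean reapWhileIterating() {
        List<String> connections = new ArrayList<>();
        for (int i = 0; i < 5; i++) connections.add("conn-" + i);
        try {
            for (String c : connections) {
                // Structural modification from inside the loop (analogous to a
                // connection being removed while the reaper task iterates).
                if (c.endsWith("2")) connections.remove(c);
            }
        } catch (ConcurrentModificationException e) {
            return true; // the fail-fast check in AbstractList$Itr fired
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(reapWhileIterating());
    }
}
```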

This environment was recently upgraded to version: 4.15.01. The previous versions did not have this issue.

2 ACCEPTED SOLUTIONS

This issue is usually resolved by adding HTTPServer.streaming.buffer.limit=0 to the router.properties file and restarting both Routers.
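For reference, the change would look roughly like this in router.properties (the comment is my interpretation of the discussion, not an official description of the property):

```
# router.properties (apply on each Router node, then restart the Router)
# 0 removes the streaming buffer limit; per the discussion below, an
# unbounded buffer can grow and risk OOM if one endpoint is slow.
HTTPServer.streaming.buffer.limit=0
```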


The cause is that either the client in front of Apigee or the server behind Apigee is slow relative to the other end. If the two ends are not roughly in sync while processing a large payload, Apigee has to buffer that payload. With the buffer limit set to 0, a payload that is not consumed fast enough allows the buffer to grow, and this can eventually lead to an out-of-memory (OOM) situation. At that point, the slow endpoint (or endpoints) must be fixed to resolve the issue.

The fix is included in 4.15.04.03; the release notes at http://apigee.com/docs/release-notes/content/4150403-apigee-edge-private-cloud-release-notes have this information.
