Node.js server restarts


Hi.

I'm a beginner in Node.js.

My task is to have a proxy that makes a recurring API call every X hours, plus the ability to manage these schedulers via an API.
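Simplified, the idea is something like this sketch (illustrative only, not my actual code; the endpoint names are made up):

```javascript
// Sketch: recurring upstream calls driven by setInterval,
// managed through a tiny HTTP API.
var http = require('http');
var url = require('url');

var timers = {}; // scheduler name -> interval handle

function startTimer(name, intervalMs, target) {
  stopTimer(name); // replace an existing scheduler with the same name
  timers[name] = setInterval(function () {
    http.get(target, function (res) {
      console.log(name + ': upstream answered ' + res.statusCode);
      res.resume(); // drain the body so the socket is released
    }).on('error', function (err) {
      console.error(name + ': ' + err.message);
    });
  }, intervalMs);
}

function stopTimer(name) {
  if (timers[name]) {
    clearInterval(timers[name]);
    delete timers[name];
  }
}

// Management API:
//   GET /start?name=a&interval=3600000&target=http://example.com/ping
//   GET /stop?name=a
http.createServer(function (req, res) {
  var parts = url.parse(req.url, true);
  if (parts.pathname === '/start') {
    startTimer(parts.query.name, parseInt(parts.query.interval, 10), parts.query.target);
    res.end('started ' + parts.query.name + '\n');
  } else if (parts.pathname === '/stop') {
    stopTimer(parts.query.name);
    res.end('stopped ' + parts.query.name + '\n');
  } else {
    res.statusCode = 404;
    res.end('unknown\n');
  }
}).listen(process.env.PORT || 8080);
```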

I created a Node.js project (here is the GitHub link) that uses the setInterval function for this, but I'm facing some problems:

1. Apigee runs 2 servers at the same time. Here is a log:

[2015-04-08T20:37:18.782Z nodejs/stderr svr.332]***Starting script

[2015-04-08T22:18:53.886Z nodejs/stderr svr.341]***Starting script

So when I call the API, I never know which server actually receives the call.

I found that both servers use the same cache, so I keep a "Timers" object in the cache as a global variable (see the sketch after this list).

2. Once every 2-3 days, Apigee restarts the server even though I have active timers; this clears the custom cache (line 7) and stops the emitter.

I haven't found any errors in the log, or any other explanation of why the server restarts.

I tried removing the cache clearing at server startup, but found that the second server also restarts unexpectedly.
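For reference, this is roughly how I read and write the shared "Timers" state (a sketch; I believe the apigee-access module exposes the distributed cache like this, with the TTL in seconds as the third argument to put):

```javascript
// Sketch: shared "Timers" state via apigee-access.
// Both MPs see the same cache, which is why it behaves like a global.
var apigee = require('apigee-access');
var cache = apigee.getCache('Timers');

function saveTimers(state, callback) {
  cache.put('timers', JSON.stringify(state), 3600, callback);
}

function loadTimers(callback) {
  cache.get('timers', function (err, data) {
    if (err || !data) { return callback(err, {}); }
    callback(null, JSON.parse(data));
  });
}
```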

Is server restarting normal for Apigee?

How can I find the root of the problem?

1 ACCEPTED SOLUTION

When you deploy any API in Apigee, it is deployed to each Message Processor (a component of Apigee that executes policies). You are probably on a free org, so you see 2 servers for the 2 MPs.

All the symptoms you see are expected. The servers running on the MPs are supposed to be stateless, and hence there is no way to target a particular Node server.

Any kind of synchronization you need must be managed externally. So the timer and any other data you need have to be stored in a persistent store, not in the cache. The Node.js servers can restart for various reasons that are not under our control. Apigee is horizontally scalable, and hence instances can be added or removed without notice. Hence the need for external persistence and synchronization.
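For example, a common pattern is a short-lived lease in the external store so that only one instance fires a given tick; conceptually (a sketch only: store and its putIfAbsent are placeholders for whatever persistent service you pick, not an Apigee API):

```javascript
// Conceptual sketch: elect one instance per tick via a lease.
// `store` is a placeholder client for an external persistent store.
var myInstanceId = process.pid + '@' + require('os').hostname();

function fireIfLeader(timerName, tickId, doWork) {
  // putIfAbsent-style write: succeeds for exactly one instance per tick.
  store.putIfAbsent('lease:' + timerName + ':' + tickId, myInstanceId,
      { ttlSeconds: 60 }, function (err, acquired) {
    if (err) { return console.error(err); }
    if (acquired) { doWork(); } // only the lease holder makes the call
    // the losing instance simply skips this tick
  });
}
```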

Usually the restart events are logged, and you will see them in system.log (on an on-prem instance).

Let me know if you have any more questions.


4 REPLIES


OK, so unexpected server restarts are normal. Cool!

Actually, I can store the next-tick timestamp in the timer object and set new timers each time the server restarts to keep the emitters going.
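Something like this (a sketch; loadTimers() is the cache helper from my question above, fire() is a hypothetical helper that makes the actual upstream call, and each stored entry is assumed to look like { nextTick, intervalMs, target }):

```javascript
// Sketch: rebuild timers after a restart from stored next-tick timestamps.
function restoreTimer(name, t) {
  var delay = Math.max(t.nextTick - Date.now(), 0); // time left until next tick
  setTimeout(function () {
    fire(name, t); // catch up with the pending tick...
    timers[name] = setInterval(function () {
      fire(name, t); // ...then resume the normal cadence
    }, t.intervalMs);
  }, delay);
}

loadTimers(function (err, state) {
  if (err) { return console.error(err); }
  Object.keys(state).forEach(function (name) {
    restoreTimer(name, state[name]);
  });
});
```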

Is it possible to have only 1 Message Processor/server on a free subscription?

I see that the servers restart synchronously, so both of them may set new timers at the same time...

Can you explain why a custom cache is a bad idea for the timer store?

Or does the cache also clear unexpectedly?

You would always want more MPs in a free subscription, to ensure high availability. There is no way to increase or decrease the number of MPs, though.

Synchronized restarts are only a coincidence. Node.js or host restarts do not happen unnecessarily; there is usually an exception somewhere, or manual intervention is involved. They can happen on just one host/Node.js server.

A custom cache is fine as long as you can live with a transient data store. The cache lifetime is controlled by the settings on the cache.
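For example, an entry written with an explicit TTL (the third argument to put, in seconds, assuming the apigee-access cache) will expire on its own:

```javascript
// This entry disappears after an hour, restart or not; pick the TTL
// to match how transient your data can afford to be.
cache.put('timers', JSON.stringify(state), 3600, function (err) {
  if (err) { console.error('cache put failed: ' + err); }
});
```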

I created an API that automatically brings up all the timers in case the server restarts.

The time shift when the server reloads is about 40-50 ms.

The limitation is when both MPs reload (in that case only one timer survives, because the second server reads the cache while the first server has set only its first timer...).

GitHub updated.