runs a specified Node.js script in a daemonized cluster (using Node's cluster module). By default, it spawns as many worker processes as there are CPUs. A worker need not know anything about being run under cluster: its stdout and stderr are intercepted and written to log files.
Failing workers are respawned, unless they exit cleanly or fail to run at all. The daemon terminates when no running workers are left.
The daemon opens a controlling socket, through which commands can be issued to the running master process. An accompanying command-line utility is provided for this.
Command line options
- -h, --help: display help and exit
- --daemon: daemonize (use --no-daemon to stay in foreground); default: true
- -v, --loglevel: one of debug, info, notice, warning, error, crit, alert, emerg; set log level; default: info
- -l, --logfile String: log file; default: ./node-daemon.log
- -e, --errorlog String: error log; default: ./node-daemon.err
- -p, --pidfile String: PID file; default: ./node-daemon.pid
- -s, --socket String: controlling socket; default: ./node-daemon.socket
- -n, --workers Int: how many workers to fork; defaults to the number of CPUs
- -w, --worker String: path to worker; default: ./worker
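The options above might be combined as follows (illustrative invocation; the executable name is an assumption based on the default file names):

```shell
# Run in the foreground with 4 workers and verbose logging
node-daemon --no-daemon -n 4 -w ./worker.js -v debug -l ./app.log
```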
The daemon also reacts to Unix signals:
- SIGTERM: stops the daemon gracefully.
- SIGHUP: stops all workers and then respawns them. Can be used if the worker file was changed. Note: this does not restart the daemon itself. If the worker file contains errors, it will fail to spawn, and the master process will exit.
TODO
- Implement logging to syslog