
Why would an exception cause resource leaks in Node.js?


Unexpected exceptions are the ones you need to worry about. If you don't know enough about the state of the app to add handling for a particular exception and manage any necessary state cleanup, then by definition the state of your app is undefined and unknowable, and it's quite possible that there are things hanging around that shouldn't be. It's not just memory leaks you have to worry about. Unknown application state can cause unpredictable and unwanted application behavior, like delivering output that's just wrong -- a partially rendered template, an incomplete calculation result, or worse, a condition where every subsequent output is wrong. That's why it's important to exit the process when an unhandled exception occurs: restarting is the only way to get your app back to a known-good state.

Exceptions happen, and that's fine. Embrace it. Shut down the process and use something like Forever to detect it and set things back on track. Clusters and domains are great, too. The text you were reading is not a caution against throwing exceptions, or continuing the process when you've handled an exception that you were expecting -- it's a caution against keeping the process running when unexpected exceptions occur.
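As a minimal sketch of that crash-and-restart approach (the supervisor -- Forever, pm2, systemd, whatever restarts the process -- is assumed to be external):

```javascript
// Sketch: on an unexpected exception, log what we can and get out.
// A supervisor like Forever is assumed to notice the non-zero exit
// code and restart the process in a clean, known state.
process.on('uncaughtException', function (err) {
  console.error('Uncaught exception, shutting down:', err.message);
  process.exit(1); // app state is undefined now, so don't keep running
});
```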


I think when they said "we are leaking resources", they really meant "we might be leaking resources". If http.createServer handles exceptions appropriately, threads and sockets shouldn't be leaked. However, they certainly could be if it doesn't handle things properly. In the general case, you never really know if something handles errors properly all the time.

I think they are wrong / very misleading when they said "By the .. nature of how throw works in JavaScript, there is almost never any way to safely ...". There is nothing about how throw works in JavaScript (vs. other languages) that makes it unsafe, and there is nothing about how throw/catch works in general that makes it unsafe -- unless, of course, you use them wrong.

What they should have said is that exceptional cases (regardless of whether or not exceptions are used) need to be handled appropriately. There are a few different categories to recognize:

A. State

  1. Exceptions that occur while external state (database writing, file output, etc) is in a transient state
  2. Exceptions that occur while shared memory is in a transient state
  3. Exceptions where only local variables might be in a transient state

B. Reversibility

  1. Reversible / revertible state (eg database rollbacks)
  2. Irreversible state (Lost data, unknown how to reverse, or prohibitive to reverse)

C. Data criticality

  1. Data can be scrapped
  2. Data must be used (even if corrupted)

Regardless of the type of state you're messing with, if you can reverse it, you should do that and you're set. The problem is irreversible state. If you can destroy the corrupted data (or quarantine it for separate inspection), that is the best move for irreversible state. This is done automatically for local variables when an exception is thrown, which is why exceptions excel at handling errors in purely functional code (i.e. functions with no possible side effects).

Likewise, any shared or external state should be deleted if that's acceptable. In the case of shared state, either throw exceptions until that shared state becomes local state and is cleaned up by the unwinding of the stack (either statically or via the GC), or restart the program (I've read people suggesting the use of something like nodejitsu forever). For external state, this is likely more complicated.
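A sketch of the reversible case: remember enough to revert, and roll back before re-throwing. (`updateSafely` and `validate` are hypothetical stand-ins for your own state mutations, not library APIs.)

```javascript
// Sketch of "reverse it if you can": apply a change to shared state,
// and on failure revert it before re-throwing.
function updateSafely(state, change) {
  var previous = state.value;      // remember enough to revert
  state.value = change(previous);  // mutate the shared state
  try {
    validate(state);               // anything in here may throw
  } catch (err) {
    state.value = previous;        // reversible state: roll it back
    throw err;                     // re-throw; the caller decides what's next
  }
  return state.value;
}

// Hypothetical check standing in for whatever can go wrong mid-update.
function validate(state) {
  if (typeof state.value !== 'number' || isNaN(state.value)) {
    throw new Error('corrupted value');
  }
}
```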

The last case is when the data is critical. Well, then you're going to have to live with the bugs you've created. Everyone has to deal with bugs, but it's the worst when your bugs involve corrupted data. This will usually require manual intervention (reconstructing the lost/damaged data, selectively pruning, etc.) -- exception handling won't get you the whole way in this last case.

I wrote a similar answer related to how to handle mid-operation failure in various cases in the context of multiple updates to some data storage: https://stackoverflow.com/a/28355495/122422


Taking the sample from the node.js documentation:

    var d = require('domain').create();
    d.on('error', function(er) {
      // The error won't crash the process, but what it does is worse!
      // Though we've prevented abrupt process restarting, we are leaking
      // resources like crazy if this ever happens.
      // This is no better than process.on('uncaughtException')!
      console.log('error, but oh well', er.message);
    });
    d.run(function() {
      require('http').createServer(function(req, res) {
        handleRequest(req, res);
      }).listen(PORT);
    });

In this case you are leaking connections when an exception occurs in handleRequest before you close the socket.

"Leaked" in the sense that the request handler bailed out without the socket ever being closed. Eventually the connection will time out and close the socket, but if your server is under high load it may run out of sockets before that happens.

Depending on what you do in handleRequest you may also be leaking file handles, database connections, event listeners, etc.

Ideally you should handle your exceptions so you can clean up after them.
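A sketch of what that cleanup could look like inside handleRequest. (`openDbConnection` and `doWork` are hypothetical stand-ins for whatever per-request resources and work your handler actually has.)

```javascript
// Hypothetical per-request resource, standing in for a real DB handle.
function openDbConnection() {
  return { closed: false, close: function () { this.closed = true; } };
}

// Hypothetical work that may throw partway through a request.
function doWork(req, db) {
  if (req.url === '/boom') throw new Error('handler blew up');
  return 'ok';
}

// Sketch: finish the response and release resources even when the
// work throws, so neither the socket nor the handle is leaked.
function handleRequest(req, res) {
  var db = openDbConnection();
  try {
    res.end(doWork(req, db));          // may throw mid-request
  } catch (err) {
    res.statusCode = 500;
    res.end('Internal Server Error');  // always finish the response
  } finally {
    db.close();                        // always release the handle
  }
  return db; // returned only so this sketch's caller can inspect it
}
```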