
Is a non-blocking, single-threaded, asynchronous web server (like Node.js) possible in .NET?


The whole SetSynchronizationContext thing is a red herring; it is just a marshalling mechanism, and the work still happens in the IO Thread Pool.

What you are asking for is a way to queue and harvest Asynchronous Procedure Calls for all your IO work from the main thread. Many higher-level frameworks wrap this kind of functionality; the most famous one is libevent.

There is a great recap of the various options here: What's the difference between epoll, poll, threadpool?.

.NET already takes care of scaling for you by having a special "IO Thread Pool" that handles IO access when you call the BeginXYZ methods. This IO Thread Pool must have at least 1 thread per processor on the box. See: ThreadPool.SetMaxThreads.
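A quick way to see the two pools side by side (the numbers printed are machine-specific; this is just an inspection sketch, not tuning advice):

    using System;
    using System.Threading;

    class PoolInfo
    {
        static void Main()
        {
            int workerThreads, completionPortThreads;

            // The second value is the IO completion pool that the BeginXYZ
            // methods dispatch their callbacks on.
            ThreadPool.GetMaxThreads(out workerThreads, out completionPortThreads);
            Console.WriteLine("Max worker: {0}, max IO completion: {1}",
                workerThreads, completionPortThreads);

            ThreadPool.GetMinThreads(out workerThreads, out completionPortThreads);
            Console.WriteLine("Min worker: {0}, min IO completion: {1}",
                workerThreads, completionPortThreads);
        }
    }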

If a single-threaded app is a critical requirement (for some crazy reason) you could, of course, interop all of this stuff in yourself using DllImport (see an example here).

However, it would be a very complex and risky task:

Why don't we support APCs as a completion mechanism? APCs are really not a good general-purpose completion mechanism for user code. Managing the reentrancy introduced by APCs is nearly impossible; any time you block on a lock, for example, some arbitrary I/O completion might take over your thread. It might try to acquire locks of its own, which may introduce lock ordering problems and thus deadlock. Preventing this requires meticulous design, and the ability to make sure that someone else's code will never run during your alertable wait, and vice-versa. This greatly limits the usefulness of APCs.

So, to recap: if you want a single-threaded managed process that does all its work using APCs and completion ports, you are going to have to hand-code it. Building it would be risky and tricky.

If you simply want high-scale networking, you can keep using BeginXYZ and family and rest assured that it will perform well, since it uses APC. You pay a minor price for marshalling stuff between threads and for .NET's particular implementation.
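For reference, a minimal sketch of the BeginXYZ/EndXYZ (APM) shape being described, using FileStream.BeginRead ("example.txt" is just a placeholder path; with useAsync: true the callback fires on an IO completion thread rather than the caller's thread):

    using System;
    using System.IO;
    using System.Threading;

    class ApmSketch
    {
        static void Main()
        {
            var buffer = new byte[4096];
            var done = new ManualResetEvent(false);

            // useAsync: true asks for overlapped IO under the covers.
            var fs = new FileStream("example.txt", FileMode.Open, FileAccess.Read,
                FileShare.Read, 4096, useAsync: true);

            // BeginRead returns immediately; the callback runs on an
            // IO completion (thread pool) thread when the data arrives.
            fs.BeginRead(buffer, 0, buffer.Length, ar =>
            {
                int read = fs.EndRead(ar);
                Console.WriteLine("Read {0} bytes on thread {1}",
                    read, Thread.CurrentThread.ManagedThreadId);
                fs.Dispose();
                done.Set();
            }, null);

            done.WaitOne();
        }
    }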

From: http://msdn.microsoft.com/en-us/magazine/cc300760.aspx

The next step in scaling up the server is to use asynchronous I/O. Asynchronous I/O alleviates the need to create and manage threads. This leads to much simpler code and also is a more efficient I/O model. Asynchronous I/O utilizes callbacks to handle incoming data and connections, which means there are no lists to set up and scan and there is no need to create new worker threads to deal with the pending I/O.

An interesting side fact is that single-threaded is not the fastest way to do async sockets on Windows using completion ports; see: http://doc.sch130.nsc.ru/www.sysinternals.com/ntw2k/info/comport.shtml

The goal of a server is to incur as few context switches as possible by having its threads avoid unnecessary blocking, while at the same time maximizing parallelism by using multiple threads. The ideal is for there to be a thread actively servicing a client request on every processor and for those threads not to block if there are additional requests waiting when they complete a request. For this to work correctly however, there must be a way for the application to activate another thread when one processing a client request blocks on I/O (like when it reads from a file as part of the processing).


What you need is a "message loop" that takes the next task from a queue and executes it. Additionally, every task needs to be coded so that it completes as much work as possible without blocking, and then enqueues additional tasks to pick up any work that needs more time later. There is nothing magical about this: never use a blocking call and never spawn additional threads.

For example, when processing an HTTP GET, the server can read as much data as is currently available on the socket. If this is not enough data to handle the request, then enqueue a new task to read from the socket again in the future. In the case of a FileStream, you want to set the ReadTimeout on the instance to a low value and be prepared to read fewer bytes than the entire file.
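A minimal sketch of such a loop (the Tasks queue, Pump, and HandleRequest names are all made up for illustration; real code would read whatever is available from a socket instead of faking the byte counts):

    using System;
    using System.Collections.Generic;

    class MessageLoop
    {
        // The queue of pending, non-blocking work items.
        static readonly Queue<Action> Tasks = new Queue<Action>();

        static void Enqueue(Action task) { Tasks.Enqueue(task); }

        static void Main()
        {
            // Seed the loop with some initial work.
            Enqueue(() => HandleRequest(0));
            Pump();
        }

        // The "message loop": take the next task and run it. Nothing here
        // ever blocks or spawns a thread; long work re-enqueues itself.
        static void Pump()
        {
            while (Tasks.Count > 0)
            {
                var task = Tasks.Dequeue();
                task();
            }
        }

        static void HandleRequest(int bytesSoFar)
        {
            // Pretend we read whatever was immediately available.
            int bytesRead = bytesSoFar + 512;
            if (bytesRead < 2048)
            {
                // Not enough data yet: enqueue a continuation, don't block.
                Enqueue(() => HandleRequest(bytesRead));
            }
            else
            {
                Console.WriteLine("Request complete: {0} bytes", bytesRead);
            }
        }
    }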

C# 5 actually makes this pattern much simpler. Many people think that the async functionality implies multithreading, but that is not the case. Using async, you can essentially get the task queue I mentioned earlier without ever explicitly managing it.
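A rough sketch of that idea (SingleThreadedPump and Run are invented names, not a framework API): a custom SynchronizationContext queues the continuations that await posts, and a loop on the calling thread drains them, so everything runs on one thread.

    using System;
    using System.Collections.Concurrent;
    using System.Threading;
    using System.Threading.Tasks;

    class SingleThreadedPump : SynchronizationContext
    {
        private readonly BlockingCollection<Tuple<SendOrPostCallback, object>> _queue =
            new BlockingCollection<Tuple<SendOrPostCallback, object>>();

        // await captures this context and posts its continuation here
        // instead of scheduling it on the thread pool.
        public override void Post(SendOrPostCallback d, object state)
        {
            _queue.Add(Tuple.Create(d, state));
        }

        public void Run(Func<Task> entryPoint)
        {
            var previous = Current;
            SetSynchronizationContext(this);
            try
            {
                var task = entryPoint();

                // When the async work finishes, stop feeding the loop.
                task.ContinueWith(_ => _queue.CompleteAdding(), TaskScheduler.Default);

                // The single-threaded "message loop": run queued continuations.
                foreach (var item in _queue.GetConsumingEnumerable())
                    item.Item1(item.Item2);

                task.GetAwaiter().GetResult(); // re-throw any exception
            }
            finally
            {
                SetSynchronizationContext(previous);
            }
        }
    }

    class Program
    {
        static void Main()
        {
            new SingleThreadedPump().Run(async () =>
            {
                Console.WriteLine("before await on thread " +
                    Thread.CurrentThread.ManagedThreadId);
                await Task.Delay(100);
                // Runs on the same thread that called Run, not a pool thread.
                Console.WriteLine("after await on thread " +
                    Thread.CurrentThread.ManagedThreadId);
            });
        }
    }

This is essentially the same trick a UI message pump uses; installing the context is what keeps the continuations off the thread pool.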


Yes, it's called Manos de Mono.

Seriously, the entire idea behind Manos is a single-threaded, asynchronous, event-driven web server.

High performance and scalable. Modeled after tornadoweb, the technology that powers friend feed, Manos is capable of thousands of simultaneous connections, ideal for applications that create persistent connections with the server.

The project appears to be low on maintenance and probably wouldn't be production-ready, but it makes a good case study as a demonstration that this is possible.