Named pipes efficient asynchronous design (Windows)


If the server must handle more than 64 events (reads/writes), any solution based on WaitForMultipleObjects becomes unfeasible, because that call is limited to MAXIMUM_WAIT_OBJECTS (64) handles. This is the reason Microsoft introduced I/O completion ports (IOCP) to Windows. An IOCP can handle a very high number of I/O operations using the most appropriate number of threads (usually the number of processors/cores).

The problem with IOCP is that it is very difficult to implement correctly. Hidden issues are spread like mines in a field: [1], [2] (section 3.6). I would recommend using a framework. A little googling suggests Indy for Delphi developers; there may be others.

At this point I would disregard the requirement for named pipes if that means coding my own IOCP implementation. It's not worth the grief.


I think what you're overlooking is that you only need a few listening named pipe instances at any given time. Once a pipe instance has connected, you can spin that instance off and create a new listening instance to replace it.

With MAXIMUM_WAIT_OBJECTS (or fewer) listening named pipe instances, you can have a single thread dedicated to listening using WaitForMultipleObjectsEx. The same thread can also handle the rest of the I/O using ReadFileEx and WriteFileEx with APCs. The worker threads would queue APCs to the I/O thread to initiate I/O, and the I/O thread can use the task pool to return the results (as well as to let the worker threads know about new connections).

The I/O thread main function would look something like this:

    DWORD index, result, err, byte_count;

    create_events();

    for (index = 0; index < MAXIMUM_WAIT_OBJECTS; index++)
        new_pipe_instance(index);

    for (;;)
    {
        if (service_stopping && active_instances == 0) break;

        /* alertable wait, so APCs queued by the worker threads get to run */
        result = WaitForMultipleObjectsEx(MAXIMUM_WAIT_OBJECTS, connect_events,
                         FALSE, INFINITE, TRUE);

        if (result == WAIT_IO_COMPLETION)
        {
            /* an APC ran; nothing more to do here */
            continue;
        }
        else if (result >= WAIT_OBJECT_0 &&
                 result < WAIT_OBJECT_0 + MAXIMUM_WAIT_OBJECTS)
        {
            /* a connect event fired: a client connected to that instance */
            index = result - WAIT_OBJECT_0;
            ResetEvent(connect_events[index]);

            if (GetOverlappedResult(
                    connect_handles[index], &connect_overlapped[index],
                    &byte_count, FALSE))
            {
                err = ERROR_SUCCESS;
            }
            else
            {
                err = GetLastError();
            }

            connect_pipe_completion(index, err);
            continue;
        }
        else
        {
            fail();
        }
    }

The only real complication is how ConnectNamedPipe reports its result: if the client connected between CreateNamedPipe and ConnectNamedPipe, the call fails and GetLastError returns ERROR_PIPE_CONNECTED, which actually means it succeeded immediately; any error other than ERROR_IO_PENDING means it failed immediately. In either case you need to reset the event and then handle the connection (or the failure):

    void new_pipe(ULONG_PTR dwParam)
    {
        DWORD index = dwParam;
        DWORD err;

        connect_handles[index] = CreateNamedPipe(
            pipe_name,
            PIPE_ACCESS_DUPLEX | FILE_FLAG_OVERLAPPED,
            PIPE_TYPE_MESSAGE | PIPE_WAIT | PIPE_ACCEPT_REMOTE_CLIENTS,
            MAX_INSTANCES,
            512,
            512,
            0,
            NULL);

        if (connect_handles[index] == INVALID_HANDLE_VALUE) fail();

        ZeroMemory(&connect_overlapped[index], sizeof(OVERLAPPED));
        connect_overlapped[index].hEvent = connect_events[index];

        if (ConnectNamedPipe(connect_handles[index], &connect_overlapped[index]))
        {
            /* shouldn't happen for an overlapped pipe, but treat it as success */
            err = ERROR_SUCCESS;
        }
        else
        {
            err = GetLastError();
            /* guard against a failed call that left the last error as success */
            if (err == ERROR_SUCCESS) err = ERROR_INVALID_FUNCTION;
            /* client connected before we called ConnectNamedPipe: success */
            if (err == ERROR_PIPE_CONNECTED) err = ERROR_SUCCESS;
        }

        if (err != ERROR_IO_PENDING)
        {
            ResetEvent(connect_events[index]);
            connect_pipe_completion(index, err);
        }
    }

The connect_pipe_completion function would create a new task in the task pool to handle the newly connected pipe instance, and then queue an APC to call new_pipe to create a new listening pipe at the same index.

It is possible to reuse existing pipe instances once they are closed but in this situation I don't think it's worth the hassle.