
How does an asynchronous socket server work?


One thread per connection is bad design (not scalable, overly complex) but unfortunately way too common.

A socket server works more or less like this:

  • A listening socket is set up to accept connections and added to a socket set
  • The socket set is checked for events
  • If the listening socket has pending connections, new sockets are created by accepting the connections, and then added to the socket set
  • If a connected socket has events, the relevant IO functions are called
  • The socket set is checked for events again

This all happens in one thread. You can easily handle thousands of connected sockets in a single thread, and there are few valid reasons for making this more complex by introducing threads.

    while running
        select on socketset
        for each socket with events
            if socket is listener
                accept new connected socket
                add new socket to socketset
            else if socket is connection
                if event is readable
                    read data
                    process data
                else if event is writable
                    write queued data
                else if event is closed connection
                    remove socket from socketset
                end
            end
        done
    done
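A runnable version of that loop might look like the following single-threaded echo server, sketched here in Python with the standard selectors module; the port number and the echo behaviour are illustrative assumptions, not part of the original answer:

    import selectors
    import socket

    sel = selectors.DefaultSelector()

    # Set up the listening socket and add it to the "socket set".
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("0.0.0.0", 8080))  # illustrative address/port
    listener.listen()
    listener.setblocking(False)
    sel.register(listener, selectors.EVENT_READ)

    while True:
        # Block until one or more sockets in the set have events.
        for key, events in sel.select():
            sock = key.fileobj
            if sock is listener:
                # The listener is readable: accept the pending connection
                # and add the new socket to the set.
                conn, addr = listener.accept()
                conn.setblocking(False)
                sel.register(conn, selectors.EVENT_READ)
            else:
                # A connected socket is readable (or the peer closed).
                data = sock.recv(4096)
                if data:
                    sock.sendall(data)  # "process data": just echo it back
                else:
                    # An empty read means the connection was closed.
                    sel.unregister(sock)
                    sock.close()

A production server would also register for EVENT_WRITE and queue outgoing data rather than calling sendall directly, matching the "write queued data" branch of the pseudocode, but this shows the one-thread structure.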

The IP stack takes care of all the details of which packets go to which "socket" in what order. Seen from the application's point of view, a socket represents a reliable ordered byte stream (TCP) or an unreliable unordered sequence of packets (UDP).
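For illustration, this is how the two flavours look at the socket API level (Python used here as an example; the original answer doesn't name a language):

    import socket

    # TCP: a reliable, ordered byte stream between two endpoints.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    # UDP: individual datagrams, with no delivery or ordering guarantees.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)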

EDIT: In response to updated question.

I don't know either of the libraries you mention, but regarding the concepts:

  • A session cache typically keeps data associated with a client, and can reuse this data across multiple connections. This makes sense when your application logic requires state information, but it sits a layer above the actual networking. In the above sample, the session cache would be used by the "process data" part (see the session-cache sketch after this list).
  • Buffer pools are also an easy and often effective optimization for a high-traffic server. The concept is very easy to implement: instead of allocating/deallocating space for the data you read/write, you fetch a preallocated buffer from a pool, use it, then return it to the pool. This avoids the (sometimes relatively expensive) backend allocation/deallocation mechanisms. Buffer pools are not specific to networking; you can just as well use them for, e.g., something that reads chunks of files and processes them (see the buffer-pool sketch after this list).
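To make the session-cache idea concrete, here is a hypothetical sketch; the SessionCache name and the keying by client id are my assumptions, and real libraries will differ:

    # Hypothetical session cache: state keyed per client, reusable
    # across multiple connections from that client.
    class SessionCache:
        def __init__(self):
            self._sessions = {}  # client id -> session state

        def get(self, client_id):
            # Create the session on first use, reuse it afterwards.
            return self._sessions.setdefault(client_id, {"messages_seen": 0})

    # In the "process data" step of the event loop:
    #     session = cache.get(client_id)
    #     session["messages_seen"] += 1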
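And a minimal buffer-pool sketch, again illustrative (the pool size and buffer size are arbitrary):

    from collections import deque

    class BufferPool:
        def __init__(self, count=64, size=4096):
            self._size = size
            self._free = deque(bytearray(size) for _ in range(count))

        def acquire(self):
            # Hand out a pooled buffer; fall back to allocating if empty.
            return self._free.popleft() if self._free else bytearray(self._size)

        def release(self, buf):
            # Return the buffer to the pool for reuse.
            self._free.append(buf)

In the read path you would call buf = pool.acquire(), fill it with sock.recv_into(buf), and pool.release(buf) when done, instead of allocating a fresh buffer per read.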


How off is my understanding?

Pretty far.

Does each client socket require its own thread to listen for data on?

No.

How is data routed to the correct client socket? Is this something taken care of by the guts of TCP/UDP/kernel?

TCP/IP is a number of layered protocols. There's no single "kernel" to it; it's separate pieces, each with its own API to the other pieces.

The IP address is handled in one place.

The port # is handled in another place.

The IP addresses are matched up with MAC addresses (via ARP) to identify a particular host on the local network. The port # is what ties a TCP (or UDP) socket to a particular piece of application software.
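As a small illustration of that last point (Python, with an arbitrary example port):

    import socket

    # Binding claims port 8080 on this host for this application; the
    # stack then delivers TCP segments addressed to that port to this socket.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("0.0.0.0", 8080))
    sock.listen()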

In this threaded environment, what kind of data is typically being shared, and what are the points of contention?

What threaded environment?

Data sharing? What?

Contention? The physical channel is the number one point of contention. (Ethernet, for example, depends on collision detection.) After that, well, every part of the computer system is a scarce resource shared by multiple applications, and each is a point of contention.