
ZeroMQ multithreading: create sockets on-demand or use sockets object pool?


You can't really ask a question about performance without providing real figures for your estimated throughput. Are we talking about 10 requests per second, 100, 1,000, 10K?

If the HTTP server is really creating and destroying threads for each request, then creating 0MQ sockets repeatedly will stress the OS; depending on the volume of requests and your process limits, it'll either work or run out of handles. You can test this trivially, and that's a first step.
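A trivial throwaway test along these lines (a sketch assuming the JeroMQ binding; the endpoint and iteration count are placeholders) will tell you quickly whether per-request socket churn is viable at your volumes:

import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class SocketChurnTest {
    public static void main(String[] args) {
        try (ZContext ctx = new ZContext()) {
            int iterations = 10_000;                      // stand-in for your expected request volume
            long start = System.currentTimeMillis();
            for (int i = 0; i < iterations; i++) {
                ZMQ.Socket pub = ctx.createSocket(SocketType.PUB);
                pub.connect("tcp://localhost:5556");      // hypothetical endpoint
                pub.send("test");
                ctx.destroySocket(pub);                   // releases the handle back to the OS
            }
            long elapsed = System.currentTimeMillis() - start;
            System.out.printf("%d create/connect/send/destroy cycles in %d ms%n", iterations, elapsed);
        }
    }
}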

Then, sharing a pool of sockets (which is what you mean by "ZMQ publisher") is nasty. People do this, but sockets are not thread-safe, so it means being very careful whenever you hand a socket over to another thread.

If there is a way to keep the threads persistent, then each one can create its own PUB socket if it needs to, and hold onto it for as long as it exists. If not, then my first design would create/destroy sockets anyhow, but use inproc:// to send messages to a single permanent forwarder thread (a SUB-PUB proxy). I'd test this, and then if it breaks, go for more exotic designs.
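A rough sketch of that forwarder design, assuming the JeroMQ binding; the inproc endpoint name, external port and socket types are illustrative, not prescribed:

import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class PubForwarderSketch {
    public static void main(String[] args) {
        final ZContext ctx = new ZContext();

        // Single permanent forwarder thread: a SUB bound on inproc:// collects from the
        // short-lived PUB sockets and re-publishes everything on one external PUB socket.
        Thread forwarder = new Thread(() -> {
            ZMQ.Socket frontend = ctx.createSocket(SocketType.SUB);
            frontend.subscribe("".getBytes());            // subscribe to everything
            frontend.bind("inproc://pub-forwarder");      // hypothetical internal endpoint
            ZMQ.Socket backend = ctx.createSocket(SocketType.PUB);
            backend.bind("tcp://*:5556");                 // hypothetical external endpoint
            ZMQ.proxy(frontend, backend, null);           // blocks until the context is closed
        });
        forwarder.start();

        // Any short-lived thread can then do this; note the usual PUB "slow joiner" caveat
        // (messages sent immediately after connect may be dropped), so test at your volumes.
        Runnable requestHandler = () -> {
            ZMQ.Socket pub = ctx.createSocket(SocketType.PUB);
            pub.connect("inproc://pub-forwarder");        // the forwarder should be bound first
            pub.send("event from " + Thread.currentThread().getName());
            ctx.destroySocket(pub);
        };
        new Thread(requestHandler).start();
    }
}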

In general it's better to make the simplest design and break it, than to over-think the design process (especially when starting out).


It sounds like premature optimization to me too, and if at all possible, you should stick with the first strategy and save yourself the headaches.

But as an alternative to your second option, you could perhaps maintain an Executor thread pool inside your application to do the actual zmq sending. This way each executor thread can keep its own socket. You can listen to application/servlet life cycle events to know when to shut down the pool and clean up the sockets.

EDIT:

The simplest way to do this is to create the Executor using Executors.newFixedThreadPool() and feed it Runnable jobs that use a ThreadLocal socket (see Java Executors and per-thread (not per-work unit) objects?). The threads will be created only once and reused from then on until the Executor is shut down.
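A minimal sketch of that arrangement, again assuming the JeroMQ binding; the pool size, endpoint and class name are made up for illustration:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class ZmqSenderPool {
    private final ZContext ctx = new ZContext();
    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    // One socket per pool thread, created lazily the first time that thread runs a job
    // and reused for every job that lands on the same thread afterwards.
    private final ThreadLocal<ZMQ.Socket> socket = ThreadLocal.withInitial(() -> {
        ZMQ.Socket pub = ctx.createSocket(SocketType.PUB);
        pub.connect("tcp://localhost:5556");          // hypothetical endpoint
        return pub;
    });

    public void send(final String message) {
        pool.execute(() -> socket.get().send(message));
    }

    // Call this from a servlet/application shutdown listener.
    public void shutdown() throws InterruptedException {
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);   // let in-flight sends finish
        ctx.close();                                  // destroys the context and every socket created from it
    }
}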

This gets a little tricky when an exception is thrown in the job's run() method. I suspect you'll find you need a little bit more control over the executor threads' lifecycle. If so, you can copy the source for newFixedThreadPool:

return new ThreadPoolExecutor(nThreads, nThreads,
                              0L, TimeUnit.MILLISECONDS,
                              new LinkedBlockingQueue<Runnable>());

and subclass the ThreadPoolExecutor that gets instantiated to customize it. This way you could, for example, override afterExecute to detect and clean up broken sockets.
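A sketch of such a subclass; the ThreadLocal field and the "broken socket" handling are assumptions about how you'd wire it up, not a prescribed recipe:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

import org.zeromq.ZMQ;

public class ZmqAwareExecutor extends ThreadPoolExecutor {

    // The same ThreadLocal the Runnable jobs use to obtain their per-thread socket.
    private final ThreadLocal<ZMQ.Socket> socket;

    public ZmqAwareExecutor(int nThreads, ThreadLocal<ZMQ.Socket> socket) {
        // Same construction as Executors.newFixedThreadPool(nThreads).
        super(nThreads, nThreads, 0L, TimeUnit.MILLISECONDS,
              new LinkedBlockingQueue<Runnable>());
        this.socket = socket;
    }

    @Override
    protected void afterExecute(Runnable r, Throwable t) {
        super.afterExecute(r, t);
        // Runs on the worker thread after every job; if the job blew up, assume its
        // socket may be unusable, close it and let the ThreadLocal recreate it lazily.
        // (Note: with submit() the exception is captured in the Future and t is null here.)
        if (t != null) {
            ZMQ.Socket s = socket.get();
            if (s != null) {
                s.close();
                socket.remove();
            }
        }
    }
}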

The send jobs get transferred to the worker threads through a blocking queue. I realise this is not the ZeroMQ way to hand messages off to worker threads, which would be inproc messaging; instead it moves ZeroMQ away from the HTTP worker threads (whose lifecycle is out of your control, and in which sockets are therefore hard to maintain) towards the edge of the application. You'd simply have to test which of the two is more efficient, and make a judgement call on how rigorously you want your application to adopt the ZeroMQ messaging paradigm for inter-thread communication.