
Python - Using nonces with multithreading


It sounds like the requests library doesn't have any built-in support for sending requests asynchronously. From its documentation:

With the default Transport Adapter in place, Requests does not provide any kind of non-blocking IO. The Response.content property will block until the entire response has been downloaded. If you require more granularity, the streaming features of the library (see Streaming Requests) allow you to retrieve smaller quantities of the response at a time. However, these calls will still block.

If you are concerned about the use of blocking IO, there are lots of projects out there that combine Requests with one of Python’s asynchronicity frameworks. Two excellent examples are grequests and requests-futures.

I saw in a comment that you hesitate to add more dependencies, so the only suggestions I have are:

  • Add retry logic when your nonce is rejected (sketched after this list). This seems like the most Pythonic solution, and should work fine as long as the nonce isn't rejected very often.
  • Throttle the nonce generator (sketched below). Hold the timestamp used for the previous nonce, and sleep if it hasn't been long enough when the next nonce is requested.
  • Batch the messages (sketched below). If the protocol allows it, you may find that throughput actually goes up when you add a delay to wait for other messages and send them as a batch.
  • Change the server so the nonce values don't have to increase. If you control the server, making the messages independent of each other will give you a much more flexible protocol.
  • Use a session pool (sketched below). I'm guessing that the nonce values only have to increase within a single session. If you create a thread pool and have each thread open its own session, you could still get reasonable throughput without the timing problems you currently have.
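
Here's a rough sketch of the retry idea. The URL, the payload layout, and the "invalid nonce" check are all placeholders; substitute however your server actually signals a rejected nonce.

```python
import requests

MAX_RETRIES = 3

def post_with_nonce_retry(session, url, payload, make_nonce):
    """Retry the request with a fresh nonce if the server rejects the old one."""
    for attempt in range(MAX_RETRIES):
        payload["nonce"] = make_nonce()
        response = session.post(url, data=payload)
        # "invalid nonce" is a placeholder for however your server reports rejection.
        if response.status_code == 200 and "invalid nonce" not in response.text.lower():
            return response
    raise RuntimeError("nonce rejected %d times in a row" % MAX_RETRIES)
```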
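For throttling, something like the following could work. It assumes the nonce is a millisecond timestamp and that one millisecond between nonces is enough; adjust both to match your scheme.

```python
import threading
import time


class ThrottledNonceGenerator:
    """Hand out strictly increasing timestamp nonces, sleeping if asked too fast."""

    def __init__(self, min_interval=0.001):
        self._lock = threading.Lock()
        self._min_interval = min_interval  # minimum seconds between nonces
        self._last = 0.0

    def next_nonce(self):
        with self._lock:
            now = time.time()
            # If the previous nonce was issued too recently, wait it out so the
            # next timestamp is guaranteed to be larger. Holding the lock while
            # sleeping is intentional: it is what throttles the other threads.
            wait = self._last + self._min_interval - now
            if wait > 0:
                time.sleep(wait)
                now = time.time()
            self._last = now
            return int(now * 1000)  # millisecond nonce (assumed format)
```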
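Batching could look roughly like this, assuming your server accepts a list of messages under a single nonce (the batch endpoint and window length are made up for the example). You'd run batch_sender in a daemon thread and have the workers just put messages on the queue.

```python
import queue

BATCH_WINDOW = 0.05  # seconds to wait for more messages before sending (tuning assumption)
BATCH_URL = "https://example.com/api/batch"  # placeholder batch endpoint

outbox = queue.Queue()


def batch_sender(session, make_nonce):
    while True:
        # Block for the first message, then drain anything that arrives during
        # the batch window so several messages share one request and one nonce.
        batch = [outbox.get()]
        try:
            while True:
                batch.append(outbox.get(timeout=BATCH_WINDOW))
        except queue.Empty:
            pass
        session.post(BATCH_URL, json={"nonce": make_nonce(), "messages": batch})
```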
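And a session pool could be as simple as giving each worker thread its own requests.Session plus its own counter, on the assumption that nonces only need to increase per session. The endpoint and payload names here are hypothetical.

```python
import itertools
import threading
from concurrent.futures import ThreadPoolExecutor

import requests

API_URL = "https://example.com/api"  # placeholder endpoint

_thread_state = threading.local()


def _get_state():
    # Each worker thread lazily creates its own session and nonce counter,
    # so nonce ordering only has to hold within that one session.
    if not hasattr(_thread_state, "session"):
        _thread_state.session = requests.Session()
        _thread_state.nonce = itertools.count(1)
    return _thread_state


def send_message(message):
    state = _get_state()
    payload = {"nonce": next(state.nonce), "message": message}
    return state.session.post(API_URL, data=payload)


with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(send_message, ["hello", "world", "again"]))
```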

Obviously, you'd have to measure the performance results of making these changes.

Even if you do decide to add a dependency that lets you release the lock after sending the headers, you may still see occasional timing issues: the packets containing the headers could be delayed on their way to the server, so nonces might still arrive out of order.