How to store the chat history in Django + Pusher? Is Tornado or Celery needed?



Should we use Tornado?

The underlying question is: would you benefit from Tornado's asynchronous capabilities for the requests coming from your clients? Do you have to wait for async results (like the result of a request to Pusher) to produce the HTTP response?

If you do (that could be feedback you want to send to a client sending a message), then the Tornado web server will let you handle other requests while waiting for the needed resource to be fetched asynchronously.

I repeated "async" a lot because it's really the core benefit you get from Tornado: if you don't have, or don't need, non-blocking async resources to produce a response, Tornado will just behave like any other blocking web server.
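Here is a minimal sketch of that benefit using plain asyncio rather than Tornado itself, so it stays self-contained: two simulated requests each spend 0.2 s waiting on a slow upstream (think Pusher), but because the wait is non-blocking they finish in roughly 0.2 s total rather than 0.4 s.

```python
import asyncio
import time

# Stand-in for a non-blocking call to an external service (e.g. Pusher):
# the coroutine yields control while "waiting", so other requests proceed.
async def handle_request(name: str) -> str:
    await asyncio.sleep(0.2)  # simulated network wait
    return f"response for {name}"

async def main() -> float:
    start = time.monotonic()
    # Two requests "in flight" at once.
    results = await asyncio.gather(handle_request("a"), handle_request("b"))
    assert results == ["response for a", "response for b"]
    return time.monotonic() - start

elapsed = asyncio.run(main())
print(f"handled 2 requests in {elapsed:.2f}s")
```

With a blocking server, the two 0.2 s waits would add up; here they overlap, which is exactly the situation where Tornado pays off.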

Can we use Tornado with Django?

Sure, you can use Django's ORM, forms, templates and other parts of the stack from a Tornado app. It goes off the beaten track of Django's documentation, but you can find articles on a Tornado + Django stack on the web.
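The usual wrinkle is that Django ORM calls are blocking, so from an async handler you push them onto a thread pool. A sketch of the pattern, where the hypothetical `fetch_history` stands in for a blocking ORM query (something like `Message.objects.filter(room=room)[:50]`):

```python
import asyncio
import time

# Hypothetical stand-in for a blocking Django ORM query; any
# synchronous function works the same way here.
def fetch_history(room: str) -> list:
    time.sleep(0.1)  # the ORM call blocks on the database
    return [f"{room}: hello", f"{room}: world"]

async def get_history(room: str) -> list:
    loop = asyncio.get_running_loop()
    # Run the blocking call in a worker thread so the event loop
    # (and therefore async request handling) stays responsive.
    return await loop.run_in_executor(None, fetch_history, room)

history = asyncio.run(get_history("lobby"))
print(history)
```

This keeps one slow query from stalling every other request on the loop.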

Tornado and Pusher

What the Tornado channel in the Pusher library uses is Tornado's asynchronous HTTP client.

That's one example of a non-blocking asynchronous resource you can use.

Will Celery help?

Celery will let you enqueue and schedule jobs asynchronously: push messages to Pusher or to your persistence backend, run a long-running search, schedule periodic pushes of stats.

It can be used as a non-blocking async resource with Tornado too; see tornado-celery.
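To keep the example self-contained, here is the enqueue-and-return pattern Celery gives you, simulated with a stdlib queue and a worker thread. With real Celery, the worker would be a separate process consuming from a broker, and the job body would be an `@app.task`-decorated function.

```python
import queue
import threading

# In-memory stand-in for a broker-backed job queue.
jobs = queue.Queue()
stored = []  # stand-in for the persistence backend

def worker() -> None:
    while True:
        job = jobs.get()
        if job is None:  # sentinel: shut the worker down
            break
        channel, message = job
        # Here a real task would push to Pusher and persist the message.
        stored.append((channel, message))
        jobs.task_done()

t = threading.Thread(target=worker)
t.start()

# The web request handler only enqueues and returns immediately;
# the slow work happens in the background.
jobs.put(("room-1", "hello"))
jobs.put(("room-1", "world"))
jobs.put(None)
t.join()
print(stored)
```

The request/response path never waits on Pusher or the database; it only pays the cost of an enqueue.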

What you could try, for example, is multiplexing jobs to minimize network round trips where you can, but that's premature optimization :D

Persistence: Postgres, Redis

What you'll probably have to worry about is partitioning and replication for scalability, to distribute the load across several instances of your persistence backend.

Redis is often said to scale well in that way; I'd try it, but that's personal opinion and eagerness rather than experience and benchmarking.
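For chat history specifically, one common Redis pattern is a capped list per room: `LPUSH chat:<room> <message>` followed by `LTRIM chat:<room> 0 99`. Here is a sketch that simulates the pattern in memory with `deque(maxlen=...)`, which drops the oldest entries the way LTRIM would; the key names and cap size are illustrative choices, not prescriptions.

```python
from collections import deque

# In-memory simulation of the Redis capped-list pattern:
#   LPUSH chat:<room> <message>   then   LTRIM chat:<room> 0 99
MAX_HISTORY = 100

histories = {}  # room name -> capped history

def store_message(room: str, message: str) -> None:
    # deque(maxlen=...) silently evicts the oldest entry, like LTRIM.
    histories.setdefault(room, deque(maxlen=MAX_HISTORY)).append(message)

def recent_messages(room: str, count: int = 50) -> list:
    history = histories.get(room, deque())
    return list(history)[-count:]  # the newest `count` messages

for i in range(150):
    store_message("lobby", f"msg {i}")

print(len(histories["lobby"]))      # capped at 100
print(recent_messages("lobby", 3))  # ['msg 147', 'msg 148', 'msg 149']
```

The cap keeps per-room memory bounded no matter how chatty a room gets; full archival, if you need it, would go to Postgres via a background job.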

Hope that helps :)