How to deploy rabbitmq with flask and docker?



Here is how I did it:

  1. Flask server
```python
from flask import Flask
import pika
import uuid
import threading

app = Flask(__name__)
queue = {}

class FibonacciRpcClient(object):
    def __init__(self):
        self.connection = pika.BlockingConnection(
            pika.ConnectionParameters(host='rabbit'))
        self.channel = self.connection.channel()
        result = self.channel.queue_declare('', exclusive=True)
        self.callback_queue = result.method.queue
        self.channel.basic_consume(
            queue=self.callback_queue,
            on_message_callback=self.on_response,
            auto_ack=True)

    def on_response(self, ch, method, props, body):
        if self.corr_id == props.correlation_id:
            self.response = body

    def call(self, n):
        self.response = None
        self.corr_id = str(uuid.uuid4())
        queue[self.corr_id] = None
        self.channel.basic_publish(
            exchange='',
            routing_key='rpc_queue',
            properties=pika.BasicProperties(
                reply_to=self.callback_queue,
                correlation_id=self.corr_id,
            ),
            body=str(n))
        while self.response is None:
            self.connection.process_data_events()
        queue[self.corr_id] = self.response
        print(self.response)
        return int(self.response)

@app.route("/calculate/<payload>")
def calculate(payload):
    n = int(payload)
    fibonacci_rpc = FibonacciRpcClient()
    threading.Thread(target=fibonacci_rpc.call, args=(n,)).start()
    return "sent " + payload

@app.route("/results")
def send_results():
    return str(queue.items())
```
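The client matches replies to requests by correlation id. That filtering logic in `on_response` can be sketched in isolation; the `FakeProps` stub below is hypothetical, standing in for the properties pika attaches to a delivered message:

```python
import uuid

class FakeProps:
    """Hypothetical stand-in for pika.BasicProperties on a delivered message."""
    def __init__(self, correlation_id):
        self.correlation_id = correlation_id

class RpcClientSketch:
    """Only the correlation-id bookkeeping from FibonacciRpcClient."""
    def __init__(self):
        self.corr_id = str(uuid.uuid4())
        self.response = None

    def on_response(self, ch, method, props, body):
        # Ignore replies that belong to other in-flight requests.
        if self.corr_id == props.correlation_id:
            self.response = body

client = RpcClientSketch()
client.on_response(None, None, FakeProps("someone-else"), b"21")
assert client.response is None      # a foreign reply is dropped
client.on_response(None, None, FakeProps(client.corr_id), b"55")
assert client.response == b"55"     # our own reply is kept
```

Because the client consumes from an exclusive, auto-named callback queue, this check mostly guards against stale replies from earlier requests on the same client.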
  2. Worker
```python
import pika

connection = pika.BlockingConnection(
    pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='rpc_queue')

def fib(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fib(n - 1) + fib(n - 2)

def on_request(ch, method, props, body):
    n = int(body)
    print(" [.] fib(%s)" % n)
    response = fib(n)
    print(" [.] calculated (%s)" % response)
    ch.basic_publish(exchange='',
                     routing_key=props.reply_to,
                     properties=pika.BasicProperties(
                         correlation_id=props.correlation_id),
                     body=str(response))
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_qos(prefetch_count=1)
channel.basic_consume(queue='rpc_queue', on_message_callback=on_request)
print(" [x] Awaiting RPC requests")
channel.start_consuming()
```

Both of the above are based on the RabbitMQ tutorial on RPC.
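One caveat: the recursive `fib` in the worker recomputes the same subproblems over and over, so its running time grows exponentially with `n`. A memoized sketch using only the standard library, which could stand in for the worker's `fib`:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Same recurrence as the worker's fib, but each value is computed once.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(30))  # -> 832040, instantly; the naive version makes ~2.7M calls
```

For the tutorial's small inputs the naive version is fine; this matters once a client sends a large `n`.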

  3. Dockerfile
```dockerfile
FROM python:3
RUN mkdir code
ADD flask_server.py requirements.txt /code/
WORKDIR /code
RUN pip install -r requirements.txt
ENV FLASK_APP flask_server.py
EXPOSE 5000
CMD ["flask", "run", "-h", "0.0.0.0"]
```
  4. docker-compose.yml
```yaml
services:
    web:
        build: .
        ports:
            - "5000:5000"
        links:
            - rabbit
        volumes:
            - .:/code
    rabbit:
        hostname: rabbit
        image: rabbitmq:latest
        ports:
            - "5672:5672"
```

Run `docker-compose up`, and the Flask server should start communicating with the RabbitMQ server.


There are many ways to write the RabbitMQ server, the worker and the Dockerfile.
The first answer shows good examples of them.

I'll just emphasize that the RabbitMQ server might not be ready yet when the worker (the web service in your case) tries to reach it.
To handle that, I suggest writing the docker-compose.yml file like this:

```yaml
version: "3"
services:
  web:
    build: .
    ports:
      - "5000:5000"
    restart: on-failure
    depends_on:
      - rabbit
    volumes:
      - .:/code
  rabbit:
    image: rabbitmq:latest
    expose:
      - 5672
    healthcheck:
      test: [ "CMD", "nc", "-z", "localhost", "5672" ]
      interval: 3s
      timeout: 10s
      retries: 3
```

So, what did I do here?

1) I've added the depends_on and the restart properties in the web service and the healthcheck property in the rabbit service.
Now the web service will restart itself until the rabbit service becomes healthy.
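`restart: on-failure` restarts the whole container on a failed connection; an alternative is to retry inside the process. A generic sketch under my own naming (`connect_with_retries` is not part of pika; in the web service the factory would be something like `lambda: pika.BlockingConnection(pika.ConnectionParameters(host='rabbit'))`):

```python
import time

def connect_with_retries(factory, attempts=5, delay=2.0):
    """Call factory() until it succeeds or attempts run out."""
    last_error = None
    for attempt in range(1, attempts + 1):
        try:
            return factory()
        except Exception as exc:  # with pika this would be AMQPConnectionError
            last_error = exc
            print("attempt %d failed: %s" % (attempt, exc))
            time.sleep(delay)
    raise last_error

# Demo with a flaky factory that fails twice before succeeding.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("broker not ready")
    return "connected"

print(connect_with_retries(flaky, attempts=5, delay=0.01))  # connected
```

Either approach works; the in-process retry just avoids paying the container restart cost on every attempt.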

2) In the rabbit service I used the expose property instead of ports, because in your case port 5672 needs to be shared between the containers but doesn't need to be published to the host.

From the Expose docs:

Expose ports without publishing them to the host machine - they’ll only be accessible to linked services. Only the internal port can be specified.

3) I removed the links property because (taken from here):

Links are not required to enable services to communicate - by default, any service can reach any other service at that service’s name.


This video explains everything with code: https://youtu.be/ZxVpsClqjdw

and it uses a docker-compose.yml like this:

```yaml
version: '3'
services:
  redis:
    image: redis:latest
    hostname: redis
  rabbit:
    hostname: rabbit
    image: rabbitmq:latest
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=mypass
  web:
    build:
      context: .
      dockerfile: Dockerfile
    hostname: web
    command: ./scripts/run_web.sh
    volumes:
      - .:/app
    ports:
      - "5000:5000"
    links:
      - rabbit
      - redis
  worker:
    build:
      context: .
      dockerfile: Dockerfile
    command: ./scripts/run_celery.sh
    volumes:
      - .:/app
    links:
      - rabbit
      - redis
    depends_on:
      - rabbit
```

and to make the connection, use:

```python
BROKER_URL = 'amqp://admin:mypass@rabbit//'
CELERY = Celery('tasks', backend=REDIS_URL, broker=BROKER_URL)
```
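A minimal sketch of building those URLs from environment variables instead of hard-coding credentials. The variable names (`RABBIT_USER` and so on) are my own convention, not from the video, and the `REDIS_URL` default assumes the redis service from the compose file above:

```python
import os

# Hypothetical variable names; the defaults mirror the compose file above.
user = os.environ.get("RABBIT_USER", "admin")
password = os.environ.get("RABBIT_PASS", "mypass")
host = os.environ.get("RABBIT_HOST", "rabbit")

BROKER_URL = "amqp://%s:%s@%s//" % (user, password, host)
REDIS_URL = os.environ.get("REDIS_URL", "redis://redis:6379/0")

print(BROKER_URL)
```

This way the same image runs unchanged in compose and elsewhere, and the credentials stay out of the source.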

For further explanation, see https://medium.com/swlh/dockerized-flask-celery-rabbitmq-redis-application-f317825a03b