Running Heroku background tasks with only 1 web dyno and 0 worker dynos

Modify Procfile to look like this:

web: bin/web

Now create the bin directory, and create the file bin/web to look like this:

#!/bin/bash
python app.py &
python worker.py

Make sure you give this file the executable permission:

$ chmod +x bin/web
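
Note that this script backgrounds app.py and keeps worker.py in the foreground, so the dyno's lifetime follows the worker. If you prefer a Python entry point, here is a minimal sketch of the same idea — the two `-c` commands are placeholders standing in for your actual `worker.py` and `app.py`:

```python
import subprocess
import sys

# Placeholders: on a real dyno these would be your actual scripts,
# e.g. [sys.executable, "worker.py"] and [sys.executable, "app.py"].
worker_cmd = [sys.executable, "-c", "print('worker running')"]
web_cmd = [sys.executable, "-c", "print('web running')"]

worker = subprocess.Popen(worker_cmd)   # background process
web = subprocess.Popen(web_cmd)         # foreground process
web.wait()          # the dyno lives as long as this process
worker.terminate()  # stop the background process when the web exits
```
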


I am currently running both my web server and my backend scheduler on Heroku using only 1 dyno.

The idea is to give Heroku one main Python script to start in the dyno. This script starts both the web server process(es) and the custom scheduler process(es). You can then define your jobs and add them to the custom scheduler.

APScheduler is used in my case.

This is what I did:

in Procfile:

 web: python run_app.py

in run_app.py (the main startup script):

# All the required imports
import os
from apscheduler.executors.pool import ThreadPoolExecutor, ProcessPoolExecutor
from apscheduler.triggers.cron import CronTrigger
from apscheduler.schedulers.background import BackgroundScheduler
from run_housekeeping import run_housekeeping

def run_web_script():
    # Start the gunicorn server with a custom configuration.
    # You could also use app.run() if you want Flask's built-in server -- be careful about the port.
    os.system('gunicorn -c gunicorn.conf.py web.jobboard:app --debug')

def start_scheduler():
    # Define a background scheduler.
    # Attention: you cannot use a blocking scheduler here, as that would block
    # the script from proceeding to start the web server.
    scheduler = BackgroundScheduler()

    # Define your job trigger
    housekeeping_trigger = CronTrigger(hour='12', minute='30')

    # Add your job
    scheduler.add_job(func=run_housekeeping, trigger=housekeeping_trigger)

    # Start the scheduler
    scheduler.start()

def run():
    start_scheduler()
    run_web_script()

if __name__ == '__main__':
    run()

I am also using 4 worker processes to serve the web app from Gunicorn -- which runs perfectly fine.

In gunicorn.conf.py:

loglevel = 'info'
errorlog = '-'
accesslog = '-'
workers = 4

You may want to check out this project as an example: Zjobs@Github


You could use a process manager such as god or monit.

With god, you can set up your configuration like so:

God.watch do |w|
  w.name = "app"
  w.start = "python app.py"
  w.keepalive
end

God.watch do |w|
  w.name = "worker"
  w.start = "python worker.py"
  w.keepalive
end

Then you put this in your Procfile

web: god -c path/to/config.god -D

By default, it automatically restarts the process if it crashes, and you can configure it to restart the app if memory usage gets too high. Check out the documentation.
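
If you want to see what the keepalive behaviour amounts to, here is a hypothetical toy version of it in Python — god itself layers memory/CPU conditions, logging, and daemonization on top of this loop:

```python
import subprocess
import time

def keep_alive(cmd, max_restarts=3, backoff=0.1):
    """Restart cmd whenever it exits, up to max_restarts times.

    Toy supervisor for illustration only; a real process manager such
    as god also watches memory/CPU usage and handles signals properly.
    """
    for _ in range(max_restarts):
        proc = subprocess.Popen(cmd)
        proc.wait()           # block until the process exits
        time.sleep(backoff)   # brief back-off before restarting
```

`keep_alive(["python", "worker.py"])` would then stand in for the second `God.watch` block above.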