How to start flyway after database initialization in Docker
Dockerize / wait-for-it.sh
For the errors:
Invalid argument: ./wait-for-it.sh
and
Invalid argument: dockerize
This is because the entrypoint for the Flyway container is the flyway executable, and the contents of the command you have specified are appended to the entrypoint as arguments. So, in effect, the container is running the following:
flyway dockerize ...
or
flyway wait-for-it.sh ...
Neither of these is a valid argument to the Flyway command line.
The entrypoint needs to be updated, as you have done in P.S.5. However, you have then hit the error:
"wait-for-it.sh": executable file not found in $PATH"
This is because wait-for-it.sh (and dockerize) are not available in the Flyway container.
You can either create a Dockerfile that extends the Flyway container, then ADD or COPY the scripts, e.g.:
```dockerfile
FROM boxfuse/flyway:latest

RUN mkdir /flyway/bin
ADD wait-for-it.sh /flyway/bin/wait-for-it.sh
RUN chmod 755 /flyway/bin/wait-for-it.sh
```
Or mount a volume that contains the script / executable:
```yaml
version: '3'
services:
  ...
  migration:
    image: boxfuse/flyway:latest
    container_name: flyway_migration
    volumes:
      - ../sql:/flyway/sql
      - ../bin:/flyway/bin
    entrypoint: ["/flyway/bin/dockerize", "-wait", "tcp://my_sql_db:3306", "-timeout", "15s", "--", "flyway"]
    ...
```
where the local directory ../bin contains dockerize (or wait-for-it.sh).
That should be enough to get dockerize / wait-for-it.sh working. However, both tools only check that a port is available and not that the database itself is actually ready to serve requests.
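Both tools ultimately implement the same "retry until success" loop, which you can also write by hand. A minimal sketch of the pattern; the `retry` function name and its arguments are invented here, not part of either tool:

```shell
#!/bin/bash
# Generic "retry until success" loop - the pattern that wait-for-it.sh,
# dockerize and Flyway's connectRetries all implement in some form.
# (Illustrative sketch; not a drop-in replacement for either tool.)

retry() {
  local attempts=$1 delay=$2
  shift 2
  local i
  for ((i = 1; i <= attempts; i++)); do
    if "$@"; then
      return 0                                  # command succeeded
    fi
    >&2 echo "attempt $i/$attempts failed - sleeping ${delay}s"
    sleep "$delay"
  done
  return 1                                      # gave up
}

# e.g. wait for the MySQL port using bash's built-in /dev/tcp, then migrate:
# retry 15 1 bash -c 'exec 3<>/dev/tcp/my_sql_db/3306' && flyway migrate
```

Note that, like wait-for-it.sh and dockerize, a bare TCP check in the commented example only proves the port is open, not that the database will accept queries.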
Compose v2.1
That said, using the docker-compose v2.1 depends_on: condition syntax might be a reasonable approach. As you have mentioned in the comments, that syntax has been removed in v3 and a lot of people are unhappy about it.
However, as one of the Docker developers says in a comment on that issue:
There's no reason to use the v3 format if you don't intend to use swarm services.
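For reference, the v2.1 syntax looks like this. The service names follow the examples above, and the mysqladmin healthcheck command is an assumption about how the database container is set up:

```yaml
version: '2.1'
services:
  my_sql_percona:
    image: percona:latest
    environment:
      MYSQL_ROOT_PASSWORD: password
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-ppassword"]
      interval: 5s
      timeout: 3s
      retries: 10
  migration:
    image: boxfuse/flyway:latest
    volumes:
      - ../sql:/flyway/sql
    command: -url=jdbc:mysql://my_sql_percona:3306/abhs -user=root -password=password migrate
    depends_on:
      my_sql_percona:
        condition: service_healthy
```

The advantage here is that the healthcheck lives with the database service, so every dependent container gets it for free via condition: service_healthy.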
Custom healthcheck script
Another approach is to extend the Flyway container to add a custom MySQL healthcheck script, similar to the Postgres one shown in the docker compose documentation:
```bash
#!/bin/bash
# wait-for-mysql.sh

set -e

host="$1"
shift
cmd="$@"

until MYSQL_PWD=$MYSQL_ROOT_PASSWORD /usr/bin/mysql --host="$host" --user="root" --execute "SHOW DATABASES;"; do
  >&2 echo "MySQL is unavailable - sleeping"
  sleep 1
done

>&2 echo "MySQL is up - executing command"
exec $cmd
```
Then create a Dockerfile to extend Flyway, install the MySQL client and add this script:
```dockerfile
FROM boxfuse/flyway:latest

RUN apt-get update && \
    apt-get install -y mysql-client && \
    mkdir /flyway/bin
ADD wait-for-mysql.sh /flyway/bin/wait-for-mysql.sh
RUN chmod 755 /flyway/bin/wait-for-mysql.sh
```
You can then use the custom Flyway image in your compose file:
```yaml
version: '3'
services:
  my_sql_percona:
    ...
  migration:
    build: ./flyway_mysql_client
    container_name: flyway_migration
    environment:
      MYSQL_ROOT_PASSWORD: password
    volumes:
      - ../sql:/flyway/sql
    entrypoint: ["bash", "/flyway/bin/wait-for-mysql.sh", "my_sql_percona", "--", "flyway"]
    command: -url=jdbc:mysql://my_sql_db:3306/abhs?useUnicode=true&characterEncoding=utf8&useSSL=false -user=root -password=password migrate
    depends_on:
      - my_sql_percona
```
The downside of this approach is that you would need to extend every container with a custom healthcheck script for each of its dependencies.
docker stack
The v2.1 depends_on: condition syntax seems to have been removed in favour of restart policies in v3. However, these are nested under the deploy section, which:
only takes effect when deploying to a swarm with docker stack deploy, and is ignored by docker-compose up and docker-compose run.
So, a further option is to ditch docker-compose and run on docker swarm, as follows:
Add an on-failure restart policy to the Flyway container:
```yaml
version: '3'
services:
  my_sql_percona:
    ...
  migration:
    image: boxfuse/flyway:latest
    ...
    depends_on:
      - my_sql_percona
    deploy:
      restart_policy:
        condition: on-failure
```
Create a swarm cluster (single-node in this case):
docker swarm init --advertise-addr <your-ip-address>
Deploy the services:
docker stack deploy --compose-file docker-compose.yml flyway_mysql
The Flyway container will then be restarted by swarm every time it exits with an error, until it eventually exits successfully.
Whilst this does seem to work, I'm not sure it is the best approach in this case. For instance, if the Flyway container exits because of an error in a migration script, swarm will continue to restart the container even though it will never succeed.
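If you do go this route, the restart loop can at least be bounded: the v3 restart_policy supports a max_attempts field (the values below are arbitrary):

```yaml
deploy:
  restart_policy:
    condition: on-failure
    delay: 5s
    max_attempts: 10
```

After max_attempts failures swarm gives up, so a permanently broken migration script will not be retried forever.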
Summary
I have created a repository with these five different approaches.
Personally, I think I would use the v2.1 approach as the healthcheck is kept with the database container itself and not duplicated in each container that depends on it. I don't need to use swarm services though, so pick whatever works for you. :-)
With Flyway 5.2.0, you can add the connectRetries parameter, which specifies the maximum number of times Flyway will retry the database connection, at 1-second intervals.
command: -connectRetries=20 -url=jdbc:mysql://my_sql_db:3306/abhs?useUnicode=true&characterEncoding=utf8&useSSL=false -user=root -password=password migrate
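With connectRetries, the wait script and custom entrypoint can be dropped entirely. A minimal sketch based on the compose examples above (service and credential names are carried over from them):

```yaml
version: '3'
services:
  my_sql_percona:
    ...
  migration:
    image: boxfuse/flyway:latest
    volumes:
      - ../sql:/flyway/sql
    command: -connectRetries=20 -url=jdbc:mysql://my_sql_db:3306/abhs -user=root -password=password migrate
    depends_on:
      - my_sql_percona
```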
Alternatively, you can create a migrate.dev.sh and use docker run .. boxfuse/flyway:latest etc. Here's a sample file:
```bash
#!/bin/bash
# Note: assignments must not have a leading $ in bash
FLYWAY_PASSWORD=$PGPASSWORD
FLYWAY_URL=jdbc:postgresql://$DB_HOST:$PORT/mydb
FLYWAY_USER=postgres

docker run \
  --rm \
  -e FLYWAY_PASSWORD=$FLYWAY_PASSWORD \
  -e FLYWAY_URL=$FLYWAY_URL \
  -e FLYWAY_USER=$FLYWAY_USER \
  -e FLYWAY_SCHEMAS=$FLYWAY_SCHEMAS \
  -v $(pwd)/sql:/flyway/sql \
  boxfuse/flyway:latest $1
```
and then you can call it like this:

```shell
migrate.dev.sh info
# or
migrate.dev.sh migrate
```
assuming your migrations are in ./sql/