I have set up a Docker Django/PostgreSQL app closely following the Django Quick Start instructions on the Docker site. How should I run Django's manage.py migrate for the first time?
You can use the docker exec command:
docker exec -it container_id python manage.py migrate
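If you don't know the container ID, docker ps can look it up for you. A small sketch, assuming your web container's name contains "web":

# Look up the running container's ID by name (the "web" filter is an assumption)
docker ps --filter "name=web" --format "{{.ID}}"
# Then apply the migrations inside it
docker exec -it <container_id> python manage.py migrate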
I use this method:
services:
  web:
    build: .
    image: uzman
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "3000:3000"
      - "8000:8000"
    volumes:
      - .:/code
    depends_on:
      - migration
      - db
  migration:
    image: uzman
    command: python manage.py migrate --noinput
    volumes:
      - .:/code
    depends_on:
      - db
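The compose file above references a db service that isn't shown. For completeness, a minimal sketch of what it might look like, assuming the official postgres image (the password value is a placeholder for local development):

  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: postgres  # placeholder; the postgres image requires a password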
With the docker-compose hierarchy above, the migration service runs after the database is set up and before the main service starts. Now whenever you bring up your stack, docker-compose will run the migrations before starting the server. Note that the migration service runs on the same image as the web server, which means all migrations are taken from your project, avoiding surprises. This way you also avoid writing an entrypoint script or anything of that kind.
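One caveat worth adding: depends_on only waits for the db container to start, not for PostgreSQL to accept connections, so the migration can still race the database. A sketch of how to guard against that with a healthcheck, assuming a Compose version that supports the long depends_on form (pg_isready ships with the postgres image):

  db:
    image: postgres
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      retries: 5
  migration:
    image: uzman
    command: python manage.py migrate --noinput
    depends_on:
      db:
        condition: service_healthy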
I know this is old, and maybe I am missing something here (if so, please enlighten me!), but why not just add the commands to your start.sh
script, run by Docker to fire up your instance? It will take only a few extra seconds.
N.B. I set the DJANGO_SETTINGS_MODULE
variable to make sure the correct database is used, as I use different databases for development and production (although I know this is not 'best practice').
This solved it for me:
#!/bin/bash
# Migrate the database first
echo "Migrating the database before starting the server"
export DJANGO_SETTINGS_MODULE="edatool.settings.production"
python manage.py makemigrations
python manage.py migrate
# Start Gunicorn processes
echo "Starting Gunicorn."
exec gunicorn edatool.wsgi:application \
--bind 0.0.0.0:8000 \
--workers 3
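If it helps, here's a sketch of how start.sh could be wired into the image's Dockerfile (the /app path and file location are assumptions):

COPY start.sh /app/start.sh
RUN chmod +x /app/start.sh
CMD ["/app/start.sh"]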
If you have something like this in your docker-compose.yml
version: "3.7"

services:
  app:
    build:
      context: .
      dockerfile: docker/app/Dockerfile
    ports:
      - 8000:8000
    volumes:
      - ./:/usr/src/app
    depends_on:
      - db

  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_USER: docker
      POSTGRES_PASSWORD: docker
      POSTGRES_DB: docker
Then you can simply run:
~$ docker-compose exec app python manage.py makemigrations
~$ docker-compose exec app python manage.py migrate
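To check what was actually applied, Django's showmigrations command works the same way:

~$ docker-compose exec app python manage.py showmigrations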
You just have to log into your running docker container and run your commands.
docker-compose -f path/to/docker-compose.yml build
docker-compose -f path/to/docker-compose.yml up
docker ps
CONTAINER ID   IMAGE       COMMAND                   CREATED      STATUS          PORTS                         NAMES
3fcc49196a84   ex_nginx    "nginx -g 'daemon off"    3 days ago   Up 32 seconds   0.0.0.0:80->80/tcp, 443/tcp   ex_nginx_1
66175bfd6ae6   ex_webapp   "/docker-entrypoint.s"    3 days ago   Up 32 seconds   0.0.0.0:32768->8000/tcp       ex_webapp_1
# postgres docker container ...
docker exec -t -i 66175bfd6ae6 bash
Now that you are logged in, go to the right folder: cd path/to/django_app
Then, each time you edit your models, run in your container: python manage.py makemigrations
followed by python manage.py migrate
I also recommend using a docker-entrypoint script for your Django Docker container, so these commands run automatically.
Here is an example (docker-entrypoint.sh):
#!/bin/bash
# Collect static files
echo "Collect static files"
python manage.py collectstatic --noinput
# Apply database migrations
echo "Apply database migrations"
python manage.py migrate
# Start server
echo "Starting server"
python manage.py runserver 0.0.0.0:8000
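For the script to run automatically, it has to be registered as the image's entrypoint. A sketch of the relevant Dockerfile lines (the root path is an assumption):

COPY docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]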
Have your stack running, then fire off a one-shot docker-compose run command. E.g.:
# assume Django runs in the service named web
docker-compose run web python3 manage.py migrate
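Adding --rm removes the one-off container once the migration finishes, so exited containers don't pile up:

docker-compose run --rm web python3 manage.py migrate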
This works great for the built-in (default) SQLite database, but also for an external dockerized database that's listed as a dependency. Here's an example docker-compose.yaml file:
version: '3'

services:
  db:
    image: postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
https://docs.docker.com/compose/reference/run/