Finally, Django, with Docker, on production!
I finally managed to deploy Django in a Docker container on production! I’ve been trying to switch to a full Docker development/production model since Docker came out, but only recently did the ecosystem mature enough to allow me to easily use Docker both for development (where it excels, in my opinion) and on production (where it’s pretty okay and quite useful).
In this post, I will quickly give you all the relevant details and files you need to go from a newly checked-out repository to a full development environment in one command, and to deploy that service to production. As a bonus, I’ll show you how to use Gitlab (which is awesome) to build your containers and store them in the Gitlab registry.
First of all, let’s start with the docker-compose.yml. In case you don’t know, docker-compose is a way to run multiple containers at once, easily connecting them to each other. This is a godsend when doing development, but not that useful in production (where you usually want to deploy services manually).
To make development easier, we’ll write a docker-compose.yml to set up and run the essentials: Postgres, Django’s dev server, and Caddy (which just proxies port 8000 to 80; you can remove it if you like port 8000).
We have to do some contortions with the Django dev server, because Docker doesn’t care whether Postgres is ready before starting the server, so Django sees that it can’t contact the database and quits. To work around that, we just wait until port 5432 accepts connections before starting the dev server.
To connect to Postgres, just set the database hostname to db (the name of the compose service), and the user, database, and password to whatever you configure for the Postgres container (the official image defaults to a user and database called postgres). That’s pretty much all the settings you need for this. I’ve also helpfully set an environment variable in the compose file, so your settings can know whether they’re running in Docker or not.
Generally, with Docker, you have to rely heavily on environment variables for configuration, rather than, say, a local settings file. That’s not necessarily a bad thing, as environment variables can be pretty handy as well.
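For instance, a settings snippet driven by environment variables might look like this; the variable names here (DATABASE_HOST and friends, IN_DOCKER) are illustrative, not the ones from my actual project:

```python
import os

# Read the database connection details from the environment. Inside the
# compose network, the Postgres service is reachable at the hostname "db",
# so the compose file would set DATABASE_HOST=db.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "HOST": os.environ.get("DATABASE_HOST", "localhost"),
        "NAME": os.environ.get("DATABASE_NAME", "postgres"),
        "USER": os.environ.get("DATABASE_USER", "postgres"),
        "PASSWORD": os.environ.get("DATABASE_PASSWORD", ""),
    }
}

# A simple flag the compose file can set, so settings can branch on whether
# they're running inside Docker.
IN_DOCKER = os.environ.get("IN_DOCKER", "") == "1"
```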
Here’s the key part of the docker-compose.yml, the command for the Django service:
# Docker hack to wait until Postgres is up, then run stuff.
command: bash -c "while ! nc -w 1 -z db 5432; do sleep 0.1; done; ./manage.py migrate; while :; do ./manage.py runserver_plus 0.0.0.0:8000; sleep 1; done"
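For reference, a minimal complete compose file along these lines might look like the following; the image tags, volume paths, and the IN_DOCKER flag name are illustrative, so adjust them to your project:

```yaml
version: "2"

services:
  db:
    image: postgres:9.6
    volumes:
      # Persist the database data under a directory of the repo.
      - ./pgdata:/var/lib/postgresql/data

  web:
    # Assumes an image that has your dependencies (and nc) installed.
    image: python:3.6
    environment:
      # Illustrative flag so settings can tell they're inside Docker.
      - IN_DOCKER=1
    volumes:
      - .:/code
    working_dir: /code
    depends_on:
      - db
    # Docker hack to wait until Postgres is up, then run stuff.
    command: bash -c "while ! nc -w 1 -z db 5432; do sleep 0.1; done; ./manage.py migrate; while :; do ./manage.py runserver_plus 0.0.0.0:8000; sleep 1; done"

  caddy:
    image: abiosoft/caddy
    ports:
      - "80:80"
    volumes:
      # A Caddyfile that proxies :80 to web:8000 (not shown here).
      - ./Caddyfile:/etc/Caddyfile
    depends_on:
      - web
```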
With this one file, setting up a new developer on the team consists of:
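Something like the following, where the repository URL is a placeholder for your own:

```shell
git clone git@gitlab.com:yourteam/yourapp.git
cd yourapp
docker-compose up
```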
That’s it. They have a complete development environment that mirrors production on their local computer, with one command. That environment also handles hot reloads, as usual, and will persist the database data under a directory of the repo.
To start the entire stack up, run docker-compose up and open http://localhost/; you should see your app’s front page.
If you need to run a manage.py command, you can do it like so:
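For example, assuming the Django service is named web, as in a typical compose file:

```shell
docker-compose run web ./manage.py makemigrations
```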
Done and done! Complete isolation with no extra RAM or CPU usage!
Now that the dev setup is done, let’s move on to the Dockerfile. Its job is to list all the commands needed to take a container from a newly-installed Linux instance all the way to running your application. The Dockerfile is completely separate from the docker-compose.yml in this setup, since the latter sets up a local analog of the production server, while the former details what needs to be done to a blank OS to run the app; docker-compose does not use the Dockerfile at all.
This is the Dockerfile; comments are included in the file itself so you can follow along.
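A representative sketch for this kind of setup; the base image, paths, and script name are assumptions, so adapt them to your project:

```dockerfile
# Base image with Python preinstalled (version is an assumption).
FROM python:3.6

# Install dependencies first, so this layer is cached between builds.
COPY requirements.txt /code/
RUN pip install -r /code/requirements.txt

# Copy the application code in.
COPY . /code/
WORKDIR /code

EXPOSE 80

# Run a small startup script rather than uWSGI directly (explained below).
CMD ["/code/deploy/start.sh"]
```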
It’s pretty straightforward, except for that last line. Why do we need a script? Why not just run uWSGI directly? What’s in that file? The questions keep mounting.
It’s just because I want to run the migrations on the server every time before a run. This is what the startup script looks like:
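A sketch of such a script, assuming the uWSGI config lives in a file called uwsgi.ini:

```bash
#!/bin/bash
set -e

# Bring the database schema up to date before serving anything.
./manage.py migrate --noinput

# Hand off to uWSGI (exec, so it receives signals as PID 1).
exec uwsgi --ini uwsgi.ini
```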
Pretty simple! I use uWSGI to run Django; all you need is the appropriate configuration. I like to stick it in an .ini file, but it’s too trivial to post here. Adjust to taste.
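For completeness, a minimal .ini along those lines might be something like this, where the module path is whatever your project’s WSGI module happens to be:

```ini
[uwsgi]
http = :80
module = myproject.wsgi:application
master = true
processes = 4
vacuum = true
```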
Using Gitlab to build your images
As I said earlier, Gitlab is amazing. It can run pretty much anything in its CI stage, including building your Docker images. It also has an integrated Docker registry, which means that, every time you push your code to the repo, Gitlab can automatically build a container so you can go to a newly-provisioned, fresh server that has Docker installed and do:
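Something like the following, with a placeholder registry path in place of your project’s:

```shell
docker login registry.gitlab.com
docker pull registry.gitlab.com/yourteam/yourapp:latest
docker run -d --net=host registry.gitlab.com/yourteam/yourapp:latest
```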
Using --net=host will make the container use the host’s networking and save you a lot of trouble forwarding ports. It’s less secure, because everything inside the container runs in the host’s network space, but, since the server only listens on localhost anyway (and would run in the host’s net space without Docker), I’m fine with that.
For our case, the four CI stages that will run on Gitlab are:

1. Run a static check (which you should always do).
2. Run your tests (which you should always do as well).
3. Build your Docker image and push it to the registry.
4. Deploy everything to production (in my case, this happens by triggering a Captain Webhook URL).
Here’s the .gitlab-ci.yml that will build your images:
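A sketch of a CI file along these lines; the linter, image tags, and deploy hook URL are placeholders, while the registry variables (CI_JOB_TOKEN, CI_REGISTRY, CI_REGISTRY_IMAGE) are ones Gitlab predefines for you:

```yaml
stages:
  - lint
  - test
  - build
  - deploy

lint:
  stage: lint
  image: python:3.6
  script:
    - pip install flake8  # or whatever static checker you prefer
    - flake8 .

test:
  stage: test
  image: python:3.6
  script:
    - pip install -r requirements.txt
    - ./manage.py test

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE .
    - docker push $CI_REGISTRY_IMAGE

deploy:
  stage: deploy
  script:
    - curl -X POST https://deploy.example.com/hooks/yourapp  # placeholder webhook
```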
Pretty much all you need to run Docker both locally, for development, and in production is in those two files. If you want to use Gitlab’s fantastic integration with everything, you have that third file, for no extra charge.
If you know of something I can install that will handle starting/restarting/updating my containers on the server, please let me know! I hear there are various solutions, like Kubernetes, but ideally I’d prefer something more lightweight. My ideal scenario is one where I can have a service or some software I can deploy containers to, and which will abstract all the service running and container updating away.
If you’re aware of something that will do the job, or if you have any questions or feedback, leave a comment here or tweet at me. Thanks!