Why you should be using Fig and Docker

This is an introductory article to convince and prepare you to try setting up your web app development environment with Fig and Docker.

The Snowflake Problem

Let me take a moment to lay some foundation by rambling about dev environments.  They take weeks to build, seconds to destroy, and a lifetime to organize.  As you configure your machine to handle all the projects you deal with, it becomes a unique snowflake and increasingly difficult to duplicate (short of full image backups).  The worst part is that as you take on more projects, you configure your laptop more, and it becomes more costly to replace.

I develop on Linux and Mac and primarily do web development.  Websites have the worst effect on your dev environment because they often (read: always) need to connect to a number of other services like databases, background queues, caching services, web servers, etc.  At any given moment, I probably have half a dozen of those services running on my local machine to test things.  It is worse when I am working on Linux, because it is so easy to locally install all the services an app runs in production.  I routinely have MongoDB, PostgreSQL, MySQL (MariaDB), Nginx, and Redis running on my machine.  And let's not even talk about all the Python virtualenvs or vendorized Rails projects I have lying around my file system.

Docker Steps In

Docker is such an intriguing tool.  If you have not heard, Docker builds on Linux kernel container features (cgroups and namespace isolation) to create lightweight images capable of running processes completely isolated from the host system.  It is similar to running a VM, but much smaller and faster.  Instead of emulating hardware, a container uses the host system's hardware directly.  Instead of running an entire OS, it runs a single process.  The concept has many potential use cases.
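
As a quick taste (a minimal sketch, assuming Docker is installed and can pull the official python:3.4 image), you can run a throwaway, fully isolated process with a single command:

# Run Python inside an isolated container; --rm deletes the container when it exits
docker run --rm python:3.4 python -c "print('hello from a container')"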

But with Docker, you can start and stop processes easily without cluttering your machine with any of that drama.  You can have one Docker image that runs Postgres and another that runs Nginx without either being actually installed on your host.  You can even have multiple language runtimes of different versions and with different dependencies: for example, several Python apps running different versions of Django on the same or different versions of CPython.  Another interesting side effect: if you have multiple apps using the same kind of database, their data will not live in the same running instance of that database.  The databases, like the processes, are isolated.
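
Here is roughly what that looks like from the command line (the container names are just examples; postgres and nginx are the official images on the Docker Hub):

# Start Postgres and Nginx in the background; nothing is installed on the host
docker run -d --name some-postgres postgres
docker run -d --name some-nginx -p 8080:80 nginx

# Stop and remove them when you are done
docker stop some-postgres some-nginx
docker rm some-postgres some-nginx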

Docker images are created with Dockerfiles.  They are simple text files that start from some base image and build up the environment necessary to run the process you want.  The following is a simple Dockerfile that I use on a small Django site:

FROM python:3.4
MAINTAINER tgroshon@gmail.com

ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/

Simple, right?  For popular platforms like Python, Ruby, and Node.js, prebuilt Docker images already exist.  The first line of my Dockerfile specifies that it builds on the Python 3.4 image.  Everything after that configures the environment.  You could even start with a basic Ubuntu image and apt-get all the things:

FROM ubuntu:14.04

# Install.
RUN \
  apt-get update && \
  apt-get -y upgrade && \
  apt-get install -y build-essential && \
  apt-get install -y software-properties-common && \
  apt-get install -y byobu curl git htop man unzip vim wget

From there you can build virtually any system you want.  Just remember: a container runs only a single process.  If you want to run more than one process, you will need to install and run some kind of process manager like upstart, supervisord, or systemd.  Personally, I do not think that is a good idea.  It is better to have each container do a single job and then compose multiple containers together.
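
To sketch what that composition looks like with plain Docker commands (the myapp image and the container names here are hypothetical), you run one container per job and link them together:

# Build the app image from the Dockerfile in the current directory
docker build -t myapp .

# One container per job: a database, and the app linked to it
docker run -d --name db postgres
docker run -d --name app --link db:db -p 8000:8000 myapp

Doing this by hand gets tedious quickly, which is exactly the gap Fig fills.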

Enter Fig

The problem is that Docker requires quite a bit of know-how before you can configure it in this kind of useful way.  So, let's talk about Fig.  It was created specifically to use Docker for the dev environment use case.  The idea is to specify which Docker images your app uses and how they connect.  Then, once the images are built, you can start and stop them together at your leisure with simple commands.

You configure Fig with a simple YAML file, which for a Python application looks like this:

web:
  build: .
  command: python app.py
  links:
   - db
  ports:
   - "8000:8000"
db:
  image: postgres

This simple configuration specifies two Docker containers: a Postgres container called db, and a custom container called web built from a Dockerfile in the directory given by the web.build key (the current directory in this case).  Normally, a Dockerfile ends with the command (CMD) the container should run; web.command is another way to specify that command.  web.links is how you indicate that one process needs to be able to discover another (the database in this example).  And web.ports simply maps a host port to a container port so you can visit the running container in your browser.
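
To see what a link actually gives you, you can run a one-off command in the web container and inspect its environment.  The output below is illustrative of the Docker link convention rather than guaranteed to match your setup:

# Run a throwaway command in the web service to inspect the injected link variables
fig run web env | grep DB_
# DB_PORT=tcp://172.17.0.5:5432
# DB_PORT_5432_TCP_ADDR=172.17.0.5
# DB_PORT_5432_TCP_PORT=5432

Inside the container, the alias db also resolves as a hostname, so the app can simply connect to its database at host db on port 5432.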

Once you have the Dockerfile and fig.yml in your project directory, simply run fig up to start all of your containers and ctrl-c to stop them.  When they are not running, you can also remove them from Fig by running fig rm, although it seems to me that the Docker images still exist afterward, so you might also want to remove those for a completely clean install.
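
A typical session, then, looks something like this (the exact image name in the last command depends on your project and is hypothetical here):

fig up                    # build (if needed) and start all containers in the foreground
# ... hack away, then ctrl-c to stop everything ...
fig rm                    # remove the stopped containers
docker images             # see which built images are still lying around
docker rmi myproject_web  # remove the built web image for a completely clean slate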

Conclusion

Since learning about Docker and Fig, setting them up is one of the first things I do on new web projects.  The initial configuration can take some time, but once it is in place it pays for itself almost immediately, especially when you add other developers to a project.  All they need installed are Docker and Fig, and they are up and running with a simple fig up command.  Configure once, run everywhere.  Harness all that effort you used to spend configuring your personal machine and channel it into something that benefits the whole team!
