How to Build and Deploy Docker Images with Drone

30th August 2014

In How to set up a Private Continuous Deployment Server with Drone it was hinted that getting Drone to output a Docker container as part of its build process wasn't trivial. What follows is a look at how I use Drone to build and deploy a Dockerized app, and how I wrangled Docker into Docker. This is by no means the perfect Drone/Docker setup, but is a major step toward getting the two technologies to play nicely together.

1. The Premise

Assume we're starting with:

  1. An app that wants to be deployed as a Docker container. There's a Dockerfile in the root of the codebase. The current release process is to manually run tests, build and push the Docker image, then log in to a production server and upgrade the container.
  2. A functioning Drone server. We've maybe wired in our app and have Drone running the tests, but we haven't figured out how to get it to build and deploy as a Docker container. For reference, my Drone host is a 2G DigitalOcean droplet running Ubuntu 14.04 with Docker 1.1.1.

The Goal

The goal is continuous integration & deployment, which seems to be right up Drone's alley. For a Dockerized app like this one, Drone should:

  1. Run tests
  2. Build a Docker image
  3. Push this image to a registry
  4. Upgrade the production container

Let's dive in…

2. Configure Your App for Drone

First we'll wire up our app's repository into a functioning Drone server. If you'd like a primer on Drone, please refer to this tutorial. This is a matter of (1) setting up OAuth keys in your version control system, (2) telling Drone about the app repository, and (3) adding some Drone-specific files to the codebase:

/myrepo
|
|---- .drone.yml (required)
|
|---- /.drone (optional)
|     |
|     |---- build.sh
|     |---- deploy.sh
|
|---- CODE..

Drone resources are kept in /.drone by convention. Our .drone.yml file reads like:

image: http://my-docker-registry/my-docker-image:version  
script:  
  - ./.drone/build.sh
deploy:  
  bash:
    script:
      - ./.drone/deploy.sh
notify:  
  email:
    recipients:
      - [email protected]

NOTE! This is not the final .drone.yml! We'll run into some bumps later that will cause us to modify this file. The final forms of all the required files can be found at the end.

Let's break it down by section:

1. Specify a build image.

image: http://my-docker-registry/my-docker-image:version  

Drone's pre-built images are available on-demand with common build tools, or you can specify your own. This tutorial uses a custom image, due to some Docker-in-Docker-specific build requirements.
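
As a point of reference, publishing such a custom build image is itself just an ordinary build-and-push from wherever its Dockerfile lives; the registry and image names below are placeholders matching the YML above, not anything Drone dictates:

# hypothetical: build the custom Drone build image and push it where Drone can pull it
cd path/to/build-image          # directory holding the build image's Dockerfile (illustrative path)
docker build -t my-docker-registry/my-docker-image:version .
docker push my-docker-registry/my-docker-image:version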

2. List the build commands.

script:  
  - ./.drone/build.sh

For cleanliness, we forward to a bash script in .drone/. We'll get to the details of this script shortly.

3. List the deploy commands.

deploy:  
  bash:
    script:
      - ./.drone/deploy.sh

We nest our script under deploy and bash because Drone offers a handful of deployment options (like Heroku or OpenStack), of which a bash script is only one.

4. Notify people via email.

notify:  
  email:
    recipients:
      - [email protected]

Cool feature.

3. Build a Docker Image

Instead of just giving away the final setup, I'd like to work through the issues as they come up. This is not a simple setup, and there are a handful of young projects and technologies at play here that are worth understanding.

Keep in mind that our build environment is a Docker container running on the Drone host. But we also need the process running in the build container to output a Docker image, which means a Docker instance must be available inside. Going forward, we need clear thinking about which Docker is running where. Hopefully this diagram helps:

Docker in Drone Diagram

Given this understanding, we'll want a build environment that has Docker installed and running alongside our other build tools. After poking around on the Docker install docs, we might add the following block to the Dockerfile of our (Ubuntu-based) build container:

# install docker
RUN apt-get install -y apparmor  
RUN curl -s https://get.docker.io/ubuntu/ | sudo sh  

We could add an init script to the build container to start the Docker daemon, or start it ourselves within build.sh. Then, ideally, we should be able to run Docker commands as part of our build process with no problem, right?

Not so fast! Before we go down this path, I'll reveal that our inner Docker will be utterly broken; bummer! Yes, there is some disappointing irony in the fact that Docker can't run inside a Docker container without moderate hackery, but these things are never so simple; try writing a program that prints itself.

3.1 Use wrapdocker

It turns out that Docker is well aware of Docker-in-Docker difficulties and has provided an accepted solution in the form of a bash script wrapper for starting an inner Docker instance. The script—wrapdocker—can be found on GitHub: https://github.com/jpetazzo/dind. The situation is indeed reminiscent of Inception.

Wrapdocker accomplishes the following:

  1. Ensures that cgroups are correctly mounted
  2. Closes extraneous file descriptors
  3. Starts Docker on the correct port
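
If you'd like to convince yourself that this works before involving Drone, the dind project publishes a ready-made image; something along these lines (assuming the public jpetazzo/dind image and a --privileged run) should drop you into a container where an inner Docker daemon is usable:

# sanity check of Docker-in-Docker outside of Drone (assumes the jpetazzo/dind image)
docker run --privileged -t -i jpetazzo/dind
# inside the resulting shell, the inner daemon should answer:
docker info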

Now we can modify our build container's Dockerfile to use wrapdocker:

# install docker
RUN apt-get install -y apparmor  
RUN curl -s https://get.docker.io/ubuntu/ | sudo sh  
ADD wrapdocker /usr/local/bin/wrapdocker  
RUN chmod +x /usr/local/bin/wrapdocker  

There are two new lines here:

1. Grab wrapdocker.

ADD wrapdocker /usr/local/bin/wrapdocker  

Here we're just ADDing a local copy of wrapdocker into the build container, but one could also wget it from GitHub or Dropbox.

2. Make wrapdocker executable.

RUN chmod +x /usr/local/bin/wrapdocker  

Setting permissions on the actual wrapdocker file could also work, but it's nice to be able to guarantee its state at build time.

3.2 Enable the --privileged flag

The only other requirement for Docker in Docker is that the outer one support the --privileged flag. Setting this flag on a container when it runs allows it to see and utilize more of the host's resources, some of which are required by Docker. In our case, the outer container is a build environment controlled by Drone, which helpfully exposes the privileged option to us in the Drone web UI. From your repository's page, go to Settings and check Enable Privileged Builds under Admin-only settings:

Drone Privileged Build Screenshot
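
Outside of Drone, the equivalent of that checkbox is just passing --privileged yourself; a quick (hypothetical) way to smoke-test the build image manually is:

# hypothetical smoke test of the build image with the same privileges Drone will grant it
docker run --privileged -t -i my-docker-registry/my-docker-image:version /bin/bash
# inside this shell, wrapdocker should be able to bring up an inner daemon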

3.3 Write a build script

Now that Docker can successfully run inside a Docker container, let's look at the actual build script:

build.sh

#!/bin/bash
set -e  
cd /var/cache/drone/src/path/to/app

# [pass tests here]

wrapdocker &  
sleep 5

docker build -t docker-registry/image-name .  
docker push docker-registry/image-name  

The breakdown:

1. Exit on failure.

set -e  

Setting this option at the beginning of the script causes it to exit immediately if any subsequent command returns a non-zero (failure) code. Exiting non-zero informs Drone that something has gone wrong, causing it to stop the build. Without this, our build could fail its tests, but Drone would have no way of knowing and would proceed with a deployment!

2. Go to the source code.

When starting the build container, Drone copies your source into the location:

/var/cache/drone/src/$domain/$owner/$name

Let's make this place our working directory with:

cd /var/cache/drone/src/path/to/app  

3. Run tests.

# [pass tests here]

Here's the part that's most specific to your app. You'd want to ensure your regression suite passes before proceeding to build a production container. This could be, for example, a grunt task for a JavaScript project, or an ant target for a Java-based app.

4. Start (wrap)docker.

Now we can correctly start Docker!

wrapdocker &  
sleep 5  

We installed an executable wrapdocker into /usr/local/bin, so it should be on the path. We'll be using it to start the Docker daemon, which we installed as well. Since we need Docker to run in the background, we fork it with wrapdocker & and sleep for 5 seconds to allow it to initialize before moving on. This isn't a very pretty (or failsafe) approach to starting a required background process, but I can't find a better solution. Any suggestions?
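
One marginally sturdier approach (a sketch of my own, not something from the Drone docs) is to poll the daemon until it responds rather than sleeping for a fixed amount of time:

# sketch: wait for the inner Docker daemon instead of a blind sleep
wrapdocker &
for i in $(seq 1 30); do
  docker info > /dev/null 2>&1 && break   # the daemon is up once `docker info` succeeds
  sleep 1
done
docker info > /dev/null   # with set -e, fail the build here if the daemon never came up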

5. Build and push the Docker image.

docker build -t docker-registry/image-name .  
docker push docker-registry/image-name  

Finally we can build our app's production image. Remember, the Dockerfile used by this docker build lives in the root of our app's source code, and is different from the one used to create the build environment. The image is tagged appropriately so that we can then push it to our Docker registry.

If the build passes, there should be a new, trustworthy Docker image waiting in the registry. In step 4 we'll figure out how to upgrade the container on a production machine using this new image.

3.4 Clean up Loopback Devices

I had gotten this far and was having Drone successfully build and push my apps as Docker containers. But then after roughly ten builds, all of them started failing with this message:

[error] attach_loopback.go:42 There are no more loopback device available.
loopback mounting failed  

This would happen while the build script was running wrapdocker &. I found some information on this dind issue hinting that the inner Docker was using filesystem resources from the host (in this case, loop devices) that were never being freed. This was confirmed by running losetup -a on the Drone host and seeing a list of leftover devices. These devices refused to detach manually, only disappearing after the host was rebooted.
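
For reference, the leak is easy to observe on the Drone host with losetup (the device name below is illustrative):

# on the Drone host: list loop devices currently attached
losetup -a
# try to detach one by hand (illustrative device); in my case this failed until a reboot
sudo losetup -d /dev/loop0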

This is an issue we should solve; otherwise we'll be restarting our build machine every ten builds! I'll spare you the mental acrobatics and frustration this problem put me through: it can be fixed by simply stopping the inner Docker gracefully after the build completes, allowing it to free its resources, instead of having it discourteously killed by Drone when the build environment goes away.

Stopping Docker could be as simple as appending service docker stop to the build script, but I found that this didn't work for all my setups; for some build containers it would fail with:

* Stopping Docker: docker
start-stop-daemon: warning: failed to kill 1067: No such process  
1 pids were not killed  
No process in pidfile '/var/run/docker-ssd.pid' found running; none killed.  

For this problem, @Cactusbone points out on the issue thread:

it seems the pid used by service docker is not always the same as wrapdocker :)

and rightly suggests an alternative:

start-stop-daemon --stop --pidfile "/var/run/docker.pid"  

For reusability, I moved this command into its own script, calling it after build.sh in the Drone YML file:

.drone/stop_docker.sh

#!/bin/bash
start-stop-daemon --stop --pidfile "/var/run/docker.pid"  

This method has been working without a hitch since I employed it, and periodic losetup -a's on my host confirm that no loop devices are being leaked.

4. Deploy the Docker Image

A simple manual deployment for a Dockerized app is to log on to the production machine, then docker pull and docker run the image in question. I looked at Dokku but found it was overkill for what I needed, and probably not flexible enough anyway. Capistrano seemed a better fit, but I wanted to see if there was a pure Drone/bash solution before introducing another tool.

4.1 Write a deploy script

First let's get down on paper what we've been typing manually every deploy:

.drone/deploy.sh

#!/bin/bash
set -e  
docker pull docker-registry/image-name:latest  
docker stop image-name  
docker rm image-name  
docker run --name image-name [OPTIONS] docker-registry/image-name [COMMAND] [ARG...]  

These are standard docker commands:

1. Pull the latest image.

docker pull docker-registry/image-name:latest  

2. Stop the running container.

docker stop image-name  

3. Remove the stopped container.

docker rm image-name  

This is to avoid the buildup of "zombie" containers. With Docker, stopped containers don't automatically get removed. Naming them helps keep track of them, but I think it's a good habit to remove stopped containers you don't intend to restart.

4. Run the new image.

docker run --name image-name [OPTIONS] docker-registry/image-name [COMMAND] [ARG...]  

Finally, start the newly downloaded image by running it, making sure to name it consistently so that it can be stopped eventually.
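
To make those placeholders concrete, a hypothetical invocation for a small web service might look like the following; the port mapping and environment variable are illustrative, not something this setup requires:

# hypothetical final run step for a web service (names, ports, and env vars are illustrative)
docker run -d --name image-name -p 80:8080 -e NODE_ENV=production docker-registry/image-name:latest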

4.2 Run deploy.sh remotely

What we want now is to have Drone run deploy.sh on the production server on our behalf. This is accomplished by making the following modification to .drone.yml:

deploy:  
  bash:
    script:
      - chmod 600 .drone/private_deploy_key
      - ssh -i .drone/private_deploy_key [email protected] 'bash -s' < ./.drone/deploy.sh

Here we use an SSH key pair to run deploy.sh on the production server. The arbitrarily named private key should be saved to .drone/private_deploy_key. The public key should be appended to ~/.ssh/authorized_keys in the home directory of the user Drone will log in as on the remote. This Ubuntu help page describes how to generate a key pair using ssh-keygen and push it to a remote using ssh-copy-id.
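
The key setup itself boils down to something like this (the remote user/host is a placeholder for your own production login):

# sketch: generate a dedicated deploy key pair and install the public half on the server
ssh-keygen -t rsa -N "" -f .drone/private_deploy_key        # creates private_deploy_key and private_deploy_key.pub
ssh-copy-id -i .drone/private_deploy_key.pub drone@my-production-host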

Here's the breakdown:

1. Make the private key private.

- chmod 600 .drone/private_deploy_key

Before adding this, I would see the error:

Permissions 0777 for 'private_deploy_key' are too open.  
It is recommended that your private key files are NOT accessible by others.  
This private key will be ignored.  

So we restrict read and write permission on the file to its owner (mode 600).

I tried setting this on the private_deploy_key file itself, before it gets copied into the build container by Drone, but it always seemed to lose the permissions I had set. This may be a case of Drone changing the permissions on your app's source files. Either way, it's easy enough to set them correctly at runtime.

2. Run deploy.sh via SSH.

- ssh -i .drone/private_deploy_key [email protected] 'bash -s' < ./.drone/deploy.sh

Here we log on to the remote server over SSH, supplying the freshly restricted private key with the -i option. Then we pipe deploy.sh into a bash process on the remote machine. Pretty nifty!

4.3 Create a Drone user

Notice that we logged in as a drone user. Creating a special user for Drone deployments is a good idea for security, so that we can restrict what Drone is able to do if the login gets compromised. I won't take this one very far, but recommend a couple of helpful links:

  1. How to create a new user in Ubuntu
  2. How to add a user to the docker group
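
On an Ubuntu host, the essence of those two steps is roughly the following sketch (not a hardened setup):

# sketch: create a dedicated deploy user and let it run docker commands
sudo adduser drone                # interactive: sets a password and creates a home directory
sudo usermod -aG docker drone     # docker group membership lets the drone user talk to the daemon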

That's a wrap!

With Drone now deploying our built container, let's add to the original diagram to show the whole picture:

Drone and Docker Diagram

5. Final Drone Files

Here's the final Drone config after we've worked through the pain points. Keep in mind these files are quite skeletal; expect YML options and build steps to vary between apps:

/myrepo
|
|---- .drone.yml
|
|---- /.drone
|     |
|     |---- build.sh
|     |---- deploy.sh
|     |---- stop_docker.sh
|     |
|     |---- private_deploy_key
|
|---- CODE..

.drone.yml

image: http://my-docker-registry/my-docker-image:version  
script:  
  - ./.drone/build.sh
  - ./.drone/stop_docker.sh
deploy:  
  bash:
    script:
      - chmod 600 .drone/private_deploy_key
      - ssh -i .drone/private_deploy_key [email protected] 'bash -s' < ./.drone/deploy.sh
notify:  
  email:
    recipients:
      - [email protected]

.drone/build.sh

#!/bin/bash
set -e  
cd /var/cache/drone/src/path/to/app

# [pass tests here]

wrapdocker &  
sleep 5

docker build -t docker-registry/image-name .  
docker push docker-registry/image-name  

.drone/stop_docker.sh

#!/bin/bash
start-stop-daemon --stop --pidfile "/var/run/docker.pid"  

.drone/deploy.sh

#!/bin/bash
set -e  
docker pull docker-registry/image-name:latest  
docker stop image-name  
docker rm image-name  
docker run --name image-name [OPTIONS] docker-registry/image-name [COMMAND] [ARG...]  

Conclusion: Possible Improvements

Version numbers on containers — With Drone we've achieved clean separation of build and run environments, and at least codified the dependencies for each in a versioned Dockerfile. This part of the picture is starting to look more like a build system such as framewerk. One thing we're missing though is automatic versioning of our Docker image based on a version number we set in the code somewhere. Should be simple, but I'd love to hear suggestions for how to achieve this.
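
One possibility, purely a sketch of my own rather than a Drone feature, is to keep a VERSION file in the repository root and have build.sh read it when tagging:

# sketch: tag the image from a VERSION file kept in the repo root
VERSION=$(cat VERSION)                                      # e.g. "1.4.2"
docker build -t docker-registry/image-name:"$VERSION" .
docker tag docker-registry/image-name:"$VERSION" docker-registry/image-name:latest
docker push docker-registry/image-name:"$VERSION"
docker push docker-registry/image-name:latest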

Better container dependency management — If you run multiple Docker containers on the same host in production, there may be dependencies between them. When deploying with basic bash scripts, it's necessary to manage these relationships by hand, stopping and starting dependent containers in the right order. A container orchestration tool like maestro could help out here.

No downtime — When we upgrade a production container, whatever app or service it runs is unavailable for a while. True continuous deployment deals with this problem nicely, even allowing instant reversion to the previous release in case of a problem. CoreOS is an ideal operating system for this requirement, as it was designed not only for fast updates and rollbacks, but was also intended to run only Docker containers!

Docker in Docker… in Docker?

Why not? Sounds crazy, but Docker and @jpetazzo say it's possible. It's also inevitable if we want to package Drone itself inside a Docker container. If you've achieved a triple-docker setup, please share!

Caleb Sotelo

I'm a Software Engineer and Director of OpenX Labs. I try to write about software in a way that helps people truly understand it. Follow me @calebds.