Docker in Action - Development to Delivery, Part 3

Michael Herman, Thursday January 15, 2015

This is a guest post by Michael Herman from Real Python - learn Python programming and web development through hands-on, interesting examples that are useful and fun!

This three-part series will teach you everything you need to know about developing with Docker - from setting up your environments and utilizing Flask on Docker, to detailing a powerful development workflow that covers a fully functional development environment on your Mac along with continuous integration and delivery.

  1. Part 1: Local Docker Setup
  2. Part 2: Continuous Integration
  3. Part 3: Continuous Delivery (current)

END GOAL:

[Image: the seven workflow steps covered in this series, from local development through continuous integration to continuous delivery]

So, in the last tutorial, we went over a nice development workflow that included continuous integration with CircleCI (steps 1 through 6 from above). In this final piece we'll add continuous delivery into the mix (step 7).

Who doesn't love Heroku’s git push heroku master? Let's make delivery that easy...

Digital Ocean

To set up Docker on Digital Ocean, create a new Droplet, choose "Applications", and then select the Docker Application. Make sure you also set up an SSH key. For help with this, please see this tutorial.

Once set up, SSH into the server as the 'root' user:

```sh
$ ssh root@<some_ip_address>
```

Pull the Docker image from Docker Hub and run a new container:

```sh
$ docker pull mjhea0/flask-docker-workflow
$ docker run --name flask -d -p 80:80 mjhea0/flask-docker-workflow
```

Make sure you replace mjhea0 with your Docker Hub username.

Sanity check. Navigate to your Droplet's IP address in the browser. You should see, "Flask is running on Docker!".

Now, instead of having to SSH into the server and pull down the new image each time we want to deploy, let's automate the process so that once a new build is generated on Docker Hub, we pull in the new image and run a new container automatically.

Deploy Script

We can utilize Docker Hub's webhooks to trigger a POST request to a URL on our Digital Ocean server. On the server, we then need a "listener" set up to trigger a simple bash script:

```sh
docker pull mjhea0/flask-docker-workflow
docker stop flask
docker rm flask
docker run --name flask -d -p 80:80 mjhea0/flask-docker-workflow
```

Here we pull the new image, stop and remove the currently running container, and then run a new container from the updated image. It's not exactly zero-downtime; it's more like as-minimal-as-possible downtime.

Let's get this set up along with the listener...

Docker Listener

To set up the listener, we can use a separate Docker container that's already set up. You will need to update the code before you can use it, though, so clone the GitHub repo.

Update the app/deploy.sh file, replacing mjhea0 with your Docker Hub username. If you're curious, check out the Flask app code within app/app.py. Essentially, we just confirm that the token is correct when a POST request hits the /ping endpoint. If it's correct, then the deploy script is fired. Make sense?
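In broad strokes, the listener looks something like the sketch below. This is not the repo's exact code - the route handler, token check, and script path are assumptions based on the description above:

```python
# listener.py - a minimal sketch of the webhook listener described above;
# the route, token handling, and script path are assumptions, not the
# exact code from the docker-hook-listener repo.
import os
import subprocess

from flask import Flask, request, abort

app = Flask(__name__)


@app.route('/ping', methods=['POST'])
def ping():
    # Reject the request unless the token in the query string matches the
    # TOKEN passed into the container's environment (via the -e flag).
    token = request.args.get('token')
    if not token or token != os.environ.get('TOKEN'):
        abort(403)
    # Token is valid - fire the deploy script.
    subprocess.call(['sh', './deploy.sh'])
    return 'deploy triggered'


if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
```

Since Docker Hub delivers the webhook as a POST, the route only accepts POST; any request with a missing or incorrect token gets a 403.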

After you clone the repo and update the deploy script, add this repository to Docker Hub, just as you did in the last tutorial.

Next, SSH back into Digital Ocean, and then pull the image and run the container (making sure to add the TOKEN to the environment with the -e flag):

```sh
$ docker pull mjhea0/docker-hook-listener
$ docker run --name listener -e TOKEN="test654321" -d -p 5000:5000 mjhea0/docker-hook-listener
```

Again, replace mjhea0 with your Docker Hub username.
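Before handing the URL to Docker Hub, you can sanity-check the listener by sending it a test POST yourself. Here's a quick sketch using the requests library - the hostname is a placeholder, and it assumes the token is passed as a query parameter, as in the webhook URL in the next step:

```python
# check_listener.py - fire a test POST at the listener to mimic the webhook.
# The hostname is a placeholder; swap in your Droplet's IP or domain.
import requests

resp = requests.post(
    'http://your-hostname:5000/ping',
    params={'token': 'test654321'},
)
print(resp.status_code, resp.text)  # expect a success response if the token matches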

With the listener running, add a webhook (under Settings) to your Docker build: http://your-hostname:5000/ping?token=test654321.

Profit!

Now, after every build completes on Docker Hub:

  • A POST request is sent to the listener.
  • The listener handles the request by ensuring that the token is valid and then firing deploy.sh.

Time to test. On the feature branch, update the app - add some cities to the cities list, perhaps. Commit your changes and open a pull request. Once the automated tests pass, merge the request. After the tests run again, a new build will trigger on Docker Hub. When the build is complete, the POST request is sent and handled by the listener, which fires the bash script.
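As for the change itself, it can be as small as adding a couple of entries - for example, assuming the cities are kept in a plain Python list inside app/app.py (the list name and entries here are placeholders, not the actual app code):

```python
# app/app.py (excerpt) - a hypothetical example of the kind of change to make
# on the feature branch; the list name and entries are placeholders.
cities = [
    'Denver',
    'San Francisco',
    'Portland',   # added on the feature branch
    'Seattle',    # added on the feature branch
]
```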

Make sure these changes are reflected in the browser.

Conclusion

Well, this concludes our look at a powerful Docker workflow - from development to deployment. What's left?

  1. Staging server: We need a pre-production server for one last line of tests.
  2. Integration tests: Right now we just have some basic unit tests, so make sure to add integration tests.
  3. Create a new user: Add a new user to your Linux server so that you're not using 'root'.
  4. Tagging: It's a good idea to introduce a system of tagging so that Docker images can be traced back to a commit (and ultimately back to the code).

Thanks for reading this series on Docker in Action! For more tips on managing continuous delivery workflow, check out our ebook, Getting to Continuous Deployment.
