TECH.insight

Shipping your environments with Docker

Monday 16 March 2015

Docker takes us to the next evolutionary stage of Continuous Delivery

Software delivery

As software engineers, we write code that may live and run in multiple environments, including but not limited to:

local development machines
continuous integration (CI) servers
test and QA environments
staging environments
production environments

In the spirit of reducing complexity by minimising the number of variables at play, it's highly desirable to ensure that your environments work in as similar a way to each other as possible. In other words, from the perspective of software delivery, the simplest approach is a single, reusable environment. One way of achieving this is to use Docker.

Docker has been aptly described as ‘an engine that automates the deployment of any application as a lightweight, portable, self-sufficient container that will run virtually anywhere’.

In the specific scenario of deploying Java artefacts into a web container/server, it's intriguing to look at how Docker can be leveraged to create an isolated wrapper around a single environment.

Solution design

First, let's clearly define what an environment looks like to us in its most basic form. In my example it consists of three components:

a Java Runtime Environment (JRE)
an Apache Tomcat web container
the application WAR file

In this most basic form, any engineer can see that it is easy to replicate this environment almost anywhere – although the three components and their respective configuration files would still be moving parts. The first step, then, would be to compose a single artefact that contains all of the components along with their configurations. This is where Docker comes in.

First, we create our Docker container. There are two ways to do this: either by executing each step manually on a base container or by composing the commands into a configuration Dockerfile. Having created a container with the three components defined above, the first major step is almost complete.
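
For the manual route, this is a sketch of what the provisioning could look like – the base image, container name and installation steps are illustrative assumptions rather than a prescribed recipe:

# Start an interactive shell in a plain base container
docker run -it --name basic-env ubuntu:14.04 /bin/bash

# ...inside the container: install a JRE and Tomcat, add the application
# WAR and its configuration files, then exit the shell...

# Snapshot the provisioned container as a reusable image
docker commit basic-env test-app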

Second, our task is to commit this container to an image and/or to archive it as a tarball for transport. Et voilà: the basic environment artefact can now be shipped across to any of the environment machines.

To expand our scenario further, the actual environment is required to support multiple instances of the app running in parallel. Classically, one could either boot a new Virtual Machine (VM) to deploy a copy of the app onto, or requisition another physical server to do the same – or even tweak the configuration of the environment to support multiple deployments of it onto the same host machine.

This process is greatly simplified by wrapping the basic app environment within a Docker container – a great feature of which is the isolation it provides on the network layer. Even though Tomcat is deployed on a specific port inside the container (by default on port 8080), the port it runs on in the actual host machine can be completely different.

During the startup of a Docker container on a host machine, it is possible to either specify a port number on the host machine to map to the local container port, or let Docker randomly assign an unused port to it.

As an example, the following two commands start two containers from a test-app image and publish them on host ports 8081 and 8082 respectively – even though the actual app inside each container is running on port 8080.

docker run -d -p 8081:8080 --name dev_1 test-app
docker run -d -p 8082:8080 --name dev_2 test-app
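
Alternatively, the exposed port can be published without naming a host port at all, leaving Docker to assign an unused one – the container name below is again just an example:

# Publish the container's exposed ports on random free host ports
docker run -d -P --name dev_3 test-app

# Show which host port was mapped to the container's port 8080
docker port dev_3 8080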

As a direct consequence, multiple instances of a Docker container can be deployed without having to worry about port collisions or configure around them explicitly. Leaving aside the configuration of a load balancer, it is therefore easy to deploy multiple instances of our basic environment artefact, affording parallel execution of the underlying app and enabling horizontal scaling. It is now highly valuable to automate this entire process and integrate it into the CD pipeline.

To get the CD or Continuous Integration (CI) server to replicate the above process, the first step is to define exactly how the Docker container should be built. This basically boils down to creating a Dockerfile – a text file that is just a series of instructions to Docker on how to provision a Docker image.
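
As a minimal sketch – the base image, file names and paths here are assumptions for illustration, not the exact setup – such a Dockerfile could look like this:

# Start from the official Tomcat image, which already bundles a Java runtime
FROM tomcat:8

# Replace the default Tomcat configuration with our own
COPY server.xml /usr/local/tomcat/conf/server.xml

# Deploy the application WAR into Tomcat's webapps directory
COPY test-app.war /usr/local/tomcat/webapps/

# Tomcat listens on port 8080 inside the container
EXPOSE 8080
CMD ["catalina.sh", "run"]

With the Dockerfile and the WAR in the same directory, building the image is then a single command, e.g. docker build -t test-app .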

In our case, the Dockerfile is responsible for building the entire basic environment container. This step depends of course on actually having built the app WAR file first. Once the CD server is capable of using a Dockerfile to build full images of the environment, it can use the docker save command to create a tarball archive of the image. For example:

docker save dev-build-0.923asd > dev-build-0.923asd.tar

This package can then be transported directly to any of the target environments by a network transport mechanism of your choice, e.g. SCP. Last but not least, the docker load and docker run commands can be leveraged on the target environment host machines to unpack and spin up your complete artefact. For example:

docker load < dev-build-0.923asd.tar
docker run -d -p 8081:8080 --name dev_1 dev-build-0.923asd
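
The transport step in between – shown here with a placeholder user and host – is just as scriptable, which is what makes the whole flow easy to drive from the CD server:

# Copy the saved image tarball to a target environment host
scp dev-build-0.923asd.tar deploy@target-host:/tmp/

# The docker load and docker run commands are then executed on that host,
# for example over SSH
ssh deploy@target-host "docker load < /tmp/dev-build-0.923asd.tar"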

Having a centralised mechanism to control the flow of building, packaging, distributing and executing Docker containers across all environments is a serious advantage. Automating the process is just the icing on the cake.

Next steps

While it is perfectly possible to set up the process described above by hand, the caveat is that this is only efficient for small-to-medium scale projects. When the number of components and apps scales up, setting up and managing the entire process requires more effort.

To help with exactly this situation, the Docker team have already begun work on a complete orchestration framework. One of the core questions the framework seeks to answer is: ‘My singleton Docker containers are 100% portable to any infrastructure, but how do I ensure my multi-container distributed app is also 100% portable – whether moving from staging to production or across data centres or between public clouds?’

The three tools that make up this orchestration framework are:

Docker Machine, which provisions Docker hosts on your local machine, on cloud providers and in your own data centre
Docker Swarm, which turns a pool of Docker hosts into a single, virtual Docker host
Docker Compose, which defines and runs multi-container apps from a single declarative file
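
To give a flavour of the last of these, a hypothetical docker-compose.yml for our app – the service and image names are assumptions for illustration – reduces the multi-instance deployment above to a declarative file and a couple of commands:

# Hypothetical docker-compose.yml describing our basic environment as a service
web:
  image: test-app
  ports:
    - "8080"   # publish container port 8080 on a random free host port

Running docker-compose up -d then starts the app, and docker-compose scale web=2 brings up a second instance alongside it.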

By using Docker and its associated machinery, I firmly believe we can further reduce the complexity of our infrastructure, and move a step closer towards realising massive but manageable app environments.

About The Author

Rohit is a Principal Software Engineer at AKQA in Berlin. He believes that automation and DevOps are integral to the future of building software services, and that it is the responsibility of every engineer to incorporate them into their workflow.

@rohitdantasakqa