Microsoft has been dipping its toes into the open source pool for a number of years, but in late 2014 it jumped in at the deep end by open-sourcing the core of ASP.NET, its web application framework. The timing of this decision was interesting, given that Microsoft is in the midst of finalising the next major release of ASP.NET and Visual Studio.
ASP.NET 5 (or vNext, as it was known initially) has been described as the biggest redesign of the framework since version 2.0 in 2005. Where previously ASP.NET was delivered as a single, all-encompassing (and, some would say, bloated) framework, the new stack focuses on simplicity, modularity and flexibility. There are too many features to describe in proper detail here, but a few of the big wins for developers are:
- Runs anywhere – Windows/Linux/Mac/Embedded devices
- Composable – Use only the features you need
- Open source – No more black box
If you’d like to dig deeper into ASP.NET 5, check out Scott Hanselman’s fantastic write-up.
When I first learned that it would (one day) be possible to host ASP.NET applications on a Linux box without going down the Mono route, my first reaction was disbelief, followed shortly after by a sense of freedom. Being confined to the Windows world had meant a large portion of bleeding-edge open source software was either off limits until someone ported a Windows version, or available only as a half-baked version full of known issues:
- nginx (http://nginx.org/en/docs/windows.html#known_issues)
- RabbitMQ (https://www.rabbitmq.com/windows-quirks.html)
- Redis (http://redis.io/download)
With this in mind, I decided to attempt a simple proof of concept involving an ASP.NET 5 web application, deployed onto an Ubuntu VM using Docker – then throw in some basic integration with another application running on a different, Docker-deployed VM. Easy, right?
Combining multiple pieces of beta software results in the following:
```
timeSpentWithHeadInMyHands = (softwareVersion ^ missingDocumentation) / coffeesConsumed
```
Initially, I planned to connect the ASP.NET website to a MongoDB instance running in a separate container to insert and query some data. I quickly learned how production-ready ASP.NET 5 is not – finding that the MongoDB C# driver does not support the CoreCLR yet. (In hindsight, this makes perfect sense, given it’s still in beta.)
As an alternative, I decided to leverage one of the official Docker repositories: Elasticsearch. Here is the stack and the associated versions I went with:

| Component | Version |
| --------- | ------- |
| ASP.NET 5 | Beta 2 |
| Docker Machine | 0.1.0 RC3 |
When I first looked at Docker, being on a Windows machine meant management happened via boot2docker, which is essentially a headless Linux distribution with Docker pre-installed and some other useful defaults.
In late 2014, another project, Docker Machine, was released. It doesn’t change the workflow significantly (and still leverages boot2docker behind the scenes), but does add another layer of abstraction to streamline the process of managing multiple local and cloud-hosted Docker machines.
As it's still a release candidate, the Docker Machine documentation is fairly limited, which led to several confusing exceptions popping up:
- To run `docker-machine` from the command line, the Docker Machine exe location needs to be in your `PATH`.
- If you're using the VirtualBox driver, the `VBoxManage.exe` location also needs to be in your `PATH`. By default this installs into the standard VirtualBox installation directory.
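For reference, appending both locations to the PATH from a Windows command prompt looks something like this – note that both install paths below are assumptions based on the default installers, so adjust them to match your machine:

```bat
rem Append Docker Machine and VirtualBox to the user PATH
rem Both install locations are assumptions - adjust to match your machine
setx PATH "%PATH%;C:\Program Files\Docker Machine;C:\Program Files\Oracle\VirtualBox"
```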
Once you have Docker Machine installed and configured, creating a new instance is straightforward:
docker-machine create -d virtualbox development
- `-d` specifies the machine driver – in this case VirtualBox. Other drivers are available, including several cloud providers such as Amazon EC2.
- `development` is the name of the machine we are creating.
Once the machine is up and running, we can SSH into it and start executing Docker commands:
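Assuming the machine name `development` from the create command above, the SSH step looks like this:

```shell
# Open an SSH session into the development machine
docker-machine ssh development

# Once inside, confirm the Docker daemon is responding
docker ps
```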
Next, we’re going to launch an Elasticsearch container using the Docker official repository:
docker run -t -d --name elasticsearch elasticsearch
- `-t` attaches a pseudo-tty to the container (apparently this won't be required in future).
- `-d` runs the process as a daemon.
- `--name` specifies the container name.
- `elasticsearch` is the name of the image to run.
Docker will first search for the image locally. If nothing is found, it will look in the Docker registry. Each time an image is downloaded from the registry, Docker caches it locally, so subsequent containers using that image launch quickly.
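As a quick sanity check (the container name `elasticsearch` comes from the run command above), we can confirm the container is actually up:

```shell
# List running containers - elasticsearch should appear here
docker ps

# Tail the container logs to check the Elasticsearch node started cleanly
docker logs elasticsearch
```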
Our ASP.NET project consists of a simple MVC 6 web application that calls into the Elasticsearch container via the RESTful API.
From the Docker VM root, I will first create a folder named `images` in which I'll store the Docker images being built:
```shell
mkdir images
cd images
```
I will then clone my ASP.NET project into the `images` folder:
git clone https://github.com/timothyclifford/aspnet5-elasticsearch-docker.git aspnet5-elasticsearch-docker
For images to be run by Docker, they need to exist locally or in the Docker registry. Let’s move into the root of our cloned repository and build a Docker image for our ASP.NET project:
```shell
cd aspnet5-elasticsearch-docker
docker build -t website .
```
- `-t` specifies the image name.
- Don't forget the trailing `.` – this tells Docker to use the current directory as the build context.
This looks for a `Dockerfile` to build the image (think of a Dockerfile as a set of instructions Docker uses while creating an image).

In this case, our `Dockerfile` is fairly simple – let's step through it line by line:
```dockerfile
# Use the Microsoft aspnet image as our base
FROM microsoft/aspnet

# Copy the contents of /app/approot/src/Docker.Web to the /app directory of the container
COPY /app/approot/src/Docker.Web /app

# Move to the /app directory of the container
WORKDIR /app

# Restore packages with kpm
RUN ["kpm", "restore"]

# Expose port 5004 to the world
EXPOSE 5004

# Configure the command to start the container
# In this instance we're starting the Kestrel web server
ENTRYPOINT ["k", "kestrel"]
```
Our custom ASP.NET image is now built and available locally. If we issue the `docker images` command, we can see that there are three images:
Because our website image is based on the `microsoft/aspnet` image, this is also pulled down locally. You can ask Docker to remove intermediate containers after the build if you choose.
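The relevant flag here is `--rm`, though treat its exact default behaviour in your Docker version as an assumption worth checking:

```shell
# Remove intermediate containers after a successful build
docker build --rm -t website .
```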
Now we have our image built locally, it’s time to launch a container. The website container is slightly more complicated, mainly because we’re linking the two containers together so that they can communicate.
docker run -t -d -p 5004:5004 --name website --link elasticsearch:elasticsearch website
- `-p` maps internal/external container ports. The Kestrel web server runs on port 5004 inside the container, so we're mapping external connections on port 5004 of the host to the same port in the container.
- `--link` creates a secure tunnel between the two containers and allows the website to reference `elasticsearch` via an alias rather than an IP address. The other benefit of linking is that we are not exposing the `elasticsearch` container to the network.
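To see what linking actually provides, we can inspect the website container (this assumes your Docker version includes the `exec` command):

```shell
# Linked containers are exposed as environment variables in the linking container...
docker exec website env

# ...and as an entry in /etc/hosts mapping the alias to the container's IP
docker exec website cat /etc/hosts
```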
If everything is successful, our website container should be running on our Docker Machine host VM and be accessible on port 5004. To test, we need to get the IP address of the host:
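Docker Machine provides a command for exactly this – assuming the machine name `development` from earlier:

```shell
# Print the IP address of the Docker host VM
docker-machine ip development
```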
From there, we can open a web browser, navigate to http://192.168.99.100:5004 and see the ASP.NET 5 website running. I can add and retrieve values, querying the Elasticsearch index running on the linked container.
Bonus round: deploying to Amazon Web Services (AWS)
One of the benefits of using Docker Machine is the APIs it provides for launching Docker hosts in one of a number of cloud providers. After getting everything running locally, I decided to try to create the same configuration using a Docker host in an EC2 instance:
docker-machine create -d amazonec2 --amazonec2-access-key KEY --amazonec2-secret-key SECRET --amazonec2-vpc-id VPCID staging
- `-d` specifies the machine driver – in this case, Amazon EC2.
- `--amazonec2-access-key` specifies the Amazon access key ID.
- `--amazonec2-secret-key` specifies the Amazon secret key.
- `--amazonec2-vpc-id` specifies the Amazon VPC ID.
- `staging` is the name of the machine we are creating.
When trying to create the machine, I received a pretty cryptic error message:
ERRO Error creating machine: Error decoding error response: Error decoding error response: http: read on closed response body
Because Docker Machine is so new, there isn't a great deal of documentation or many examples of others using it. I've submitted an issue and, in the meantime, am trawling through some Go code to see where it's falling over.
Getting started with Docker containers is extremely easy and enjoyable. The documentation is excellent and is being updated alongside new versions of the product. Coupled with an active and growing community, this means there are many examples to leverage while trying to learn or to integrate Docker into your workflow.
The main issues I experienced were related to the immaturity of ASP.NET 5 and the half-baked integration between this and Docker. With Microsoft integrating Docker into future versions of Windows Server, this integration will no doubt improve substantially.
`kpm restore` can be difficult to run from a Dockerfile
The `kpm restore` command will, by default, pull packages from the standard NuGet feed: https://nuget.org/api/v2. ASP.NET 5 packages are currently being deployed via MyGet and may not exist in the standard feed. There's not much documentation available beyond some issues filed in the CoreFX repository, so getting this working required a bit of trial and error.
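One approach that worked for me was adding a NuGet.config alongside the project that registers the ASP.NET MyGet feed. Treat the feed URL below as an assumption – it was the vNext feed at the time of writing and may well change:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- ASP.NET vNext beta packages (MyGet) -->
    <add key="AspNetVNext" value="https://www.myget.org/F/aspnetvnext/api/v2" />
    <!-- Fall back to the standard NuGet feed -->
    <add key="NuGet" value="https://nuget.org/api/v2" />
  </packageSources>
</configuration>
```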
`docker-machine upgrade` failed and in turn corrupted either my boot2docker or VirtualBox installation (or maybe both). I could create and start the VM, but then received an error when trying to assign an IP address:
error getting URL for host development: No IP address found
The only workaround I found was to delete all my local machines in C:\Users\%Username%\.docker and then re-install both boot2docker and VirtualBox. A little heavy-handed but – hey – it worked.
Try to stick to using ASP.NET beta 2 packages
Visual Studio 2015 doesn't support ASP.NET NuGet packages above version Beta 2.
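In practice that means pinning everything in project.json to the Beta 2 versions. The dependency names below are only illustrative of the beta2-era packages and should be treated as assumptions:

```json
{
  "dependencies": {
    "Microsoft.AspNet.Mvc": "6.0.0-beta2",
    "Kestrel": "1.0.0-beta2"
  }
}
```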