Deploying ASP.NET Core applications to Docker containers

Docker is gaining popularity among .NET developers thanks to ASP.NET Core running on Linux. With native container support in Windows Server announced last summer, this popularity is expected to grow even more. In this article I’ll show how to get started with container technology in .NET by deploying ASP.NET Core applications to Docker.

Creating the Docker host in Azure

Assuming we have an application to deploy, the first step is to create a virtual machine that will host the Docker containers. Fortunately, Visual Studio makes this very easy thanks to the Tools for Docker extension (still in preview at the time of writing, but it works quite well most of the time). It adds a Docker Containers option to the project publishing dialog.

[Screenshot: the Publish dialog]

When Docker Containers is selected, the Publish dialog lets you either choose an existing virtual machine from a linked Azure subscription, create a new machine, or use a custom Docker host (i.e. one outside of Azure). In this post I’ll focus on creating a machine in Azure.

The next step is to provide the basic settings of the VM. For the operating system we can select either Linux (Ubuntu or CoreOS) or Windows Server 2016 (the only Windows host that natively supports containers). At the time of writing, Windows Server has quite a few bugs and limitations, but it is usable enough to host ASP.NET Core applications, so feel free to give it a try. I highly recommend reading the containers section on MSDN if you are interested in Windows Containers.

[Screenshot: the Create virtual machine dialog]

As for Linux, I find that Ubuntu Server 14.04 LTS works best (I had some trouble provisioning the Ubuntu 15.* versions and I haven’t tried CoreOS yet).

All the other options should be pretty straightforward for anyone who has ever created a virtual machine in Azure. Only the last textbox, Certificates directory, may be a bit confusing. The Docker client needs cryptographic certificates to connect securely to the remote host. The Visual Studio extension thankfully takes care of creating these certificates (in the specified directory), so we don’t have to think about it. It also places the server certificates on the remote host.
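
Should you ever need to connect from outside Visual Studio, the Docker client can be pointed at that directory via environment variables; a minimal sketch in PowerShell (the path is an example):

    # Tell the Docker client where to find the generated client certificates (example path)
    $Env:DOCKER_CERT_PATH = "C:\Users\me\Documents\DockerCerts"
    # Require TLS verification when talking to the remote host
    $Env:DOCKER_TLS_VERIFY = "1"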

Clicking the OK button will not publish anything yet. Visual Studio will create the PowerShell script necessary to create the virtual machine on Azure, and then execute it. The script sets up the certificates and creates a resource group using an ARM (Azure Resource Manager) template contained in two JSON files generated by VS. Having these scripts allows us to create VMs and the associated resources outside of Visual Studio, which makes them perfect for use on continuous integration servers.
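
Conceptually, such a deployment boils down to a couple of Azure PowerShell calls; a rough sketch (the resource group name and file names are illustrative, not the exact ones VS generates):

    # Create a resource group and deploy the generated ARM template into it
    # (requires the AzureRM PowerShell module; all names below are examples)
    New-AzureRmResourceGroup -Name "docker-rg" -Location "West Europe"
    New-AzureRmResourceGroupDeployment -ResourceGroupName "docker-rg" `
        -TemplateFile ".\azuredeploy.json" `
        -TemplateParameterFile ".\azuredeploy.parameters.json"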

Creating the application container

Once the host machine is ready, it’s time to prepare, upload, and run the application container. This can be achieved using either Visual Studio or the Docker command-line client. I’ll describe both options below.

Using Visual Studio’s Tools for Docker

  1. Right-click the project you’d like to publish and select the Publish… option. Then, select Docker Containers, as before.
  2. Now that the host virtual machine has been created, select it from the dropdown list and click OK.
  3. The default settings on the Connection tab are fine to start with. However, if you’d like to use a custom Dockerfile or pass additional arguments when starting the container, you can do it here.
  4. Click Publish. The output window will show the status of each operation, and Visual Studio will open a web browser pointing to the deployed site when the process completes.

[Screenshot: the Connection tab]

Visual Studio will generate a Dockerfile during the deployment process. It’s a simple text file containing instructions on how to build the image. Let’s take a look at the generated Dockerfile:
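
A typical generated Dockerfile looks roughly like this (the image tag is an example and depends on the installed runtime version):

    # Base image published by Microsoft with everything needed to run ASP.NET Core (DNX) applications
    FROM microsoft/aspnet:1.0.0-rc1-update1-coreclr

    # Copy the published application into the container
    ADD . /app

    # The published output keeps the runnable command scripts in approot
    WORKDIR /app/approot

    # Run the "kestrel" command defined in project.json
    ENTRYPOINT ["./kestrel"]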

The FROM keyword specifies the base image of our container. Microsoft publishes these images with each release of ASP.NET Core. It’s basically a Debian image with all the bits and pieces required to run ASP.NET Core applications. There are two flavors of the microsoft/aspnet image: Mono-based and CoreCLR-based.
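
The flavor is selected with the image tag; for example (the exact tag names below are an assumption, taken from the RC1 era):

    # Mono-based flavor
    FROM microsoft/aspnet:1.0.0-rc1-update1
    # CoreCLR-based flavor
    FROM microsoft/aspnet:1.0.0-rc1-update1-coreclr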

The second line, ADD . /app, copies the application files into the container, so that we’ve got something to run. Then, the working directory is set to /app/approot.

Finally, the Dockerfile specifies the command to run when the container starts – in this case ./kestrel (relative to the working directory). However, when we browse the local application sources, there is no kestrel command, nor an approot directory. So what does the Dockerfile point to? The answer is pretty simple: prior to building the image, Visual Studio publishes the project to a local directory. This creates the approot directory and a couple of scripts inside it. Docker then builds the image from the locally published application, not the source code. If you take a look at the VS output window during publishing, you will see all the steps I described.
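
For reference, the published output is laid out roughly like this (a simplified sketch; the exact contents vary with the project and tooling version):

    published/
      approot/
        kestrel        # script generated from the "kestrel" command in project.json
        packages/      # the application and its restored NuGet packages
        src/           # project sources (when publishing without --no-source)
      wwwroot/         # static web assets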

Using the command line

When Visual Studio is not available, it’s still possible to deploy the application. We just have to manually do everything Visual Studio does when deploying (a consolidated script follows the list):

  1. First, create a text file called “Dockerfile” (with no extension) in the project folder. The simplest contents were presented above; copy that code into your Dockerfile. The ENTRYPOINT argument may differ, depending on the commands defined in the project.json file. Choose a command that runs the application under the desired operating system.
  2. Then, build the application deployment package by executing dnu publish -o <destination path> in the project folder. The destination path can be any directory in your file system.
  3. Next, the Docker client must know where the host is. There are two ways of specifying its location: adding the --tlsverify -H <docker host address> parameters to each Docker command, or setting the DOCKER_HOST and DOCKER_TLS_VERIFY environment variables. Using PowerShell, write $Env:DOCKER_HOST = <docker host address> and $Env:DOCKER_TLS_VERIFY = 1, where the docker host address is by default (in the case of Azure VMs) tcp://<vm name>.<region>.cloudapp.azure.com:2376. Setting the environment variables seems less cumbersome, and I recommend this option unless you have to deal with multiple hosts from the same PowerShell session.
  4. Finally, it’s time to build the image. This is as simple as executing docker build -t <image name> -f <path to dockerfile> <local deployment directory>, where the image name can be basically anything you want, as long as it’s a valid Docker image name. This command will send the application to the remote host and build the image there. The container, however, won’t be running yet, so…
  5. The last step is to run the container. For this, we’ll use docker run -t -d -p <host port>:<container port> <image name>. I won’t go into detail explaining all the switches, as they are described in the Docker reference. In short, the container runs in the background (-d) and Docker maps a port on the host to a port in the container (e.g. with -p 80:5000, anyone connecting to the host on port 80 gets transparently redirected to the container’s port 5000). The run command will return an identifier of the container, which can come in handy later.
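
Putting the steps above together, the whole deployment can be expressed as a short PowerShell script; a minimal sketch (the host address, image name, and ports are placeholders):

    # Point the Docker client at the remote host created earlier
    $Env:DOCKER_HOST = "tcp://<vm name>.<region>.cloudapp.azure.com:2376"
    $Env:DOCKER_TLS_VERIFY = "1"

    # Publish the application to a local directory
    dnu publish -o .\published

    # Build the image on the remote host from the published output
    docker build -t myapp -f .\Dockerfile .\published

    # Run the container in the background, mapping the host's port 80 to the container's port 5000
    docker run -t -d -p 80:5000 myapp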

And that’s basically it. All these steps can be scripted and executed in a CI environment to create continuous deployment. If the application uses standard output for logging, we can stream these logs by executing docker logs --follow <container id>. Of course, it’s also possible to stop or restart the container, run another instance of it, and so on. Take a look at the Docker docs for more details.
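
For example (using the container id returned by docker run):

    # Stream the application's console output
    docker logs --follow <container id>

    # Stop the container and start it again later
    docker stop <container id>
    docker start <container id>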


Michał Dudak