
Building and shipping a .NET Core application with Docker and TravisCI

With the .NET Core ecosystem slowly maturing since the first official release earlier this year, I have been spending more and more time playing with it and building software on it.

I am a big fan of managed CI systems like AppVeyor and TravisCI, and one of the first things I wanted to work out was how easily I could build and ship a .NET Core application with one of these tools. This was a major consideration for me, because I would have been far less interested in building a .NET Core app if the deployment story wasn't great yet, and I am not keen on running my own CI server, as I don't think that is the best use of a developer's time. Luckily the deployment experience and integration with TravisCI turned out to be extremely easy and intuitive, which is what I will try to cover in this blog post.

Up until now I was more or less tied to AppVeyor, the only vendor which uses Windows Server VMs for its build nodes and therefore the only viable option for building full .NET framework applications. TravisCI and other popular CI platforms use Linux nodes for their build jobs, so their .NET support was limited to the Mono framework at most. However, with .NET Core being the first officially Microsoft-supported cross-platform framework, my options have suddenly increased from one to many. TravisCI already offered a good integration with Mono, and now that .NET Core is part of their default offering I was keen to give it a shot.

In this blog post I will be covering what I believe is a typical deployment scenario for a .NET Core application which will be shipped as a Docker image to either the official Docker Hub or a private registry.

1. Creating a .NET Core application

First I need to create a .NET Core application. For the purpose of this blog post I am just going to create a default hello world app and you can skip this step for the most part if you are already familiar with the framework. For everyone else I will quickly skim through the creation of a new .NET Core application.

Let's open a Windows command line prompt and navigate to C:\temp and create a new folder called NetCoreDemo:

cd C:\temp
mkdir NetCoreDemo
cd NetCoreDemo

Inside that folder I can run dotnet new --type console to create a new hello world console application:

[Screenshot: dotnet new creating a console application]
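If you want to copy and paste it, the full command is:

dotnet new --type console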

For a full reference of the dotnet new command check out the official documentation.

If you don't have the .NET Core CLI available you need to install the .NET Core SDK for Windows (or your operating system of choice).

After the command has completed I can run dotnet restore to restore all dependencies, followed by dotnet run, which will build and subsequently start the hello world application:

[Screenshot: dotnet restore and dotnet run output]
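In other words, only these two commands are needed:

dotnet restore
dotnet run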

This is literally all it takes to get a simple C# console app running, so I will stop at this point and move on to the next part, where I will set up a build and deployment pipeline in TravisCI.

If you want to learn more about building .NET Core applications then I would highly recommend checking out the official ASP.NET Core tutorials or reading other great blog posts by developers who have covered this topic extensively.

2. Setting up TravisCI for building a .NET Core application

If you are not familiar with TravisCI yet (or a similar platform), then please follow the instructions to set up TravisCI with your source control repository and add a .travis.yml file to your project repository. This file will contain the entire build configuration for a project.

The first line in the .travis.yml file should be the language declaration. In our case this is csharp, which is the correct setting for any .NET language (including VB.NET and F#).
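So the file starts with a single line:

language: csharp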

Next we need to set the correct environment type.

The standard TravisCI build environment runs on an Ubuntu 12.04 LTS Server Edition 64 bit distribution. This is no good for us because .NET Core only supports Ubuntu 14.04 or higher. Fortunately there is a new Ubuntu 14.04 (aka Trusty) beta environment available. In order to make use of this new beta environment we need to enable sudo and set the dist setting to trusty:

sudo: required
dist: trusty

Next I want to specify what version of Mono and .NET Core I want to have installed when running my builds. At the moment I am only interested in .NET Core, so I am going to skip Mono and set the dotnet setting to the latest SDK available at the time of writing:

language: csharp
sudo: required
dist: trusty
mono: none
dotnet: 1.0.0-preview2-003131

The next step is neither required nor necessarily recommended; it is more of a personal preference: I disable the .NET Core Tools telemetry by setting the DOTNET_CLI_TELEMETRY_OPTOUT environment variable to 1 during the install step of the TravisCI lifecycle:

install:
  - export DOTNET_CLI_TELEMETRY_OPTOUT=1

After that I have to set access permissions for two script files in the before_script step:

before_script:
  - chmod a+x ./build.sh
  - chmod a+x ./deploy.sh

The chmod command changes the access permissions of my build and deployment scripts to allow execution by any user on the system. TravisCI recommends setting chmod ugo+x, which is effectively the same as chmod a+x, because a is a shortcut for ugo.

Following before_script I am going to set the script step which is responsible for the actual build instructions:

script:
  - ./build.sh

At last I am going to define a deploy step as well, which will automatically trigger only after the script step has successfully completed:

deploy:
  - provider: script
    script: ./deploy.sh $TRAVIS_TAG $DOCKER_USERNAME $DOCKER_PASSWORD
    skip_cleanup: true
    on:
      tags: true

Here I am essentially calling a second script called deploy.sh and passing in three environment variables, which I will explain in a moment. Additionally I defined the trigger to deploy on tags only. You can set up different deploy conditions, but in most cases you either want to deploy on each push to master or when a commit has been tagged. I chose the latter, because sometimes I want to publish an alpha or beta version of my application, which is likely to live on a different branch than master, and therefore the tag condition made more sense in my case.
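For comparison, a deploy step which triggers on every push to the master branch would look something like this:

deploy:
  - provider: script
    script: ./deploy.sh
    skip_cleanup: true
    on:
      branch: master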

The TRAVIS_TAG variable is a default environment variable which gets set by TravisCI for every build which has been triggered by a tag push and will contain the string value of the tag. DOCKER_USERNAME and DOCKER_PASSWORD are two custom environment variables which I have set through the UI to follow TravisCI's recommendation to keep sensitive data secret:

[Screenshot: TravisCI settings page with environment variables]

Another option would have been to encrypt environment variables in the .travis.yml file to keep those values secret. Both options are valid as far as I know and it is up to you which one you prefer.
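For example, the Travis command line client (a Ruby gem) can encrypt a value and append it to the .travis.yml for you. A quick sketch, assuming DOCKER_PASSWORD is the variable you want to protect:

# Install the Travis CLI and encrypt the variable (name and value are placeholders)
gem install travis
travis encrypt DOCKER_PASSWORD=my-secret-password --add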

Tip:

If you have to store access credentials to 3rd party platforms like a private registry or the official Docker Hub inside TravisCI then it is highly recommended to register a dedicated user for TravisCI and add that user as an additional collaborator to your Docker Hub repository, so that you can easily limit or revoke access when required:

[Screenshot: Docker Hub collaborators settings]

After defining the script and deploy step I am basically done with the .travis.yml file.

Note that I purposefully didn't place the individual build and deployment instructions directly into the script step, because I wanted to separate the actual build instructions from the TravisCI configuration.

This has a few advantages:

- There is a clear distinction between environment setup and the actual build steps which are required to build and deploy the project. The .travis.yml file is the definition of the build environment, while the build.sh and deploy.sh script files are the recipe to build and deploy the application.
- The build and deploy scripts are completely independent from the CI platform, so I could easily switch the CI provider at any given time.
- The actual build and deployment scripts can be executed from anywhere. Both are generic bash scripts which developers can run on their personal machines to build, test and deploy a project.

The last point is probably the most important in my view. Even though managed CI systems are super easy to integrate with, it can be a pain to be tied down to a particular provider. Imagine you have a new developer joining your team and the first question they ask is how to build your project. It would be a pain to tell them to open up the .travis.yml file and follow all the instructions manually, when you could just tell them to run build.sh and it will work.

If I put everything together then the final .travis.yml file will look something like this:

language: csharp
sudo: required
dist: trusty
mono: none
dotnet: 1.0.0-preview2-003131
install:
  - export DOTNET_CLI_TELEMETRY_OPTOUT=1
before_script:
  - chmod a+x ./build.sh
  - chmod a+x ./deploy.sh
script:
  - ./build.sh
deploy:
  - provider: script
    script: ./deploy.sh $TRAVIS_TAG $DOCKER_USERNAME $DOCKER_PASSWORD
    skip_cleanup: true
    on:
      tags: true

One last thing that I wanted to mention is that even though I said we are going to use Docker to deploy the project I didn't have to specify Docker as an extra service anywhere in the .travis.yml file. This is because unlike the standard TravisCI environment the Trusty beta environment comes with Docker pre-configured out of the box.
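For comparison, on the standard environment you would have to request it explicitly as a service in the .travis.yml:

services:
  - docker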

3. Building and deploying a .NET Core app from a bash script

Now that the build environment is set up in the .travis.yml file and we deferred the entire build and deployment logic to external bash scripts we have to actually create those scripts to complete the puzzle.

build.sh

The build.sh script is going to be short and simple:

#!/bin/bash
set -ev
dotnet restore
dotnet test
dotnet build -c Release

The first line is not strictly required, but it is good practice to include #!/bin/bash at the top of the script so the shell knows which interpreter to run. The second line tells the shell to exit immediately if a command fails with a non-zero exit code (set -e) and to print shell input lines as they are read (set -v).

The last three commands use the normal dotnet CLI to restore, test and build the application.

deploy.sh

The deploy.sh script is going to be fairly easy as well. The first two lines are the same as in build.sh, and then I assign the three parameters which are passed into the script to named variables:

#!/bin/bash
set -ev

TAG=$1
DOCKER_USERNAME=$2
DOCKER_PASSWORD=$3

Next I am going to use the dotnet CLI publish command to package the application and all of its dependencies into the publish folder:

dotnet publish -c Release

Now that everything is packaged up I can use the docker CLI to build an image with the supplied tag and the latest tag:

docker build -t repository/project:$TAG bin/Release/netcoreapp1.0/publish/.
docker tag repository/project:$TAG repository/project:latest

Make sure that repository/project matches your own repository and project name.
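Also make sure that there is a Dockerfile inside the publish folder, because docker build expects one at the given path. The exact contents depend on your application, but a minimal sketch for the hello world app from the beginning of this post could look something like this (the microsoft/dotnet:1.0.0-core base image and the NetCoreDemo.dll assembly name are assumptions based on the earlier example):

# Minimal Dockerfile sketch (base image tag and dll name are assumptions)
FROM microsoft/dotnet:1.0.0-core
WORKDIR /app
COPY . /app
ENTRYPOINT ["dotnet", "NetCoreDemo.dll"]

One way of getting the Dockerfile into the publish folder is to include it in the publishOptions of the project, so that dotnet publish copies it alongside the compiled output.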

Lastly I have to authenticate with the official Docker registry and push both images to the hub:

docker login -u="$DOCKER_USERNAME" -p="$DOCKER_PASSWORD"
docker push repository/project:$TAG
docker push repository/project:latest

And with that I have finished the continuous deployment setup with Docker and TravisCI. The final deploy.sh looks like this:

#!/bin/bash
set -ev

TAG=$1
DOCKER_USERNAME=$2
DOCKER_PASSWORD=$3

# Create publish artifact
dotnet publish -c Release

# Build the Docker images
docker build -t repository/project:$TAG bin/Release/netcoreapp1.0/publish/.
docker tag repository/project:$TAG repository/project:latest

# Login to Docker Hub and upload images
docker login -u="$DOCKER_USERNAME" -p="$DOCKER_PASSWORD"
docker push repository/project:$TAG
docker push repository/project:latest

Tip:

Some projects follow a naming convention where version tags begin with a lowercase v in git, for example v1.0.0, but you might want to remove the v from the Docker image tag. In that case you can use this additional snippet to create a variable called SEMVER, which will be the same as TAG without the leading v:

# Remove a leading v from the major version number (e.g. if the tag was v1.0.0)
IFS='.' read -r -a tag_array <<< "$TAG"
MAJOR="${tag_array[0]//v}"
MINOR=${tag_array[1]}
BUILD=${tag_array[2]}
SEMVER="$MAJOR.$MINOR.$BUILD"

Place that snippet after the dotnet publish command in the deploy.sh and use $SEMVER instead of $TAG when building and publishing the Docker images.
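For example, if the build was triggered by the tag v1.0.0, then SEMVER will hold the value 1.0.0 and the Docker build command from before becomes:

docker build -t repository/project:$SEMVER bin/Release/netcoreapp1.0/publish/.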

If you want to see a full working example you can check out one of my open source projects where I use this setup to publish a Docker image of an F# .NET Core application.

Load testing a Docker application with JMeter and Amazon EC2

A couple of months ago I blogged about JMeter load testing from a continuous integration build and gave a few tips and tricks on how to get the most out of automated load tests. In this blog post I would like to go a bit more hands on and show how to manually load test a Docker application with JMeter and the help of Amazon Web Services.

I will be launching two Amazon EC2 instances to conduct a single load test. One instance will host a Docker application and the other the JMeter load test tool. The benefit of this setup is that Docker and JMeter have their own dedicated resources and I can load test the application in isolation. It also allows me to quickly tear down the Docker instance and vertically scale it up or down to measure the impact.

Launching a Docker VM

First I will create a new EC2 instance to host the Docker container. The easiest way of doing this is to go through the online wizard, select the Ubuntu 14.04 base image and paste the following bash script into the user data field to automatically pre-install the Docker service during launch:

#!/bin/bash

# Install Docker
sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates -y
sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
sudo bash -c 'echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" >> /etc/apt/sources.list.d/docker.list'
sudo apt-get update
sudo apt-get install linux-image-extra-$(uname -r) -y
sudo apt-get install apparmor -y
sudo apt-get install docker-engine -y
sudo service docker start

# Run [your] Docker container
sudo docker run -p 8080:8888 dustinmoris/docker-demo-nancy:0.2.0

At the end of the script I added a docker run command to auto start the container which runs my application under test. Replace this with your own container when launching the instance.

[Screenshot: AWS EC2 launch wizard, advanced details with user data]

Simply click through the rest of the wizard and a few minutes later you should have a running Ubuntu VM with Docker and your application container running inside it.

Make sure to map a port from the container to the host and open this port for inbound traffic. For example if I launched my container with the flag -p 8080:8888 then I need to add the port 8080 to the inbound rules of the security group which is associated with this VM.
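If you prefer a command line over the web console then the same inbound rule can be added with the AWS CLI as well. A quick sketch, assuming sg-12345678 is the id of the security group associated with the VM:

# Open port 8080 for inbound TCP traffic (the group id is a placeholder)
aws ec2 authorize-security-group-ingress --group-id sg-12345678 --protocol tcp --port 8080 --cidr 0.0.0.0/0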

Launching a JMeter VM

Next I am going to create a JMeter instance by going through the wizard for a second time. Just as before, I am using Ubuntu 14.04 as the base image and the user data field to install everything I need during launch:

#!/bin/bash

# Install Java 7
sudo apt-get update
sudo apt-get install openjdk-7-jre-headless -y

# Install JMeter
wget -c http://ftp.ps.pl/pub/apache//jmeter/binaries/apache-jmeter-3.0.tgz -O jmeter.tgz
tar -xf jmeter.tgz

Don't forget to open the default SSH port 22 in the security group of the JMeter instance.

Only a short time later I have two successfully created VMs, with Docker and JMeter fully operational and ready to run some load tests.

Running JMeter tests

Running load tests from the JMeter instance is fairly straightforward now. I am going to remote connect to the JMeter instance, copy a JMeter test file onto the machine and then launch the JMeter command line tool to run the load tests remotely. Afterwards I will download the JMeter results file and analyse the test data in my local JMeter GUI.

Download PuTTY SSH client tools

From here on I will describe the steps required to remote connect from a Windows desktop, which might be slightly different from what you'd have to do to connect from a Unix based system. However, most things are very similar and it should not be too difficult to follow the steps from a Mac or Linux machine as well.

In order to SSH from Windows to a Linux VM you will have to download the PuTTY SSH client. Whilst you are on the download page you might also download the PSCP and PuTTYgen tools. One will be needed to securely transfer files between your Windows machine and the Linux VM and the other to convert the SSH key from the .pem to the .ppk file format.

Convert SSH key from .pem to .ppk

Before we can use PuTTY to connect to the Ubuntu VM we have to convert the SSH key which has been associated with the VM from the .pem to the .ppk file format:

  1. Open puttygen.exe
  2. Click on the "Load" button and locate the .pem SSH key file
  3. Select the SSH-2 RSA option
  4. Click on "Save private key" and save the key as a .ppk file
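As a side note, on Linux the same conversion can be done from the command line with the puttygen tool from the putty-tools package; a sketch assuming your key is called mykey.pem:

# Convert a .pem key to PuTTY's .ppk format (file names are placeholders)
puttygen mykey.pem -o mykey.ppk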

Once completed you can use the new key file with the PuTTY SSH client to remote connect to the EC2 instance.

Remote connect to the EC2 instance

  1. Open putty.exe
  2. Type the public IP of the EC2 instance into the host name field
  3. Prepend ubuntu@ to the IP address in the host name field
    (this is not necessarily required, but speeds up the login process later on)
  4. On the left hand side in the tree view expand the "SSH" node and then select "Auth"
  5. Browse for the .ppk private key file
  6. Go back to "Session" in the tree view
  7. Type in a memorable name into the "Saved Sessions" field and click "Save"
  8. Finally click on the "Open" button and connect to the VM

[Screenshot: PuTTY configuration with saved session]

At this point you should be presented with a terminal window and be connected to the JMeter EC2 instance.

[Screenshot: PuTTY SSH terminal]

Upload a JMeter test file to the VM

Now you can use the pscp.exe tool from a normal Windows command prompt to copy files between your local Windows machine and the Ubuntu EC2 instance in the cloud.

The first argument specifies the source location and the second argument the destination path. You can target remote paths by prepending the username and the saved session name to it.

For example I downloaded the pscp.exe into C:\temp\PuTTY and have an existing JMeter test plan saved under C:\temp\TestPlan.jmx which I would like to upload to the JMeter instance. I named the session in PuTTY demo-session and therefore can run the following command from the Windows command prompt:

C:\temp\PuTTY\pscp.exe C:\temp\TestPlan.jmx [email protected]:TestPlan.jmx

Usually the upload is extremely fast. If you don't know how to create a JMeter test plan then you can follow the official documentation on building a basic JMeter web test plan.

Running JMeter from the command line

After uploading the .jmx file we can switch back to the PuTTY terminal and run the test plan from the JMeter command line.

If you followed all the steps from before then you can find JMeter under /apache-jmeter-3.0/bin/jmeter on the EC2 instance. Use the -n flag to run it in non-GUI mode, the -t parameter to specify the location of the test plan and -l to set the path of the results file:

/apache-jmeter-3.0/bin/jmeter -n -t TestPlan.jmx -l results.jtl

Run this command, wait and watch the test being executed until it's completed.

Download the JMeter results file

Finally when the test has finished you can download the results file via the PSCP tool again:

C:\temp\PuTTY\pscp.exe [email protected]:results.jtl C:\temp\

From here on everything should be familiar: you can open the results.jtl file from any of the available JMeter listeners and analyse the data in the JMeter GUI.

With the help of a cloud provider like Amazon Web Services and Docker containers it is super easy to quickly spin up multiple instances and run many load tests at the same time without them interfering with each other. You can test different application versions or instance setups simultaneously and optimise for the best performance.
