
Deploying Docker Containers on CoreOS Using Fleet

Docker containers are the hot tech du jour and today we're going to look at how to deploy your containers to CoreOS using Fleet.

In my previous post, I talked about how to deploy a NodeJS application using a pretty vanilla Docker container. Now I'm going to walk you through what CoreOS is and how to deploy your containers using Fleet.

What is CoreOS?

The masthead on the CoreOS website puts it perfectly:

Linux for Massive Server Deployments. CoreOS enables warehouse-scale computing on top of a minimal, modern operating system.

Without all the buzzwords, CoreOS is a stripped-down Linux distro that gives you a bare-bones environment designed to run containers. That means you effectively get Systemd, Docker, fleet, and etcd (as well as other low-level pieces), all of which play a role in deploying our containers.

CoreOS is available on a bunch of different cloud platforms including EC2, Google Compute Engine, and Rackspace. You can even run a cluster locally using Vagrant. For today, we're going to be using EC2.

Fleet and Etcd

Bundled with CoreOS you'll find fleet and etcd. Etcd is a distributed key-value store, built on top of the Raft consensus protocol, that acts as the backbone of your CoreOS cluster. Fleet is a low-level init system that uses etcd and interacts with Systemd on each CoreOS machine. It handles everything from scheduling services, to migrating them should you lose a node, to restarting them should they go down. Think of it as Systemd, but for a distributed cluster.
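To make that concrete, here is a toy, in-memory sketch in JavaScript. It is not the real etcd API (etcd speaks HTTP and replicates via Raft); it only illustrates the two primitives fleet builds on: a shared key-value namespace and watches that fire when a key changes.

```javascript
// Toy key-value store with watches. This illustrates the *idea* behind
// etcd, not its actual API, replication, or consistency guarantees.
function ToyStore(){
    this.data = {};
    this.watchers = {};
}

// Setting a key notifies anyone watching it.
ToyStore.prototype.set = function(key, value){
    this.data[key] = value;
    (this.watchers[key] || []).forEach(function(cb){ cb(value); });
};

ToyStore.prototype.get = function(key){
    return this.data[key];
};

// Register a callback that fires whenever the key changes.
ToyStore.prototype.watch = function(key, cb){
    (this.watchers[key] = this.watchers[key] || []).push(cb);
};

// Usage: components coordinate through shared keys, fleet-style.
var store = new ToyStore();
store.watch('/services/stupid-server', function(value){
    console.log('scheduled on:', value);
});
store.set('/services/stupid-server', 'machine-a33809a9');
console.log(store.get('/services/stupid-server'));
```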

Setting up a container

For this tutorial, I created a stupidly simple NodeJS webserver and a container for it. You can find it on GitHub at seanmcgary/stupid-server. All it does is print out the current time for every request you make to it on port 8000.

var http = require('http');

var server = http.createServer(function(req, res){
    res.end(new Date().toISOString());
});

server.listen(8000);

The Dockerfile for it is pretty simple too. It's built off another container that has NodeJS already built and installed. Like in my previous tutorial, it includes a script that pulls the latest git repo and runs the application each time the container is run. This way, updating your application only requires restarting your container.



EXPOSE 8000
ADD start.sh start.sh
RUN chmod +x start.sh
CMD ./start.sh

And the start.sh script that it runs:

git clone https://github.com/seanmcgary/stupid-server

node stupid-server

Creating a Systemd unit

Remember how I said fleet is like a distributed Systemd? That means all we need to do is create a Systemd unit file (in this case a template) and submit it to fleet for scheduling. Fleet is responsible for finding a machine to run it on; once it has, the unit file is copied directly to that machine and run. This is what our unit file will look like:


[Unit]
Description=Stupid Server

[Service]
TimeoutStartSec=0
ExecStartPre=/usr/bin/docker pull
ExecStart=/usr/bin/docker run --name stupidservice -p 9000:8000
ExecStopPre=/usr/bin/docker kill stupidservice
ExecStop=/usr/bin/docker rm stupidservice
Restart=always

[X-Fleet]
X-Conflicts=stupidServerVanilla@*.service

  • ExecStartPre: before we start our service, we want to make sure that we not only have the container downloaded, but that we have the latest version of it.
  • ExecStart: Here we run our container, give it a name, and map port 9000 on our host to port 8000 in the container (the one our server is listening on).
  • ExecStopPre: We need to make sure to kill the container...
  • ExecStop: ...then we can actually remove it.
  • TimeoutStartSec: This is set to 0, telling Systemd not to time out during the startup process. We do this because containers can be large and, depending on your bandwidth, can take a while to download initially.
  • Restart: This tells Systemd to restart the unit if it dies while it is running.
  • X-Conflicts: This line (and the whole X-Fleet block) is specific to fleet. It tells fleet not to schedule this service on a machine that is already running a service matching the name. In this case, we want just one instance of the service per machine.

Spinning up some CoreOS nodes

We're going to spin up three instances of CoreOS on the beta channel (the current version in beta is 367.1.0). Simply search for "coreos-beta-367" if you're using the web console. You're looking for an AMI with the ID "33e5e776".

Once you have found it, select which size you want (I picked the micro instance, but you can pick whichever you want). On the configuration details screen, we'll want to enter "3" for the number of instances. We're also going to provide a cloud config so that CoreOS starts etcd and fleet on startup, along with a discovery token for etcd so that the machines can all find each other.

NOTE: make sure to get your own discovery token and replace the one in the example. To get a new one, go to https://discovery.etcd.io/new.

#cloud-config
coreos:
  etcd:
    discovery: https://discovery.etcd.io/<your discovery token>
    addr: $public_ipv4:4001
    peer-addr: $public_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start

That's pretty much it. Hit the "launch and review" button and in a few moments you'll have three CoreOS instances up and running.

Scheduling Services with Fleet

Now that our cluster is running, we can start to schedule services on it using fleet. This can be done in one of a few ways: you can log directly into one of the machines in your cluster and run fleetctl there, or you can download the latest binary and run it locally. I'm going to run it locally to make things easier.

If you do decide to run it locally, I would suggest creating an alias as you'll need to specify some additional flags to tell fleetctl where to find your cluster. I have the following in my .zshrc:

alias fleetcluster="fleetctl --tunnel <ip-of-a-cluster-machine>"

This way I can just run fleetcluster <command> each time.
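One caveat with the alias approach: aliases aren't expanded in non-interactive shells, so the alias won't work inside scripts. A shell function behaves the same at the prompt and in scripts. This is a sketch; the --tunnel flag proxies fleetctl over ssh, and the IP below is a placeholder for one of your own cluster machines:

```shell
# Function equivalent of the alias; unlike an alias, it also works in scripts.
# --tunnel tells fleetctl to reach the cluster through an ssh tunnel.
# 203.0.113.10 is a placeholder address; use one of your machines.
fleetcluster() {
    fleetctl --tunnel 203.0.113.10 "$@"
}
```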

To schedule a service on fleet, we need our unit file, so cd into the directory of your project (I'll be doing this with the stupid-server from above). Scheduling a service is as easy as fleetcluster start <service>. To schedule the stupid-server, I would run:

$ fleetcluster start stupidServerVanilla@1.service
Job stupidServerVanilla@1.service launched on a33809a9.../

If you look closely you'll notice that there is no stupidServerVanilla@1.service file. This is because stupidServerVanilla@.service is a Systemd template: rather than creating a uniquely named file for each service, we have a single file that is used as a template for every instance. You'll also see that, below the command, fleet responds with where it scheduled your service. Now, if we run fleetcluster list-units we should see it:
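As an aside, a template unit can reference its instance name (everything after the @) through Systemd's %i specifier. A hypothetical variant of our ExecStart line could use it to give each container instance a unique name:

```ini
# %i expands to "1" for stupidServerVanilla@1.service, "2" for @2, etc.
[Service]
ExecStart=/usr/bin/docker run --name stupidservice-%i -p 9000:8000
```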

$ fleetcluster list-units

UNIT                                 STATE       LOAD      ACTIVE        SUB          DESC                     MACHINE
stupidServerVanilla@1.service        launched    loaded    activating    start-pre    Stupid Server            a33809a9.../

Fleet also takes care of letting you view logs as well. If we want to view the logs of our server, just run:

$ fleetcluster journal -f stupidServerVanilla@1.service

-- Logs begin at Sun 2014-08-24 14:57:19 UTC. --
Aug 25 02:13:49 systemd[1]: [/run/fleet/units/stupidServerVanilla@1.service:9] Unknown lvalue 'ExecStopPre' in section 'Service'
Aug 25 02:13:49 systemd[1]: [/run/fleet/units/stupidServerVanilla@1.service:9] Unknown lvalue 'ExecStopPre' in section 'Service'
Aug 25 02:13:49 systemd[1]: Starting Stupid Server...
Aug 25 02:13:50 docker[3401]: Pulling repository
Aug 25 02:16:41 systemd[1]: Started Stupid Server.
Aug 25 02:16:41 docker[3426]: Cloning into 'stupid-server'...

Fleet communicates with systemd and journald and then pipes the log over ssh to your local terminal session.

Launching a Fleet of Services

Since we created a Systemd template for our unit file, we can use fleet to launch as many as we want at once. If we wanted to launch three more services we would just run:

$ fleetcluster start stupidServerVanilla@{2,3,4}.service
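The @{2,3,4} portion isn't fleet syntax at all; it's ordinary brace expansion in bash/zsh, so fleetctl simply receives three separate unit names:

```shell
# Brace expansion happens in the shell (bash/zsh) before fleetctl ever runs.
echo stupidServerVanilla@{2,3,4}.service
# → stupidServerVanilla@2.service stupidServerVanilla@3.service stupidServerVanilla@4.service
```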

Now if we look at our units:

stupidServerVanilla@1.service        launched    loaded   deactivating    stop-sigterm        Stupid Server        a33809a9.../
stupidServerVanilla@2.service        launched    loaded   activating      start-pre           Stupid Server        b4809b8d.../
stupidServerVanilla@3.service        inactive    -        -               -                   Stupid Server        -
stupidServerVanilla@4.service        launched    loaded   activating      start-pre           Stupid Server        27b315e2.../

You'll see that three of them have been deployed and we have one that's left as inactive. This is because we told fleet to only schedule one per machine.

Stopping and Destroying Your Service

When you need to take down your service or upload a new version of your service file, stopping and destroying are very easy:

$ fleetcluster stop stupidServerVanilla@1.service

$ fleetcluster destroy stupidServerVanilla@1.service