Docker is an open-source project to easily create lightweight, portable, self-sufficient containers from any application. The same container that a developer builds and tests on a laptop can run at scale, in production, on VMs, bare metal, OpenStack clusters, public clouds and more.

A little over a year ago, Docker was released, built on top of Linux Containers, or LXC for short. Linux Containers have been around for a while and are really interesting in that they provide operating-system-level virtualization. Rather than having a hypervisor run full operating systems on a piece of hardware (like Xen, if you're familiar with it), Linux Containers rely on the kernel of the host operating system. Think of a Linux Container as a fancy kind of chrooted environment. Docker builds on top of that, essentially allowing you to run an operating system on an operating system. This makes containers a really attractive way to distribute applications: you can build one container and it will run on any host operating system that supports Docker.

Building a base container

Our ultimate goal here is to end up with a container that can run a simple NodeJS web server.

_Note: I am going to assume that you have done some reading on Docker and have probably gone through their introduction/walkthrough. I'm also going to assume that your machine has Docker installed and running._

To start with, we need a base operating system for our container. I personally like Ubuntu, so we're going to use 13.10 as our base image. Let's create a Dockerfile and populate it with the following:


FROM ubuntu:13.10

This tells Docker to go fetch Ubuntu from the registry and use version 13.10. In case you are wondering or have forgotten, containers can be used to build other containers and can be publicly stored in the Docker registry. If you were to publish your own container, you'd end up with a repository name that looks like <username>/<container name>:<tag>. In this case, ubuntu happens to be a special repository that doesn't belong to a particular user.

Now we have a very basic container. If we wanted to build it, we could run:

docker build -t <username>/ubuntu-base .

*This assumes that you are in the same directory as your Dockerfile.

Installing node and npm

Remember how I said containers are effectively operating systems? This means that we can use the container exactly as we would our local machine. To see for yourself that our container is basically a base Ubuntu image, try running:

docker run -i -t <username>/ubuntu-base /bin/bash

This will fire up our built container and execute /bin/bash. The -i flag tells Docker to keep stdin open so we can type into the container, and the -t flag tells it to allocate a pseudo-tty, giving us an interactive session as if we had logged in or SSH'd into a machine.

Now let's get NodeJS and npm installed. We're going to install git while we're at it so that we can clone our repository into our container later. We can do all of this through apt-get.


FROM ubuntu:13.10

# make sure apt is up to date
RUN apt-get update

# install nodejs and npm
RUN apt-get install -y nodejs npm git git-core

If it isn't obvious already, the RUN instruction takes a command and runs it. The interesting thing to note about Docker here is that it caches the state after each instruction as a layer. This is how we can incrementally build a system in a container and not have to rebuild the entire thing every time we deploy that container.
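A practical consequence of this caching: order your instructions from least to most frequently changing, so that edits near the bottom of the Dockerfile reuse the cached layers above them. A sketch (the run.sh file name is illustrative, not part of our build):

```dockerfile
FROM ubuntu:13.10

# Rarely changes -- cached after the first build
RUN apt-get update
RUN apt-get install -y nodejs npm

# Changes often -- only the layers from here down get rebuilt
ADD run.sh /tmp/
```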

Another thing to note here is that Docker runs these commands without stdin attached during a build, so we need to pass the -y flag to apt-get install to tell it "yes, install these packages and their dependencies".
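As an aside, apt can also be told explicitly that no one is around to answer prompts. Setting DEBIAN_FRONTEND (a standard Debian/Ubuntu variable, not something our build strictly needs) suppresses debconf's configuration questions during the build; you still keep the -y for the install confirmation:

```dockerfile
# Tell debconf there is no interactive frontend during the build
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y nodejs npm git
```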

Let's build our container again.

docker build -t <username>/ubuntu-base .

Now, if we were to run our container and open up an interactive session, NodeJS, npm, and git would be available to us on the command line.

A simple Node webserver

Let's create a simple web server in Node using express. First, index.js:


var express = require('express');

var app = express();

app.get('/', function(req, res){
    res.send('Hello from inside a container!');
});

app.listen(8080);

And a package.json to go with it:


{
  "name": "my-cool-webserver",
  "version": "0.0.1",
  "description": "A NodeJS webserver to run inside a docker container",
  "main": "index.js",
  "author": "",
  "license": "MIT",
  "dependencies": {
      "express": "*"
  }
}
To make this container easy to deploy and update, every time it runs it will pull the latest version of our app from a remote git repository. So go ahead and commit and push your app to a git repository.

Running the application

To run our application, we need to again modify our Dockerfile with a few things:

  • Expose/map port 8080 between the container and the host. Remember, a container is basically a fancy chroot, so unless we tell the host operating system to map a port to it, nothing can access the container from the outside, and nothing in the container can access the host.
  • Pull the app from the remote repository
  • Run npm install to make sure express is installed
  • Finally, run our application

Let's modify our Dockerfile:


FROM ubuntu:13.10

# make sure apt is up to date
RUN apt-get update

# install nodejs and npm
RUN apt-get install -y nodejs npm git git-core

ADD start.sh /tmp/

RUN chmod +x /tmp/start.sh

CMD ./tmp/start.sh

So here we use the ADD instruction to copy a file called start.sh into /tmp/ in our container, make it executable, then run it. You're probably wondering what the hell is in start.sh. Aren't we supposed to be running a node app?

Here's what start.sh looks like:

#!/bin/sh

cd /tmp

# try to remove the repo if it already exists
rm -rf <git repo name>; true

git clone <remote git repo>

cd <git repo name>

npm install

node .

The reason we put these commands into a script file is so that Docker won't cache their results. Unlike RUN, the CMD instruction specifies the command to start whatever it is you want running in your container. It is always the last instruction in your Dockerfile and runs every time your container is started or restarted. This way, we clone the repository fresh every time, which makes deploying an update really easy: just restart your container!
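To put the RUN/CMD distinction in miniature (a sketch, not our actual Dockerfile):

```dockerfile
# RUN executes at build time; its result is baked into a cached layer
RUN apt-get install -y curl

# CMD only records the default command; it executes fresh on every
# start/restart of the container and is never cached
CMD ./tmp/start.sh
```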

Let's build this thing and give it a more descriptive name:

docker build -t <username>/my-nodejs-webserver .

Now, to run it, we're going to do something like this:

docker run -p 8080:8080 <username>/my-nodejs-webserver

You'll notice that we have a -p flag in there. This says "take port 8080 on the host operating system and map it to port 8080 in the container". Now we can send and receive web traffic from our container. The other thing you'll notice is that once you run that command, there isn't any output. To see what's going on, run:

docker ps -a

This will give you something that looks like this:

$ docker ps -a
CONTAINER ID        IMAGE                                COMMAND                CREATED             STATUS                    PORTS                    NAMES
4acbdf4c6695        91f00a99f058        /bin/sh -c ./start.s   2 days ago          Exited (0) 2 days ago     0.0.0.0:8080->8080/tcp   hopeful_hawking

That first column is the container id that we can use to attach to our container to view the logs.

docker logs -f 4acbdf4c6695

This will tail the log for you.

When you're ready to stop your container, simply run:

docker stop 4acbdf4c6695

You can also start and restart it in the same way:

docker start 4acbdf4c6695

docker restart 4acbdf4c6695

Now that we're all done, we can push our container to the public registry:

docker push <username>/my-nodejs-webserver

To be continued...

This is the first post in a series I plan to write about my experiences with Docker and with implementing some technologies from CoreOS, including etcd, fleet, and CoreOS itself, to create an automated, distributed application environment.