How to deploy an AWS Lambda with Terraform

AWS Lambda functions are incredibly powerful, mainly due to their stateless nature and ability to scale horizontally almost infinitely. But once you have written a Lambda function, how do you update it? Better yet, how do you automate deploying and updating it across multiple regions? Today, we're going to take a look at how to do exactly that using HashiCorp's Terraform.

What is Terraform?

Managing server resources can be either very manual, or you can automate the process. Automating the process can be tricky though, especially if you have a complex tree of resources that depend on one another. This is where Terraform comes in.

Terraform enables you to safely and predictably create, change, and improve production infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.

Terraform provides a DSL that allows you to describe the resources that you need and their dependencies, allowing Terraform to launch/configure resources in a particular order.

Installing Terraform

Installing Terraform is pretty straightforward.

If you're on macOS simply run:

brew install terraform

If you're on Linux, depending on your distro and package manager of choice, it might be available; otherwise, follow the directions provided on the installation page.

Setting up AWS credentials

Before setting up the credentials, we're going to install the AWS command line interface.

On macOS, the awscli is available through homebrew:

brew install awscli

On Linux, you can often find the awscli in your package manager:

dnf install -y awscli

# or

apt-get install -y awscli

You can also install it manually using pip:

pip install --upgrade --user awscli

Once installed, simply run:

aws configure

And follow the prompts to provide your AWS credentials. This will generate the proper credentials file that Terraform will use when communicating with AWS.
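If you want to double-check what was written (or manage it by hand), the credentials typically end up in ~/.aws/credentials and look roughly like this (the keys below are placeholders):

[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx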

Describe your infrastructure

Now that we have AWS configured, we can start to describe the AWS Lambda that we're going to deploy.

To start, create a new directory.

mkdir terraform-demo

In that directory we're going to create a main.tf file that looks like this:

main.tf

provider "aws" {
    region = "us-east-1"
}

This is telling Terraform that we're going to be using the AWS provider and to default to the "us-east-1" region for creating our resources.

Now, in main.tf, we're going to describe our lambda function:

provider "aws" {
    region = "us-east-1"
}

resource "aws_lambda_function" "demo_lambda" {
    function_name = "demo_lambda"
    handler = "index.handler"
    runtime = "nodejs4.3"
    filename = "function.zip"
    source_code_hash = "${base64sha256(file("function.zip"))}"
}

Here, we're saying that we want a NodeJS-based Lambda that exposes its handler as an exported function called "handler" in the index.js file (don't worry, we'll create this shortly), and that the code will be uploaded as a zip file called "function.zip". We're also taking a hash of the zip file so Terraform can determine whether it needs to re-upload everything.

Create an execution role

Next, we need to set the execution role of our Lambda, otherwise it won't be able to run. In main.tf we're going to define a role in the following way:

resource "aws_iam_role" "lambda_exec_role" {
  name = "lambda_exec_role"assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

This creates an IAM role in AWS that the Lambda function will assume during execution. If you wanted to grant access to other AWS services, such as S3, SNS, etc, this role is where you would attach those policies.
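For example, if you wanted the function to be able to write its logs to CloudWatch, a rough sketch (not part of this tutorial's files, just an illustration) would be to attach the AWS-managed AWSLambdaBasicExecutionRole policy to the role:

resource "aws_iam_role_policy_attachment" "lambda_logs" {
    role = "${aws_iam_role.lambda_exec_role.name}"
    policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}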

Now, we need to add the "role" property to our lambda definition:

resource "aws_lambda_function" "demo_lambda" {
    function_name = "demo_lambda"handler = "index.handler"runtime = "nodejs4.3"filename = "function.zip"source_code_hash = "${base64sha256(file("function.zip"))}"role = "${aws_iam_role.lambda_exec_role.arn}"
}

Creating a test NodeJS function

We specified NodeJS as the runtime for our lambda, so let's create a function that we can upload and use.

index.js

exports.handler = function(event, context, callback) {
    console.log('Event: ', JSON.stringify(event, null, '\t'));
    console.log('Context: ', JSON.stringify(context, null, '\t'));
    callback(null);
};

Now let's zip it up:

zip -r function.zip index.js

Test our Terraform plan

To generate a plan and show what Terraform will execute, run terraform plan:

> terraform plan

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but
will not be persisted to local or remote state storage.


The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

+ aws_iam_role.lambda_exec_role
    arn:                "<computed>"
    assume_role_policy: "{\n\t\"Version\": \"2012-10-17\",\n\t\"Statement\": [\n\t\t{\n\t\t\t\"Action\": \"sts:AssumeRole\",\n\t\t\t\"Principal\": {\n\t\t\t\t\"Service\": \"lambda.amazonaws.com\"\n\t\t\t},\n\t\t\t\"Effect\": \"Allow\",\n\t\t\t\"Sid\": \"\"\n\t\t}\n\t]\n}\n"
    create_date:        "<computed>"
    name:               "lambda_exec_role"
    path:               "/"
    unique_id:          "<computed>"

+ aws_lambda_function.demo_lambda
    arn:              "<computed>"
    filename:         "function.zip"
    function_name:    "demo_lambda"
    handler:          "index.handler"
    last_modified:    "<computed>"
    memory_size:      "128"
    publish:          "false"
    qualified_arn:    "<computed>"
    role:             "${aws_iam_role.lambda_exec_role.arn}"
    runtime:          "nodejs4.3"
    source_code_hash: "kWxb4o2JvWUnGncB2oSLvzf7d6+ZJumqB2w0Q8DHXtY="
    timeout:          "3"
    version:          "<computed>"


Plan: 2 to add, 0 to change, 0 to destroy.

This tells us that Terraform is going to add both the role and the lambda when it applies the plan.

When you're ready, go ahead and run terraform apply to create your lambda:

> terraform apply

aws_iam_role.lambda_exec_role: Creating...
  arn:                "" => "<computed>"
  assume_role_policy: "" => "{\n\t\"Version\": \"2012-10-17\",\n\t\"Statement\": [\n\t\t{\n\t\t\t\"Action\": \"sts:AssumeRole\",\n\t\t\t\"Principal\": {\n\t\t\t\t\"Service\": \"lambda.amazonaws.com\"\n\t\t\t},\n\t\t\t\"Effect\": \"Allow\",\n\t\t\t\"Sid\": \"\"\n\t\t}\n\t]\n}\n"
  create_date:        "" => "<computed>"
  name:               "" => "lambda_exec_role"
  path:               "" => "/"
  unique_id:          "" => "<computed>"
aws_iam_role.lambda_exec_role: Creation complete
aws_lambda_function.demo_lambda: Creating...
  arn:              "" => "<computed>"
  filename:         "" => "function.zip"
  function_name:    "" => "demo_lambda"
  handler:          "" => "index.handler"
  last_modified:    "" => "<computed>"
  memory_size:      "" => "128"
  publish:          "" => "false"
  qualified_arn:    "" => "<computed>"
  role:             "" => "arn:aws:iam::183555302174:role/lambda_exec_role"
  runtime:          "" => "nodejs4.3"
  source_code_hash: "" => "kWxb4o2JvWUnGncB2oSLvzf7d6+ZJumqB2w0Q8DHXtY="
  timeout:          "" => "3"
  version:          "" => "<computed>"
aws_lambda_function.demo_lambda: Creation complete

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate

To see if it worked properly, you can use the aws cli to list all of your lambda functions:

> aws lambda list-functions

{
    "Functions": [
        {
            "Version": "$LATEST", 
            "CodeSha256": "kWxb4o2JvWUnGncB2oSLvzf7d6+ZJumqB2w0Q8DHXtY=", 
            "FunctionName": "demo_lambda", 
            "MemorySize": 128,"CodeSize": 294,"FunctionArn": "arn:aws:lambda:us-east-1:183555302174:function:demo_lambda", 
            "Handler": "index.handler", 
            "Role": "arn:aws:iam::183555302174:role/lambda_exec_role", 
            "Timeout": 3, 
            "LastModified": "2017-04-05T14:02:26.636+0000", 
            "Runtime": "nodejs4.3", 
            "Description": ""
        }
    ]
}

We can now invoke our lambda directly from the aws cli. In this script, I'm using a command-line utility called jq to parse the JSON response. If you're on macOS, simply run brew install jq to install it:

> aws lambda invoke \
    --function-name=demo_lambda \
    --invocation-type=RequestResponse \
    --payload='{ "test": "value" }' \
    --log-type=Tail \
    /dev/null | jq -r '.LogResult' | base64 --decode

START RequestId: 808188ef-1a09-11e7-85e1-71d3bf75c46b Version: $LATEST
2017-04-05T14:09:37.153Z    808188ef-1a09-11e7-85e1-71d3bf75c46b    Event:  {
    "test": "value"
}
2017-04-05T14:09:37.153Z    808188ef-1a09-11e7-85e1-71d3bf75c46b    Context:  {
    "callbackWaitsForEmptyEventLoop": true,
    "logGroupName": "/aws/lambda/demo_lambda",
    "logStreamName": "2017/04/05/[$LATEST]3aa59f4816ae440a805a14fda6e258c7",
    "functionName": "demo_lambda",
    "memoryLimitInMB": "128",
    "functionVersion": "$LATEST",
    "invokeid": "808188ef-1a09-11e7-85e1-71d3bf75c46b",
    "awsRequestId": "808188ef-1a09-11e7-85e1-71d3bf75c46b",
    "invokedFunctionArn": "arn:aws:lambda:us-east-1:183555302174:function:demo_lambda"
}
END RequestId: 808188ef-1a09-11e7-85e1-71d3bf75c46b
REPORT RequestId: 808188ef-1a09-11e7-85e1-71d3bf75c46b    Duration: 0.47 ms    Billed Duration: 100 ms     Memory Size: 128 MB    Max Memory Used: 10 MB

This will run your lambda and decode the last 4kb of the logfile. To view the full logfile, log into the aws web console and head over to the CloudWatch logs.
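If you'd rather stay on the command line, the aws cli can read those same CloudWatch logs. Lambda writes to a log group named after the function (here /aws/lambda/demo_lambda, as seen in the context output above), so something along these lines should work:

> aws logs describe-log-streams --log-group-name /aws/lambda/demo_lambda

> aws logs get-log-events \
    --log-group-name /aws/lambda/demo_lambda \
    --log-stream-name '<log stream name from the previous command>'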

Wrap up

That's it! From here, you'll be able to set up a lambda that gets run on certain triggers - SNS events, S3 operations, data coming in from a Kinesis firehose, etc.
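As a rough sketch of what wiring up one of those triggers looks like in Terraform (none of this is in the demo repo and the topic name is made up), subscribing the Lambda to an SNS topic would look something like this:

resource "aws_sns_topic" "demo_topic" {
    name = "demo_topic"
}

resource "aws_sns_topic_subscription" "demo_lambda_sub" {
    topic_arn = "${aws_sns_topic.demo_topic.arn}"
    protocol = "lambda"
    endpoint = "${aws_lambda_function.demo_lambda.arn}"
}

resource "aws_lambda_permission" "allow_sns" {
    statement_id = "AllowExecutionFromSNS"
    action = "lambda:InvokeFunction"
    function_name = "${aws_lambda_function.demo_lambda.function_name}"
    principal = "sns.amazonaws.com"
    source_arn = "${aws_sns_topic.demo_topic.arn}"
}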

All of the files we've created here can be found on Github at seanmcgary/blog-lambda-terraform

How to deploy a NodeJS app to Kubernetes

Previously, I've talked about how to get a NodeJS app running in a container, and today we're going to deploy that app to Kubernetes.

What is Kubernetes?

For those that haven't ventured into container orchestration, you're probably wondering what Kubernetes is.

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

Kubernetes ("k8s" for short), was a project originally started at, and designed by Google, and is heavily influenced by Google's large scale cluster management system, Borg. More simply, k8s gives you a platform for managing and running your applications at scale across multiple physicaly (or virtual) machines.

Installing minikube and kubectl

To make things easy, we're going to use minikube on our local machine to run a single-node kubernetes cluster. Minikube is a handy tool that starts a virtual machine and bootstraps the cluster for you.

First, if you don't have VirtualBox, go download and install it. While minikube works with other virtualization platforms, I've found VirtualBox to be the most reliable.

Next, we need to install not only minikube, but also kubectl which will be used to interact with our k8s cluster. To do so, run the script below:

#!/bin/bash

ARCH=$(uname | awk '{print tolower($0)}')
TARGET_VERSION="v0.15.0"
MINIKUBE_URL="https://storage.googleapis.com/minikube/releases/${TARGET_VERSION}/minikube-${ARCH}-amd64"

KUBECTL_VER="v1.5.1"
KUBECTL_URL="http://storage.googleapis.com/kubernetes-release/release/${KUBECTL_VER}/bin/${ARCH}/amd64/kubectl"echo "installing latest kubectl..."
curl -Lo kubectl $KUBECTL_URL && chmod +x kubectl && sudo mv kubectl /usr/local/bin/

echo "installing latest minikube..."
curl -Lo minikube $MINIKUBE_URL && chmod +x minikube && sudo mv minikube /usr/local/bin/

ISO_URL="https://storage.googleapis.com/minikube/iso/minikube-v1.0.1.iso"
minikube start \
    --vm-driver=virtualbox \
    --iso-url=$ISO_URL

echo "starting minikube dashboard..."
minikube dashboard

If everything has worked correctly, the kubernetes dashboard should open in your browser.

Using kubectl

When minikube starts, it will automatically set the context for kubectl. If you run kubectl get nodes you should see something like this:

kubectl get nodes
NAME       STATUS    AGE
minikube   Ready     2m

The same goes for kubectl get pods --all-namespaces:

kubectl get pods --all-namespaces
NAMESPACE     NAME                          READY     STATUS    RESTARTS   AGE
kube-system   kube-addon-manager-minikube   1/1       Running   0          3m
kube-system   kube-dns-v20-qkzgg            3/3       Running   0          3m
kube-system   kubernetes-dashboard-1hs02    1/1       Running   0          3m

While the dashboard is useful for visualizing pods and deployments, we'll primarily be using kubectl to interact with our cluster.

The demo NodeJS app

Back in the article I wrote on deploying Docker containers with CoreOS Fleet, we wrote a little NodeJS server called "stupid-server" (for being stupidly simple). Stupid-server can be found over at github.com/seanmcgary/stupid-server and we'll be using it for this example as well. In the repository, you should find a server that looks something like this:

var http = require('http');

var server = http.createServer(function(req, res){
    res.end(new Date().toISOString());
});

server.listen(8000);

And a Dockerfile that looks like this:

FROM quay.io/seanmcgary/nodejs-raw-base
MAINTAINER Sean McGary <sean@seanmcgary.com>


EXPOSE 8000

ADD start.sh start.sh
RUN chmod +x start.sh
CMD ./start.sh

By default, the Dockerfile will, at runtime, clone the repo and run the server. Feel free to edit the Dockerfile to add the repo you've already cloned to the container rather than pulling it every time; a sketch of that change is shown below.
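For example, assuming you've cloned stupid-server next to the Dockerfile and that its entry point is server.js (both assumptions, adjust for your layout), the change could look like this:

FROM quay.io/seanmcgary/nodejs-raw-base
MAINTAINER Sean McGary <sean@seanmcgary.com>

# copy the local stupid-server checkout into the image instead of cloning at runtime
COPY . /opt/stupid-server

EXPOSE 8000
CMD node /opt/stupid-server/server.js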

Build the container

To build the container, run:

CONTAINER_NAME="<container name>"
docker build -t $CONTAINER_NAME:latest .
docker push $CONTAINER_NAME:latest

Note - since k8s is running in its own virtual machine, it doesn't have access to Docker images that you build locally. In order to proceed with this tutorial, you'll need to push your image to some place accessible by k8s. Dockerhub is available and free, but I would highly suggest Google's Container Registry, which is extremely low cost and supports private images. You can find the gcr getting started guide over here.

Creating a deployment

To deploy our app, we're going to use the "Deployment" pod type. A deployment wraps the functionality of Pods and ReplicaSets to allow you to declaratively update your application. This is the magic that allows you to leverage zero-downtime deploys via Kubernetes' RollingUpdate functionality.

deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: stupid-server-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: stupid-server
    spec:
      containers:
      - name: stupid-server
        image: <container image>
        imagePullPolicy: Always
        ports:
        - containerPort: 8000

To deploy your deployment, run:

kubectl create -f deployment.yaml

To get your deployment with kubectl, run:

kubectl get deployments
NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
stupid-server-deployment   1         1         1            1           7m

This metadata will update as your deployment is created and pulls down containers.
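If you want to watch that happen, kubectl can follow the rollout and show the pods it creates (the pod name below is just an example of what you might see):

kubectl rollout status deployment/stupid-server-deployment
deployment "stupid-server-deployment" successfully rolled out

kubectl get pods
NAME                                        READY     STATUS    RESTARTS   AGE
stupid-server-deployment-1211740929-fjzbx   1/1       Running   0          1m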

Creating a service

Now that our application is deployed, we need a way to expose it to traffic from outside the cluster. To do this, we're going to create a Service. Since we're not covering IngressControllers and advanced load balancing in this tutorial, we're going to open up a NodePort directly to our application on port 30061.

service.yaml

apiVersion: v1
kind: Service
metadata:
  name: stupid-server
  labels:
    app: stupid-server
spec:
  selector:
    app: stupid-server
  ports:
  - port: 8000
    protocol: TCP
    nodePort: 30061
  type: LoadBalancer

Now we can create the service within Kubernetes:

kubectl create -f service.yaml

And we can get the details by running:

kubectl get services
NAME            CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
kubernetes      10.0.0.1     <none>        443/TCP          1h
stupid-server   10.0.0.121   <pending>     8000:30061/TCP   12m

Accessing the stupid-server

In the Service we defined a NodePort; this exposes a port directly to the IP address that minikube is running on so that your app is accessible outside of the cluster.

By default, minikube binds to the IP address 192.168.99.100. To double check this, you can run minikube ip, which will return the current IP address.

To access your service, simply curl the IP on port 30061:

curl http://192.168.99.100:30061
2017-01-17T16:10:55.153Z

If everything is successful, you'll see a timestamp returned from your application.
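If you'd rather not hard-code the IP and port, minikube can also build the URL for a NodePort service for you:

minikube service stupid-server --url
http://192.168.99.100:30061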

Wrap up

This tutorial was meant as a very quick overview of how to get a NodeJS application up and running on Kubernetes with the least amount of configuration possible. Kubernetes is an incredibly powerful platform that has many more features than we used today. Stay tuned for more tutorials and articles on how to work with Kubernetes!

How to structure a Node.js Express project

"How do I structure my Node.js/Express project?". I see this question come up quite regularly, especially on the Node.js and Javascript subreddits. I also remember thinking the exact same thing years ago when I first started using Node. Today, Im going to outline how I typically start structuring projects so that they're modular and easy to extend as the project grows.

The Majestic Monolith

At the end of February 2016, David Heinemeier Hansson (or you may know him simply as DHH, creator of Rails, founder of Basecamp, etc) wrote an article on Medium called "The Majestic Monolith". For the longest time, web apps existed as gigantic, monolithic codebases, where a single codebase contained nearly everything the app needed to run. In fact, only within the last 5 or so years have the terms "microservice" and "service oriented architecture" become really mainstream; so mainstream that I see people on discussion forums trying to pre-optimize the ever loving crap out of their platforms before they even exist!

Stop. Hold on. Back up. Let's first talk about the reasons microservices and service oriented architectures exist. These modular patterns exist generally to solve a problem of scale; this could be one of a number of things. Maybe you have a very large team and you want to break things up into smaller pieces so that smaller teams can own specific things. Take Google for example. They have hundreds of public facing services and god knows how many internal services. Splitting things into a SOA makes sense for them. At the same time though, all of their code is in a single monolithic repository.

What about scaling the actual application? Once you hit a certain size, maybe you need to split things up and have an authorization server, a billing service, a logging service; this way you can scale each service independently without (hopefully) bringing down the entire platform.

Embrace the Majestic Monolith - at least to start

As a single developer, or even a team of 5, starting on a project for the first time, you don't have any of the problems above. Rather than trying to worry about dependency management of Node modules and spending time trying to write, deploy, and monitor tons of services, my suggestion is to start with a monolith.

Just because you are starting with a monolith doesn't mean it can't be modular.

Starting with a monolith gives you a few advantages:

All of your code is in one place

This makes managing things easy. Rather than writing Node modules that are installed via npm, you can require them out of a directory of your project. Because of this...

Everyone on your team can find things easily

There's only one repository to look at, which means you don't have to go digging through tons of repos on Github to find what you're looking for. Git exists for a reason, so the excuse of "there are too many people doing too many things at once" is really a poor one. Instead, learn how to properly use branches and merge features. Feature flags are also your friend in this case.

No npm dependency management hell

From personal experience, prematurely creating npm modules is just shooting yourself in the foot. If you end up with 3 runnable services that depend on the same module, that is now three things that can easily break, especially if this shared module does something important like interact with your database. If you make a schema change, you now need to go through the tedious process of updating the version of your DB module in each service, re-test each service, deploy it, etc. This gets incredibly annoying, especially when your schema is still being hashed out and is prone to change.

Building your monolith majestic

Let's say for instance that we're writing a RESTful API service built on top of PostgreSQL. I tend to have three different layers to provide the best combination of separation of concerns and modularity. The example I'm going to walk you through is fairly simple: we're going to have the notion of a "company" in our database and each "company" can have n many "users" associated with it. Think of this as the start of a multi-seat SaaS app where users are grouped/scoped by the company they work for, but can only belong to one company.

Here's the directory structure we'll be working with:

.
├── index.js
├── lib
│   ├── company
│   │   └── index.js
│   └── user
│       └── index.js
├── models
│   ├── company.js
│   └── user.js
├── routes
│   └── account
│       └── index.js
└── services
    └── account
        └── index.js

The schema of our models is going to look something like this:

user
-----------
- id
- name
- email
- password
- company_id

company
-----------
- id
- name


user (1) --> (1) company
company (1) --> (n) user

Let's start with the foundation of our platform:

Models

These are the core of everything. I really like sequelize as it's a very featureful and powerful ORM that can also get out of your way if you need to write raw SQL queries.

Your models (and thus the data in your database) are the very foundation of everything in your application. Your models can express relationships and are used to build sets of data to eventually send to the end user.

Core library/model business logic/CRUD layer

This is a small step up from the model level, but still pretty low level. This is where we start to actually interact with our models. Typically I'll create a corresponding file for each model that wraps the basic CRUD operations of that model so that we're not repeating the same operations all over the place. The reason I do this here and not in the model is so we can start to handle some higher level features.

Given our example use-case, if you wanted to list all users in a company, your model shouldn't be concerned with interpreting query data; it is only concerned with actually querying the database. For example:

lib/user/index.js

let models = require('../../models');

const listUsersForCompany = exports.listUsersForCompany = (companyId, options = { limit: 10, offset: 0 }) => {
    let { limit, offset } = options;

    return models.Users.findAll({
        where: {
            company_id: companyId
        },
        limit: limit,
        offset: offset
    })
    .then((users) => {
        let cursor = null;

        if(users.length === limit){
            cursor = {
                limit: limit,
                offset: offset + limit
            };
        }

        return Promise.all([users, cursor]);
    });
}

In this example, we've created a very basic function to list users given a companyId and some limit/offset parameters.

Each of these modules should correspond to a particular model. At this level, we don't want to be introducing other model module dependencies, to allow for the greatest level of composability. That's where the next level up comes in:

Services

I refer to these modules as services because they take different model-level modules and perform some combination of actions. Say we want to write a registration system for our application. When a user registers, you take their name, email, and password, but you also need to create a company profile which could potentially have more users down the road.

One company per user, many users per company. Being that a user depends on the existence of a company, we're going to transactionally create the two together to illustrate how a service would work.

We have our user module:

lib/user/index.js

let models = require('../../models');

exports.createUser = (userData = {}, transaction) => {
    // do some stuff here like hash their password, etc

    let txn = (!!transaction ? { transaction: transaction } : {});
    return models.User.create(userData, txn);
};

And our company module:

lib/company/index.js

let models = require('../../models');

exports.createCompany = (companyData = {}, transaction) => {
    // do some other prep stuff here
    let txn = (!!transaction ? { transaction: transaction } : {});
    return models.Company.create(companyData, txn);
};

And now we have our service that combines the two:

services/account/index.js


const User = require('../../lib/user');
const Company = require('../../lib/company');

let models = require('../../models');

exports.registerUserAndCompany = (data = { user: {}, company: {} }) => {

    return models.sequelize.transaction()
    .then((t) => {
        return Company.createCompany(data.company, t)
        .then((company) => {
            let user = data.user;
            user.company_id = company.get('id');

            return User.createUser(user, t);
        })
        .then((user) => {
            return t.commit()
            .then(() => user);
        })
        .catch((err) => {
            t.rollback();
            throw err;
        });
    });
};

By doing things this way, it doesn't matter if the user or company are created in the same transaction, or even the same request, or even if a company is created at all. For example, what if we wanted to create a user and add them to an existing company? We can either add another function to our account service, or our route handler could call our user module directly, since it would already have the company_id in the request payload (a rough sketch of that is below).
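As a sketch of that second option (the route path and response handling here are invented purely for illustration), a route handler could call the user module directly:

routes/account/index.js

const express = require('express');
const User = require('../../lib/user');

const router = express.Router();

// create a user and attach them to an existing company; the company_id
// is expected to already be in the request payload
router.post('/users', (req, res) => {
    return User.createUser(req.body)
    .then((user) => res.status(201).json(user))
    .catch((err) => res.status(500).json({ error: err.message }));
});

module.exports = router;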

My app has grown; I NEED microservices!

That's great! However, you can still build microservices without breaking apart your monolithic repository (at least until you absolutely need to due to team sizes, iteration speed, etc). Our goal from the beginning was to structure our application in a way that was modular and composable. This means that there is nothing wrong with creating new executables that simply use your monolith as a central library. This way, everything remains in the same repository and all services share the same identical modules. You've essentially created a core library that you can build things on top of.

The only overhead when deploying things is the potential duplication of your repository across services. If you're using something like Docker, which has file system layering, or rkt containers, which also do some file magic to cache things, then you can actually share the single repository and simply execute whichever service you need, and that overhead potentially decreases.
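For instance, with Docker you could build one image from the monolith and pick the entry point per service at run time (the image name and service paths here are hypothetical):

# one image built from the monolithic repo
docker build -t myapp:latest .

# run each service from the same image, just with a different entry point
docker run -d myapp:latest node services/api/index.js
docker run -d myapp:latest node services/worker/index.js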

Getting started with React and Redux - Part 1

React is pretty awesome, but getting started can be tough. Do you use flux? redux? Do you use the new ES6 features and compile with Babel? How do you compile everything with Webpack?

Back in December of 2015, the year was dubbed "The Year of Javascript Fatigue", and rightfully so. You have all of these new technologies and libraries being developed, and before people can decide on a best practice, the next hot library has hit. Maybe you found yourself wanting to try out these cool new things, but quickly felt turned off by how hard it was to get started because literally everyone had an opinion on how you should do it.

Now that the dust has settled a little bit, we're going to take a walk through how to set up a React project using redux for our datalayer, babel for transpiling ES6 features, and webpack for bundling it all together.

For those that are out of the loop, React is a component-based Javascript view library built by Facebook. "How does this compare to Angular?" you ask. React is just the "view" portion of MV-whatever, allowing you to choose how you architect your data.

Many of the popular state-management libraries follow the action-reducer pattern set out by Flux. Flux at a high level dictates that data only flows in one direction (unlike Angular's two-way data binding) and thus state is managed by a centralized data store. Over time, the Flux pattern of data flow has been refined and simplified, so for the purpose of this how-to, we're going to look at Redux which is a bit easier to grasp.
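To make that concrete before we dive into tooling, here's a tiny, framework-free sketch of the reducer idea Redux is built around (purely illustrative, not code we'll reuse later): state is never mutated, and every action produces a new state.

const counter = (state = { count: 0 }, action) => {
    switch (action.type) {
        case 'INCREMENT':
            return { count: state.count + 1 };
        default:
            return state;
    }
};

let state = counter(undefined, {});            // { count: 0 }
state = counter(state, { type: 'INCREMENT' }); // { count: 1 }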

Getting started

There are two ways you can go about all of this:

  1. Simply use React and all of the libraries standalone without any build/compilation tools
  2. Set up a build/compilation environment with things like Babel and Webpack.

Option 2 is more complicated, but probably what you'll run into in a production setting, so we're going to walk through how to set things up. This means that we're going to use the new ES2015/ES7 features provided in Babel and we're going to use Webpack to bundle everything together to distribute in a single javascript file.

Installing Webpack and Babel

First let's initialize our project:

mkdir react-intro && cd react-intro

npm init -y
mkdir src
mkdir src/components
mkdir src/store
mkdir -p dist/js
mkdir server
touch src/main.js
touch server/index.js

This should give us a directory structure that looks like this:

.
├── dist
│   └── js
├── package.json
├── server
│   └── index.js
└── src
    ├── components
    ├── main.js
    └── store

In your project's directory, we want to run the following to install Webpack and the necessary Babel plugins:

npm install --save \
    webpack \
    babel-loader \
    babel-core \
    babel-plugin-syntax-jsx \
    babel-preset-react \
    babel-preset-es2015 \
    babel-preset-stage-0

Our package.json file now looks like this:

{
  "name": "react-intro",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "babel-core": "^6.7.2",
    "babel-loader": "^6.2.4",
    "babel-plugin-syntax-jsx": "^6.5.0",
    "babel-preset-es2015": "^6.6.0",
    "babel-preset-react": "^6.5.0",
    "babel-preset-stage-0": "^6.5.0",
    "webpack": "^1.12.14"
  }
}

Setting up our build process

To compile everything we need to create two files:

  • A webpack.config.js to tell webpack how to compile everything
  • A .babelrc to tell Babel which presets to load and use

./.babelrc


{
    "presets": [
        "react",
        "es2015",
        "stage-0"
    ]
}

./webpack.config.js

'use strict';

let path = require('path');

module.exports = {
    entry: path.resolve(__dirname + '/src/main.js'),
    output: {
        path: path.resolve(__dirname + '/dist/js'),
        filename: 'main.js',
        devtoolLineToLine: true
    },
    module: {
        loaders: [
            {
                test: /src\/.+.jsx?$/,
                exclude: /node_modules/,
                loader: 'babel'
            }
        ]
    }
}

A lot of the webpack config looks scary and complicated, so let's take a few of these sections and break them down:

{
    entry: path.resolve(__dirname + '/src/main.js'),
}

This tells webpack that ./src/main.js is the entry point to our application and where it should start to compile things.

{
    output: {
        path: path.resolve(__dirname + '/dist/js'),
        filename: 'main.js',
        devtoolLineToLine: true
    }
}

Now we tell it to place the compiled source into a file called main.js in the ./dist/js directory.

{
    loaders: [
        {
            test: /src\/.+.jsx?$/,
            exclude: /node_modules/,
            loader: 'babel'
        }
    ]
}

This loader is specific to babel. It says that every file in the ./src directory with a .js or .jsx extension can be included and compiled using the presets specified. The presets are babel-specific and give us the functionality of not only ES2015/ES6 features, but also some ES7 features (that's what stage-0 gives us).

Install react, react-redux, react-router, and friends

Now we want to install react, react-redux, react-router, immutable, redux-actions, redux-logger:

  • react: will allow us to define and structure our views using JSX
  • react-redux: react-specific bindings for redux to manage the state of our app
  • react-router: gives us the ability to load specific components for a given route
  • immutable: to make detecting changes easier, we're going to use Immutable, this way a simple variable reference comparison will tell us if two objects are equal.
  • redux-actions: eliminates a lot of boilerplate when creating redux actions
  • redux-logger: a logger middleware that makes it easy to visualize what actions are firing and what data is changing.

npm install --save \
    react \
    react-dom \
    react-redux \
    react-router \
    immutable \
    redux-actions \
    redux-logger

Creating a test server

One of the nice things about react-router is that it supports the HTML5 history API. In order to properly illustrate this and support a hard reload when you navigate to a new page, we're going to run a small Node.js server with express to serve up our client side app and handle server side routing.

Install the dependencies

We'll need two modules - express and serve-static.

npm install --save \
    express \
    serve-static

The server

Our server is super simple; just an HTML template that includes our client side app and provides a mount point (<div id="app-root">) for our application.

'use strict';

const express = require('express');
const serveStatic = require('serve-static');
const path = require('path');

const template = `
<html>
    <head></head>
    <body>
        <div id="app-root"></div>
        <script type="text/javascript" src="/js/main.js"></script>
    </body>
</html>
`;

const app = express();

app.use(serveStatic(path.resolve(__dirname + '/../dist')));

app.get('*', (req, res) => {
    res.set('Content-Type', 'text/html');
    res.send(template);
});

app.listen(8080, () => {
    console.log('server listening on port 8080');
});

Starting the server is as simple as running:

node server

Writing and compiling our first component

As a super basic example to make sure that we have everything set up correctly, we're going to create a single react component and render it to the DOM. Your src/main.js file should look like this:

'use strict';

import React, { Component } from 'react';
import { render } from 'react-dom';

class TestComponent extends Component {
    constructor(props){
        super(props);
    }

    render(){
        return (
            <h1>Hello World!</h1>
        );
    }
}

render(<TestComponent />, document.getElementById('app-root'));

To compile, run webpack:

./node_modules/.bin/webpack --config ./webpack.config.js

Start up your server and you should see a nice big "Hello World" on the page.

Ready to start building a more functional application? Stay tuned for part 2!

How to run Node.js in a rocket container

Last time we talked about "Building a fedora-based rocket container", so today we're going to use that as a base to build a container for running NodeJS applications.

If you are just joining us, please go back and read "Building a fedora-based rocket container" as the post includes instructions on how to get set up with rkt, acbuild, and actool and introduces the basics of building a container.

Building a NodeJS rkt container

While it is possible to statically compile node, native npm modules will sometimes need to link against libraries included in the system. So for this particular demonstration, we're going to use the Fedora container we created in the previous post as a base for our node container.

Back to acbuild

Our acbuild procedure is going to look something like this:

acbuild begin
sudo acbuild dependency add <your domain>/fedora:latest
sudo acbuild set-name <your domain>/nodejs
sudo acbuild label add version "4.2.3"sudo acbuild run -- /bin/bash -c "curl https://nodejs.org/dist/v4.2.3/node-v4.2.3-linux-x64.tar.gz | tar xvz --strip-components=1 -C /usr/local"sudo acbuild write nodejs-4.2.3-linux-amd64.aci
sudo acbuild end

Let's go through this step by step:

sudo acbuild dependency add <your domain>/fedora:latest

This tells acbuild to use the Fedora container we built in the previous post. As you can see, we're also specifying a version of latest. acbuild will first check the local container cache to see if it exists, otherwise it will use http-based discovery to locate the container (more on discovery and how to set it up to come in a later post).

acbuild label add version "4.2.3"

Since we're pulling in node v4.2.3, we'll tag the version of our container as such.

sudo acbuild run -- /bin/bash -c "curl https://nodejs.org/dist/v4.2.3/node-v4.2.3-linux-x64.tar.gz | tar xvz --strip-components=1 -C /usr/local"

acbuild run is analogous to the RUN parameter you would find in a Dockerfile; it can be used to execute a command within the container. In the case of acbuild (and rkt), what happens is acbuild actually starts systemd-nspawn to run the command against the rootfs as defined by the included dependencies.

sudo acbuild write nodejs-4.2.3-linux-amd64.aci

Now we're getting a little more fancy with our file naming. In this case, we have named our aci in a way that allows us to make it discoverable later on, following the format of:

{name}-{version}-{os}-{arch}.{ext}

So if I named my container seanmcgary.com/nodejs, the discovery mechanism would look for it at:

https://seanmcgary.com/nodejs-4.2.3-linux-amd64.aci

Packaging an application

Now that we have our nodejs base container, we can create another container to house our test application. A while back I wrote a little app called stupid-server that can be found over on github at seanmcgary/stupid-server. Let's create our container:

# first clone the repo
git clone https://github.com/seanmcgary/stupidServer.git
acbuild begin
sudo acbuild dependency add <your domain>/nodejs:4.2.3
sudo acbuild set-name <your domain>/stupid-server
sudo acbuild label add version 1.0.0
sudo acbuild copy ./stupid-server /stupid-server
sudo acbuild set-exec -- /bin/bash -c "node /stupid-server"sudo acbuild write stupidServer-1.0.0-linux-amd64.aci
sudo acbuild end

We have some new commands in our process:

sudo acbuild copy ./stupid-server /stupid-server

This one is pretty straightforward - takes a local file/directory and a destination path of where to put it in your container.

sudo acbuild set-exec -- /bin/bash -c "node /stupid-server"

Here, we are specifying what to run when rkt executes our container. set-exec is analogous to CMD <command> found in a Dockerfile.

Running our application

As a quick recap, we have an application that inherits a chain of containers that looks like this:

fedora --> nodejs --> stupidServer

Now we can actually run our container with rkt:

sudo rkt run --insecure-options=all --net=host ./stupidServer-1.0.0-linux-amd64.aci

rkt: using image from local store for image name coreos.com/rkt/stage1-coreos:0.13.0
rkt: using image from file /home/core/node/stupidServer-1.0.0-linux-amd64.aci
rkt: using image from local store for image name seanmcgary.com/nodejs,version=latest
rkt: using image from local store for image name seanmcgary.com/fedora,version=latest

If you want to push it to the background, you can also run it with systemd-run:

sudo systemd-run rkt run --insecure-options=all --net=host /home/core/stupidServer-1.0.0-linux-amd64.aci

Now with your container running, you should be able to hit your server:

curl http://localhost:8000/
2015-12-16T00:36:46.694Z

Wrap up

That's it! Now that you know how to build containers based off of other containers, you should be able to figure out how to deploy your own app in a containerized fashion.

Next time, we'll talk about how to set up discovery so that you can host your containers in a central location.

Building a fedora-based rocket container

Containers are basically taking over the world; Docker, rkt, systemd-nspawn, LXC, etc. Today, we're going to talk about rkt (pronounced "rocket") and how to get started building rkt-runnable containers.

What is rkt?

rkt (pronounced "rock-it") is a CLI for running app containers on Linux. rkt is designed to be composable, secure, and fast.

Alright, so what's an App Container (appc)?

An application container is a way of packaging and executing processes on a computer system that isolates the application from the underlying host operating system.

In other words, rkt is a runtime implementation that uses the appc container spec. It leverages systemd to manage processes within the container, making it compatible with orchestration tools such as fleet and Kubernetes.

Containers can include basically anything from a single, static binary, to an entire root file system. Today, we're going to look at building a Fedora based container that can then be used as a foundation for building other containers on top of it. This will effectively give us the equivalent of using the Fedora docker image.

Boot up CoreOS

To follow along, you'll need to boot an instance of CoreOS of some kind (AWS, GCE, Azure, DigitalOcean, Vagrant, etc). I would use either the latest Beta or Alpha channel release to be sure you have the latest versions of rkt and actool.

Fetching Fedora

We're striving for a super minimal image to use as our base layer, and it just so happens that the folks over at Fedora build a Docker base image which is nothing more than a stripped down Fedora file system. So we're going to use that, but we're not going to use Docker at all to get it.

Here you will find all of the Fedora build images. Builds in green have passed and are probably safe to use. We're going to be using Fedora 23, so look for a build that says f23-candidate, Fedora-Docker-Base-23....

Once you've SSHd into your machine as the core user, fetch the Fedora image:

mkdir fedoraLayer

# fetch and unpack fedora build
curl https://kojipkgs.fedoraproject.org/work/tasks/7696/12107696/Fedora-Docker-Base-23-20151208.x86_64.tar.xz | tar -xJ -C fedoraLayer
cd fedoraLayer

HASH=$(cat repositories | awk -F '"latest": "' '{ print $2 }' | awk '{ sub(/[^a-zA-Z0-9]+/, ""); print }')

mv $HASH/layer.tar .
rm -rf $HASH repositories
sudo tar -xf layer.tar --same-owner --preserve-permissions

sudo rm layer.tar
cd ../

The HASH variable represents the directory inside the tarball that contains the rootfs; we take the contents of said directory and move it up one level so that /home/core/fedoraLayer contains the rootfs.

Installing acbuild

acbuild is a nifty little interactive CLI for building your container manifest. If you want the most flexibility, you can feel free to write out the manifest by hand.

When this post was written, acbuild was still in early development, so we're going to build it from source. For those unfamiliar with CoreOS, CoreOS comes with basically nothing; no package manager and only a very small set of tools. It does however come with a tool called toolbox which is a container that we can use to actually do some work. We're going to use toolbox to fetch and build acbuild from source.

# Clone acbuild to a known directory inside /home/core.
# We're specifically going to clone it to /home/core/builds/acbuild.
mkdir $HOME/builds && cd $HOME/builds
git clone https://github.com/appc/acbuild.git

toolbox

# now inside toolbox
yum install -y golang git

# the host filesystem is mounted to /media/root
cd /media/root/home/core/builds/acbuild
./build

# exit toolbox by pressing ctrl+c

# now back on the host system, outside of toolbox

sudo mkdir -p /opt/bin || true
# /usr is readonly, but /opt/bin is in the PATH, so symlink our
# acbuild binary to that directory
sudo ln -s /home/core/builds/acbuild/bin/acbuild /opt/bin

acbuild --help

If all goes well, you should see the acbuild help menu at the very end.

Building our container

acbuild works by declaring that you are beginning the build of a container (this creates a hidden directory in your CWD that will be used to hold the state of everything as we go), running subcommands, writing the aci (app container image), and then telling acbuild that we're done. Here's our build process:

sudo acbuild begin /home/core/fedoraLayer
sudo acbuild set-name <your domain>/fedora
sudo acbuild label add version "latest"sudo acbuild write fedora.aci
sudo acbuild end
actool validate fedora.aci

What we're doing here is:

  • Telling acbuild to use our fedora rootfs that we extracted as the rootfs for the container
  • Setting the name to <your domain>/fedora. For example, I would use seanmcgary.com/fedora. This is very similar to the naming convention you see in docker when hosting containers on something like Quay.io and acts as a name that you will reference your container by.
  • We set the label "version" to "latest"
  • We write out everything to fedora.aci. This is what we will actually run with rkt.
  • Tell acbuild we're done
  • Validate our container with actool.

That's it, we're done! Well, almost. We have a container, but it's pretty useless because we didn't tell it what to execute when we run it with rkt. Let's create it again, but this time we'll tell it to run /bin/date.

sudo acbuild begin /home/core/fedoraLayer
sudo acbuild set-name <your domain>/fedora
sudo acbuild label add version "latest"sudo acbuild set-exec -- /bin/date
sudo acbuild write fedora.aci
sudo acbuild end
actool validate fedora.aci

Now we can actually run it:

sudo rkt run --insecure-options=all ./fedora.aci
rkt: using image from local store for image name coreos.com/rkt/stage1-coreos:0.11.0
rkt: using image from file /home/core/fedora.aci
[123872.191605] date[4]: Tue Dec 15 00:14:06 UTC 2015

Advanced rkt containers

Stay tuned for more posts about building more advanced rkt containers, building your own container repository with appc discovery, and more!

Deploying Node.JS applications with systemd

You've built a node application and now it's time to deploy and run it. But how do you make sure that if your app crashes, it restarts automatically? Or if the host machine goes down, how do you ensure that your app comes back up? I've seen a number of people across the internet suggest things like node-supervisor, forever, nodemon, and hell, even gnu screen. These might be fine for running a server locally or in a testing environment, but they have a (pretty large) drawback: the underlying process is (by default) managed by the user that ran it. This means that the process that is managing your node app (supervisor, forever, nodemon, screen, etc) isn't managed by anything else, and if it goes down, then what? I thought we were going for uptime here...

For whatever reason, it seems that people forget that the underlying operating system (we're assuming Linux here) has an init system that is designed to do exactly what we want. Now that the majority of the major linux distros come with systemd, it's easier than ever to make sure that your node app is properly managed, not to mention it can handle logging for you as well.

Setting up your machine

We're going to be using Fedora 23 for the purpose of this article and installing node directly on it.

curl https://nodejs.org/download/release/v4.2.1/node-v4.2.1-linux-x64.tar.gz | sudo tar xvz --strip-components=1 -C /usr/local

node -v
# v4.2.1

What is systemd?

systemd is a suite of basic building blocks for a Linux system. It provides a system and service manager that runs as PID 1 and starts the rest of the system. systemd provides aggressive parallelization capabilities, uses socket and D-Bus activation for starting services, offers on-demand starting of daemons, keeps track of processes using Linux control groups, supports snapshotting and restoring of the system state, maintains mount and automount points and implements an elaborate transactional dependency-based service control logic.

tl;dr - at a high level, systemd gives you:

  • process management
  • logging management
  • process/service dependency management via socket activation

Our test node application

We're going to use this stupidly simple HTTP server as our test application. It's so simple, that it doesn't even depend on any external packages.

server.js

var http = require('http');

var server = http.createServer(function(req, res){
    res.end(new Date().toISOString());
});

server.listen(8000);

We'll build on this as we go to demonstrate the features of systemd.

Running your application

To run our application, we need to write out a unit file that describes what to run and how to run it.

Here's a super simple unit file to get us started:

node-server.service

[Unit]
Description=stupid simple nodejs HTTP server

[Service]
WorkingDirectory=/path/to/your/app
ExecStart=/usr/local/bin/node server.js
Type=simple

Place this file in /etc/systemd/system and run:

sudo systemctl start node-server.service

systemctl is the utility to manage systemd-based services. When given a unit name that isn't a direct path, it looks in /etc/systemd/system, attempting to match the provided name to unit file names. Now, let's check the status of our service:

systemctl status node-server.service

● node-server.service - stupid simple nodejs HTTP server
   Loaded: loaded (/etc/systemd/system/node-server.service; static; vendor preset: disabled)
   Active: active (running) since Mon 2015-11-30 11:40:18 PST; 3s ago
 Main PID: 17018 (node)
   CGroup: /system.slice/node-server.service
           └─17018 /usr/local/bin/node server.js

Nov 30 11:40:18 localhost.localdomain systemd[1]: Started stupid simple nodejs HTTP server.
Nov 30 11:40:18 localhost.localdomain systemd[1]: Starting stupid simple nodejs HTTP server...

Awesome, it looks to be running! Let's curl our HTTP server and see if it actually is:

curl http://localhost:8000/
2015-11-30T19:43:17.102Z

Managing logs with systemd-journald

Now that we have everything running, let's modify our script to print something to stdout when a request comes in.

var http = require('http');

var server = http.createServer(function(req, res){
    var date = new Date().toISOString();
    console.log('sending date: ', date);
    res.end(date);
});

server.listen(8000);

Edit your server to look like the code above. For logging, all we need to do is log directly to stdout and stderr; systemd-journald will handle everything else from here. Now, let's restart our server and tail the log:

sudo systemctl restart node-server.service
journalctl -f -u node-server.service

-- Logs begin at Mon 2015-10-19 17:41:06 PDT. --
Nov 30 11:40:18 localhost.localdomain systemd[1]: Started stupid simple nodejs HTTP server.
Nov 30 11:40:18 localhost.localdomain systemd[1]: Starting stupid simple nodejs HTTP server...
Nov 30 11:46:30 localhost.localdomain systemd[1]: Stopping stupid simple nodejs HTTP server...
Nov 30 11:46:30 localhost.localdomain systemd[1]: Started stupid simple nodejs HTTP server.
Nov 30 11:46:30 localhost.localdomain systemd[1]: Starting stupid simple nodejs HTTP server...

Close out journalctl (ctrl-c) and curl your HTTP server again. You should now see a new line added to the log:

-- Logs begin at Mon 2015-10-19 17:41:06 PDT. --
Nov 30 11:40:18 localhost.localdomain systemd[1]: Started stupid simple nodejs HTTP server.
Nov 30 11:40:18 localhost.localdomain systemd[1]: Starting stupid simple nodejs HTTP server...
Nov 30 11:46:30 localhost.localdomain systemd[1]: Stopping stupid simple nodejs HTTP server...
Nov 30 11:46:30 localhost.localdomain systemd[1]: Started stupid simple nodejs HTTP server.
Nov 30 11:46:30 localhost.localdomain systemd[1]: Starting stupid simple nodejs HTTP server...
Nov 30 11:47:40 localhost.localdomain node[17076]: sending date:  2015-11-30T19:47:40.319Z

Handling crashes and restarting

What if your application crashes? You probably want it to restart, otherwise you wouldn't need an init system in the first place. systemd provides the Restart= property to specify when your application should restart, if at all. We're going to use Restart=always for simplicity's sake, but all of the options can be found in a table on the systemd.service docs page.

Our updated unit file:

[Unit]
Description=stupid simple nodejs HTTP server

[Service]
WorkingDirectory=/path/to/your/app
ExecStart=/usr/local/bin/node server.js
Type=simple
Restart=always
RestartSec=10

Note that we also added RestartSec=10. This is just so that we can easily see the restart in the logs. Now that our unit file is updated, we need to tell systemd to reload it:

sudo systemctl daemon-reload

Before we restart everything, let's modify our server so that it crashes:

var http = require('http');

var server = http.createServer(function(req, res){
    var date = new Date().toISOString();
    console.log('sending date: ', date);
    throw new Error('crashing');
    res.end(date);
});

server.listen(8000);

Now we can restart everything:

sudo systemctl restart node-server.service

Now when you curl your server, it will crash and restart itself. We can verify this by checking the logs as we did above:

journalctl -f -u node-server.service

Nov 30 12:01:38 localhost.localdomain systemd[1]: Started stupid simple nodejs HTTP server.
Nov 30 12:01:38 localhost.localdomain systemd[1]: Starting stupid simple nodejs HTTP server...
Nov 30 12:02:20 localhost.localdomain node[17255]: sending date:  2015-11-30T20:02:20.807Z
Nov 30 12:02:20 localhost.localdomain systemd[1]: node-server.service: Main process exited, code=exited, status=1/FAILURE
Nov 30 12:02:20 localhost.localdomain systemd[1]: node-server.service: Unit entered failed state.
Nov 30 12:02:20 localhost.localdomain systemd[1]: node-server.service: Failed with result 'exit-code'.
Nov 30 12:02:30 localhost.localdomain systemd[1]: node-server.service: Service hold-off time over, scheduling restart.
Nov 30 12:02:30 localhost.localdomain systemd[1]: Started stupid simple nodejs HTTP server.
Nov 30 12:02:30 localhost.localdomain systemd[1]: Starting stupid simple nodejs HTTP server...

Starting your app on boot

Oftentimes, you may want your application or service to start when the machine boots (or reboots, for that matter). To do this, we need to add an [Install] section to our unit file:

[Unit]
Description=stupid simple nodejs HTTP server

[Service]
WorkingDirectory=/path/to/your/app
ExecStart=/usr/local/bin/node server.js
Type=simple
Restart=always
RestartSec=10

[Install]
WantedBy=basic.target

Now, we can enable it:

sudo systemctl enable node-server.service
Created symlink from /etc/systemd/system/basic.target.wants/node-server.service to /etc/systemd/system/node-server.service.

When control is handed off to systemd on boot, it goes through a number of stages:

local-fs-pre.target
         |
         v
(various mounts and   (various swap   (various cryptsetup
 fsck services...)     devices...)        devices...)       (various low-level   (various low-level
         |                  |                  |             services: udevd,     API VFS mounts:
         v                  v                  v             tmpfiles, random     mqueue, configfs,
  local-fs.target      swap.target     cryptsetup.target    seed, sysctl, ...)      debugfs, ...)
         |                  |                  |                    |                    |
         \__________________|_________________ | ___________________|____________________/
                                              \|/
                                               v
                                        sysinit.target
                                               |
          ____________________________________/|\________________________________________
         /                  |                  |                    |                    \
         |                  |                  |                    |                    |
         v                  v                  |                    v                    v
     (various           (various               |                (various          rescue.service
    timers...)          paths...)              |               sockets...)               |
         |                  |                  |                    |                    v
         v                  v                  |                    v              rescue.target
   timers.target      paths.target             |             sockets.target
         |                  |                  |                    |
         v                  \_________________ | ___________________/
                                              \|/
                                               v
                                         basic.target
                                               |
          ____________________________________/|                                 emergency.service
         /                  |                  |                                         |
          |                  |                  |                                         v
         v                  v                  v                                 emergency.target
     display-        (various system    (various system
 manager.service         services           services)
         |             required for            |
         |            graphical UIs)           v
         |                  |           multi-user.target
         |                  |                  |
         \_________________ | _________________/
                           \|/
                            v
                  graphical.target

As you can see in the chart, basic.target is the first target hit after the core components of the system come online. This flow chart is how systemd orders services and even resolves service dependencies. When we ran systemctl enable, it created a symlink in /etc/systemd/system/basic.target.wants. Everything symlinked there will be run as part of the basic.target step in the boot process.
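
If you want to double-check that the unit is wired up, systemd can tell you directly. These are standard systemctl/ls invocations, nothing specific to this setup; is-enabled should simply print "enabled", and the directory listing should show the symlink that was created above:

sudo systemctl is-enabled node-server.service
ls -l /etc/systemd/system/basic.target.wants/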

Wrap up

As you can see, it's pretty simple to get everything up and running with systemd, and you don't have to mess around with user-space process managers. With a small config file and about 10 minutes of work, you no longer have to worry about your app crashing and being unable to restart itself.

Now that you have a basic understanding of systemd, you can start really digging in and exploring all of the other features it offers to make your application infrastructure even better.

How to build a fault-tolerant redis cluster with sentinel

Today, I'm going to show you how to set up a fault-tolerant master/slave redis cluster using sentinel to fail over lost nodes.

Redis is a very versatile database, but what if you want to run it on a cluster? A lot of times, people will run redis as a standalone server with no backup. But what happens when that machine goes down? Or what if we want to migrate our redis instance to a new machine without downtime?

All of this is possible by creating a replica set (master node and n many slave nodes) and letting sentinel watch and manage them. If sentinel discovers that a node has disappeared, it will attempt to elect a new master node, provided that a majority of sentinels in the cluster agree (i.e. quorum).

The quorum is the number of Sentinels that need to agree that the master is not reachable in order to really mark the master as failing, and eventually start a failover procedure if possible.

However, the quorum is only used to detect the failure. In order to actually perform a failover, one of the Sentinels needs to be elected leader for the failover and be authorized to proceed. This only happens with the vote of the majority of the Sentinel processes.

In this particular example, we're going to set up our nodes in a master/slave configuration, where we will have 1 master and 2 slave nodes. This way, if we lose one node, the cluster will still retain quorum and be able to elect a new master. In this setup, writes have to go through the master, as slaves are read-only. The upside is that if the master disappears, its entire state has already been replicated to the slave nodes, meaning whichever one is elected as master can begin to accept writes immediately. This is different from setting up a redis cluster, where data is sharded across master nodes rather than replicated entirely.

Since sentinel handles electing a master node and sentinel nodes communicate with each other, we can use it as a discovery mechanism to determine which node is the master and thus where we should send our writes.

Setup

To set up a cluster, we're going to run 3 redis instances:

  • 1 master
  • 2 slaves

Each of the three instances will also have a redis sentinel server running alongside it for monitoring/service discovery. The config files in this example can be run on your localhost, or you can change the IP addresses to fit your own use-case. All of this will be done using version 3.0.2 of redis.

Configs

If you don't feel like writing configs by hand, you can clone the example repository I have at github.com/seanmcgary/redis-cluster-example. In there, you'll find a directory structure that looks like this:

redis-cluster
├── node1
│   ├── redis.conf
│   └── sentinel.conf
├── node2
│   ├── redis.conf
│   └── sentinel.conf
└── node3
    ├── redis.conf
    └── sentinel.conf

3 directories, 6 files

For the purpose of this demo, node1 will be our starting master node and nodes 2 and 3 will be added as slaves.

Master node config

redis.conf

bind 127.0.0.1
port 6380

dir .

sentinel.conf

# Host and port we will listen for requests on
bind 127.0.0.1
port 16380

#
# "redis-cluster" is the name of our cluster
#
# each sentinel process is paired with a redis-server process
#
sentinel monitor redis-cluster 127.0.0.1 6380 2
sentinel down-after-milliseconds redis-cluster 5000
sentinel parallel-syncs redis-cluster 1
sentinel failover-timeout redis-cluster 10000

Our redis config should be pretty self-explanatory. For the sentinel config, we've chosen the redis-server port + 10000 to keep things somewhat consistent and to make it easier to see which sentinel config goes with which server.

sentinel monitor redis-cluster 127.0.0.1 6380 2

The third "argument" here is the name of our cluster. Each sentinel server needs to use the same cluster name, and each points at the master node (rather than the redis-server it shares a host with). The final argument (2 here) is how many sentinel nodes are required for quorum when it comes time to vote on a new master. Since we have 3 nodes, we're requiring a quorum of 2 sentinels, allowing us to lose up to one machine. If we had a cluster of 5 machines, we could lose 2 machines while still maintaining a majority of nodes participating in quorum.

sentinel down-after-milliseconds redis-cluster 5000

For this example, a machine will have to be unresponsive for 5 seconds before being classified as down thus triggering a vote to elect a new master node.

Slave node config

Our slave node configs don't look much different. This one happens to be for node2:

redis.conf

bind 127.0.0.1
port 6381

dir .

slaveof 127.0.0.1 6380

sentinel.conf

# Host and port we will listen for requests on
bind 127.0.0.1
port 16381

#
# "redis-cluster" is the name of our cluster
#
# each sentinel process is paired with a redis-server process
#
sentinel monitor redis-cluster 127.0.0.1 6380 2
sentinel down-after-milliseconds redis-cluster 5000
sentinel parallel-syncs redis-cluster 1
sentinel failover-timeout redis-cluster 10000

The only difference is this line in our redis.conf:

slaveof 127.0.0.1 6380

In order to bootstrap the cluster, we need to tell the slaves where to look for a master node. After the initial bootstrapping process, redis will actually take care of rewriting configs as we add/remove nodes. Since we're not really worrying about deploying this to a production environment where addresses might be dynamic, we're just going to hardcode our master node's IP address and port.

We're going to do the same for the slave sentinels as well, since we want them to monitor our master node (node1).

Starting the cluster

You'll probably want to run each of these in something like screen or tmux so that you can see the output from each node all at once.
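
For example, with tmux (assuming it's installed; the session and window names here are arbitrary) you could create a window per node up front and then run the commands below in each one:

tmux new-session -d -s redis-cluster -n node1
tmux new-window -t redis-cluster -n node2
tmux new-window -t redis-cluster -n node3
tmux attach -t redis-cluster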

Starting the master node

redis-server, node1

$ redis-server node1/redis.conf

57411:M 07 Jul 16:32:09.876 * Increased maximum number of open files to 10032 (it was originally set to 256).
                _.__.-``__ ''-.__.-``    `.  `_.  ''-._           Redis 3.0.2 (01888d1e/0) 64 bit
  .-`` .-```.  ```\/    _.,_ ''-._                                   
 (    '      ,       .-`  | `,    )     Running in standalone mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6380
 |    `-._   `._    /     _.-'    |     PID: 57411
  `-._    `-._  `-./  _.-'    _.-'                                   
 |`-._`-._    `-.__.-'    _.-'_.-'|                                  
 |    `-._`-._        _.-'_.-'    |           http://redis.io        
  `-._    `-._`-.__.-'_.-'    _.-'                                   
 |`-._`-._    `-.__.-'    _.-'_.-'|                                  
 |    `-._`-._        _.-'_.-'    |                                  
  `-._    `-._`-.__.-'_.-'    _.-'                                   
      `-._    `-.__.-'    _.-'                                       
          `-._        _.-'                                           
              `-.__.-'                                               

57411:M 07 Jul 16:32:09.878 # Server started, Redis version 3.0.2
57411:M 07 Jul 16:32:09.878 * DB loaded from disk: 0.000 seconds
57411:M 07 Jul 16:32:09.878 * The server is now ready to accept connections on port 6380

sentinel, node1

$ redis-server node1/sentinel.conf --sentinel

57425:X 07 Jul 16:32:33.794 * Increased maximum number of open files to 10032 (it was originally set to 256).
                _.__.-``__ ''-.__.-``    `.  `_.  ''-._           Redis 3.0.2 (01888d1e/0) 64 bit
  .-`` .-```.  ```\/    _.,_ ''-._                                   
 (    '      ,       .-`  | `,    )     Running in sentinel mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 16380
 |    `-._   `._    /     _.-'    |     PID: 57425
  `-._    `-._  `-./  _.-'    _.-'                                   
 |`-._`-._    `-.__.-'    _.-'_.-'|                                  
 |    `-._`-._        _.-'_.-'    |           http://redis.io        
  `-._    `-._`-.__.-'_.-'    _.-'                                   
 |`-._`-._    `-.__.-'    _.-'_.-'|                                  
 |    `-._`-._        _.-'_.-'    |                                  
  `-._    `-._`-.__.-'_.-'    _.-'                                   
      `-._    `-.__.-'    _.-'                                       
          `-._        _.-'                                           
              `-.__.-'                                               

57425:X 07 Jul 16:32:33.795 # Sentinel runid is dde8956ca13c6b6d396d33e3a47ab5b489fa3292
57425:X 07 Jul 16:32:33.795 # +monitor master redis-cluster 127.0.0.1 6380 quorum 2

Starting the slave nodes

Now we can go ahead and start our slave nodes. As you start them, you'll see the master node report them as they come online and join.

redis-server, node2

$ redis-server node2/redis.conf

57450:S 07 Jul 16:32:57.969 * Increased maximum number of open files to 10032 (it was originally set to 256).
                _.__.-``__ ''-.__.-``    `.  `_.  ''-._           Redis 3.0.2 (01888d1e/0) 64 bit
  .-`` .-```.  ```\/    _.,_ ''-._                                   
 (    '      ,       .-`  | `,    )     Running in standalone mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6381
 |    `-._   `._    /     _.-'    |     PID: 57450
  `-._    `-._  `-./  _.-'    _.-'                                   
 |`-._`-._    `-.__.-'    _.-'_.-'|                                  
 |    `-._`-._        _.-'_.-'    |           http://redis.io        
  `-._    `-._`-.__.-'_.-'    _.-'                                   
 |`-._`-._    `-.__.-'    _.-'_.-'|                                  
 |    `-._`-._        _.-'_.-'    |                                  
  `-._    `-._`-.__.-'_.-'    _.-'                                   
      `-._    `-.__.-'    _.-'                                       
          `-._        _.-'                                           
              `-.__.-'                                               

57450:S 07 Jul 16:32:57.971 # Server started, Redis version 3.0.2
57450:S 07 Jul 16:32:57.971 * DB loaded from disk: 0.000 seconds
57450:S 07 Jul 16:32:57.971 * The server is now ready to accept connections on port 6381
57450:S 07 Jul 16:32:57.971 * Connecting to MASTER 127.0.0.1:6380
57450:S 07 Jul 16:32:57.971 * MASTER <-> SLAVE sync started
57450:S 07 Jul 16:32:57.971 * Non blocking connect for SYNC fired the event.
57450:S 07 Jul 16:32:57.971 * Master replied to PING, replication can continue...
57450:S 07 Jul 16:32:57.971 * Partial resynchronization not possible (no cached master)
57450:S 07 Jul 16:32:57.971 * Full resync from master: d75bba9a2f3c5a6e2e4e9dfd70ddb0c2d4e647fd:1
57450:S 07 Jul 16:32:58.038 * MASTER <-> SLAVE sync: receiving 18 bytes from master
57450:S 07 Jul 16:32:58.038 * MASTER <-> SLAVE sync: Flushing old data
57450:S 07 Jul 16:32:58.038 * MASTER <-> SLAVE sync: Loading DB in memory
57450:S 07 Jul 16:32:58.038 * MASTER <-> SLAVE sync: Finished with success

sentinel, node2

$ redis-server node2/sentinel.conf --sentinel

                _.__.-``__ ''-.__.-``    `.  `_.  ''-._           Redis 3.0.2 (01888d1e/0) 64 bit
  .-`` .-```.  ```\/    _.,_ ''-._                                   
 (    '      ,       .-`  | `,    )     Running in sentinel mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 16381
 |    `-._   `._    /     _.-'    |     PID: 57464
  `-._    `-._  `-./  _.-'    _.-'                                   
 |`-._`-._    `-.__.-'    _.-'_.-'|                                  
 |    `-._`-._        _.-'_.-'    |           http://redis.io        
  `-._    `-._`-.__.-'_.-'    _.-'                                   
 |`-._`-._    `-.__.-'    _.-'_.-'|                                  
 |    `-._`-._        _.-'_.-'    |                                  
  `-._    `-._`-.__.-'_.-'    _.-'                                   
      `-._    `-.__.-'    _.-'                                       
          `-._        _.-'                                           
              `-.__.-'                                               

57464:X 07 Jul 16:33:18.109 # Sentinel runid is 978afe015b4554fdd131957ef688ca4ec3651ea1
57464:X 07 Jul 16:33:18.109 # +monitor master redis-cluster 127.0.0.1 6380 quorum 2
57464:X 07 Jul 16:33:18.111 * +slave slave 127.0.0.1:6381 127.0.0.1 6381 @ redis-cluster 127.0.0.1 6380
57464:X 07 Jul 16:33:18.205 * +sentinel sentinel 127.0.0.1:16380 127.0.0.1 16380 @ redis-cluster 127.0.0.1 6380

Go ahead and do the same for node3.

If we look at the log output for node1's sentinel, we can see that the slaves have been added:

57425:X 07 Jul 16:33:03.895 * +slave slave 127.0.0.1:6381 127.0.0.1 6381 @ redis-cluster 127.0.0.1 6380
57425:X 07 Jul 16:33:20.171 * +sentinel sentinel 127.0.0.1:16381 127.0.0.1 16381 @ redis-cluster 127.0.0.1 6380
57425:X 07 Jul 16:33:44.107 * +slave slave 127.0.0.1:6382 127.0.0.1 6382 @ redis-cluster 127.0.0.1 6380
57425:X 07 Jul 16:33:44.303 * +sentinel sentinel 127.0.0.1:16382 127.0.0.1 16382 @ redis-cluster 127.0.0.1 6380
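
You can also ask the master's redis-server directly (not sentinel) whether it sees both slaves. INFO replication is a standard redis-cli command; at this point it should report role:master and connected_slaves:2:

$ redis-cli -p 6380 info replication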

Find the master node

Now that our cluster is in place, we can ask sentinel which node is currently set as the master. To illustrate this, we'll ask sentinel on node3:

$ redis-cli -p 16382 sentinel get-master-addr-by-name redis-cluster

 1) "127.0.0.1"
 2) "6380"

As we can see, the IP and port match node1, the master node that we started with.
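
If you want to do the same thing from application code, here's a minimal Node.js sketch assuming the node_redis client (npm install redis); it asks a sentinel for the current master and then opens a connection there for writes:

var redis = require('redis');

// connect to any sentinel in the cluster (here, node3's sentinel)
var sentinel = redis.createClient(16382, '127.0.0.1');

// ask sentinel for the current master of "redis-cluster"
sentinel.send_command('SENTINEL', ['get-master-addr-by-name', 'redis-cluster'], function(err, addr){
    if(err){
        throw err;
    }

    // addr is [host, port], e.g. ["127.0.0.1", "6380"]
    var master = redis.createClient(addr[1], addr[0]);

    // writes go to the master; the slaves are read-only
    master.set('some-key', 'some-value', function(err){
        console.log('wrote to master at ' + addr[0] + ':' + addr[1]);
        master.quit();
        sentinel.quit();
    });
});

In a real application you'd also want to watch sentinel's +switch-master events so you notice failovers, but this is enough to illustrate the discovery mechanism.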

Electing a new master

Now let's kill off our original master node:

$ redis-cli -p 6380 debug segfault

Looking at the logs from node2's sentinel, we can watch the new master election happen:

57464:X 07 Jul 16:35:30.270 # +sdown master redis-cluster 127.0.0.1 6380
57464:X 07 Jul 16:35:30.301 # +new-epoch 1
57464:X 07 Jul 16:35:30.301 # +vote-for-leader 2a4d7647d2e995bd7315d8358efbd336d7fc79ad 1
57464:X 07 Jul 16:35:30.330 # +odown master redis-cluster 127.0.0.1 6380 #quorum 3/2
57464:X 07 Jul 16:35:30.330 # Next failover delay: I will not start a failover before Tue Jul  7 16:35:50 2015
57464:X 07 Jul 16:35:31.432 # +config-update-from sentinel 127.0.0.1:16382 127.0.0.1 16382 @ redis-cluster 127.0.0.1 6380
57464:X 07 Jul 16:35:31.432 # +switch-master redis-cluster 127.0.0.1 6380 127.0.0.1 6381
57464:X 07 Jul 16:35:31.432 * +slave slave 127.0.0.1:6382 127.0.0.1 6382 @ redis-cluster 127.0.0.1 6381
57464:X 07 Jul 16:35:31.432 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ redis-cluster 127.0.0.1 6381
57464:X 07 Jul 16:35:36.519 # +sdown slave 127.0.0.1:6380 127.0.0.1 6380 @ redis-cluster 127.0.0.1 6381

Now, let's see which machine got elected:

$ redis-cli -p 16382 sentinel get-master-addr-by-name redis-cluster

 1) "127.0.0.1"
 2) "6381"

Here, we can see that node2 has been elected the new master of the cluster. Now we can restart node1, and you'll see it come back up as a slave of node2.

$ redis-server node1/redis.conf

57531:M 07 Jul 16:37:24.176 # Server started, Redis version 3.0.2
57531:M 07 Jul 16:37:24.176 * DB loaded from disk: 0.000 seconds
57531:M 07 Jul 16:37:24.176 * The server is now ready to accept connections on port 6380
57531:S 07 Jul 16:37:34.215 * SLAVE OF 127.0.0.1:6381 enabled (user request)
57531:S 07 Jul 16:37:34.215 # CONFIG REWRITE executed with success.
57531:S 07 Jul 16:37:34.264 * Connecting to MASTER 127.0.0.1:6381
57531:S 07 Jul 16:37:34.264 * MASTER <-> SLAVE sync started
57531:S 07 Jul 16:37:34.265 * Non blocking connect for SYNC fired the event.
57531:S 07 Jul 16:37:34.265 * Master replied to PING, replication can continue...
57531:S 07 Jul 16:37:34.265 * Partial resynchronization not possible (no cached master)
57531:S 07 Jul 16:37:34.265 * Full resync from master: 135e2c6ec93d33dceb30b7efb7da171b0fb93b9d:24756
57531:S 07 Jul 16:37:34.276 * MASTER <-> SLAVE sync: receiving 18 bytes from master
57531:S 07 Jul 16:37:34.276 * MASTER <-> SLAVE sync: Flushing old data
57531:S 07 Jul 16:37:34.276 * MASTER <-> SLAVE sync: Loading DB in memory
57531:S 07 Jul 16:37:34.276 * MASTER <-> SLAVE sync: Finished with success

That's it! This was a pretty simple example, meant to introduce how you can set up a redis replica cluster with failover. In a followup post, I'll show how you can implement this on an actual cluster with CoreOS, containers, and HAProxy for load balancing.

nsenter a systemd-nspawn container

If you run applications in containers, you've probably needed a way to enter the container to debug something. Sure, you could run sshd in your container, but it's really not necessary. The same thing can be accomplished using a little program called nsenter.

nsenter can be used to enter both Docker containers and systemd-nspawn containers. In this situation, we're going to be looking at a container running with systemd-nspawn.

Start a container

To make things easier, we're going to pull the "vanilla" Fedora 21 Docker container and export its filesystem so we can run it with systemd-nspawn.

> docker pull fedora:21

# create a directory to dump everything into
> mkdir fedora21
> docker export "$(docker create --name fedora21 fedora:21 true)" | tar -x -C fedora21

# clean up Docker's mess
> docker rm fedora21

Now we can actually boot the machine. One thing to note for those who are unfamiliar: booting the container is pretty much the same as turning on a physical machine; you'll see systemd start up and it'll show a command prompt. The act of using nsenter will happen in a different shell/terminal/screen than the running machine.

> sudo systemd-nspawn --directory fedora21 --machine fedora-container --boot

> machinectl list

MACHINE                          CONTAINER SERVICE         
fedora-container                 container nspawn          

1 machines listed.

Now that we have a machine running, we need to find the PID of systemd running in the container. We can do that using machinectl status:

> machinectl status fedora-container
fedora-container
           Since: Thu 2015-04-09 23:44:35 UTC; 5min ago
          Leader: 7943 (systemd)
         Service: nspawn; class container
            Root: /home/core/fedora21
         Address: 10.0.0.0
              OS: Fedora 21 (Twenty One)
            Unit: machine-fedora\x2dcontainer.scope
                  ├─7943 /usr/lib/systemd/systemd
                  └─system.slice
                    ├─dbus.service
                    │ └─7988 /bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
                    ├─systemd-journald.service
                    │ └─7964 /usr/lib/systemd/systemd-journald
                    ├─systemd-logind.service
                    │ └─7987 /usr/lib/systemd/systemd-logind
                    └─console-getty.service
                      └─7992 /sbin/agetty --noclear --keep-baud console 115200 38400 9600 vt102

The PID we want is the one specified under "Leader", so 7943.
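
If you'd rather grab that value programmatically, machinectl can print just the one property; this is standard machinectl usage and should output something like Leader=7943:

> machinectl show --property=Leader fedora-container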

nsenter into the container

Based on the man page, nsenter:

Enters the namespaces of one or more other processes and then executes the specified program

In this case, that is systemd inside of the container we are running. The goal here is to nsenter into the container and get a simple bash shell running so that we can run commands as if we logged into it.

> sudo nsenter --target 7943 --mount --uts --ipc --net

That's it! If you run something like whoami you'll see that you are in the container as the root user. You can now do everything you normally could if you logged in from the main login prompt or ssh'd into the machine.

When you're done, simply press Ctrl+D to log out. To terminate the container, you can use machinectl terminate:

sudo machinectl terminate fedora-container 

Cortex - express style routing for Backbone

I've found Backbone to be one of the most useful client-side frameworks available due to its lightweight nature. I know a number of people dislike it because it doesn't provide everything including the kitchen sink, but that's one of the reasons why I love it; it gives me the foundation to build exactly what I need, and only what I need.

Routing

One of the things that I find myself wanting when building a fairly large single page app is the ability to add middlewares to routes. Out of the box, Backbone's routing is extremely simple and looks something like this:

var app = Backbone.Router.extend({
    routes: {
        'users': function(){

        },
        'users/:id': function(id){
            // handle users/:id route
            console.log(id);
        }
    },
    initialize: function(){
        // initialize your app
    }
});

new app();
Backbone.history.start({ pushState: true });

For a lot of apps this is more than sufficient. But what if you want to add handlers that run before each route to do things like fetch data, or check whether a user is authenticated and allowed to access that route?

Introducing Cortex

Cortex is a small library that allows you to set up chains of middlewares for each of your routes in the same way that Express does for your NodeJS server.

Let's take a look at a simple example that does the same as our vanilla Backbone router above.

<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/lodash.js/3.3.1/lodash.min.js"></script>
<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.3/jquery.min.js"></script>
<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/backbone.js/1.1.2/backbone-min.js"></script>
<script type="text/javascript" src="<path>/cortex-x.x.x.min.js"></script>
<script type="text/javascript">
    $(function(){
        var cortex = new Cortex();

        cortex.route('users', function(route){
            // handle users route
        });

        cortex.route('users/:id', function(route){
            // handle users/:id route
            console.log(route.params.id);
        });

        var app = Backbone.Router.extend({
            routes: cortex.getRoutes(),
            initialize: function(){
                // initialize your app
            }
        });

        new app();
        Backbone.history.start({ pushState: true });
    });
</script>

This example should be pretty straightforward. Cortex.prototype.route takes at least two parameters:

  • A pattern to define the route. This is the exact same string we used in the vanilla Backbone example
  • A function to handle the route. This is the function that will be called when your route is matched. It takes two parameters:
    • route - This is an object that will contain things like url parameter tokens, query parameters, etc
    • next - This is a callback that can be called to move on to the next handler in the chain. In our example we don't call it because there is nothing after the handler we defined.

Let's add a middleware that will run before all routes:

$(function(){
    var cortex = new Cortex();

    cortex.use(function(route, next){
        // do something before all routes

        next();
    });

    cortex.route('users', function(route){
        // handle users route
    });

    cortex.route('users/:id', function(route){
        // handle users/:id route
        console.log(route.params.id);
    });

    var app = Backbone.Router.extend({
        routes: cortex.getRoutes(),
        initialize: function(){
            // initialize your app
        }
    });

    new app();
    Backbone.history.start({ pushState: true });
});

Middlewares function almost identically to those in Express save for the parameters that are passed (since we're not working with an HTTP server here). Middlewares will be called in the order they are defined. If you don't invoke the next callback, execution of the middleware/handler chain will stop at that point.
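
As a quick illustration of that ordering (using the same cortex instance and use() API shown above):

cortex.use(function(route, next){
    console.log('this middleware runs first');
    next();
});

cortex.use(function(route, next){
    console.log('this one runs second');
    // if we returned here without calling next(), no later
    // middleware or route handler would run for this route
    next();
});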

Now what if we want a chain of middlewares for a particular route:

$(function(){
    var cortex = new Cortex();

    cortex.route('users', function(route){
        // handle users route
    });

    var authUser = function(route, next){
        // check if the user is authenticated
        if(user.isAuthenticated){
            next();
        } else {
            throw new Error('User is not authenticated');
        }
    };

    cortex.route('users/:id', authUser, function(route){
        // handle users/:id route
        console.log(route.params.id);
    });

    var app = Backbone.Router.extend({
        routes: cortex.getRoutes(),
        initialize: function(){
            // initialize your app
        }
    });

    new app();
    Backbone.history.start({ pushState: true });
});

In this example, if the user is determined to be unauthenticated, we'll throw an exception. Cortex actually has a mechanism built in to handle exceptions that arise in middlewares/handlers. You can listen to the error event on your Cortex instance to handle errors:

$(function(){
    var cortex = new Cortex();

    cortex.on('error', function(err, route){
        // err - the error object/exception thrown
        // route - the route payload in the context the error was thrown
    });

    cortex.route('users', function(route){
        // handle users route
    });

    var authUser = function(route, next){
        // check if the user is authenticated
        if(!user.isAuthenticated){
            throw new Error('User is not authenticated');
        }
        next();
    };

    cortex.route('users/:id', authUser, function(route){
        // handle users/:id route
        console.log(route.params.id);
    });

    var app = Backbone.Router.extend({
        routes: cortex.getRoutes(),
        initialize: function(){
            // initialize your app
        }
    });

    new app();
    Backbone.history.start({ pushState: true });
});

In this error handler you can use the err object and route object to determine where the error happened and how to handle it.

The future

This is the very first iteration of this library, so expect things to improve as time goes on. Future updates will include support for various module systems and possibly an Express middleware to make serving the file itself super easy.

Improvements and pull requests are more than welcome and can be created over at seanmcgary/backbone-cortex.