
How to deploy an AWS Lambda with Terraform

AWS Lambda functions are incredibly powerful, mainly due to their stateless nature and their ability to scale horizontally almost infinitely. But once you have written a Lambda function, how do you update it? Better yet, how do you automate deploying and updating it across multiple regions? Today, we're going to take a look at how to do exactly that using HashiCorp's Terraform.

What is Terraform?

Managing server resources can be either very manual, or you can automate the process. Automating the process can be tricky though, especially if you have a complex tree of resources that depend on one another. This is where Terraform comes in.

Terraform enables you to safely and predictably create, change, and improve production infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.

Terraform provides a DSL that allows you to describe the resources that you need and their dependencies, allowing Terraform to launch/configure resources in a particular order.

Installing Terraform

Installing Terraform is pretty straightforward.

If you're on macOS simply run:

brew install terraform

If you're on Linux, Terraform may be available through your distro's package manager of choice; otherwise, follow the directions provided on the installation page.

Setting up AWS credentials

Before setting up the credentials, we're going to install the AWS command line interface.

On macOS, the awscli is available through Homebrew:

brew install awscli

On Linux, you can often find the awscli in your package manager:

dnf install -y awscli

# or

apt-get install -y awscli

You can also install it manually using pip:

pip install --upgrade --user awscli

Once installed, simply run:

aws configure

And follow the prompts to provide your AWS credentials. This will generate the proper credentials file that Terraform will use when communicating with AWS.
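Under the hood, this writes your keys to a shared credentials file (typically ~/.aws/credentials), which Terraform and the AWS CLI both read. The values below are placeholders, not real keys:

```ini
[default]
aws_access_key_id = <your access key id>
aws_secret_access_key = <your secret access key>
```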

Describe your infrastructure

Now that we have AWS configured, we can start to describe the AWS Lambda that we're going to deploy.

To start, create a new directory.

mkdir terraform-demo

In that directory we're going to create a file that looks like this:

provider "aws" {
    region = "us-east-1"
}

This is telling Terraform that we're going to be using the AWS provider and to default to the "us-east-1" region for creating our resources.

Now, in the same file, we're going to describe our Lambda function:

provider "aws" {
    region = "us-east-1"
}

resource "aws_lambda_function" "demo_lambda" {
    function_name = "demo_lambda"
    handler = "index.handler"
    runtime = "nodejs4.3"
    filename = ""
    source_code_hash = "${base64sha256(file(""))}"
}

Here, we're saying that we want a NodeJS-based Lambda that exposes its handler as an exported function called "handler" in the index.js file (don't worry, we'll create this shortly), and that its code will be uploaded as a zip file. We're also taking a hash of the zip file so Terraform can determine whether it needs to re-upload everything.
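If you're curious what that source_code_hash interpolation actually computes, you can reproduce it from the shell: it's just the base64-encoded SHA-256 digest of the file's contents. This sketch uses a stand-in file rather than a real deployment zip:

```shell
# Create a stand-in file (your real deployment zip works the same way):
printf 'test' > sample.zip

# Equivalent of Terraform's base64sha256(file("sample.zip")):
openssl dgst -sha256 -binary sample.zip | base64
# -> n4bQgYhMfWWaL+qgxVrQFaO/TxsrC4Is0V1sFbDwCgg=
```

When the digest changes between runs, Terraform knows the code changed and re-uploads the zip.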

Create an execution role

Next, we need to set the execution role of our Lambda; otherwise it won't be able to run. In the same file, we're going to define a role in the following way:

resource "aws_iam_role" "lambda_exec_role" {
  name = "lambda_exec_role"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

This creates an IAM role in AWS that the Lambda function will assume during execution. If you wanted to grant access to other AWS services, such as S3, SNS, etc, this role is where you would attach those policies.
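For example, to let the function write its logs to CloudWatch, you could attach AWS's managed basic-execution policy to this role. A sketch (the resource name lambda_logs is just illustrative):

```hcl
resource "aws_iam_role_policy_attachment" "lambda_logs" {
    role       = "${aws_iam_role.lambda_exec_role.name}"
    policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}
```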

Now, we need to add the "role" property to our lambda definition:

resource "aws_lambda_function" "demo_lambda" {
    function_name = "demo_lambda"
    handler = "index.handler"
    runtime = "nodejs4.3"
    filename = ""
    source_code_hash = "${base64sha256(file(""))}"
    role = "${aws_iam_role.lambda_exec_role.arn}"
}

Creating a test NodeJS function

We specified NodeJS as the runtime for our Lambda, so let's create a function, in a file called index.js, that we can upload and use:


exports.handler = function(event, context, callback) {
    console.log('Event: ', JSON.stringify(event, null, '\t'));
    console.log('Context: ', JSON.stringify(context, null, '\t'));
    callback(null);
};

Now let's zip it up; the archive name needs to match the filename we referenced in our Terraform config:

zip -r index.js

Test our Terraform plan

To generate a plan and show what Terraform will execute, run terraform plan:

> terraform plan

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but
will not be persisted to local or remote state storage.

The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

+ aws_iam_role.lambda_exec_role
    arn:                "<computed>"
    assume_role_policy: "{\n\t\"Version\": \"2012-10-17\",\n\t\"Statement\": [\n\t\t{\n\t\t\t\"Action\": \"sts:AssumeRole\",\n\t\t\t\"Principal\": {\n\t\t\t\t\"Service\": \"lambda.amazonaws.com\"\n\t\t\t},\n\t\t\t\"Effect\": \"Allow\",\n\t\t\t\"Sid\": \"\"\n\t\t}\n\t]\n}\n"
    create_date:        "<computed>"
    name:               "lambda_exec_role"
    path:               "/"
    unique_id:          "<computed>"

+ aws_lambda_function.demo_lambda
    arn:              "<computed>"
    filename:         ""
    function_name:    "demo_lambda"
    handler:          "index.handler"
    last_modified:    "<computed>"
    memory_size:      "128"
    publish:          "false"
    qualified_arn:    "<computed>"
    role:             "${aws_iam_role.lambda_exec_role.arn}"
    runtime:          "nodejs4.3"
    source_code_hash: "kWxb4o2JvWUnGncB2oSLvzf7d6+ZJumqB2w0Q8DHXtY="
    timeout:          "3"
    version:          "<computed>"

Plan: 2 to add, 0 to change, 0 to destroy.

This tells us that Terraform is going to add both the role and the Lambda when it applies the plan.

When you're ready, go ahead and run terraform apply to create your lambda:

> terraform apply

aws_iam_role.lambda_exec_role: Creating...
  arn:                "" => "<computed>"
  assume_role_policy: "" => "{\n\t\"Version\": \"2012-10-17\",\n\t\"Statement\": [\n\t\t{\n\t\t\t\"Action\": \"sts:AssumeRole\",\n\t\t\t\"Principal\": {\n\t\t\t\t\"Service\": \"lambda.amazonaws.com\"\n\t\t\t},\n\t\t\t\"Effect\": \"Allow\",\n\t\t\t\"Sid\": \"\"\n\t\t}\n\t]\n}\n"
  create_date:        "" => "<computed>"
  name:               "" => "lambda_exec_role"
  path:               "" => "/"
  unique_id:          "" => "<computed>"
aws_iam_role.lambda_exec_role: Creation complete
aws_lambda_function.demo_lambda: Creating...
  arn:              "" => "<computed>"
  filename:         "" => ""
  function_name:    "" => "demo_lambda"
  handler:          "" => "index.handler"
  last_modified:    "" => "<computed>"
  memory_size:      "" => "128"
  publish:          "" => "false"
  qualified_arn:    "" => "<computed>"
  role:             "" => "arn:aws:iam::183555302174:role/lambda_exec_role"
  runtime:          "" => "nodejs4.3"
  source_code_hash: "" => "kWxb4o2JvWUnGncB2oSLvzf7d6+ZJumqB2w0Q8DHXtY="
  timeout:          "" => "3"
  version:          "" => "<computed>"
aws_lambda_function.demo_lambda: Creation complete

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate

To see if it worked properly, you can use the AWS CLI to list all of your Lambda functions:

> aws lambda list-functions

{
    "Functions": [
        {
            "Version": "$LATEST",
            "CodeSha256": "kWxb4o2JvWUnGncB2oSLvzf7d6+ZJumqB2w0Q8DHXtY=",
            "FunctionName": "demo_lambda",
            "MemorySize": 128,
            "CodeSize": 294,
            "FunctionArn": "arn:aws:lambda:us-east-1:183555302174:function:demo_lambda",
            "Handler": "index.handler",
            "Role": "arn:aws:iam::183555302174:role/lambda_exec_role",
            "Timeout": 3,
            "LastModified": "2017-04-05T14:02:26.636+0000",
            "Runtime": "nodejs4.3",
            "Description": ""
        }
    ]
}

We can now invoke our Lambda directly from the AWS CLI. In this script, I'm using a command-line utility called jq to parse the JSON response. If you're on macOS, simply run brew install jq to install it:

> aws lambda invoke \
    --function-name=demo_lambda \
    --invocation-type=RequestResponse \
    --payload='{ "test": "value" }' \
    --log-type=Tail \
    /dev/null | jq -r '.LogResult' | base64 --decode

START RequestId: 808188ef-1a09-11e7-85e1-71d3bf75c46b Version: $LATEST
2017-04-05T14:09:37.153Z    808188ef-1a09-11e7-85e1-71d3bf75c46b    Event:  {
    "test": "value"
}
2017-04-05T14:09:37.153Z    808188ef-1a09-11e7-85e1-71d3bf75c46b    Context:  {
    "callbackWaitsForEmptyEventLoop": true,
    "logGroupName": "/aws/lambda/demo_lambda",
    "logStreamName": "2017/04/05/[$LATEST]3aa59f4816ae440a805a14fda6e258c7",
    "functionName": "demo_lambda",
    "memoryLimitInMB": "128",
    "functionVersion": "$LATEST",
    "invokeid": "808188ef-1a09-11e7-85e1-71d3bf75c46b",
    "awsRequestId": "808188ef-1a09-11e7-85e1-71d3bf75c46b",
    "invokedFunctionArn": "arn:aws:lambda:us-east-1:183555302174:function:demo_lambda"
}
END RequestId: 808188ef-1a09-11e7-85e1-71d3bf75c46b
REPORT RequestId: 808188ef-1a09-11e7-85e1-71d3bf75c46b    Duration: 0.47 ms    Billed Duration: 100 ms     Memory Size: 128 MB    Max Memory Used: 10 MB

This will run your Lambda and decode the last 4 KB of the log output. To view the full log, log into the AWS web console and head over to the CloudWatch logs.
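If the jq and base64 plumbing looks opaque, here's what it's doing in isolation, using a made-up LogResult value rather than a real invoke response:

```shell
# 'aws lambda invoke' returns JSON whose LogResult field is base64-encoded;
# jq -r extracts the raw string, and base64 decodes it back into log text:
echo '{"LogResult": "aGVsbG8gZnJvbSBsYW1iZGE="}' | jq -r '.LogResult' | base64 --decode
# -> hello from lambda
```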

Wrap up

That's it! From here, you'll be able to set up a Lambda that runs in response to various triggers: SNS events, S3 operations, data from a Kinesis firehose, etc.

All of the files we've created here can be found on GitHub at seanmcgary/blog-lambda-terraform.