AWS CloudFormation for Node.js Developers

Master AWS CloudFormation for deploying serverless Node.js applications with Lambda, API Gateway, DynamoDB, and S3

AWS CloudFormation is the infrastructure-as-code service that lets you define your entire AWS stack in JSON or YAML templates and deploy it repeatably with a single command. If you are building serverless Node.js applications on AWS, CloudFormation eliminates the manual clicking through the console and gives you version-controlled, reviewable infrastructure that lives right next to your application code. This article walks through everything you need to know to go from zero to deploying a production-ready serverless Node.js API using CloudFormation.

Prerequisites

Before diving in, you should have:

  • An AWS account with administrator access
  • The AWS CLI installed and configured (aws configure)
  • Node.js 18 or 20 installed locally
  • Basic familiarity with AWS Lambda, API Gateway, and DynamoDB
  • A text editor that supports YAML syntax highlighting

Verify your AWS CLI is working:

aws sts get-caller-identity

You should see your account ID, ARN, and user ID. If that fails, fix your credentials before going further.

CloudFormation Template Anatomy

Every CloudFormation template follows the same structure. You can write templates in JSON or YAML. I strongly recommend YAML because it is easier to read, supports comments, and produces smaller files. Here is the skeleton:

AWSTemplateFormatVersion: "2010-09-09"
Description: My Node.js application stack

Parameters:
  Environment:
    Type: String
    Default: dev
    AllowedValues:
      - dev
      - staging
      - production

Resources:
  MyLambdaFunction:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: !Sub "my-api-${Environment}"
      Runtime: nodejs20.x
      Handler: index.handler
      Code:
        ZipFile: |
          var response = require('cfn-response');
          exports.handler = function(event, context) {
            response.send(event, context, response.SUCCESS, {});
          };

Outputs:
  FunctionArn:
    Description: The ARN of the Lambda function
    Value: !GetAtt MyLambdaFunction.Arn
    Export:
      Name: !Sub "${AWS::StackName}-FunctionArn"

Let me break down each section.

AWSTemplateFormatVersion

This is always "2010-09-09". AWS has never released a new version, so do not overthink this. Just include it.

Description

A human-readable string that shows up in the CloudFormation console. Keep it short and useful.

Parameters

Parameters make your templates reusable. Instead of hardcoding values, you parameterize them and pass different values for different environments. Parameters support types like String, Number, CommaDelimitedList, and AWS-specific types like AWS::EC2::KeyPair::KeyName.

Parameters:
  TableReadCapacity:
    Type: Number
    Default: 5
    MinValue: 1
    MaxValue: 100
    Description: Read capacity units for the DynamoDB table

  AllowedOrigins:
    Type: CommaDelimitedList
    Default: "http://localhost:3000,https://myapp.com"
    Description: CORS allowed origins

  VpcId:
    Type: AWS::EC2::VPC::Id
    Description: The VPC to deploy into

Resources

This is the heart of your template. Every AWS resource you want to create goes here. Each resource has a logical name (your choice), a Type (the AWS resource type), and Properties specific to that type. Resources can reference each other, and CloudFormation figures out the dependency order automatically.

Outputs

Outputs export values from your stack that you can use in other stacks or just view in the console. They are essential for cross-stack references and for getting the URLs and ARNs you need after deployment.
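
Another stack can consume an exported value with Fn::ImportValue. Here is a minimal sketch, assuming the exporting stack's name arrives through a String parameter I am calling ApiStackName; note that two short-form functions cannot be nested directly, so the import uses the full Fn::ImportValue form around !Sub:

Parameters:
  ApiStackName:
    Type: String
    Description: Name of the stack that exports the function ARN

Resources:
  InvokePolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action: lambda:InvokeFunction
            Resource:
              Fn::ImportValue: !Sub "${ApiStackName}-FunctionArn"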

Intrinsic Functions

CloudFormation provides built-in functions for dynamic values. These are critical for writing flexible templates.

Ref

Ref returns the value of a parameter or the physical ID of a resource. For most resources this is the physical name or ID; for a few, it is the ARN.

Resources:
  MyBucket:
    Type: AWS::S3::Bucket

  MyFunction:
    Type: AWS::Lambda::Function
    Properties:
      Environment:
        Variables:
          BUCKET_NAME: !Ref MyBucket

Fn::Sub

Fn::Sub performs string substitution. It is the most useful intrinsic function. It replaces ${variable} placeholders with their values.

Resources:
  MyFunction:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: !Sub "${AWS::StackName}-handler-${Environment}"
      Environment:
        Variables:
          TABLE_ARN: !Sub "arn:aws:dynamodb:${AWS::Region}:${AWS::AccountId}:table/${ItemsTable}"

The AWS:: pseudo-parameters like AWS::StackName, AWS::Region, and AWS::AccountId are available in every template without declaring them.

Fn::Join

Fn::Join concatenates a list of strings with a delimiter. It is less flexible than Fn::Sub but still useful for building delimited strings.

Properties:
  Environment:
    Variables:
      ALLOWED_ORIGINS: !Join
        - ","
        - - "https://example.com"
          - "https://api.example.com"
          - !Sub "https://${Environment}.example.com"

Fn::GetAtt

Fn::GetAtt retrieves an attribute from a resource. While Ref gives you the primary identifier, GetAtt gives you other attributes like ARNs, URLs, and DNS names.

Outputs:
  TableArn:
    Value: !GetAtt ItemsTable.Arn
  ApiRootResourceId:
    Value: !GetAtt ApiGateway.RootResourceId
  BucketDomainName:
    Value: !GetAtt AssetsBucket.DomainName

Each resource type has its own set of supported attributes. Check the CloudFormation documentation for the specific resource.

Fn::Select and Fn::Split

These are handy for picking values out of lists:

Properties:
  AvailabilityZone: !Select
    - 0
    - !GetAZs ""
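
Fn::Split does the reverse of Fn::Join: it turns a delimited string into a list you can index with Fn::Select. As a sketch, assuming a String parameter named TableArn holds a plain DynamoDB table ARN (the same kind of value the nested-stack example later passes between stacks), this extracts the table name after the slash:

Parameters:
  TableArn:
    Type: String
    Description: ARN of an existing DynamoDB table

Resources:
  MyFunction:
    Type: AWS::Lambda::Function
    Properties:
      Environment:
        Variables:
          TABLE_NAME: !Select [1, !Split ["/", !Ref TableArn]]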

Conditions with Fn::If

You can conditionally create resources or set properties:

Conditions:
  IsProduction: !Equals [!Ref Environment, "production"]

Resources:
  AlarmTopic:
    Type: AWS::SNS::Topic
    Condition: IsProduction
    Properties:
      TopicName: !Sub "${AWS::StackName}-alarms"

  MyTable:
    Type: AWS::DynamoDB::Table
    Properties:
      BillingMode: !If
        - IsProduction
        - PROVISIONED
        - PAY_PER_REQUEST
      # PROVISIONED billing also requires throughput values
      ProvisionedThroughput: !If
        - IsProduction
        - ReadCapacityUnits: 5
          WriteCapacityUnits: 5
        - !Ref "AWS::NoValue"

Deploying Lambda Functions

There are three ways to include Lambda code in a CloudFormation template.

Inline Code with ZipFile

For simple functions under 4KB, you can inline the code directly:

Resources:
  HelloFunction:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: !Sub "${AWS::StackName}-hello"
      Runtime: nodejs20.x
      Handler: index.handler
      Role: !GetAtt LambdaRole.Arn
      Timeout: 30
      MemorySize: 256
      Code:
        ZipFile: |
          exports.handler = function(event, context, callback) {
            var response = {
              statusCode: 200,
              headers: { "Content-Type": "application/json" },
              body: JSON.stringify({ message: "Hello from CloudFormation" })
            };
            callback(null, response);
          };

S3 Deployment Package

For real applications, you package your code as a zip and upload it to S3:

Resources:
  ApiFunction:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: nodejs20.x
      Handler: src/handler.handler
      Role: !GetAtt LambdaRole.Arn
      Code:
        S3Bucket: !Ref DeploymentBucket
        S3Key: !Sub "deployments/${Version}/lambda.zip"
      Environment:
        Variables:
          NODE_ENV: !Ref Environment
          TABLE_NAME: !Ref ItemsTable

Package and upload with the CLI:

zip -r lambda.zip src/ node_modules/ package.json
aws s3 cp lambda.zip s3://my-deployment-bucket/deployments/v1.0.0/lambda.zip
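
Then point the stack at the new object. A sketch, assuming DeploymentBucket and Version are declared as String parameters in the template (they are referenced above but not shown):

aws cloudformation deploy \
  --template-file template.yaml \
  --stack-name my-app \
  --parameter-overrides DeploymentBucket=my-deployment-bucket Version=v1.0.0 \
  --capabilities CAPABILITY_NAMED_IAM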

AWS SAM Transform

The AWS Serverless Application Model (SAM) is a CloudFormation extension that simplifies serverless definitions. Add the transform at the top of your template:

AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31

Resources:
  ApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/handler.handler
      Runtime: nodejs20.x
      Events:
        GetItems:
          Type: Api
          Properties:
            Path: /items
            Method: get

SAM is convenient, but I prefer raw CloudFormation for production templates because it gives you full control and does not hide important configuration behind abstractions.
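
If you do go the SAM route, the separately installed sam CLI wraps the build, package, and deploy steps; a typical minimal flow looks like this:

sam build
sam deploy --guided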

API Gateway Configuration

Setting up API Gateway in CloudFormation is verbose but straightforward. Here is a REST API with a single endpoint:

Resources:
  ApiGateway:
    Type: AWS::ApiGateway::RestApi
    Properties:
      Name: !Sub "${AWS::StackName}-api"
      Description: Node.js API
      EndpointConfiguration:
        Types:
          - REGIONAL

  ItemsResource:
    Type: AWS::ApiGateway::Resource
    Properties:
      RestApiId: !Ref ApiGateway
      ParentId: !GetAtt ApiGateway.RootResourceId
      PathPart: items

  ItemsGetMethod:
    Type: AWS::ApiGateway::Method
    Properties:
      RestApiId: !Ref ApiGateway
      ResourceId: !Ref ItemsResource
      HttpMethod: GET
      AuthorizationType: NONE
      Integration:
        Type: AWS_PROXY
        IntegrationHttpMethod: POST
        Uri: !Sub "arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${ApiFunction.Arn}/invocations"

  ApiDeployment:
    Type: AWS::ApiGateway::Deployment
    DependsOn: ItemsGetMethod
    Properties:
      RestApiId: !Ref ApiGateway

  ApiStage:
    Type: AWS::ApiGateway::Stage
    Properties:
      RestApiId: !Ref ApiGateway
      DeploymentId: !Ref ApiDeployment
      StageName: !Ref Environment

  LambdaApiPermission:
    Type: AWS::Lambda::Permission
    Properties:
      FunctionName: !Ref ApiFunction
      Action: lambda:InvokeFunction
      Principal: apigateway.amazonaws.com
      SourceArn: !Sub "arn:aws:execute-api:${AWS::Region}:${AWS::AccountId}:${ApiGateway}/*"

The DependsOn on the deployment is critical. Without it, CloudFormation might try to create the deployment before the methods exist, and the deployment will have no routes.
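
A related gotcha: an AWS::ApiGateway::Deployment is an immutable snapshot of the API at the moment it is created. If you later add or change methods, updating the stack will not redeploy the API by itself; the usual workaround is to rename the deployment's logical ID so CloudFormation creates a fresh deployment, as in this sketch (ApiDeploymentV2 is just a new logical name):

  ApiDeploymentV2:
    Type: AWS::ApiGateway::Deployment
    DependsOn: ItemsGetMethod
    Properties:
      RestApiId: !Ref ApiGateway

The stage's DeploymentId must then point at ApiDeploymentV2 instead of the old logical ID.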

DynamoDB and S3 Resources

DynamoDB Table

Resources:
  ItemsTable:
    Type: AWS::DynamoDB::Table
    DeletionPolicy: Retain
    Properties:
      TableName: !Sub "${AWS::StackName}-items"
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: pk
          AttributeType: S
        - AttributeName: sk
          AttributeType: S
        - AttributeName: gsi1pk
          AttributeType: S
      KeySchema:
        - AttributeName: pk
          KeyType: HASH
        - AttributeName: sk
          KeyType: RANGE
      GlobalSecondaryIndexes:
        - IndexName: gsi1
          KeySchema:
            - AttributeName: gsi1pk
              KeyType: HASH
            - AttributeName: sk
              KeyType: RANGE
          Projection:
            ProjectionType: ALL
      PointInTimeRecoverySpecification:
        PointInTimeRecoveryEnabled: true

The DeletionPolicy: Retain is essential for any table holding real data. Without it, deleting your stack deletes the table and all its data.

S3 Bucket

Resources:
  AssetsBucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
    Properties:
      BucketName: !Sub "${AWS::StackName}-assets-${AWS::AccountId}"
      VersioningConfiguration:
        Status: Enabled
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
      LifecycleConfiguration:
        Rules:
          - Id: CleanupOldVersions
            Status: Enabled
            NoncurrentVersionExpiration:
              NoncurrentDays: 30

Always include the account ID in bucket names to avoid naming collisions, since S3 bucket names are globally unique.

IAM Roles and Policies

Lambda functions need an execution role. Define the role with the minimum permissions your function actually needs:

Resources:
  LambdaRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Sub "${AWS::StackName}-lambda-role"
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      Policies:
        - PolicyName: DynamoDBAccess
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - dynamodb:GetItem
                  - dynamodb:PutItem
                  - dynamodb:UpdateItem
                  - dynamodb:DeleteItem
                  - dynamodb:Query
                  - dynamodb:Scan
                Resource:
                  - !GetAtt ItemsTable.Arn
                  - !Sub "${ItemsTable.Arn}/index/*"
        - PolicyName: S3Access
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - s3:GetObject
                  - s3:PutObject
                  - s3:DeleteObject
                Resource: !Sub "${AssetsBucket.Arn}/*"

The AWSLambdaBasicExecutionRole managed policy gives your function permission to write logs to CloudWatch. The inline policies grant specific access to your DynamoDB table and S3 bucket. Never use * for resources in production.

Nested Stacks

When your template grows unwieldy (a few hundred lines, or anywhere near the 500-resources-per-stack limit), split it into nested stacks. Each nested stack is a separate template referenced from a parent template:

# parent-stack.yaml
Resources:
  DatabaseStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-templates/database.yaml
      Parameters:
        Environment: !Ref Environment
        TableName: !Sub "${AWS::StackName}-items"

  ApiStack:
    Type: AWS::CloudFormation::Stack
    DependsOn: DatabaseStack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-templates/api.yaml
      Parameters:
        Environment: !Ref Environment
        TableArn: !GetAtt DatabaseStack.Outputs.TableArn
        TableName: !GetAtt DatabaseStack.Outputs.TableName

Upload nested templates to S3 before deploying:

aws s3 sync ./templates/ s3://my-templates/
aws cloudformation deploy --template-file parent-stack.yaml --stack-name my-app
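
Alternatively, if the parent references its nested templates by local path (for example TemplateURL: ./database.yaml instead of a full S3 URL), the package command uploads them for you and rewrites each TemplateURL in a generated copy of the parent:

aws cloudformation package \
  --template-file parent-stack.yaml \
  --s3-bucket my-templates \
  --output-template-file packaged.yaml

aws cloudformation deploy --template-file packaged.yaml --stack-name my-app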

Change Sets

Never deploy directly to production without previewing changes. Change sets show you exactly what CloudFormation will add, modify, or delete before it does anything:

# Create a change set
aws cloudformation create-change-set \
  --stack-name my-app \
  --template-body file://template.yaml \
  --change-set-name update-v2 \
  --capabilities CAPABILITY_NAMED_IAM

# Review the changes
aws cloudformation describe-change-set \
  --stack-name my-app \
  --change-set-name update-v2

# Execute if satisfied
aws cloudformation execute-change-set \
  --stack-name my-app \
  --change-set-name update-v2

The describe-change-set output tells you whether each resource will be added, modified (and whether the modification requires replacement), or removed. Pay close attention to resources marked as Replacement because they will be destroyed and recreated.
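
To zero in on replacements, you can filter the describe output with a JMESPath query; this assumes the same stack and change set names as above:

aws cloudformation describe-change-set \
  --stack-name my-app \
  --change-set-name update-v2 \
  --query "Changes[?ResourceChange.Replacement=='True'].ResourceChange.LogicalResourceId" \
  --output text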

Stack Policies

Stack policies prevent accidental updates or deletions of critical resources. Apply a policy to your stack:

{
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "Update:Replace",
      "Principal": "*",
      "Resource": "LogicalResourceId/ItemsTable"
    },
    {
      "Effect": "Deny",
      "Action": "Update:Delete",
      "Principal": "*",
      "Resource": "LogicalResourceId/ItemsTable"
    },
    {
      "Effect": "Allow",
      "Action": "Update:*",
      "Principal": "*",
      "Resource": "*"
    }
  ]
}

Apply it during stack creation:

aws cloudformation create-stack \
  --stack-name my-app \
  --template-body file://template.yaml \
  --stack-policy-body file://stack-policy.json \
  --capabilities CAPABILITY_NAMED_IAM

This prevents anyone from accidentally replacing or deleting your DynamoDB table, even if they change properties that would normally trigger a replacement.
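
For a stack that already exists, you can attach or update the policy with set-stack-policy:

aws cloudformation set-stack-policy \
  --stack-name my-app \
  --stack-policy-body file://stack-policy.json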

Custom Resources with Lambda

Custom resources let you run arbitrary code during stack creation, update, or deletion. This is powerful for tasks CloudFormation does not natively support, like populating a DynamoDB table with seed data or cleaning up S3 buckets before deletion.

Resources:
  SeedDataFunction:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: nodejs20.x
      Handler: index.handler
      Role: !GetAtt LambdaRole.Arn
      Timeout: 120
      Environment:
        Variables:
          TABLE_NAME: !Ref ItemsTable
      Code:
        ZipFile: |
          // AWS SDK v2 is not bundled with the nodejs18.x/nodejs20.x runtimes,
          // so use the preinstalled AWS SDK v3 clients instead.
          const { DynamoDBClient } = require('@aws-sdk/client-dynamodb');
          const { DynamoDBDocumentClient, PutCommand } = require('@aws-sdk/lib-dynamodb');
          const response = require('cfn-response');
          const dynamo = DynamoDBDocumentClient.from(new DynamoDBClient({}));

          exports.handler = function(event, context) {
            if (event.RequestType === 'Delete') {
              response.send(event, context, response.SUCCESS, {});
              return;
            }

            const tableName = process.env.TABLE_NAME;
            const items = [
              { pk: 'CONFIG', sk: 'APP_VERSION', value: '1.0.0' },
              { pk: 'CONFIG', sk: 'FEATURE_FLAGS', value: JSON.stringify({ darkMode: false }) }
            ];

            const promises = items.map(function(item) {
              return dynamo.send(new PutCommand({ TableName: tableName, Item: item }));
            });

            Promise.all(promises)
              .then(function() {
                response.send(event, context, response.SUCCESS, { ItemCount: items.length });
              })
              .catch(function(err) {
                console.error('Seed failed:', err);
                response.send(event, context, response.FAILED, { Error: err.message });
              });
          };

  SeedData:
    Type: Custom::SeedData
    DependsOn: ItemsTable
    Properties:
      ServiceToken: !GetAtt SeedDataFunction.Arn
      Version: "1.0"

The cfn-response module is only available when using ZipFile inline code. For S3-deployed custom resource handlers, you need to send the response manually via an HTTPS PUT to the pre-signed URL in the event object. Changing the Version property triggers an update, which re-runs the custom resource.
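
For S3-deployed handlers, that manual response is just an HTTPS PUT of a small JSON document to event.ResponseURL. Here is a minimal sketch of the plumbing using only Node.js built-ins; the field names follow the custom resource response contract, and the handler body is a placeholder for your own Create/Update/Delete logic:

const https = require("https");
const { URL } = require("url");

// PUT the result back to the pre-signed URL CloudFormation provided.
function sendResponse(event, context, status, data) {
  const body = JSON.stringify({
    Status: status,                             // "SUCCESS" or "FAILED"
    Reason: "See CloudWatch log stream: " + context.logStreamName,
    PhysicalResourceId: context.logStreamName,  // reuse the log stream as a stable ID
    StackId: event.StackId,
    RequestId: event.RequestId,
    LogicalResourceId: event.LogicalResourceId,
    Data: data || {}
  });

  const url = new URL(event.ResponseURL);
  const options = {
    hostname: url.hostname,
    path: url.pathname + url.search,
    method: "PUT",
    headers: { "Content-Type": "", "Content-Length": Buffer.byteLength(body) }
  };

  return new Promise(function(resolve, reject) {
    const req = https.request(options, function(res) {
      res.resume();
      resolve();
    });
    req.on("error", reject);
    req.write(body);
    req.end();
  });
}

exports.handler = async function(event, context) {
  try {
    // Do the real work for Create/Update here; keep Delete as close to a no-op as possible.
    await sendResponse(event, context, "SUCCESS", {});
  } catch (err) {
    console.error("Custom resource failed:", err);
    await sendResponse(event, context, "FAILED", {});
  }
};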

Drift Detection

Over time, people make manual changes in the console that cause your stack to drift from the template. Drift detection identifies these discrepancies:

# Start drift detection
aws cloudformation detect-stack-drift --stack-name my-app

# Check detection status
aws cloudformation describe-stack-drift-detection-status \
  --stack-drift-detection-id <detection-id>

# View drifted resources
aws cloudformation describe-stack-resource-drifts \
  --stack-name my-app \
  --stack-resource-drift-status-filters MODIFIED DELETED

Run drift detection as part of your CI pipeline. If drift is detected, either update your template to match the actual state or revert the manual changes. Unchecked drift leads to deployment failures and unexpected behavior.
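
Once a detection run finishes, the stack's overall drift status is also exposed on the stack itself, which makes for a simple CI gate:

STATUS=$(aws cloudformation describe-stacks \
  --stack-name my-app \
  --query "Stacks[0].DriftInformation.StackDriftStatus" \
  --output text)

if [ "$STATUS" = "DRIFTED" ]; then
  echo "Stack my-app has drifted from its template" >&2
  exit 1
fi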

Complete Working Example

Here is a full CloudFormation template that deploys a serverless Node.js API with Lambda, API Gateway, DynamoDB, S3, and proper IAM roles.

AWSTemplateFormatVersion: "2010-09-09"
Description: Serverless Node.js API with Lambda, API Gateway, DynamoDB, and S3

Parameters:
  Environment:
    Type: String
    Default: dev
    AllowedValues:
      - dev
      - staging
      - production
    Description: Deployment environment

  LambdaMemorySize:
    Type: Number
    Default: 256
    AllowedValues:
      - 128
      - 256
      - 512
      - 1024
    Description: Lambda function memory in MB

  LambdaTimeout:
    Type: Number
    Default: 30
    MinValue: 5
    MaxValue: 300
    Description: Lambda function timeout in seconds

Conditions:
  IsProduction: !Equals [!Ref Environment, "production"]

Resources:
  # ---- DynamoDB Table ----
  ItemsTable:
    Type: AWS::DynamoDB::Table
    DeletionPolicy: Retain
    UpdateReplacePolicy: Retain
    Properties:
      TableName: !Sub "${AWS::StackName}-items"
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: pk
          AttributeType: S
        - AttributeName: sk
          AttributeType: S
      KeySchema:
        - AttributeName: pk
          KeyType: HASH
        - AttributeName: sk
          KeyType: RANGE
      PointInTimeRecoverySpecification:
        PointInTimeRecoveryEnabled: !If [IsProduction, true, false]
      Tags:
        - Key: Environment
          Value: !Ref Environment

  # ---- S3 Bucket ----
  AssetsBucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
    Properties:
      BucketName: !Sub "${AWS::StackName}-assets-${AWS::AccountId}"
      VersioningConfiguration:
        Status: Enabled
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
      CorsConfiguration:
        CorsRules:
          - AllowedHeaders:
              - "*"
            AllowedMethods:
              - GET
              - PUT
            AllowedOrigins:
              - "*"
            MaxAge: 3600

  # ---- IAM Role ----
  LambdaExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Sub "${AWS::StackName}-lambda-role"
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      Policies:
        - PolicyName: DynamoDBAccess
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - dynamodb:GetItem
                  - dynamodb:PutItem
                  - dynamodb:UpdateItem
                  - dynamodb:DeleteItem
                  - dynamodb:Query
                Resource:
                  - !GetAtt ItemsTable.Arn
                  - !Sub "${ItemsTable.Arn}/index/*"
        - PolicyName: S3Access
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - s3:GetObject
                  - s3:PutObject
                Resource: !Sub "${AssetsBucket.Arn}/*"

  # ---- Lambda Function ----
  ApiFunction:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: !Sub "${AWS::StackName}-api"
      Runtime: nodejs20.x
      Handler: index.handler
      Role: !GetAtt LambdaExecutionRole.Arn
      MemorySize: !Ref LambdaMemorySize
      Timeout: !Ref LambdaTimeout
      Environment:
        Variables:
          TABLE_NAME: !Ref ItemsTable
          BUCKET_NAME: !Ref AssetsBucket
          NODE_ENV: !Ref Environment
      Code:
        ZipFile: |
          // AWS SDK v2 is not bundled with the nodejs20.x runtime, so use the
          // preinstalled AWS SDK v3 clients.
          const { DynamoDBClient } = require("@aws-sdk/client-dynamodb");
          const { DynamoDBDocumentClient, QueryCommand, PutCommand, DeleteCommand } = require("@aws-sdk/lib-dynamodb");
          const dynamo = DynamoDBDocumentClient.from(new DynamoDBClient({}));
          const tableName = process.env.TABLE_NAME;

          const headers = {
            "Content-Type": "application/json",
            "Access-Control-Allow-Origin": "*",
            "Access-Control-Allow-Methods": "GET,POST,PUT,DELETE,OPTIONS",
            "Access-Control-Allow-Headers": "Content-Type"
          };

          exports.handler = async function(event) {
            const method = event.httpMethod;
            const path = event.path;

            try {
              const body = event.body ? JSON.parse(event.body) : {};

              if (method === "OPTIONS") {
                return { statusCode: 200, headers: headers, body: "" };
              }

              if (method === "GET" && path === "/items") {
                const data = await dynamo.send(new QueryCommand({
                  TableName: tableName,
                  KeyConditionExpression: "pk = :pk",
                  ExpressionAttributeValues: { ":pk": "ITEM" }
                }));
                return { statusCode: 200, headers: headers, body: JSON.stringify({ items: data.Items }) };
              }

              if (method === "POST" && path === "/items") {
                const id = Date.now().toString(36) + Math.random().toString(36).slice(2, 7);
                const item = {
                  pk: "ITEM",
                  sk: id,
                  name: body.name,
                  description: body.description || "",
                  createdAt: new Date().toISOString()
                };
                await dynamo.send(new PutCommand({ TableName: tableName, Item: item }));
                return { statusCode: 201, headers: headers, body: JSON.stringify({ item: item }) };
              }

              if (method === "DELETE" && path.startsWith("/items/")) {
                const itemId = path.split("/").pop();
                await dynamo.send(new DeleteCommand({
                  TableName: tableName,
                  Key: { pk: "ITEM", sk: itemId }
                }));
                return { statusCode: 204, headers: headers, body: "" };
              }

              return { statusCode: 404, headers: headers, body: JSON.stringify({ error: "Not found" }) };
            } catch (err) {
              console.error("Request failed:", err);
              return { statusCode: 500, headers: headers, body: JSON.stringify({ error: "Internal server error" }) };
            }
          };
      Tags:
        - Key: Environment
          Value: !Ref Environment

  # ---- API Gateway ----
  ApiGateway:
    Type: AWS::ApiGateway::RestApi
    Properties:
      Name: !Sub "${AWS::StackName}-api"
      Description: !Sub "Node.js Items API (${Environment})"
      EndpointConfiguration:
        Types:
          - REGIONAL

  ItemsResource:
    Type: AWS::ApiGateway::Resource
    Properties:
      RestApiId: !Ref ApiGateway
      ParentId: !GetAtt ApiGateway.RootResourceId
      PathPart: items

  ItemIdResource:
    Type: AWS::ApiGateway::Resource
    Properties:
      RestApiId: !Ref ApiGateway
      ParentId: !Ref ItemsResource
      PathPart: "{id}"

  ItemsGetMethod:
    Type: AWS::ApiGateway::Method
    Properties:
      RestApiId: !Ref ApiGateway
      ResourceId: !Ref ItemsResource
      HttpMethod: GET
      AuthorizationType: NONE
      Integration:
        Type: AWS_PROXY
        IntegrationHttpMethod: POST
        Uri: !Sub "arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${ApiFunction.Arn}/invocations"

  ItemsPostMethod:
    Type: AWS::ApiGateway::Method
    Properties:
      RestApiId: !Ref ApiGateway
      ResourceId: !Ref ItemsResource
      HttpMethod: POST
      AuthorizationType: NONE
      Integration:
        Type: AWS_PROXY
        IntegrationHttpMethod: POST
        Uri: !Sub "arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${ApiFunction.Arn}/invocations"

  ItemsOptionsMethod:
    Type: AWS::ApiGateway::Method
    Properties:
      RestApiId: !Ref ApiGateway
      ResourceId: !Ref ItemsResource
      HttpMethod: OPTIONS
      AuthorizationType: NONE
      Integration:
        Type: AWS_PROXY
        IntegrationHttpMethod: POST
        Uri: !Sub "arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${ApiFunction.Arn}/invocations"

  ItemDeleteMethod:
    Type: AWS::ApiGateway::Method
    Properties:
      RestApiId: !Ref ApiGateway
      ResourceId: !Ref ItemIdResource
      HttpMethod: DELETE
      AuthorizationType: NONE
      Integration:
        Type: AWS_PROXY
        IntegrationHttpMethod: POST
        Uri: !Sub "arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${ApiFunction.Arn}/invocations"

  ApiDeployment:
    Type: AWS::ApiGateway::Deployment
    DependsOn:
      - ItemsGetMethod
      - ItemsPostMethod
      - ItemsOptionsMethod
      - ItemDeleteMethod
    Properties:
      RestApiId: !Ref ApiGateway

  ApiStage:
    Type: AWS::ApiGateway::Stage
    Properties:
      RestApiId: !Ref ApiGateway
      DeploymentId: !Ref ApiDeployment
      StageName: !Ref Environment
      MethodSettings:
        - ResourcePath: "/*"
          HttpMethod: "*"
          ThrottlingBurstLimit: !If [IsProduction, 500, 50]
          ThrottlingRateLimit: !If [IsProduction, 1000, 100]

  LambdaApiPermission:
    Type: AWS::Lambda::Permission
    Properties:
      FunctionName: !Ref ApiFunction
      Action: lambda:InvokeFunction
      Principal: apigateway.amazonaws.com
      SourceArn: !Sub "arn:aws:execute-api:${AWS::Region}:${AWS::AccountId}:${ApiGateway}/*"

  # ---- CloudWatch Alarm (Production Only) ----
  ErrorAlarm:
    Type: AWS::CloudWatch::Alarm
    Condition: IsProduction
    Properties:
      AlarmName: !Sub "${AWS::StackName}-errors"
      AlarmDescription: Lambda function error rate exceeded threshold
      MetricName: Errors
      Namespace: AWS/Lambda
      Statistic: Sum
      Period: 300
      EvaluationPeriods: 2
      Threshold: 5
      ComparisonOperator: GreaterThanThreshold
      Dimensions:
        - Name: FunctionName
          Value: !Ref ApiFunction

Outputs:
  ApiUrl:
    Description: The URL of the API
    Value: !Sub "https://${ApiGateway}.execute-api.${AWS::Region}.amazonaws.com/${Environment}"

  FunctionArn:
    Description: Lambda function ARN
    Value: !GetAtt ApiFunction.Arn
    Export:
      Name: !Sub "${AWS::StackName}-FunctionArn"

  TableName:
    Description: DynamoDB table name
    Value: !Ref ItemsTable
    Export:
      Name: !Sub "${AWS::StackName}-TableName"

  BucketName:
    Description: S3 bucket name
    Value: !Ref AssetsBucket
    Export:
      Name: !Sub "${AWS::StackName}-BucketName"

Deploy this template:

aws cloudformation deploy \
  --template-file template.yaml \
  --stack-name my-nodejs-api \
  --parameter-overrides Environment=dev LambdaMemorySize=256 \
  --capabilities CAPABILITY_NAMED_IAM \
  --tags Project=MyNodejsApi Environment=dev

After deployment, get the API URL:

aws cloudformation describe-stacks \
  --stack-name my-nodejs-api \
  --query "Stacks[0].Outputs[?OutputKey=='ApiUrl'].OutputValue" \
  --output text

Test it:

# Create an item
curl -X POST https://YOUR_API_URL/items \
  -H "Content-Type: application/json" \
  -d '{"name": "Test Item", "description": "Created via CloudFormation"}'

# List items
curl https://YOUR_API_URL/items

Common Issues and Troubleshooting

1. Circular Dependency Detected

Template error: Circular dependency between resources:
[ApiFunction, LambdaRole, ItemsTable]

This happens when resource A references resource B and resource B references resource A. The most common cause is a Lambda function that references a DynamoDB table name in its environment variables while the table's stream specification references the Lambda function. Break the cycle by using Fn::Sub with the table name pattern instead of !Ref, or introduce an intermediate resource.
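
For example, instead of injecting the table name with !Ref (which makes the function depend on the table), derive the same name from the naming pattern used to create it. This sketch assumes the table's TableName follows the ${AWS::StackName}-items convention used earlier:

  ApiFunction:
    Type: AWS::Lambda::Function
    Properties:
      Environment:
        Variables:
          TABLE_NAME: !Sub "${AWS::StackName}-items"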

2. Role Is Not Authorized to Perform AssumeRole

API: lambda:CreateFunction User: arn:aws:iam::123456789:user/deployer
is not authorized to perform: iam:PassRole on resource:
arn:aws:iam::123456789:role/my-stack-lambda-role

You need iam:PassRole permission on the deploying user or role. Add this to the deployer's IAM policy:

{
  "Effect": "Allow",
  "Action": "iam:PassRole",
  "Resource": "arn:aws:iam::123456789:role/my-stack-*"
}

Also make sure you pass --capabilities CAPABILITY_NAMED_IAM when deploying templates that create named IAM roles.

3. S3 Bucket Already Exists

my-stack-assets (AWS::S3::Bucket) already exists in stack
arn:aws:cloudformation:us-east-1:123456789:stack/other-stack/abc123

S3 bucket names are globally unique. If another stack or account owns that bucket name, you cannot create it. Include the account ID and region in your bucket names:

BucketName: !Sub "${AWS::StackName}-assets-${AWS::AccountId}-${AWS::Region}"

4. Resource Update Requires Replacement

The following resource(s) require replacement: [ItemsTable]
CloudFormation cannot update a stack when a resource requires replacement
and has a stack policy preventing replacement.

Certain property changes on DynamoDB tables (like changing the primary key) require CloudFormation to delete and recreate the table. If you have a stack policy preventing replacement, CloudFormation will refuse. Always use change sets to preview replacements before deploying. If you truly need to change the primary key, create a new table, migrate the data, and update your references.

5. Lambda Deployment Package Too Large

Unzipped size must be smaller than 262144000 bytes
(RequestId: abc-123-def)

The unzipped Lambda deployment package limit is 250MB. Audit your node_modules directory. Remove dev dependencies (npm prune --omit=dev, formerly --production), exclude test files, and consider using Lambda Layers for shared dependencies:

SharedDepsLayer:
  Type: AWS::Lambda::LayerVersion
  Properties:
    LayerName: !Sub "${AWS::StackName}-shared-deps"
    Content:
      S3Bucket: !Ref DeploymentBucket
      S3Key: layers/shared-deps.zip
    CompatibleRuntimes:
      - nodejs20.x

ApiFunction:
  Type: AWS::Lambda::Function
  Properties:
    Layers:
      - !Ref SharedDepsLayer

6. Stack Rollback Failed

Stack [my-stack] is in UPDATE_ROLLBACK_FAILED state and can not be updated.

This is one of the worst states to be in. It usually happens when a custom resource's delete handler fails during rollback. If the initial stack creation rolled back and then failed (ROLLBACK_FAILED), your only real option is to delete the stack and start over. If a stack update rolled back and failed (UPDATE_ROLLBACK_FAILED), you can try to continue the rollback:

# Try to continue the rollback, skipping the problematic resource
aws cloudformation continue-update-rollback \
  --stack-name my-stack \
  --resources-to-skip SeedData

# If that fails, you may need to delete and recreate the stack
aws cloudformation delete-stack --stack-name my-stack

Always make sure your custom resource Lambda handles the Delete event type, even if it does nothing.

Best Practices

  • Use parameters and conditions aggressively. A single template should support dev, staging, and production by parameterizing everything that differs between environments. This keeps your infrastructure DRY and reduces the chance of environment-specific bugs.

  • Set DeletionPolicy to Retain on stateful resources. Any resource that holds data, such as DynamoDB tables, S3 buckets, RDS instances, and Elasticsearch domains, should have DeletionPolicy: Retain. Accidentally deleting a stack should never destroy your data.

  • Tag everything. Apply consistent tags for Environment, Project, Owner, and CostCenter. Tags make cost allocation, access control, and resource discovery possible at scale. Tags applied at the stack level (for example via --tags on deploy) are propagated automatically to every resource in the stack that supports tagging.

  • Validate templates before deploying. Run aws cloudformation validate-template --template-body file://template.yaml before every deployment. This catches syntax errors and invalid resource types immediately instead of failing two minutes into a deployment.

  • Always use change sets for production updates. Never run aws cloudformation deploy directly against production. Create a change set, review it, get sign-off, and then execute. The five extra minutes can save you from accidentally replacing a production database.

  • Store templates in version control next to your application code. Your infrastructure and application code should be in the same repository and deployed together. This ensures your infrastructure always matches the code running on it.

  • Use Outputs and Exports for cross-stack references. When one stack needs values from another, export those values as outputs. This creates an explicit dependency graph and prevents you from accidentally deleting a stack that other stacks depend on.

  • Keep secrets out of templates. Never hardcode API keys, database passwords, or tokens in your templates. Use AWS Systems Manager Parameter Store or Secrets Manager, and reference them with {{resolve:ssm:parameter-name}} or {{resolve:secretsmanager:secret-id}} dynamic references (see the sketch after this list).

  • Limit inline Lambda code to prototyping. The ZipFile property is convenient for examples and simple custom resources, but production Lambda functions should be packaged, tested, and deployed from S3. Inline code has a 4KB limit and cannot include node_modules.

  • Enable termination protection on production stacks. Run aws cloudformation update-termination-protection --enable-termination-protection --stack-name my-prod-stack to prevent accidental stack deletion.
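
As a concrete illustration of the dynamic references mentioned in the secrets bullet, assuming a plain SSM parameter named /myapp/dev/stripe-public-key exists (the name is made up for this sketch):

  ApiFunction:
    Type: AWS::Lambda::Function
    Properties:
      Environment:
        Variables:
          STRIPE_PUBLIC_KEY: "{{resolve:ssm:/myapp/dev/stripe-public-key}}"

Whatever a dynamic reference resolves to ends up in the resulting resource configuration, so for genuinely sensitive values prefer fetching them from Secrets Manager at runtime inside the function.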
