Serverless Architecture Patterns for Node.js

Master serverless architecture patterns including API-first, event-driven, data pipelines, and migration strategies for Node.js

Serverless architecture shifts operational responsibility to a cloud provider so you can focus on writing business logic instead of managing infrastructure. For Node.js developers, serverless is a natural fit because Lambda functions are essentially single-purpose Node scripts that respond to events. This article covers the architecture patterns that matter in production, with working examples you can deploy today.

Prerequisites

  • Node.js 18+ installed locally
  • AWS account with CLI configured (aws configure)
  • AWS SAM CLI installed (brew install aws-sam-cli or equivalent)
  • Basic understanding of AWS services (Lambda, API Gateway, SQS, DynamoDB)
  • Familiarity with CloudFormation or SAM templates

Serverless Computing Fundamentals

Before diving into patterns, you need to understand what serverless actually gives you and what it takes away.

A serverless function is a unit of deployment. Your cloud provider allocates compute on demand, executes your function, and bills you per invocation and duration. There are no servers to patch, no capacity to plan, and no idle costs. But there are cold starts, execution time limits, payload size limits, and a fundamentally different programming model.

In AWS, the core serverless building blocks are:

  • AWS Lambda — the compute layer, runs your Node.js code
  • API Gateway — HTTP routing to Lambda
  • SQS / SNS / EventBridge — asynchronous messaging and event routing
  • DynamoDB — serverless NoSQL database
  • S3 — object storage with event triggers
  • Step Functions — workflow orchestration

The key mental shift is this: you are not building an application that runs continuously. You are building a collection of functions that respond to events. Every serverless architecture pattern is fundamentally about how you wire events to functions.

// The simplest possible Lambda handler
var AWS = require("aws-sdk");

exports.handler = function(event, context, callback) {
  console.log("Event received:", JSON.stringify(event, null, 2));

  var response = {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: "Hello from Lambda" })
  };

  callback(null, response);
};

Every Lambda function receives an event object (the trigger payload), a context object (runtime metadata like remaining time), and a callback for returning results. Modern runtimes also support returning a Promise or using async/await, but the callback pattern remains widely used and explicit. One caveat for the examples in this article: the nodejs18.x runtime no longer bundles the v2 aws-sdk package, so you must either ship it in your deployment package or migrate to the modular AWS SDK v3 clients.
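
For comparison, the same handler written as an async function, with no callback needed:

// The same handler using async/await
exports.handler = async function(event) {
  console.log("Event received:", JSON.stringify(event, null, 2));

  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: "Hello from Lambda" })
  };
};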

Pattern 1: API-First (API Gateway + Lambda)

This is the most common pattern. API Gateway receives HTTP requests and routes them to Lambda functions. Each endpoint maps to a handler.

// handlers/users.js
var AWS = require("aws-sdk");
var dynamo = new AWS.DynamoDB.DocumentClient();
var TABLE_NAME = process.env.USERS_TABLE;

exports.getUser = function(event, context, callback) {
  var userId = event.pathParameters.id;

  var params = {
    TableName: TABLE_NAME,
    Key: { userId: userId }
  };

  dynamo.get(params, function(err, data) {
    if (err) {
      console.error("DynamoDB error:", err);
      return callback(null, {
        statusCode: 500,
        body: JSON.stringify({ error: "Failed to retrieve user" })
      });
    }

    if (!data.Item) {
      return callback(null, {
        statusCode: 404,
        body: JSON.stringify({ error: "User not found" })
      });
    }

    callback(null, {
      statusCode: 200,
      headers: {
        "Content-Type": "application/json",
        "Cache-Control": "max-age=60"
      },
      body: JSON.stringify(data.Item)
    });
  });
};

exports.createUser = function(event, context, callback) {
  var body = JSON.parse(event.body);
  var userId = context.awsRequestId;

  var item = {
    userId: userId,
    email: body.email,
    name: body.name,
    createdAt: new Date().toISOString()
  };

  var params = {
    TableName: TABLE_NAME,
    Item: item,
    ConditionExpression: "attribute_not_exists(userId)"
  };

  dynamo.put(params, function(err) {
    if (err) {
      console.error("DynamoDB put error:", err);
      return callback(null, {
        statusCode: 500,
        body: JSON.stringify({ error: "Failed to create user" })
      });
    }

    callback(null, {
      statusCode: 201,
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(item)
    });
  });
};

The API-first pattern works well for CRUD applications and microservices. One important decision is whether to use a single Lambda behind a proxy integration (one function handles all routes) or individual functions per endpoint. I recommend individual functions for most cases — they have smaller cold start times, independent scaling, and granular IAM permissions.
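
A sketch of the per-function wiring in SAM for the two handlers above (the UsersTable reference is an assumption):

# One function per endpoint, each with least-privilege access
GetUserFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: handlers/users.getUser
    Events:
      GetUser:
        Type: Api
        Properties:
          Path: /users/{id}
          Method: get
    Policies:
      - DynamoDBReadPolicy:
          TableName: !Ref UsersTable

CreateUserFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: handlers/users.createUser
    Events:
      CreateUser:
        Type: Api
        Properties:
          Path: /users
          Method: post
    Policies:
      - DynamoDBCrudPolicy:
          TableName: !Ref UsersTable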

Pattern 2: Event Processing (SQS / SNS / EventBridge)

Synchronous request-response is only half the story. Many workloads are better served by asynchronous event processing. SQS provides reliable message queuing, SNS provides pub/sub fan-out, and EventBridge provides event routing with filtering rules.

// handlers/orderProcessor.js
var AWS = require("aws-sdk");
var dynamo = new AWS.DynamoDB.DocumentClient();
var sns = new AWS.SNS();

exports.handler = function(event, context, callback) {
  var failures = [];

  var promises = event.Records.map(function(record) {
    // Resolve each record independently so one bad message cannot fail the batch
    return Promise.resolve()
      .then(function() {
        var order = JSON.parse(record.body);
        console.log("Processing order:", order.orderId);
        return processOrder(order);
      })
      .catch(function(err) {
        console.error("Failed to process message:", record.messageId, err);
        failures.push({ itemIdentifier: record.messageId });
      });
  });

  Promise.all(promises).then(function() {
    console.log("Processed", event.Records.length - failures.length, "orders");
    callback(null, { batchItemFailures: failures });
  });
};

function processOrder(order) {
  var params = {
    TableName: process.env.ORDERS_TABLE,
    Item: {
      orderId: order.orderId,
      customerId: order.customerId,
      items: order.items,
      total: order.total,
      status: "processing",
      processedAt: new Date().toISOString()
    }
  };

  return dynamo.put(params).promise()
    .then(function() {
      return sns.publish({
        TopicArn: process.env.ORDER_NOTIFICATION_TOPIC,
        Message: JSON.stringify({
          type: "ORDER_PROCESSED",
          orderId: order.orderId,
          customerId: order.customerId,
          total: order.total
        }),
        MessageAttributes: {
          eventType: {
            DataType: "String",
            StringValue: "ORDER_PROCESSED"
          }
        }
      }).promise();
    });
}

SQS triggers Lambda with batches of messages. If your function throws an error, the entire batch is retried. For partial failure handling, return a batchItemFailures response listing only the failed message IDs, as the handler above does. This is critical for production reliability: without it, one bad message poisons the entire batch. Note that partial batch responses only take effect when the event source mapping opts in, as shown below.
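
In SAM, the opt-in is the FunctionResponseTypes property on the SQS event source (the queue name and batch size here are illustrative):

# SQS event source with partial batch responses enabled
Events:
  OrderQueue:
    Type: SQS
    Properties:
      Queue: !GetAtt OrderQueue.Arn
      BatchSize: 10
      FunctionResponseTypes:
        - ReportBatchItemFailures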

EventBridge is the more sophisticated option. It lets you define rules that match event patterns and route them to targets:

// handlers/eventRouter.js
// This function responds to EventBridge events
var AWS = require("aws-sdk");
var eventBridge = new AWS.EventBridge();

exports.handler = function(event, context, callback) {
  var detailType = event["detail-type"];
  var source = event.source;
  var detail = event.detail;

  console.log("Received event:", detailType, "from", source);

  switch (detailType) {
    case "OrderPlaced":
      return handleOrderPlaced(detail, callback);
    case "PaymentReceived":
      return handlePaymentReceived(detail, callback);
    case "InventoryUpdated":
      return handleInventoryUpdated(detail, callback);
    default:
      console.warn("Unhandled event type:", detailType);
      callback(null, { status: "skipped" });
  }
};

function handleOrderPlaced(detail, callback) {
  var AWS = require("aws-sdk");
  var eventBridge = new AWS.EventBridge();

  // Emit downstream events
  var params = {
    Entries: [{
      Source: "orders.service",
      DetailType: "InventoryReservationRequested",
      Detail: JSON.stringify({
        orderId: detail.orderId,
        items: detail.items,
        timestamp: new Date().toISOString()
      }),
      EventBusName: process.env.EVENT_BUS_NAME
    }]
  };

  eventBridge.putEvents(params).promise()
    .then(function(result) {
      callback(null, { status: "processed", eventId: result.Entries[0].EventId });
    })
    .catch(function(err) {
      console.error("Failed to emit event:", err);
      callback(err);
    });
}

// handlePaymentReceived and handleInventoryUpdated follow the same shape
function handlePaymentReceived(detail, callback) {
  callback(null, { status: "processed" });
}

function handleInventoryUpdated(detail, callback) {
  callback(null, { status: "processed" });
}
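
On the wiring side, the rule that delivers events to this router is just an event pattern. A sketch matching the three types handled above (the source value is an assumption):

// Example EventBridge rule event pattern (JSON)
{
  "source": ["orders.service"],
  "detail-type": ["OrderPlaced", "PaymentReceived", "InventoryUpdated"]
}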

Pattern 3: Data Pipeline (S3 Triggers + Lambda + DynamoDB)

This pattern handles file processing workflows. When a file lands in S3, it triggers a Lambda function that transforms the data and writes results to DynamoDB or another S3 bucket.

// handlers/csvProcessor.js
var AWS = require("aws-sdk");
var s3 = new AWS.S3();
var dynamo = new AWS.DynamoDB.DocumentClient();

exports.handler = function(event, context, callback) {
  var bucket = event.Records[0].s3.bucket.name;
  var key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));

  console.log("Processing file:", key, "from bucket:", bucket);

  s3.getObject({ Bucket: bucket, Key: key }).promise()
    .then(function(data) {
      var content = data.Body.toString("utf-8");
      var rows = parseCSV(content);
      console.log("Parsed", rows.length, "rows");

      return writeBatch(rows);
    })
    .then(function(result) {
      console.log("Successfully wrote", result.count, "records");

      // Move processed file to archive
      return s3.copyObject({
        Bucket: bucket,
        CopySource: bucket + "/" + key,
        Key: "archive/" + key
      }).promise()
      .then(function() {
        return s3.deleteObject({ Bucket: bucket, Key: key }).promise();
      });
    })
    .then(function() {
      callback(null, { status: "complete" });
    })
    .catch(function(err) {
      console.error("Pipeline error:", err);
      callback(err);
    });
};

// Naive CSV parser: assumes no quoted fields or embedded commas
function parseCSV(content) {
  var lines = content.split("\n");
  var headers = lines[0].split(",").map(function(h) { return h.trim(); });

  return lines.slice(1)
    .filter(function(line) { return line.trim().length > 0; })
    .map(function(line) {
      var values = line.split(",");
      var row = {};
      headers.forEach(function(header, i) {
        row[header] = values[i] ? values[i].trim() : "";
      });
      return row;
    });
}

function writeBatch(rows) {
  var batchSize = 25; // DynamoDB batch write limit
  var batches = [];

  for (var i = 0; i < rows.length; i += batchSize) {
    batches.push(rows.slice(i, i + batchSize));
  }

  var processed = 0;

  return batches.reduce(function(chain, batch) {
    return chain.then(function() {
      var requests = batch.map(function(row) {
        return {
          PutRequest: {
            Item: {
              id: row.id || require("crypto").randomUUID(),
              data: row,
              importedAt: new Date().toISOString()
            }
          }
        };
      });

      var params = { RequestItems: {} };
      params.RequestItems[process.env.DATA_TABLE] = requests;

      // Production code should also retry data.UnprocessedItems (see troubleshooting)
      return dynamo.batchWrite(params).promise().then(function() {
        processed += batch.length;
      });
    });
  }, Promise.resolve()).then(function() {
    return { count: processed };
  });
}

The pipeline pattern has a key constraint: Lambda has a 15-minute execution limit. For large files, split processing across multiple invocations using Step Functions or by chunking the file into smaller parts and processing them in parallel.

Pattern 4: Scheduled Tasks (EventBridge Scheduled Rules)

Cron jobs in serverless are handled by EventBridge (formerly CloudWatch Events) scheduled rules. These invoke Lambda functions on a schedule without any infrastructure to maintain.

// handlers/dailyReport.js
var AWS = require("aws-sdk");
var dynamo = new AWS.DynamoDB.DocumentClient();
var ses = new AWS.SES();

exports.handler = function(event, context, callback) {
  var yesterday = new Date();
  yesterday.setDate(yesterday.getDate() - 1);
  var dateKey = yesterday.toISOString().split("T")[0];

  console.log("Generating report for:", dateKey);

  var params = {
    TableName: process.env.METRICS_TABLE,
    KeyConditionExpression: "#dt = :dateKey",
    ExpressionAttributeNames: { "#dt": "date" },
    ExpressionAttributeValues: { ":dateKey": dateKey }
  };

  dynamo.query(params).promise()
    .then(function(data) {
      var report = generateReport(data.Items, dateKey);
      return sendReportEmail(report);
    })
    .then(function() {
      callback(null, { status: "report sent" });
    })
    .catch(function(err) {
      console.error("Report generation failed:", err);
      callback(err);
    });
};

function generateReport(items, dateKey) {
  var totalOrders = items.length;
  var totalRevenue = items.reduce(function(sum, item) {
    return sum + (item.revenue || 0);
  }, 0);
  var avgOrderValue = totalOrders > 0 ? totalRevenue / totalOrders : 0;

  return {
    date: dateKey,
    totalOrders: totalOrders,
    totalRevenue: totalRevenue.toFixed(2),
    averageOrderValue: avgOrderValue.toFixed(2),
    topProducts: getTopProducts(items, 5)
  };
}

function getTopProducts(items, limit) {
  var productCounts = {};
  items.forEach(function(item) {
    (item.products || []).forEach(function(p) {
      productCounts[p.name] = (productCounts[p.name] || 0) + p.quantity;
    });
  });

  return Object.keys(productCounts)
    .sort(function(a, b) { return productCounts[b] - productCounts[a]; })
    .slice(0, limit)
    .map(function(name) {
      return { name: name, quantity: productCounts[name] };
    });
}

function sendReportEmail(report) {
  var params = {
    Destination: { ToAddresses: [process.env.REPORT_EMAIL] },
    Source: process.env.SENDER_EMAIL,
    Message: {
      Subject: { Data: "Daily Report - " + report.date },
      Body: {
        Html: {
          Data: "<h2>Daily Report: " + report.date + "</h2>" +
            "<p>Total Orders: " + report.totalOrders + "</p>" +
            "<p>Total Revenue: $" + report.totalRevenue + "</p>" +
            "<p>Average Order Value: $" + report.averageOrderValue + "</p>"
        }
      }
    }
  };

  return ses.sendEmail(params).promise();
}

Pattern 5: GraphQL Serverless (AppSync)

AWS AppSync provides a managed GraphQL API that integrates directly with DynamoDB, Lambda, and other data sources. It reduces boilerplate compared to rolling your own GraphQL server on Lambda.

However, you can also run a GraphQL server directly on Lambda. The example below uses apollo-server-lambda, which belongs to Apollo Server 2/3; Apollo Server 4 moved Lambda support into @as-integrations/aws-lambda:

// handlers/graphql.js
var ApolloServer = require("apollo-server-lambda").ApolloServer;
var gql = require("apollo-server-lambda").gql;
var AWS = require("aws-sdk");
var dynamo = new AWS.DynamoDB.DocumentClient();

var typeDefs = gql(
  "type Query { " +
  "  product(id: ID!): Product " +
  "  products(category: String, limit: Int): [Product] " +
  "} " +
  "type Mutation { " +
  "  createProduct(input: ProductInput!): Product " +
  "} " +
  "input ProductInput { " +
  "  name: String! " +
  "  price: Float! " +
  "  category: String! " +
  "} " +
  "type Product { " +
  "  id: ID! " +
  "  name: String! " +
  "  price: Float! " +
  "  category: String! " +
  "  createdAt: String " +
  "}"
);

var resolvers = {
  Query: {
    product: function(parent, args) {
      return dynamo.get({
        TableName: process.env.PRODUCTS_TABLE,
        Key: { id: args.id }
      }).promise().then(function(data) {
        return data.Item;
      });
    },
    products: function(parent, args) {
      var params = {
        TableName: process.env.PRODUCTS_TABLE,
        Limit: args.limit || 20
      };

      if (args.category) {
        params.IndexName = "category-index";
        params.KeyConditionExpression = "category = :cat";
        params.ExpressionAttributeValues = { ":cat": args.category };
        return dynamo.query(params).promise().then(function(data) {
          return data.Items;
        });
      }

      return dynamo.scan(params).promise().then(function(data) {
        return data.Items;
      });
    }
  },
  Mutation: {
    createProduct: function(parent, args) {
      var item = {
        id: require("crypto").randomUUID(),
        name: args.input.name,
        price: args.input.price,
        category: args.input.category,
        createdAt: new Date().toISOString()
      };

      return dynamo.put({
        TableName: process.env.PRODUCTS_TABLE,
        Item: item
      }).promise().then(function() {
        return item;
      });
    }
  }
};

var server = new ApolloServer({
  typeDefs: typeDefs,
  resolvers: resolvers,
  context: function(req) {
    return { headers: req.event.headers };
  }
});

exports.handler = server.createHandler({
  cors: {
    origin: "*",
    credentials: true
  }
});

The tradeoff with Apollo-on-Lambda is cold starts. The Apollo Server initialization adds meaningful latency to cold invocations. For GraphQL-heavy workloads, AppSync with direct DynamoDB resolvers eliminates this overhead entirely using VTL mapping templates.
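
For reference, a direct DynamoDB resolver replaces the Lambda entirely with a request mapping template. A minimal GetItem sketch in VTL, keyed the same way as the products table above:

## AppSync request mapping template: GetItem by id
{
  "version": "2017-02-28",
  "operation": "GetItem",
  "key": {
    "id": $util.dynamodb.toDynamoDBJson($ctx.args.id)
  }
}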

Pattern 6: Monolith vs. Micro-Functions

A frequent debate in serverless is whether to deploy one function per route (micro-functions) or a single function that handles all routes (monolith Lambda, sometimes called a "Lambda-lith").

Micro-functions give you:

  • Smaller deployment packages and faster cold starts
  • Per-function IAM permissions (least privilege)
  • Independent scaling per endpoint
  • Granular CloudWatch metrics

Monolith Lambda gives you:

  • Simpler local development and testing
  • Shared code without layers or packages
  • Fewer deployment artifacts to manage
  • Familiar Express.js programming model

The monolith style looks like this with Express and serverless-http:

// Monolith Lambda using express
var express = require("express");
var serverless = require("serverless-http");

var app = express();
app.use(express.json());

app.get("/api/users/:id", function(req, res) {
  // handler logic
  res.json({ userId: req.params.id });
});

app.post("/api/users", function(req, res) {
  // handler logic
  res.status(201).json({ created: true });
});

app.get("/api/products", function(req, res) {
  // handler logic
  res.json({ products: [] });
});

module.exports.handler = serverless(app);

My recommendation: start with micro-functions. The monolith approach is tempting because it feels familiar, but it defeats the purpose of serverless. You lose fine-grained scaling, your cold starts get worse as the package grows, and your IAM permissions become overly broad. The monolith pattern makes sense only during initial prototyping or when migrating an existing Express app.

Pattern 7: Fan-Out / Fan-In

The fan-out/fan-in pattern distributes work across multiple concurrent Lambda invocations, then aggregates results. This is how you achieve parallelism in serverless.

// handlers/fanout.js — orchestrator
var AWS = require("aws-sdk");
var lambda = new AWS.Lambda();

exports.handler = function(event, context, callback) {
  var segments = event.segments; // Array of work items

  var invocations = segments.map(function(segment) {
    return lambda.invoke({
      FunctionName: process.env.WORKER_FUNCTION,
      InvocationType: "RequestResponse",
      Payload: JSON.stringify({ segment: segment })
    }).promise();
  });

  Promise.all(invocations)
    .then(function(results) {
      var aggregated = results.map(function(r) {
        return JSON.parse(r.Payload);
      });

      var totalRecords = aggregated.reduce(function(sum, r) {
        return sum + r.processedCount;
      }, 0);

      callback(null, {
        statusCode: 200,
        body: JSON.stringify({
          totalSegments: segments.length,
          totalRecords: totalRecords,
          results: aggregated
        })
      });
    })
    .catch(function(err) {
      console.error("Fan-out failed:", err);
      callback(err);
    });
};

// handlers/worker.js — individual worker
exports.handler = function(event, context, callback) {
  var segment = event.segment;
  var processedCount = 0;

  // Process the segment
  segment.items.forEach(function(item) {
    // Heavy computation here
    processedCount++;
  });

  callback(null, {
    segmentId: segment.id,
    processedCount: processedCount,
    completedAt: new Date().toISOString()
  });
};

For more robust fan-out/fan-in, use AWS Step Functions with a Map state. Step Functions handles retries, error handling, and concurrency limits natively, which is hard to replicate with raw Lambda invocations.
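
A sketch of the Map state approach in Amazon States Language (the segments input path and worker function name are assumptions):

{
  "StartAt": "FanOut",
  "States": {
    "FanOut": {
      "Type": "Map",
      "ItemsPath": "$.segments",
      "MaxConcurrency": 10,
      "Iterator": {
        "StartAt": "ProcessSegment",
        "States": {
          "ProcessSegment": {
            "Type": "Task",
            "Resource": "arn:aws:states:::lambda:invoke",
            "OutputPath": "$.Payload",
            "Parameters": {
              "FunctionName": "fanout-worker",
              "Payload.$": "$"
            },
            "End": true
          }
        }
      },
      "End": true
    }
  }
}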

Pattern 8: Circuit Breaker in Serverless

Traditional circuit breakers rely on in-memory state, which does not work in Lambda because each invocation may run in a different container. You need external state — DynamoDB or SSM Parameter Store.

// lib/circuitBreaker.js
var AWS = require("aws-sdk");
var dynamo = new AWS.DynamoDB.DocumentClient();

var CIRCUIT_TABLE = process.env.CIRCUIT_TABLE;
var FAILURE_THRESHOLD = 5;
var RESET_TIMEOUT_MS = 30000; // 30 seconds

function CircuitBreaker(serviceName) {
  this.serviceName = serviceName;
}

CircuitBreaker.prototype.execute = function(fn) {
  var self = this;

  return this.getState()
    .then(function(state) {
      if (state.status === "OPEN") {
        var elapsed = Date.now() - state.lastFailureTime;
        if (elapsed < RESET_TIMEOUT_MS) {
          return Promise.reject(new Error("Circuit OPEN for " + self.serviceName));
        }
        // Half-open: allow one attempt
        console.log("Circuit half-open, attempting request");
      }

      return fn()
        .then(function(result) {
          return self.recordSuccess().then(function() {
            return result;
          });
        })
        .catch(function(err) {
          return self.recordFailure().then(function() {
            throw err;
          });
        });
    });
};

CircuitBreaker.prototype.getState = function() {
  return dynamo.get({
    TableName: CIRCUIT_TABLE,
    Key: { serviceName: this.serviceName }
  }).promise().then(function(data) {
    return data.Item || { status: "CLOSED", failureCount: 0 };
  });
};

CircuitBreaker.prototype.recordFailure = function() {
  var self = this;

  return dynamo.update({
    TableName: CIRCUIT_TABLE,
    Key: { serviceName: self.serviceName },
    UpdateExpression: "SET failureCount = if_not_exists(failureCount, :zero) + :one, lastFailureTime = :now, #s = :status",
    ExpressionAttributeNames: { "#s": "status" },
    ExpressionAttributeValues: {
      ":one": 1,
      ":zero": 0,
      ":now": Date.now(),
      ":status": "CLOSED"
    },
    ReturnValues: "ALL_NEW"
  }).promise().then(function(data) {
    if (data.Attributes.failureCount >= FAILURE_THRESHOLD) {
      return dynamo.update({
        TableName: CIRCUIT_TABLE,
        Key: { serviceName: self.serviceName },
        UpdateExpression: "SET #s = :open",
        ExpressionAttributeNames: { "#s": "status" },
        ExpressionAttributeValues: { ":open": "OPEN" }
      }).promise();
    }
  });
};

CircuitBreaker.prototype.recordSuccess = function() {
  return dynamo.put({
    TableName: CIRCUIT_TABLE,
    Item: {
      serviceName: this.serviceName,
      status: "CLOSED",
      failureCount: 0,
      lastSuccessTime: Date.now()
    }
  }).promise();
};

module.exports = CircuitBreaker;

Usage in a handler:

var CircuitBreaker = require("../lib/circuitBreaker");
var https = require("https");

var breaker = new CircuitBreaker("payment-service");

exports.handler = function(event, context, callback) {
  breaker.execute(function() {
    return callPaymentService(event.body);
  })
  .then(function(result) {
    callback(null, {
      statusCode: 200,
      body: JSON.stringify(result)
    });
  })
  .catch(function(err) {
    if (err.message.indexOf("Circuit OPEN") === 0) {
      callback(null, {
        statusCode: 503,
        body: JSON.stringify({ error: "Payment service temporarily unavailable" })
      });
    } else {
      callback(null, {
        statusCode: 500,
        body: JSON.stringify({ error: "Payment processing failed" })
      });
    }
  });
};

Pattern 9: Strangler Fig Migration

The strangler fig pattern lets you migrate from a monolithic application to serverless incrementally, route by route, without a big-bang rewrite.

The approach is straightforward:

  1. Place API Gateway in front of your existing application
  2. Route all traffic through API Gateway to your monolith (using HTTP proxy integration)
  3. Pick one endpoint, rewrite it as a Lambda function
  4. Update API Gateway to route that endpoint to Lambda instead of the monolith
  5. Repeat until the monolith is empty

A SAM template sketch of the routing:

# SAM template showing strangler fig routing
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  ApiGateway:
    Type: AWS::Serverless::Api
    Properties:
      StageName: prod
      DefinitionBody:
        openapi: "3.0.1"
        paths:
          # Migrated endpoint - goes to Lambda
          /api/users/{id}:
            get:
              x-amazon-apigateway-integration:
                type: aws_proxy
                uri: !Sub "arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${GetUserFunction.Arn}/invocations"
                httpMethod: POST

          # Not yet migrated - proxied to monolith
          /api/{proxy+}:
            x-amazon-apigateway-any-method:
              x-amazon-apigateway-integration:
                type: http_proxy
                uri: "https://legacy-api.example.com/api/{proxy}"
                httpMethod: ANY
                requestParameters:
                  integration.request.path.proxy: method.request.path.proxy

  GetUserFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: handlers/users.getUser
      Runtime: nodejs18.x
      MemorySize: 256
      Timeout: 10
      Environment:
        Variables:
          USERS_TABLE: !Ref UsersTable
      Policies:
        - DynamoDBReadPolicy:
            TableName: !Ref UsersTable

The key to a successful strangler fig migration is monitoring. Before migrating each endpoint, establish baseline metrics in your monolith (latency, error rate, throughput). After migration, compare. If the Lambda version is worse, you can instantly roll back by changing the API Gateway route.

Complete Working Example: Multi-Pattern SAM Application

Here is a complete SAM template that combines API endpoints, event-driven processing, and scheduled tasks into a single deployable application:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Multi-pattern serverless application

Globals:
  Function:
    Runtime: nodejs18.x
    MemorySize: 256
    Timeout: 30
    Environment:
      Variables:
        ORDERS_TABLE: !Ref OrdersTable
        EVENT_BUS_NAME: !Ref OrderEventBus

Resources:
  # ========== API Pattern ==========
  CreateOrderFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: handlers/api.createOrder
      Events:
        CreateOrder:
          Type: Api
          Properties:
            Path: /orders
            Method: post
      Policies:
        - DynamoDBCrudPolicy:
            TableName: !Ref OrdersTable
        - EventBridgePutEventsPolicy:
            EventBusName: !Ref OrderEventBus

  GetOrderFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: handlers/api.getOrder
      Events:
        GetOrder:
          Type: Api
          Properties:
            Path: /orders/{id}
            Method: get
      Policies:
        - DynamoDBReadPolicy:
            TableName: !Ref OrdersTable

  # ========== Event Processing Pattern ==========
  OrderEventBus:
    Type: AWS::Events::EventBus
    Properties:
      Name: order-events

  OrderProcessorFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: handlers/eventProcessor.handler
      Events:
        OrderCreated:
          Type: EventBridgeRule
          Properties:
            EventBusName: !Ref OrderEventBus
            Pattern:
              source:
                - "orders.api"
              detail-type:
                - "OrderCreated"
      Policies:
        - DynamoDBCrudPolicy:
            TableName: !Ref OrdersTable
        - SQSSendMessagePolicy:
            QueueName: !GetAtt NotificationQueue.QueueName

  # ========== Queue Processing Pattern ==========
  NotificationQueue:
    Type: AWS::SQS::Queue
    Properties:
      VisibilityTimeout: 60
      RedrivePolicy:
        deadLetterTargetArn: !GetAtt NotificationDLQ.Arn
        maxReceiveCount: 3

  NotificationDLQ:
    Type: AWS::SQS::Queue
    Properties:
      MessageRetentionPeriod: 1209600  # 14 days

  NotificationFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: handlers/notification.handler
      Events:
        SQSEvent:
          Type: SQS
          Properties:
            Queue: !GetAtt NotificationQueue.Arn
            BatchSize: 10
      Policies:
        - SESCrudPolicy:
            IdentityName: !Ref SenderEmail

  # ========== Scheduled Task Pattern ==========
  DailyReportFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: handlers/dailyReport.handler
      Timeout: 300
      Events:
        DailySchedule:
          Type: Schedule
          Properties:
            Schedule: cron(0 8 * * ? *)
            Description: Daily order report at 8am UTC
      Policies:
        - DynamoDBReadPolicy:
            TableName: !Ref OrdersTable
        - SESCrudPolicy:
            IdentityName: !Ref SenderEmail

  # ========== Data Layer ==========
  OrdersTable:
    Type: AWS::DynamoDB::Table
    Properties:
      AttributeDefinitions:
        - AttributeName: orderId
          AttributeType: S
        - AttributeName: customerId
          AttributeType: S
        - AttributeName: createdAt
          AttributeType: S
      KeySchema:
        - AttributeName: orderId
          KeyType: HASH
      GlobalSecondaryIndexes:
        - IndexName: customer-index
          KeySchema:
            - AttributeName: customerId
              KeyType: HASH
            - AttributeName: createdAt
              KeyType: RANGE
          Projection:
            ProjectionType: ALL
      BillingMode: PAY_PER_REQUEST

Parameters:
  SenderEmail:
    Type: String
    Default: reports@example.com

Outputs:
  ApiUrl:
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod"
  OrderEventBusName:
    Value: !Ref OrderEventBus

The corresponding API handler:

// handlers/api.js
var AWS = require("aws-sdk");
var dynamo = new AWS.DynamoDB.DocumentClient();
var eventBridge = new AWS.EventBridge();

exports.createOrder = function(event, context, callback) {
  var body = JSON.parse(event.body);
  var orderId = "ORD-" + Date.now() + "-" + Math.random().toString(36).substr(2, 9);

  var order = {
    orderId: orderId,
    customerId: body.customerId,
    items: body.items,
    total: body.items.reduce(function(sum, item) {
      return sum + (item.price * item.quantity);
    }, 0),
    status: "created",
    createdAt: new Date().toISOString()
  };

  dynamo.put({
    TableName: process.env.ORDERS_TABLE,
    Item: order
  }).promise()
    .then(function() {
      // Emit event for downstream processing
      return eventBridge.putEvents({
        Entries: [{
          Source: "orders.api",
          DetailType: "OrderCreated",
          Detail: JSON.stringify(order),
          EventBusName: process.env.EVENT_BUS_NAME
        }]
      }).promise();
    })
    .then(function() {
      callback(null, {
        statusCode: 201,
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(order)
      });
    })
    .catch(function(err) {
      console.error("Create order failed:", err);
      callback(null, {
        statusCode: 500,
        body: JSON.stringify({ error: "Failed to create order" })
      });
    });
};

exports.getOrder = function(event, context, callback) {
  var orderId = event.pathParameters.id;

  dynamo.get({
    TableName: process.env.ORDERS_TABLE,
    Key: { orderId: orderId }
  }).promise()
    .then(function(data) {
      if (!data.Item) {
        return callback(null, {
          statusCode: 404,
          body: JSON.stringify({ error: "Order not found" })
        });
      }

      callback(null, {
        statusCode: 200,
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(data.Item)
      });
    })
    .catch(function(err) {
      console.error("Get order failed:", err);
      callback(null, {
        statusCode: 500,
        body: JSON.stringify({ error: "Failed to retrieve order" })
      });
    });
};

Deploy the entire stack with:

sam build
sam deploy --guided --stack-name multi-pattern-app

Common Issues and Troubleshooting

1. Cold Start Timeout

Task timed out after 3.00 seconds

This happens when your function's initialization (require statements, SDK setup) exceeds the configured timeout. The fix is to increase the Timeout in your SAM template and move heavy initialization outside the handler so it only runs once per container lifecycle. Also increase MemorySize — Lambda allocates CPU proportionally to memory, so a 128MB function is significantly slower at initialization than a 512MB one.
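
The shape of the fix:

// Module scope: runs once per container and is reused across warm invocations
var AWS = require("aws-sdk");
var dynamo = new AWS.DynamoDB.DocumentClient();

exports.handler = function(event, context, callback) {
  // Handler scope: runs on every invocation, so keep it lean
  callback(null, { statusCode: 200, body: "{}" });
};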

2. DynamoDB Throughput Exceeded

ProvisionedThroughputExceededException: The level of configured provisioned throughput for the table was exceeded.

If you are using provisioned capacity mode, a traffic spike will cause throttling. Switch to BillingMode: PAY_PER_REQUEST (on-demand) in your table definition. For batch writes, implement exponential backoff on UnprocessedItems returned by batchWrite.
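
A minimal retry sketch for UnprocessedItems (the attempt cap and base delay are illustrative):

// Retry UnprocessedItems from batchWrite with exponential backoff
function batchWriteWithRetry(params, attempt) {
  attempt = attempt || 0;
  return dynamo.batchWrite(params).promise().then(function(data) {
    var unprocessed = data.UnprocessedItems || {};
    if (Object.keys(unprocessed).length === 0 || attempt >= 5) {
      return data;
    }
    var delayMs = Math.pow(2, attempt) * 100; // 100ms, 200ms, 400ms, ...
    return new Promise(function(resolve) { setTimeout(resolve, delayMs); })
      .then(function() {
        return batchWriteWithRetry({ RequestItems: unprocessed }, attempt + 1);
      });
  });
}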

3. Lambda Payload Size Limit

RequestEntityTooLargeException: Request must be smaller than 6291456 bytes for the InvokeFunction operation

Lambda synchronous invocations have a 6MB request/response payload limit. For larger payloads, write data to S3 and pass the S3 key in the event instead. For API Gateway, the limit is 10MB for the payload but the Lambda integration still enforces 6MB. Use S3 presigned URLs for file uploads instead of proxying through Lambda.
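
A sketch of the presigned-URL approach (the bucket environment variable and key scheme are assumptions):

// Return a presigned PUT URL so the client uploads directly to S3
var AWS = require("aws-sdk");
var s3 = new AWS.S3();

exports.getUploadUrl = function(event, context, callback) {
  var key = "uploads/" + Date.now() + ".bin"; // hypothetical key scheme
  var url = s3.getSignedUrl("putObject", {
    Bucket: process.env.UPLOAD_BUCKET, // assumed environment variable
    Key: key,
    Expires: 300 // seconds the URL stays valid
  });

  callback(null, {
    statusCode: 200,
    body: JSON.stringify({ uploadUrl: url, key: key })
  });
};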

4. SQS Message Reprocessing Loop

Message moved to DLQ after 3 receive attempts. All attempts failed with: Cannot read property 'orderId' of undefined

This happens when your SQS handler throws an error on a malformed message. The message returns to the queue, gets retried, fails again, and eventually lands in the dead letter queue — but not before it has consumed invocations and logged errors. Always validate the message shape at the top of your handler and discard invalid messages rather than throwing:

exports.handler = function(event, context, callback) {
  var failures = [];

  event.Records.forEach(function(record) {
    try {
      var body = JSON.parse(record.body);
      if (!body.orderId) {
        console.warn("Skipping malformed message:", record.messageId);
        return; // Skip, do not retry
      }
      processMessage(body); // assumed synchronous; for async work, collect promises and wait on them before returning
    } catch (err) {
      console.error("Failed to process:", record.messageId, err);
      failures.push({ itemIdentifier: record.messageId });
    }
  });

  callback(null, { batchItemFailures: failures });
};

5. EventBridge Rule Not Triggering

No CloudWatch Logs found for function. EventBridge rule exists but target is not being invoked.

This is almost always a permissions issue. The EventBridge rule needs permission to invoke your Lambda function. In SAM, the EventBridgeRule event type handles this automatically, but if you are defining resources manually in CloudFormation, you need an explicit AWS::Lambda::Permission resource granting events.amazonaws.com invoke access.
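
A sketch of the explicit grant in plain CloudFormation (the function and rule names are hypothetical):

# Allows the EventBridge rule to invoke the target function
EventInvokePermission:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: !Ref OrderProcessorFunction  # hypothetical function
    Action: lambda:InvokeFunction
    Principal: events.amazonaws.com
    SourceArn: !GetAtt OrderCreatedRule.Arn    # hypothetical rule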

Best Practices

  • Initialize SDK clients outside the handler. Place var dynamo = new AWS.DynamoDB.DocumentClient() at module scope so it is reused across invocations within the same container. This dramatically reduces warm invocation latency.

  • Use environment variables for all configuration. Table names, queue URLs, topic ARNs, and feature flags should all come from environment variables set in your SAM template. Never hardcode resource identifiers.

  • Set up dead letter queues on every asynchronous invocation. SQS consumers, SNS subscribers, and async Lambda invocations should all have DLQs configured. Without them, failed messages disappear silently.

  • Apply least-privilege IAM policies. SAM policy templates like DynamoDBReadPolicy and SQSSendMessagePolicy are scoped to specific resources. Use them instead of broad wildcard policies. A Lambda that only reads from DynamoDB should not have write permissions.

  • Structure your project for independent deployment. Each function should have its own directory with its dependencies. Use Lambda Layers for shared code (utilities, data access modules) rather than bundling everything together.

  • Monitor with structured logging and custom metrics. Use console.log(JSON.stringify({ level: "info", orderId: id, action: "created" })) for structured logs that are queryable in CloudWatch Logs Insights. Emit custom CloudWatch metrics for business-level monitoring (orders processed, revenue, error rates).

  • Set appropriate memory and timeout values. Do not leave the defaults. Profile your functions with AWS Lambda Power Tuning to find the optimal memory/cost balance. A function that completes in 200ms at 1024MB might take 800ms at 256MB — the higher memory option can actually be cheaper.

  • Use provisioned concurrency for latency-sensitive endpoints. If cold starts are unacceptable for a specific API endpoint, configure provisioned concurrency to keep containers warm. This adds cost but guarantees consistent latency.

  • Implement idempotency in every handler. Lambda can invoke your function more than once for the same event. Use DynamoDB conditional writes or a dedicated idempotency table to ensure duplicate invocations do not create duplicate side effects, as in the sketch below.
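
A minimal conditional-write sketch (table and key names follow the earlier order examples):

// Idempotent insert: the condition rejects a second write with the same orderId
var params = {
  TableName: process.env.ORDERS_TABLE,
  Item: order,
  ConditionExpression: "attribute_not_exists(orderId)"
};

dynamo.put(params).promise().catch(function(err) {
  if (err.code === "ConditionalCheckFailedException") {
    return; // duplicate delivery, already processed: treat as success
  }
  throw err;
});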
