AWS Lambda Function Patterns for Node.js

Production patterns for AWS Lambda functions in Node.js including cold start optimization, database connections, and API Gateway integration

AWS Lambda is the backbone of serverless computing on AWS, and Node.js remains one of the most popular runtimes for it. This article covers production-tested patterns for building Lambda functions in Node.js — from handler design and cold start mitigation to database connections, error handling, and deployment strategies. If you are building serverless APIs or event-driven systems on AWS, these patterns will save you from the mistakes I have made over years of running Lambda in production.

Prerequisites

  • Node.js v18 or v20 installed locally
  • An AWS account with IAM permissions for Lambda, API Gateway, DynamoDB, and CloudWatch
  • AWS CLI configured with credentials (aws configure)
  • Basic familiarity with the AWS Console and serverless concepts
  • The aws-sdk package (v2) or the modular @aws-sdk/* client packages (v3) installed in your project

Lambda Handler Patterns: Async vs Callback

Lambda supports two handler signatures in Node.js: the older callback style and the modern async/await style. You should understand both because you will encounter legacy code using callbacks, but all new code should use async handlers.

Callback Handler (Legacy)

exports.handler = function(event, context, callback) {
  var result = {
    statusCode: 200,
    body: JSON.stringify({ message: "Hello from Lambda" })
  };
  callback(null, result);
};

The callback takes two arguments: an error (or null) and the response. If you pass a non-null first argument, Lambda treats the invocation as failed.
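
For completeness, here is a minimal sketch of the failure path in the callback style; the userId field is a hypothetical input used only for illustration.

exports.handler = function(event, context, callback) {
  if (!event.userId) {
    // Non-null first argument: Lambda marks the invocation as failed
    callback(new Error("userId is required"));
    return;
  }
  callback(null, {
    statusCode: 200,
    body: JSON.stringify({ userId: event.userId })
  });
};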

Async Handler (Recommended)

exports.handler = async function(event, context) {
  var result = {
    statusCode: 200,
    body: JSON.stringify({ message: "Hello from Lambda" })
  };
  return result;
};

With async handlers, returning a value is equivalent to calling callback(null, value). Throwing an error is equivalent to calling callback(error). The async pattern is cleaner, composes better with await, and avoids the subtle bugs that come from accidentally calling callback twice.
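
As a small sketch of the same failure path in the async style, a thrown error (or rejected promise) propagates out of the handler and fails the invocation; lookupUser and the userId field are hypothetical placeholders.

exports.handler = async function(event, context) {
  // A rejection here propagates out of the handler, equivalent to callback(err)
  var user = await lookupUser(event.userId);
  if (!user) {
    throw new Error("User not found: " + event.userId);
  }
  return { statusCode: 200, body: JSON.stringify(user) };
};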

The Context Object

The context object gives you runtime information about the current invocation:

exports.handler = async function(event, context) {
  console.log("Request ID:", context.awsRequestId);
  console.log("Function name:", context.functionName);
  console.log("Memory limit:", context.memoryLimitInMB, "MB");
  console.log("Time remaining:", context.getRemainingTimeInMillis(), "ms");

  // Prevent Lambda from waiting for empty event loop
  context.callbackWaitsForEmptyEventLoop = false;

  return { statusCode: 200, body: "OK" };
};

Setting callbackWaitsForEmptyEventLoop = false is critical when you have persistent connections (like database pools). Without it, Lambda waits for the event loop to drain before completing the invocation, so an open connection can keep the function running until it times out, which wastes execution time and money.

Cold Start Optimization

Cold starts are the single biggest performance concern with Lambda. A cold start happens when AWS spins up a new execution environment for your function — downloading your code, initializing the runtime, and running your module-level code. Subsequent invocations reuse the warm environment.

Measure Your Cold Starts

Before optimizing, measure. Add structured logging to distinguish cold starts from warm invocations:

var isColdStart = true;

exports.handler = async function(event, context) {
  var startTime = Date.now();

  if (isColdStart) {
    console.log(JSON.stringify({
      type: "COLD_START",
      requestId: context.awsRequestId,
      functionName: context.functionName
    }));
    isColdStart = false;
  }

  // Your handler logic here
  var result = doWork(event);

  console.log(JSON.stringify({
    type: "INVOCATION",
    requestId: context.awsRequestId,
    duration: Date.now() - startTime
  }));

  return result;
};

The isColdStart variable is set at the module level. It is true on the first invocation and false for all subsequent invocations in the same execution environment.

Strategies to Reduce Cold Starts

1. Keep your deployment package small. Every megabyte added to your zip increases cold start time. Use only the dependencies you need. The AWS SDK v3 is modular — import only the clients you use instead of the entire SDK.

// Bad - pulls in the entire SDK
var AWS = require("aws-sdk");
var dynamodb = new AWS.DynamoDB.DocumentClient();

// Good - only imports DynamoDB client
var { DynamoDBClient } = require("@aws-sdk/client-dynamodb");
var { DynamoDBDocumentClient, GetCommand } = require("@aws-sdk/lib-dynamodb");

var client = new DynamoDBClient({});
var docClient = DynamoDBDocumentClient.from(client);

2. Initialize outside the handler. Module-level code runs once during cold start and is reused across invocations. Put your client initialization there.

var { DynamoDBClient } = require("@aws-sdk/client-dynamodb");
var { DynamoDBDocumentClient } = require("@aws-sdk/lib-dynamodb");

// This runs once during cold start, then is reused
var client = new DynamoDBClient({ region: "us-east-1" });
var docClient = DynamoDBDocumentClient.from(client);

exports.handler = async function(event, context) {
  // docClient is already initialized — no overhead here
  // ...
};

3. Use Provisioned Concurrency for latency-sensitive endpoints. This keeps a pool of warm environments ready. It costs more, but eliminates cold starts entirely for the provisioned count.

aws lambda put-provisioned-concurrency-config \
  --function-name my-api-handler \
  --qualifier prod \
  --provisioned-concurrent-executions 5

4. Use the Node.js 20.x runtime. AWS optimizes newer runtimes more aggressively. Node.js 20.x has measurably faster cold starts than 18.x.
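
If an existing function is still on an older runtime, switching is a one-line configuration change (the function name below is a placeholder):

aws lambda update-function-configuration \
  --function-name my-api-handler \
  --runtime nodejs20.x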

Packaging and Deployment

Zip Deployment

The simplest deployment method is a zip file. Structure your project cleanly:

my-lambda/
  index.js
  lib/
    database.js
    validation.js
  node_modules/
  package.json

Deploy with the AWS CLI:

# Install production dependencies only
npm install --production

# Create deployment package
zip -r function.zip index.js lib/ node_modules/ package.json

# Deploy
aws lambda update-function-code \
  --function-name my-api-handler \
  --zip-file fileb://function.zip

Using SAM (Serverless Application Model)

For more complex projects, AWS SAM gives you infrastructure-as-code with a local testing story:

# template.yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Globals:
  Function:
    Timeout: 30
    Runtime: nodejs20.x
    MemorySize: 256

Resources:
  ApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      CodeUri: src/
      Environment:
        Variables:
          TABLE_NAME: !Ref UsersTable
      Policies:
        - DynamoDBCrudPolicy:
            TableName: !Ref UsersTable
      Events:
        GetUser:
          Type: Api
          Properties:
            Path: /users/{id}
            Method: get

  UsersTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: users
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
      BillingMode: PAY_PER_REQUEST

Deploy with:

sam build
sam deploy --guided

Environment Variables and Configuration

Never hardcode configuration values. Use environment variables for table names, API keys, feature flags, and stage-specific settings.

var TABLE_NAME = process.env.TABLE_NAME || "users-dev";
var STAGE = process.env.STAGE || "dev";
var MAX_RETRIES = parseInt(process.env.MAX_RETRIES || "3", 10);
var ENABLE_CACHING = process.env.ENABLE_CACHING === "true";

exports.handler = async function(event, context) {
  console.log("Running in stage:", STAGE);
  console.log("Table:", TABLE_NAME);
  // ...
};

For sensitive values like API keys and database credentials, use AWS Systems Manager Parameter Store or Secrets Manager instead of plain environment variables:

var { SSMClient, GetParameterCommand } = require("@aws-sdk/client-ssm");

var ssm = new SSMClient({ region: "us-east-1" });
var secretCache = {};

async function getSecret(name) {
  if (secretCache[name]) return secretCache[name];

  var command = new GetParameterCommand({
    Name: name,
    WithDecryption: true
  });
  var response = await ssm.send(command);
  secretCache[name] = response.Parameter.Value;
  return secretCache[name];
}

exports.handler = async function(event, context) {
  var apiKey = await getSecret("/myapp/prod/api-key");
  // Use apiKey...
};

Notice we cache each secret in a module-level map keyed by parameter name. This means we only call Parameter Store once per secret per cold start, not once per invocation.

Connecting to Databases from Lambda

Database connections in Lambda are fundamentally different from traditional servers. You do not have a long-lived process, so naive connection management leads to connection exhaustion.

DynamoDB (Native Integration)

DynamoDB is the natural fit for Lambda. It is HTTP-based, so there are no persistent connections to manage:

var { DynamoDBClient } = require("@aws-sdk/client-dynamodb");
var { DynamoDBDocumentClient, GetCommand, PutCommand, QueryCommand } = require("@aws-sdk/lib-dynamodb");

var client = new DynamoDBClient({ region: "us-east-1" });
var docClient = DynamoDBDocumentClient.from(client, {
  marshallOptions: { removeUndefinedValues: true }
});

async function getUser(userId) {
  var command = new GetCommand({
    TableName: process.env.TABLE_NAME,
    Key: { id: userId }
  });
  var response = await docClient.send(command);
  return response.Item || null;
}

async function queryUsersByEmail(email) {
  var command = new QueryCommand({
    TableName: process.env.TABLE_NAME,
    IndexName: "email-index",
    KeyConditionExpression: "email = :email",
    ExpressionAttributeValues: { ":email": email }
  });
  var response = await docClient.send(command);
  return response.Items;
}

RDS / PostgreSQL with Connection Pooling

If you must use a relational database, use RDS Proxy to manage connection pooling. Without it, each Lambda instance opens its own connection, and at high concurrency you will exhaust your database's connection limit.

var { Client } = require("pg");

var dbClient = null;

async function getDbClient() {
  if (dbClient) return dbClient;

  dbClient = new Client({
    host: process.env.DB_HOST,  // RDS Proxy endpoint
    port: 5432,
    database: process.env.DB_NAME,
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    ssl: { rejectUnauthorized: false }
  });

  await dbClient.connect();
  return dbClient;
}

exports.handler = async function(event, context) {
  context.callbackWaitsForEmptyEventLoop = false;

  var client = await getDbClient();
  var result = await client.query("SELECT * FROM users WHERE id = $1", [event.pathParameters.id]);

  return {
    statusCode: 200,
    body: JSON.stringify(result.rows[0])
  };
};

The key points: reuse the connection across invocations with the module-level variable, set callbackWaitsForEmptyEventLoop = false, and always point to RDS Proxy rather than the database directly.

Error Handling and Retries

Lambda has built-in retry behavior depending on the invocation type. Synchronous invocations (API Gateway) do not retry — the error goes straight back to the caller. Asynchronous invocations (S3 events, SNS) retry twice by default. Event source mappings (SQS, Kinesis) keep retrying until the record expires or is routed to a dead-letter queue.
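
For asynchronous invocations, the retry count and maximum event age are configurable per function. As a sketch, this disables retries so failures surface immediately (the function name is a placeholder):

aws lambda put-function-event-invoke-config \
  --function-name my-api-handler \
  --maximum-retry-attempts 0 \
  --maximum-event-age-in-seconds 3600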

Structured Error Responses for API Gateway

function buildResponse(statusCode, body) {
  return {
    statusCode: statusCode,
    headers: {
      "Content-Type": "application/json",
      "Access-Control-Allow-Origin": "*"
    },
    body: JSON.stringify(body)
  };
}

exports.handler = async function(event, context) {
  try {
    var userId = event.pathParameters && event.pathParameters.id;

    if (!userId) {
      return buildResponse(400, { error: "Missing user ID" });
    }

    var user = await getUser(userId);

    if (!user) {
      return buildResponse(404, { error: "User not found" });
    }

    return buildResponse(200, { data: user });

  } catch (err) {
    console.error(JSON.stringify({
      type: "ERROR",
      requestId: context.awsRequestId,
      message: err.message,
      stack: err.stack
    }));

    if (err.name === "ConditionalCheckFailedException") {
      return buildResponse(409, { error: "Conflict — item was modified" });
    }

    if (err.name === "ProvisionedThroughputExceededException") {
      return buildResponse(429, { error: "Too many requests" });
    }

    return buildResponse(500, { error: "Internal server error" });
  }
};

Retry with Exponential Backoff

For calls to external services, implement your own retry logic:

async function retryWithBackoff(fn, maxRetries, baseDelay) {
  var retries = 0;

  while (true) {
    try {
      return await fn();
    } catch (err) {
      retries++;
      if (retries > maxRetries) throw err;

      var delay = baseDelay * Math.pow(2, retries - 1);
      var jitter = Math.random() * delay * 0.1;
      console.log("Retry " + retries + "/" + maxRetries + " after " + Math.round(delay + jitter) + "ms");
      await new Promise(function(resolve) { setTimeout(resolve, delay + jitter); });
    }
  }
}

// Usage
var result = await retryWithBackoff(function() {
  return docClient.send(new GetCommand({ TableName: "users", Key: { id: "123" } }));
}, 3, 100);

API Gateway Integration

Lambda is most commonly invoked behind API Gateway. The event object from API Gateway has a specific shape you need to understand:

exports.handler = async function(event, context) {
  // Request details
  var method = event.httpMethod;               // "GET", "POST", etc.
  var path = event.path;                       // "/users/123"
  var pathParams = event.pathParameters;       // { id: "123" }
  var queryParams = event.queryStringParameters; // { page: "1" }
  var headers = event.headers;                 // { "Content-Type": "application/json" }
  var body = event.body;                       // Raw body string

  // Parse JSON body for POST/PUT
  var parsedBody = null;
  if (body) {
    try {
      parsedBody = JSON.parse(body);
    } catch (e) {
      return buildResponse(400, { error: "Invalid JSON in request body" });
    }
  }

  // Route handling
  if (method === "GET" && pathParams && pathParams.id) {
    return handleGetUser(pathParams.id);
  }

  if (method === "POST") {
    return handleCreateUser(parsedBody);
  }

  return buildResponse(405, { error: "Method not allowed" });
};

CORS Support

If your Lambda is called from a browser, you need CORS headers on every response, including errors:

var CORS_HEADERS = {
  "Access-Control-Allow-Origin": "*",
  "Access-Control-Allow-Headers": "Content-Type,Authorization",
  "Access-Control-Allow-Methods": "GET,POST,PUT,DELETE,OPTIONS"
};

function buildResponse(statusCode, body) {
  return {
    statusCode: statusCode,
    headers: Object.assign({}, CORS_HEADERS, { "Content-Type": "application/json" }),
    body: JSON.stringify(body)
  };
}

// Handle preflight
exports.handler = async function(event, context) {
  if (event.httpMethod === "OPTIONS") {
    return { statusCode: 200, headers: CORS_HEADERS, body: "" };
  }

  // Normal handler logic...
};

Lambda Layers for Shared Code

Lambda Layers let you share code and dependencies across multiple functions. This is useful for utility libraries, SDK clients, or large dependencies that would bloat every function's deployment package.

Creating a Layer

mkdir -p my-layer/nodejs
cd my-layer/nodejs

# Initialize and install shared deps
npm init -y
npm install @aws-sdk/client-dynamodb @aws-sdk/lib-dynamodb

# Add shared utility code
mkdir lib

Create my-layer/nodejs/lib/response.js:

var CORS_HEADERS = {
  "Access-Control-Allow-Origin": "*",
  "Access-Control-Allow-Headers": "Content-Type,Authorization",
  "Access-Control-Allow-Methods": "GET,POST,PUT,DELETE,OPTIONS"
};

function buildResponse(statusCode, body) {
  return {
    statusCode: statusCode,
    headers: Object.assign({}, CORS_HEADERS, { "Content-Type": "application/json" }),
    body: JSON.stringify(body)
  };
}

function errorResponse(statusCode, message) {
  return buildResponse(statusCode, { error: message });
}

module.exports = { buildResponse: buildResponse, errorResponse: errorResponse };

Publish the layer:

cd my-layer
zip -r layer.zip nodejs/

aws lambda publish-layer-version \
  --layer-name shared-utils \
  --zip-file fileb://layer.zip \
  --compatible-runtimes nodejs20.x

Using a Layer in Your Function

// Layer contents are unpacked under /opt, so nodejs/lib/response.js is available at /opt/nodejs/lib/response.js
var { buildResponse, errorResponse } = require("/opt/nodejs/lib/response");

exports.handler = async function(event, context) {
  try {
    return buildResponse(200, { message: "Using shared layer code" });
  } catch (err) {
    return errorResponse(500, "Something went wrong");
  }
};

Attach the layer to your function:

aws lambda update-function-configuration \
  --function-name my-api-handler \
  --layers arn:aws:lambda:us-east-1:123456789:layer:shared-utils:1

Testing Lambda Functions Locally

Unit Testing with a Test Harness

You do not need a framework to test Lambda functions. Structure your code so the business logic is separate from the handler:

// lib/userService.js
var { DynamoDBDocumentClient, GetCommand } = require("@aws-sdk/lib-dynamodb");

function UserService(docClient, tableName) {
  this.docClient = docClient;
  this.tableName = tableName;
}

UserService.prototype.getUser = async function(userId) {
  var command = new GetCommand({
    TableName: this.tableName,
    Key: { id: userId }
  });
  var response = await this.docClient.send(command);
  return response.Item || null;
};

module.exports = UserService;

// test/userService.test.js
var assert = require("assert");
var UserService = require("../lib/userService");

// Mock the DynamoDB client
var mockDocClient = {
  send: async function(command) {
    if (command.input.Key.id === "user-123") {
      return {
        Item: { id: "user-123", name: "Shane Larson", email: "[email protected]" }
      };
    }
    return { Item: undefined };
  }
};

async function testGetUser() {
  var service = new UserService(mockDocClient, "users");

  var user = await service.getUser("user-123");
  assert.strictEqual(user.name, "Shane Larson");
  console.log("PASS: getUser returns correct user");

  var missing = await service.getUser("nonexistent");
  assert.strictEqual(missing, null);
  console.log("PASS: getUser returns null for missing user");
}

async function testHandler() {
  // Test the full handler with a mock event
  var event = {
    httpMethod: "GET",
    pathParameters: { id: "user-123" },
    queryStringParameters: null,
    headers: {},
    body: null
  };

  var context = {
    awsRequestId: "test-request-id",
    functionName: "test-function",
    callbackWaitsForEmptyEventLoop: true,
    getRemainingTimeInMillis: function() { return 30000; }
  };

  // Require and invoke handler
  var handler = require("../index").handler;
  var response = await handler(event, context);

  assert.strictEqual(response.statusCode, 200);
  var body = JSON.parse(response.body);
  assert.strictEqual(body.data.name, "Shane Larson");
  console.log("PASS: handler returns 200 with user data");
}

testGetUser()
  .then(function() { return testHandler(); })
  .then(function() { console.log("\nAll tests passed"); })
  .catch(function(err) { console.error("FAIL:", err.message); process.exit(1); });

Run tests with:

node test/userService.test.js

Output:

PASS: getUser returns correct user
PASS: getUser returns null for missing user
PASS: handler returns 200 with user data

All tests passed

Local Invocation with SAM

SAM CLI lets you invoke functions locally with Docker:

# Invoke with a test event
sam local invoke ApiFunction -e events/get-user.json

# Start a local API Gateway
sam local start-api

Create events/get-user.json:

{
  "httpMethod": "GET",
  "path": "/users/user-123",
  "pathParameters": { "id": "user-123" },
  "queryStringParameters": null,
  "headers": { "Content-Type": "application/json" },
  "body": null,
  "isBase64Encoded": false
}

Monitoring with CloudWatch

Every console.log in your Lambda function goes to CloudWatch Logs automatically. Use structured JSON logging so you can query logs with CloudWatch Logs Insights.

function log(level, message, data) {
  var entry = {
    level: level,
    message: message,
    timestamp: new Date().toISOString()
  };
  if (data) entry.data = data;
  console.log(JSON.stringify(entry));
}

exports.handler = async function(event, context) {
  log("INFO", "Invocation started", {
    requestId: context.awsRequestId,
    path: event.path,
    method: event.httpMethod
  });

  var startTime = Date.now();

  try {
    var result = await processRequest(event);
    log("INFO", "Invocation completed", {
      requestId: context.awsRequestId,
      duration: Date.now() - startTime,
      statusCode: result.statusCode
    });
    return result;
  } catch (err) {
    log("ERROR", "Invocation failed", {
      requestId: context.awsRequestId,
      duration: Date.now() - startTime,
      error: err.message,
      stack: err.stack
    });
    throw err;
  }
};

Query these logs in CloudWatch Logs Insights:

fields @timestamp, data.requestId, data.duration, data.statusCode
| filter level = "INFO" and message = "Invocation completed"
| sort @timestamp desc
| limit 50

To find slow invocations:

fields @timestamp, data.requestId, data.duration
| filter data.duration > 1000
| sort data.duration desc
| limit 20

Custom Metrics

Push custom metrics to CloudWatch for dashboards and alarms:

var { CloudWatchClient, PutMetricDataCommand } = require("@aws-sdk/client-cloudwatch");

var cloudwatch = new CloudWatchClient({ region: "us-east-1" });

async function publishMetric(metricName, value, unit) {
  var command = new PutMetricDataCommand({
    Namespace: "MyApp/API",
    MetricData: [
      {
        MetricName: metricName,
        Value: value,
        Unit: unit,
        Timestamp: new Date(),
        Dimensions: [
          { Name: "FunctionName", Value: process.env.AWS_LAMBDA_FUNCTION_NAME },
          { Name: "Stage", Value: process.env.STAGE || "dev" }
        ]
      }
    ]
  });
  await cloudwatch.send(command);
}

// In your handler
await publishMetric("RequestLatency", duration, "Milliseconds");
await publishMetric("UserNotFound", 1, "Count");

Complete Working Example

Here is a production-ready Lambda function that handles CRUD operations for a users API backed by DynamoDB:

// index.js
var { DynamoDBClient } = require("@aws-sdk/client-dynamodb");
var {
  DynamoDBDocumentClient,
  GetCommand,
  PutCommand,
  DeleteCommand,
  QueryCommand
} = require("@aws-sdk/lib-dynamodb");
var crypto = require("crypto");

// ── Configuration ──────────────────────────────────────────
var TABLE_NAME = process.env.TABLE_NAME || "users";
var STAGE = process.env.STAGE || "dev";

// ── DynamoDB Client (initialized once, reused across invocations) ──
var client = new DynamoDBClient({ region: process.env.AWS_REGION || "us-east-1" });
var docClient = DynamoDBDocumentClient.from(client, {
  marshallOptions: { removeUndefinedValues: true }
});

// ── Cold Start Tracking ────────────────────────────────────
var isColdStart = true;

// ── Response Helpers ───────────────────────────────────────
var CORS_HEADERS = {
  "Access-Control-Allow-Origin": "*",
  "Access-Control-Allow-Headers": "Content-Type,Authorization",
  "Access-Control-Allow-Methods": "GET,POST,PUT,DELETE,OPTIONS"
};

function respond(statusCode, body) {
  return {
    statusCode: statusCode,
    headers: Object.assign({}, CORS_HEADERS, { "Content-Type": "application/json" }),
    body: JSON.stringify(body)
  };
}

// ── Validation ─────────────────────────────────────────────
function validateUser(data) {
  var errors = [];
  if (!data.name || typeof data.name !== "string" || data.name.trim().length < 2) {
    errors.push("name is required and must be at least 2 characters");
  }
  if (!data.email || !/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(data.email)) {
    errors.push("a valid email is required");
  }
  return errors;
}

// ── Route Handlers ─────────────────────────────────────────
async function getUser(userId) {
  var command = new GetCommand({
    TableName: TABLE_NAME,
    Key: { id: userId }
  });
  var response = await docClient.send(command);

  if (!response.Item) {
    return respond(404, { error: "User not found", userId: userId });
  }

  return respond(200, { data: response.Item });
}

async function listUsers(queryParams) {
  var limit = parseInt((queryParams && queryParams.limit) || "20", 10);
  if (limit > 100) limit = 100;

  var command = new QueryCommand({
    TableName: TABLE_NAME,
    IndexName: "status-index",
    KeyConditionExpression: "#status = :active",
    ExpressionAttributeNames: { "#status": "status" },
    ExpressionAttributeValues: { ":active": "active" },
    Limit: limit,
    ScanIndexForward: false
  });

  var response = await docClient.send(command);

  return respond(200, {
    data: response.Items,
    count: response.Items.length,
    hasMore: !!response.LastEvaluatedKey
  });
}

async function createUser(body) {
  if (!body) {
    return respond(400, { error: "Request body is required" });
  }

  var data;
  try {
    data = typeof body === "string" ? JSON.parse(body) : body;
  } catch (e) {
    return respond(400, { error: "Invalid JSON in request body" });
  }

  var errors = validateUser(data);
  if (errors.length > 0) {
    return respond(400, { error: "Validation failed", details: errors });
  }

  var userId = crypto.randomUUID();
  var now = new Date().toISOString();

  var item = {
    id: userId,
    name: data.name.trim(),
    email: data.email.toLowerCase().trim(),
    status: "active",
    createdAt: now,
    updatedAt: now
  };

  var command = new PutCommand({
    TableName: TABLE_NAME,
    Item: item,
    ConditionExpression: "attribute_not_exists(id)"
  });

  await docClient.send(command);

  return respond(201, { data: item });
}

async function deleteUser(userId) {
  var command = new DeleteCommand({
    TableName: TABLE_NAME,
    Key: { id: userId },
    ConditionExpression: "attribute_exists(id)",
    ReturnValues: "ALL_OLD"
  });

  try {
    var response = await docClient.send(command);
    return respond(200, { message: "User deleted", data: response.Attributes });
  } catch (err) {
    if (err.name === "ConditionalCheckFailedException") {
      return respond(404, { error: "User not found" });
    }
    throw err;
  }
}

// ── Main Handler ───────────────────────────────────────────
exports.handler = async function(event, context) {
  context.callbackWaitsForEmptyEventLoop = false;

  var startTime = Date.now();
  var method = event.httpMethod;
  var pathParams = event.pathParameters || {};
  var queryParams = event.queryStringParameters || {};

  // Log invocation
  console.log(JSON.stringify({
    type: isColdStart ? "COLD_START" : "WARM_START",
    requestId: context.awsRequestId,
    method: method,
    path: event.path,
    stage: STAGE
  }));
  isColdStart = false;

  // Handle CORS preflight
  if (method === "OPTIONS") {
    return { statusCode: 200, headers: CORS_HEADERS, body: "" };
  }

  try {
    var response;

    if (method === "GET" && pathParams.id) {
      response = await getUser(pathParams.id);
    } else if (method === "GET") {
      response = await listUsers(queryParams);
    } else if (method === "POST") {
      response = await createUser(event.body);
    } else if (method === "DELETE" && pathParams.id) {
      response = await deleteUser(pathParams.id);
    } else {
      response = respond(405, { error: "Method not allowed" });
    }

    // Log completion
    console.log(JSON.stringify({
      type: "COMPLETED",
      requestId: context.awsRequestId,
      statusCode: response.statusCode,
      duration: Date.now() - startTime
    }));

    return response;

  } catch (err) {
    console.error(JSON.stringify({
      type: "ERROR",
      requestId: context.awsRequestId,
      error: err.message,
      stack: err.stack,
      duration: Date.now() - startTime
    }));

    if (err.name === "ProvisionedThroughputExceededException") {
      return respond(429, { error: "Too many requests, please retry" });
    }

    return respond(500, { error: "Internal server error" });
  }
};

Deploy this with the SAM template shown earlier, and you have a complete serverless CRUD API with structured logging, input validation, proper error handling, CORS, and cold start tracking.

Common Issues and Troubleshooting

1. Task Timed Out After X Seconds

REPORT RequestId: abc-123 Duration: 30000.00 ms Billed Duration: 30000 ms
Task timed out after 30.01 seconds

This usually means your function hit the timeout limit. Common causes: database connection hanging, external API not responding, or callbackWaitsForEmptyEventLoop not set to false. Check whether a persistent connection (database pool, HTTP keep-alive) is keeping the event loop alive. Increase the timeout or fix the underlying issue — do not just increase the timeout blindly.
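
One defensive pattern is to race slow external work against the remaining invocation time, so you log a useful error instead of a bare timeout. A minimal sketch, assuming fetchFromUpstream is a hypothetical slow external call:

function withDeadline(promise, context, bufferMs) {
  var timeLeft = context.getRemainingTimeInMillis() - bufferMs;
  var timer;
  var deadline = new Promise(function(_, reject) {
    timer = setTimeout(function() {
      reject(new Error("Aborting: approaching Lambda timeout"));
    }, timeLeft);
  });
  // Clear the timer either way so it does not linger in the event loop
  return Promise.race([promise, deadline]).finally(function() { clearTimeout(timer); });
}

exports.handler = async function(event, context) {
  var data = await withDeadline(fetchFromUpstream(event), context, 500);
  return { statusCode: 200, body: JSON.stringify(data) };
};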

2. Cannot Find Module

Runtime.ImportModuleError: Error: Cannot find module '@aws-sdk/client-dynamodb'

Your deployment package is missing dependencies. Make sure you ran npm install --production before zipping. If you are using layers, verify the layer is attached and the module lives under nodejs/node_modules/ in the layer zip. Also check that the correct layer version is attached — stale versions are a common source of this error.
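
A quick sanity check is to list the zip contents and confirm the module actually shipped:

# Verify the dependency is inside the deployment package
unzip -l function.zip | grep client-dynamodb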

3. Access Denied to DynamoDB

AccessDeniedException: User: arn:aws:sts::123456789:assumed-role/my-lambda-role/my-function is not authorized to perform: dynamodb:GetItem on resource: arn:aws:dynamodb:us-east-1:123456789:table/users

Your Lambda execution role does not have the necessary IAM permissions. Add a policy granting dynamodb:GetItem, dynamodb:PutItem, dynamodb:Query, and dynamodb:DeleteItem on the specific table ARN. Avoid using dynamodb:* in production — follow least-privilege.
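
A minimal policy along these lines (account ID and ARNs are placeholders) scopes the role to exactly the operations this article's handler needs, including queries against the table's GSIs:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:Query",
        "dynamodb:DeleteItem"
      ],
      "Resource": [
        "arn:aws:dynamodb:us-east-1:123456789:table/users",
        "arn:aws:dynamodb:us-east-1:123456789:table/users/index/*"
      ]
    }
  ]
}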

4. CORS Errors in the Browser

Access to XMLHttpRequest at 'https://api.example.com/users' from origin 'https://app.example.com' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested response.

This happens when your Lambda does not return CORS headers. The critical detail: CORS headers must be present on every response, including 4xx and 5xx errors. If your error path skips headers, the browser blocks the response and you only see a CORS error instead of the actual error message. Also make sure your API Gateway is configured to pass through OPTIONS preflight requests to your handler (or configure the CORS settings on the gateway itself).

5. Out of Memory

REPORT RequestId: abc-123 Duration: 1234.56 ms Max Memory Used: 128 MB
RequestId: abc-123 Error: Runtime exited with error: signal: killed
Runtime.ExitError

Your function ran out of memory. Lambda kills the process with a SIGKILL when it exceeds the configured memory limit. Increase the memory in your function configuration. Note that increasing memory also increases CPU allocation proportionally, so memory-constrained functions are also CPU-constrained.
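
Memory (and the CPU allocated with it) can be raised without touching the code (the function name is a placeholder):

aws lambda update-function-configuration \
  --function-name my-api-handler \
  --memory-size 512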

Best Practices

  • Keep functions focused. One function per responsibility. A function that handles user CRUD is fine. A function that handles user CRUD, sends emails, processes payments, and generates reports is not. Break it up.

  • Initialize SDK clients outside the handler. Module-level initialization runs once per cold start and is reused across all invocations in that execution environment. This is free performance.

  • Set callbackWaitsForEmptyEventLoop = false when using persistent connections. Without this, Lambda waits for the event loop to drain, which means it waits for your database connection to close, and your function can run all the way to the timeout.

  • Use structured JSON logging. Plain console.log("something happened") is useless in production. Log JSON objects with request IDs, durations, and context. CloudWatch Logs Insights can query structured logs efficiently.

  • Keep deployment packages under 5 MB (zipped). Smaller packages mean faster cold starts. Use the modular AWS SDK v3 instead of the monolithic v2. Tree-shake if possible. Move large dependencies to Lambda Layers.

  • Set appropriate timeouts and memory. Do not leave the default 3-second timeout on a function that queries a database. Set timeouts based on measured P99 latency plus a buffer. Start with 256 MB memory and adjust based on CloudWatch metrics.

  • Use environment variables for all configuration. Table names, stage identifiers, feature flags, API endpoints — none of these should be hardcoded. This lets you deploy the same code to dev, staging, and production by changing environment variables.

  • Implement idempotency for event-driven functions. If your function processes SQS messages or S3 events, it will be invoked at least once — meaning duplicates are possible. Use a DynamoDB conditional write or an idempotency key to prevent duplicate processing; a minimal sketch follows this list.

  • Monitor and alert on errors. Set up CloudWatch alarms on the Errors and Throttles Lambda metrics. If your error rate spikes, you want to know immediately, not when a customer reports it.

  • Use X-Ray for distributed tracing. When your Lambda calls other AWS services or external APIs, X-Ray shows you exactly where time is being spent. Enable it in your function configuration and instrument the SDK calls.
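
As referenced in the idempotency bullet above, here is a minimal sketch of deduplicating event records with a DynamoDB conditional write. The idempotency table name, its key schema, and the use of the SQS record's messageId are assumptions for illustration.

var { DynamoDBClient } = require("@aws-sdk/client-dynamodb");
var { DynamoDBDocumentClient, PutCommand } = require("@aws-sdk/lib-dynamodb");

var docClient = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// Returns true if this messageId has not been processed before, false if it is a duplicate.
// Assumes an "idempotency" table with partition key "id" and a TTL attribute "expiresAt".
async function claimMessage(messageId) {
  try {
    await docClient.send(new PutCommand({
      TableName: process.env.IDEMPOTENCY_TABLE || "idempotency",
      Item: {
        id: messageId,
        expiresAt: Math.floor(Date.now() / 1000) + 24 * 60 * 60
      },
      ConditionExpression: "attribute_not_exists(id)"
    }));
    return true;
  } catch (err) {
    if (err.name === "ConditionalCheckFailedException") return false;
    throw err;
  }
}

exports.handler = async function(event, context) {
  for (var record of event.Records) {
    var isNew = await claimMessage(record.messageId);
    if (!isNew) continue; // duplicate delivery, skip
    // process the record...
  }
};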
