Pulumi vs Terraform: Choosing Your IaC Tool
Compare Pulumi and Terraform for infrastructure as code with side-by-side examples, testing strategies, and decision framework
Infrastructure as Code is no longer optional. If you are provisioning cloud resources by clicking around in a console, you are building technical debt that will eventually collapse under its own weight. The two dominant tools in this space — Pulumi and Terraform — take fundamentally different approaches to the same problem. This article breaks down both tools with real code, honest trade-offs, and a decision framework that will save your team months of debate.
Prerequisites
Before diving in, you should have:
- Familiarity with at least one cloud provider (AWS, Azure, or GCP)
- Basic understanding of infrastructure concepts (VPCs, IAM, compute, storage)
- Node.js installed (v16+ for Pulumi examples)
- Terraform CLI installed (v1.5+ for HCL examples)
- An AWS account for running the examples
Philosophy: DSL vs General-Purpose Language
This is the core divide, and everything else flows from it.
Terraform uses HCL (HashiCorp Configuration Language), a purpose-built domain-specific language. HCL is declarative, intentionally limited, and designed to describe infrastructure — not execute business logic. HashiCorp's position is that infrastructure definition should be constrained. You declare what you want, and Terraform figures out how to get there.
Pulumi lets you write infrastructure code in languages you already know: JavaScript, TypeScript, Python, Go, or C#. Pulumi's position is that infrastructure is software and should be treated as such. You get loops, conditionals, functions, classes, package managers, and the entire ecosystem of your chosen language.
Neither philosophy is wrong. They optimize for different things. Terraform optimizes for readability and auditability by non-developers. Pulumi optimizes for expressiveness and developer productivity. The question is which trade-off matters more for your team.
Here is a telling example. Suppose you need to create 10 S3 buckets with names derived from a configuration map, each with different lifecycle policies based on a data classification tier. In Terraform, you will wrestle with for_each, dynamic blocks, and locals to make it work. In Pulumi JavaScript, it is a for loop with an if statement — code you have written a thousand times.
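To make that concrete, here is a sketch of just the selection logic in plain JavaScript. The tier names, day counts, and bucket names are illustrative, and actual resource creation is omitted:

```javascript
// Sketch: derive per-bucket lifecycle settings from a config map.
// Tier thresholds here are illustrative, not from any real policy.
function planBuckets(bucketConfigs) {
  var lifecycleDays = { hot: 30, warm: 90, cold: 365 };
  var plans = [];
  for (var i = 0; i < bucketConfigs.length; i++) {
    var cfg = bucketConfigs[i];
    var days = lifecycleDays[cfg.tier];
    if (days === undefined) {
      days = 90; // fall back to a default retention for unknown tiers
    }
    plans.push({ name: cfg.name, expirationDays: days });
  }
  return plans;
}

// Example: three buckets across tiers; the third falls back to 90 days
var plans = planBuckets([
  { name: "logs", tier: "hot" },
  { name: "archive", tier: "cold" },
  { name: "misc", tier: "unknown" }
]);
```

In a Pulumi program, each entry in `plans` would then feed a resource constructor; the point is that the branching logic is ordinary code.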
Pulumi with JavaScript and TypeScript
Pulumi's JavaScript SDK wraps cloud provider APIs in a way that feels natural to Node.js developers. You define resources as objects, and Pulumi tracks dependencies automatically through its concept of "outputs" — values that are resolved asynchronously after resources are provisioned.
Here is a basic Pulumi program that creates an S3 bucket:
var pulumi = require("@pulumi/pulumi");
var aws = require("@pulumi/aws");
var bucket = new aws.s3.Bucket("my-bucket", {
acl: "private",
tags: {
Environment: pulumi.getStack(),
ManagedBy: "pulumi"
}
});
exports.bucketName = bucket.id;
exports.bucketArn = bucket.arn;
That is a real, deployable Pulumi program. The exports at the bottom are stack outputs — values that other stacks or CI/CD pipelines can consume. The pulumi.getStack() call returns the current stack name (dev, staging, production), which means the same code adapts to its environment.
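To build intuition for how outputs behave, here is a deliberately simplified toy model of an output as a wrapped promise. This is not the real SDK type — the actual Output is far richer — but it captures the mental model of "a value that resolves after provisioning, with derivations chained via apply":

```javascript
// Toy model of a Pulumi Output: a value that resolves later,
// with an apply() that returns a new derived "output".
// NOT the real SDK type; illustration only.
function ToyOutput(promise) {
  this.promise = promise;
}
ToyOutput.prototype.apply = function (fn) {
  return new ToyOutput(this.promise.then(fn));
};

// Simulate a resource property that resolves after "provisioning"
var bucketId = new ToyOutput(Promise.resolve("my-bucket-4f3a"));

// Derive a new value; the callback runs only once the value exists
var objectKey = bucketId.apply(function (id) {
  return "prefix/" + id + "/data";
});
```

This is why you cannot concatenate an output directly with a string: the value does not exist yet when your program runs, only the wrapper does.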
Pulumi's real power shows up when you start using language features. You can create reusable infrastructure components as classes:
var pulumi = require("@pulumi/pulumi");
var aws = require("@pulumi/aws");
function createTaggedBucket(name, tier, opts) {
var lifecycleDays = {
"hot": 30,
"warm": 90,
"cold": 365
};
var bucket = new aws.s3.Bucket(name, {
acl: "private",
lifecycleRules: [{
enabled: true,
expiration: {
days: lifecycleDays[tier] || 90
}
}],
tags: {
Tier: tier,
Environment: pulumi.getStack(),
ManagedBy: "pulumi"
}
}, opts);
return bucket;
}
var config = new pulumi.Config();
var buckets = config.requireObject("buckets");
var createdBuckets = {};
buckets.forEach(function(bucketConfig) {
createdBuckets[bucketConfig.name] = createTaggedBucket(
bucketConfig.name,
bucketConfig.tier
);
});
exports.bucketArns = Object.keys(createdBuckets).reduce(function(acc, key) {
acc[key] = createdBuckets[key].arn;
return acc;
}, {});
That is just JavaScript. Any Node.js developer on your team can read it, modify it, and debug it with standard tools.
Terraform with HCL
Terraform's HCL is clean and readable for straightforward infrastructure. Here is the equivalent S3 bucket:
resource "aws_s3_bucket" "my_bucket" {
bucket_prefix = "my-bucket"
tags = {
Environment = terraform.workspace
ManagedBy = "terraform"
}
}
resource "aws_s3_bucket_acl" "my_bucket_acl" {
bucket = aws_s3_bucket.my_bucket.id
acl = "private"
}
output "bucket_name" {
value = aws_s3_bucket.my_bucket.id
}
output "bucket_arn" {
value = aws_s3_bucket.my_bucket.arn
}
HCL is straightforward for this case. Where it gets complicated is when you need dynamic behavior. Here is the multi-bucket example with lifecycle policies:
variable "buckets" {
type = list(object({
name = string
tier = string
}))
}
locals {
lifecycle_days = {
"hot" = 30
"warm" = 90
"cold" = 365
}
}
resource "aws_s3_bucket" "tagged_buckets" {
for_each = { for b in var.buckets : b.name => b }
bucket_prefix = each.value.name
tags = {
Tier = each.value.tier
Environment = terraform.workspace
ManagedBy = "terraform"
}
}
resource "aws_s3_bucket_lifecycle_configuration" "tagged_buckets_lifecycle" {
for_each = { for b in var.buckets : b.name => b }
bucket = aws_s3_bucket.tagged_buckets[each.key].id
rule {
id = "expire-objects"
status = "Enabled"
expiration {
days = lookup(local.lifecycle_days, each.value.tier, 90)
}
}
}
output "bucket_arns" {
value = { for k, v in aws_s3_bucket.tagged_buckets : k => v.arn }
}
It works, but you had to learn for_each, for expressions, lookup, and locals. These are HCL-specific constructs that do not transfer to any other language. For a team of infrastructure specialists, that is fine. For a team of full-stack developers who occasionally touch infrastructure, it is friction.
State Management Comparison
Both tools maintain state — a record of what resources exist and their current configuration. How they manage that state differs significantly.
Terraform stores state in a JSON file. By default it is local (terraform.tfstate), but in practice you must use a remote backend. The most common setup is an S3 bucket with DynamoDB locking:
terraform {
backend "s3" {
bucket = "my-terraform-state"
key = "prod/terraform.tfstate"
region = "us-east-1"
dynamodb_table = "terraform-locks"
encrypt = true
}
}
You must provision the state bucket and lock table before you can use Terraform. This is the classic chicken-and-egg problem that every Terraform team solves slightly differently.
Pulumi defaults to the Pulumi Cloud service for state management. This means encryption, locking, and history are handled for you out of the box. You can also self-host state on S3, Azure Blob Storage, or a local file system:
# Use Pulumi Cloud (default)
pulumi login
# Use S3 backend
pulumi login s3://my-pulumi-state
# Use local file system
pulumi login file://~/.pulumi-state
Pulumi's approach is simpler to get started with, but the default ties you to their cloud service. Terraform's approach requires more setup but keeps you independent from the start. For enterprise teams that care about data sovereignty, Terraform's self-managed state is often preferred.
Both tools support state locking to prevent concurrent modifications. Both support state import for adopting existing resources. Terraform's terraform import command and Pulumi's pulumi import command work similarly, though Pulumi can also generate code from imported resources, which is a meaningful productivity advantage during migration.
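Under the hood, both locking schemes reduce to an atomic compare-and-set on a shared record. Here is a toy in-memory model of the invariant — not either tool's actual implementation, which uses DynamoDB conditional writes (Terraform) or the backend service (Pulumi):

```javascript
// Toy state-lock: acquire succeeds only if no holder is recorded.
// Illustrates the invariant, not a real distributed lock.
function StateLock() {
  this.holder = null;
}
StateLock.prototype.acquire = function (who) {
  if (this.holder !== null) {
    return false; // someone else holds the lock; caller must wait or fail
  }
  this.holder = who;
  return true;
};
StateLock.prototype.release = function (who) {
  if (this.holder === who) {
    this.holder = null;
  }
};

var lock = new StateLock();
var first = lock.acquire("ci-pipeline");   // true: lock was free
var second = lock.acquire("laptop-user");  // false: blocked until release
lock.release("ci-pipeline");
var third = lock.acquire("laptop-user");   // true: free again after release
```

The "stuck lock" failure mode discussed later is exactly what happens when a holder crashes before calling release.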
Ecosystem and Provider Support
Terraform has the larger ecosystem. The Terraform Registry hosts thousands of providers and modules covering virtually every cloud service and SaaS product. When a new AWS service launches, a Terraform provider usually exists within days, often maintained by HashiCorp or the cloud vendor directly.
Pulumi bridges this gap through its Terraform Bridge, which automatically wraps Terraform providers for use in Pulumi programs. This means Pulumi effectively has access to the same provider ecosystem, though there can be a delay between a Terraform provider update and the corresponding Pulumi package. In practice, this delay is usually hours to days for major providers like AWS, Azure, and GCP.
For modules (reusable infrastructure packages), Terraform's ecosystem is more mature. The Terraform Registry has community-verified modules for common patterns: VPC setups, EKS clusters, RDS instances. Pulumi has its own component library, and you can publish reusable components as npm packages, but the selection is smaller.
Where Pulumi excels is in the broader language ecosystem. Need to fetch a configuration value from Vault, parse a YAML file, or call an external API during provisioning? You import an npm package and write JavaScript. In Terraform, you are limited to data sources and provisioners, which are intentionally constrained.
Testing Approaches
This is where the philosophical difference creates the largest practical gap.
Pulumi supports unit testing with standard frameworks. You can mock cloud resources and test your infrastructure logic with Mocha, Jest, or any Node.js test runner:
var assert = require("assert");
var pulumi = require("@pulumi/pulumi");
pulumi.runtime.setMocks({
newResource: function(args) {
return {
id: args.name + "-id",
state: args.inputs
};
},
call: function(args) {
return args.inputs;
}
});
describe("Infrastructure", function() {
var infra;
before(function() {
infra = require("./index");
});
it("should create a bucket with private ACL", function(done) {
pulumi.all([infra.bucketName]).apply(function(values) {
assert.ok(values[0]);
done();
});
});
it("should export the bucket ARN", function(done) {
pulumi.all([infra.bucketArn]).apply(function(values) {
assert.ok(values[0]);
done();
});
});
});
You can also write property tests and integration tests that actually deploy to a real cloud environment and verify the results. Pulumi's automation API lets you drive deployments programmatically from test code.
Terraform testing is more limited. You can validate configuration with terraform validate and terraform plan. For real testing, the community relies on Terratest (a Go library) or the newer terraform test command introduced in Terraform 1.6:
# tests/s3_test.tftest.hcl
run "verify_bucket_creation" {
command = apply
assert {
condition = aws_s3_bucket.my_bucket.tags["ManagedBy"] == "terraform"
error_message = "Bucket should be tagged as managed by terraform"
}
}
Terraform's built-in testing is improving but remains less flexible than what Pulumi offers through general-purpose test frameworks.
CI/CD Integration
Both tools integrate cleanly with standard CI/CD pipelines. The workflows look similar:
Terraform in GitHub Actions:
name: Terraform
on:
push:
branches: [main]
pull_request:
branches: [main]
jobs:
terraform:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: hashicorp/setup-terraform@v3
- run: terraform init
- run: terraform plan -out=tfplan
if: github.event_name == 'pull_request'
- run: terraform apply -auto-approve
if: github.ref == 'refs/heads/main'
Pulumi in GitHub Actions:
name: Pulumi
on:
push:
branches: [main]
pull_request:
branches: [main]
jobs:
pulumi:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: '20'
- run: npm install
- uses: pulumi/actions@v5
with:
command: preview
stack-name: production
if: github.event_name == 'pull_request'
- uses: pulumi/actions@v5
with:
command: up
stack-name: production
if: github.ref == 'refs/heads/main'
Both support plan/preview on pull requests and apply on merge. Terraform has a slight edge here because its plan output is a well-known format that teams have built review processes around. Pulumi's preview output is equally informative but less standardized across organizations.
Terraform Cloud and Pulumi Cloud both offer managed CI/CD for infrastructure, with policy enforcement, cost estimation, and approval workflows. If your organization wants a fully managed experience, both vendors deliver it.
Team Adoption Considerations
Choose Terraform when:
- Your team includes dedicated infrastructure or DevOps engineers who are comfortable with HCL
- You need maximum community support and module availability
- Your organization requires strict separation between application code and infrastructure code
- You want the broadest possible hiring pool (Terraform dominance in job postings is real)
- Regulatory compliance requires auditable, constrained infrastructure definitions
Choose Pulumi when:
- Your team is primarily application developers who own their infrastructure
- You want to share types, validation logic, and configuration between application code and infrastructure
- Testing infrastructure logic with standard tools is a priority
- You are already standardized on TypeScript or another supported language
- You need complex dynamic infrastructure that would be painful in HCL
The team composition question is the most important one. A team of Python developers who are told to learn HCL will resent it. A team of infrastructure specialists who are told to learn TypeScript build patterns will resent it equally. Meet your team where they are.
Migration Between Tools
Moving from Terraform to Pulumi is well-supported. Pulumi provides pulumi convert to translate HCL to Pulumi code and pulumi import to adopt existing resources. The conversion is not always perfect — complex modules and dynamic blocks may need manual adjustment — but it handles 80% of cases cleanly.
Moving from Pulumi to Terraform is harder. There is no official conversion tool. You would need to export your Pulumi state, map resources to Terraform resource types, and write the HCL by hand. For non-trivial infrastructure, this is a multi-week project.
This asymmetry matters. Choosing Pulumi is a higher-commitment decision. If you are unsure, starting with Terraform and migrating to Pulumi later is the lower-risk path.
Cost Comparison
Terraform:
- Terraform CLI: open source, free
- Terraform Cloud Free: up to 500 managed resources
- Terraform Cloud Plus: $0.00014/hour per managed resource (~$1.23/resource/year)
- Terraform Enterprise: self-hosted, contact sales
Pulumi:
- Pulumi CLI: open source, free
- Pulumi Cloud Individual: free for one user
- Pulumi Cloud Team: starts at $50/month per user
- Pulumi Cloud Enterprise: contact sales
- Self-managed state backends: free (you pay only for storage)
For small teams and open source projects, both tools are effectively free. At scale, Terraform Cloud's per-resource pricing can add up quickly for large infrastructure footprints, while Pulumi's per-seat pricing hits harder for large teams with small infrastructure.
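The crossover is easy to estimate from the list prices above. These figures change over time, so treat the constants below as illustrative:

```javascript
// Rough annual-cost model using the prices cited above.
// Prices change; verify against current vendor pricing pages.
var TF_PER_RESOURCE_HOUR = 0.00014; // Terraform Cloud Plus, per managed resource
var PULUMI_PER_SEAT_MONTH = 50;     // Pulumi Cloud Team, per user

function terraformAnnualCost(resources) {
  return resources * TF_PER_RESOURCE_HOUR * 24 * 365;
}
function pulumiAnnualCost(seats) {
  return seats * PULUMI_PER_SEAT_MONTH * 12;
}

// Example: 5 engineers managing 2,000 resources
var tf = terraformAnnualCost(2000); // ~ $2,453/year
var pu = pulumiAnnualCost(5);       // $3,000/year
```

The pattern holds generally: large fleets with few operators favor per-seat pricing, while small fleets with many contributors favor per-resource pricing.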
If you self-manage state for either tool, the tooling cost is zero. The real cost is engineering time, and that depends entirely on which tool your team is more productive with.
Real-World Decision Framework
Ask these questions in order:
What does your team know? If they know HCL, use Terraform. If they know TypeScript, consider Pulumi. Learning a new tool is expensive.
How complex is your infrastructure logic? If you are standing up standard three-tier architectures, either tool works. If you are generating infrastructure dynamically from application configuration, Pulumi is significantly easier.
Do you need to test infrastructure logic? If yes, Pulumi's testing story is stronger today.
What is your organization's risk tolerance? Terraform has been around longer, has more community knowledge, and has a clearer migration path away from it. Pulumi is a higher-reward, higher-commitment choice.
Are you hiring for this role? "Terraform experience" appears in 10x more job postings than "Pulumi experience." That matters for recruiting.
Complete Working Example
Here is the same infrastructure — a VPC, Lambda function, DynamoDB table, and S3 bucket — defined in both Pulumi JavaScript and Terraform HCL.
Pulumi JavaScript
var pulumi = require("@pulumi/pulumi");
var aws = require("@pulumi/aws");
var config = new pulumi.Config();
var environment = config.get("environment") || "dev";
var projectName = "myapp";
// VPC
var vpc = new aws.ec2.Vpc(projectName + "-vpc", {
cidrBlock: "10.0.0.0/16",
enableDnsHostnames: true,
enableDnsSupport: true,
tags: {
Name: projectName + "-vpc",
Environment: environment
}
});
var publicSubnet = new aws.ec2.Subnet(projectName + "-public-subnet", {
vpcId: vpc.id,
cidrBlock: "10.0.1.0/24",
availabilityZone: "us-east-1a",
mapPublicIpOnLaunch: true,
tags: {
Name: projectName + "-public-subnet",
Environment: environment
}
});
var privateSubnet = new aws.ec2.Subnet(projectName + "-private-subnet", {
vpcId: vpc.id,
cidrBlock: "10.0.2.0/24",
availabilityZone: "us-east-1b",
tags: {
Name: projectName + "-private-subnet",
Environment: environment
}
});
var igw = new aws.ec2.InternetGateway(projectName + "-igw", {
vpcId: vpc.id,
tags: {
Name: projectName + "-igw",
Environment: environment
}
});
var routeTable = new aws.ec2.RouteTable(projectName + "-rt", {
vpcId: vpc.id,
routes: [{
cidrBlock: "0.0.0.0/0",
gatewayId: igw.id
}],
tags: {
Name: projectName + "-rt",
Environment: environment
}
});
var rtAssociation = new aws.ec2.RouteTableAssociation(projectName + "-rta", {
subnetId: publicSubnet.id,
routeTableId: routeTable.id
});
// DynamoDB Table
var table = new aws.dynamodb.Table(projectName + "-table", {
name: projectName + "-" + environment + "-data",
billingMode: "PAY_PER_REQUEST",
hashKey: "pk",
rangeKey: "sk",
attributes: [
{ name: "pk", type: "S" },
{ name: "sk", type: "S" },
{ name: "gsi1pk", type: "S" },
{ name: "gsi1sk", type: "S" }
],
globalSecondaryIndexes: [{
name: "gsi1",
hashKey: "gsi1pk",
rangeKey: "gsi1sk",
projectionType: "ALL"
}],
pointInTimeRecovery: {
enabled: true
},
tags: {
Environment: environment
}
});
// S3 Bucket
var bucket = new aws.s3.Bucket(projectName + "-bucket", {
acl: "private",
versioning: {
enabled: true
},
serverSideEncryptionConfiguration: {
rule: {
applyServerSideEncryptionByDefault: {
sseAlgorithm: "AES256"
}
}
},
tags: {
Environment: environment
}
});
var bucketPublicAccessBlock = new aws.s3.BucketPublicAccessBlock(projectName + "-bucket-pab", {
bucket: bucket.id,
blockPublicAcls: true,
blockPublicPolicy: true,
ignorePublicAcls: true,
restrictPublicBuckets: true
});
// Lambda Function
var lambdaRole = new aws.iam.Role(projectName + "-lambda-role", {
assumeRolePolicy: JSON.stringify({
Version: "2012-10-17",
Statement: [{
Action: "sts:AssumeRole",
Principal: { Service: "lambda.amazonaws.com" },
Effect: "Allow"
}]
}),
tags: {
Environment: environment
}
});
var lambdaPolicy = new aws.iam.RolePolicy(projectName + "-lambda-policy", {
role: lambdaRole.id,
policy: pulumi.all([table.arn, bucket.arn]).apply(function(args) {
return JSON.stringify({
Version: "2012-10-17",
Statement: [
{
Effect: "Allow",
Action: [
"dynamodb:GetItem",
"dynamodb:PutItem",
"dynamodb:Query",
"dynamodb:UpdateItem",
"dynamodb:DeleteItem"
],
Resource: [args[0], args[0] + "/index/*"]
},
{
Effect: "Allow",
Action: ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
Resource: args[1] + "/*"
},
{
Effect: "Allow",
Action: [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
Resource: "arn:aws:logs:*:*:*"
}
]
});
})
});
var lambdaFunction = new aws.lambda.Function(projectName + "-function", {
runtime: "nodejs20.x",
handler: "index.handler",
role: lambdaRole.arn,
code: new pulumi.asset.AssetArchive({
"index.js": new pulumi.asset.StringAsset(
'exports.handler = async function(event) { return { statusCode: 200, body: "ok" }; };'
)
}),
memorySize: 256,
timeout: 30,
environment: {
variables: {
TABLE_NAME: table.name,
BUCKET_NAME: bucket.id,
NODE_ENV: environment
}
},
tags: {
Environment: environment
}
});
// Outputs
exports.vpcId = vpc.id;
exports.tableName = table.name;
exports.bucketName = bucket.id;
exports.lambdaArn = lambdaFunction.arn;
Terraform HCL
terraform {
required_version = ">= 1.5.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
}
provider "aws" {
region = "us-east-1"
}
variable "environment" {
type = string
default = "dev"
}
variable "project_name" {
type = string
default = "myapp"
}
# VPC
resource "aws_vpc" "main" {
cidr_block = "10.0.0.0/16"
enable_dns_hostnames = true
enable_dns_support = true
tags = {
Name = "${var.project_name}-vpc"
Environment = var.environment
}
}
resource "aws_subnet" "public" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.1.0/24"
availability_zone = "us-east-1a"
map_public_ip_on_launch = true
tags = {
Name = "${var.project_name}-public-subnet"
Environment = var.environment
}
}
resource "aws_subnet" "private" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.2.0/24"
availability_zone = "us-east-1b"
tags = {
Name = "${var.project_name}-private-subnet"
Environment = var.environment
}
}
resource "aws_internet_gateway" "main" {
vpc_id = aws_vpc.main.id
tags = {
Name = "${var.project_name}-igw"
Environment = var.environment
}
}
resource "aws_route_table" "public" {
vpc_id = aws_vpc.main.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.main.id
}
tags = {
Name = "${var.project_name}-rt"
Environment = var.environment
}
}
resource "aws_route_table_association" "public" {
subnet_id = aws_subnet.public.id
route_table_id = aws_route_table.public.id
}
# DynamoDB Table
resource "aws_dynamodb_table" "main" {
name = "${var.project_name}-${var.environment}-data"
billing_mode = "PAY_PER_REQUEST"
hash_key = "pk"
range_key = "sk"
attribute {
name = "pk"
type = "S"
}
attribute {
name = "sk"
type = "S"
}
attribute {
name = "gsi1pk"
type = "S"
}
attribute {
name = "gsi1sk"
type = "S"
}
global_secondary_index {
name = "gsi1"
hash_key = "gsi1pk"
range_key = "gsi1sk"
projection_type = "ALL"
}
point_in_time_recovery {
enabled = true
}
tags = {
Environment = var.environment
}
}
# S3 Bucket
resource "aws_s3_bucket" "main" {
bucket_prefix = "${var.project_name}-"
tags = {
Environment = var.environment
}
}
resource "aws_s3_bucket_versioning" "main" {
bucket = aws_s3_bucket.main.id
versioning_configuration {
status = "Enabled"
}
}
resource "aws_s3_bucket_server_side_encryption_configuration" "main" {
bucket = aws_s3_bucket.main.id
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
resource "aws_s3_bucket_public_access_block" "main" {
bucket = aws_s3_bucket.main.id
block_public_acls = true
block_public_policy = true
ignore_public_acls = true
restrict_public_buckets = true
}
# Lambda Function
resource "aws_iam_role" "lambda" {
name = "${var.project_name}-lambda-role"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [{
Action = "sts:AssumeRole"
Principal = { Service = "lambda.amazonaws.com" }
Effect = "Allow"
}]
})
tags = {
Environment = var.environment
}
}
resource "aws_iam_role_policy" "lambda" {
name = "${var.project_name}-lambda-policy"
role = aws_iam_role.lambda.id
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Effect = "Allow"
Action = [
"dynamodb:GetItem",
"dynamodb:PutItem",
"dynamodb:Query",
"dynamodb:UpdateItem",
"dynamodb:DeleteItem"
]
Resource = [
aws_dynamodb_table.main.arn,
"${aws_dynamodb_table.main.arn}/index/*"
]
},
{
Effect = "Allow"
Action = [
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject"
]
Resource = "${aws_s3_bucket.main.arn}/*"
},
{
Effect = "Allow"
Action = [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
]
Resource = "arn:aws:logs:*:*:*"
}
]
})
}
data "archive_file" "lambda" {
type = "zip"
output_path = "${path.module}/lambda.zip"
source {
content = <<-EOF
exports.handler = async function(event) {
return { statusCode: 200, body: "ok" };
};
EOF
filename = "index.js"
}
}
resource "aws_lambda_function" "main" {
function_name = "${var.project_name}-function"
role = aws_iam_role.lambda.arn
handler = "index.handler"
runtime = "nodejs20.x"
filename = data.archive_file.lambda.output_path
source_code_hash = data.archive_file.lambda.output_base64sha256
memory_size = 256
timeout = 30
environment {
variables = {
TABLE_NAME = aws_dynamodb_table.main.name
BUCKET_NAME = aws_s3_bucket.main.id
NODE_ENV = var.environment
}
}
tags = {
Environment = var.environment
}
}
# Outputs
output "vpc_id" {
value = aws_vpc.main.id
}
output "table_name" {
value = aws_dynamodb_table.main.name
}
output "bucket_name" {
value = aws_s3_bucket.main.id
}
output "lambda_arn" {
value = aws_lambda_function.main.arn
}
Notice how the Terraform version requires more moving parts (the archive_file data source to zip the Lambda code, and separate resources for bucket versioning and encryption), while the Pulumi version handles these inline. The Terraform version is more explicit about every resource relationship. The Pulumi version is more concise but hides some of that explicitness behind SDK abstractions. Both deploy identical infrastructure.
Common Issues and Troubleshooting
1. Pulumi Output Resolution Errors
One of the most confusing issues for Pulumi newcomers is trying to use an output value as a plain string:
// WRONG - bucket.id is an Output<string>, not a string
var key = "prefix/" + bucket.id + "/data";
// CORRECT - use pulumi.interpolate or .apply()
var key = pulumi.interpolate`prefix/${bucket.id}/data`;
// or
var key = bucket.id.apply(function(id) {
return "prefix/" + id + "/data";
});
Pulumi outputs are promises that resolve after deployment. You cannot concatenate them directly. Use pulumi.interpolate for string building or .apply() for transformations.
2. Terraform State Lock Stuck
If a Terraform apply is interrupted (CI/CD timeout, network failure), the DynamoDB state lock may not release. You will see: Error: Error locking state: Error acquiring the state lock.
# Find the lock ID from the error message, then force unlock
terraform force-unlock LOCK_ID
Prevent this by setting reasonable timeouts in your CI/CD pipeline and implementing graceful shutdown handlers. Always use the -lock-timeout flag in automated environments:
terraform apply -auto-approve -lock-timeout=5m
3. Pulumi Stack References and Circular Dependencies
When splitting infrastructure across multiple Pulumi stacks, circular references between stacks will silently deadlock your deployment. Stack A exports a VPC ID that Stack B needs, but Stack B exports a security group that Stack A needs.
The fix is architectural: create a third "shared" stack that both consume from, or restructure so dependencies flow in one direction. This is the same problem microservice teams face with circular service dependencies. Design your stack boundaries as you would service boundaries — with clear dependency direction.
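One cheap guard is to record the stack dependency graph in a small map and fail fast when it contains a cycle. A sketch (stack names are hypothetical):

```javascript
// Detect cycles in a stack dependency map: stack -> stacks it reads from.
// Depth-first search with a "visiting" set; a back edge means a cycle.
function hasCycle(deps) {
  var visiting = {};
  var done = {};
  function visit(node) {
    if (done[node]) return false;
    if (visiting[node]) return true; // back edge: cycle found
    visiting[node] = true;
    var next = deps[node] || [];
    for (var i = 0; i < next.length; i++) {
      if (visit(next[i])) return true;
    }
    visiting[node] = false;
    done[node] = true;
    return false;
  }
  return Object.keys(deps).some(visit);
}

// Clean one-directional layout: app reads from networking
var ok = hasCycle({ app: ["networking"], networking: [] });  // false
// Mutual stack references deadlock deployments
var bad = hasCycle({ a: ["b"], b: ["a"] });                  // true
```

Running a check like this in CI, against a map you maintain alongside your stacks, turns a silent deadlock into a fast, explicit failure.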
4. Terraform Version Conflicts
When team members or pipelines run different versions, Terraform may refuse to plan if the state was written by a newer CLI or provider version:
Error: state snapshot was created by Terraform v1.7.0,
which is newer than current v1.5.0
Pin your provider versions explicitly and upgrade them deliberately:
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.30.0" # Pin to minor version
}
}
}
Run terraform init -upgrade when you intentionally bump versions, and make sure all team members and CI/CD pipelines use the same Terraform CLI version. Tools like tfenv or asdf help manage this.
5. Resource Replacement Surprises
Both tools will sometimes decide to destroy and recreate a resource instead of updating it in place. This is catastrophic for stateful resources like databases. In Terraform, use lifecycle blocks:
resource "aws_dynamodb_table" "main" {
# ...
lifecycle {
prevent_destroy = true
}
}
In Pulumi, use protect:
var table = new aws.dynamodb.Table("main", {
// ...
}, { protect: true });
Always review plans and previews before applying. Automate this review step in CI/CD so no one skips it.
Best Practices
Always use remote state. Local state files will be lost, corrupted, or cause conflicts the moment a second person touches the infrastructure. Set up remote state on day one, not after you have a problem.
Pin your provider and CLI versions. Floating versions across team members and CI/CD pipelines is the single most common source of "works on my machine" problems in IaC. Use lock files (terraform.lock.hcl or package-lock.json) and version managers.
Structure your code by lifecycle, not by resource type. Group resources that change together and are deployed together. A VPC and its subnets belong in the same module or stack. A Lambda function and its DynamoDB table belong together. Do not create a networking.tf and compute.tf unless those truly have independent lifecycles.
Implement policy-as-code from the start. Use Terraform Sentinel or OPA, or Pulumi CrossGuard, to enforce naming conventions, tagging requirements, and security baselines. Retrofitting policy after hundreds of resources exist is painful.
Treat IaC like application code. Code review every change. Run linters (tflint, eslint). Write tests. Use feature branches. If your infrastructure code does not go through the same rigor as your application code, it will accumulate the same defects.
Use separate state per environment. Terraform workspaces or Pulumi stacks should isolate dev, staging, and production. Never share state across environments. A botched dev deployment should never be able to affect production state.
Tag everything. Every resource should have at minimum: Environment, Team, ManagedBy, and Project tags. This is not optional. Without tags, cost allocation, incident response, and security audits become guesswork.
Automate drift detection. Run terraform plan or pulumi preview on a schedule (daily or weekly) even when you are not deploying. This catches manual changes someone made in the console that your IaC does not know about. Fix drift immediately — it only compounds over time.
References
- Pulumi Documentation — Official Pulumi docs including language-specific guides
- Terraform Documentation — Official Terraform docs and registry
- Pulumi vs Terraform Comparison — Pulumi's own comparison document
- Terraform AWS Provider — AWS provider reference
- Pulumi AWS Package — Pulumi AWS SDK reference
- Infrastructure as Code by Kief Morris — The definitive book on IaC principles
- Terratest — Go library for testing Terraform modules
- Pulumi Automation API — Programmatic infrastructure deployment