Multi-Cloud Serverless Strategies
Build portable serverless applications across AWS, Azure, and GCP with abstraction patterns and migration strategies
Running serverless workloads across multiple cloud providers is one of those ideas that sounds brilliant in a strategy meeting and painful in practice. But there are legitimate reasons to do it — regulatory requirements, disaster recovery, cost optimization, and avoiding vendor lock-in among them. This article covers the patterns, abstractions, and hard-won lessons from building portable serverless applications that run on AWS Lambda, Azure Functions, and Google Cloud Functions.
Prerequisites
- Node.js 18+ installed locally
- Basic familiarity with at least one serverless platform (Lambda, Azure Functions, or Cloud Functions)
- AWS CLI, Azure CLI, or gcloud CLI configured
- Understanding of HTTP request/response patterns
- Familiarity with infrastructure-as-code concepts
Why Multi-Cloud Serverless (And When to Avoid It)
There are exactly four good reasons to go multi-cloud with serverless:
- Regulatory compliance — Some industries require workloads to run in specific geographic regions, and no single provider covers every jurisdiction.
- Disaster recovery — If your entire business depends on one provider and they have a regional outage, you need a fallback.
- Cost arbitrage — Different providers price compute, storage, and egress differently. A workload that costs $800/month on Lambda might cost $500 on Cloud Functions.
- Acquisition and mergers — You inherited a system on Azure and your company runs on AWS. Now you need both to talk to each other.
Here is when you should avoid multi-cloud serverless:
- You are a startup with fewer than 20 engineers. The operational overhead will eat you alive. Pick one provider and go deep.
- You are optimizing prematurely. If your monthly cloud bill is under $10,000, the engineering cost of multi-cloud abstraction will dwarf any savings.
- You want to use provider-specific features. AWS Step Functions, Azure Durable Functions, and Google Workflows are all excellent — and none of them are portable. If you need them, commit to the provider.
The honest truth is that most teams claiming they need multi-cloud actually need multi-region on a single provider. Know the difference before you invest months of engineering effort.
Abstraction Layer Patterns
The core challenge of multi-cloud serverless is that every provider has a different function signature. AWS Lambda hands you (event, context, callback). Azure Functions gives you (context, req). Google Cloud Functions expects (req, res) like Express. Your business logic should not care about any of this.
The pattern I have used successfully across three production systems is a three-layer architecture:
┌─────────────────────────────┐
│      Provider Adapter       │  ← Translates provider-specific I/O
├─────────────────────────────┤
│     Core Business Logic     │  ← Pure functions, no provider deps
├─────────────────────────────┤
│     Service Abstraction     │  ← Database, queue, storage interfaces
└─────────────────────────────┘
The provider adapter is a thin wrapper that normalizes the request into a standard shape and converts your response back into whatever the provider expects. The core logic never imports anything from aws-sdk, @azure/functions, or @google-cloud/*. The service abstraction layer gives you a consistent interface for databases, queues, and object storage regardless of the underlying provider.
// lib/request-normalizer.js
function normalizeRequest(providerEvent, providerContext) {
var provider = detectProvider(providerContext);
if (provider === 'aws') {
return {
method: providerEvent.httpMethod,
path: providerEvent.path,
headers: lowerCaseKeys(providerEvent.headers || {}),
query: providerEvent.queryStringParameters || {},
body: parseBody(providerEvent.body, providerEvent.isBase64Encoded),
provider: 'aws'
};
}
if (provider === 'azure') {
return {
method: providerContext.req.method,
path: providerContext.req.url,
headers: lowerCaseKeys(providerContext.req.headers || {}),
query: providerContext.req.query || {},
body: providerContext.req.body || {},
provider: 'azure'
};
}
// Google Cloud Functions uses Express-style req/res
return {
method: providerEvent.method,
path: providerEvent.path,
headers: lowerCaseKeys(providerEvent.headers || {}),
query: providerEvent.query || {},
body: providerEvent.body || {},
provider: 'gcp'
};
}
function detectProvider(context) {
if (process.env.AWS_LAMBDA_FUNCTION_NAME) return 'aws';
if (process.env.AZURE_FUNCTIONS_ENVIRONMENT) return 'azure';
if (process.env.FUNCTION_TARGET || process.env.K_SERVICE) return 'gcp';
return 'unknown';
}
function lowerCaseKeys(obj) {
var result = {};
Object.keys(obj).forEach(function(key) {
result[key.toLowerCase()] = obj[key];
});
return result;
}
function parseBody(body, isBase64) {
if (!body) return {};
if (isBase64) body = Buffer.from(body, 'base64').toString('utf8');
try {
return JSON.parse(body);
} catch (e) {
return { raw: body };
}
}
module.exports = { normalizeRequest, detectProvider };
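To sanity-check the normalizer during development, a quick script like the following can confirm that an API Gateway-style event comes out in the expected shape. The file name and test data are hypothetical, and it only exercises the AWS branch by setting the Lambda environment variable the detector looks for.
// test/normalize-aws.js (hypothetical sanity check, plain Node assert)
var assert = require('assert');
var normalizer = require('../lib/request-normalizer');

// Pretend we are running on Lambda so detectProvider() picks the AWS branch
process.env.AWS_LAMBDA_FUNCTION_NAME = 'get-user';

var awsEvent = {
  httpMethod: 'GET',
  path: '/users/user-001',
  headers: { 'Content-Type': 'application/json' },
  queryStringParameters: { id: 'user-001' },
  body: null,
  isBase64Encoded: false
};

var normalized = normalizer.normalizeRequest(awsEvent, {});
assert.strictEqual(normalized.provider, 'aws');
assert.strictEqual(normalized.method, 'GET');
assert.strictEqual(normalized.query.id, 'user-001');
assert.strictEqual(normalized.headers['content-type'], 'application/json');
console.log('AWS API Gateway event normalized as expected');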
Handler Portability Between Lambda, Azure Functions, and Cloud Functions
Each provider expects a different export shape. The trick is to write your handler once and wrap it per provider.
// handlers/core/get-user.js — Provider-agnostic handler
var userService = require('../../services/user-service');
function handleGetUser(normalizedRequest) {
var userId = normalizedRequest.query.id || normalizedRequest.path.split('/').pop();
if (!userId) {
return {
statusCode: 400,
body: { error: 'User ID is required' }
};
}
return userService.findById(userId)
.then(function(user) {
if (!user) {
return { statusCode: 404, body: { error: 'User not found' } };
}
return { statusCode: 200, body: user };
})
.catch(function(err) {
console.error('Failed to fetch user:', err.message);
return { statusCode: 500, body: { error: 'Internal server error' } };
});
}
module.exports = handleGetUser;
Now the provider-specific wrappers:
// aws/get-user.js — AWS Lambda handler
var normalizer = require('../lib/request-normalizer');
var handleGetUser = require('../handlers/core/get-user');
exports.handler = function(event, context, callback) {
var request = normalizer.normalizeRequest(event, context);
handleGetUser(request)
.then(function(result) {
callback(null, {
statusCode: result.statusCode,
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(result.body)
});
})
.catch(function(err) {
callback(err);
});
};
// azure/get-user/index.js — Azure Functions handler
var normalizer = require('../../lib/request-normalizer');
var handleGetUser = require('../../handlers/core/get-user');
module.exports = function(context, req) {
var request = normalizer.normalizeRequest(req, context);
handleGetUser(request)
.then(function(result) {
context.res = {
status: result.statusCode,
headers: { 'Content-Type': 'application/json' },
body: result.body
};
context.done();
})
.catch(function(err) {
context.log.error('Handler error:', err);
context.res = { status: 500, body: { error: 'Internal error' } };
context.done();
});
};
// gcp/get-user.js — Google Cloud Functions handler
var normalizer = require('../lib/request-normalizer');
var handleGetUser = require('../handlers/core/get-user');
exports.getUser = function(req, res) {
var request = normalizer.normalizeRequest(req, null);
handleGetUser(request)
.then(function(result) {
res.status(result.statusCode).json(result.body);
})
.catch(function(err) {
console.error('Handler error:', err);
res.status(500).json({ error: 'Internal error' });
});
};
Serverless Framework for Multi-Cloud
The Serverless Framework supports AWS, Azure, and GCP through provider plugins. However, I want to be upfront: the experience is not equally polished across all three. AWS support is first-class. Azure and GCP support works but you will hit edge cases.
# serverless.yml — AWS configuration
service: user-api
provider:
  name: aws
  runtime: nodejs18.x
  region: us-east-1
  environment:
    DB_PROVIDER: dynamodb
    TABLE_NAME: users
functions:
  getUser:
    handler: aws/get-user.handler
    events:
      - http:
          path: /users/{id}
          method: get
# serverless.yml — Azure configuration
service: user-api
provider:
  name: azure
  region: East US
  runtime: nodejs18
  environment:
    DB_PROVIDER: cosmosdb
    COSMOS_ENDPOINT: ${env:COSMOS_ENDPOINT}
functions:
  getUser:
    handler: azure/get-user/index.handler
    events:
      - http: true
        methods:
          - GET
        route: users/{id}
A practical approach is to maintain separate serverless-aws.yml and serverless-azure.yml files and deploy with serverless deploy --config serverless-aws.yml. Trying to cram both providers into a single config file leads to unmaintainable YAML.
Database Abstraction: DynamoDB vs Cosmos DB vs Firestore
This is where multi-cloud gets genuinely difficult. DynamoDB, Cosmos DB, and Firestore are all document databases, but their query models, consistency guarantees, and pricing structures differ significantly.
// services/db/dynamodb-adapter.js
var AWS = require('aws-sdk');
var dynamo = new AWS.DynamoDB.DocumentClient();
function DynamoAdapter(tableName) {
this.tableName = tableName;
}
DynamoAdapter.prototype.get = function(id) {
var params = {
TableName: this.tableName,
Key: { id: id }
};
return dynamo.get(params).promise()
.then(function(result) { return result.Item || null; });
};
DynamoAdapter.prototype.put = function(item) {
var params = {
TableName: this.tableName,
Item: item
};
return dynamo.put(params).promise();
};
DynamoAdapter.prototype.query = function(index, key, value) {
var params = {
TableName: this.tableName,
IndexName: index,
KeyConditionExpression: '#k = :v',
ExpressionAttributeNames: { '#k': key },
ExpressionAttributeValues: { ':v': value }
};
return dynamo.query(params).promise()
.then(function(result) { return result.Items; });
};
DynamoAdapter.prototype.delete = function(id) {
var params = {
TableName: this.tableName,
Key: { id: id }
};
return dynamo.delete(params).promise();
};
module.exports = DynamoAdapter;
// services/db/cosmosdb-adapter.js
var CosmosClient = require('@azure/cosmos').CosmosClient;
function CosmosAdapter(databaseId, containerId) {
var client = new CosmosClient({
endpoint: process.env.COSMOS_ENDPOINT,
key: process.env.COSMOS_KEY
});
this.container = client.database(databaseId).container(containerId);
}
CosmosAdapter.prototype.get = function(id) {
return this.container.item(id, id).read()
.then(function(result) { return result.resource || null; })
.catch(function(err) {
if (err.code === 404) return null;
throw err;
});
};
CosmosAdapter.prototype.put = function(item) {
return this.container.items.upsert(item)
.then(function(result) { return result.resource; });
};
CosmosAdapter.prototype.query = function(index, key, value) {
  // Cosmos SQL parameters substitute values, not property names, so the key
  // is validated against a simple allow-list and interpolated into the query
  // text; only the value travels as a parameter.
  if (!/^[A-Za-z0-9_]+$/.test(key)) {
    return Promise.reject(new Error('Invalid query key: ' + key));
  }
  var querySpec = {
    query: 'SELECT * FROM c WHERE c["' + key + '"] = @value',
    parameters: [
      { name: '@value', value: value }
    ]
  };
  return this.container.items.query(querySpec).fetchAll()
    .then(function(result) { return result.resources; });
};
CosmosAdapter.prototype.delete = function(id) {
return this.container.item(id, id).delete();
};
module.exports = CosmosAdapter;
// services/db/firestore-adapter.js
var Firestore = require('@google-cloud/firestore');
var db = new Firestore();
function FirestoreAdapter(collection) {
this.collection = db.collection(collection);
}
FirestoreAdapter.prototype.get = function(id) {
return this.collection.doc(id).get()
.then(function(doc) {
if (!doc.exists) return null;
return Object.assign({ id: doc.id }, doc.data());
});
};
FirestoreAdapter.prototype.put = function(item) {
var id = item.id;
return this.collection.doc(id).set(item);
};
FirestoreAdapter.prototype.query = function(index, key, value) {
return this.collection.where(key, '==', value).get()
.then(function(snapshot) {
var results = [];
snapshot.forEach(function(doc) {
results.push(Object.assign({ id: doc.id }, doc.data()));
});
return results;
});
};
FirestoreAdapter.prototype.delete = function(id) {
return this.collection.doc(id).delete();
};
module.exports = FirestoreAdapter;
The factory that ties it all together:
// services/db/index.js
var DynamoAdapter = require('./dynamodb-adapter');
var CosmosAdapter = require('./cosmosdb-adapter');
var FirestoreAdapter = require('./firestore-adapter');
function createDbAdapter(collection) {
var provider = process.env.DB_PROVIDER || 'dynamodb';
if (provider === 'dynamodb') {
return new DynamoAdapter(process.env.TABLE_NAME || collection);
}
if (provider === 'cosmosdb') {
return new CosmosAdapter(
process.env.COSMOS_DATABASE || 'main',
collection
);
}
if (provider === 'firestore') {
return new FirestoreAdapter(collection);
}
throw new Error('Unsupported DB_PROVIDER: ' + provider);
}
module.exports = { createDbAdapter: createDbAdapter };
Important caveats: this abstraction covers basic CRUD. The moment you need DynamoDB streams, Cosmos DB change feed, or Firestore real-time listeners, you are back to provider-specific code. Accept that and isolate those integrations rather than trying to abstract them.
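For completeness, the user-service module required by the earlier get-user handler can be little more than a thin wrapper over this factory. A minimal sketch (the module path matches the earlier require; any method beyond findById is illustrative):
// services/user-service.js (sketch): thin wrapper over the DB abstraction
var dbFactory = require('./db');
var db = dbFactory.createDbAdapter('users');

function findById(id) {
  return db.get(id);
}

function save(user) {
  return db.put(user);
}

module.exports = { findById: findById, save: save };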
Event System Differences
Every cloud provider has its own event model. AWS uses EventBridge, SNS, and SQS. Azure has Event Grid, Service Bus, and Storage Queues. GCP has Pub/Sub and Eventarc. Making these interoperable requires a message envelope pattern.
// lib/event-envelope.js
var crypto = require('crypto');
function createEnvelope(type, source, data) {
return {
id: crypto.randomUUID(),
type: type,
source: source,
time: new Date().toISOString(),
datacontenttype: 'application/json', // lowercase attribute name per the CloudEvents JSON format
data: data,
specversion: '1.0' // CloudEvents spec
};
}
function parseProviderEvent(rawEvent, provider) {
if (provider === 'aws') {
// SNS wraps the message
if (rawEvent.Records && rawEvent.Records[0].Sns) {
return JSON.parse(rawEvent.Records[0].Sns.Message);
}
// SQS message
if (rawEvent.Records && rawEvent.Records[0].body) {
return JSON.parse(rawEvent.Records[0].body);
}
// EventBridge
return rawEvent.detail || rawEvent;
}
if (provider === 'azure') {
// Service Bus
if (rawEvent.body) return rawEvent.body;
// Event Grid
if (rawEvent.data) return rawEvent.data;
return rawEvent;
}
if (provider === 'gcp') {
// Pub/Sub
if (rawEvent.data) {
var decoded = Buffer.from(rawEvent.data, 'base64').toString();
return JSON.parse(decoded);
}
return rawEvent;
}
return rawEvent;
}
module.exports = { createEnvelope: createEnvelope, parseProviderEvent: parseProviderEvent };
I strongly recommend adopting the CloudEvents specification for your message envelope. It is a CNCF graduated project and gives you a standardized way to describe events regardless of the source provider.
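The envelope also needs a publish side. The sketch below shows one way to fan the same envelope out to whichever messaging service the current provider offers. The topic and queue names come from hypothetical environment variables, and the SDK calls assume AWS SDK v2, @azure/service-bus v7, and a recent @google-cloud/pubsub; treat it as a starting point rather than a drop-in module.
// lib/event-publisher.js (sketch): publish-side counterpart to the envelope
var envelope = require('./event-envelope');
var detectProvider = require('./request-normalizer').detectProvider;

function publishEvent(type, source, data) {
  var message = envelope.createEnvelope(type, source, data);
  var provider = detectProvider();

  if (provider === 'aws') {
    // SNS topic ARN supplied via an assumed environment variable
    var AWS = require('aws-sdk');
    var sns = new AWS.SNS();
    return sns.publish({
      TopicArn: process.env.EVENT_TOPIC_ARN,
      Message: JSON.stringify(message)
    }).promise();
  }

  if (provider === 'azure') {
    // Service Bus connection string and queue name are assumed variables
    var ServiceBusClient = require('@azure/service-bus').ServiceBusClient;
    var client = new ServiceBusClient(process.env.SERVICE_BUS_CONNECTION);
    var sender = client.createSender(process.env.EVENT_QUEUE_NAME);
    return sender.sendMessages({ body: message })
      .then(function() { return client.close(); });
  }

  if (provider === 'gcp') {
    // Pub/Sub topic name is an assumed variable
    var PubSub = require('@google-cloud/pubsub').PubSub;
    var pubsub = new PubSub();
    return pubsub.topic(process.env.EVENT_TOPIC_NAME)
      .publishMessage({ json: message });
  }

  return Promise.reject(new Error('Unsupported provider: ' + provider));
}

module.exports = { publishEvent: publishEvent };
In production you would cache the SDK clients across invocations instead of constructing them on every publish; per-call construction just keeps the sketch short.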
Authentication Across Providers
Authentication is provider-specific, and this is one area where resisting abstraction is wise. AWS uses IAM roles and Cognito. Azure uses Azure AD and Managed Identities. GCP uses IAM and Firebase Auth. Trying to build a universal auth layer is a mistake I have made and do not recommend.
What you can abstract is token validation:
// lib/auth/token-validator.js
var jwt = require('jsonwebtoken');
var jwksClient = require('jwks-rsa');
function createValidator(config) {
var client = jwksClient({ jwksUri: config.jwksUri });
function getKey(header, callback) {
client.getSigningKey(header.kid, function(err, key) {
if (err) return callback(err);
var signingKey = key.publicKey || key.rsaPublicKey;
callback(null, signingKey);
});
}
return function validateToken(token) {
return new Promise(function(resolve, reject) {
jwt.verify(token, getKey, {
audience: config.audience,
issuer: config.issuer,
algorithms: ['RS256']
}, function(err, decoded) {
if (err) return reject(err);
resolve(decoded);
});
});
};
}
// Provider-specific JWKS endpoints
var providers = {
cognito: function(region, poolId) {
return {
jwksUri: 'https://cognito-idp.' + region + '.amazonaws.com/' + poolId + '/.well-known/jwks.json',
issuer: 'https://cognito-idp.' + region + '.amazonaws.com/' + poolId
};
},
azureAd: function(tenantId) {
return {
jwksUri: 'https://login.microsoftonline.com/' + tenantId + '/discovery/v2.0/keys',
issuer: 'https://login.microsoftonline.com/' + tenantId + '/v2.0'
};
},
firebase: function(projectId) {
return {
jwksUri: 'https://www.googleapis.com/service_accounts/v1/jwk/securetoken@system.gserviceaccount.com',
issuer: 'https://securetoken.google.com/' + projectId
};
}
};
module.exports = { createValidator: createValidator, providers: providers };
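Wiring the validator into a provider-agnostic handler then looks the same everywhere. A sketch, assuming a Cognito user pool whose region and pool ID come from hypothetical environment variables and a bearer token in the normalized headers:
// Example (sketch): validating a bearer token inside a core handler
var auth = require('../lib/auth/token-validator');

var validate = auth.createValidator(
  auth.providers.cognito(process.env.AWS_REGION, process.env.USER_POOL_ID)
);

function requireAuth(normalizedRequest) {
  var header = normalizedRequest.headers.authorization || '';
  var token = header.replace(/^Bearer\s+/i, '');
  if (!token) {
    return Promise.reject(new Error('Missing bearer token'));
  }
  // Resolves with the decoded JWT claims; rejects on a bad signature,
  // wrong issuer, or expired token
  return validate(token);
}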
Deployment Pipeline for Multi-Cloud
A multi-cloud deployment pipeline should build once and deploy to each provider. Here is a GitHub Actions workflow that handles this:
# .github/workflows/multi-cloud-deploy.yml
name: Multi-Cloud Deploy
on:
  push:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '18'
      - run: npm ci
      - run: npm test
  deploy-aws:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '18'
      - run: npm ci --production
      - run: npx serverless deploy --config serverless-aws.yml --stage production
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  deploy-azure:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '18'
      - run: npm ci --production
      - uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - run: npx serverless deploy --config serverless-azure.yml --stage production
  deploy-gcp:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '18'
      - run: npm ci --production
      - uses: google-github-actions/auth@v2
        with:
          credentials_json: ${{ secrets.GCP_SA_KEY }}
      - run: npx serverless deploy --config serverless-gcp.yml --stage production
Deploy to each provider in parallel after tests pass. If one provider deployment fails, the others still succeed — you want independent failure domains.
Cost Comparison Across Providers
Pricing as of early 2026, for 1 million invocations per month with 256MB memory and 200ms average duration:
| Component | AWS Lambda | Azure Functions | GCP Cloud Functions |
|---|---|---|---|
| Invocations | $0.20 | $0.20 | $0.40 |
| Compute (GB-s) | $0.0000166667 | $0.000016 | $0.0000025 |
| Free tier invocations | 1M/month | 1M/month | 2M/month |
| Free tier compute | 400K GB-s | 400K GB-s | 400K GB-s |
| Networking egress | $0.09/GB | $0.087/GB | $0.12/GB |
The real costs are not in compute — they are in data transfer, API Gateway charges, and the managed services you attach. AWS API Gateway at $3.50 per million requests can easily exceed your Lambda compute costs. Azure and GCP include HTTP triggers at no extra charge beyond function compute.
For most workloads under the free tier, all three providers are effectively free. Above that threshold, Azure tends to be cheapest for consistent workloads, AWS for bursty workloads, and GCP for CPU-intensive operations where their per-100ms billing granularity matters.
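To see how these rates combine, here is a rough back-of-the-envelope calculation using the table's compute prices and the 1M invocations / 256MB / 200ms workload. It deliberately ignores free tiers, API gateway charges, GCP's separate CPU (GHz-second) charge, and egress, which usually dominate real bills.
// scripts/estimate-compute-cost.js (rough estimate only, rates from the table above)
var invocations = 1000000;   // per month
var memoryGb = 256 / 1024;   // 0.25 GB
var durationSec = 0.2;       // 200 ms average

var gbSeconds = invocations * memoryGb * durationSec; // 50,000 GB-s

var rates = {
  aws:   { perInvocation: 0.20 / 1000000, perGbSecond: 0.0000166667 },
  azure: { perInvocation: 0.20 / 1000000, perGbSecond: 0.000016 },
  gcp:   { perInvocation: 0.40 / 1000000, perGbSecond: 0.0000025 }
};

Object.keys(rates).forEach(function(provider) {
  var r = rates[provider];
  var cost = invocations * r.perInvocation + gbSeconds * r.perGbSecond;
  console.log(provider + ': ~$' + cost.toFixed(2) + '/month before free tier');
});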
Vendor Lock-In Assessment
Here is a practical lock-in rating for common serverless services, from 1 (easy to migrate) to 5 (deeply locked in):
- Compute (functions): 2/5 — Easy to migrate with the adapter pattern shown above
- API Gateway: 3/5 — Configuration differs significantly, but the concepts map
- Document database: 3/5 — CRUD is portable, advanced queries are not
- Object storage (S3/Blob/GCS): 2/5 — APIs differ but the abstraction is thin
- Message queues: 3/5 — CloudEvents helps, but ordering guarantees differ
- Orchestration (Step Functions, Durable Functions): 5/5 — Completely proprietary, no good abstraction exists
- Identity (Cognito, Azure AD, Firebase Auth): 4/5 — User migration is painful
- Monitoring (CloudWatch, App Insights, Cloud Monitoring): 3/5 — Use OpenTelemetry to reduce this to 2/5
The takeaway: invest in portability for compute and storage. Accept lock-in for orchestration and identity. Use OpenTelemetry for observability from day one.
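Object storage is one of the areas the list rates low on lock-in because the abstraction really is thin. A sketch of a get/put interface over S3, Azure Blob Storage, and Google Cloud Storage follows; the STORAGE_* environment variables are assumptions, and only the simplest byte-in, byte-out operations are covered.
// services/storage/index.js (sketch): minimal object storage abstraction
function createStorageAdapter() {
  var provider = process.env.STORAGE_PROVIDER || 's3';

  if (provider === 's3') {
    var AWS = require('aws-sdk');
    var s3 = new AWS.S3();
    var bucket = process.env.STORAGE_BUCKET;
    return {
      get: function(key) {
        return s3.getObject({ Bucket: bucket, Key: key }).promise()
          .then(function(result) { return result.Body; });
      },
      put: function(key, body) {
        return s3.putObject({ Bucket: bucket, Key: key, Body: body }).promise();
      }
    };
  }

  if (provider === 'azure-blob') {
    var BlobServiceClient = require('@azure/storage-blob').BlobServiceClient;
    var container = BlobServiceClient
      .fromConnectionString(process.env.STORAGE_CONNECTION_STRING)
      .getContainerClient(process.env.STORAGE_CONTAINER);
    return {
      get: function(key) {
        return container.getBlockBlobClient(key).downloadToBuffer();
      },
      put: function(key, body) {
        var data = Buffer.from(body);
        return container.getBlockBlobClient(key).upload(data, data.length);
      }
    };
  }

  if (provider === 'gcs') {
    var Storage = require('@google-cloud/storage').Storage;
    var gcsBucket = new Storage().bucket(process.env.STORAGE_BUCKET);
    return {
      get: function(key) {
        return gcsBucket.file(key).download()
          .then(function(result) { return result[0]; });
      },
      put: function(key, body) {
        return gcsBucket.file(key).save(body);
      }
    };
  }

  throw new Error('Unsupported STORAGE_PROVIDER: ' + provider);
}

module.exports = { createStorageAdapter: createStorageAdapter };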
The Knative Approach
If you want true portability and have Kubernetes infrastructure, Knative is worth serious consideration. It gives you a serverless execution model on top of Kubernetes that runs identically on any cloud — or on-premises.
// A Knative service is just an HTTP server
var http = require('http');
var handleGetUser = require('./handlers/core/get-user');
var normalizer = require('./lib/request-normalizer');
var server = http.createServer(function(req, res) {
var chunks = [];
req.on('data', function(chunk) { chunks.push(chunk); });
req.on('end', function() {
var body = Buffer.concat(chunks).toString();
var normalized = {
method: req.method,
path: req.url,
headers: req.headers,
query: parseQuery(req.url),
body: body ? JSON.parse(body) : {},
provider: 'knative'
};
handleGetUser(normalized)
.then(function(result) {
res.writeHead(result.statusCode, { 'Content-Type': 'application/json' });
res.end(JSON.stringify(result.body));
})
.catch(function(err) {
res.writeHead(500, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Internal error' }));
});
});
});
function parseQuery(url) {
var queryString = url.split('?')[1] || '';
var params = {};
queryString.split('&').forEach(function(pair) {
var parts = pair.split('=');
if (parts[0]) params[decodeURIComponent(parts[0])] = decodeURIComponent(parts[1] || '');
});
return params;
}
var port = process.env.PORT || 8080;
server.listen(port, function() {
console.log('Server listening on port ' + port);
});
The trade-off with Knative is that you need Kubernetes expertise. You are trading cloud serverless complexity for container orchestration complexity. For teams that already run Kubernetes, this is a net win. For teams that adopted serverless to avoid infrastructure management, Knative is a step backward.
Practical Migration Between Providers
When migrating a serverless function from one provider to another, follow this sequence:
- Extract business logic from provider-specific handlers into pure functions
- Add the request normalizer and create a new provider wrapper
- Swap the database adapter — this is where most migration pain lives
- Update environment variables for the new provider's services
- Run integration tests against the new provider in a staging environment
- Deploy canary traffic — send 5% of traffic to the new provider via DNS or a load balancer
- Monitor error rates and latency for at least one week before cutting over
- Keep the old deployment running for at least 30 days as a rollback target
// scripts/migration-validator.js
// Run this to verify both providers return identical responses
var https = require('https');
var endpoints = {
aws: 'https://abc123.execute-api.us-east-1.amazonaws.com/prod/users',
azure: 'https://my-func-app.azurewebsites.net/api/users',
gcp: 'https://us-central1-my-project.cloudfunctions.net/getUser'
};
var testIds = ['user-001', 'user-002', 'user-003'];
function fetchJson(url) {
return new Promise(function(resolve, reject) {
https.get(url, function(res) {
var data = '';
res.on('data', function(chunk) { data += chunk; });
res.on('end', function() {
try { resolve(JSON.parse(data)); }
catch (e) { reject(new Error('Invalid JSON from ' + url)); }
});
}).on('error', reject);
});
}
function compareResponses(id) {
var awsUrl = endpoints.aws + '?id=' + id;
var targetUrl = endpoints.azure + '?id=' + id;
return Promise.all([fetchJson(awsUrl), fetchJson(targetUrl)])
.then(function(results) {
var match = JSON.stringify(results[0]) === JSON.stringify(results[1]);
console.log(id + ': ' + (match ? 'MATCH' : 'MISMATCH'));
if (!match) {
console.log(' AWS:', JSON.stringify(results[0]).substring(0, 200));
console.log(' Azure:', JSON.stringify(results[1]).substring(0, 200));
}
return match;
});
}
Promise.all(testIds.map(compareResponses))
.then(function(results) {
var allMatch = results.every(function(r) { return r; });
console.log('\nOverall: ' + (allMatch ? 'ALL PASSED' : 'FAILURES DETECTED'));
process.exit(allMatch ? 0 : 1);
});
Complete Working Example
Here is a complete portable serverless function — a URL shortener that runs on all three providers with a unified database abstraction.
// handlers/core/shorten-url.js
var crypto = require('crypto');
var dbFactory = require('../../services/db');
var db = dbFactory.createDbAdapter('shortened_urls');
function handleShortenUrl(request) {
if (request.method === 'POST') {
return createShortUrl(request);
}
if (request.method === 'GET') {
return resolveShortUrl(request);
}
return Promise.resolve({
statusCode: 405,
body: { error: 'Method not allowed' }
});
}
function createShortUrl(request) {
var originalUrl = request.body.url;
if (!originalUrl) {
return Promise.resolve({
statusCode: 400,
body: { error: 'URL is required in request body' }
});
}
var shortCode = crypto.randomBytes(4).toString('hex');
var record = {
id: shortCode,
originalUrl: originalUrl,
createdAt: new Date().toISOString(),
clicks: 0
};
return db.put(record)
.then(function() {
return {
statusCode: 201,
body: {
shortCode: shortCode,
shortUrl: request.headers.host + '/s/' + shortCode,
originalUrl: originalUrl
}
};
})
.catch(function(err) {
console.error('Failed to create short URL:', err);
return { statusCode: 500, body: { error: 'Failed to create short URL' } };
});
}
function resolveShortUrl(request) {
var shortCode = request.query.code || request.path.split('/').pop();
if (!shortCode) {
return Promise.resolve({
statusCode: 400,
body: { error: 'Short code is required' }
});
}
return db.get(shortCode)
.then(function(record) {
if (!record) {
return { statusCode: 404, body: { error: 'Short URL not found' } };
}
// Increment click count asynchronously — do not block the redirect
db.put(Object.assign({}, record, { clicks: (record.clicks || 0) + 1 }))
.catch(function(err) { console.error('Failed to increment clicks:', err); });
return {
statusCode: 302,
headers: { Location: record.originalUrl },
body: { redirectTo: record.originalUrl }
};
})
.catch(function(err) {
console.error('Failed to resolve short URL:', err);
return { statusCode: 500, body: { error: 'Failed to resolve URL' } };
});
}
module.exports = handleShortenUrl;
// aws/shorten-url.js
var normalizer = require('../lib/request-normalizer');
var handleShortenUrl = require('../handlers/core/shorten-url');
exports.handler = function(event, context, callback) {
var request = normalizer.normalizeRequest(event, context);
handleShortenUrl(request)
.then(function(result) {
var response = {
statusCode: result.statusCode,
headers: Object.assign(
{ 'Content-Type': 'application/json' },
result.headers || {}
),
body: JSON.stringify(result.body)
};
callback(null, response);
})
.catch(function(err) { callback(err); });
};
// gcp/shorten-url.js
var normalizer = require('../lib/request-normalizer');
var handleShortenUrl = require('../handlers/core/shorten-url');
exports.shortenUrl = function(req, res) {
var request = normalizer.normalizeRequest(req, null);
handleShortenUrl(request)
.then(function(result) {
if (result.headers && result.headers.Location) {
res.redirect(result.statusCode, result.headers.Location);
} else {
res.status(result.statusCode).json(result.body);
}
})
.catch(function(err) {
console.error('Handler error:', err);
res.status(500).json({ error: 'Internal error' });
});
};
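The Azure wrapper for the same function follows the get-user pattern shown earlier. A sketch that forwards any extra headers (such as Location) along with the status:
// azure/shorten-url/index.js (sketch): Azure Functions handler
var normalizer = require('../../lib/request-normalizer');
var handleShortenUrl = require('../../handlers/core/shorten-url');
module.exports = function(context, req) {
  var request = normalizer.normalizeRequest(req, context);
  handleShortenUrl(request)
    .then(function(result) {
      context.res = {
        status: result.statusCode,
        headers: Object.assign(
          { 'Content-Type': 'application/json' },
          result.headers || {}
        ),
        body: result.body
      };
      context.done();
    })
    .catch(function(err) {
      context.log.error('Handler error:', err);
      context.res = { status: 500, body: { error: 'Internal error' } };
      context.done();
    });
};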
Common Issues and Troubleshooting
1. Cold Start Timeout Differences
Error (AWS):
Task timed out after 3.00 seconds
AWS Lambda defaults to a 3-second timeout. Azure Functions defaults to 5 minutes. GCP defaults to 60 seconds. When migrating from Azure to AWS, your function may suddenly start timing out because you assumed a generous default. Always explicitly set timeouts in your provider configuration and ensure they match across providers.
2. Module Resolution Failures on Azure
Error (Azure):
Error: Cannot find module '../../lib/request-normalizer'
Worker was unable to load function getUser: 'Error: Cannot find module'
Azure Functions has a different working directory structure than Lambda. Your node_modules and shared libraries must be relative to the function directory. Azure expects each function in its own subdirectory with a function.json file. The require() path that works locally may fail when deployed because Azure resolves paths relative to the function folder, not the project root. Use path.resolve(__dirname, '..', 'lib', 'request-normalizer') to make paths explicit.
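In code, that means making the lookup location explicit relative to the current file. The relative depth below is illustrative and depends on where the function folder sits in your project:
// azure/get-user/index.js: resolve shared code relative to this file, not the CWD
var path = require('path');
var normalizer = require(path.resolve(__dirname, '..', '..', 'lib', 'request-normalizer'));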
3. Request Body Parsing Inconsistencies
Error (GCP):
SyntaxError: Unexpected token u in JSON at position 0
This happens because you called JSON.parse(req.body) on GCP, but Cloud Functions already parses JSON bodies for you. On AWS, the body arrives as a raw string. On Azure, it depends on the content-type header. The normalizer must handle all three cases — this is why parseBody only parses the raw AWS payload and wraps JSON.parse in a try/catch, falling back to the raw string when parsing fails.
4. IAM Permission Errors During Cross-Account Database Access
Error (AWS):
AccessDeniedException: User: arn:aws:sts::123456789:assumed-role/lambda-role/my-function
is not authorized to perform: dynamodb:GetItem on resource: arn:aws:dynamodb:us-east-1:987654321:table/users
When your Lambda function in account A tries to access a DynamoDB table in account B (common in multi-cloud migration phases where you have multiple AWS accounts), you need cross-account IAM role assumption. The Lambda execution role needs sts:AssumeRole permission, and the target account needs a trust policy allowing the assumption. This is not a multi-cloud problem per se, but it surfaces frequently during migrations.
5. Environment Variable Naming Conflicts
Error:
Error: connect ECONNREFUSED 127.0.0.1:443
This cryptic error often means an environment variable that your code expects is not set on the target provider. AWS uses AWS_REGION automatically; Azure uses AZURE_FUNCTIONS_ENVIRONMENT. If your database adapter reads process.env.AWS_REGION to construct a DynamoDB endpoint and you deploy to Azure without setting it, the AWS SDK will try to connect to localhost. Always validate required environment variables at function startup.
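A startup check like the following (hypothetical helper) turns that cryptic connection error into an explicit failure at cold start:
// lib/validate-env.js (sketch): fail fast when required configuration is missing
function requireEnv(names) {
  var missing = names.filter(function(name) { return !process.env[name]; });
  if (missing.length > 0) {
    throw new Error('Missing required environment variables: ' + missing.join(', '));
  }
}
module.exports = requireEnv;

// Usage at the top of a handler module:
// require('../lib/validate-env')(['DB_PROVIDER', 'TABLE_NAME']);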
Best Practices
Start with one provider and design for portability. Do not build multi-cloud on day one. Structure your code with clear separation between business logic and provider integrations. You can add a second provider later with minimal refactoring if the boundaries are clean.
Use CloudEvents for all inter-service messaging. The CloudEvents specification gives you a provider-neutral envelope format. When you eventually need to route events across providers, you will be grateful for a consistent schema.
Pin your provider SDK versions aggressively. AWS SDK v2 and v3 have fundamentally different APIs. Azure SDK packages update frequently with breaking changes. Pin exact versions in package.json and test upgrades deliberately.
Implement health checks that verify provider-specific integrations. Each deployment should have a /health endpoint that tests database connectivity, queue access, and storage permissions. This catches missing environment variables and IAM misconfigurations before traffic arrives.
Use OpenTelemetry instead of provider-native observability. CloudWatch, Application Insights, and Cloud Monitoring are all capable tools, but they lock you in. OpenTelemetry lets you export traces and metrics to any backend while maintaining consistent instrumentation across providers.
Accept that some services are not worth abstracting. Step Functions, Durable Functions, and Google Workflows are powerful because they are deeply integrated with their respective platforms. Abstracting them away loses their value. Use them directly and treat them as provider-committed components in your architecture.
Test with provider-specific emulators locally. Use aws-sam-cli for Lambda, Azure Functions Core Tools for Azure, and functions-framework for GCP. Run your integration tests against each emulator in CI to catch provider-specific issues before deployment.
Design database schemas for the lowest common denominator. If you need to run on DynamoDB and Firestore, design around key-value access patterns. Do not rely on DynamoDB-specific features like transactions or Firestore-specific features like subcollections in your portable code.
References
- AWS Lambda Developer Guide — Official AWS documentation for Lambda functions
- Azure Functions Documentation — Microsoft's reference for Azure Functions
- Google Cloud Functions Documentation — GCP's serverless function documentation
- CloudEvents Specification — CNCF standard for event data description
- Serverless Framework — Multi-provider serverless deployment framework
- Knative Documentation — Kubernetes-based serverless platform
- OpenTelemetry — Observability framework for cloud-native software
- The Twelve-Factor App — Methodology for building portable cloud applications