DigitalOcean

Spaces Object Storage: S3-Compatible Cloud Storage

A practical guide to DigitalOcean Spaces object storage for Node.js developers, covering AWS SDK integration, file uploads, presigned URLs, CDN delivery, and building a file management service.

Overview

DigitalOcean Spaces is S3-compatible object storage that lets you store and serve files -- images, videos, backups, user uploads, static assets -- without managing disks or file servers. It uses the same API as Amazon S3, which means the entire AWS SDK ecosystem works with it out of the box, but the pricing is dramatically simpler: $5/month for 250 GB of storage and 1 TB of outbound transfer with no per-request fees. I have been using Spaces for production file storage across multiple Node.js applications, and it has replaced S3 in every scenario where I used to reach for it.

Prerequisites

  • A DigitalOcean account with billing configured
  • Node.js 18+ and npm installed locally
  • Basic familiarity with Express.js
  • A DigitalOcean Space created (we will walk through this)
  • doctl CLI installed (optional but useful)
  • An understanding of HTTP and REST concepts

What Is Object Storage

Object storage is fundamentally different from the file systems you work with on your laptop or server. There is no directory tree, no file permissions, no symlinks. Every item is an object stored in a flat namespace with a unique key, a blob of data, and metadata. You interact with it entirely over HTTP.

This matters for Node.js developers because it changes how you think about file management. You do not fs.writeFile() to a path on disk. You make an HTTP PUT request to a URL. You do not fs.readFile() to serve content. You either proxy the GET request or hand the client a signed URL and let them download directly.

The tradeoffs are clear:

  • Unlimited scale -- you will never run out of disk space
  • Built-in redundancy -- data is replicated across multiple devices automatically
  • HTTP-native -- every object has a URL, no file server needed
  • No random access -- you cannot seek to byte 500 of a file; you get the whole object or a byte range
  • Eventually consistent for deletes and overwrites (though Spaces has improved this significantly)

Spaces vs Block Storage: When to Use Each

DigitalOcean offers both Spaces (object storage) and Volumes (block storage). They serve different purposes and picking the wrong one wastes money or creates architectural headaches.

Use Spaces when:

  • Storing user-uploaded files (images, documents, videos)
  • Serving static assets (CSS, JavaScript, images) via CDN
  • Storing application backups and logs
  • Any file that needs a public or signed URL
  • Files accessed primarily by HTTP clients (browsers, mobile apps)

Use Block Storage (Volumes) when:

  • Your database needs more disk (attach a volume to your Droplet)
  • Your application writes to the filesystem directly and expects POSIX semantics
  • You need low-latency random access to files
  • You are running a traditional application that reads/writes files on disk

My rule of thumb: If a browser or API client needs to access the file, use Spaces. If only your server process needs the file and it expects a local path, use a Volume.

Feature              Spaces (Object)         Volumes (Block)
─────────────────────────────────────────────────────────────
Access method        HTTP API (S3-compat)    Mounted filesystem
Max size             Unlimited               16 TB per volume
Pricing              $5/mo for 250 GB        $0.10/GB/mo
CDN                  Built-in                Not applicable
Redundancy           Automatic replication   3x replication
Latency              Higher (HTTP overhead)  Low (local mount)
Use case             Files served over web   Database storage

Creating a Space and Generating API Keys

Creating a Space

You can create a Space through the DigitalOcean console or the CLI. I prefer the CLI because it is scriptable:

# Install doctl if you haven't
brew install doctl  # macOS
# or: snap install doctl  # Ubuntu

# Authenticate
doctl auth init

# Create a Space in the NYC3 region
doctl spaces create my-app-files --region nyc3

Through the console: navigate to Spaces Object Storage in the left sidebar, click Create a Spaces Bucket, choose a region (pick one close to your application servers), give it a unique name, and set the default permissions. I recommend leaving the default as Private and granting access through your application code.

Available regions for Spaces: nyc3, sfo3, ams3, sgp1, fra1, syd1.

Generating API Keys

Spaces uses API keys separate from your DigitalOcean personal access token. Generate them in the console:

  1. Go to API in the left sidebar
  2. Scroll down to Spaces Keys
  3. Click Generate New Key
  4. Give it a name (e.g., my-app-production)
  5. Save both the Key and the Secret -- the secret is only shown once

Store these in your environment:

export SPACES_KEY="DO00XXXXXXXXXXXXXXXXXX"
export SPACES_SECRET="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export SPACES_REGION="nyc3"
export SPACES_BUCKET="my-app-files"

Using the AWS SDK v3 with Spaces

Since Spaces is S3-compatible, you use the official AWS SDK. Version 3 is modular, so you only install the packages you need:

npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner @aws-sdk/lib-storage

Configuring the S3 Client

The key difference from AWS S3 is the endpoint and forcePathStyle configuration:

var { S3Client } = require("@aws-sdk/client-s3");

var s3 = new S3Client({
  endpoint: "https://" + process.env.SPACES_REGION + ".digitaloceanspaces.com",
  region: process.env.SPACES_REGION,
  credentials: {
    accessKeyId: process.env.SPACES_KEY,
    secretAccessKey: process.env.SPACES_SECRET
  },
  forcePathStyle: false
});

module.exports = s3;

Save this as lib/spaces.js and require it wherever you need to interact with Spaces. The forcePathStyle: false setting is important -- Spaces uses virtual-hosted-style URLs (bucket.region.digitaloceanspaces.com), not path-style (region.digitaloceanspaces.com/bucket).
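
Before building on top of this client, it is worth a quick smoke test that the endpoint and credentials are correct. A minimal sketch using a one-object list (any command would do; ListObjectsV2 is cheap and already used throughout this guide):

var { ListObjectsV2Command } = require("@aws-sdk/client-s3");
var s3 = require("./lib/spaces");

// List at most one object to confirm the endpoint, region, and credentials
// are all wired up correctly.
s3.send(new ListObjectsV2Command({
  Bucket: process.env.SPACES_BUCKET,
  MaxKeys: 1
})).then(function() {
  console.log("Spaces client configured correctly");
}).catch(function(err) {
  console.error("Spaces configuration problem:", err.name, "-", err.message);
});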


Uploading Files from Node.js

Simple Upload

For files under 5 GB, a single PUT operation works:

var { PutObjectCommand } = require("@aws-sdk/client-s3");
var fs = require("fs");
var path = require("path");
var s3 = require("./lib/spaces");

function uploadFile(localPath, remoteKey, contentType) {
  var fileStream = fs.createReadStream(localPath);

  var command = new PutObjectCommand({
    Bucket: process.env.SPACES_BUCKET,
    Key: remoteKey,
    Body: fileStream,
    ContentType: contentType,
    ACL: "private"
  });

  return s3.send(command);
}

// Usage
uploadFile(
  "./uploads/photo.jpg",
  "user-uploads/2026/02/photo.jpg",
  "image/jpeg"
).then(function(result) {
  console.log("Upload complete:", result.ETag);
}).catch(function(err) {
  console.error("Upload failed:", err.message);
});

Multipart Upload for Large Files

For files larger than 100 MB, use multipart upload. The @aws-sdk/lib-storage package handles this automatically, splitting the file into chunks and uploading them in parallel:

var { Upload } = require("@aws-sdk/lib-storage");
var fs = require("fs");
var s3 = require("./lib/spaces");

function uploadLargeFile(localPath, remoteKey, contentType) {
  var fileStream = fs.createReadStream(localPath);

  var upload = new Upload({
    client: s3,
    params: {
      Bucket: process.env.SPACES_BUCKET,
      Key: remoteKey,
      Body: fileStream,
      ContentType: contentType,
      ACL: "private"
    },
    queueSize: 4,           // concurrent part uploads
    partSize: 1024 * 1024 * 10, // 10 MB per part
    leavePartsOnError: false
  });

  upload.on("httpUploadProgress", function(progress) {
    var pct = Math.round((progress.loaded / progress.total) * 100);
    console.log("Progress: " + pct + "%");
  });

  return upload.done();
}

// Upload a 500 MB video file
uploadLargeFile(
  "./uploads/video.mp4",
  "videos/tutorial-01.mp4",
  "video/mp4"
).then(function(result) {
  console.log("Multipart upload complete:", result.Key);
}).catch(function(err) {
  console.error("Upload failed:", err.message);
});

Output:

Progress: 2%
Progress: 4%
Progress: 6%
...
Progress: 98%
Progress: 100%
Multipart upload complete: videos/tutorial-01.mp4

Downloading and Streaming Files

Download to Disk

var { GetObjectCommand } = require("@aws-sdk/client-s3");
var fs = require("fs");
var path = require("path");
var s3 = require("./lib/spaces");

function downloadFile(remoteKey, localPath) {
  var command = new GetObjectCommand({
    Bucket: process.env.SPACES_BUCKET,
    Key: remoteKey
  });

  return s3.send(command).then(function(response) {
    var writeStream = fs.createWriteStream(localPath);
    response.Body.pipe(writeStream);

    return new Promise(function(resolve, reject) {
      writeStream.on("finish", resolve);
      writeStream.on("error", reject);
    });
  });
}

downloadFile("user-uploads/2026/02/photo.jpg", "./downloads/photo.jpg")
  .then(function() {
    console.log("Download complete");
  });
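
Although object storage offers no random access in the POSIX sense, the S3 API does accept an HTTP Range header, so you can fetch part of an object without downloading all of it. A small sketch (the key is illustrative):

var { GetObjectCommand } = require("@aws-sdk/client-s3");
var s3 = require("./lib/spaces");

// Fetch only the first kilobyte of an object using an HTTP Range header.
var command = new GetObjectCommand({
  Bucket: process.env.SPACES_BUCKET,
  Key: "videos/tutorial-01.mp4",  // illustrative key
  Range: "bytes=0-1023"
});

s3.send(command).then(function(response) {
  // ContentRange is reported as "bytes 0-1023/<total object size>"
  console.log("Partial content:", response.ContentRange);
});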

Stream Directly to HTTP Response

This is useful when your Express.js app acts as a proxy, serving files from Spaces without downloading them to disk first:

var { GetObjectCommand } = require("@aws-sdk/client-s3");
var s3 = require("./lib/spaces");

function streamToResponse(remoteKey, res) {
  var command = new GetObjectCommand({
    Bucket: process.env.SPACES_BUCKET,
    Key: remoteKey
  });

  return s3.send(command).then(function(response) {
    res.set("Content-Type", response.ContentType);
    res.set("Content-Length", response.ContentLength);
    res.set("Cache-Control", "public, max-age=86400");
    response.Body.pipe(res);
  });
}

// In an Express route
app.get("/files/:key(*)", function(req, res, next) {
  streamToResponse(req.params.key, res).catch(function(err) {
    if (err.name === "NoSuchKey") {
      return res.status(404).json({ error: "File not found" });
    }
    next(err);
  });
});

Listing Objects and Pagination

Spaces limits list results to 1000 objects per request. For buckets with more objects, you need to paginate:

var { ListObjectsV2Command } = require("@aws-sdk/client-s3");
var s3 = require("./lib/spaces");

function listAllObjects(prefix) {
  var allObjects = [];
  var continuationToken = undefined;

  function fetchPage() {
    var command = new ListObjectsV2Command({
      Bucket: process.env.SPACES_BUCKET,
      Prefix: prefix,
      MaxKeys: 1000,
      ContinuationToken: continuationToken
    });

    return s3.send(command).then(function(response) {
      if (response.Contents) {
        allObjects = allObjects.concat(response.Contents);
      }

      if (response.IsTruncated) {
        continuationToken = response.NextContinuationToken;
        return fetchPage();
      }

      return allObjects;
    });
  }

  return fetchPage();
}

// List all files in the user-uploads prefix
listAllObjects("user-uploads/").then(function(objects) {
  console.log("Total objects:", objects.length);
  objects.forEach(function(obj) {
    console.log(obj.Key, " - ", (obj.Size / 1024).toFixed(1) + " KB");
  });
});

Output:

Total objects: 2847
user-uploads/2026/01/avatar-001.jpg  -  45.2 KB
user-uploads/2026/01/avatar-002.jpg  -  38.7 KB
user-uploads/2026/01/document.pdf  -  1204.5 KB
...

Deleting Objects and Bulk Cleanup

Single Delete

var { DeleteObjectCommand } = require("@aws-sdk/client-s3");
var s3 = require("./lib/spaces");

function deleteObject(key) {
  var command = new DeleteObjectCommand({
    Bucket: process.env.SPACES_BUCKET,
    Key: key
  });

  return s3.send(command);
}

Bulk Delete

To delete many objects at once, use the DeleteObjectsCommand which accepts up to 1000 keys per request:

var { DeleteObjectsCommand } = require("@aws-sdk/client-s3");
var s3 = require("./lib/spaces");

function bulkDelete(keys) {
  var batches = [];
  for (var i = 0; i < keys.length; i += 1000) {
    batches.push(keys.slice(i, i + 1000));
  }

  var deleteBatch = function(batch) {
    var command = new DeleteObjectsCommand({
      Bucket: process.env.SPACES_BUCKET,
      Delete: {
        Objects: batch.map(function(key) {
          return { Key: key };
        }),
        Quiet: false  // report each deleted key so the count below is accurate
      }
    });
    return s3.send(command);
  };

  return Promise.all(batches.map(deleteBatch)).then(function(results) {
    var totalDeleted = results.reduce(function(sum, r) {
      return sum + (r.Deleted ? r.Deleted.length : 0);
    }, 0);
    console.log("Deleted " + totalDeleted + " objects");
    return totalDeleted;
  });
}

// Clean up temporary upload files older than 24 hours
listAllObjects("temp-uploads/").then(function(objects) {
  var cutoff = Date.now() - (24 * 60 * 60 * 1000);
  var staleKeys = objects
    .filter(function(obj) {
      return new Date(obj.LastModified).getTime() < cutoff;
    })
    .map(function(obj) {
      return obj.Key;
    });

  if (staleKeys.length > 0) {
    return bulkDelete(staleKeys);
  }
  console.log("No stale files to clean up");
});

Presigned URLs for Direct Browser Uploads

Presigned URLs are one of the most powerful features of S3-compatible storage. Instead of routing file uploads through your server (which consumes bandwidth, memory, and CPU), you generate a short-lived signed URL that lets the client upload directly to Spaces. Your server never touches the file data.

Generating Upload URLs

var { PutObjectCommand } = require("@aws-sdk/client-s3");
var { getSignedUrl } = require("@aws-sdk/s3-request-presigner");
var crypto = require("crypto");
var s3 = require("./lib/spaces");

function generateUploadUrl(filename, contentType, maxSizeBytes) {
  // Note: a presigned PUT URL cannot enforce maxSizeBytes. If you need a hard
  // size limit, use a presigned POST policy or verify the size after upload.
  var ext = filename.split(".").pop().toLowerCase();
  var key = "uploads/" + crypto.randomUUID() + "." + ext;

  var command = new PutObjectCommand({
    Bucket: process.env.SPACES_BUCKET,
    Key: key,
    ContentType: contentType,
    ACL: "private"
  });

  return getSignedUrl(s3, command, {
    expiresIn: 300  // URL valid for 5 minutes
  }).then(function(url) {
    return { uploadUrl: url, key: key };
  });
}

// Express endpoint
app.post("/api/upload-url", function(req, res, next) {
  var filename = req.body.filename;
  var contentType = req.body.contentType;

  var allowedTypes = ["image/jpeg", "image/png", "image/webp", "application/pdf"];
  if (allowedTypes.indexOf(contentType) === -1) {
    return res.status(400).json({ error: "File type not allowed" });
  }

  generateUploadUrl(filename, contentType, 10 * 1024 * 1024)
    .then(function(result) {
      res.json(result);
    })
    .catch(next);
});

Client-Side Upload

// Browser JavaScript
function uploadFile(file) {
  // 1. Get presigned URL from your server
  return fetch("/api/upload-url", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      filename: file.name,
      contentType: file.type
    })
  })
  .then(function(res) { return res.json(); })
  .then(function(data) {
    // 2. Upload directly to Spaces
    return fetch(data.uploadUrl, {
      method: "PUT",
      headers: { "Content-Type": file.type },
      body: file
    }).then(function() {
      return data.key;  // Return the storage key
    });
  });
}

// Usage with file input
document.getElementById("file-input").addEventListener("change", function(e) {
  var file = e.target.files[0];
  uploadFile(file).then(function(key) {
    console.log("Uploaded to:", key);
  });
});

Generating Download URLs

For private files, generate a signed download URL:

var { GetObjectCommand } = require("@aws-sdk/client-s3");
var { getSignedUrl } = require("@aws-sdk/s3-request-presigner");
var s3 = require("./lib/spaces");

function generateDownloadUrl(key, expiresIn) {
  var command = new GetObjectCommand({
    Bucket: process.env.SPACES_BUCKET,
    Key: key
  });

  return getSignedUrl(s3, command, {
    expiresIn: expiresIn || 3600  // 1 hour default
  });
}

Setting CORS for Browser-Based Access

If you use presigned URLs for direct browser uploads, you need to configure CORS on your Space. Without it, the browser will block the PUT request.

You can set CORS through the DigitalOcean console under your Space's settings, or programmatically:

var { PutBucketCorsCommand } = require("@aws-sdk/client-s3");
var s3 = require("./lib/spaces");

function setCors() {
  var command = new PutBucketCorsCommand({
    Bucket: process.env.SPACES_BUCKET,
    CORSConfiguration: {
      CORSRules: [
        {
          AllowedHeaders: ["*"],
          AllowedMethods: ["GET", "PUT", "POST", "HEAD"],
          AllowedOrigins: ["https://yourdomain.com"],
          ExposeHeaders: ["ETag", "x-amz-request-id"],
          MaxAgeSeconds: 3600
        }
      ]
    }
  });

  return s3.send(command);
}

setCors().then(function() {
  console.log("CORS configuration applied");
});

Important: Do not use "*" for AllowedOrigins in production. Restrict it to your actual domains. If you have a staging environment, add that origin as a separate rule.


CDN Integration

Every Space comes with an optional built-in CDN powered by Cloudflare. Enabling it gives you edge caching across 200+ locations worldwide, which dramatically reduces latency for static content.

Enabling the CDN

In the DigitalOcean console, go to your Space, click Settings, and toggle CDN on. Your CDN URL will be:

https://my-app-files.nyc3.cdn.digitaloceanspaces.com

The non-CDN URL is:

https://my-app-files.nyc3.digitaloceanspaces.com

Custom CDN Domain

You can map a custom subdomain to your Spaces CDN:

  1. In your Space settings, click Add Custom Subdomain
  2. Enter your subdomain (e.g., cdn.yourdomain.com)
  3. DigitalOcean generates a Let's Encrypt certificate automatically
  4. Add a CNAME record in your DNS pointing cdn.yourdomain.com to the CDN endpoint

# DNS record
cdn.yourdomain.com  CNAME  my-app-files.nyc3.cdn.digitaloceanspaces.com.

Using the CDN in Your Application

function getCdnUrl(key) {
  var cdnBase = process.env.SPACES_CDN_URL ||
    "https://" + process.env.SPACES_BUCKET + "." + process.env.SPACES_REGION + ".cdn.digitaloceanspaces.com";
  return cdnBase + "/" + key;
}

// Generate CDN URLs for public assets
var imageUrl = getCdnUrl("images/hero-banner.jpg");
// https://my-app-files.nyc3.cdn.digitaloceanspaces.com/images/hero-banner.jpg

Note: The CDN only caches objects with public-read ACL. Private objects must be accessed through presigned URLs, which bypass the CDN. If you want CDN delivery for user content, set the ACL to public-read on upload and rely on unguessable keys (UUIDs) for security-through-obscurity. For truly sensitive files, stick with presigned URLs.


Organizing Objects with Prefixes

Spaces has a flat namespace -- there are no real directories. But the S3 API supports a Delimiter parameter that lets you treat / as a folder separator. This is how the console shows a folder-like structure.

I recommend a consistent prefix scheme:

{content-type}/{year}/{month}/{unique-id}.{ext}

For example:

user-avatars/2026/02/a1b2c3d4-e5f6.jpg
documents/2026/02/report-q4-2025.pdf
temp-uploads/2026/02/08/raw-upload.bin
thumbnails/2026/02/a1b2c3d4-e5f6-thumb.jpg
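
A small helper keeps key generation consistent across the codebase. This is a sketch of the scheme above; the function name is just an example:

var crypto = require("crypto");
var path = require("path");

// Build a key like "user-avatars/2026/02/<uuid>.jpg". The category becomes the
// top-level prefix and the original filename only contributes its extension.
function buildKey(category, originalName) {
  var now = new Date();
  var year = now.getFullYear();
  var month = String(now.getMonth() + 1).padStart(2, "0");
  var ext = path.extname(originalName).toLowerCase();
  return category + "/" + year + "/" + month + "/" + crypto.randomUUID() + ext;
}

// buildKey("user-avatars", "Holiday Photo.JPG")
// => "user-avatars/2026/02/5f2b7c1e-....jpg"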

Listing "Folders"

var { ListObjectsV2Command } = require("@aws-sdk/client-s3");
var s3 = require("./lib/spaces");

function listFolders(prefix) {
  var command = new ListObjectsV2Command({
    Bucket: process.env.SPACES_BUCKET,
    Prefix: prefix,
    Delimiter: "/"
  });

  return s3.send(command).then(function(response) {
    var folders = (response.CommonPrefixes || []).map(function(p) {
      return p.Prefix;
    });
    var files = (response.Contents || []).map(function(obj) {
      return { key: obj.Key, size: obj.Size };
    });
    return { folders: folders, files: files };
  });
}

// List top-level "folders"
listFolders("").then(function(result) {
  console.log("Folders:", result.folders);
  // Folders: [ 'documents/', 'temp-uploads/', 'thumbnails/', 'user-avatars/' ]
});

Lifecycle Policies: Auto-Expiring Temporary Files

Spaces supports object lifecycle rules that automatically delete objects after a specified period. This is essential for cleaning up temporary uploads, expired cache files, or old log archives.

Unfortunately, Spaces lifecycle management is configured through the S3 API, not the DigitalOcean console:

var { PutBucketLifecycleConfigurationCommand } = require("@aws-sdk/client-s3");
var s3 = require("./lib/spaces");

function setLifecycleRules() {
  var command = new PutBucketLifecycleConfigurationCommand({
    Bucket: process.env.SPACES_BUCKET,
    LifecycleConfiguration: {
      Rules: [
        {
          ID: "delete-temp-uploads",
          Status: "Enabled",
          Filter: {
            Prefix: "temp-uploads/"
          },
          Expiration: {
            Days: 1
          }
        },
        {
          ID: "delete-old-logs",
          Status: "Enabled",
          Filter: {
            Prefix: "logs/"
          },
          Expiration: {
            Days: 90
          }
        }
      ]
    }
  });

  return s3.send(command);
}

setLifecycleRules().then(function() {
  console.log("Lifecycle rules configured");
});

This configuration automatically deletes objects under temp-uploads/ after 1 day and objects under logs/ after 90 days. Spaces checks expiration rules once a day, so objects may persist a few hours past the specified expiration.
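
To confirm the rules were applied, you can read the configuration back. A quick check, assuming the same client from lib/spaces.js:

var { GetBucketLifecycleConfigurationCommand } = require("@aws-sdk/client-s3");
var s3 = require("./lib/spaces");

// Print the lifecycle rules currently attached to the bucket.
s3.send(new GetBucketLifecycleConfigurationCommand({
  Bucket: process.env.SPACES_BUCKET
})).then(function(response) {
  (response.Rules || []).forEach(function(rule) {
    console.log(rule.ID + ": expires after " + rule.Expiration.Days + " day(s)");
  });
});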


Express.js File Upload Endpoint with Multer and Spaces

Here is a production-ready file upload endpoint that uses multer to handle multipart form data and streams the file directly to Spaces:

npm install multer

var express = require("express");
var multer = require("multer");
var crypto = require("crypto");
var path = require("path");
var { PutObjectCommand } = require("@aws-sdk/client-s3");
var s3 = require("./lib/spaces");

var router = express.Router();

// Store in memory (for small files) or use disk for large files
var upload = multer({
  storage: multer.memoryStorage(),
  limits: {
    fileSize: 10 * 1024 * 1024  // 10 MB max
  },
  fileFilter: function(req, file, cb) {
    var allowed = [".jpg", ".jpeg", ".png", ".gif", ".webp", ".pdf"];
    var ext = path.extname(file.originalname).toLowerCase();
    if (allowed.indexOf(ext) === -1) {
      return cb(new Error("File type not allowed"), false);
    }
    cb(null, true);
  }
});

router.post("/upload", upload.single("file"), function(req, res, next) {
  if (!req.file) {
    return res.status(400).json({ error: "No file provided" });
  }

  var ext = path.extname(req.file.originalname).toLowerCase();
  var key = "uploads/" + crypto.randomUUID() + ext;

  var command = new PutObjectCommand({
    Bucket: process.env.SPACES_BUCKET,
    Key: key,
    Body: req.file.buffer,
    ContentType: req.file.mimetype,
    ACL: "public-read",
    CacheControl: "public, max-age=31536000, immutable"
  });

  s3.send(command)
    .then(function() {
      var cdnUrl = "https://" + process.env.SPACES_BUCKET + "." +
        process.env.SPACES_REGION + ".cdn.digitaloceanspaces.com/" + key;

      res.json({
        key: key,
        url: cdnUrl,
        size: req.file.size,
        contentType: req.file.mimetype
      });
    })
    .catch(next);
});

module.exports = router;

Serving Static Assets from Spaces

For web applications, moving static assets (CSS, JavaScript, images) to Spaces with CDN enabled reduces load on your application server and improves page load times globally.

Build Pipeline Integration

After building your frontend assets, upload them to Spaces:

// scripts/deploy-assets.js
var fs = require("fs");
var path = require("path");
var { PutObjectCommand } = require("@aws-sdk/client-s3");
var s3 = require("../lib/spaces");

var ASSET_DIR = path.join(__dirname, "..", "dist", "static");
var BUCKET = process.env.SPACES_BUCKET;

var mimeTypes = {
  ".js": "application/javascript",
  ".css": "text/css",
  ".png": "image/png",
  ".jpg": "image/jpeg",
  ".svg": "image/svg+xml",
  ".woff2": "font/woff2",
  ".json": "application/json"
};

function uploadDirectory(dir, prefix) {
  var files = fs.readdirSync(dir);

  var uploads = files.map(function(file) {
    var fullPath = path.join(dir, file);
    var stat = fs.statSync(fullPath);

    if (stat.isDirectory()) {
      return uploadDirectory(fullPath, prefix + file + "/");
    }

    var ext = path.extname(file).toLowerCase();
    var contentType = mimeTypes[ext] || "application/octet-stream";
    var key = prefix + file;

    var command = new PutObjectCommand({
      Bucket: BUCKET,
      Key: key,
      Body: fs.readFileSync(fullPath),
      ContentType: contentType,
      ACL: "public-read",
      CacheControl: "public, max-age=31536000, immutable"
    });

    return s3.send(command).then(function() {
      console.log("Uploaded: " + key + " (" + (stat.size / 1024).toFixed(1) + " KB)");
    });
  });

  return Promise.all(uploads);
}

uploadDirectory(ASSET_DIR, "static/v" + process.env.npm_package_version + "/")
  .then(function() {
    console.log("All assets uploaded");
  })
  .catch(function(err) {
    console.error("Asset upload failed:", err.message);
    process.exit(1);
  });

Referencing CDN Assets in Templates

// In your Express app
app.locals.cdnBase = process.env.SPACES_CDN_URL || "";

// In Pug templates
// link(rel="stylesheet", href=cdnBase + "/static/v1.2.0/css/styles.css")
// script(src=cdnBase + "/static/v1.2.0/js/app.js")

Cost Comparison with S3 and Other Providers

This is where Spaces really stands out. The pricing model is flat and predictable:

Provider             Storage     Transfer     Requests    Monthly Min
────────────────────────────────────────────────────────────────────────
DO Spaces            $5/250GB    $5/1TB incl  Free        $5
AWS S3 Standard      $0.023/GB   $0.09/GB     $0.0004/1K  Pay-as-you-go
Google Cloud Storage $0.020/GB   $0.12/GB     $0.004/10K  Pay-as-you-go
Azure Blob Storage   $0.018/GB   $0.087/GB    $0.004/10K  Pay-as-you-go
Cloudflare R2        $0.015/GB   Free         $0.36/1M    Pay-as-you-go

For a typical application storing 100 GB with 500 GB of monthly transfer:

  • Spaces: $5/month (flat)
  • S3: $2.30 storage + ~$45 transfer + ~$2 requests = $49/month
  • R2: $1.50 storage + $0 transfer + ~$1.80 requests = $3.30/month
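
The arithmetic behind those numbers is simple enough to script. A rough estimator using the list prices from the table above (request fees omitted; it assumes you stay within Spaces' included 250 GB storage and 1 TB transfer):

// Rough monthly cost estimate; prices taken from the comparison table above.
function estimateMonthlyCost(storageGb, transferGb) {
  return {
    spaces: 5,                                           // flat, within quota
    s3: +(storageGb * 0.023 + transferGb * 0.09).toFixed(2),
    r2: +(storageGb * 0.015).toFixed(2)                  // egress is free
  };
}

// estimateMonthlyCost(100, 500)
// => { spaces: 5, s3: 47.3, r2: 1.5 }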

Spaces wins on simplicity and predictability. You know exactly what you will pay every month. S3 can surprise you with transfer costs. R2 is the cheapest option if you have high transfer volume, but it lacks the built-in CDN that Spaces provides.

My take: For Node.js applications already running on DigitalOcean, Spaces is the obvious choice. Everything integrates cleanly, the pricing is predictable, and the S3 compatibility means you can migrate to AWS later if you need to. If you are running on Cloudflare Workers, use R2. If you are on AWS, use S3.


Migrating from S3 to Spaces

If you have existing files in S3 and want to move to Spaces, the migration is straightforward because both speak the same API. The rclone tool is the fastest way:

# Install rclone
curl https://rclone.org/install.sh | sudo bash

# Configure S3 source
rclone config create s3source s3 \
  provider=AWS \
  access_key_id=$AWS_ACCESS_KEY \
  secret_access_key=$AWS_SECRET_KEY \
  region=us-east-1

# Configure Spaces destination
rclone config create spaces s3 \
  provider=DigitalOcean \
  access_key_id=$SPACES_KEY \
  secret_access_key=$SPACES_SECRET \
  endpoint=$SPACES_REGION.digitaloceanspaces.com

# Sync (dry run first)
rclone sync s3source:my-s3-bucket spaces:my-spaces-bucket --dry-run --progress

# Actual migration
rclone sync s3source:my-s3-bucket spaces:my-spaces-bucket --progress --transfers=16

Output:

Transferred:   45.620 GiB / 45.620 GiB, 100%, 52.381 MiB/s, ETA 0s
Transferred:     12847 / 12847, 100%
Elapsed time:   14m52.3s

After migration, update your application to point at the Spaces endpoint. Since the API is identical, you only need to change the endpoint URL and credentials -- no code changes to your S3 operations.
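
In practice that usually means driving the client configuration from environment variables so the same code can target either provider. A minimal sketch; the S3_ENDPOINT, S3_REGION, S3_KEY, and S3_SECRET names are illustrative:

var { S3Client } = require("@aws-sdk/client-s3");

// The same client code works against AWS S3 or Spaces; only the endpoint and
// credentials differ. Leave S3_ENDPOINT unset to target AWS S3 directly.
var s3 = new S3Client({
  endpoint: process.env.S3_ENDPOINT || undefined,
  region: process.env.S3_REGION,
  credentials: {
    accessKeyId: process.env.S3_KEY,
    secretAccessKey: process.env.S3_SECRET
  }
});

module.exports = s3;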


Complete Working Example: File Upload Service

Here is a complete Express.js file upload service that stores files in Spaces with presigned URL generation, CDN delivery, and a cleanup cron job. This is production-ready code that I have used in real applications.

Project Structure

file-service/
  lib/
    spaces.js          # S3 client configuration
  routes/
    files.js           # Upload, download, delete endpoints
  jobs/
    cleanup.js         # Scheduled cleanup of temp files
  app.js               # Express application
  package.json

lib/spaces.js -- S3 Client

var { S3Client } = require("@aws-sdk/client-s3");

var s3 = new S3Client({
  endpoint: "https://" + process.env.SPACES_REGION + ".digitaloceanspaces.com",
  region: process.env.SPACES_REGION,
  credentials: {
    accessKeyId: process.env.SPACES_KEY,
    secretAccessKey: process.env.SPACES_SECRET
  },
  forcePathStyle: false
});

module.exports = s3;

routes/files.js -- Upload, Download, and Delete Endpoints

var express = require("express");
var multer = require("multer");
var crypto = require("crypto");
var path = require("path");
var sharp = require("sharp");
var {
  PutObjectCommand,
  GetObjectCommand,
  DeleteObjectCommand,
  HeadObjectCommand
} = require("@aws-sdk/client-s3");
var { getSignedUrl } = require("@aws-sdk/s3-request-presigner");
var s3 = require("../lib/spaces");

var router = express.Router();
var BUCKET = process.env.SPACES_BUCKET;
var CDN_BASE = process.env.SPACES_CDN_URL ||
  "https://" + BUCKET + "." + process.env.SPACES_REGION + ".cdn.digitaloceanspaces.com";

var upload = multer({
  storage: multer.memoryStorage(),
  limits: { fileSize: 25 * 1024 * 1024 }
});

// POST /files/upload -- Upload a file with optional thumbnail generation
router.post("/upload", upload.single("file"), function(req, res, next) {
  if (!req.file) {
    return res.status(400).json({ error: "No file provided" });
  }

  var ext = path.extname(req.file.originalname).toLowerCase();
  var id = crypto.randomUUID();
  var key = "files/" + id + ext;
  var isImage = [".jpg", ".jpeg", ".png", ".webp", ".gif"].indexOf(ext) !== -1;
  var uploads = [];

  // Upload original file
  var originalUpload = s3.send(new PutObjectCommand({
    Bucket: BUCKET,
    Key: key,
    Body: req.file.buffer,
    ContentType: req.file.mimetype,
    ACL: "public-read",
    CacheControl: "public, max-age=31536000, immutable",
    Metadata: {
      "original-name": req.file.originalname,
      "uploaded-by": req.body.userId || "anonymous"
    }
  }));
  uploads.push(originalUpload);

  // Generate thumbnail for images
  var thumbKey = null;
  if (isImage && ext !== ".gif") {
    thumbKey = "thumbnails/" + id + ".webp";
    var thumbUpload = sharp(req.file.buffer)
      .resize(300, 300, { fit: "cover", position: "center" })
      .webp({ quality: 80 })
      .toBuffer()
      .then(function(thumbBuffer) {
        return s3.send(new PutObjectCommand({
          Bucket: BUCKET,
          Key: thumbKey,
          Body: thumbBuffer,
          ContentType: "image/webp",
          ACL: "public-read",
          CacheControl: "public, max-age=31536000, immutable"
        }));
      });
    uploads.push(thumbUpload);
  }

  Promise.all(uploads)
    .then(function() {
      var result = {
        id: id,
        key: key,
        url: CDN_BASE + "/" + key,
        size: req.file.size,
        contentType: req.file.mimetype,
        originalName: req.file.originalname
      };

      if (thumbKey) {
        result.thumbnailUrl = CDN_BASE + "/" + thumbKey;
      }

      res.status(201).json(result);
    })
    .catch(next);
});

// GET /files/download/:key -- Generate a presigned download URL
router.get("/download/*", function(req, res, next) {
  var key = req.params[0];

  // Verify the object exists
  s3.send(new HeadObjectCommand({ Bucket: BUCKET, Key: key }))
    .then(function(headResult) {
      return getSignedUrl(s3, new GetObjectCommand({
        Bucket: BUCKET,
        Key: key,
        ResponseContentDisposition: "attachment; filename=\"" +
          path.basename(key) + "\""
      }), { expiresIn: 3600 });
    })
    .then(function(url) {
      res.json({ downloadUrl: url, expiresIn: 3600 });
    })
    .catch(function(err) {
      if (err.name === "NotFound" || err.$metadata && err.$metadata.httpStatusCode === 404) {
        return res.status(404).json({ error: "File not found" });
      }
      next(err);
    });
});

// POST /files/presign -- Generate a presigned upload URL
router.post("/presign", function(req, res, next) {
  var filename = req.body.filename;
  var contentType = req.body.contentType;

  if (!filename || !contentType) {
    return res.status(400).json({ error: "filename and contentType are required" });
  }

  var allowedTypes = [
    "image/jpeg", "image/png", "image/webp", "image/gif",
    "application/pdf", "text/plain", "text/csv"
  ];

  if (allowedTypes.indexOf(contentType) === -1) {
    return res.status(400).json({ error: "Content type not allowed: " + contentType });
  }

  var ext = path.extname(filename).toLowerCase();
  var key = "uploads/" + crypto.randomUUID() + ext;

  getSignedUrl(s3, new PutObjectCommand({
    Bucket: BUCKET,
    Key: key,
    ContentType: contentType,
    ACL: "public-read"
  }), { expiresIn: 600 })
    .then(function(uploadUrl) {
      res.json({
        uploadUrl: uploadUrl,
        key: key,
        cdnUrl: CDN_BASE + "/" + key,
        expiresIn: 600
      });
    })
    .catch(next);
});

// DELETE /files/:key -- Delete a file and its thumbnail
router.delete("/*", function(req, res, next) {
  var key = req.params[0];
  var deletes = [
    s3.send(new DeleteObjectCommand({ Bucket: BUCKET, Key: key }))
  ];

  // Also delete thumbnail if it exists
  var thumbKey = "thumbnails/" + path.basename(key, path.extname(key)) + ".webp";
  deletes.push(
    s3.send(new DeleteObjectCommand({ Bucket: BUCKET, Key: thumbKey }))
      .catch(function() { /* thumbnail may not exist, ignore */ })
  );

  Promise.all(deletes)
    .then(function() {
      res.json({ deleted: key });
    })
    .catch(next);
});

module.exports = router;

jobs/cleanup.js -- Scheduled Cleanup Cron

var { ListObjectsV2Command, DeleteObjectsCommand } = require("@aws-sdk/client-s3");
var s3 = require("../lib/spaces");

var BUCKET = process.env.SPACES_BUCKET;
var MAX_AGE_HOURS = 24;

function cleanupTempFiles() {
  var cutoff = new Date(Date.now() - (MAX_AGE_HOURS * 60 * 60 * 1000));
  var deletedCount = 0;

  function processPage(continuationToken) {
    var command = new ListObjectsV2Command({
      Bucket: BUCKET,
      Prefix: "temp-uploads/",
      MaxKeys: 1000,
      ContinuationToken: continuationToken
    });

    return s3.send(command).then(function(response) {
      var staleObjects = (response.Contents || []).filter(function(obj) {
        return new Date(obj.LastModified) < cutoff;
      });

      if (staleObjects.length === 0) {
        if (response.IsTruncated) {
          return processPage(response.NextContinuationToken);
        }
        return;
      }

      var deleteCommand = new DeleteObjectsCommand({
        Bucket: BUCKET,
        Delete: {
          Objects: staleObjects.map(function(obj) {
            return { Key: obj.Key };
          }),
          Quiet: true
        }
      });

      return s3.send(deleteCommand).then(function() {
        deletedCount += staleObjects.length;
        if (response.IsTruncated) {
          return processPage(response.NextContinuationToken);
        }
      });
    });
  }

  return processPage().then(function() {
    console.log("[cleanup] Deleted " + deletedCount + " temp files older than " +
      MAX_AGE_HOURS + " hours");
    return deletedCount;
  });
}

module.exports = cleanupTempFiles;

app.js -- Express Application

var express = require("express");
var fileRoutes = require("./routes/files");
var cleanupTempFiles = require("./jobs/cleanup");

var app = express();
var PORT = process.env.PORT || 3000;

app.use(express.json());

// Mount file routes
app.use("/files", fileRoutes);

// Health check
app.get("/health", function(req, res) {
  res.json({ status: "ok", timestamp: new Date().toISOString() });
});

// Error handler
app.use(function(err, req, res, next) {
  console.error("[error]", err.message);
  res.status(err.status || 500).json({
    error: process.env.NODE_ENV === "production" ? "Internal server error" : err.message
  });
});

app.listen(PORT, function() {
  console.log("File service running on port " + PORT);

  // Run cleanup every hour
  setInterval(function() {
    cleanupTempFiles().catch(function(err) {
      console.error("[cleanup error]", err.message);
    });
  }, 60 * 60 * 1000);

  // Run cleanup on startup too
  cleanupTempFiles().catch(function(err) {
    console.error("[cleanup error]", err.message);
  });
});

package.json

{
  "name": "file-service",
  "version": "1.0.0",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "@aws-sdk/client-s3": "^3.500.0",
    "@aws-sdk/s3-request-presigner": "^3.500.0",
    "@aws-sdk/lib-storage": "^3.500.0",
    "express": "^4.18.2",
    "multer": "^1.4.5-lts.1",
    "sharp": "^0.33.2"
  }
}

Testing It

# Upload a file
curl -X POST http://localhost:3000/files/upload \
  -F "[email protected]" \
  -F "userId=user-123"

# Response:
# {
#   "id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
#   "key": "files/a1b2c3d4-e5f6-7890-abcd-ef1234567890.jpg",
#   "url": "https://my-app-files.nyc3.cdn.digitaloceanspaces.com/files/a1b2c3d4.jpg",
#   "size": 245892,
#   "contentType": "image/jpeg",
#   "originalName": "photo.jpg",
#   "thumbnailUrl": "https://my-app-files.nyc3.cdn.digitaloceanspaces.com/thumbnails/a1b2c3d4.webp"
# }

# Get a presigned upload URL
curl -X POST http://localhost:3000/files/presign \
  -H "Content-Type: application/json" \
  -d '{"filename": "report.pdf", "contentType": "application/pdf"}'

# Get a presigned download URL
curl http://localhost:3000/files/download/files/a1b2c3d4.jpg

# Delete a file
curl -X DELETE http://localhost:3000/files/files/a1b2c3d4.jpg

Common Issues and Troubleshooting

1. SignatureDoesNotMatch Error

SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided.

This almost always means your SPACES_SECRET has trailing whitespace or your endpoint URL is wrong. Double-check:

// Wrong -- extra space copied from the console
var secret = "abc123def456 ";

// Right
var secret = "abc123def456";

// Also check endpoint format
// Wrong: https://nyc3.digitaloceanspaces.com/my-bucket
// Right: https://nyc3.digitaloceanspaces.com

Also verify that forcePathStyle is set to false. If it is true, the SDK constructs URLs differently and the signature will not match.

2. AccessDenied on Upload

AccessDenied: Access Denied

This occurs when your Spaces API key does not have write permissions or when the ACL value is invalid. Spaces only supports private and public-read ACLs. If you pass bucket-owner-full-control or other S3-specific ACLs, you get an access denied error.

// Wrong -- not supported by Spaces
ACL: "bucket-owner-full-control"

// Right
ACL: "private"
// or
ACL: "public-read"

3. CORS Errors on Browser Upload

Access to fetch at 'https://my-bucket.nyc3.digitaloceanspaces.com/...'
from origin 'https://myapp.com' has been blocked by CORS policy

This means CORS is not configured on your Space or the origin does not match. Set CORS rules using the API (shown in the CORS section above) or through the console. Remember that the origin must be an exact match including the protocol:

// Wrong
AllowedOrigins: ["myapp.com"]

// Right
AllowedOrigins: ["https://myapp.com"]

4. SlowDown: Rate Limiting

SlowDown: Please reduce your request rate.

Spaces has rate limits -- approximately 150 requests per second per bucket for PUTs and 750 per second for GETs. If you hit this during bulk operations, add exponential backoff:

function uploadWithRetry(command, retries) {
  retries = retries || 3;

  return s3.send(command).catch(function(err) {
    if (err.name === "SlowDown" && retries > 0) {
      var delay = Math.pow(2, 4 - retries) * 1000; // 2s, 4s, 8s
      console.log("Rate limited, retrying in " + delay + "ms");
      return new Promise(function(resolve) {
        setTimeout(resolve, delay);
      }).then(function() {
        return uploadWithRetry(command, retries - 1);
      });
    }
    throw err;
  });
}

5. Empty Body on GetObject with SDK v3

TypeError: Cannot read properties of undefined (reading 'pipe')

In AWS SDK v3, the Body of a GetObjectCommand response is a readable stream, but it behaves differently in Node.js vs the browser. Make sure you are consuming it correctly:

// For Node.js, Body is a ReadableStream
response.Body.pipe(writableStream);

// If you need the full buffer, collect the stream inside an async function
// (for await...of is only valid in an async context)
async function bodyToBuffer(body) {
  var chunks = [];
  for await (var chunk of body) {
    chunks.push(chunk);
  }
  return Buffer.concat(chunks);
}

// bodyToBuffer(response.Body).then(function(buffer) { ... });

If Body is undefined, the object likely does not exist. Always check with HeadObjectCommand first or handle the NoSuchKey error.


Best Practices

  • Always use UUIDs for object keys. Never use the original filename directly. User-supplied filenames can contain spaces and special characters, and they can collide. Generate a UUID and preserve the extension: crypto.randomUUID() + ext.

  • Set Cache-Control headers on upload. Objects served through the CDN respect the cache headers you set. For immutable content (versioned assets, uploaded images), use public, max-age=31536000, immutable. For content that changes, use shorter TTLs.

  • Use presigned URLs instead of proxying through your server. Every byte that flows through your Express.js process consumes memory and CPU. Presigned URLs let clients upload and download directly to/from Spaces. Your server only generates the URL.

  • Validate file types on the server, not just the client. Check the file extension, MIME type, and ideally the file's magic bytes (see the sketch after this list). Never trust Content-Type headers from the client -- they are trivially spoofed.

  • Configure lifecycle rules for temporary files. If your application creates temp uploads, processing artifacts, or cached files, set an expiration policy. Otherwise these files accumulate silently and you end up paying for storage you do not need.

  • Use prefixes to organize objects logically. Adopt a consistent naming scheme from day one. Changing key structures after you have thousands of objects is painful. Include the content type and date in the prefix for easy management.

  • Keep your Spaces API keys out of client-side code. Never embed credentials in JavaScript that runs in the browser. Use presigned URLs from your server instead. Rotate your Spaces keys periodically and store them in environment variables, not config files.

  • Monitor your usage with DigitalOcean's metrics. Spaces provides bandwidth and storage metrics in the console. Set up alerts if your transfer usage approaches the included 1 TB to avoid unexpected overage charges ($0.02/GB beyond the included transfer).

  • Test uploads with realistic file sizes. A 50 KB test image will not reveal the timeout and memory issues you hit with 20 MB user uploads. Test with files at your actual size limit.
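
As a concrete example of the magic-bytes check mentioned in the list above, here is a minimal sketch that compares a buffer's leading bytes against a few well-known signatures. The table only covers types used in this guide; extend it for anything else you accept:

// Leading-byte signatures for a few common formats.
var FILE_SIGNATURES = {
  "image/jpeg": [Buffer.from([0xff, 0xd8, 0xff])],
  "image/png": [Buffer.from([0x89, 0x50, 0x4e, 0x47])],
  "image/gif": [Buffer.from("GIF87a"), Buffer.from("GIF89a")],
  "application/pdf": [Buffer.from("%PDF")]
};

// Returns true when the buffer's first bytes match a known signature
// for the claimed MIME type.
function matchesMagicBytes(buffer, mimeType) {
  var signatures = FILE_SIGNATURES[mimeType];
  if (!signatures) {
    return false;  // unknown type: reject rather than trust the header
  }
  return signatures.some(function(sig) {
    return buffer.length >= sig.length && buffer.slice(0, sig.length).equals(sig);
  });
}

// In the multer upload handler:
// if (!matchesMagicBytes(req.file.buffer, req.file.mimetype)) {
//   return res.status(400).json({ error: "File content does not match its declared type" });
// }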

