Backup Strategies for DigitalOcean Infrastructure

A practical guide to backup strategies on DigitalOcean covering Droplet snapshots, database backups, Spaces storage, automated scripts, and disaster recovery planning.

Backups protect against data loss from hardware failures, software bugs, accidental deletions, and security breaches. Every production system needs a backup strategy that answers three questions: what gets backed up, how often, and how quickly can you restore.

DigitalOcean provides several backup mechanisms — automated Droplet backups, managed database backups, Spaces object storage for offsite copies, and snapshots for point-in-time captures. This guide covers building a comprehensive backup strategy for Node.js applications running on DigitalOcean infrastructure.

Prerequisites

  • A DigitalOcean account
  • One or more Droplets with production data
  • A managed database (optional)
  • doctl CLI installed
  • A DigitalOcean Spaces bucket (for offsite backups)

Understanding Backup Types

Droplet Backups

DigitalOcean's automated Droplet backups create a complete image of your server weekly. Pricing is 20% of the Droplet cost (a $12 Droplet adds $2.40/month for backups).

What they capture: The entire filesystem — OS, application code, configuration files, logs, and any data stored on the Droplet.

What they do not capture: External volumes (Block Storage) or database data on managed databases.

Retention: The last 4 weekly backups are kept. Older backups are automatically deleted.

Snapshots

Snapshots are on-demand copies of a Droplet or volume. Unlike automated backups, you control when they are created. Pricing is $0.06/GB/month based on compressed size.

When to use snapshots:

  • Before a major deployment or OS upgrade
  • Before changing server configuration
  • Creating a template for new Droplets
  • Ad-hoc backup of the entire system

Database Backups

Managed databases on DigitalOcean include automatic daily backups with point-in-time recovery within the retention window (7 days for PostgreSQL and MySQL).

Application-Level Backups

Your custom backup scripts that export specific data — database dumps, uploaded files, configuration files. These are the most flexible and often the most important.

Enabling Automated Droplet Backups

Via Dashboard

Navigate to your Droplet > Backups > Enable Backups.

Via CLI

# Enable backups on an existing Droplet
doctl compute droplet-action enable-backups YOUR_DROPLET_ID

# Create a Droplet with backups enabled
doctl compute droplet create my-app \
  --image ubuntu-22-04-x64 \
  --size s-2vcpu-4gb \
  --region nyc3 \
  --enable-backups

Restoring from a Droplet Backup

# List available backups
doctl compute image list --type backup

# Restore a Droplet from a backup
doctl compute droplet-action restore YOUR_DROPLET_ID --image-id BACKUP_IMAGE_ID

Restoring replaces the entire Droplet with the backup image. This is a full restoration — all current data on the Droplet is overwritten.
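
A cautious variation, sketched here with placeholder IDs, is to snapshot the current state right before restoring, so the pre-restore data can still be recovered if the wrong image was chosen:

# Capture the current state, then restore
doctl compute droplet-action snapshot YOUR_DROPLET_ID --snapshot-name "pre-restore-$(date +%Y-%m-%d)" --wait
doctl compute droplet-action restore YOUR_DROPLET_ID --image-id BACKUP_IMAGE_ID --wait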

Creating and Managing Snapshots

Creating Snapshots

# Snapshot a Droplet (it can stay running, but see the note below)
doctl compute droplet-action snapshot YOUR_DROPLET_ID --snapshot-name "pre-deploy-2026-02-13"

# Snapshot a volume
doctl compute volume snapshot YOUR_VOLUME_ID --snapshot-name "data-vol-2026-02-13"

Snapshots are most consistent when the Droplet is powered off (no writes in progress). For live systems, consider pausing writes momentarily:

# Pre-snapshot script
ssh deploy@YOUR_IP "pm2 stop myapp && sync"
doctl compute droplet-action snapshot YOUR_DROPLET_ID --snapshot-name "clean-snapshot"
ssh deploy@YOUR_IP "pm2 start myapp"

Automated Snapshot Script

// scripts/snapshot.js
var { execSync } = require("child_process");

var DROPLET_ID = process.env.DROPLET_ID;
var MAX_SNAPSHOTS = 10;
var date = new Date().toISOString().split("T")[0];
var name = "auto-" + date;

console.log("Creating snapshot: " + name);

try {
  // Create snapshot
  execSync('doctl compute droplet-action snapshot ' + DROPLET_ID + ' --snapshot-name "' + name + '" --wait', {
    stdio: "inherit"
  });
  console.log("Snapshot created successfully");

  // Clean up old snapshots
  var output = execSync("doctl compute snapshot list --format ID,Name,CreatedAt --no-header", {
    encoding: "utf8"
  });

  var snapshots = output.trim().split("\n")
    .filter(function(line) { return line.indexOf("auto-") !== -1; })
    .map(function(line) {
      var parts = line.trim().split(/\s+/);
      return { id: parts[0], name: parts[1], created: parts[2] };
    })
    .sort(function(a, b) { return b.created.localeCompare(a.created); });

  if (snapshots.length > MAX_SNAPSHOTS) {
    var toDelete = snapshots.slice(MAX_SNAPSHOTS);
    toDelete.forEach(function(snap) {
      console.log("Deleting old snapshot: " + snap.name);
      execSync("doctl compute snapshot delete " + snap.id + " --force");
    });
  }
} catch (err) {
  console.error("Snapshot failed:", err.message);
  process.exit(1);
}
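
Run it manually (or from cron) with the Droplet ID in the environment; the ID below is a placeholder:

DROPLET_ID=123456789 node scripts/snapshot.js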

Database Backup Strategies

PostgreSQL Backups

Using pg_dump

#!/bin/bash
# scripts/backup-postgres.sh
set -o pipefail  # without this, a pg_dump failure is masked by gzip exiting 0

BACKUP_DIR="/var/backups/postgres"
SPACES_BUCKET="my-backups"
DATE=$(date +%Y%m%d_%H%M%S)
FILENAME="postgres_${DATE}.sql.gz"
FILEPATH="${BACKUP_DIR}/${FILENAME}"

mkdir -p "$BACKUP_DIR"

echo "Starting PostgreSQL backup..."

# Dump and compress
pg_dump "$DATABASE_URL" | gzip > "$FILEPATH"

if [ $? -eq 0 ]; then
  echo "Backup created: $FILEPATH"
  SIZE=$(du -h "$FILEPATH" | cut -f1)
  echo "Size: $SIZE"

  # Upload to Spaces
  s3cmd put "$FILEPATH" "s3://${SPACES_BUCKET}/postgres/${FILENAME}"

  if [ $? -eq 0 ]; then
    echo "Uploaded to Spaces: s3://${SPACES_BUCKET}/postgres/${FILENAME}"
  else
    echo "WARNING: Upload to Spaces failed"
  fi

  # Keep only last 7 local backups
  ls -t "$BACKUP_DIR"/postgres_*.sql.gz | tail -n +8 | xargs -r rm

  echo "Backup complete"
else
  echo "ERROR: pg_dump failed"
  exit 1
fi
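
Make the script executable, and keep in mind that cron runs with a minimal environment, so DATABASE_URL has to be provided in the crontab line, exported in the script, or sourced from a file; the connection string below is a placeholder:

chmod +x /var/www/myapp/scripts/backup-postgres.sh
DATABASE_URL="postgres://user:password@db-host:25060/mydb" /var/www/myapp/scripts/backup-postgres.sh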

Node.js Backup Script

// scripts/backup-db.js
var { exec } = require("child_process");
var path = require("path");
var fs = require("fs");

var BACKUP_DIR = "/var/backups/postgres";
var DB_URL = process.env.DATABASE_URL;
var RETENTION_DAYS = 7;
var date = new Date().toISOString().replace(/[:.]/g, "-").split("T");
var filename = "postgres_" + date[0] + "_" + date[1].substring(0, 8) + ".sql.gz";
var filepath = path.join(BACKUP_DIR, filename);

// Ensure backup directory exists
if (!fs.existsSync(BACKUP_DIR)) {
  fs.mkdirSync(BACKUP_DIR, { recursive: true });
}

console.log("Starting backup: " + filename);

// Run the pipeline in bash with pipefail so a pg_dump failure is not hidden by gzip succeeding
var command = 'set -o pipefail; pg_dump "' + DB_URL + '" | gzip > "' + filepath + '"';

exec(command, { shell: "/bin/bash" }, function(err) {
  if (err) {
    console.error("Backup failed:", err.message);
    process.exit(1);
  }

  var stats = fs.statSync(filepath);
  var sizeMB = (stats.size / 1024 / 1024).toFixed(2);
  console.log("Backup created: " + filepath + " (" + sizeMB + " MB)");

  // Upload to Spaces
  uploadToSpaces(filepath, filename, function(uploadErr) {
    if (uploadErr) {
      console.error("Upload failed:", uploadErr.message);
    } else {
      console.log("Uploaded to Spaces");
    }

    // Clean old backups
    cleanOldBackups();
  });
});

function uploadToSpaces(localPath, remoteName, callback) {
  var uploadCmd = 's3cmd put "' + localPath + '" s3://my-backups/postgres/' + remoteName;
  exec(uploadCmd, function(err) {
    callback(err);
  });
}

function cleanOldBackups() {
  var files = fs.readdirSync(BACKUP_DIR)
    .filter(function(f) { return f.startsWith("postgres_") && f.endsWith(".sql.gz"); })
    .map(function(f) {
      return {
        name: f,
        path: path.join(BACKUP_DIR, f),
        time: fs.statSync(path.join(BACKUP_DIR, f)).mtime.getTime()
      };
    })
    .sort(function(a, b) { return b.time - a.time; });

  var cutoff = Date.now() - (RETENTION_DAYS * 24 * 60 * 60 * 1000);

  files.forEach(function(file) {
    if (file.time < cutoff) {
      fs.unlinkSync(file.path);
      console.log("Deleted old backup: " + file.name);
    }
  });
}
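
A cheap local sanity check before trusting the upload is gzip's integrity test, which catches truncated dumps (the path matches BACKUP_DIR in the script above):

gunzip -t /var/backups/postgres/postgres_*.sql.gz && echo "local dumps pass the gzip integrity check"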

MongoDB Backups

#!/bin/bash
# scripts/backup-mongo.sh
set -e  # abort on the first failed command (e.g. a mongodump error)

BACKUP_DIR="/var/backups/mongo"
DATE=$(date +%Y%m%d_%H%M%S)
FILENAME="mongo_${DATE}"

mkdir -p "$BACKUP_DIR"

echo "Starting MongoDB backup..."

# Dump the database
mongodump --uri="$MONGO_URL" --out="${BACKUP_DIR}/${FILENAME}"

# Compress
cd "$BACKUP_DIR"
tar -czf "${FILENAME}.tar.gz" "$FILENAME"
rm -rf "$FILENAME"

echo "Backup created: ${BACKUP_DIR}/${FILENAME}.tar.gz"

# Upload to Spaces
s3cmd put "${BACKUP_DIR}/${FILENAME}.tar.gz" "s3://my-backups/mongo/${FILENAME}.tar.gz"

# Keep last 7 local backups
ls -t "$BACKUP_DIR"/mongo_*.tar.gz | tail -n +8 | xargs -r rm

echo "Backup complete"

Restoring Database Backups

# PostgreSQL restore
gunzip -c backup.sql.gz | psql "$DATABASE_URL"

# MongoDB restore
tar -xzf mongo_backup.tar.gz
mongorestore --uri="$MONGO_URL" --drop mongo_backup/
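
If a dump was created without pg_dump's --clean option, it restores cleanly only into an empty database. One approach, assuming you are allowed to drop and recreate the target (the database name here is hypothetical), is:

# Recreate the target database before replaying the dump (destroys its current contents)
dropdb --if-exists myapp_production
createdb myapp_production
gunzip -c backup.sql.gz | psql "$DATABASE_URL"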

Offsite Backups with Spaces

DigitalOcean Spaces provides S3-compatible object storage — ideal for offsite backup copies. Backups stored on the same Droplet they protect are not truly safe. If the Droplet is destroyed, both the application and its backups are lost.

Setting Up s3cmd

# Install s3cmd
sudo apt install s3cmd -y

# Configure for DigitalOcean Spaces
s3cmd --configure

Enter your Spaces access key and secret key. Set the S3 endpoint to nyc3.digitaloceanspaces.com (or your region).
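
The relevant part of the resulting ~/.s3cfg looks roughly like this (keys and region are placeholders; the exact file is whatever s3cmd --configure wrote):

# ~/.s3cfg (excerpt)
access_key = YOUR_SPACES_KEY
secret_key = YOUR_SPACES_SECRET
host_base = nyc3.digitaloceanspaces.com
host_bucket = %(bucket)s.nyc3.digitaloceanspaces.com
use_https = True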

Backup Lifecycle with Spaces

// scripts/spaces-lifecycle.js
var { exec } = require("child_process");

var BUCKET = "my-backups";
var RETENTION_DAYS = 30;

// List all backup files in the bucket
exec("s3cmd ls s3://" + BUCKET + "/postgres/ --list-md5", {
  encoding: "utf8"
}, function(err, stdout) {
  if (err) {
    console.error("Failed to list files:", err.message);
    return;
  }

  var cutoff = new Date();
  cutoff.setDate(cutoff.getDate() - RETENTION_DAYS);

  var lines = stdout.trim().split("\n");
  lines.forEach(function(line) {
    var match = line.match(/^(\d{4}-\d{2}-\d{2})/);
    if (match) {
      var fileDate = new Date(match[1]);
      if (fileDate < cutoff) {
        var urlMatch = line.match(/(s3:\/\/.+)$/);
        if (urlMatch) {
          console.log("Deleting: " + urlMatch[1]);
          exec("s3cmd del " + urlMatch[1]);
        }
      }
    }
  });
});

Cross-Region Backup

For disaster recovery, copy backups to a different DigitalOcean region:

# Copy from NYC to SFO Spaces bucket
s3cmd cp s3://my-backups-nyc3/postgres/backup.sql.gz s3://my-backups-sfo3/postgres/backup.sql.gz

If the server-side copy is rejected because the two buckets sit on different regional endpoints, fall back to downloading and re-uploading (s3cmd get from the first bucket, then s3cmd put against the second region's endpoint).

Application File Backups

Uploaded Files

If your application stores user uploads on the Droplet filesystem:

#!/bin/bash
# scripts/backup-uploads.sh

UPLOAD_DIR="/var/www/myapp/uploads"
BACKUP_DIR="/var/backups/uploads"
SPACES_BUCKET="my-backups"
DATE=$(date +%Y%m%d)
FILENAME="uploads_${DATE}.tar.gz"

mkdir -p "$BACKUP_DIR"

# Compress uploads directory
tar -czf "${BACKUP_DIR}/${FILENAME}" -C "$UPLOAD_DIR" .

# Upload to Spaces
s3cmd put "${BACKUP_DIR}/${FILENAME}" "s3://${SPACES_BUCKET}/uploads/${FILENAME}"

# Keep last 7 local copies
ls -t "$BACKUP_DIR"/uploads_*.tar.gz | tail -n +8 | xargs -r rm

echo "Upload backup complete: $FILENAME"

Configuration Files

Back up critical configuration files that are not in version control:

#!/bin/bash
# scripts/backup-config.sh

CONFIG_FILES=(
  "/var/www/myapp/.env"
  "/etc/nginx/sites-available/myapp"
  "/var/www/myapp/ecosystem.config.js"
)

BACKUP_DIR="/var/backups/config"
DATE=$(date +%Y%m%d)
FILENAME="config_${DATE}.tar.gz"

mkdir -p "$BACKUP_DIR"

tar -czf "${BACKUP_DIR}/${FILENAME}" "${CONFIG_FILES[@]}" 2>/dev/null

s3cmd put "${BACKUP_DIR}/${FILENAME}" "s3://my-backups/config/${FILENAME}"

echo "Config backup complete: $FILENAME"

Automated Backup Scheduling

Using cron

# Edit crontab
crontab -e

# Database backup — daily at 2 AM
0 2 * * * /var/www/myapp/scripts/backup-postgres.sh >> /var/log/backup-postgres.log 2>&1

# Upload backup — daily at 3 AM
0 3 * * * /var/www/myapp/scripts/backup-uploads.sh >> /var/log/backup-uploads.log 2>&1

# Config backup — weekly on Sunday at 4 AM
0 4 * * 0 /var/www/myapp/scripts/backup-config.sh >> /var/log/backup-config.log 2>&1

# Cleanup Spaces — weekly on Monday at 5 AM
0 5 * * 1 node /var/www/myapp/scripts/spaces-lifecycle.js >> /var/log/backup-cleanup.log 2>&1

Using node-cron in Your Application

// backup/scheduler.js
var cron = require("node-cron");
var { exec } = require("child_process");
var path = require("path");

var SCRIPTS_DIR = path.join(__dirname, "..", "scripts");

function runBackup(scriptName) {
  var script = path.join(SCRIPTS_DIR, scriptName);
  console.log("Starting backup: " + scriptName);

  exec("bash " + script, function(err, stdout, stderr) {
    if (err) {
      console.error("Backup failed (" + scriptName + "):", err.message);
      if (stderr) console.error(stderr);
      return;
    }
    console.log(stdout);
  });
}

// Database backup — daily at 2 AM UTC
cron.schedule("0 2 * * *", function() {
  runBackup("backup-postgres.sh");
});

// Upload backup — daily at 3 AM UTC
cron.schedule("0 3 * * *", function() {
  runBackup("backup-uploads.sh");
});

// Config backup — weekly Sunday at 4 AM UTC
cron.schedule("0 4 * * 0", function() {
  runBackup("backup-config.sh");
});

console.log("Backup scheduler started");

Backup Verification

Backups you never test are backups you cannot trust. Schedule regular restore tests:

Automated Restore Test

// scripts/verify-backup.js
var { exec, execSync } = require("child_process");
var fs = require("fs");

var BACKUP_DIR = "/var/backups/postgres";
var TEST_DB = "backup_test_" + Date.now();

// Find the latest backup
var backups = fs.readdirSync(BACKUP_DIR)
  .filter(function(f) { return f.endsWith(".sql.gz"); })
  .sort()
  .reverse();

if (backups.length === 0) {
  console.error("No backups found");
  process.exit(1);
}

var latestBackup = backups[0];
console.log("Testing backup: " + latestBackup);

var DB_HOST = process.env.DB_HOST || "localhost";
var DB_USER = process.env.DB_USER || "doadmin";

try {
  // Create test database
  execSync('psql -h ' + DB_HOST + ' -U ' + DB_USER + ' -c "CREATE DATABASE ' + TEST_DB + '"');

  // Restore backup to test database
  execSync('gunzip -c "' + BACKUP_DIR + '/' + latestBackup + '" | psql -h ' + DB_HOST + ' -U ' + DB_USER + ' -d ' + TEST_DB);

  // Run verification queries
  var result = execSync(
    'psql -h ' + DB_HOST + ' -U ' + DB_USER + ' -d ' + TEST_DB +
    ' -c "SELECT COUNT(*) as count FROM information_schema.tables WHERE table_schema = \'public\'"',
    { encoding: "utf8" }
  );

  console.log("Tables found:", result.trim());

  // Clean up test database
  execSync('psql -h ' + DB_HOST + ' -U ' + DB_USER + ' -c "DROP DATABASE ' + TEST_DB + '"');

  console.log("Backup verification PASSED");
} catch (err) {
  console.error("Backup verification FAILED:", err.message);

  // Clean up on failure
  try {
    execSync('psql -h ' + DB_HOST + ' -U ' + DB_USER + ' -c "DROP DATABASE IF EXISTS ' + TEST_DB + '"');
  } catch (cleanupErr) {
    // Ignore cleanup errors
  }

  process.exit(1);
}

Verification Checklist

Run these checks monthly:

  1. Database restore — restore the latest backup to a test database. Verify table counts and sample data.
  2. File restore — extract the upload backup to a temporary directory. Verify file counts and sizes.
  3. Full system restore — create a new Droplet from the latest backup (see the doctl sketch after this list). Verify the application starts and serves requests.
  4. Cross-region restore — restore from the offsite copy in a different region. Verify data integrity.
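
For the full system restore in item 3, a minimal doctl sketch (image ID, size, and region are placeholders) is to create a throwaway Droplet from the latest backup image and destroy it once checked:

# Create a disposable Droplet from the backup image, test it, then remove it
doctl compute droplet create restore-test --image BACKUP_IMAGE_ID --size s-2vcpu-4gb --region nyc3 --wait
# ...verify the application boots and serves requests on the new Droplet's IP...
doctl compute droplet delete restore-test --force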

Disaster Recovery Plan

Recovery Time Objective (RTO)

How quickly do you need to be back online?

  • < 1 minute: Load balancer with multiple active servers
  • < 15 minutes: Pre-configured standby Droplet + database failover
  • < 1 hour: Snapshot restore + database restore from backup
  • < 4 hours: Fresh Droplet + deploy from Git + database restore

Recovery Point Objective (RPO)

How much data loss is acceptable?

  • Zero: Managed database with standby nodes (automatic failover)
  • < 1 hour: Database backups every hour + WAL archiving
  • < 24 hours: Daily database backups
  • < 1 week: Weekly Droplet backups

Disaster Recovery Runbook

Document the steps for each failure scenario:

## Scenario: Droplet Destroyed

1. Create new Droplet from latest snapshot
   doctl compute droplet create my-app --image SNAPSHOT_ID --size s-2vcpu-4gb --region nyc3

2. If no recent snapshot, create from backup
   doctl compute droplet create my-app --image BACKUP_ID --size s-2vcpu-4gb --region nyc3

3. If no snapshot or backup available:
   a. Create fresh Droplet with Ubuntu
   b. Run setup script (documented in README)
   c. Clone application from Git
   d. Restore .env and config from Spaces backup
   e. Restore database from latest backup in Spaces
   f. Restore uploads from latest backup in Spaces

4. Update DNS to point to new Droplet IP
5. Verify application health
6. Update monitoring alerts with new Droplet ID

## Scenario: Database Corruption

1. Stop the application
   pm2 stop myapp

2. Restore from managed database backup (via dashboard)
   - Navigate to database > Backups > Restore
   - Select point in time before corruption
   - This creates a new database cluster

3. Update DATABASE_URL in .env with new connection string

4. Restart the application
   pm2 start myapp

5. Verify data integrity

Backup Strategy Summary

  • Droplet: automated backups, weekly; retained 4 weeks; stored on DigitalOcean
  • Droplet: snapshots, before deploys; last 10 kept; stored on DigitalOcean
  • PostgreSQL: pg_dump, daily; 7 days local, 30 days in Spaces; stored on the Droplet and in Spaces
  • MongoDB: mongodump, daily; 7 days local, 30 days in Spaces; stored on the Droplet and in Spaces
  • Managed database: automatic backups, daily; 7 days, managed by DigitalOcean
  • Uploads: tar + gzip, daily; 7 days local, 30 days in Spaces; stored on the Droplet and in Spaces
  • Config: tar + gzip, weekly; 30 days in Spaces; stored in Spaces

Common Issues and Troubleshooting

Backup script fails silently

Cron runs jobs non-interactively, so failures go unnoticed unless their output is captured:

Fix: Always redirect both stdout and stderr to a log file (>> /var/log/backup.log 2>&1). Check cron logs with grep CRON /var/log/syslog. Set up alerts when backup files are missing or stale.
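
A small freshness check in the same style as the other scripts here (the path and threshold are assumptions) can run from cron and exit non-zero when the newest dump is too old, which is an easy signal to alert on:

// scripts/check-backup-freshness.js
var fs = require("fs");
var path = require("path");

var BACKUP_DIR = "/var/backups/postgres"; // directory the backup script writes to
var MAX_AGE_HOURS = 26;                   // daily backups plus a little slack

var times = fs.readdirSync(BACKUP_DIR)
  .filter(function(f) { return f.startsWith("postgres_") && f.endsWith(".sql.gz"); })
  .map(function(f) { return fs.statSync(path.join(BACKUP_DIR, f)).mtime.getTime(); });

var ageHours = times.length
  ? (Date.now() - Math.max.apply(null, times)) / (1000 * 60 * 60)
  : Infinity;

if (ageHours > MAX_AGE_HOURS) {
  console.error("Latest backup is stale or missing (age: " + ageHours.toFixed(1) + " hours)");
  process.exit(1); // a non-zero exit lets cron mail or a monitor raise an alert
}

console.log("Latest backup is " + ageHours.toFixed(1) + " hours old");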

Snapshots take too long

Large Droplets with many files take time to snapshot:

Fix: Keep the Droplet filesystem lean. Store large files (uploads, logs) on Block Storage volumes and snapshot them separately. Clean up old logs before snapshotting.

Spaces upload fails

s3cmd configuration is incorrect or credentials expired:

Fix: Verify credentials with s3cmd ls s3://my-backups/. Check that the Spaces bucket exists and the region is correct. Ensure outbound HTTPS is allowed by the firewall.

Restore brings back stale data

The backup is outdated because backups run infrequently:

Fix: Increase backup frequency for critical data. Use managed database point-in-time recovery for the most recent data. Consider WAL archiving for PostgreSQL to minimize data loss.
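
WAL archiving applies only to PostgreSQL instances you run yourself (managed databases handle it for you). A minimal sketch of the relevant postgresql.conf settings, with a hypothetical Spaces path, looks like:

# postgresql.conf (self-hosted PostgreSQL only)
wal_level = replica
archive_mode = on
archive_command = 's3cmd put %p s3://my-backups/wal/%f'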

Backup storage costs grow unexpectedly

Old backups are not being cleaned up:

Fix: Implement lifecycle policies in your cleanup scripts. Set retention limits. Use compression (gzip, zstd) to reduce storage size. Monitor Spaces usage in the dashboard.

Best Practices

  • Follow the 3-2-1 rule. Keep 3 copies of your data, on 2 different storage types, with 1 copy offsite. Example: database + local backup + Spaces backup.
  • Automate everything. Manual backups get forgotten. Schedule backups with cron or node-cron and monitor that they complete.
  • Test restores regularly. A backup you cannot restore is not a backup. Run automated restore verification monthly.
  • Encrypt sensitive backups. Database dumps contain user data. Encrypt before uploading to Spaces: gzip -c backup.sql | openssl enc -aes-256-cbc -pass env:BACKUP_PASSWORD > backup.sql.gz.enc (the matching decrypt command follows this list).
  • Monitor backup freshness. Alert when the latest backup is older than expected. A missed backup should be investigated immediately.
  • Document your recovery procedures. A disaster is the wrong time to figure out how to restore. Write runbooks for every failure scenario and practice them.
  • Keep backups in a different region. Regional outages are rare but real. Cross-region copies ensure recovery even if an entire datacenter is unavailable.
  • Version your backup scripts. Keep backup scripts in version control alongside your application code. Changes to the database schema may require changes to the backup process.
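
To restore an encrypted dump, reverse the pipeline; this assumes the same BACKUP_PASSWORD environment variable that was used to encrypt:

openssl enc -d -aes-256-cbc -pass env:BACKUP_PASSWORD -in backup.sql.gz.enc | gunzip | psql "$DATABASE_URL"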
