The Server Stack That Powers All My Side Projects in 2026
Every few months, someone on Hacker News posts about their Kubernetes cluster running a personal blog and three microservices. Twenty nodes. Service mesh. Helm charts. Observability platform. The whole nine yards. And I sit here in my cabin in Alaska, watching all of my production side projects hum along on a single $5/month DigitalOcean droplet, and I wonder what exactly we're all doing with our careers.
I'm not being cute. I genuinely run multiple production applications — real ones with real users and real revenue — on a server that costs less than a fancy coffee. And I'm going to show you exactly how.
Why I Stay on One Box
There's a deeply ingrained belief in our industry that scaling your infrastructure is a sign of success. It's not. It's a sign of traffic. And most side projects — even successful ones — don't have the kind of traffic that requires more than one well-configured server.
Here's the math that nobody wants to do: a single $5 DigitalOcean droplet gives you 1 GB of RAM, 1 vCPU, 25 GB of SSD storage, and 1 TB of bandwidth. A typical Node.js application serving dynamic pages uses maybe 80-120 MB of RAM. PostgreSQL sitting mostly idle with a few thousand rows uses another 50 MB. MongoDB doing the same, another 60 MB. Nginx as a reverse proxy, 10 MB.
Add it all up and you're using maybe 400 MB of your available gigabyte. You're not even close to the ceiling.
I've been building software for over 30 years, and one of the most important lessons I've learned is this: premature scaling kills more side projects than lack of scaling ever has. You don't need Kubernetes. You don't need auto-scaling groups. You need a server that works, and you need to ship.
The Exact Stack
Let me walk you through everything running on my box right now. No hand-waving, no "it depends" — the actual configuration.
The Operating System
Ubuntu 22.04 LTS. Nothing exotic. I pick LTS releases because I don't want to think about OS upgrades more than once every two years. The server has been running this install since I set it up, and it just works.
# First things first after provisioning
apt update && apt upgrade -y
apt install -y build-essential git curl wget ufw fail2ban
Fail2ban goes on immediately. I've watched my server logs. Brute force SSH attempts start within minutes of a new droplet going live. It's not optional.
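The packaged defaults already watch sshd, but it's worth pinning the ban policy down explicitly in a jail.local so an upgrade doesn't silently change it. A minimal override — the specific values here are illustrative, not what the package ships:

```ini
# /etc/fail2ban/jail.local — overrides jail.conf, which you should never edit directly
[sshd]
enabled  = true
maxretry = 3      ; ban after three failed attempts
findtime = 10m    ; ...within a ten-minute window
bantime  = 1h     ; how long the offending IP stays blocked
```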
Node.js
I run Node 20 LTS. Not the latest, not bleeding edge. LTS. Because production servers are not the place to beta test runtime features.
# Install via NodeSource
curl -fsSL https://deb.nodesource.com/setup_20.x | bash -
apt install -y nodejs
Every application I run is a Node.js Express app. Grizzly Peak Software, AutoDetective.ai, internal tools — all Express. Some people would call this a monoculture. I call it operational simplicity. When something breaks at 2 AM Alaska time, I don't have to remember which app uses which runtime.
Nginx as Reverse Proxy
Nginx sits in front of everything. Each application gets its own server block, its own domain, its own SSL certificate via Let's Encrypt.
server {
    listen 443 ssl http2;
    server_name grizzlypeaksoftware.com www.grizzlypeaksoftware.com;

    ssl_certificate /etc/letsencrypt/live/grizzlypeaksoftware.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/grizzlypeaksoftware.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }
}
Nothing fancy. But it does three critical things: terminates SSL so my Node apps don't have to deal with certificates, provides connection buffering so slow clients don't tie up Node event loops, and lets me add caching headers at the edge without touching application code.
Let's Encrypt renewal runs on a cron job via Certbot. I set it up once and haven't thought about it since.
apt install -y certbot python3-certbot-nginx
certbot --nginx -d grizzlypeaksoftware.com -d www.grizzlypeaksoftware.com
PM2 Process Manager
PM2 is the unsung hero of this whole setup. It keeps every Node.js process alive, restarts them if they crash, manages log rotation, and provides basic monitoring — all without any external dependencies.
npm install -g pm2
# Start an application
pm2 start app.js --name "grizzlypeak" --max-memory-restart 200M
# Save the process list so it survives reboots
pm2 save
pm2 startup
That --max-memory-restart flag is important. If a memory leak starts creeping in — and they always do eventually — PM2 will restart the process before it eats all your RAM. On a 1 GB box, you can't afford a runaway process.
Here's what pm2 list looks like on my server:
┌────┬──────────────────┬──────┬───────┬────────┬─────────┬────────┐
│ id │ name             │ mode │ pid   │ status │ cpu     │ memory │
├────┼──────────────────┼──────┼───────┼────────┼─────────┼────────┤
│ 0  │ grizzlypeak      │ fork │ 12847 │ online │ 0.1%    │ 98 MB  │
│ 1  │ autodetective    │ fork │ 12903 │ online │ 0.2%    │ 112 MB │
│ 2  │ internal-tools   │ fork │ 13021 │ online │ 0%      │ 45 MB  │
└────┴──────────────────┴──────┴───────┴────────┴─────────┴────────┘
Three production apps. Under 300 MB combined. And PM2 itself uses almost nothing.
PostgreSQL
PostgreSQL handles structured data — job listings, library articles, ad tracking. I run version 15.
apt install -y postgresql postgresql-contrib
On a $5 droplet, the default PostgreSQL configuration is actually too aggressive. It assumes it has more memory than it does. I tune it down:
# /etc/postgresql/15/main/postgresql.conf
shared_buffers = 128MB
effective_cache_size = 256MB
work_mem = 4MB
maintenance_work_mem = 64MB
max_connections = 20
Twenty connections might seem low, but think about it: I have three Node.js apps, each maintaining a small connection pool. Five connections per app is plenty. PostgreSQL creates a new process for each connection, and on a memory-constrained box, every idle connection costs you.
MongoDB
MongoDB handles unstructured content — contact form submissions, newsletter subscribers, the Contentful cache layer. I run MongoDB Community Edition.
# MongoDB 7.0
curl -fsSL https://www.mongodb.org/static/pgp/server-7.0.asc | gpg --dearmor -o /usr/share/keyrings/mongodb-server-7.0.gpg
echo "deb [ signed-by=/usr/share/keyrings/mongodb-server-7.0.gpg ] https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/7.0 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-7.0.list
apt update && apt install -y mongodb-org
Same story with memory tuning. The WiredTiger cache defaults are way too high for a small box:
# /etc/mongod.conf
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 0.15
150 MB for the cache. That's enough for the volume of data I'm dealing with, and it leaves room for everything else.
Performance Optimizations That Actually Matter
When you're working with constrained resources, you learn to care about the things that actually move the needle instead of the things that look impressive on a conference slide.
Gzip Everything
This is free performance. Nginx compresses responses before sending them over the wire, cutting transfer sizes by 60-80%.
# In your nginx.conf http block
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_types text/plain text/css text/xml application/json application/javascript text/javascript application/xml;
Compression level 6 is the sweet spot. Beyond that, you're burning CPU for diminishing returns.
Static Asset Caching
Every image, CSS file, and JavaScript file gets a long cache header. If the browser already has it, don't send it again.
location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2)$ {
    expires 30d;
    add_header Cache-Control "public, immutable";
}
Connection Pooling
This is the one that catches people. Every time your Node app opens a new database connection, that's a handshake, authentication, memory allocation on the database side. On a small server, connection storms will kill you.
var { Pool } = require("pg");

var pool = new Pool({
    connectionString: process.env.POSTGRES_CONNECTION_STRING,
    max: 5,
    idleTimeoutMillis: 30000,
    connectionTimeoutMillis: 2000
});

// Without this handler, an error on an idle client (say, the database
// restarting) is emitted on the pool and crashes the Node process.
pool.on("error", function (err) {
    console.error("Unexpected error on idle client", err);
});

function query(text, params) {
    return pool.query(text, params);
}

module.exports = { query: query };
Five connections in the pool. Thirty-second idle timeout so connections don't sit around wasting memory. Two-second connection timeout so a hung database doesn't block the event loop forever.
Response Caching in Application Code
For content that doesn't change often — which is most content on a blog or reference site — I cache rendered responses in memory.
var cache = {};
var CACHE_TTL = 300000; // 5 minutes

function getCached(key) {
    var entry = cache[key];
    if (entry && Date.now() - entry.timestamp < CACHE_TTL) {
        return entry.data;
    }
    return null;
}

function setCache(key, data) {
    cache[key] = { data: data, timestamp: Date.now() };
}
Simple. No Redis. No Memcached. Just a JavaScript object with a TTL check. For a single-process Node app serving a few hundred requests per minute, this is more than sufficient. Redis would add another 30 MB of memory overhead for something a plain object handles fine.
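One gap worth closing if you adopt this pattern: expired entries are never deleted, so the cache object grows for as long as the process lives. A periodic sweep fixes that — a sketch using the same shape as the cache above:

```javascript
// Companion to the in-memory cache: evict expired entries so the
// cache object doesn't grow unbounded over the life of the process.
var cache = {};
var CACHE_TTL = 300000; // 5 minutes, matching the cache above

function setCache(key, data) {
    cache[key] = { data: data, timestamp: Date.now() };
}

function sweep() {
    var now = Date.now();
    Object.keys(cache).forEach(function (key) {
        if (now - cache[key].timestamp >= CACHE_TTL) {
            delete cache[key];
        }
    });
}

// Sweep on the same cadence as the TTL; unref() keeps the timer
// from holding the process open on its own.
setInterval(sweep, CACHE_TTL).unref();
```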
Monitoring Without the Overhead
I'm not running Prometheus, Grafana, or Datadog. I'm running a cron job and common sense.
Basic Health Check Script
#!/bin/bash
# /opt/scripts/health-check.sh

SITES=("https://grizzlypeaksoftware.com" "https://autodetective.ai")

for site in "${SITES[@]}"; do
    STATUS=$(curl -s -o /dev/null -w "%{http_code}" --max-time 10 "$site")
    if [ "$STATUS" != "200" ]; then
        echo "ALERT: $site returned $STATUS" | mail -s "Site Down" [email protected]
    fi
done
Runs every five minutes via cron. If a site is down, I get an email. That's it. Is this as fancy as PagerDuty? No. Does it work? Yes. Have I ever missed an actual outage because of it? No.
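For reference, the crontab entry behind that five-minute schedule (added via crontab -e, using the script path above):

```
*/5 * * * * /opt/scripts/health-check.sh
```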
Disk Space Monitoring
Log files will eat your disk if you let them. PM2 handles log rotation for Node apps, but system logs need attention too.
# /etc/logrotate.d/nginx
/var/log/nginx/*.log {
    daily
    missingok
    rotate 7
    compress
    notifempty
}
I also run a weekly cron that alerts me if disk usage crosses 80%.
USAGE=$(df / | tail -1 | awk '{print $5}' | sed 's/%//')
if [ "$USAGE" -gt 80 ]; then
    echo "Disk usage at ${USAGE}%" | mail -s "Disk Warning" [email protected]
fi
PM2 Monitoring
PM2 has a built-in monitoring dashboard that's surprisingly useful.
pm2 monit
This gives you real-time CPU, memory, and log output for every managed process. I SSH in, check it, and SSH out. Takes 30 seconds.
Security on a Budget
Running everything on one box means your security posture matters more, not less. If someone gets in, they get everything.
UFW Firewall
ufw default deny incoming
ufw default allow outgoing
ufw allow ssh
ufw allow 'Nginx Full'
ufw enable
Three ports open: SSH (22), HTTP (80), HTTPS (443). That's it. PostgreSQL and MongoDB only listen on localhost. There is no reason for them to be reachable from the internet.
SSH Hardening
# /etc/ssh/sshd_config
PasswordAuthentication no
PermitRootLogin no
MaxAuthTries 3
Key-based auth only. No root login. Three failed attempts and sshd drops the connection; keep hammering and Fail2ban bans the IP. I've seen my Fail2ban logs. It blocks hundreds of IPs per day. The internet is a hostile place.
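One habit worth adopting when touching sshd_config on a remote box: validate the file before reloading, and keep your current session open until a fresh login succeeds. On Ubuntu the service is named ssh:

```
sshd -t && systemctl reload ssh
```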
Automatic Security Updates
apt install -y unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades
Security patches install automatically. I'm not going to pretend I SSH into my server every day to check for updates. Nobody does. Automate it.
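The dpkg-reconfigure step just writes a small config file. If you'd rather manage it by hand, or bake it into a provisioning script, the equivalent is:

```
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```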
When to Scale Up vs. Stay Lean
This is where experience matters more than technical skill. The question isn't can you scale — it's should you.
Stay on the $5 box when:
- Your traffic is under 50,000 pageviews per month
- Your database tables have fewer than a million rows
- Response times are under 500ms for dynamic pages
- You're the only developer
- Revenue doesn't justify higher infrastructure costs
Move to a bigger box when:
- PM2 is restarting processes due to memory limits more than once a day
- Database queries are consistently slow despite proper indexing
- You're running out of disk space even with aggressive log rotation
- You start needing background workers that compete with web processes for CPU
Move to multiple boxes when:
- You need zero-downtime deployments (though PM2's reload handles this for most cases)
- Database load requires its own dedicated resources
- You're serving more than 500 concurrent connections consistently
- Regulatory requirements demand data isolation
Notice I said "multiple boxes," not "Kubernetes." The step between a single server and a full orchestration platform is huge. Most side projects that outgrow one box can happily live on two or three boxes with a load balancer in front. You don't need container orchestration until you have a team that needs container orchestration.
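On the zero-downtime point: PM2's reload does a rolling restart, but it only delivers true zero downtime when the app runs in cluster mode with more than one instance — worth trying before reaching for a second box. A sketch:

```
# Run two instances behind PM2's built-in cluster-mode load balancer
pm2 start app.js -i 2 --name grizzlypeak
# Rolling restart: replaces instances one at a time
pm2 reload grizzlypeak
```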
The Real Cost Breakdown
Let me lay out what I actually pay for infrastructure across all my projects:
| Service | Monthly Cost |
|---------|--------------|
| DigitalOcean Droplet (1GB) | $6 |
| Domain registrations (3 domains, amortized) | $4 |
| DigitalOcean managed DB (PostgreSQL) | $15 |
| Backups | $1.20 |
| Total | $26.20 |
Okay, I lied slightly. The managed PostgreSQL database is separate and costs $15/month. I moved to a managed database after one too many times worrying about backup integrity. The $5 droplet still runs everything else — Node.js, Nginx, PM2, MongoDB — and the managed database handles PostgreSQL with automatic backups, failover, and point-in-time recovery.
Was the managed database worth it? For the PostgreSQL data that includes job listings and library content that would be painful to reconstruct? Absolutely. For MongoDB data like contact form submissions? Not worth the premium.
The total is still under $30/month. My side projects generate more than that. The infrastructure pays for itself many times over.
The Deployment Pipeline
No CI/CD platform. No GitHub Actions (for deployment, anyway). Just a shell script.
#!/bin/bash
# /opt/scripts/deploy-grizzlypeak.sh
set -e  # bail out immediately if git pull or npm install fails

cd /var/www/grizzlypeak
git pull origin master
npm install --production
pm2 restart grizzlypeak

echo "Deployed at $(date)" >> /var/log/deployments.log
I SSH in. I run the script. It pulls the latest code, installs any new dependencies, and restarts the process. The whole deployment takes about 15 seconds.
Is this "proper" CI/CD? No. Is it appropriate for a one-person operation running side projects? Absolutely. I've shipped hundreds of deployments this way without a single incident caused by the deployment process itself.
When the project graduates to something that justifies automated deployments — like when I moved Grizzly Peak Software to DigitalOcean's App Platform for automatic deploys from GitHub — then I'll set it up. But I don't front-load complexity on the hope that I'll need it someday.
What I'd Do Differently
If I were starting fresh today, the only thing I'd change is starting one droplet tier up, with 2 GB of RAM instead of 1 GB. The few extra dollars a month buy breathing room that's worth more than a cup of gas station coffee.
I'd also set up proper database backups from day one instead of retroactively adding them after my first "oh no" moment. A simple pg_dump cron job to a DigitalOcean Space costs pennies and saves you from the kind of stomach-dropping moment that teaches you about backups the hard way.
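If you go the self-managed route, that day-one backup can be as small as a couple of cron entries. A sketch — the database name, bucket, and the use of s3cmd are all illustrative, not a prescription:

```
# /etc/cron.d/pg-backup — nightly dump, then ship it off-box
# (% must be escaped as \% inside crontab entries)
0 3 * * *  postgres  pg_dump mydb | gzip > /var/backups/mydb-$(date +\%F).sql.gz
30 3 * * * root      s3cmd put /var/backups/mydb-$(date +\%F).sql.gz s3://my-backup-space/
```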
Everything else — the stack, the approach, the philosophy of staying lean until the numbers demand otherwise — I wouldn't change a thing.
The tech industry has a spending problem disguised as a scaling problem. Most of us don't need more infrastructure. We need less ambition about infrastructure and more ambition about shipping. A $5 server that runs your code is infinitely more valuable than a $500 architecture diagram that doesn't.
Ship small. Stay lean. Scale when the numbers tell you to, not when your ego tells you to.
Shane Larson is a software engineer and the founder of Grizzly Peak Software. He writes code from a cabin in Caswell Lakes, Alaska, where his $5 server runs more reliably than his satellite internet connection.