The Solo Operator Tech Stack: What I Actually Use to Run Five Products
Five products, no employees, under $1K/month. Here's the exact stack, AI tooling, and philosophy behind running it all solo.
I run five products and a publishing operation. No employees, no co-founder, no DevOps team. Monthly infrastructure costs recently crossed into the $500 to $1,000 range after years of keeping them under $100. Here's exactly what the stack looks like.
This isn't a theoretical "ideal stack for indie hackers" post. This is what I actually use, what I ditched, and why.
The Products
Grizzly Peak Software (grizzlypeaksoftware.com): A technical content site for software engineers. Express.js, PostgreSQL, EJS templates, Bootstrap, Node.js. Hosted on DigitalOcean.
AutoDetective.ai: An AI-powered car diagnostic chatbot that generates repair content programmatically. DigitalOcean, PostgreSQL, OpenAI. This one has its own infrastructure, separate from Grizzly Peak.
MermAgent (mermagent.com): An agentic diagramming tool. Next.js, PostgreSQL, hosted on DigitalOcean.
CortexAgent (cortexagent.com): Same stack as MermAgent: Next.js and PostgreSQL on DigitalOcean. This one is in the early stages and may be rebuilt into something different.
voding.ai: Built on PostgreSQL, Redis, and Next.js on DigitalOcean. This one didn't find a profitable niche, so I'm evaluating a pivot for the domain.
Publishing operation: A catalog of 70+ books and 70 audiobooks on Amazon KDP. The production pipeline runs through Claude Code, DALL-E for cover art, and automated Pandoc export. No traditional infrastructure costs here: the "stack" is AI tooling and Amazon's platform.
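To make the publishing pipeline concrete, here's a minimal sketch of the Pandoc export step. The helper name, file names, and flags are illustrative assumptions, not the actual pipeline: the post doesn't show its internals.

```javascript
// Hypothetical sketch of the automated Pandoc export step.
// Builds the argument list for a Markdown-to-EPUB conversion.
function pandocArgs(manuscript, metadataFile) {
  return [
    manuscript,                      // e.g. "book.md" from the writing pipeline
    "--metadata-file", metadataFile, // title, author, cover image path, etc.
    "--toc",                         // generate a table of contents
    "-o", manuscript.replace(/\.md$/, ".epub"),
  ];
}

// The pipeline would then shell out, e.g.:
//   const { execFileSync } = require("node:child_process");
//   execFileSync("pandoc", pandocArgs("book.md", "meta.yaml"));
```

Keeping the argument construction in a pure function like this makes the export step easy to test without actually running Pandoc.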
The Infrastructure Layer
Almost everything runs on DigitalOcean. Droplets, managed databases, app platform. I've been on DigitalOcean long enough that I know the platform's quirks intimately, and the pricing is predictable. No surprise bills, no "you forgot to turn off that NAT gateway" moments.
DNS is split across DigitalOcean, GoDaddy, and Namecheap depending on where I originally registered the domain. I consolidate when it makes sense, but I don't lose sleep over having registrars in three places.
CI/CD runs through GitHub Actions and DigitalOcean's deployment pipeline. I use the DigitalOcean MCP server to manage deployments through my AI coding assistant, which means I can deploy without context-switching to a dashboard.
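For readers who haven't wired this up before, a deploy-on-push workflow can look something like the sketch below. This is a hypothetical minimal example, not the actual pipeline: the secret names and app ID are placeholders.

```yaml
# Hypothetical sketch of a push-to-deploy GitHub Actions workflow.
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: digitalocean/action-doctl@v2
        with:
          token: ${{ secrets.DO_API_TOKEN }}
      # APP_ID is a placeholder for the App Platform app identifier.
      - run: doctl apps create-deployment ${{ secrets.APP_ID }}
```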
Monitoring and alerting are custom-built. Not because I enjoy building monitoring systems, but because the off-the-shelf options either cost too much for a solo operator or give me dashboards full of metrics I don't care about. I built what I need: uptime checks, error rate alerts, and the specific health signals that matter for each product. It's not pretty, but it catches problems before users do.
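The core of a system like that is small. Here's a sketch of the kind of check logic involved: the thresholds and shape are illustrative assumptions, not the author's actual code.

```javascript
// Illustrative sketch of custom health-check evaluation.
// Returns a list of alert messages; an empty array means healthy.
function evaluateHealth({ up, errorRate, latencyMs }, thresholds = {}) {
  const { maxErrorRate = 0.05, maxLatencyMs = 2000 } = thresholds;
  const alerts = [];
  if (!up) alerts.push("uptime: endpoint unreachable");
  if (errorRate > maxErrorRate) alerts.push(`errors: ${(errorRate * 100).toFixed(1)}% over threshold`);
  if (latencyMs > maxLatencyMs) alerts.push(`latency: ${latencyMs}ms over threshold`);
  return alerts;
}
```

Per-product "health signals that matter" would just be different threshold objects passed into the same function, which is exactly the kind of 5%-of-Datadog scope a solo operator can afford to own.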
Email goes through Postmark for transactional messages and SendGrid for other delivery needs. I use multiple providers because different products have different email requirements: transactional receipts need different deliverability guarantees than marketing notifications.
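In practice that split comes down to a routing decision at send time. The provider names match the post, but this routing function and its message types are hypothetical:

```javascript
// Hypothetical sketch of routing mail by message type across two providers.
// Transactional mail (receipts, resets) needs the strongest deliverability,
// so it goes through the transactional provider.
function pickEmailProvider(messageType) {
  const transactional = new Set(["receipt", "password-reset", "invoice"]);
  return transactional.has(messageType) ? "postmark" : "sendgrid";
}
```

Everything else, newsletters and notification digests included, falls through to the second provider.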
Analytics is Google Analytics. I've looked at privacy-focused alternatives like Plausible, but Google Analytics is free, I already know it, and the data I need from it is straightforward. Pragmatism over purity.
The AI Tooling Layer
This is where the stack has changed the most in the last two years.
Claude Code is my primary AI coding assistant. I use it for everything: writing code, processing articles, managing deployments through MCP servers, and running the book publishing pipeline. It's the single tool that has had the biggest impact on my velocity as a solo operator.
Grok handles image generation. I switched away from Runway and DALL-E for most visual content because Grok's image output quality crossed a threshold where the other tools stopped being worth the friction. Cover images for blog posts, marketing visuals, social content: all Grok now.
ChatGPT fills in the gaps. Research, brainstorming, quick questions where I want a different perspective than Claude's. Having multiple AI tools isn't redundancy: each has a different personality and different strengths.
OpenAI API powers AutoDetective.ai's diagnostic engine. The architecture is designed to work with any OpenAI-compatible LLM interface, but GPT-5 class models are what's running in production right now.
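Provider-agnostic usually means the backend is just a configurable base URL. Here's a minimal sketch of that idea; the model name, env var, and defaults are placeholders, not AutoDetective's actual configuration:

```javascript
// Hypothetical sketch of keeping an LLM client provider-agnostic:
// any backend exposing an OpenAI-style chat API can be swapped in
// by changing baseURL, without touching the diagnostic engine itself.
function clientConfig({ baseURL = "https://api.openai.com/v1", model = "gpt-5" } = {}) {
  return {
    baseURL,
    model,
    apiKey: process.env.LLM_API_KEY ?? "",
  };
}

// e.g. clientConfig({ baseURL: "http://localhost:8000/v1", model: "local-model" })
// points the same engine at a self-hosted OpenAI-compatible server.
```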
What I Ditched and Why
Cursor. I used it early on and it was fine. Claude Code replaced it completely. The terminal-native workflow fits how I actually work better than an IDE-based AI integration. I don't need a fancy UI around my AI assistant. I need it in my terminal where I already live.
GitHub Copilot. Same story. Once Claude Code became my primary tool, Copilot's inline completions felt like autocomplete suggestions interrupting a conversation I was already having with a more capable partner.
Runway. I was using it for video and image generation. Grok's image capabilities caught up and passed it for my use cases. I don't generate enough video content to justify a separate tool for that.
The pattern: I try things, use them until something better fits my workflow, and cut without sentimentality. Tool loyalty is expensive when you're a solo operator. If something stops earning its place in the stack, it goes.
The Philosophy: Bootstrap Everything, Then Don't
My default is to build it myself. Custom monitoring, custom alerting, custom content pipelines, custom deployment workflows. Not because I think I can build a better Datadog. Because I know exactly what I need, and what I need is usually 5% of what a SaaS product offers at 100% of the price.
But I'm not dogmatic about it. When the cost of building exceeds the cost of paying, I pay. PostgreSQL is managed on DigitalOcean because I don't want to wake up at 3 AM to deal with a failed backup. Email goes through Postmark because I don't want to manage SMTP servers and deliverability reputation. Analytics is Google because building my own analytics platform would be insane.
The decision framework is simple: is this a differentiator or plumbing? If it's plumbing, pay someone. If it's something that directly affects my product or workflow and I can build exactly what I need in less time than I'd spend configuring a SaaS tool, I build it.
Running Five Products Solo
The honest answer is that they don't all get equal attention. At any given time, one or two products are in active development and the rest are in maintenance mode. The infrastructure is stable enough that "maintenance mode" means checking alerts and handling the occasional user issue, not constant firefighting.
The key architectural decision that makes this possible: every product uses the same basic patterns. PostgreSQL for data. DigitalOcean for hosting. GitHub for source control. The same deployment patterns. The same monitoring approach. When I context-switch between products, I'm not also switching mental models for infrastructure.
This is the real argument for boring technology choices. I don't use PostgreSQL because it's the best database for every use case. I use it because I know it deeply, my tooling works with it, and I can diagnose problems at 11 PM without reading documentation. Multiply that across five products and the compounding value of familiarity becomes enormous.
The Cost Reality
For years I kept infrastructure under $100/month. That's the bootstrapper sweet spot: cheap enough that revenue pressure is low, expensive enough that you're running real infrastructure and not toy projects.
That number is climbing into the $500 to $1,000 range now. More products, more traffic, more AI API usage. The AI costs are the fastest-growing line item: OpenAI API calls for AutoDetective, Claude Code usage across everything, image generation. These costs scale with usage in a way that static hosting never did.
I'm not worried about it. The revenue scales with the same usage. More AutoDetective diagnoses mean more affiliate revenue. More books mean more KDP royalties. More articles mean more traffic. The cost increase reflects growth, not waste.
What I'd Tell a Solo Operator Starting Today
Pick one cloud provider and learn it deeply. DigitalOcean, AWS, whatever: just pick one and stop evaluating. The switching costs aren't the migration: they're the mental overhead of maintaining expertise across multiple platforms.
Use PostgreSQL. Use it for everything until it genuinely can't handle what you need. That day may never come.
Get an AI coding assistant and make it central to your workflow, not supplementary. The difference between using AI occasionally and using it as your primary development interface is the difference between a solo operator who ships one product and a solo operator who ships five.
Build your own tooling only when you know exactly what you need and the commercial options would cost more than your time. Otherwise, pay for SaaS and spend your time on product.
And cut tools ruthlessly. Every tool in your stack is a surface area for problems, a subscription to manage, and a context switch to maintain. The best stack is the smallest stack that gets the job done.