I Tracked Every Hour and Every Dollar for 6 Months of AI-Augmented Work: Here Are the Numbers
Six months of tracked hours and output reveal where AI tooling actually boosts solo developer productivity and where it quietly costs you time.
An Ask HN thread last week posed a blunt question: do you actually make more money now that you use AI for everything?
The comments were what you'd expect: a mix of "dramatically yes," "about the same but different," and "I don't know, I haven't tracked it." I've been tracking it. Here's what six months of data looks like when you take the question seriously.
The Setup
I'm a solo operator running several software products: a content site for engineers, a programmatic SEO chatbot in the automotive space, an agentic diagramming tool, and a handful of other projects across content creation, software development, and occasional consulting. I started tracking time and output systematically six months ago with a specific question: has AI tooling changed my effective output per hour, and if so, where?
I'm not going to give you specific revenue numbers: that's not what this article is about, and frankly it varies too much based on factors unrelated to AI tooling. What I'll give you is ratios and categories, which is the genuinely useful information anyway.
Where AI Saved Real Time
Software Development
This is the clearest win, and it's substantial. I use Claude Code daily. My estimate based on time logs: code that would have taken me 3-4 hours to write correctly (routes, database queries, validation logic, utility functions) now takes 45-90 minutes including review.
The math: roughly 50-60% reduction in implementation time for well-defined tasks. This is real and consistent. I have not found it to be overstated.
The caveat: this is for implementation of things I already know how to build. For architecture and design decisions (what to build and how to structure it) AI assistance is useful but not transformative. The time savings are in execution, not in thinking.
Boilerplate and Scaffolding
I started a new Express.js service recently that needed the standard setup: middleware stack, database connection pooling, error handling, logging, health check endpoints. This used to take me 2-3 hours to set up correctly. With AI assistance, it took about 25 minutes.
I know this code well enough to review it confidently. The AI produced correct output and I verified it. Net result: high-value time savings on low-value work.
First Drafts of Documentation and Articles
I write a lot. Technical articles, documentation, README files. AI assistance has changed this workflow significantly. The first-draft stage (getting structured thoughts onto the page) is faster. Not because the AI writes for me, but because it helps me outline and then I write into a structure rather than writing into a blank page.
My estimate: first drafts that previously took 2-3 hours take 45-60 minutes. Final, polished drafts still take similar time because editing is editing.
Where AI Did Not Save Time (And Sometimes Added It)
Prompt Engineering for Novel Problems
When I'm doing something genuinely new (a feature I haven't built before, a domain I'm not deeply familiar with) the time I spend figuring out how to prompt the AI effectively, evaluating the output, and iterating often equals or exceeds what I would have spent just building it.
This is not an indictment of AI tooling. It's a calibration. AI tools excel at well-defined, familiar problems. They're less useful when you're at the frontier of your own knowledge, because you don't yet know enough to evaluate what the AI gives you.
I've had sessions where I spent 90 minutes getting AI-generated output to the point where it was ready for review, and I would have had working code in 60 minutes if I'd just written it myself. This happens when the problem is genuinely novel or when I've been imprecise in how I've framed the requirements.
Debugging Non-Obvious Issues
This one surprised me. My prior expectation was that AI debugging assistance would be uniformly helpful. It is not.
For common, well-documented bugs (the kind you'd find on Stack Overflow) AI assistance is fast and accurate. For subtle, context-specific issues (a race condition in async code that only manifests under certain load patterns, a PostgreSQL query that executes without error but produces wrong results on a specific data shape) AI assistance is often unhelpful or actively misleading.
The reason: the AI produces plausible-sounding explanations and suggests fixes that look right. On a subtle bug, you can lose an hour implementing suggestions that don't solve the problem while feeling like you're making progress. I've learned to do my own systematic debugging first for non-obvious issues and bring in AI assistance once I've narrowed the problem space significantly.
The Review Tax
This is the hidden cost that doesn't show up in simple "time to write code" comparisons.
When I write code myself, I understand it. When I review AI-generated code, I have to build that understanding by reading. For short, well-defined functions this is fast. For longer or more complex generated code, the review can take meaningful time, and it should, because that review is the only thing standing between AI-generated code and production.
I estimate I spend 20-30% of the "time saved" in generation on review. This is appropriate and I don't consider it wasted time. But it means the net savings are somewhat smaller than the raw generation speed improvement suggests.
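The accounting is worth making concrete. A sketch of the arithmetic, where the 20-30% review fraction is the estimate from my logs but the hours are invented for illustration:

```javascript
// Net time savings once the review tax is subtracted.
// reviewFraction is the share of the raw "time saved" that goes to
// reviewing generated code (20-30% in my logs; hours are illustrative).
function netSavings(manualHours, aiHours, reviewFraction) {
  const rawSaved = manualHours - aiHours;        // naive "time saved"
  const reviewCost = rawSaved * reviewFraction;  // review tax on those savings
  return rawSaved - reviewCost;                  // what you actually bank
}

// Example: a task that took 4h by hand and 1.5h with AI generation.
// Raw savings: 2.5h. With a 25% review tax: 1.875h net.
const net = netSavings(4, 1.5, 0.25);
```

The shape of the formula matters more than the numbers: the review cost scales with how much you generated, so the bigger the raw win, the bigger the tax on it.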
The Revenue Question
The HN thread was really asking: has this translated to more money? The answer for me is: yes, but not through the mechanism most people assume.
The naive model is: AI makes you faster, you can take on more projects, you make more money. This is partly true but it's not the main mechanism.
The more important mechanism: AI tooling allows a solo operator to maintain a portfolio of projects that would previously require a team. I can build and maintain more software, produce more content, and respond to more opportunities: not because I work faster at individual tasks, but because the ceiling on what a single person can operate has moved up.
The second mechanism: faster iteration on product ideas. Things I would have deprioritized because "I'll get to it eventually" now get built. Some of those things have driven revenue. The causality is diffuse but real.
The third mechanism, and this is the uncomfortable one: I'm not sure the revenue increase is proportional to the productivity increase. Productivity gains that don't translate to customer value or market differentiation don't show up in revenue. Writing code faster doesn't make your product better if you're writing the wrong code faster. Some of my AI-accelerated development has been on features that mattered; some has been on features that didn't.
What I'd Tell Someone Starting This Tracking
Track actual output, not just time. "Hours spent coding" is a poor proxy for value produced. Track what shipped, what's in production, what's being used. Time saved matters only if you deploy it on high-value work.
Track where AI doesn't help. The failure modes are as informative as the successes. Where do you consistently find AI output requiring heavy revision? Where does AI assistance leave you more confused than when you started? Those are calibration signals.
Be honest about review time. Every "hour saved" in generation has a review cost. If you're not accounting for review, you're overstating your productivity gains.
Track prompt quality over time. I've gotten better at prompting over six months. The gains I see now are partly attributable to AI tooling and partly attributable to my improving skill at using it. Both matter, and they're not the same thing.
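The log entries behind this kind of tracking don't need to be elaborate. A sketch of the shape I'd suggest, with the fields the advice above argues for: the field names are illustrative, not my actual tracking format.

```javascript
// One log entry per task. "shipped" and "reviewMinutes" are the fields
// the advice above argues for: output, not just hours, and review time
// tracked explicitly. The schema is an illustrative sketch.
function makeEntry({ task, category, minutes, reviewMinutes, aiAssisted, shipped, notes }) {
  return {
    date: new Date().toISOString().slice(0, 10),
    task, category, minutes, reviewMinutes, aiAssisted, shipped, notes,
  };
}

// Roll up per category: total time, review overhead, and what shipped.
function summarize(entries) {
  const byCategory = {};
  for (const e of entries) {
    const c = byCategory[e.category] ??=
      { minutes: 0, reviewMinutes: 0, shipped: 0, total: 0 };
    c.minutes += e.minutes;
    c.reviewMinutes += e.reviewMinutes;
    c.total += 1;
    if (e.shipped) c.shipped += 1;
  }
  return byCategory;
}
```

A spreadsheet works just as well; the point is that "shipped" and "reviewMinutes" exist as columns at all, so the questions above are answerable later.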
The Honest Summary
Six months of tracking gives me more confidence in some things and more humility about others. AI tooling has genuinely increased my output for well-defined implementation tasks. It has not made hard problems easy. It has added a review discipline that I didn't have before. And it has let me operate at a scope that would have required a small team previously.
Whether that translates to more money depends on what you're building and whether the market values what you're now able to produce. The tooling is a multiplier. The base has to be there first.