AI Integration & Development

The Skills That Actually Matter in 2026: What I've Learned Building with AI Every Day

After 40+ years in software, here are the engineering skills AI has devalued and the ones worth doubling down on right now.

A piece on Substack by Benjaminsen made the rounds recently, asking how engineers stay relevant in a post-AI world. It's worth reading because it takes the question seriously instead of offering false comfort. But it stays at the level of principles, and I want to get more concrete.

I've been building software for over 40 years. For the last several years, I've used AI tools daily: for code generation, for debugging, for drafting architecture proposals. I've watched what the tools do well, where they fall apart, and what that means for which skills actually matter right now.


Here's what I've observed.


The Skills That Have Declined in Value

Let me start here because the industry is reluctant to say it plainly.

Boilerplate production. Writing a CRUD controller from scratch, setting up a standard Express route, scaffolding a React component: AI does this faster than humans. Not better in every case, but fast enough that the human's speed advantage is gone. If your primary value was writing this kind of code efficiently, the value has been compressed.

Memorizing syntax and API surfaces. Knowing the exact arguments to Array.prototype.reduce() from memory was a mild skill signal in 2018. It means nothing now. The AI knows every API. Searching documentation is dead. Knowing where to look is less relevant than knowing what to ask for.
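To make that concrete, here's the kind of detail that used to be worth memorizing: the exact callback signature reduce() expects, including the initial-value argument people routinely forgot. (The order totals here are invented for illustration.)

```javascript
// Array.prototype.reduce(callback, initialValue)
// The callback receives (accumulator, currentValue, index, array).
const orderPrices = [12.5, 3.25, 7.0];

// Summing with an explicit initial value -- the second argument
// whose existence was once a mild interview signal.
const total = orderPrices.reduce((sum, price) => sum + price, 0);
console.log(total); // 22.75
```

Recalling that second argument from memory was once a small differentiator. Now any assistant supplies it instantly, correctly, every time.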

Routine code translation. Converting Python to JavaScript, migrating from one ORM to another, reformatting data structures: these are now tasks you describe, not tasks you do. I used to spend meaningful time on migrations. I don't anymore.
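A trivial example of the kind of mechanical translation that used to be hand work (the function is invented for this sketch; the Python original is shown in a comment for comparison):

```javascript
// Original Python, for comparison:
//   def top_n(items, n):
//       return sorted(items, reverse=True)[:n]
//
// The mechanical JavaScript equivalent -- a task you now describe
// in a sentence rather than perform by hand:
function topN(items, n) {
  // Copy before sorting: Array.prototype.sort mutates in place,
  // unlike Python's sorted().
  return [...items].sort((a, b) => b - a).slice(0, n);
}

console.log(topN([3, 1, 4, 1, 5], 2)); // [5, 4]
```

Even this tiny translation has a trap (sort mutates, sorted() doesn't) that the tools handle reliably. Multiply that by a few thousand lines and the time savings are real.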


The Skills That Have Increased in Value

Systems Thinking

When AI generates code, it generates code that satisfies the local requirement. It doesn't understand the system it's being inserted into. It doesn't know that your background job queue saturates at a certain concurrency level, or that a particular database table is a known bottleneck, or that this service is going to be called from a mobile client on a 3G connection in a rural area.

Systems thinking: holding the whole architecture in your head, understanding how components interact under load and under failure, anticipating emergent behavior. This is not something AI assists with well. It requires context that lives in your head, in conversations, in the history of decisions made and reversed.

Engineers who can reason about systems, not just components, are more valuable now because they're the ones who prevent AI-generated code from creating elegant solutions to the wrong problems.

Debugging AI Output

This is a new skill and it's genuinely different from debugging your own code.

When I write code and it's wrong, I have a mental model of where I might have made an error. When AI writes code and it's wrong, I have to reconstruct what the AI was "thinking": what pattern it was following, what it got right, where it diverged from correctness.

AI bugs are often subtle in a particular way: the code is idiomatic, it looks right, it handles the happy path, and it fails in an edge case that the AI didn't model. Finding these requires reading code carefully with a skeptical mindset, not the trusting eye you bring to your own output.
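Here's the shape such a bug often takes, as a hedged illustration. The function and scenario are invented for this sketch, but the pattern is one I see regularly: idiomatic code, correct on the happy path, wrong on an input the generator never modeled.

```javascript
// Looks right, reads idiomatically, passes a quick review --
// and silently returns NaN for an empty input.
function averageLatency(samples) {
  const total = samples.reduce((sum, ms) => sum + ms, 0);
  return total / samples.length; // 0 / 0 === NaN when samples is empty
}

// The skeptical-reader fix: make the edge case an explicit decision.
function averageLatencySafe(samples) {
  if (samples.length === 0) return 0; // or throw, depending on the contract
  return samples.reduce((sum, ms) => sum + ms, 0) / samples.length;
}

console.log(averageLatency([10, 20, 30]));  // 20
console.log(averageLatency([]));            // NaN -- the subtle failure
console.log(averageLatencySafe([]));        // 0
```

Nothing about the first version looks wrong until a dashboard starts rendering NaN in production. Catching it requires reading with the assumption that the edge cases were never considered.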

Engineers who can read code they didn't write, understand its assumptions, and probe its failure modes are doing high-value work right now.

Architecture and Design Decisions

AI can implement an architecture. It cannot choose one.

When I'm deciding whether to use an event-driven architecture versus request-response for a particular integration, or whether a feature belongs in the monolith or deserves to be a separate service, or whether to build something versus buy it: the AI can help me think through tradeoffs if I describe them accurately. But it cannot make the call. It doesn't know my team's operational maturity, my traffic patterns, my deployment constraints, my debt load.

Architecture decisions require judgment that comes from experience and context. That judgment has not been commoditized. If anything, it's more in demand because teams can ship faster now, which means they get to architectural inflection points faster.

Domain Expertise

AI is a generalist. It knows a little about everything. It produces code that looks reasonable for a generic e-commerce app, or a generic content platform, or a generic SaaS.

If you understand the specific domain deeply: the edge cases in automotive diagnostic data, the compliance requirements in healthcare, the settlement patterns in financial systems: you produce solutions the generalist cannot. Your domain expertise is a forcing function that constrains the AI's output to something actually correct for the use case.

The more specialized and consequential the domain, the more your expertise matters relative to AI capability.

Communication and Requirement Precision

This one surprised me. The engineers who get the most out of AI tools are the ones who can articulate requirements clearly: not just "build a login system" but "build a JWT-based authentication flow using HS256 algorithm pinning, with parameterized database queries, no hardcoded fallback secrets, and explicit failure if required environment variables are missing."
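One clause of a requirement like that translates almost directly into code. As a minimal sketch (the variable name JWT_SECRET and the function are my own illustration, not from a real system), here's what "no hardcoded fallback secrets, explicit failure on missing environment variables, HS256 pinning" looks like once specified precisely:

```javascript
// Turn the precise requirement into enforced constraints at startup.
function loadAuthConfig(env) {
  // Explicit failure if the required variable is missing --
  // no `|| 'dev-secret'` fallback that ships to production.
  const secret = env.JWT_SECRET;
  if (!secret) {
    throw new Error('JWT_SECRET is required; refusing to start without it');
  }
  return {
    secret,
    // Algorithm pinning: a verifier configured from this list
    // rejects tokens signed with anything but HS256.
    algorithms: ['HS256'],
  };
}
```

Given the vague version of the prompt, an assistant will happily generate the hardcoded fallback. Given the precise version, it generates something close to this. The specification is doing the engineering work.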

Precision in requirements was always valuable. It's now a direct multiplier on AI output quality. Vague prompt, vague code. Engineers who can decompose a problem, specify its constraints, and communicate them precisely are getting better AI output and doing less cleanup.

This is essentially written communication applied to software requirements. It's a skill that was undervalued when you were writing code yourself. It's not undervalued anymore.


What This Looks Like in Practice

When I'm building something now, my day looks different than it did five years ago. Less time writing implementation code. More time on:

  • Reading and reviewing AI-generated code with a critical eye
  • Thinking through system design before I start generating anything
  • Writing precise requirements and architecture docs that the AI can work from
  • Debugging the non-obvious failures that only emerge when the pieces connect
  • Making judgment calls that require context the AI doesn't have

The ratio has shifted. The work is still technical. The distribution of what constitutes "the work" has changed.


The Career Implications

The Benjaminsen piece argues that the engineers most at risk are pure implementors: those who execute well-defined specs without contributing to defining them. I think that's directionally right.

The engineers I've seen thrive in an AI-augmented workflow share some characteristics: they're curious about how systems work, not just how to make them work. They can communicate clearly in writing. They have opinions about architecture. They're skeptical of their own output, including AI-generated output.

None of those are new skills. They're the skills that distinguished senior engineers from junior engineers before AI. What's changed is that the leverage on those skills is higher now, and the penalty for lacking them is steeper.

The advice I'd give to an engineer worried about relevance: stop optimizing for implementation speed. Start optimizing for judgment, communication, and systems thinking. Those are the skills that compound in an AI-augmented world.
