Staying Sharp in the Age of AI
There’s a tension I’m feeling with AI-assisted development. The tooling is powerful enough to handle significant implementation work, but that power makes it easy to stop engaging with the detail. Software engineering was never about typing code. It’s systems thinking, understanding how components interact, anticipating failure modes, recognising when something feels wrong before I can articulate why. AI doesn’t threaten that thinking directly, but it does make it easier to skip.
The Higher Level of Abstraction
Working with AI means working at higher levels of abstraction. Instead of writing every line, I’m directing, reviewing, and shaping. This seems to be how engineering has always evolved. Assembly gave way to C, manual deployments gave way to infrastructure as code. Each step traded direct control for leverage.
That shift isn’t inherently a problem. Directing AI well is a genuine skill: knowing what to ask for, how to decompose a problem, when to accept output and when to push back. When done well, this isn’t less work. It’s different work.
But there’s a difference between choosing to work at a higher level and drifting away from the lower ones without noticing.
Skill Atrophy
When code arrives fully formed, the temptation is to test it, see it work, and move on. Each time I do, I miss the learning that comes from building it myself: the dead ends, the subtle bugs, the moment where the design clicks into place. Namanyay Goel described the progression at the start of 2025: first he stopped reading documentation, then his debugging skills waned, then deep comprehension went. He’d become a “human clipboard”, blindly shuttling errors to the AI and its solutions back into the code.
Over time, I could become fluent in directing AI while losing fluency in the underlying discipline. The danger isn’t forgetting syntax; it’s losing the instinct for when something is architecturally wrong, or the patience to trace a bug through a system I didn’t build myself.
The Trade-Off
There’s no clean answer here. Spending hours manually writing code that AI could produce in seconds has a real cost. That time could go toward design, architecture, or shipping something that matters. But never touching the lower levels erodes the instincts that make some of the higher-level work possible in the first place.
What “staying sharp” means has shifted. It used to mean writing code regularly. Now it’s more about understanding code deeply even when I didn’t write it, and knowing not just what the AI produced but why it chose that approach.
The right balance depends on context, and it changes over time. What matters is being deliberate about it rather than letting the drift happen by default.
The Antidote
I don’t think the answer is to retreat to writing everything by hand. I’m focussing on staying engaged at the level that matters: outcomes, system behaviour, architectural coherence. I focus less on whether individual lines are correct and more on whether the system as a whole does what it should. When AI makes a choice, I want to understand why it made that choice, not just whether it works. Knowledge gaps still get closed deliberately, but the gaps that matter have shifted.
Staying sharp now means maintaining the ability to reason about the whole system, even when I didn’t write most of it.