What Makes Humans Valuable When AI Handles Execution
The Question
If AI can produce code, tests, and systems faster than we can, what exactly is the job? I’ve been sitting with this question for a while, and the answer I keep coming back to is that our ability to churn stuff out was never our value; it was the bottleneck. Execution speed was the constraint, not the contribution.
Our value was always in deciding what to build, why to build it, and whether it’s working as intended. Faster execution is a multiplier for that, not a replacement.
Jevons’ Paradox
There’s an economic concept that is often referenced when people predict that AI will reduce the need for developers: the Jevons paradox, named after the economist William Stanley Jevons. Writing in 1865, Jevons observed that James Watt’s steam engine, far more efficient than its predecessors, had not reduced coal consumption. The reasonable prediction was that it would fall. The opposite happened: greater efficiency made steam power viable for more applications, and total coal consumption increased dramatically.
The parallel to software development is hard to ignore. If AI makes building software significantly cheaper and faster, the reasonable prediction might be that we’ll need fewer developers. But Jevons’ paradox suggests the opposite: lower cost drives increased demand. For this to play out, three things probably need to happen:
- Developers get more productive using AI, producing more software, faster.
- Higher productivity reduces the cost of software, as each developer delivers more output.
- Lower prices bring increased demand, as software solutions become viable for problems that were previously too expensive to address.
The path is choppy, but we’re already seeing the first, and the second is following naturally. The third is the one that will create significant change.
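The mechanism in those three steps can be sketched with toy numbers. This is purely illustrative: it assumes demand for software follows a constant-elasticity curve, and the function name, elasticity value, and prices are all made up for the sake of the example. The point it demonstrates is that when demand is elastic (elasticity greater than 1), halving the cost of software *increases* the total amount of paid development work, which is the Jevons dynamic in miniature.

```python
# Illustrative sketch of the Jevons-paradox mechanism with made-up numbers.
# Assumption: demand follows a constant-elasticity curve, demand = k * price**(-e).

def total_work(price: float, elasticity: float, k: float = 100.0) -> float:
    """Price times quantity demanded: a proxy for total developer work."""
    demand = k * price ** (-elasticity)
    return price * demand

# Baseline: software at unit cost. Then AI halves the cost.
before = total_work(price=1.0, elasticity=1.5)
after = total_work(price=0.5, elasticity=1.5)

print(f"total work before: {before:.1f}")  # 100.0
print(f"total work after:  {after:.1f}")   # about 141.4

# With elastic demand (e > 1), cheaper software means MORE total work, not less.
assert after > before
```

With inelastic demand (elasticity below 1) the same calculation would show total work shrinking, which is exactly the "fewer developers" prediction. The argument of this section is a bet that software demand is elastic.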
Three Pillars of Human Value
Faster execution means more systems to test, correct, secure, operate, and maintain. But the human contribution to those systems isn’t about volume. It’s about three things that AI fundamentally cannot provide.
Judgement
AI optimises for metrics, but it can’t tell whether a metric is the right one. It can’t weigh second-order effects on users, trust, or reputation. Knowing why something should be built matters more than being able to build more of it.
Trust
Systems need human oversight. Someone has to check the incentives, pull the plug when things go wrong, and maintain legitimacy. An unsupervised AI system might be technically correct and still erode the trust that makes it useful.
Imagination
AI remixes the past in novel ways, and that’s genuinely useful. But it doesn’t dream up new futures or imagine fundamentally different approaches to broken processes. We still decide which goals are worth pursuing.
The Opportunity
Opportunity exists for software engineers in this space, but it comes with a condition. More systems mean greater demand for human skills, but only if we invest in what makes us valuable rather than competing with AI on what it already does well.
If we spend our time trying to out-code the machines, we’ll lose. If we invest in judgement, in building trust, in the kind of lateral thinking that imagines what hasn’t existed before, then faster execution just means we get to apply those uniquely human capabilities to more problems, more often.
The job is changing, and understanding where our real value lies makes the difference between riding this wave and being swept along by it.