What AI Changes
As my AI usage has increased significantly over the past quarter or so, I have also been thinking about it more. Beyond the practicalities — techniques, incantations, workflows, tools, subscriptions — I have been thinking about the approach I have, or the right approach one should have, when using these tools. Or, more simply put: what can these tools really do?
For one, what has been curious to me is the degree (and speed!) at which my premises and assumptions about this approach keep changing. Or rather, my willingness to update them (which in some ways is driven by fear, followed by excitement).
A rough outline of my usage: I went from moderate code completion on SuperMaven and Cursor, to deferring simple repetitive tasks, to using it extensively to research, investigate, and prototype, to deferring increasingly complex and ambiguous tasks to it (with varying degrees of hand-holding and continuous review). In essence, sliding right on the autonomy slider1.
On one hand, this technology is truly incredible. There are moments when it genuinely feels like magic that this kind of experience with computers is a reality; on many occasions it can feel like a superpower. On the other, it is far from a full-blown human replacement, which makes it hard to reconcile with the opinions and predictions one faces daily as a terminally online person.
Creativity has not been solved
Tasks that are inherently creativity-bound are still largely not "solved" by any of the frontier offerings. This is, admittedly, drawn from anecdotal evidence, my own and close friends': daily driving both Codex and Claude Code exposes these models' inherent bias toward the mean. That is not necessarily a bad thing, but it is not something you want when you are looking for novel outputs.
This can also be seen in publicly available results, from both companies and individuals. AI may increase one's own perceived creativity, i.e., one is exposed to ideas one might not otherwise have considered. But in the aggregate, it exposes everyone to the same ideas, which, again, is the opposite of creativity. Hence the volume of output most certainly increases, but most of it is not novel; it takes the shape of, e.g., experimental, amateur artifacts.
Taking the counterpoint: if creativity had been solved, or were close to it, we would be seeing an uptick in the novelty of output (businesses, projects, etc.), i.e., meaningfully creative and high-quality contributions. That has not been the case.
What is hard remains hard
We now have the means to produce much more code in less time. In fact, code quantity is seemingly one of the most notable (proudly) reported output metrics, a proxy for something2. But software remains... software. It is still built against the same constraints that the industry has been wrestling with for decades. The fundamentals, the specifics, the details still matter; they are perhaps now less legible, more opaque, but they are still there.
This means that the hard parts of the job are still hard. It also means that those who rush to automate anything and everything, including their judgement, understanding, and fundamentals, are bound for a correction.
A system that needs to be maintainable still requires someone who understands it well enough to keep it that way — the complexity needs to be understood and considered in full — until we reach an end game3 scenario. We now have some incredibly useful tools that make this job much easier: that, in my opinion, is the correct framing.
System understanding and accountability
The reason for the conclusion above is that there is no way around a fundamental property of human collaboration: it is bounded by trust. Bryan Cantrill argued about this dynamic in a recent talk4; it can be illustrated by the infamous quote:
A computer can never be held accountable. Therefore, a computer must never make a management decision.
Accountability requires traceability, which, in turn, requires individuals with sufficient understanding of the system to make informed decisions. Thus, experienced and competent software engineers should remain in demand, as they are the ones trained to fill this role.
In a pessimistic model, in which the increased productivity gains find no large new successful applications, total market demand for software engineers decreases over time. In such a world it would still be valuable to be a competent software operator, but the market would look more like a zero-sum game. A pessimistic world could also be a temporary transition to one that fulfills today's promises, much like the 2000-2007 period — though that analogy holds only if demand for software expands to absorb productivity gains, as it historically has. It breaks if AI's efficiency gains are net deflationary.
The frontier
The engineering of the future is interesting to think about, as cheap and intelligent codegen unlocks projects and enterprises that would otherwise be laughed off.
My mind has been marinating on some crazy ideas after stumbling on this piece about StrongDM5 (a company that spends over $1,000/day per engineer) and its approach to generating ambitious software at a larger scale. It is, indeed, fascinating to read their reports of commanding a swarm of software that is never reviewed (only its outputs are observed) to build a system that tests access-permission correctness. It reminded me of stories6 from folks doing similar cloning stunts around the time Ralph Wiggum was discovered, which is what Cursor and Claude had also been doing in building a browser and a GCC clone autonomously.
These systems and approaches all share a common denominator: they require concretely defined artifacts that can serve as objective sources of truth (e.g., "can we build Linux?", "what is our coverage of the wpt7 suite?", "do we have API parity with GitHub.com?", etc.).
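To make that common denominator concrete, here is a minimal, hypothetical sketch of the loop these setups share. It is not StrongDM's (or anyone's) actual system; the script name, the agent step, and the scoring format are all stand-ins. An agent generates code, an objective oracle scores the output, and the loop repeats without any human reading the code in between.

```python
import subprocess

def oracle_score(workdir: str) -> float:
    """Objective source of truth: run an automated conformance check and
    return a pass rate. Only this number is observed; the generated code
    itself is never reviewed by a human."""
    # Hypothetical script that prints e.g. "1423/2000" (passed/total).
    result = subprocess.run(
        ["./run-conformance-suite.sh"],
        cwd=workdir,
        capture_output=True,
        text=True,
    )
    passed, total = map(int, result.stdout.strip().split("/"))
    return passed / total

def agent_step(workdir: str, feedback: str) -> None:
    """Stand-in for one unreviewed codegen iteration (Codex, Claude Code, ...)."""
    ...

def autonomous_loop(workdir: str, target: float = 0.99, budget: int = 1000) -> float:
    """Generate, measure against the oracle, repeat until the target pass
    rate or the iteration budget is hit."""
    score = oracle_score(workdir)
    for _ in range(budget):
        if score >= target:
            break
        agent_step(workdir, feedback=f"pass rate is {score:.2%}; improve it")
        score = oracle_score(workdir)
    return score
```

The whole scheme hinges on the oracle being cheap to run and hard to game: a kernel build, a wpt run, or an API-parity suite qualify, while a question like "is this design maintainable?" does not.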
AI autonomy scales where the oracle is cheap. The rest of software is still waiting for its oracle — which is, in the end, what accountability has always required of engineers too.