People ask: "How did you verify code worked before AI?" My answer is boring: the same way I do now.
AI in the editor doesn't change fundamentals. If anything, it makes them more important. AI is compression of time, not replacement of expertise. It helps you move faster where you already understand the problem; it doesn't remove the need to understand what you're shipping.
The failure mode I see more often now isn't wrong code. It's correct code built on a wrong assumption. I recently approved a change that was mostly written by AI. It looked excellent: clean structure, clear documentation (including ADRs), and more tests than anyone would write by hand. It still broke pre-prod. The code passed its tests, but the assumption underneath was wrong. AI didn't just make the mistake faster; it made it with better documentation, better tests, and a confident tone. It looked trustworthy by our usual standards. It produced what felt like proof-of-thought. But it wasn't.
Before LLMs, it was rare to see a polished patch in a domain the author barely understood. Writing code took enough effort that polish implied work, and work implied understanding. That implication no longer holds. Today, it's easy to produce code that looks correct, well tested, and well documented in areas you don't fully understand. Code quality, structure, and test coverage no longer tell you whether someone understands the domain, knows the edge cases, or can debug the system when it fails. That burden moves to reviewers, who now have to reconstruct the mental model and validate the assumptions themselves. You can't rely on the usual signals anymore.
The problem isn't that AI writes bad code. It's that it writes convincing code without owning the assumptions.
We used to get proof-of-thought for free because producing a patch took real effort. Now that writing code is cheap, verification becomes the real proof-of-work. I mean proof of work in the original sense: effort that leaves a trail of careful reviews, assumption checks, simulations, stress tests, design notes, and postmortems. That trail is hard to fake. In a world where AI says anything with confidence and the tone never changes, skepticism becomes the scarce resource.
Being obsessed with verification mostly means slowing down where it matters: reading code slowly enough to build a real mental model, reasoning in invariants and failure modes, and adding multiple layers of validation when mistakes would be costly.
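To make "reasoning in invariants" a little more concrete, here is a minimal sketch in Python. The domain (a hypothetical transfer between two accounts) and every name in it are illustrative assumptions, not code from the change above; the point is that the check asserts a property the system must always preserve, across many inputs, rather than one hand-picked expected output.

```python
# A minimal sketch of an invariant check: instead of asserting a single
# expected result, assert a property that must hold for every valid input.
# The domain (account transfers) and all names here are hypothetical.

import random


def transfer(accounts: dict[str, int], src: str, dst: str, amount: int) -> dict[str, int]:
    """Move `amount` from src to dst; reject negative amounts and overdrafts."""
    if amount < 0 or accounts[src] < amount:
        raise ValueError("invalid transfer")
    updated = dict(accounts)
    updated[src] -= amount
    updated[dst] += amount
    return updated


def check_conservation_invariant(trials: int = 1_000) -> None:
    """Invariant: no transfer, accepted or rejected, creates or destroys money."""
    for _ in range(trials):
        accounts = {"a": random.randint(0, 100), "b": random.randint(0, 100)}
        amount = random.randint(0, 150)
        total_before = sum(accounts.values())
        try:
            accounts = transfer(accounts, "a", "b", amount)
        except ValueError:
            pass  # a rejected transfer must leave balances untouched
        assert sum(accounts.values()) == total_before, "money created or destroyed"


if __name__ == "__main__":
    check_conservation_invariant()
    print("conservation invariant held for all sampled inputs")
```

A check like this is cheap to write and hard for a plausible-but-wrong assumption to survive: if generated code quietly created or destroyed money on some path, randomized inputs would eventually expose it, regardless of how polished the patch looks.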
My stance (for now) is simple: assume AI will effortlessly produce plausible garbage; treat AI-assisted changes as untrusted until proven safe; let verification, not polish, be the main signal of trust; slow down around assumptions and interfaces so you can move faster everywhere else.
We're not going back to a pre-AI world. But we don't have to outsource the one thing that still makes us engineers: the willingness to say "I don't trust this yet" and do the work to prove it's right.