People make wrong decisions when they move fast and don't think. That's not new; we watched it happen through the entire “agile” era. What coding agents did is put a rocket under that same dynamic.
Unfortunately, we have reached the stage where an advanced coding agent running a frontier model probably produces output that is, on average, superior to that of the average programmer on planet Earth, and produces it a thousand times faster. For organizations and developers who are comfortable shipping products with low assurance, their itch is scratched: they can do at least as well as they're doing right now, with fewer people and less time. They pull that slop machine over and over until they get something that seems right and smells good enough, and then they ship it.
But there is an unfounded confidence that rides along with this, because superior-looking output is not evidence. Someone told us recently that their coding agent was smarter than any professor they ever had. They're missing the point, because in a lot of ways it isn't smart at all. The output can look great and still carry no evidence about why it's correct: no specification it was built against, no model connecting it to your requirements, no reasoning you can check now or have checked automatically later. The better the output looks, the easier it is to trust a process that isn't actually giving you any evidence about its outcomes. You end up paying for speed that compounds against you: every artifact the slop machine produces without assurance is a liability you've accepted without knowing its terms.
Now consider the scale. If every developer using a coding agent is generating ten to a hundred times more code, without deep understanding of it, and without any assurance behind it, we are flooding the world with unvetted software at a rate we've never seen before.
That much unvetted code is now finding its way into systems people depend on. Infrastructure, defense, finance, healthcare… shipped with little human insight into the threat model or the architecture. The result is a vastly larger attack surface and a nonexistent assurance case, because the codebase was produced by pulling the slop machine over and over.
If you're building anything where failure has real consequences, you should be able to stand up in front of an audience, or a courtroom, or your own peers, put your hand over your heart, and say: "Here's what I believe this product does, here's what it protects against, and here's what I'm promising to the world." We'd wager that very few people would stand up and make those assertions about a codebase built by a coding agent, not at the velocity people are moving now. For every impressive piece of code a frontier model generates, there is a README, a doc comment, or a configuration file that, upon closer inspection, gets something so fundamentally wrong about the system that, if you know the subject matter, you hardly know where to start correcting it. The impressive output and the confidently broken output come from the same process.
The answer isn't to slow down; speed is not the enemy. But fast without understanding is just running into an electric fence. The answer is to build rigor into the artifacts as you create them: formal specs, models, verification, traceability… so you actually have evidence behind your promises.
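To make that concrete, here is a minimal sketch of the weakest form of this idea: a machine-checkable specification plus an automatic check, rather than eyeballing the agent's output. All the names (`spec_sorted`, `agent_sort`, `check`) are hypothetical, and random property checking is far short of the formal verification named above, but even this much turns "it looks right" into a claim a machine can falsify.

```python
import random

def spec_sorted(inp, out):
    """Specification for a sort: the output is ordered and is a
    permutation (same multiset) of the input."""
    ordered = all(out[i] <= out[i + 1] for i in range(len(out) - 1))
    permutation = sorted(inp) == sorted(out)
    return ordered and permutation

def agent_sort(xs):
    """Stand-in for agent-generated code under review.
    (Here it happens to be correct.)"""
    return sorted(xs)

def check(impl, spec, trials=1000):
    """Run the implementation against the spec on random inputs.
    Returns (True, None) if every trial passes, else (False, counterexample)."""
    for _ in range(trials):
        inp = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
        out = impl(list(inp))  # copy, so mutation can't hide a bug
        if not spec(inp, out):
            return False, inp
    return True, None
```

The point is the shape, not the toy: the spec is an artifact you can keep, re-run against the next regenerated codebase, and point to when you make your promises.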

