When Code Is Easy, Judgment Is Scarce
I read a post from tldraw about AI-generated pull requests, and it nails the real problem. The PRs were correct. They compiled. They looked clean. They were also useless. The team ended up closing PRs by default, because reviewing plausible nonsense is worse than not reviewing at all.
This is what AI changes. Writing code is getting cheap. Deciding which code belongs is getting expensive. The hard part is no longer the diff; it is the context: why this exists, what it breaks, what it trades away. AI does not know that. It can only fake it.
For engineering leaders, the review bottleneck is not a tooling failure. It is the work. It is where product intent survives. It is where long-term stability beats short-term output.
For non-technical leaders, speed is not progress. AI can ship fast and still move you sideways. Output is easy. Signal is rare. Every AI-generated change you accept is a bet that the author understood the system. Most of the time, no one did.
The quiet risk is cultural. If the team starts treating generation as the primary skill, you end up with people optimizing prompts instead of building understanding. That creates a review tax that never goes away, because the volume of generated code keeps scaling while the team's shared context stays finite. The organization looks productive until you try to change direction and realize no one knows what was shipped or why.
If code keeps getting easier to produce, then judgment becomes the scarce resource. The winning teams will treat context like capital and invest in the people who can spend it well.