We should stop pretending that LLMs are software engineers
I recently wrote about how overdoing it with AI coding tools can lead to complacency, and I wanted to dig deeper to understand why this trap exists in the first place.
I think it's because of the marketing hype surrounding AI coding tools. Often, the companies that build and sell these tools present them as if they're software engineers in their own right -- but they're not.
Sure, you can ask them to vibe-code an idea into something functional, but there isn't yet a tool whose output is consistently good enough for humans to pick up, read, and maintain. Even with long-term steering mechanisms like CLAUDE.md and AGENTS.md, these tools often stray and produce code that works but looks like it was designed and styled in isolation.
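For readers who haven't used them: these are plain markdown files of project instructions that the agent reads before working. A minimal sketch of what one might contain (the rules below are illustrative, not from any real project):

```markdown
# CLAUDE.md (hypothetical example)

## Conventions
- Reuse the existing error-handling helpers; don't invent new ones
- Match the naming and file layout of neighboring modules
- Keep functions small; prefer composition over inheritance

## Workflow
- Run the test suite before declaring a task done
- Never touch files outside the directory you were asked to edit
```

Even with rules this explicit sitting in context, the steering is soft: nothing forces the model to honor them on every edit, which is exactly where the straying comes from.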
We humans are good at building mental maps of a codebase, in large part because we have the ability to organize things spatially -- and we excel at it. We also have strong long-term memory, which lets us learn and apply developer conventions, coding standards, and styles -- especially when they come up in PR reviews and retrospectives.
I truly think that to build good software, the best way to use AI coding tools is to stop anthropomorphizing them and treat them purely as tools -- just like IntelliSense and the code-completion tools that existed before LLMs -- and to apply them surgically and with intent. This is also why I think Cursor peaked with Cursor Composer, and it has been downhill from there. A tool that follows instructions closely and makes precise edits to a few specific files, tedious as that can be, is much, much more useful than a more autonomous agent that will likely rabbit-hole and try to brute-force the codebase into something that works based on its assumptions about your request.
After all, I still believe that software that merely works is different from software that feels good to keep building on.