ʕ☞ᴥ ☜ʔ Kix Panganiban's blog

Complacency is the clear and present danger

First came GitHub Copilot. It was, to me, the first real product that demonstrated how powerful LLMs could be when deployed into coding workflows. Then came Cursor, which took that up a notch with its mind-bendingly fast and powerful autocomplete and its Compose tool (since deprecated in favor of Agents). And then Anthropic released Claude Code, which I genuinely thought would be the gold standard for agentic coding: you treat it like an obscenely well-read (but inexperienced) developer, give it instructions and guidance, and let it rip -- autonomously navigating your codebase, grokking files, writing features, and fixing bugs.

Fast forward to 2025 and a coworker just told me during a code review session: "I don't think the Kix I know would've written code like this."

How did I get there?


The biggest risk -- the clear and present danger -- of doing any sort of coding with AI is complacency. And it's easier to fall into than you think.

Part of onboarding a team to AI coding tools is teaching the mantra of never merging code that you wouldn't have personally written. This sounds simple enough, but it's remarkably easy to get lost in the vibe-coding sauce and forget it.

Here's how the complacency trap works: when a coding agent runs, you take whatever code it outputs and -- after some tests and manual validation -- feel satisfied that you now have a working new thing that barely cost any mental overhead. The code passes all checks -- linting, type validation, test suites -- and looks fine. Meanwhile, while the agent is working, you suddenly have free time and mental space to context-switch to something else. After all, coding agents do not yet write code instantaneously (not counting those running on ultrafast inference services like Cerebras and SambaNova).

Those two combined -- the instant gratification and the downtime -- create a dangerous feedback loop. You slowly pay less attention to your well-read intern, and you lose your mental map of the code. Letting the AI agent have at it almost completely autonomously erodes your grip on your codebase and makes you complacent.

This leads to situations where you know the thing works, but you haven't written everything (or even most of it) yourself. Building on top of that new code becomes harder than it would have been had you written it. And the cycle continues: because it's harder for you to build on top of it, you delegate to the AI agent again (it feels easier), so it writes even more code you're not completely familiar with -- until you end up with a mountain of slop that's deeply unpleasant to read and write.


The callout from my coworker was deeply embarrassing for me, and was pretty much the intervention that I needed. I cancelled my $200 Claude Max subscription and completely rethought how I work with AI coding tools.

Now I'm back to writing code myself, only delegating small chunks of work to AI. Here's what I found works great for me -- allowing me to balance mental load with productivity while staying in control:

  1. 99% of the AI I use is Cursor's fast autocomplete. To this day, I still have not found any other tool that comes close. It does the job perfectly for most of the code that I need to write:

    • When I start writing the function signature and docstring, it usually gets most of the body right
    • It takes over repetitive tasks such as changing function calls, log messages, and even trickier things like adding try/except blocks almost like mind-reading magic
    • It still lets me feel like I wrote the code because it's closely patterned after code I just wrote
  2. For bigger changes, I use Cursor's highlight and add to context feature and let the Auto agent do a first pass. I then review the code and revise it.

  3. I don't let it make changes to multiple files at once -- or even to multiple different places in a single file. This lets me keep my mental map of the code I'm working on.

  4. For research tasks, I use Perplexity and Claude to start charting where I should look -- but then I still pick it up like an absolute Neanderthal in 2025 and read through Stack Overflow and GitHub issues myself.

  5. If I cannot avoid letting an agent write big swaths of code, I treat it like I would an actual person submitting a PR -- I scrutinize the changes line by line, make review notes, and then perform the edits myself.

For all of this, I prepaid a year of Cursor, and so far I haven't run into any rate limits or quotas. I figure hitting those would be another good indicator that I'm slipping down the slope.


This is exactly why I don't think I'll ever buy into fully autonomous driving, even as an EV fan. I enjoy driving, and I fear that delegating the entire process would let my "driving muscles" atrophy, make me complacent, and eventually put me in a situation where I'm speeding down a highway in a style my fully-aware self would never drive in. The same principle applies to coding: the moment you stop being an active participant in the process, you lose something essential about your craft.

#ai #coding