Using AI for development still feels a little strange to me.
I have not been doing much with personal projects lately, so most of my AI usage there has been limited to revamping my website and running small experiments when I get the chance. At work, though, it has become a much more serious part of the loop.
I switched teams recently and joined a greenfield Android project. The codebase is fully modularized, most of the UI is written with Jetpack Compose, and at the module level we use api/impl separation. Inside the modules, we also try to keep a strict package-level architecture and dependency direction.
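For readers unfamiliar with the pattern, api/impl separation at the module level usually looks something like the sketch below. This is a hypothetical layout with made-up module names, not the real project structure: consumer modules depend only on a small `api` module of interfaces, while the `impl` module is wired in through dependency injection.

```kotlin
// settings.gradle.kts -- hypothetical module names, for illustration only
include(":feature:payments:api")   // public interfaces and models only
include(":feature:payments:impl")  // concrete implementations, bound via DI

// feature/home/build.gradle.kts -- a consumer module
dependencies {
    // Consumers see only the api surface; they never touch :impl directly,
    // which keeps the dependency direction strict and compile times down.
    implementation(project(":feature:payments:api"))
}
```

The payoff for agent-assisted work is that the public surface of each feature is small and uniform, so generated code has fewer places to drift from convention.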
For the first four or five months, I used almost no AI assistance. At the time I was mainly using Cursor, and pairing it with Android Studio never really clicked for me. I rely a lot on syntax highlighting and native autocomplete to feel confident in a language. Around December 2025, though, I started using Claude Code more seriously, and that was the point where using LLMs for coding finally started to make sense.
Part of that is probably prompt quality. Part of it is probably that I instinctively keep the blast radius low. Whatever the reason, the code it produces often matches how I would have approached the problem myself.
I think two things made that possible.
First, we standardized solutions across the codebase very aggressively. For most problems, there is essentially one accepted way to do things. That is not automatically a good thing in Android projects, because modular systems tend to accumulate different abstractions over time. We made those bets early, and we will see how they age. But in practice, that consistency gives the agent much less room to hallucinate. When I add a feature or extend an existing flow, it usually lands close to the conventions of the codebase.
Second, I almost always work from a plan. Sometimes that is a built-in plan mode. Sometimes it is just me writing out the implementation path in detail before I let the tool touch anything. If I am not confident about the shape of the solution, I do not let the agent run loose. I either review each step as it happens, or I refine the plan until I am comfortable with the outcome. Both approaches work, but they produce slightly different results. Manual review gives you tighter control, even if it costs time.
That all works well enough for feature work. Bug fixing is where things get more interesting.
I recently ran into a deadlock caused by a race condition in the app. I had a few suspects already, so I traced the initial suspension point, collected the relevant context, and gave it to Claude for a second opinion.
It suggested a plausible source of the deadlock. I checked the reasoning, made the change, and the issue disappeared.
But my spidey sense was tingling.
The fix worked, but it did not feel true. It felt like a band-aid, not the root cause.
So I threw away the suggested change, opened the debugger, and traced the execution path myself. The actual reason the app was blocking turned out to be something else entirely:
We were running a post-login deeplink check on pre-login deeplinks, and as part of that check we applied a soft synchronization barrier on a service call to prevent a race condition during the data fetch. In the pre-login case we should never have been fetching that service at all, so the barrier never released, and the app deadlocked.
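The shape of that bug can be sketched with a few lines of coroutine code. All names here are hypothetical and the structure is simplified; the point is only the pattern: a barrier that is awaited on every path but completed on only one of them.

```kotlin
import kotlinx.coroutines.*

// A minimal sketch of the bug shape, with invented names -- not the real code.
class DeeplinkHandler {
    // The "soft synchronization barrier": completes when the post-login
    // service fetch finishes, so deeplink handling sees fresh data.
    private val postLoginData = CompletableDeferred<String>()

    suspend fun handleDeeplink(isLoggedIn: Boolean): String {
        // Bug: the barrier is awaited even for pre-login deeplinks,
        // but the fetch that completes it only ever runs post-login.
        return postLoginData.await()
    }

    fun onLoginCompleted(scope: CoroutineScope) {
        scope.launch { postLoginData.complete(fetchUserData()) }
    }

    private suspend fun fetchUserData(): String {
        delay(10) // stand-in for the real service call
        return "data"
    }
}

fun main() = runBlocking {
    val handler = DeeplinkHandler()
    // Pre-login deeplink: onLoginCompleted never runs, so await() hangs.
    val result = withTimeoutOrNull(200L) {
        handler.handleDeeplink(isLoggedIn = false)
    }
    println(result) // null: the pre-login path timed out on the barrier
}
```

The fix the model proposed patched around the wait; the real fix is to route pre-login deeplinks past the fetch entirely, so the barrier is only ever awaited on the path that completes it.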
That is a cool bug, by the way.
The interesting part is not that the model was useless. It was not. It got me to a plausible local fix very quickly. The problem is that a plausible fix and a correct explanation are not the same thing.
That was a useful reminder: if you want to truly fix a system, you still have to know what you are looking at.
AI assistance is already good enough that I can generate clean implementations with tests, often in a style that looks close to my own. But I still need to know the codebase inside out to make the right judgment calls. I still need to understand the tradeoffs I am introducing through these agents. I still need to review the code and understand the solution if I want to keep my own engineering instincts sharp.
Basically, I still have to train my spidey sense.