Posted on 2025-12-29 by Matt.
It's late 2025. Is using AI for software development a good idea? Here's my answer.
As of writing this, my position is that in most cases, using AI for software development is more trouble than it's worth. I'm not saying it isn't useful. I'm saying that if you are going to use it, you should treat it like the top of the food pyramid: use sparingly.
First, what do we mean by AI? What I'm talking about applies to everything from copy/pasting out of a chat interface to editor plugins, agentic workflows, AI-powered code reviews, and full-on vibe coding. All of it.
There are a few things we're not going to cover. We're not going to get into the environmental costs, and we're not going to touch the ethical issues around training an LLM on content without the permission or attribution of the original authors.
I'm just going to argue that as an individual, a team, or a company, you'll be better off by minimizing use of AI.
The reality is that this stuff is all still pretty new. No one can truly tell you what it's like to maintain these projects long term. We also don't entirely know what the effects on the developer are. And any measurements that we have of productivity gains (if any) are pretty flimsy.
Let's talk about the cost in terms of dollars and cents.
It's well known that it costs a fortune to train these massive LLMs. The actual inference is currently much cheaper, but still expensive.
None of these companies are profitable; the financial situation is a bubble. We've already seen prices go up, and that's while every token is still subsidized by investors. How much do you think those costs are going to rise when the bubble starts to burst?
Even if you are using open source models running on your own hardware, someone had to foot the bill for training that model in the first place.
Maybe you're getting great results; I know a lot of people are. But those results also come at a cost to your team.
First of all, you're creating instant brain drain. In traditional development, you could rely on there being at least one person at the company who understood what a given piece of code did, how it worked, and why it worked that way. Maybe they even remember the tradeoffs they had to make, the corners they had to cut, or the caveats of the approach. If an AI wrote it, you can no longer rely on there ever having been even one person with that knowledge.
I also hear people argue that you just need to treat an AI like it's a junior developer. It's very different, though. Bringing on a junior is an investment, and the return on that investment comes when they can take over responsibilities from you. An AI will never get there. If you want to put an AI into your on-call rotation, have it manage a project for you, talk to customers, or do support, good luck with that.1
Another issue is that you're also taking on risk. LLMs excel at delivering something that looks passable, which is usually the hardest code to review. Something that's obviously wrong is easy to flag, request changes on, or reject outright. It's much harder to confidently sign off on something that merely looks okay.
And on the other side of the fence, I've had to deal with code reviews from Copilot. On average, three out of every five comments were incorrect, but they looked convincing and were delivered in very confident language. These were exactly the kind of suggestions I might have accepted if I were a little less thorough.
Every AI-generated line of code or review comment carries a real risk that the developer lets their guard down and accepts it at face value. And if developers are facing a deluge of code generated by their own AI, then switching gears to review AI-generated PRs submitted by their teammates, it's only a matter of time before they cave.
Another tax on your team comes from the increase in work in progress (WIP). Every bit of WIP has a real cost involved in keeping it up to date. And AI-driven development incentivizes having more work in progress in the same way instant messaging tools incentivize more messages and more interruptions. It makes it easier for the author to just throw something out there, instead of being forced to decide whether it was important and worth anyone's time right now.
It's always been a problem that your worst developers can crank out problematic code faster than your best developers can keep up with it. AI is just supercharging this imbalance.
In this industry, your main assets are your skills, knowledge, and experience. The big thing about skills and knowledge is that it's all use-it-or-lose-it. A lot of people don't like to admit this, but we've all seen it in others and experienced it ourselves.
Ask any engineering manager, or even better, put them through a live coding interview. They might have a decade of development experience, but even a year or so away from doing the actual development takes its toll.
We're not just talking about syntax and APIs; we're talking about all of it. Imagine there is a complex, mission-critical database migration that needs to be done. You wouldn't want the person doing it to have had zero reps in the past year, would you? Of course not; you'd hope they'd gotten plenty of reps on easier but similar migrations.
I constantly hear people talk about pawning off the low-value, everyday work to AI, as if that were a clear-cut win. It's the mental equivalent of always taking the car or the elevator: you're robbing yourself of those reps. You know you can't take a year off from all physical activity and expect to still be in shape, but people talk as if that doesn't apply to your mind.
Another issue is that these tools are slow. So much of programming is about staying in The Zone, staying engaged with the problem and the moving pieces, and keeping the feedback loop2 as tight as possible. This is why so much effort goes into making compilers and test suites faster. But any given prompt takes some amount of time, even the simplest requests, and those are the most dangerous ones. Anything that isn't perceptibly instant leaves room for your mind to wander, for you to get bored, or for you to fall victim to the temptation to context switch entirely.
No, your model is not "hallucinating"; it's doing exactly what it's designed to do: generating output that is statistically similar to the training data. This is not some quirk that can be patched; it's foundational to what an LLM is.
It's also never going to actually understand what you're trying to accomplish, it's never going to exercise good judgement, and you will never be able to count on it having actually done what it just said it did.
These models are basically trained on all the code that could be fed into them. Statistically, not all of that is going to be high quality. And the old rule of Garbage In, Garbage Out still applies.
You can train it on better examples, you can give it better prompts or more context, but an LLM is incapable of actually learning from its mistakes.
Every month, there's a new technique, a new tool, or a new protocol to try to compensate for these shortcomings. But these are fundamental issues and they are not going to go away.
LLMs have already hit the point of diminishing returns. People like to say things like "The models today are the worst they're ever going to be", but the reality is looking more like "If it's not good enough for you now, it never will be." Even leaders in this space admit "the scaling era has ended," and that it's time to get back to the drawing board.3
Unsurprisingly, the "scaling laws" turned out not to be laws at all.
You can keep hoping that the next big model will be a huge leap forward, but unless someone discovers a new approach that actually gives us Artificial General Intelligence, you're going to be disappointed.
Most of what I'm addressing here applies even if we assume these tools are good and are producing useful results in your circumstances. But that's not always a given.
Most of the time, I hear people admit that they have to heavily edit the generated code, but they claim it's okay because it saved them from having to lay out the general approach. I would argue that if your AI gets the details wrong, you probably shouldn't be relying on its general approach either.
I also constantly hear complaints that the generated code ends up being anything but DRY. Even though it's harder for a human to scan a large codebase, we can make up for it with intuition. How many times have you had the feeling "Surely this isn't the first time we've needed to do this," which led you to the already existing utility you needed?
And when it comes to workflows where the developer is merely guiding the AI, you frequently hear that it does great on the first 90%, but it can never seem to get the rest of the way there. Maybe it introduces two new problems for every one it fixes, or maybe it's just incapable of understanding that the approach is fundamentally wrong and will never get there. The sunk cost fallacy here is real, and common.
A similar issue is when someone relies on AI to get them further into a project than they could have gotten on their own. Maybe they don't know how to program at all, are just starting out, or maybe they're working in an unfamiliar language, framework, or domain. The AI tools get as far as they can, but they eventually hit their limits, and now the developer is left with a project that they are also incapable of making progress on.
I'm not a lawyer, but I do know that the legal questions haven't been fully ironed out. What happens when you accept AI generated code that turns out to be reproduced entirely from another project, minus the license? What if that license isn't compatible with your use case? Who is responsible? You or the company that trained the model? Or both?
What happens if the use of the training data is actually found to be illegal in the first place? Is all the code generated by the model also in violation? We just don't have an answer.
I am too cynical to actually believe that the US legal system will ever answer these questions, but maybe other countries will. My point is really just that there are unanswered legal questions here, which means you are possibly sailing your company or project into uncharted waters.
So given everything I've laid out above, how can we use this technology?
If you aren't opposed to using these technologies on a moral or ethical level, you might take a minimalist approach: try to get the most value from these tools while incurring as few of the costs I've described as possible.
This might look like:
And finally, there are a couple places where I'm not going to argue against it.
First, there's the code you were going to copy/paste from StackOverflow and never bother to understand anyway. This might be some obscure config, some workaround for a known bug that's not going to be fixed, or just some snippet to resolve some weird cross-platform edge case. Either way, this was code you were going to just cross your fingers and try out anyway, so where it comes from is not really important.
Next, there's that one-off, throwaway code that doesn't really matter. This could be some awk script that you were never going to figure out, some query in a proprietary language like JQL, or just an insane bash one-liner. The tradeoff here is that if this was something you were capable of doing on your own, you are robbing yourself of those reps.
Lastly, you might work in a dysfunctional company where AI use is being pushed, rewarded, or outright demanded. Maybe you are even overemployed and this company is just your second job anyway. In that case, you have my blessing to give them exactly what they're asking for and let the chips fall where they may.
But if you care about your career or your craft, you'll be doing other things outside of work without the use of AI.
I know many people are concerned that if they don't adapt to AI in software development, they'll be left behind.
First, you need to accept that use of AI comes with all the costs I've listed above. If you decide to lean further into it than I'm suggesting, be my guest, but don't delude yourself into thinking that it's black and white, all or nothing.
Next, keep in mind that everyone is still trying to figure this all out. The tools and techniques are churning constantly, so you also risk investing time and energy in something that turns out to be a dead end. If you're excited about all of this and enjoy being on the cutting edge, go for it. If you're like everybody else, you can do a little research here and there and otherwise wait for the industry to settle.
If you feel like this is an existential threat, take a deep breath. Many people who are stressed about falling behind are also worried that AI will make software developers entirely obsolete. If that's true, we're all fucked anyway. Your ability to use AI won't be much of a moat.
But if it doesn't outright replace you, your skills and experience are only going to put you at an advantage.
If you are concerned about falling behind, invest some of your time and energy into working with AI, but not all of it. After all these years, the advice from the Pragmatic Programmer still holds up:
Technology exists along a spectrum from risky, potentially high-reward to low-risk, low-reward standards. It’s not a good idea to invest all of your money in high-risk stocks that might collapse suddenly, nor should you invest all of it conservatively and miss out on possible opportunities. Don’t put all your technical eggs in one basket.