There’s a particular kind of satisfaction that comes from asking an LLM to generate a solution to your problem. The code appears, seemingly tailored to your exact needs. It… mostly works. You ship it. But somewhere in the back of your mind, a voice whispers: “Didn’t someone solve this already?”
They did. Probably a decade ago. Possibly in a well-maintained library with thousands of contributors, comprehensive documentation, and battle-tested edge-case handling (maybe even in the standard library of the language itself!). But you’ve just spent an afternoon generating a bespoke version that does 80% of what you need and introduces 100% new maintenance burden.
Welcome to the pre-industrial revolution of software development.
When Every Wheel Was Handcrafted
Before Eli Whitney championed interchangeable parts in the late 1700s, every rifle was a snowflake. Each component was individually fitted by a skilled craftsman. If a part broke, you couldn’t just swap in a replacement: you needed an artisan to craft a new one specifically for your weapon. It was expensive, slow, and scaled terribly.
But there was something seductive about it: each piece was custom-made, perfectly fitted to its purpose. The craftsman could look at a rifle and say “I made that.” There was artistry in the work, even if there was also tremendous inefficiency.
The industrial revolution didn’t just give us factories; it gave us standardization: the radical idea that problems could be solved once, solutions could be refined over time, and those solutions could be reused by everyone. A broken gear could be replaced with any other gear of the same specification. Knowledge became cumulative rather than artisanal.
Software development spent decades moving toward this ideal. We built package managers, shared libraries, and open source ecosystems. We solved authentication, logging, HTTP clients, date handling, and thousands of other problems. Once. Well.
The LLM Whisperer’s Workshop
Now we’re speedrunning backwards.
Need to parse CSV files? Sure, you could use a mature library that handles edge cases like quoted delimiters, encoding issues, and malformed data. Or you could ask Claude to write you a quick parser. It’ll work for your test file. It might even work in production for a while. Until you hit the edge case that the library authors handled in 2015.
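To make that concrete, here’s a minimal sketch in C (to match the examples later in this piece) of the kind of parser an LLM will cheerfully generate. The function and the test line are mine, purely for illustration:

#include <stdio.h>
#include <string.h>

// Illustrative naive "CSV parser": split on commas and nothing else.
// No quoting, no escape handling; strtok() also silently collapses empty fields.
static void naiveParseCsv(char *line) {
    for (char *field = strtok(line, ","); field != NULL; field = strtok(NULL, ","))
        printf("[%s]\n", field);
}

int main(void) {
    // RFC 4180 allows commas and escaped quotes inside quoted fields
    char line[] = "\"Smith, John\",42,\"said \"\"hi\"\"\"";
    naiveParseCsv(line); // prints four mangled fields instead of the three actually there
    return 0;
}

A mature library handles that line correctly because somebody already hit it, reported it, and fixed it years ago.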
Need authentication? OAuth2 is well-specified, and every major language has robust implementations. But maybe your requirements are “special.” The LLM can generate something that looks like OAuth2, implements the happy path, and introduces three security vulnerabilities that won’t be discovered until you’re processing credit cards at scale.
The pattern repeats endlessly:
- Custom API clients instead of generated ones from OpenAPI specs
- Bespoke state machines instead of XState or Akka
- Hand-rolled validation instead of JSON Schema or Zod
- DIY retry logic instead of Polly or Tenacity
- Yet another logging wrapper instead of structured logging libraries
Each one feels productive in the moment. The code appears quickly. It does what you asked. You move on to the next task, feeling efficient.
But you’ve just created a bespoke component that only you understand, that only handles the cases you thought of, and that will need to be debugged, maintained, and eventually rewritten by someone who wishes you’d just used the standard solution.
The Thanksgiving Algorithm That Shouldn’t Exist
Let me give you a concrete example that perfectly encapsulates this phenomenon.
I once asked an LLM to write a C function to calculate what date US Thanksgiving falls on for a given year. Simple requirement: Thanksgiving is the fourth Thursday of November.
What came back was impressive in the worst possible way. The LLM generated an implementation using Zeller’s Congruence, a mathematical algorithm from 1887 for calculating the day of the week for any given date. It worked fine. It was also completely, utterly unnecessary.
/**
 * Calculate the day of the week using Zeller's Congruence
 * Returns: 0 = Sunday, 1 = Monday, ..., 6 = Saturday
 */
int getDayOfWeek(int year, int month, int day) {
    // Zeller's Congruence treats January and February
    // as months 13 and 14 of the previous year
    if (month < 3) {
        month += 12;
        year--;
    }
    int q = day;
    int m = month;
    int k = year % 100;
    int j = year / 100;
    int h = (q + (13 * (m + 1)) / 5 + k + k / 4 + j / 4 - 2 * j) % 7;
    // Zeller's h counts from 0 = Saturday; shift to 0 = Sunday, 1 = Monday, etc.
    int dayOfWeek = (h + 6) % 7;
    return dayOfWeek;
}
You know what else calculates what day of the week a date falls on? The mktime() function. It’s been in the C standard library since 1989. It’s implemented, tested, and optimized on every platform you’ll ever target. The mktime solution was:
#include <time.h>

/**
 * Calculate the day of the week using the standard C library
 * Returns: 0 = Sunday, 1 = Monday, ..., 6 = Saturday
 */
int getDayOfWeek(int year, int month, int day) {
    struct tm timeinfo = {0};
    timeinfo.tm_year = year - 1900; // years since 1900
    timeinfo.tm_mon = month - 1;    // months since January (0-11)
    timeinfo.tm_mday = day;
    // mktime() normalizes the structure and fills in tm_wday as a side effect
    mktime(&timeinfo);
    return timeinfo.tm_wday; // 0 = Sunday, 1 = Monday, etc.
}
Instead, I got a few more lines of mathematical wizardry implementing a nineteenth-century algorithm, when a call to a standard library function would have sufficed. The line count barely differs, but every line of the mktime version plainly serves to state the problem and get the answer, whereas the Zeller version reimplements an algorithm that itself needs explaining before anyone can maintain it.
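And the rest of the task is identical either way. With a getDayOfWeek() in hand, whichever implementation, finding the fourth Thursday of November is a few lines of arithmetic. A minimal sketch (the wrapper function is my illustration; the original exchange didn’t include one):

// Illustrative wrapper, not from the original exchange.
// US Thanksgiving is the fourth Thursday of November.
int getThanksgivingDay(int year) {
    int wday = getDayOfWeek(year, 11, 1);       // weekday of November 1st
    int firstThursday = 1 + (4 - wday + 7) % 7; // 4 = Thursday
    return firstThursday + 21;                  // three weeks later
}

Either helper makes that wrapper work; the difference is everything a maintainer has to understand before touching it.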
This is the essence of the problem. The LLM didn’t ask “what tools are available?” It asked “what algorithm can I implement?” It demonstrated knowledge—impressive, academic knowledge about calendar mathematics—when what I needed was basic competence with the standard library.
The generated code wasn’t wrong. It was just reinventing a wheel that’s been standard equipment for 35 years. And if you didn’t know better, you’d look at that Zeller’s Congruence implementation and think “wow, this is sophisticated.” You’d check it in. You’d move on. And you’d have introduced a maintenance burden for no reason whatsoever.
Six months later, someone would be reviewing that code and thinking “why on earth didn’t they just use mktime?” But by then, it works, tests are written around it, and changing it seems risky. The technical debt is locked in.
The Great Rewrite Delusion, LLM Edition
This pattern has a close cousin that every seasoned developer has seen: the urge to rewrite a legacy system.
You know the one. That rock-solid stable system that’s been running in production for years. It handles edge cases you’ve forgotten existed. It processes millions of transactions without complaint. But it’s written in an “old” style, maybe lacks tests, perhaps uses patterns that aren’t fashionable anymore. It’s hard to understand. It’s boring to maintain. And you just know you could build it better, cleaner, faster.
Sometimes that rewrite makes absolute sense. Sometimes the legacy system truly is unmaintainable, or the technology is genuinely obsolete, or business requirements have fundamentally changed. But often—more often than we’d like to admit—the rewrite urge is about trading the boring, hard work of understanding an existing system for the fun, exciting work of building something new.
LLMs have turbocharged this impulse and added a tempting new twist: you don’t even have to write the new code yourself.
“This legacy Python service is hard to understand. Let’s ask Claude to rewrite it in Go. Look how clean this generated code is! Look at these type signatures! This is so much better.”
Is it, though?
The legacy service handles timezone edge cases you didn’t know existed. It has workarounds for vendor API quirks discovered through painful production incidents. It accounts for data quality issues in the upstream systems. It has retry logic tuned through months of observability data. All of that institutional knowledge is embedded in code that you’ve just… thrown away.
And here’s the kicker: you didn’t write the new code. The LLM did. Which means it’s already legacy code from Day 0. It’s someone else’s code. More accurately, it’s an amalgamation of patterns from thousands of someone else’s codebases, synthesized by a statistical model that doesn’t understand your specific context, your business logic, or your production environment.
You’ve traded:
- Legacy code you don’t understand written by your team
- For legacy code you don’t understand written by a statistical process
The only difference is the second one feels newer because it appeared yesterday. It uses modern idioms, current language features, and fashionable patterns. But it’s still code you didn’t write, don’t fully understand, and now have to maintain.
Except it’s worse than traditional legacy code. Traditional legacy code usually has at least one person who understands it deeply: the person who originally built it. They might not be on your team anymore, but their knowledge is findable. The code often has comments explaining the “why” behind weird decisions, or at minimum, you can look at git history and piece together the evolution.
LLM-generated code has none of that. There’s no git history showing the evolution of thought. There are no commit messages explaining why that particular approach was chosen. There’s no original author to ask “why does this handle null differently in these two branches?” The code simply… materialized. And now you own it.
The Illusion of Productivity
LLMs are incredibly good at creating the appearance of progress. Code materializes at conversational speed. Tests are generated. Documentation is written. The dopamine hits keep coming.
But productivity isn’t measured in lines of code produced; it’s measured in problems solved that stay solved. It’s measured in systems that are maintainable, debuggable, and upgradeable. It’s measured in solutions that benefit from collective knowledge rather than individual improvisation.
When you use a well-maintained library:
- Security patches flow downstream automatically
- Performance improvements benefit everyone
- Documentation explains not just how, but why
- Edge cases have already been discovered and handled
- Other developers can understand and maintain the code
When you generate bespoke solutions with LLMs:
- You own every bug, forever
- Performance optimization is your problem
- Documentation is what the LLM hallucinated
- Edge cases are landmines waiting to detonate
- The next developer needs to reverse-engineer your intentions
The technical debt compounds silently. Three months later, you’re debugging a race condition in your custom retry logic while the library you passed over shipped a fix for the same issue last week. Six months later, you’re rewriting the authentication system because you didn’t understand token refresh. A year later, you’re explaining to a new team member why this particular codebase doesn’t use any standard tools, and why you rewrote that perfectly functional service in a different language for no clear benefit. Worse, you skipped the journey that building the original service provided: learning the problem space along the way.
The Pre-Interchangeable Parts Trap
We’re recreating the world before standardization, where every solution is artisanal and nothing quite fits together. Your LLM-generated HTTP client works differently from mine. Your error handling follows different patterns. Your validation logic makes different assumptions.
It’s not that the code doesn’t work—craftsman-made rifles worked fine. It’s that it doesn’t scale, it doesn’t compose, and it doesn’t improve over time.
The industrial revolution succeeded because standardization created a flywheel: better standards enabled better tools, better tools enabled better standards. The same thing happened in software with open source. When everyone uses the same components, everyone benefits from improvements to those components.
LLM-generated code breaks this flywheel. Each solution is isolated. Improvements don’t propagate. Knowledge isn’t cumulative. We’re spending our time re-solving problems instead of solving new ones.
Not All Wheels Should Be Reinvented
To be clear: there are absolutely cases where bespoke solutions make sense. When you’re doing something genuinely novel, when existing solutions don’t fit your constraints, when you’re exploring new problem spaces—generate away. LLMs excel at prototyping, at exploring possibilities, at helping you understand a domain before committing to an approach.
And yes, sometimes that legacy system really does need to be rewritten. Sometimes the technology truly is obsolete. Sometimes business requirements have fundamentally changed in ways that make evolution impossible.
The problem isn’t using LLMs. The problem is using them as a replacement for discovering and understanding existing solutions rather than as a tool for understanding problems before solving them. The problem is using them to avoid the boring work of learning what already exists—whether that’s a standard library or your own legacy codebase.
Before you ask an LLM to generate a solution, ask yourself:
- Has someone already solved this?
- Are there mature libraries or standards?
- What will maintenance look like in six months?
- Am I creating technical debt because I’m avoiding the work of understanding existing solutions?
- Am I rewriting something that works because learning it is hard and building is fun?
- Do I understand why the existing solution was built the way it was?
Sometimes the answer is “yes, generate this one-off script” or “yes, we need something custom here” or “yes, this really does need a rewrite.” But often the answer should be “let me spend an hour reading the documentation for the standard solution” or “let me spend a week understanding why this legacy system works the way it does.”
The Path Forward
The industrial revolution didn’t eliminate craftspeople—it freed them to focus on problems that genuinely required understanding of the craft. You don’t hand-forge nails anymore, which means you can spend time designing furniture.
The same should be true with LLMs. They should free us to focus on the problems that genuinely need solving, the innovations that genuinely haven’t been done before. They should help us understand and integrate existing solutions faster, not circumvent them entirely.
Use LLMs to:
- Understand how existing libraries work
- Generate the boilerplate around standard solutions
- Explore problem spaces before committing
- Create prototypes that help you evaluate approaches
- Document and understand existing legacy systems
Don’t use LLMs to:
- Avoid reading documentation
- Reinvent mature, battle-tested solutions
- Generate alternatives to standard libraries because integration seems hard
- Create bespoke versions of solved problems
- Rewrite working systems just because understanding them is boring
Every Generation Learns This Again
Maybe every technological revolution requires a generation to rediscover why standards matter. Maybe we need to feel the pain of maintaining ten thousand slightly different implementations of the same functionality. Maybe we need to experience the frustration of debugging artisanal code at 2 AM to remember why we built standard parts in the first place.
But we don’t have to. The lesson is sitting there in history, in the evolution from craft to industry, from individual expertise to collective knowledge. Interchangeable parts weren’t a limitation on creativity, but a platform for it. Standards didn’t reduce innovation—they enabled it to scale.
LLMs are powerful tools. Let’s use them to build on top of standards, not to route around them. Let’s use them to solve new problems, not to re-solve old ones with slightly different syntax. Let’s use them to understand existing systems better, not to avoid the work of understanding them at all.
Otherwise, we’re just very efficient blacksmiths, individually crafting parts that will never quite fit anyone else’s system, wondering why progress feels so exhausting—and why we keep having to reforge the same components over and over again.
The wheel was invented once. We spent millennia making better wheels, standardizing them, learning what makes them work. Don’t let the ease of reinvention make you forget that the hardest part wasn’t making a wheel—it was making one that worked the same way every time, that others could build on, that got better with collective knowledge.
Your future self, debugging that custom implementation at midnight, will thank you for using the boring, standard solution. And they’ll thank you even more for taking the time to understand the “boring” legacy system instead of generating a shiny replacement that’s legacy from the moment it hits production.