Gotta love a world in which a tool that has ingested "all the world's libraries" is now trotted out as a solution to replace those libraries.
You know what would happen if all the people who handwrote and maintained those libraries revoked their code from the training datasets and forbade its use by the models?
:clown face emoji:
This LLM-maxxing is always a myopic one-way argument. The LLMs steal logic from the humans who invent it, then people claim those humans are no longer required. Yet, in the end, it's humans all the way down. It's never not.
The code is mostly not bad, but most programmers I have worked with write far better code.
> You know what would happen if all the people who handwrote and maintained those libraries revoked their code from the training datasets and forbade its use by the models?
MCP servers combined with agentic search have addressed this possibility, just this year superseding RAG methods, though all techniques have their place. I don't see much of a future for RAG, given its computational intensity.
Long story short, training and fine-tuning are no longer necessary for an LLM to understand the latest libraries, and therefore the "permission" to train would not even be something applicable to debate.
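Since this comment leans on MCP, here is a minimal sketch of the idea, assuming the official TypeScript MCP SDK: a server exposing a docs-lookup tool so an agent can read a library's current README on demand instead of relying on training data. The tool name and the npm-registry lookup are illustrative choices, not anything the commenter specified.

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "library-docs", version: "0.1.0" });

// One tool: fetch a package's README from the npm registry, which serves
// package metadata (including the README) as plain JSON.
server.tool(
  "lookup_library_docs",
  { packageName: z.string() },
  async ({ packageName }) => {
    const res = await fetch(`https://registry.npmjs.org/${packageName}`);
    const meta = (await res.json()) as { readme?: string };
    return {
      content: [{ type: "text", text: meta.readme ?? "No README published." }],
    };
  }
);

// Speak MCP over stdio so any compatible agent can attach this tool.
await server.connect(new StdioServerTransport());
```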
it's a fast moving field, best not to have a strong opinion about anything
This article lumps design and implementation together. In my experience, LLMs are really quite bad at designing anything interesting. They are sort of tolerable at implementation — they're remarkably persistent (compared to humans, anyway), they will tirelessly use whatever framework, good or bad, you throw at them, and they will produce code that is quite painful to look at. And they'll say that they're architecting things and hope you're impressed.
I was wondering if our goal is to leverage them to think about interfaces a bit, like a slightly accelerated modeling phase, and then let them loose on the implementation (and maybe later let them loose on local optimization tricks).
>LLMs are really quite bad at designing anything interesting
Let’s be honest, how many devs are actually creating something interesting/unique at their work?
Most of the time, our job is just picking the right combination of well-known patterns to make the best possible trade-offs while fulfilling the requirements.
It's a winner-takes-all market. There are no buyers for off-brand Salesforce or Uber.
> Most of the time, our job is just picking the right combination of well-known patterns to make the best possible trade-offs while fulfilling the requirements.
Right. I don't trust LLMs to pick the right pattern. They will pick _a_ pattern, and it will mostly sorta fulfill the requirements.
Today I asked an LLM (Codex whatever-the-default-is) to implement something straightforward, and it cheerfully implemented it twice, right next to each other, in the same file, and then wrote the actual code that used it and open-coded a stupendously crappy implementation of the same thing right there. The amazing thing is that the whole mess kind of worked.
I was going to say - with agents the only part I actually have to do is design. Well, and testing. But they don’t really do design work so much as architecture selection if you provide a design.
This is a great take. It applies the "SaaS is dead" theory at a lower level (libraries are dead) but with a much more nuanced view.
Yeah, even if LLMs are 10x better than today, you probably still don't want to implement cryptography from scratch; you want to use a library.
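To ground the cryptography point with a sketch: even when an LLM writes the calling code, the primitive should come from a battle-tested implementation. A minimal example using Node's built-in crypto module (AES-256-GCM); the key handling is simplified for illustration and would come from a KMS or secret store in practice.

```ts
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

const key = randomBytes(32); // 256-bit key; illustration only
const iv = randomBytes(12);  // 96-bit nonce, the GCM-recommended size

// Encrypt with the vetted primitive; no hand-rolled math anywhere.
const cipher = createCipheriv("aes-256-gcm", key, iv);
const ciphertext = Buffer.concat([cipher.update("hello"), cipher.final()]);
const tag = cipher.getAuthTag(); // GCM authenticates as well as encrypts

// Decrypt and verify integrity.
const decipher = createDecipheriv("aes-256-gcm", key, iv);
decipher.setAuthTag(tag);
const plaintext = Buffer.concat([decipher.update(ciphertext), decipher.final()]);
console.log(plaintext.toString()); // "hello"
```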
I also like the 3d printing analogy. We will see how good LLMs get, but I will say that a lot of AI coded tools today have the same feeling as 3d printed hardware.
If no engineer was involved, the software is cheap and breaks under pressure because no one considered the edge cases. It looks good on the surface, but if you use it for something serious, it does break.
The engineer might still use an LLM/3D printer, but where necessary he'll use a metal connection (write code by hand or at least tightly guide the LLM) to make the product sturdy.
That's LLMs extending C and C++ Undefined Behaviour to every project regardless of language.
-------------------
EDIT: I tried articulating it in a blog post in a sleep-deprived frenzy of writing on Sunday - https://www.lelanthran.com/chap14/content.html
Which is the reverse of how humans design things: layers, modules. LLMs act as generalized compilers. Impressive, but at the same time you end up with a static-like bunch of files instead of a system of parts. (That said, I'm not a great user of LLMs, so maybe people have managed to produce proto-frameworks with them, or maybe that will be the next phase: module-oriented LLM training.)
But why? Even if you could have an AI do that, it's, if anything, a waste of CPU cycles. If you have a battle-tested library that works and has been tested for trillions of request cycles, why would you even want to write a new one that needs testing and maintenance? No matter how cheap codegen gets, it doesn't make sense. For something like a UI library, sure, build something specific to your needs.
Libraries are really built for human beings, not super-intelligent machines. ORMs exist because I don't like to, and can't, write complex SQL handling every edge case.
Same with a lot of software, software libraries are designed to work with the deficiencies of the human mind.
There's no reason to think AI needs these libraries in the same way.
I agree with part of this (see my comment above). That said, our limitations were also how we produced mathematics. Categorizing the world into fixed concepts is valuable, I'd say.
This seems like today's version of "I could write Facebook in a weekend."
What are the incentives for doing that? What are the incentives for everyone else to move?
So if proven things exist for the basics, what's the incentive not to use them? If everyone decides they're too heavy, they could make and publish new libraries, and tools would pick those up. And since they're old, the feature set is probably more nuanced than you expect. YAGNI is a motto for doing less to avoid creating code debt; writing more net-new code to avoid using a stable and proven library doesn't fit that.
Importing a library for everything will become a dated concept, similar to the idea that object-relational mappers might go away if the AI can just write ultra-complicated, hyper-efficient SQL for every query.
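For a sense of what "hand-written SQL instead of an ORM" looks like in practice, a sketch using node-postgres as the thin driver; the schema and query are invented for illustration.

```ts
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// The query an ORM would generate behind the scenes, written out directly
// (by a human or an LLM): explicit join, explicit parameters, no hidden
// N+1 lazy loading.
async function ordersForUser(userId: number) {
  const { rows } = await pool.query(
    `SELECT o.id, o.total, i.sku, i.qty
       FROM orders o
       JOIN order_items i ON i.order_id = o.id
      WHERE o.user_id = $1
      ORDER BY o.created_at DESC`,
    [userId]
  );
  return rows;
}
```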
Why does the AI need SQL queries? Who needs that? It will just write its own ACID-compliant database with its own language, and while it's at it, reinvent the operating system as well. It's turtles all the way down.
It's actually not a ridiculous concept, but I think in some ways code will go away, and the agent itself will do what the code used to do. Software of the future will be far more dynamic and on the fly. We won't have these rigid structures.
Why does the AI need hardware/chips? Why does the AI need the universe to exist? Why does the AI need math/logic to exist?
Using these preexisting things will all become outdated. You will look like primitive cavemen if your agents don't build these from scratch every time you build $NEXT_BIG_THING.
Even local LLMs will be able to build these from scratch by end of 2026.
ORMs have largely been fading away for a while because there are real wins to not using them.
Hyper-optimized HTTP request/response parsing? Yawn. Far less interesting.
AFAICT, the advantages of keeping context tight and focused have not gone away. So there would need to be pretty compelling advantages to not just doing the easy thing.
Build times too. I kinda doubt you're setting up strictly modularized and tightly controlled Bazel builds for all your stuff to avoid extra recompilation... so why are we overcomplicating the easy stuff? Just because "it will probably function just as well"?
"leftpad"-level library inanity? Sure, even less need than before (there was never much). Substantial libraries? What's the point?
Hell, some of the most-used heavily-AI-coded software is going the opposite direction and jumping through hoops to keep using web libraries for UI even though they're terminal apps.
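The "leftpad" point from a couple of paragraphs up, made concrete: what once justified an npm dependency has been a built-in one-liner since ES2017, so there is nothing left for either a library or a generated module to add.

```ts
// String.prototype.padStart replaced the entire left-pad package.
const padded = "5".padStart(3, "0");
console.log(padded); // "005"
```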
If AI makes the per-line cost of producing software cheaper but you still need an expensive human in the loop, then the per-line cost is merely cheap, not free or at the cost of electricity.
Given the choice between:
A) having one AI produce a library (which comes with tests, human-in-the-loop vetting, documentation, and examples) and having 1,000 AIs produce code using that library, which drastically increases the chance of the 1,000 AIs doing it correctly, or
B) having 1,001 AIs produce the same functionality provided by the library, probably worse on average and requiring more expensive hand-holding,
what is the benefit of B? You might get slightly higher specificity to your use case, but it's more likely that the only increased specificity is shit you didn't realize you needed yet and will have to prompt and guide the AI to produce.
I fail to see how AI would obviate the need to modularize and re-use code.
This take is just an intermediary take until AI takes over software engineering, in the same way that self-driving cars will eventually make human drivers look dangerous.
I think your thought process is not taking into account what a super-logical AI can do, and how effortlessly it could generate some of this code.
It'll just spit the code out. I vibe-coded some cookie handling the other day that worked. Should I have done it? Nope. But the AI did it and I allowed it.
The concept of using a library for everything will become outdated.
It read the library and created a custom implementation for my use case. The implementation was interoperable with a popular Next.js library. It was a hack, sure, but it also took me three minutes.
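For readers wondering what such a hack might look like, a hedged sketch of hand-rolled but interoperable cookie handling: Set-Cookie serialization in the standard RFC 6265 shape, readable by any compliant parser (or the `cookie` npm package). The helper and its options are hypothetical, not the commenter's actual code.

```ts
function serializeCookie(
  name: string,
  value: string,
  opts: { maxAge?: number; path?: string; httpOnly?: boolean } = {}
): string {
  // Standard "name=value; Attr=...; Attr" layout from RFC 6265.
  const parts = [`${name}=${encodeURIComponent(value)}`];
  if (opts.maxAge !== undefined) parts.push(`Max-Age=${opts.maxAge}`);
  parts.push(`Path=${opts.path ?? "/"}`);
  if (opts.httpOnly) parts.push("HttpOnly");
  return parts.join("; ");
}

console.log(serializeCookie("session", "abc 123", { maxAge: 3600, httpOnly: true }));
// session=abc%20123; Max-Age=3600; Path=/; HttpOnly
```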
I think the main point is that "reinventing the wheel" has become cheap, not software design itself.
For example, when a designer sends me the SVG icons he created, I no longer need to push back against just using a library. Instead, I can give these icons to Claude Code and ask it to "make it like react-icons," and an hour later my issue is solved with minimal input from me (a sketch of the kind of output is below). The LLM can use all available data, since the problem is not new.
But many software problems challenge LLMs, especially with features lacking public training data, and creating solutions for these issues is certainly not cheap.
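A sketch of the kind of output "make it like react-icons" typically produces: each designer SVG wrapped as a typed React component with pass-through props. The icon name and path data here are placeholders, not the designer's actual assets.

```tsx
import * as React from "react";

type IconProps = React.SVGProps<SVGSVGElement> & { size?: number };

// One generated component per SVG, react-icons style: sized via a prop,
// colored via currentColor so it inherits the surrounding text color.
export function ArrowIcon({ size = 24, ...props }: IconProps) {
  return (
    <svg width={size} height={size} viewBox="0 0 24 24" fill="currentColor" {...props}>
      <path d="M4 11h12l-4-4 1.4-1.4L20 12l-6.6 6.4L12 17l4-4H4z" />
    </svg>
  );
}

// Usage: <ArrowIcon size={16} className="text-gray-500" />
```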
> What matters less is the mechanical knowledge of how to express the solution in code. The LLM generates code, not understanding.
I think it's the opposite: if you have a good way to design your software (e.g., conceptual and modular), the LLM will generate the understanding as well. Design does not only mean code architecture; it also means how you express the concepts in it to a user. If software isn't really understood by humans, I doubt LLMs will be able to generate working code for it anyway, so we get a design problem to solve.
The interesting tension here is that design used to be expensive because it required holding the entire system in your head. Now AI can generate plausible designs quickly, but the hard part has shifted to evaluation: knowing which design is actually good requires the same deep understanding it always did. Cheap design generation just means you need better taste faster.
"AI is taking over senior devs' work" is like "IKEA is taking over the carpenter's moat" - no, no, and again: no way.
AI lets you do some impressive stuff; I really enjoy using it. No doubt about that.
But app development, the full Software Delivery Life Cycle - boy, is AI bad. And I mean in a very extreme way.
I talked to a carpenter yesterday about IKEA. He said people call him to integrate their IKEA stuff, especially the expensive stuff.
And AI is the same.
Configuration Handling: Works on my machine, impressive SaaS app, fast, cool, PostgreSQL etc.
And then there is the final moment: Docker, Live Server - and BOOM! deployment failed.
If you ever have to debug and fix that kind of infrastructure and deployment failure, you'll wish you were writing COBOL or x86/M68000 assembly like it's 1987 all over again - and that's if you happen to be a very seasoned senior dev with a lot of war stories to share.
If you are some vibe coder or consulting MBA - good luck.
AI fails so badly at doing certain things consistently well - and it costs companies dearly.
Firing up a Landing Page in React using some Tailwind + ShadCN UI - oh well...
Software Design, Solution Architecture - the hard things are getting harder, not cheaper.
IKEA is great - for certain use cases. It only made carpenters' work more valuable. They thrived because of IKEA; they didn't suffer. In fact, there is more work for them to do. Is their business still hard? Of course, but difficult in a different way (talent).
And to all the doomers talking about the dev apocalypse: if AI takes over software development, who is in trouble then? Just computer science and software development? Or any and every job market out there?
Think twice. Develop and deploy ten considerably complex SaaS apps using AI and tell me how it went.
Access to information got cheaper. A fool with a tool is still a fool.
The problem is that if you don’t understand the code you’re taking a risk shipping that code.
There's a reason why most vibe-coded apps I've seen leak keys and have basic security flaws all over the place (a typical example is sketched below).
If you don’t know what you’re doing and you’re generating code at scale that you can’t manage you’re going to have a bad time.
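The leaked-keys failure mode described above, next to its boring fix; the key name and endpoint are made up for illustration.

```ts
// DON'T: a secret baked into client-side source is shipped to every
// browser and visible to anyone who reads the bundle.
const API_KEY = "sk-live-abc123";

// DO: keep the secret server-side and read it from the environment,
// so it never enters the client bundle at all.
const apiKey = process.env.PAYMENT_API_KEY;
if (!apiKey) throw new Error("PAYMENT_API_KEY is not set");

const res = await fetch("https://api.example.com/charge", {
  method: "POST",
  headers: { Authorization: `Bearer ${apiKey}` },
});
console.log(res.status);
```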
The models are trained on all the slop we had to ship under time pressure and swore we’d fix later, etc. They’re not going to autocomplete the good code. They’re going to autocomplete the most common denominator code.
I don’t agree that design is cheap. Maybe for line-of-business software that doesn’t matter much.
LLMs are only as good as they are because we have such amazing, incredible open source software everywhere - because their job is to look at the kinds of really good libraries that have decades of direct and indirect wisdom poured into them, and then to be a little glue.
Yes, the LLM can go make you alternatives, and they will be mostly fine-ish in many cases. But LLMs are not about pure, endless, frivolous frontiering. They deeply reward, and are trained on, what the settlers and town planners have done (referencing Wardley here).
And they will be far better at using those good, robust, well-built tools (some of which they have latently built into their models!) than at re-learning and fine-tuning for your bespoke, weird hodgepodge solution.
Design is cheap now, sure. But good design will be ever more important. Models' ability, their capacity, is a function of the material they can work with, and I can't for the life of me imagine shorting yourself with cheap design like what's proposed here. The LLMs are very good, but honing in on good design is hard, period, and I think that judgment and character is something the next orders of magnitude of parameters is still not going to close the gap on.
Less maintenance and flexibility. You're not really "designing software" until you have a 20+ year old product.
We are maintaining and extending a bunch (around 15) of large ERP/data-type projects, all over 20 years old, with vibe coding. We have a very strict system to keep the LLMs in bounds, keep them to standards, etc., and we feel we are close to not having to read the code: we have gone over two months without having to touch anything after review. I designed most of these projects 20-30 years ago; all share the same design principles, which are well documented (by me), so the LLM just knows what to find where and what to do with it. These are large, 'living' projects (many updates over decades).
This seems like an uncharitable take. Personally I would refrain from putting a 20-year barrier around the qualification; that seems a little harsh.
Vibe coders really embody the "temporarily embarrassed billionaire" mindset so perfectly.
TFA's take makes sense in a certain context. Getting a high-quality design which is flexible in desirable ways is now easier than ever. As the human asking an LLM for the design, maybe you shouldn't be claiming to have "designed" it, though.
I was specific about the age of the product, not the age of the developer on that project. The point is that "high-quality design" is such a fleeting thing that perhaps "longevity of design" is more worth having. It's also probably the case that the latter is much harder to come by, which makes it a perfect barrier of qualification.
More to the point, how much of that profit is generated from selling those customers' data rather than earning those customers' payments?