> Video games stand out as one market where consumers have pushed back effectively
No, it's simply untrue. Players only object to AI art assets, and only when they're painfully obvious. No one cares how the code is written.
If you actually read the wording used in the Steam AI survey, you'll see Steam has completely caved on AI-generated code as well. It's specifically worded like this:
> content such as artwork, sound, narrative, localization, etc.
No 'code' or 'programming.'
If game players are the most anti-AI group then it's crystal clear that LLM coding is inevitable.
> This stands in stark contrast to code, which generally doesn't suffer from re-use at all, or may even benefit from it, if it's infrastructure.
Yeah, exactly. And LLMs help developers save time by not rewriting the same thing that has been done by other developers a thousand times. I don't know how one can spin this as a bad thing.
> Classic procedural generation is noteworthy here as a precedent, which gamers were already familiar with, because by and large it has failed to deliver.
Spore is well acclaimed. Minecraft is literally the best-selling game ever. The fact that one developer fumbled it doesn't make the idea of procedural generation bad. This is a perfect example of how a tool isn't inherently good or bad; it's up to the tool's wielder.
> Classic procedural generation is noteworthy here as a precedent, which gamers were already familiar with, because by and large it has failed to deliver.
Yes, this is a wildly uneducated perspective.
Procedural generation has often been a key component of incredibly successful, even iconic games going back decades. Elite is a canonical example here, with its galaxies being procedurally generated. Powermonger, from Bullfrog, likewise used fractal generation for its maps.
More recently, the prevalence of procedurally generated roguelikes and Metroidvanias is another point against that claim. Granted, people have got a bit bored of these now, but that's because there were so many of them, not because they were unsuccessful or "failed to deliver".
Procedural generation underlies the most popular game of all time (Minecraft) and is foundational for numerous other games of a similar type - Dwarf Fortress, et al.
And it's used to power effect where you might not expect it (Stardew Valley mines).
What procedural generation does NOT work well for is generating "story elements", though perhaps even that will fall; Dwarf Fortress already does decently enough, given that the player will fill in the blanks.
> And it's used to power effect where you might not expect it (Stardew Valley mines).
Apparently Stardew Valley's mines are not procedurally generated, but rather hand-crafted. Per their recent 10 year anniversary video, the developer did try to implement procedural generation for the mines, but ended up scrapping it:
Roguelikes/lites are one of the most popular genres of indie games nowadays. One of the genre's main characteristics is randomization and procedural generation.
I’m a hardcore roguelike player (easily over a thousand hours at least across all the games I’ve played), but even so I can admit that they have nothing compared to a well-crafted world like you’d find in From Software titles or Expedition 33, or classic Zelda games for that matter.
Making a great world is an incredibly hard task though and few studios have the capabilities to do so.
Is it wildly uneducated to not know any of the games you mentioned? I didn’t realize education covered less known video games? Wouldn’t a better example be No Man’s Sky, if we’re talking procedural gen and eventually a good game.
In any case, I agree that gamers by and large don’t care to what extent the game creation was automated. They are happy to use automated enemies, automated allies, automated armies and pre-made cut scenes. Why would they stop short at automated code gen? I genuinely think 90% wouldn’t mind if humans are still in the loop but the product overall is better.
> Is it wildly uneducated to not know any of the games you mentioned? I didn’t realize education covered less known video games?
Yes. It is "wildly uneducated" to have, and express, strong opinions about ANY field of endeavour where you are unfamiliar with large parts of that field.
> Yeah, exactly. And LLMs help developers save time by not rewriting the same thing that has been done by other developers a thousand times.
Before LLMs we already had a way to "save developers time from writing the same thing that has been done by other developers a thousand times", you know? An LLM doing the same thing for the 1001st time is not code reuse. Code reuse is code reuse.
Because code reuse is hard. Like, really hard. If it weren't, we wouldn't be laughing at left-pad. If it weren't hard, we wouldn't have so many front-end JavaScript frameworks. If it weren't hard, Unreal wouldn't still have its own GC and std-like implementation today, and Java wouldn't have reinvented its build system every five years.
The whole history of programming tools is an exploration of how to properly reuse code: are functions or objects the fundamental unit of reuse? Is diamond inheritance okay? Should a language have an official package manager? A build system? Should the C++ std have network support? How about GUI support? Should editors implement their own parsers or rely on a language server? None of these questions has a clear answer after thousands, if not millions, of smart people have attempted them (well, perhaps except the function-vs-object one).
Electron is the ultimate effort at code reuse: we reuse the tens of thousands of human-years invested in making a markup-based render engine that covers 99% of use cases. And everyone complains about it, the author of the OP article included.
LLM coding is not code reuse. It's more like throwing our hands up and admitting humans are not yet smart enough to properly reuse code, except in some well-defined low-level cases like compiling C to different ISAs. And I'm all for that.
Hard agree. Before LLMs, if there was some bit of code needed across the industry, somebody would put the effort into writing a library and we'd all benefit. Now, instead of standardizing and working together we get a million slightly different incompatible piles of stochastic slop.
I was talking about libraries, higher-level units of reuse than individual functions. And your "syntactic" vs "semantic" reuse makes zero sense. Functions are literally written and invoked for their semantics – what they make happen. "Syntactic reuse" would be macros if anything, and indeed macros are very good at reducing boilerplate.
You might have a more compelling argument if instead of syntax and semantics you contrasted semantics and pragmatics.
A library is a collection of data structures and functions. My argument still holds.
> Syntactic reuse would be macros
Well sure. My point is that what can be reused is decided ahead of time and encoded in the syntax. Whereas with LLMs it is not, and is encoded in the semantics.
> Pragmatics
Didn't know what that is. Consider my post updated with the better terms.
On the topic of procedural generation: roguelikes are all about it, the new generation of Diablo-like games definitely does similar things, and there are well-respected new games like Blue Prince. There has never been as successful a period for procedural generation in games as now, and all of this is pre-AI. AI-powered procedural generation is the wet dream of roguelike lovers.
I love procedural generation, and there is definitely a craft to it. Creating a process that generates a playable level or world is just very interesting to explore as an emergent system. I don't think LLMs will make these systems more interesting by default. Of course, there are still things to explore in this new space.
It's similar to generative/plotter art compared to a midjourney piece of slop. The craft that goes into creating the code for the plotter is what makes it interesting.
You're cherry picking. The open world games aren't as compelling anymore since the novelty is wearing off. I can cherry pick, too. For example, Starfield in all its grandeur is pretty boring.
And the users may not care about code directly, but they definitely do indirectly. The less optimized, more off-the-shelf solutions have brought a stark decrease in performance while making game development more approachable.
LLMs saving engineers and developers time is an unfounded claim, because immediate results do not mean a net positive. Actually, I'd argue that any software engineer worth their salt knows intimately that more immediate results usually come at the expense of long-term sustainability.
No one even cares how architecture is done. Unless you are the one fixing it or maintaining it.
Sorry, no one. We all know Apple did some great stuff with their code, but we care more about the awful work done on the UI, right? I mean, the UI seems not to be breaking in these new OSs, which is an amazing feature... for a game, perhaps, and most likely the code is top-notch. But we care about other things.
This is the reality, and the blind notion that so many people care about code is simply untrue. Perhaps someone putting money on developers cares, but we already have so many examples of money put on implementations no matter what the code is. We can see funds thrown everywhere at obnoxious implementations, particularly in large enterprises, sustained only by the weird ecosystem of white-collar jobs that keeps up this impression.
Very few people care about the code overall, and this can be observed very easily; perhaps it can even be proved that no other way around is possible.
This is overstating it. Computers are amazing machines, and modern operating systems are also amazing. But even they cannot completely mask the downstream effects of poor quality code.
You say you don't care, but I bet you do when you're dealing with a problem caused by poor code quality or bad choices made by the developer.
If you read the next couple of paragraphs, the author addresses this:
> That said, Steam's policy has been recently updated to exclude dev tools used for "efficiency gains", but which are not used to generate content presented to players.
I only quoted the first paragraph, but there is more.
Also "AI" has been in gaming, especially mobile gaming, for a literal decade already.
Household name game studios have had custom AI art asset tooling for a long time that can create art quickly, using their specific style.
AI is a tool and as Steve Jobs said, you can hold it wrong. It's like plastic surgery, you only notice the bad ones and object to them. An expert might detect the better jobs, but the regular folk don't know and for the most part don't care unless someone else tells them to care.
Another example is upscaled texture mods, which were a trend for a long while before large language models took off. Mods to improve a game's textures are definitely not new (and that probably means including material from other sources), but the ability to automate/industrialize the process, plus presumably a lot of available training material, meant there was a big wave of that mod category a few years back. My impression is that gamers will overlook a lot so long as it's 'free'; they are generally very anti-business (even if the industry they enjoy relies upon it), but the moment money is involved they suddenly care a lot about the whole fabric being handmade and need verification that everyone involved was handsomely rewarded.
The issue isn't objective quality or realism, it's sticking to a specific style consistently.
_Everyone_ (and their grandmother) can instantly tell a ChatGPT-generated image; it has a very distinct style, and in my experience no amount of prompting will make it go away. Same for Grok and, to a smaller degree, Google's stuff.
What the industry needs (and uses) is something they can feed, say, a wall texture into, and the AI workflow will produce summer, winter, and fall variants of it, in the exact style the specific game is using.
I think that's a different category, though. Those backgrounds are actual video recordings of real places, not 3D environments modeled from scratch. It looks 'real' because the background actually exists.
Tbf Spore's acclaim comes with the caveat that it completely failed to live up to years of pre-release hype. Much of the goodwill it's garnered since, which is reflected in review scores, only came after the storm of controversy over Spore not being "the ultimate simulator which would mark the 'end of history' for gaming" died down.
And you wouldn't really have any idea this was the case if you weren't there when it happened.
An LLM has never saved me time. It has always produced something that doesn't quite work, has the rough shape of what I want, but somehow always gets all the details wrong.
I can type up what I want much faster and be sure it's at least solving the right problem, even if it may have bugs.
There are also tools to generate boilerplate that work much much better than LLMs. And they're deterministic.
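To illustrate the contrast (a hedged, invented sketch, not any real tool's API): a deterministic boilerplate generator can be as small as a template and a loop, and the same spec always yields byte-identical code.

```python
from string import Template

# A tiny, deterministic code generator: same spec in, same code out,
# every time -- unlike sampling from an LLM.
DATACLASS_TEMPLATE = Template(
    "from dataclasses import dataclass\n\n"
    "@dataclass\n"
    "class $name:\n"
    "$fields"
)

def generate_dataclass(name: str, fields: dict[str, str]) -> str:
    """Emit a Python dataclass definition from a {field: type} spec."""
    body = "".join(f"    {field}: {typ}\n" for field, typ in fields.items())
    return DATACLASS_TEMPLATE.substitute(name=name, fields=body)

spec = {"id": "int", "email": "str"}
code = generate_dataclass("User", spec)
print(code)
# Running the generator twice produces identical output.
assert code == generate_dataclass("User", spec)
```

The determinism is the point: the generator can be reviewed once and trusted forever after.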
If you do not plan out the architecture soundly, no amount of prompting will fix it. I know this because my "handmade" project, built with backward-compatibility hacks and horrible architecture, keeps being badly "fixed" by the LLM, while the ones that rely on preemptive planning of features and architecture end up working right.
I think that's true, but something even more subtle is going on. The quality of the LLM output depends on how it was prompted in a way more profound than I think most people realize. If you prompt the LLM using jargon and lingo that indicate you are already well experienced with the domain space, the LLM will role-play an experienced developer. If you prompt it like you're a clueless PHB who's never coded, the LLM will output shitty code to match the style of your prompt. This extends to architecture: if your prompts are written with a mature understanding of the architecture that should be used, the LLM will follow suit, but if not, the LLM will just slap together something that looks like it might work but isn't well thought out.
> An LLM has never saved me time. It has always produced something that doesn't quite work, has the rough shape of what I want, but somehow always gets all the details wrong.
This reads like a skill issue on your end, in part at least in the prompting side.
It does take time to reach a point where you can prompt an LLM sufficiently well to get a correct answer in one shot, developing an intuitive understanding of what absolutely needs to be written out and what can be inferred by the model.
I’m curious about how you landed “git gud; prompt better” and not “maybe the domain I work in is a better fit for LLM code”. Or, to be a bit less generous, consider the possibility that the code you’re generating is boilerplate, marshaling, and/or API calls. A facade of perceived complexity over something that’s as complex as a filter-map or two.
> I’m curious about how you landed “git gud; prompt better” and not “maybe the domain I work in is a better fit for LLM code”.
1. Personal experience. Lazy prompting vs careful prompting.
2. They're coincidentally good at things I'm good at, and shit at things I don't understand.
3. Following from 2, when used by somebody who does understand a problem space which I do not, they easily succeed. That dog that vibe-codes games succeeded in getting Claude to write games because his master knew a thing or two about it. I, on the other hand, have no game dev experience, and almost no hobby experience with games specifically, so I struggle to get any game code that even remotely works.
When web search first arrived, the same thing happened. That is, some people didn't like using the tool because it wasn't finding what they wanted. This is still true for a lot of folks today, actually.
It's less "git gud; prompt better", and more, "be able to explain (well) what you want as the output". If someone messages the IT guy and says "hey my computer is broken" - what sort of helpful information can the IT guy offer beyond "turn it on and off again"?
In the past 2 months I've been using all the SOTA models to help me design a new DSL for narrative scripting (such as game storytelling) and a C# runtime implementation of the script player engine.
The language spec and design are about 95% authored by me up to this point; I have the LLMs work on the second layer (the implementation specs/guidelines) and the third layer (the concrete C# implementation).
Since it's a new language, I consider these somewhat novel tasks for LLMs (at least, not boilerplate stuff like an HTTP API or a CRUD service). I'd say these LLMs have been very helpful. You can tell they sometimes get confused and have trouble complying with the unfamiliar language spec and design, but they are mostly smart enough to carry out the objectives, and they got better and better once the project was on track and had plenty of files/resources to read and reference.
And I'd also say "prompt better" is an important factor, just a much more nuanced/complicated one. I started with zero experience with LLM agents, have learned a lot about how to tame them, and developed a protocol for collaborating with agents. This all came from countless trials and errors, but in the end it boils down to "prompt better".
The parent is specifically talking about producing boilerplate code (a domain in which LLMs excel) and not having had any success at it. It's therefore not a leap of logic to assume they haven't put (enough) effort into getting better at prompting first, which is perfectly fine per se, but it leans towards a skill issue rather than an immutable property of gen AI.
The uncomfortable fact remains that one cannot really expect to get much better results from an LLM without putting some work themselves. They aren't magical oracles.
> I don't know how one can spin this as a bad thing.
People spin all kinds of things if they believe (accurately or not) that their livelihood is on the line. The knee-jerk "AI universally bad" movement seems just as absurd to me as the "AGI is already here" one.
> Spore is well acclaimed. Minecraft is literally the most sold game ever.
Counterpoint: Oblivion, one of the first high-profile games to use procedural terrain/landscape generation, seemed very soulless to me at the time.
As I see it, it's all a matter of how well it's executed. In the best case, a skilled artist uses automation to fill in mechanical rote work (in the same way that e.g. renaissance artists didn't make every single brushstroke of their masterpieces themselves).
In the worst (or maybe even average? time will tell) case, there are only minimal human-made artistic decisions flowing into a work and the output is a mediocre average of everything that's already been done before, which is then rightfully perceived as slop.
> Counterpoint: Oblivion, one of the first high-profile games to use procedural terrain/landscape generation, seemed very soulless to me at the time.
Is that even a counter point? Nobody in their right mind would ever claim that procedural generation is impossible to fuck up. The reason Minecraft/etc are good examples is because they prove procedural generation can work, not that it always works.
True, I should have said "counterexample". Procedural generation is just another tool, in the end, and it can be used for great or mediocre results like any other.
Yes, but I beg to differ on the "skilled" part. I find the result very jarring somehow; the scale of the world didn't seem right. (Probably because it was too realistic; part of the art of game terrain design is reconciling the inherently unrealistic scales.)
WoW had this but you never really thought about it - even the massive capital cities were a few blocks at most.
The problem with procedural generation is it's hard to make it as action-packed and desirable as WoW zones, and even those quickly become fly-over territory.
> Yeah, exactly. And LLMs help developers save time by not rewriting the same thing that has been done by other developers a thousand times. I don't know how one can spin this as a bad thing
Do you ever ask why you're writing the same thing over and over again? That's literally the foundational piece of being an engineer; understanding when you're reinventing the wheel when there's a perfectly good wheel nearby.
It is reusable only if simply changing a, b, c is enough to give you the function that you want. An options object etc. _parameterises_ that function. It is useful only if the variability in reuse you desire is spanned by the parameters. This is syntactic reuse.
With LLMs, the parameterisation goes into semantic space. This makes code more reusable.
A model trained on all of GitHub can reuse all that code regardless of whether they are syntactically reusable or not. This is semantic reuse, which is naturally much broader.
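To make the "spanned by the parameters" point concrete, here is a hedged toy example (invented, not from the thread): a parameterised function is reusable only along the axes its author chose to expose.

```python
# Syntactic reuse: the author decided ahead of time which variation is
# allowed, and encoded it as parameters.
def pad(s: str, width: int, fill: str = " ", side: str = "left") -> str:
    if side == "left":
        return s.rjust(width, fill)
    return s.ljust(width, fill)

# Variation spanned by the parameters: reuse works.
assert pad("7", 3, fill="0") == "007"
assert pad("ab", 4, side="right") == "ab  "

# Variation NOT spanned (say, truncating when the string is too long):
# no combination of arguments gets you there; you must change the code.
```

The claim above is that an LLM, having absorbed the code's semantics rather than its signature, is not bound by that parameter list.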
There are two important failures I see with this logic:
First, I am not arguing for reusability. Reusability is one of the most common mistakes you can make as a software engineer because you are over-generalizing what you need before you need it. Code should be written for your specific use case, and only generalized as problems appear. But if you can recognize that your specific use case fits a known problem, then you can find the best way to solve that problem, faster.
Second, when you're using an LLM to make your code more 'reusable' you are taking full responsibility for everything that LLM vomits out. You're no longer assembling a car from well known parts, taking care to tailor it to your use case as needed. You're now building everything in said car, from the tires to the engine and the rearview mirror.
Coding is a constant balance between understanding what you're solving for and what can solve it. Using LLMs takes the worst of both worlds, by offloading both your understanding of the problem and your understanding of the solution.
I am not talking about using an LLM to make code reusable in the sense you're arguing.
My point is that the very act of training an LLM on any corpus of code automatically makes all of that code reusable, in a much broader semantic way rather than through syntax, because the LLM uses a compressed representation of all that code to generate the function you ask it for. It is like having an npm that has already compressed the code specific to your situation (like you were saying) into the function you want to write.
Sorry to say this feels like cope from someone who is a talented developer in more niche areas finally seeing AI start to solve problems in those areas.
Many people don't know this, but the Luddites were right. I studied Art History and this particular movement. One of the claims of the Luddites was that quality would go down, because their craft took half a lifetime to master (it was passed down from parent to child).
I was able to feel wool scarves made in Europe in the Middle Ages. (In museum storage, under the guidance of a curator.) They are a fundamentally different product from what is produced in woolen mills. A handmade (in the old tradition) woolen scarf can be pulled through a ring, because it is so thin and fine. Not so for a modern mill-made scarf.
Another interesting thing is that we do not know how they made them so fine. The technique was never recorded or documented in detail, as it was passed down from parent to child. So the knowledge is actually lost forever.
Weavers in Kashmir work at a similar level of quality, but their wool is different and their needs and techniques are different, so while we still have craftsmen who can produce wool by hand, most of the traditions and techniques are lost.
Is it a tragedy? I go back and forth. Obviously the heritage fabrics are phenomenal and luxurious. Part of me wishes that the tradition could have been maintained through a luxury sector.
Automation is never a 1:1 improvement. It's not just about the speed or process. The process itself changes the product. I don't know where we will net out on software, and I do think the complaints are justified - but the Luddites were also justified. They were *Right*. Their whole argument was that the mills could not produce fabric of the same quality. But being right is not enough.
I'm already seeing vibe-coded internal tools at an org I consult at saving employees hundreds of hours a month, because a non-technical person was empowered to build their own solution. It was a mess, and I stepped in to help optimize it, but I only optimized it partially, making it faster. I let it be the spaghetti mess it was for the most part - why? because it was making an impact already. The product was succeeding. And it was a fundamentally different product than what internal tools were 10 years ago.
Your comment made me think of the Japanese. They have a highly industrialised society, but they also greatly value handmade products, from food and clothes to woodwork and houses.
And they also like to emphasise how long it takes for someone to become a master at a given trade.
It was really eye-opening seeing that they’re able to eat raw eggs and (to maybe a lesser degree of safety) raw chicken because their society requires high standards of cleanliness in food production. We are literal cattle over here in the States.
Though, given Amodei and Altman’s behavior (along with the rest of the billionaire class) that shouldn’t be a surprise to anyone.
The only code anyone will be touching in a museum in 800 years will be the good code. I hope they don't conclude what great craftsmen we all were just because someone saw an original Fabrice Bellard at the Louvre.
Survivor bias plays a role in glorifying the past.
You're right in that we kept the best examples (as coding museums will do in the future) but the best of something is a benchmark. It is striking that modern automation, even hundreds of years later, can't touch what a skilled craftsman could do in the past.
With programming, we documented a lot of it, so it's unlikely to go the way of fine weaving. People will always be able to learn to think and be great programmers.
Maybe if the wool weavers had had the internet, they could have blogged, made YouTube videos, and catalogued their profession so it could last millennia.
Agreed. I think the good gained by wool mills (little Timmy being less likely to lose a leg to frostbite) outweighs the bad of my scarf not passing through a ring.
Long term though, I’ve always wondered if the Amish turn out to be the only survivors.
I've had this talk in mind during the past two or three years of the AI boom, and it feels like rewatching a video from the 80s about the dangers of global warming: prescient, and perhaps a bit quaint in its optimism that somehow we won't make things even worse for ourselves.
And yet in the 200 years since, human civilization has improved by every imaginable metric, in most cases by orders of magnitude. The difference between 2026 and 1826 is nearly incomprehensible. I suspect most people can scarcely imagine how horrific the average life was in 1826, relatively speaking. And between then and now were the industrial revolution, multiple world wars, and generally some of the most terrible events, crooked politicians, and life-changing technological forces. And here we are, mostly fine in most places.
I get there are many things happening today that are frustrating or moving some element of human life in negative or ambiguous directions, but we really have to keep perspective on these things.
Nearly every problem today is a problem with a solution.
The feelings of panic we have that things are going wrong are useful signals to help guide and motivate us to implement those solutions, but we really must avoid letting the doomerism dominate. Just because we hear constant negative news doesn't mean things are lost. Doesn't even mean things are bad.
It just means we have been hearing a lot of negative news.
This is what it looks like for progress to not be monotonically increasing.
A big difference is that cutting quality for the sake of mass production is a good tradeoff when it enables creating more necessities for people to live. Cutting quality to make previously deterministic software more non-deterministic does not improve anyone's life except Sam Altman's, Dario Amodei's, and the rest of the billionaire class's.
I have no doubt in the future there will be a class of vibe software and it will be known as distinctly lower quality than human understood software. I do think the example you describe is a good use of vibing. I also think tech orgs mandating 100% LLM code generation are short sighted and stupid.
A lot of this push for “slop” is downstream of our K shaped economy. Give the people more money and quality becomes a lot more important. Give them less, and you’re selling to their boss who is often insulated from the effects of low quality.
> One of the claims of the Luddites is that quality would go down, because their craft took half a lifetime to master (it was passed down from parent to chile.)
Sounds like a tautology. If you deliberately hoard knowledge of course it’s going to be hard to obtain.
You're right. Automation often trades quality for speed and quantity.
The difference between automating the creation of software and automating the creation of physical products is that software is everywhere. It is relied on for most tools and processes that keep our civilization alive. Cutting corners on that front, and deciding to entrust our collective future to tech bros and VC firms fiending for their next payout, seems like an incredibly dumb and risky proposition.
What the author and many others find hard to digest is that LLMs are surfacing the reality that most of our work is a small bit of novelty against boilerplate, redundant code.
Most of what we do in programming is some small novel idea at a high level and repeatable boilerplate at a low level. A fair question is: why hasn't the boilerplate been automated away with libraries or other abstractions? LLMs are especially good at fuzzily abstracting repeatable code, and it's simply not possible to get the same result from other, manual methods.
I empathise, because it is distressing to realise that most of the value we provide is not in those lines of code but in that small innovation at the higher layer. No developer wants to hear that; they would like to think each line is a creation from their soul.
Most of the people doing the most rote and monotonous work were and are doing so in some of the least productive circumstances, with clear ways of increasing speed and productivity.
If development velocity were truly an important factor in these businesses, we'd have migrated away from that Gang of Four-ass Java 8 codebase, given these poor souls offices, or at least cubicles, to reduce the noise, and we wouldn't make them spend 3 hours a day in ceremonial meetings.
The reason none of this happens is that even if these developers crank out code 10x faster, by the time it's made it past all the inertia and inefficiencies of the organization, the change is nearly imperceptible. Though the bill for the new office and the 2 year refactoring effort are much more tangible.
Yep. It's ridiculous to talk about 10x or 5x or 2x anything in any but the smallest companies. All this talk about programmer velocity is micro-optimizing something that's not a bottleneck.
I’ve been thinking a lot about this. I think that AI software automation tools are disproportionately more useful in greenfield work done by small or tiny organizations. By an order of magnitude, maybe 2 in some cases.
What that means is anyone’s guess, but it seems like it should result in a Cambrian explosion of disruptive new companies, limited in scope by the idea space.
The thing about small teams is, with a few exceptions, the biggest challenges are typically funnels for users and product-market fit, overcoming and exploiting network effects, etc… so even in small orgs, if you make 30 percent of the problem 4x faster/smaller, you still have the other 70 percent, which is now roughly 90% of the (slightly shrunken) problem.
This applies even more acutely in larger organizations… so for them, 99 percent of the problem remains.
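The arithmetic in the comment above is essentially Amdahl's law. A minimal sketch in Python (the function name is my own, for illustration):

```python
# Amdahl's law: overall speedup when only a fraction of the work is accelerated.
def overall_speedup(accelerated_fraction: float, local_speedup: float) -> float:
    serial = 1.0 - accelerated_fraction
    return 1.0 / (serial + accelerated_fraction / local_speedup)

# Speeding up 30% of the problem by 4x makes the whole only ~1.29x faster...
print(overall_speedup(0.30, 4.0))       # ≈ 1.29

# ...and the untouched 70% now accounts for ~90% of the remaining work.
remaining_share = 0.70 / (0.70 + 0.30 / 4.0)
print(remaining_share)                  # ≈ 0.90
```

The closer the accelerated fraction is to zero, the less the local speedup matters, which is the larger-organization point that follows.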
Intangibles in an organization like reluctance, education, and organizational inertia fill the gap left by software acceleration, and in the end you only see tiny gains, if any.
What really happened, on an organizational scale, is that software development costs went down. We wouldn’t expect a wage collapse in coding to foment an explosive revolution in company profitability or dynamism. We shouldn’t expect those things of LLM assistance.
We should look at it as a reduction in cost with potentially dangerous side effects if not managed carefully, with an especially big reduction in r&d development costs.
Libraries create boundaries, in most cases arbitrary ones, that then limit the ways you can interact with code, creating more boilerplate to get what you want out of a library.
Abstractions are the source of bloat. Without abstractions you can always reduce bloat; with them, you can reduce the bloat in your glue, but you can't reduce the glue itself.
It takes discipline to NOT create arbitrary function signatures and short-lived intermediate data structures or type definitions. This is the beginning of boilerplate.
So many advances in removing boilerplate come from realizing that your 5 function calls and 10 intermediate data structures or type definitions essentially compute a thing you can do with 0 function calls, 0 custom datatypes, and fewer lines of code.
The abstraction hides how simple the thing you want is.
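A toy sketch of that kind of collapse (Python, with hypothetical names): three layers of wrappers and two intermediate types turn out to compute a single expression.

```python
# Before: an "abstracted" pipeline with intermediate types and glue.
from dataclasses import dataclass

@dataclass
class RawRecord:
    values: list

@dataclass
class CleanRecord:
    values: list

def load(values):             # wraps the input in a type
    return RawRecord(values)

def clean(raw: RawRecord):    # filters out negatives
    return CleanRecord([v for v in raw.values if v >= 0])

def total(rec: CleanRecord):  # sums the survivors
    return sum(rec.values)

assert total(clean(load([3, -1, 4]))) == 7

# After: the whole pipeline is one expression,
# with no custom datatypes and no glue.
assert sum(v for v in [3, -1, 4] if v >= 0) == 7
```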
The problem is that nearly all open source code looks like the bloat described above, so LLMs have no idea how to write code that is free of boilerplate. The only place where I've seen it work is in shaders, which are usually written to avoid the common pitfalls of abstraction.
LLMs are incapable of writing a big program as 1 function in 1 file that does what you want. Splitting the program into functions, or even multiple files, is a step you take only after a lot of time, yet all open source code looks nothing like that.
Yep, people not understanding the value of abstraction is exactly why LLM coded apps are going to be a shit show. You could use them to come up with better abstractions, but most will not.
No, I don’t agree. Just because it’s "boilerplate", that does not mean it’s worthless or carries no novelty.
There is "boilerplate" in building many things, houses, cars, etc., where to add anything genuinely new it’s "always the same base", but you have to nail that base, and there is real value in it. With craft and deep knowledge and pride.
Every project is different, and not everything can be made from a generic off-the-shelf product.
I wrote a book a while back where I argued that coding involves choosing what to work on, writing it, and then debugging it, and that we tend to master these steps in reverse chronological order.
It's weird to look at something that recent and think how dated it reads today. I also wrote about the Turing test as some major milestone of AI development, when in fact the general response to programs passing the Turing test was to shrug and minimize it
To me, a function is a single sentence within a book. It may approach the larger picture, but that sentence can be reviewed, changed, switched around, killed by an editor.
Some programmers believe they're fantastic sentence writers. They brag about how good their sentences are; their entire worldview has been built on being good sentence creators. Especially within enterprises, you may spend your entire life writing sentences without ever really understanding the whole book.
If your worldview has been built on sentence creation, and suddenly there's a sentence creator AI, you're going to be deathly afraid of it replacing you as a sentence writer.
Both the book and the song analogies are incorrect. In the case of code, the users for whom the programmes are written are not engaging with the statements of the code; they are interacting with the interfaces the programmes provide.
This is not the same when it comes to books and music.
> A fair question is: why hasn’t the boilerplate been automated as libraries or other abstractions?
Because our ways of programming computers are still woefully inadequate and rudimentary. This is why we have tons of techniques for code reuse, yet we keep reinventing the wheel, because those techniques shatter on contact with reality. OOP was supposed to save us all in the 1990s; we've seen how that went.
In other fields we've had a lot of time to figure out basic patterns and components that can be endlessly reused. Imagine if car manufacturers had to reinvent the screw, the piston, the gear and lubricants for every new car model.
One example that has bugged me for a decade: we've been in the Internet era for decades at this point, yet we spend a lot of time reinventing communication. An average programmer can't spend two days without having to deal with JSON serialization, or connectivity, or sending notifications about the state of a process. What about adding authentication and authorization? There is a whole cottage industry to simplify something that should be, by now, almost as basic as multiplying two integers. Isn't that utter madness? It is a miracle we can build complex systems at all when we have to focus on these minutiae that pop up in every single application.
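As a small illustration of that everyday glue (Python, with a hypothetical type): even encoding a timestamp to JSON still requires a hand-written converter, because the standard `json` module has no default encoding for `datetime`.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime

@dataclass
class JobStatus:
    name: str
    finished_at: datetime

# The glue every project re-invents: teach json how to encode a datetime.
def encode(obj):
    if isinstance(obj, datetime):
        return obj.isoformat()
    raise TypeError(f"not serializable: {type(obj)!r}")

status = JobStatus("nightly-build", datetime(2024, 1, 2, 3, 4, 5))
payload = json.dumps(asdict(status), default=encode)
# payload == '{"name": "nightly-build", "finished_at": "2024-01-02T03:04:05"}'
```

Multiplying two integers needs no such ceremony; serializing a timestamp, decades in, still does.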
Now we have intelligences that can create code, using the same inadequate language of grunts and groans we use ourselves in our day to day.
> "Why hasn’t the boilerplate been automated as libraries or other abstractions?"
Because a lot of programmers don't know how to copy-paste or make packages for themselves? We have boilerplate at my work, which consists of some ready-made packages that we can drop in and tweak as needed, no LLMs required
I'm not entirely sure what you mean... If something becomes repetitive enough to be boilerplate, we can just make it into a package and keep it around for the next time
This is actually quite an insightful comment into the mindset of the tech set vs. the many writers and artists whose only 'boilerplate redundant code' is the language itself, and a loose aggregate of ideas and philosophies.
Probably the original sin here is that we started calling them programming languages instead of just 'computer code'.
Also - most of your work is far more than mere novelty! There are intangibles like your intellectual labor and time.
Abstraction isn't free... even if you had the correct abstraction and the tools to remove the parts you don't need for deployment, there is still the cost of understanding and compiling.
There is also the cost reason, somebody trying to sell an abstraction will try to monetize it and this means not everyone will want/be able to use it (or it will take forever/be unfinished if it's open/free).
There's also the platform lockin/competition aspect...
Actually I think this is one of the more tragic outcomes of the LLM revolution: it was already hard to get funding for ergonomic advances in programming before. Funding a new PL ecosystem or major library was no mean feat. Despite that, there were a number of promising advances that could have significantly raised the level of abstraction.
However, LLMs destroy this economic incentive utterly. It now seems most productive to code in fairly low level TypeScript and let the machines spew tons of garbage code for you.
We already have tools to generate boilerplate, and they work exceptionally well. The LLM just produces nondeterministic boilerplate.
I also don't know what work you do, but I would not characterize the codebases I work in as "small bits of novelty" on boilerplate. Software engineering is always a holistic systems undertaking, where every subcomponent and the interactions between them have to be considered.
FORTRAN ("formula translator") was one of the first high-level programming languages, and it was supposed to make coding obsolete. Scientists would now be able to just type in formulas and the computer would just calculate the result, imagine that!
Boilerplate has been with us since the dawn of programming.
I still think LLMs as fancy autocomplete is the truth and not even a dig. Autocomplete is great. It works best when there’s one clear output desired (even if you don’t know exactly what it is yet). Nobody is surprised when you type “cal” and California comes up in an address form, why should we be surprised when you describe a program and the code is returned?
Knowledge has the same problem as cosmology: the part we can observe doesn’t seem to account for the vast majority of what we know is out there. Symbolic knowledge encompasses unfathomable multitudes and will eventually be solved by AI, but the “dark matter” of knowledge that can’t easily be expressed in language or math is still out in the wild
Time to learn design, how to talk to customers, and how to discover unsolved problems. Used right LLMs should improve your software quality. Make stuff that matters that you can be proud of.
I don't care if LLMs are good at coding or bad at it (in my experience the answer is "it depends"). I don't care how good they are at anything else. What matters in the end is that this tech is not here to empower the common person (although it could). It is not here to make our lives better, more worthwhile, more satisfying (it could do these as well). It is here to reduce our agency, to make it easier to fire us, to put us in an even more precarious position, to funnel even more wealth from those that have little to those that have a lot.
Yet what I see are pigs discussing the usefulness of bacon-making machine just because it also happens to be able to produce tasty soybean feed. They forget that it is not soybean feed that their owner bought this machine for, and that their owner expects a return from such investment.
> It is there to reduce our agency, to make it easier to fire us, to put us in even more precarious position
Could be. It could also end up freeing us from every commercial dependency we have. Write your own OS, your own mail app, design your own machinery to farm with.
It’s here, so I don’t know where you’re going with “I’m unhappy this is happening and someone should do something”
It's unfortunate that there's mode collapse around the consensus "best way" to use these things. It's too bad we didn't have a period where they were great teachers but didn't attempt to write code, because in my opinion the ideal way to use them is not agents mass-producing sloppy, buggy, disorganized code, but to teach you things way faster than the old alternatives, rubber-duck, and occasionally write snippets of functions when your brain is too tired, or it's throwaway CLI code, or some API you're not familiar with.
> It’s too bad we didn’t have a period where these things were great teachers but didn’t attempt to write code
The period is now. Just add "be a great teacher but don't attempt to write code" in the prompt.
(yes, it's a teacher who gets things wrong from time to time. You still need to refer to the source and ground truth just like when you're taught by a human teacher.)
> to teach you things way faster than the old alternatives
I'm not sure if you ever had a teacher or instructor you didn't trust, because they were a compulsive liar, or had an addiction, or any other issue. I didn't (at least not that I can remember), but I know I would be VERY on guard about it. I imagine I would consequently be quite stressed learning with them, even if they were brilliant, kind, etc.
It would feel a bit like walking on thin ice to get to a beautiful island. Sure, it's not infeasible and if you somehow make it, it might be worth the risk, but honestly wouldn't you prefer a slower boat?
I feel like this is partially a skill issue - you can get direct, cited information from LLMs. There's a level of personal responsibility for over-using the tools and letting them feed you bad/false information, but if you try researching specific abstractions or newer documentation, most LLMs now correctly call and research the tools available, directly citing them.
I think you can build a very easy workflow that reinforces rather than replaces learning, I've used a citation flow to link and put into practice a ton of more advanced programming techniques, that I found incredibly difficult to locate and research before AI.
I'd say the comparison is faulty; it's more akin to swimming to an island (no-AI) vs using a boat. You control the speed and direction of the boat, which also means you have the responsibility of directing it to the correct location.
The analogy was about the unknown thinness of the ice, not just the fastest way to get there. It's specifically about the lack of reliability of the process.
Yes, I was disagreeing with the premise of the analogy - what would the slow boat in this case be? In my experience, going through software engineering before AI, you'd get lost on the ice, with nobody to really help you get out.
If you get lost on the ice and you have someone who confidently tells you the path but is sometimes wrong, is it actually helpful?
PS: sorry if the analogy is a bit wonky but it's quite dear to me as I do ice skating on frozen lakes and it's basically a life or death information "game" that I can relate to. It might not be a great analogy for others.
Haha it's a good analogy, i'm being a little bit argumentative for the sake of it potentially.
I guess in my view - the main alternative you'd have beforehand is just to drown.
For me, AI sits in a space where if you know how to use it, it can tell you all the thin spots of the ice accurately. You can then verify those spots, but there's a level of personal responsibility of verification.
I'd agree there's currently a ton of people using these tools to essentially just find the specific route - but I'd argue those people probably shouldn't be skating in the first place, and would've fallen in one way or the other.
> AI sits in a space where if you know how to use it, it can tell you all the thin spots of the ice accurately. You can then verify those spots, but there's a level of personal responsibility of verification.
Right, but AFAICT most people just venture over the ice and don't bother to check. In fact a lot of people venture there, do check once or twice, then check less and less frequently. The fact that you do it is great but others seem a lot less careful, until cracks start to show and then it might be too late.
I'd only argue that people were doing this before AI; slop development was just copy-pasting from the first Stack Overflow answer that matched the question rather than thinking
So I'd argue there's a part of it that is just personal responsibility in how these tools are used
I agree, it can be incredibly frustrating at times. My rule is that if it “compiles” in my brain as an understood idea then I accept it. I also push back a lot (sometimes it points out good errors in my thinking, sometimes it admits it hallucinated). Real humans hallucinate a lot as well, or confidently state subtly wrong ideas, so it’s a good habit anyway. It’s basically the same approach as when presented with a “formula” for something in school. If I don't know how to derive/prove it, then I don't accept it as part of my memorized or accepted toolkit (and try to forget it). If it fits with the rest of my network of understood ideas, I do. It’s annoying but still more time efficient than trawling through lecture slides with domain-specific language etc
> Real humans hallucinate a lot as well or confidently state subtly wrong ideas, it’s a good habit anyway.
I think that's actually deeply different. If a human keeps apologizing because they were caught in a lie, or just a mistake, you distrust them a LOT more. It's not normal to shrug off a problem and then REPEAT it.
I imagine the cost of a mistake is exponential, not linear. So when somebody says "oops, you got me there!" I don't mistrust them just marginally more, I distrust them a LOT more and it will take a ton of effort, if even feasible, to get back to the initial level of trust.
I do not think it's at all equivalent to what "real humans" do. Yes, we do make mistakes, but the humans you trust and want to partner with are precisely the ones who are accountable when they make mistakes.
The framing of an LLM's response as truth vs lie is in itself incorrect.
In order to lie, one needs to understand what truth and objective reality are.
Even with people, when a flat-earther tells you the earth is flat, they're not lying, they're just wrong.
All LLM output is speculation. All speculation, by definition, has some probability of being incorrect.
---
We can go even deeper in a philosophical sense. If I made the audacious claim that 2 + 2 = 4, I may think it's true, but I'm still speculating that the objective reality I experience is the same one others also experience, and that my senses and mental faculties, and therefore the qualia making up my reality, are indeed intact, correct, and functional. So is there a degree of speculation when I make that claim?
Regardless, I am able to agree upon a shared reality with the rest of the world, and I also share a common understanding of truth and untruth. If I lied, it can only be caused by an intention to mislead others. For example, if I claimed to be the president of the United States, of course that would be incorrect (thankfully!), but since we all agree that no one reading this post would actually be misled into thinking I am the POTUS, then it isn't a lie. Perhaps sarcasm, a failed attempt at humor, or just trolling. It is untruth, but it isn't a lie; no one was misled. You need intent (which an LLM isn't capable of), and that intent needs to be, at least in part, an intent to mislead.
> This sort of protectionism is also seen in e.g. controlled-appelation foods like artisanal cheese or cured ham. These require not just traditional manufacturing methods and high-quality ingredients from farm to table, but also a specific geographic origin.
Maybe "Artisanal Coding" will be a thing in the future?
The 'Handmade Network' is essentially this (in a good way though) - and long before LLMs got good enough for code-generation - instead as a counter philosophy to the soulless "enterprise software development" where a feature that could be implemented in 10 lines of code is wrapped in 1000 lines of "industry-best-practices" boilerplate.
Programming via LLMs is just the logical conclusion to this niche of industrialized software development which favours quantity over quality. It's basically replacing human bots which translate specs written by architecture astronauts into code without having to think on their own.
And good riddance to that type of 'spec-in-code-out' type of programming, it should never have existed in the first place. Let the architecture astronauts go wild with LLMs implementing their ideas without having to bother human programmers who actually value their craft ;)
I'm kinda leaning towards the analogy that LLMs are to programming as textile machines were to the loom.
People still pay for hand-knit fabrics (there's one place in Italy that makes silk by hand and it costs 5 figures per foot), but the vast majority is machine made.
Same thing will happen to code, unless the bubble bursts really badly. Most bulk API-glue CRUD stuff and basic web UI work will be mostly automated and churned out by agentic production lines.
But there will still be a market for that special human touch in code, most likely when you need safety/security or efficiency/speed.
Like you can still make Karelian pies[0] anywhere, but unless you follow the exact recipe, you can't sell them as "Karelian pies". It's good for the heritage and good for the customers.
You can also make any cheeses and wines and whatever you like, it's just how you name them and market them that's regulated.
If you consider that only the product is relevant and not how it is made, then no, it does not matter; or at least it doesn't matter as long as you don't personally attach any emotional qualities to products beyond their material qualities (unlike the vast majority of people).
But the comment you reply to explicitly points out the process is in fact relevant as it is itself a cultural artifact. You're not replying to their main point.
The main point is "It's good for the heritage and good for the customers."
How are the customers hurt if their pie has not been baked by a babushka in Petrozavodsk using the old original recipe, but by an anonymous migrant worker in a dark kitchen using an optimized recipe if the end result is objectively the same? The packaging doesn't have to say who it was made by.
I also don't see the problem with the heritage. The comment I replied to already said anyone could call their pies Karelian, so there was no restriction that benefitted the residents of a specific region. I can see a PDO-like carveout that goes "we want to preserve the traditional pie-making of Karelia, so we want this activity to remain economically viable. Therefore, only pies baked in Karelia can be sold as Karelian pies." But I don't see how Sysco baking the same pies and distributing them nationwide helps maintain the heritage.
It seems that hallucinations and lying have increased over time; it’s very different now from what it was 2 years ago. Is this because of training bias?
Is there any research data on the dynamics over the past few years?
There’s a cold reality that we in this profession have yet to accept: nobody cares about our code. Nobody cares whether it’s pretty or clever or elegant. Sometimes, rarely, they care whether it’s maintainable.
We are only craftsmen to ourselves and each other. To anyone else we are factory workers producing widgets to sell. Once we accept this then there is little surprise that the factory owners want us using a tool that makes production faster, cheaper. I imagine that watchmakers were similarly dismayed when the automatic lathe was invented and they saw their craft being automated into mediocrity. Like watchmakers we can still produce crafted machines of elegance for the customers who want them. But most customers are just going to want a quartz.
I will just copy-paste my comment from another thread, but it's still very relevant:
Coding isn’t creative, it isn’t sexy, and almost nobody outside this bubble cares
Most of the world doesn’t care about “good code.” They care about “does it work, is it fast enough, is it cheap enough, and can we ship it before the competitor does?”
Beautiful architecture, perfect tests, elegant abstractions — those things feel deeply rewarding to the person who wrote them, but they’re invisible to users, to executives, and, let’s be honest, to the dating market.
Being able to refactor a monolith into pristine microservices will not make you more attractive on a date. What might is the salary that comes with the title “Senior Engineer at FAANG.” In that sense, many women (not all, but enough) relate to programmers the same way middle managers and VCs do: they’re perfectly happy to extract the economic value you produce while remaining indifferent to the craft itself. The code isn’t the turn-on; the direct deposit is.
That’s brutal to hear if you’ve spent years telling yourself that your intellectual passion is inherently admirable or sexy. It’s not. Outside our tribe it’s just a means to an end — same as accounting, law, or plumbing, just with worse dress code and better catering.
So when AI starts eating the parts of the job we insisted were “creative” and “irreplaceable,” the threat feels existential because the last remaining moat — the romantic story we told ourselves about why this profession is special — collapses. Turns out the scarcity was mostly the paycheck, not the poetry.
I’m not saying the work is meaningless or that system design and taste don’t matter. I’m saying we should stop pretending the act of writing software is inherently sexier or more artistically noble than any other high-paying skilled trade. It never was.
This is just sad. If your passion for creating something you can be proud of is entirely propped up by imaginary sex appeal that not even most teenagers would believe exists, it's no surprise you'd arrive at such a cynical, pathetic conclusion.
Your perspective is a path with only one logical end. That nothing you do or think or believe matters unless someone you're attracted to finds it attractive.
That is not how I or most others live. We take pride in and derive satisfaction from our accomplishments without the need for external validation.
Yeah, only I care whether the solution I found to a problem today was elegant, or whether my kitchen was pristine and well organized after I prepped for next week's lunches, but so what? I care and it injects more than enough meaning into my life to be worth it.
What weirds me out is that it seems few US corporations care that they don't have copyright to their synthesised code, if the rumours regarding this are correct.
If you don't have the copyright, then you can't license or litigate it under the common rules of software. If someone 'steals' it you can at best go after them with some trade secret case, and I suspect this would be limited if you had already shared the code with them, e.g. because they helped you synthesise it.
I don't agree with many statements in the article. It almost seems like an article from about a year ago, despite it being posted yesterday. Not sure if the author had the idea a long time ago and just took his time to finish it up, but the "vibe-coding" he describes surely isn't the current way of using LLMs in a codebase.
While LLMs are surely used to generate a lot of slop-code and overwhelm (open source) code bases, this surely isn't the only thing they can do. I dislike discussing the potential of a technology exclusively by looking at its negative impact.
LLMs in proper hands don't create code which is "stolen", they also shouldn't create unnecessary code and definitely don't remove any of the ownership of the programmer, at least not any more than using a mighty IDE does.
The problem seems to be in the usage of LLMs. These effects definitely do happen when just releasing an agent on a codebase without any oversight. But they can also largely be mitigated by using frameworks such as Openspec or Spec-Kit, properly designing a spec, plan, granular tasks and manually reviewing all code yourself. The LLM should not be responsible for any creative idea, it should at most verify the practicality against the codebase. When doing that, the entire creative control is in the hands of the programmer and so is the mechanical execution. The LLM is reduced to a very powerful autocomplete with a strict harness around it. Obviously this also doesn't lead to 10x or even 100x improvements in speed like some AI merchants promise, but in my personal experience the speedup is still significant enough to make LLMs a very, very useful technology.
The author's logic only works for software engineers, and as I have said time and time again: software engineers have been automating people out of their passions for decades, and now it has come for yours... The lying here is LYING TO YOURSELVES.
>If you ask me, no court should have ever rendered a judgement on whether AI output as a category is legal or copyrightable, because none of it is sourced. The judgement simply cannot be made, and AI output should be treated like a forgery unless and until proven otherwise.
Guilty until proven innocent will satisfy the author's LLM-specific point of contention, but it is hardly a good principle.
You are missing the point of the author. He literally said no court should have rendered a judgement, that's the exact opposite of guilty until proven innocent. Guilty means a court has made a judgement.
He is proposing not to make a judgement at all. If the AI company CLAIMS something, they have to prove it, like they do in science. Any claim is treated as such: a claim. The trick is to not claim anything at all and let the users come to the conclusion on their own that it's magic. And it's true that LLMs by design cannot cite sources. Thus they cannot, by design, tell you if they made something up with disregard for whether it makes sense or works, if they just copy-pasted it (something that either works or is crap), or if they somehow created something new that is fantastic.
All we ever see are the success stories. The success after the n-th try and tweaking of the prompt and the process of handling your agents the right way. The hidden cost is out there, barely hidden.
This ambiguity is benefitting the AI companies and they are exploiting it to the maximum. Going even as far as illegally obtaining pirated intellectual property from an entity that is banned in many countries on one end of their utilization pipeline and selling it as the biggest thing ever at the other end. And yes, all the doomsday stories of AI taking over the world are part of the marketing hype.
> Engineers who know their craft can still smell the slop from miles away when reviewing it, despite the "advances" made. It comes in the form of overly repetitive code, unnecessary complexity, and a reluctance to really refactor anything at all, even when it's clearly stale and overdue.
I’ve seen reluctance to refactor even 10+-year-old garbage long before LLMs were first made available to the broader public.
If it's lasted 10 years and someone is still using it after all that time, that seems like a pretty good signal there's a lot of value in the 'garbage'?
I've seen a lot of 'fixes' for 10 year old 'garbage' that turned out to be regressions for important use cases that the author of the 'fix' wasn't aware of.
LLM-generated snippets of code are a breath of fresh air compared with much legacy code. Since models learn probability distributions they gravitate to the most common ways of doing things. Almost like having a linter built in. On the other hand, legacy code often does things in novel ways that leave you scratching your head--the premise behind sites like https://thedailywtf.com/
> Whether something is a forgery is innate in the object and the methods used to produce it. It doesn't matter if nobody else ever sees the forged painting, or if it only hangs in a private home. It's a forgery because it's not authentic.
On a philosophical level I don't get the discussions about paintings. I love a painting for what it is, not for being the first or the only one. An artist who paints something that I can't distinguish from a Van Gogh is a very skillful artist, and the painting is very beautiful. Me labeling it "authentic" or not should not affect its artistic value.
For a piece of code you might care about many things: correctness, maintainability, efficiency, etc. I don't care if someone wrote bad (or good) code by hand or uses LLM, it is still bad (or good code). Someone has to take the decision if the code fits the requirements, LLM, or software developer, and this will not go away.
> but also a specific geographic origin. There's a good reason for this.
Yes, but the "good reason" is more probably people's desire to have monopolies and to resist change. Same as with the paintings: if the cheese is 99% the same, I don't care whether it was made in a particular region or not. Of course the region is happy, because it means more revenue for them, but I'm not sure it is good.
> To stop the machines from lying, they have to cite their sources properly.
I would be curious how this could be applied to a human. Should we also cite all the courses and articles that we have read on a topic when we write code?
>Me labeling "authentic" it or not should not affect it's artistic value.
The problem with automated imitation generators is that they can produce thousands of paintings that imitate Van Gogh but do not have the same soul.
It is the same reason why these things cannot create genuinely funny jokes. They cannot assess the funniness of the jokes themselves. They cannot feel, and cannot do the filtering based on emotion.
It is easy to recognize the emptiness of a joke, but not so easy for a painting, or some other form of art.
This is why it will never work for art. But the sad thing is that this will not stop them from being used to create art, because it just needs to sell.
I would say that for art, at least for most of the movies, music etc, this was already the case. So nothing much to lose.
Regarding art, what do you feel about museums? Why would you go see an original instead of simply looking at a jpg.
Even if you aren't in the group, there is clearly a group of people who appreciate seeing the original, the thing that modified our collective artistic trajectory.
Forgeries and master studies have a long history in art. Every classically trained artist worth their salt has a handful of forgeries under their belt. Remaking work you enjoy helps you appreciate it further, understand the choices the artist made, and get a better feel for how they wielded the medium. Though these forgeries are for learning, not intended to be pieces in their own right.
> Regarding art, what do you feel about museums? Why would you go see an original instead of simply looking at a jpg.
I go to a museum to see a curated collection with explanations, in a place that prevents distractions (I can't open a new tab), with people who might be interested in talking about what they see and feel. It's a social and personal experience on top of information gathering.
> there is clearly a group of people who appreciate seeing the original,
There are many people interested in many things; do you want to say that "because some people think it is important, it must be important"? There have been many people with really weird and despicable ideas throughout history, and while I am neutral on this one, numbers alone don't convince me.
> simply looking at a jpg.
Technically a jpg would not work because it uses lossy compression. A png at the correct resolution might do the trick for some things (paintings you view from afar), but not for others. Museums have many objects that would be hard to capture in an image (statues, clothes, bones, tables, etc.). You definitely can't put https://en.wikipedia.org/wiki/Comedian_(artwork) in a jpg - but the discussion surrounding it touches topics discussed here.
The value of a piece is definitely not completely tied to its physical attributes, but the story around it. The story is what creates its scarcity and generates the value.
It is similar for collectible items. If I had in my possession the original costume that Michael Jackson wore in Thriller, I am sure I could sell it for thousands of dollars. I can also buy a copy for less than a hundred.
Same with luxury brands. Their price is not necessarily linked to their quality, but to the status they bring and the story they tell (i.e. wearing this transforms me into somebody important).
It can seem quite silly, but I think we are all doing it to some extent. While you said that a good forgery shouldn't affect one's opinion of the object (and I agree with you), what about AI-generated content? If I made a novel painting in the style of Van Gogh, you might find it beautiful. What if I told you I just prompted it rather than painted it? What if I just printed it? There are levels of involvement that each of us is willing to accept differently.
> An artist that paints something that I can't distinguish from a Van Gogh is a very skillful artist and the painting is very beautiful.
There are a lot of such artists who can do that after having seen Van Gogh's paintings. Only Van Gogh (as far as we know) painted those without having seen anything like them before - in other words, he had a new idea.
So, if we apply to software, should we quote Dijkstra each time we use his graph algorithm?
Should we also say that implementing Dijkstra's algorithm is irrelevant because "you did not have the idea"?
It's great to credit people who have an idea first, but I fail to see how using an idea is "bad" or "not worthy". Ideas should be spread and used, not locked up by the first person who had them (except maybe for a small time period).
Even the mechanical skill of painting gets a lot harder without an example to look at. Most people can get pretty good at painting from example within a year or two but it’s a big leap to simply paint from memory, much less create something original.
> I would be curious how can this be applied to a human? Should we also cite all the courses, articles that we have read on a topic when we write code?
Yeah, this is the kind of counterproductive BS that irrational radicals try to push the crowd towards.
The idea that someone owns your observations of their work and can collect rent on them is absurd.
I see a future where I program at work less, which is sad but c'est la vie. I think the challenge of the job will be heralding and managing my own context for larger codebases managed by smaller teams, and finding ways to allow for more experimental/less verified code in prod. And plenty of consulting work for companies which have vibe coded their business and who are left with a totally fucked data model (if not codebase).
I think it says a lot about this opinion piece that the people agreeing with it are posting short comments saying "So true!" and "Great!" whilst the people criticizing it are writing paragraphs of well-spoken criticism.
Claude makes me mad:
even when I ask for small code snippets to be improved, it increasingly starts to comment on "what I could improve" in the code instead of generating the embarrassingly easy code with the improvement itself.
If I point that out with something like "include that yourself", it does a decent job.
Enforce this with deterministic guardrails. Use the strictest linting config you possibly can, and even have it write custom, domain-specific linters for things that must never happen. Then you won't have to hand-hold it as much.
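A concrete sketch of such a domain-specific linter, assuming a Python codebase (the module paths and the rule itself are made up for illustration):

```python
# Hypothetical custom lint check: flag any call to .execute() (raw SQL)
# outside an approved data-access module, so an agent can't sneak database
# access past the module boundary. Names below are illustrative.
import ast

ALLOWED_MODULES = {"app/db/repository.py"}  # assumption: your data-access layer

def find_raw_sql_calls(source: str, filename: str) -> list[int]:
    """Return line numbers where .execute() is called in a disallowed file."""
    if filename in ALLOWED_MODULES:
        return []
    tree = ast.parse(source)
    violations = []
    for node in ast.walk(tree):
        # Any method call named `execute` counts as a violation here.
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "execute"):
            violations.append(node.lineno)
    return violations

bad = 'def load(cur):\n    cur.execute("SELECT * FROM users")\n'
print(find_raw_sql_calls(bad, "app/services/users.py"))  # → [2]
```

Run a check like this in CI alongside the normal linter and the agent gets immediate, deterministic feedback instead of a human scolding.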
That's a problem with any self-improving tools, not just LLMs. Successful self-improvement leads to efficiency, which is just another name for laziness.
Hello, I am a single dev using an agent (Claude Code) on a solo project.
I have accepted that reading 100% of the generated code is not possible.
I am attempting to find methods that allow clean code to be generated nonetheless.
I am using extremely strict DDD architecture. Yes it is totally overkill for a one man project.
Now I only have to be intimately familiar with two parts of the code:
* the public facade of the modules, which also happens to be the place where authorization is checked.
* the orchestrators, where multiple modules are tied together.
If the internals of a module are a little sloppy (code duplication and the like), it is not really an issue, as they have no effect at a distance on the rest of the code.
I have to be on the lookout, though. The agent sometimes tries to break the boundaries between modules, cheating its way through with things like direct SQL queries.
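As a rough illustration, the facade idea looks something like this (a minimal Python sketch; the module, names, and roles are invented, not from my actual project):

```python
# Sketch of a module's public facade: the single entry point where
# authorization is checked, shielding the (possibly sloppy) internals.
class AuthorizationError(Exception):
    pass

class InvoiceFacade:
    """Public entry point for a hypothetical invoicing module."""

    def __init__(self, current_user_roles: set[str]):
        self._roles = current_user_roles

    def _require(self, role: str) -> None:
        if role not in self._roles:
            raise AuthorizationError(f"missing role: {role}")

    def issue_invoice(self, customer_id: int, amount: float) -> dict:
        self._require("billing")                  # authorization lives here
        return self._create(customer_id, amount)  # internals stay private

    def _create(self, customer_id: int, amount: float) -> dict:
        # Module internals: free to be duplicated or messy, since nothing
        # outside the module is supposed to call them directly.
        return {"customer": customer_id, "amount": amount, "status": "issued"}

facade = InvoiceFacade({"billing"})
print(facade.issue_invoice(42, 99.5)["status"])  # → issued
```

The point is that `_create` and its siblings can stay as sloppy as the agent leaves them, as long as every caller goes through the facade.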
Sigh. Another one standing on the train tracks giving the approaching train a good scolding. First this article tries to equate AI-generated code with "forgery". Please, tell me how you "forge math". Next, it makes a little dig at senior engineers who use LLMs, because they must not realize that "every line of code is a liability". No no, senior engineers realize this, but they are also adept at observing successes and failures and coming up with a mental model for risk. That's part of keeping an application running; otherwise we'd all still be using jQuery and left-pad. We made the jump to React because we recognized that these NEW lines of code were far more valuable than their "liability". Somehow the author decided to store "liability" in a boolean. Oh, was AI involved, or is that a genuine human error? Next the article makes a tired appeal to the fact that LLMs are trained on open-source code and are therefore "plagiarizing" this code constantly. This is where the train comes around the mountain. So when the AI generates Carmack's Reverse, is it plagiarizing Carmack or the book he got the idea from? In what percentages? And what do I do with this valuable insight? Send Carmack $0.01 in an envelope for the privilege? In short, I don't know what the author wants, but I hope writing this helped.
A short design note and tribute to Richard Stallman (RMS) and St. IGNUcius for the term Pretend Intelligence (PI) and the ethic behind it: don’t overclaim, don’t over-trust, and don’t let marketing launder accountability.
Richard Stallman proposes the term Pretend Intelligence (PI) for what the industry calls “AI”: systems that pretend to be intelligent and are marketed as worthy of trust. He uses it to push back on hype that asks people to trust these systems with their lives and control.
From his January 2026 talk at Georgia Tech (YouTube, event, LibreTech Collective):
> "So I've come up with the term Pretend Intelligence. We could call it PI. And if we start saying this more often, we might help overcome this marketing hype campaign that wants people to trust those systems, and trust their lives and all their activities to the control of those systems and the big companies that develop and control them." — Richard Stallman, Georgia Tech, 2026-01-23. Source: YouTube (full talk) — "Dr. Richard Stallman @ Georgia Tech - 01-23-2026," Alex Jenkins, CC BY-ND 4.0; transcript in video description.
So PI is both a label (call it PI, not AI) and a stance: resist the campaign to make people trust and hand over control to systems and vendors that don’t deserve that trust. In MOOLLM we use the same framing: we find models useful when we don’t overclaim — advisory guidance, not a guarantee (see MOOAM.md §5.3).
[...]
Richard Stallman critiques AI, connected cars, smartphones, and DRM (slashdot.org)
42 points by MilnerRoute 38 days ago | 10 comments
> Open source software maintainers have been one of the first to feel the downsides. ... The last thing they needed was to receive slop-coded pull requests from contributors merely looking to cheat their way into having a credible GitHub resumé... As a result, projects have closed down public contributions and dropped their bug bounties...
Has this really been people's experience?
I develop and maintain several small FOSS projects, some of which are moderately popular (e.g. 90,000-user Thunderbird extension; a library with 850 stars on GitHub). So, I'm no superstar or in the center of attention but also not a tumbleweed. I've not received a single AI-slop pull request, so far.
Am I an exception to the rule? Or is this something that only happens for very "fashionable" projects?
No, it's simply untrue. Players only object against AI art assets. And only when they're painfully obvious. No one cares about how the code is written.
If you actually read the words used in Steam AI survey you'll know Steam has completely caved in for AI-gen code as well. It's specifically worded like this:
> content such as artwork, sound, narrative, localization, etc.
No 'code' or 'programming.'
If game players are the most anti-AI group then it's crystal clear that LLM coding is inevitable.
> This stands in stark contrast to code, which generally doesn't suffer from re-use at all, or may even benefit from it, if it's infrastructure.
Yeah, exactly. And LLMs help developers save time by not writing the same thing that has been done by other developers a thousand times. I don't know how one can spin this as a bad thing.
> Classic procedural generation is noteworthy here as a precedent, which gamers were already familiar with, because by and large it has failed to deliver.
Spore is well acclaimed. Minecraft is literally the best-selling game ever. The fact that one developer fumbled it doesn't make the idea of procedural generation bad. This is a perfect example of how a tool isn't inherently good or bad: it's up to the tool's wielder.
Yes, this is a wildly uneducated perspective.
Procedural generation has often been a key component of some incredibly successful, and even iconic, games going back decades. Elite is a canonical example here, with its galaxies being procedurally generated. Powermonger, from Bullfrog, likewise used fractal generation for its maps.
More recently, the prevalence of procedurally generated rogue-likes and Metroidvanias is another point against. Granted, people have got a bit bored of these now, but that's because there were so many of them, not because they were unsuccessful or "failed to deliver".
And it's used to good effect where you might not expect it (Stardew Valley's mines).
What procedural generation does NOT work well for is generating "story elements", though perhaps even that will fall; Dwarf Fortress already does decently enough, given that the player will fill in the blanks.
Apparently Stardew Valley's mines are not procedurally generated, but rather hand-crafted. Per their recent 10 year anniversary video, the developer did try to implement procedural generation for the mines, but ended up scrapping it:
https://www.stardewvalley.net/stardew-valley-10-year-anniver...
In any case, I agree that gamers by and large don’t care to what extent the game creation was automated. They are happy to use automated enemies, automated allies, automated armies and pre-made cut scenes. Why would they stop short at automated code gen? I genuinely think 90% wouldn’t mind if humans are still in the loop but the product overall is better.
Yes. It is "wildly uneducated" to have, and express, strong opinions about ANY field of endeavour where you are unfamiliar with large parts of that field.
Before LLMs we already had a way to "save developers time from writing the same thing that has been done by other developers a thousand times", you know? An LLM doing the same thing for the 1001st time is not code reuse. Code reuse is code reuse.
The whole history of programming tools is an exploration of how to properly reuse code: are functions or objects the fundamental unit of reuse? Is diamond inheritance okay? Should a language have official package management? A build system? Should the C++ standard library have network support? How about GUI support? Should editors implement their own parsers or rely on a language server? None of these questions has a clear answer, even after thousands if not millions of smart people have attempted them (well, perhaps except the functions-vs-objects one).
Electron is the ultimate effort at code reuse: we reuse the tens of thousands of human-years invested in making a markup-based render engine that covers 99% of use cases. And everyone complains about it, the author of the OP article included.
LLM coding is not code reuse. It's more like throwing our hands up and admitting that humans are not yet smart enough to properly reuse code, except for some well-defined low-level cases like compiling C to different ISAs. And I'm all for that.
https://news.ycombinator.com/item?id=47260385
You might have a more compelling argument if instead of syntax and semantics you contrasted semantics and pragmatics.
> Syntactic reuse would be macros
Well sure. My point is that what can be reused is decided ahead of time and encoded in the syntax. Whereas with LLMs it is not, and is encoded in the semantics.
> Pragmatics
Didn't know what that is. Consider my post updated with the better terms.
I love procedural generation, and there is definitely a craft to it. Creating a process that generates a playable level or world is just very interesting to explore as an emergent system. I don't think LLMs will make these system more interesting by default. Of course there are still things to explore in this new space.
It's similar to generative/plotter art compared to a midjourney piece of slop. The craft that goes into creating the code for the plotter is what makes it interesting.
And users may not care about code directly, but they definitely do indirectly. Less optimized, more off-the-shelf solutions have brought a stark decrease in performance, while allowing game development to be more approachable.
LLMs saving engineers and developers time is an unfounded claim, because immediate results do not mean a net positive. Actually, I'd argue that any software engineer worth their salt knows intimately that faster immediate results usually come at the expense of long-term sustainability.
I would state it even more strongly:
No one even cares how the architecture is done. Unless you are the one fixing it or maintaining it.
Sorry, no one. We all know Apple did some great stuff with their code, but we care more about the awful work done on the UI, right? I mean, the UI seems not to be breaking in these new OSs, which is an amazing feature... for a game perhaps, and most likely the code is top notch. But we care about other things.
This is the reality, and the blind notion that so many people care about code is simply untrue. Perhaps someone putting money on developers cares, but we already have so many examples of money put into implementations no matter what the code is. We can see funds thrown everywhere at obnoxious implementations, particularly in large enterprises, sustained only by the weird ecosystem of white-collar jobs that keeps this impression going.
Very few people care about the code overall, and this can be observed very easily; perhaps it can even be proved that it couldn't be any other way.
You say you don't care, but I bet you do when you're dealing with a problem caused by poor code quality or bad choices made by the developer.
> That said, Steam's policy has been recently updated to exclude dev tools used for "efficiency gains", but which are not used to generate content presented to players.
I only quoted the first paragraph, but there is more.
Household name game studios have had custom AI art asset tooling for a long time that can create art quickly, using their specific style.
AI is a tool and as Steve Jobs said, you can hold it wrong. It's like plastic surgery, you only notice the bad ones and object to them. An expert might detect the better jobs, but the regular folk don't know and for the most part don't care unless someone else tells them to care.
And then they go around blaming EVERYTHING as AI.
_Everyone_ (and their grandmother) can instantly tell a ChatGPT-generated image; it has a very distinct style, and in my experience no amount of prompting will make it go away. Same for Grok and, to a smaller degree, Google's stuff.
What the industry needs (and uses) is something they can feed, say, a wall texture into, and the AI workflow will produce summer, winter, and fall variants of it - in the exact style the specific game is using.
"So you hated the TV Series Ugly Betty then?"
"What? that's not CGI!"
This video is 15 years old
https://www.youtube.com/watch?v=rDjorAhcnbY
In that specific 15 year old example they're mostly composited, you're right about that.
https://www.youtube.com/watch?v=RxD6H3ri8RI
His Blender Conference talk about photogrammetry / camera projection / projection mapping was fantastic:
World Building in Blender - Ian Hubert
https://www.youtube.com/watch?v=whPWKecazgM
Spore was fun (IMHO), but at the time of release it was considered a disappointment compared to its hype.
And yet it also effectively ended Will Wright's career. Rave press reviews are not a good indicator of anything, really.
And you wouldn't really have any idea this was the case if you weren't there when it happened.
I can type up what I want much faster and be sure it's at least solving the right problem, even if it may have bugs.
There are also tools to generate boilerplate that work much much better than LLMs. And they're deterministic.
This reads like a skill issue on your end, in part at least in the prompting side.
It does take time to reach a point where you can prompt an LLM sufficiently well to get a correct answer in one shot, developing an intuitive understanding of what absolutely needs to be written out and what can be inferred by the model.
1. Personal experience. Lazy prompting vs careful prompting.
2. They're coincidentally good at things I'm good at, and shit at things I don't understand.
3. Following from 2, when used by somebody who does understand a problem space that I do not, they easily succeed. That dog vibe-coding games succeeded in getting Claude to write games because his master knew a thing or two about it. I, on the other hand, have no game dev experience, and almost no hobby experience with games specifically, so I struggle to get any game code that even remotely works.
It's less "git gud; prompt better", and more, "be able to explain (well) what you want as the output". If someone messages the IT guy and says "hey my computer is broken" - what sort of helpful information can the IT guy offer beyond "turn it on and off again"?
In the past 2 months I've been using all the SOTA models to help me design a new DSL for narrative scripting (such as game storytelling) and a C# runtime implementation of the script player engine.
The language spec and design are about 95% authored by me up to this point; I have the LLMs work on the second layer (the implementation specs/guidelines) and the third layer (the concrete C# implementation).
Since it's a new language, I consider it a somewhat novel task for LLMs (at least, not boilerplate stuff like an HTTP API or a CRUD service). I'd say these LLMs have been very helpful. You can tell they sometimes get confused and have trouble complying with the unfamiliar language spec and design, but they are mostly smart enough to carry out the objectives, and they get better and better once the project is on track and has plenty of files/resources to read and reference.
And I'd also say "prompt better" is an important factor, just much more nuanced/complicated than it sounds. I started with zero experience with LLM agents and have learned a lot about how to tame them, and developed a protocol for collaborating with agents. This all came from countless trials and errors, but in the end it boils down to "prompt better".
The uncomfortable fact remains that one cannot really expect to get much better results from an LLM without putting in some work themselves. They aren't magical oracles.
People spin all kinds of things if they believe (accurately or not) that their livelihood is on the line. The knee-jerk "AI universally bad" movement seems just as absurd to me as the "AGI is already here" one.
> Spore is well acclaimed. Minecraft is literally the most sold game ever.
Counterpoint: Oblivion, one of the first high-profile games to use procedural terrain/landscape generation, seemed very soulless to me at the time.
As I see it, it's all a matter of how well it's executed. In the best case, a skilled artist uses automation to fill in mechanical rote work (in the same way that e.g. renaissance artists didn't make every single brushstroke of their masterpieces themselves).
In the worst (or maybe even average? time will tell) case, there are only minimal human-made artistic decisions flowing into a work and the output is a mediocre average of everything that's already been done before, which is then rightfully perceived as slop.
Is that even a counterpoint? Nobody in their right mind would ever claim that procedural generation is impossible to fuck up. The reason Minecraft et al. are good examples is that they prove procedural generation can work, not that it always works.
I might be misremembering but wasn't the Oblivion proc-gen entirely in the development process, not "live" in the game, which means...
> "In the best case, a skilled artist uses automation to fill in mechanical rote work"
...is what Bethesda did, no?
The problem with procedural generation is it's hard to make it as action-packed and desirable as WoW zones, and even those quickly become fly-over territory.
Do you ever ask why you're writing the same thing over and over again? That's literally the foundational piece of being an engineer: understanding when you're reinventing the wheel while there's a perfectly good wheel nearby.
With LLMs, the parameterisation goes into semantic space. This makes code more reusable.
A model trained on all of GitHub can reuse all that code regardless of whether they are syntactically reusable or not. This is semantic reuse, which is naturally much broader.
First, I am not arguing for reusability. Reusability is one of the most common mistakes you can make as a software engineer because you are over-generalizing what you need before you need it. Code should be written for your specific use case, and only generalized as problems appear. But if you can recognize that your specific use case fits a known problem, then you can find the best way to solve that problem, faster.
Second, when you're using an LLM to make your code more 'reusable' you are taking full responsibility for everything that LLM vomits out. You're no longer assembling a car from well known parts, taking care to tailor it to your use case as needed. You're now building everything in said car, from the tires to the engine and the rearview mirror.
Coding is a constant balance between understanding what you're solving for and what can solve it. Using LLMs takes the worst of both worlds, by offloading both your understanding of the problem and your understanding of the solution.
My point is that the very act of training an LLM on any corpus of code automatically makes all of that code reusable, in a much broader semantic way rather than through syntax, because the LLM uses a compressed representation of all that code to generate the function you ask it for. It is like having an npm that already contains, in compressed form, the code specific to your situation (like you were saying) that you want to write.
I was able to feel wool scarves made in Europe in the Middle Ages (in museum storage, under the guidance of a curator). They are a fundamentally different product from what is produced in woolen mills. A handmade (in the old tradition) woolen scarf can be pulled through a ring, because it is so thin and fine. Not so for a modern mill-made scarf.
Another interesting thing is that we do not know how they made them so fine. The technique was never recorded or documented in detail, as it was passed down from parent to child. So the knowledge is actually lost forever.
Weavers in Kashmir work at a similar level of quality, but their wool is different and their needs and techniques are different, so while we still have craftsmen who can produce wool by hand, most of the traditions and techniques are lost.
Is it a tragedy? I go back and forth. Obviously the heritage fabrics are phenomenal and luxurious. Part of me wishes that the tradition could have been maintained through a luxury sector.
Automation is never a 1:1 improvement. It's not just about the speed or the process; the process itself changes the product. I don't know where we will net out on software, and I do think the complaints are justified - but the Luddites were also justified. They were *Right*. Their whole argument was that the mills could not produce fabric of the same quality. But being right is not enough.
I'm already seeing vibe-coded internal tools at an org I consult at saving employees hundreds of hours a month, because a non-technical person was empowered to build their own solution. It was a mess, and I stepped in to help optimize it, but I only optimized it partially, making it faster. I let it be the spaghetti mess it was for the most part - why? because it was making an impact already. The product was succeeding. And it was a fundamentally different product than what internal tools were 10 years ago.
And they also like to emphasise how long it takes for someone to become a master at a given trade.
Though, given Amodei and Altman’s behavior (along with the rest of the billionaire class) that shouldn’t be a surprise to anyone.
Survivor bias plays a role in glorifying the past.
With programming, we documented a lot of it, so it's unlikely to go the way of fine weaving. People will always be able to learn to think and be great programmers.
Maybe if the wool weavers had had the internet, they could have blogged, made YouTube videos, and cataloged their profession so it could last millennia.
Long term though, I’ve always wondered if the Amish turn out to be the only survivors.
https://www.youtube.com/watch?v=ZSRHeXYDLko
Now we're way past the point of no return.
I get there are many things happening today that are frustrating or moving some element of human life in negative or ambiguous directions, but we really have to keep perspective on these things.
Nearly every problem today is a problem with a solution.
The feelings of panic we have that things are going wrong are useful signals to help guide and motivate us to implement those solutions, but we really must avoid letting the doomerism dominate. Just because we hear constant negative news doesn't mean things are lost. Doesn't even mean things are bad.
It just means we have been hearing a lot of negative news.
This is what it looks like for progress to not be monotonically increasing.
I have no doubt in the future there will be a class of vibe software and it will be known as distinctly lower quality than human understood software. I do think the example you describe is a good use of vibing. I also think tech orgs mandating 100% LLM code generation are short sighted and stupid.
A lot of this push for “slop” is downstream of our K shaped economy. Give the people more money and quality becomes a lot more important. Give them less, and you’re selling to their boss who is often insulated from the effects of low quality.
Sounds like a tautology. If you deliberately hoard knowledge of course it’s going to be hard to obtain.
The difference between automating the creation of software and automating the creation of physical products is that software is everywhere. It is relied on for most tools and processes that keep our civilization alive. Cutting corners on that front, and deciding to entrust our collective future to tech bros and VC firms fiending for their next payout, seems like an incredibly dumb and risky proposition.
Most of what we do in programming is some small novel idea at a high level plus repeatable boilerplate at a low level. A fair question is: why hasn't the boilerplate been automated away as libraries or other abstractions? LLMs are especially good at fuzzily abstracting repeatable code, and it's simply not possible to get the same result from other, manual methods.
I empathise, because it is distressing to realise that most of the value we provide is not in those lines of code but in that small innovation at the higher layer. No developer wants to hear that; they would like to think every line is a creation from their soul.
If development velocity were truly an important factor in these businesses, we'd have migrated away from that gang-of-four-ass Java 8 codebase, given these poor souls offices, or at least cubicles to reduce the noise, and we wouldn't make them spend 3 hours a day in ceremonial meetings.
The reason none of this happens is that even if these developers crank out code 10x faster, by the time it's made it past all the inertia and inefficiencies of the organization, the change is nearly imperceptible. Though the bill for the new office and the 2 year refactoring effort are much more tangible.
What that means is anyone’s guess, but it seems like it should result in a Cambrian explosion of disruptive new companies, limited in scope by the idea space.
The thing about small teams is, with a few exceptions, the biggest challenges are typically funnels for users, product-market fit, overcoming and exploiting network effects, etc. So even in small orgs, if you make 30 percent of the problem 4x faster/smaller, you still have the other 70 percent, which now makes up roughly 90% of the remaining problem.
This applies even more acutely in larger organizations… so for them, 99 percent of the problem remains.
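That arithmetic is essentially Amdahl's law; a quick sketch (the function is mine, the 30%/4x numbers are from the example above):

```python
# Amdahl-style check: if one slice of the work gets a speedup, what share
# of the remaining work does the untouched slice represent?
def remaining_share(untouched: float, accelerated: float, speedup: float) -> float:
    """Fraction of the post-speedup workload that the untouched part represents."""
    shrunk = accelerated / speedup           # accelerated slice after speedup
    return untouched / (untouched + shrunk)  # untouched share of what remains

# 30% of the problem made 4x faster: the other 70% dominates what's left.
print(round(remaining_share(0.70, 0.30, 4.0), 3))  # → 0.903
```

So even a 4x speedup on a 30% slice leaves the organization facing roughly nine-tenths of the problem it had before.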
Intangibles in an organization like reluctance, education, and organizational inertia fill the gap left by software acceleration, and in the end you only see tiny gains, if any.
What really happened, on an organizational scale, is that software development costs went down. We wouldn’t expect a wage collapse in coding to foment an explosive revolution in company profitability or dynamism. We shouldn’t expect those things of LLM assistance.
We should look at it as a reduction in cost, with potentially dangerous side effects if not managed carefully, and with an especially big reduction in R&D costs.
Abstractions are the source of bloat. Without abstractions you can always reduce bloat; with them, you can reduce the bloat in your glue code, but you can't reduce the glue itself.
It takes discipline to NOT create arbitrary function signatures and short-lived intermediate data structures or type definitions. This is the beginning of boilerplate.
So many advances in removing boilerplate come from realizing that your 5 function calls and 10 intermediate data structures or type definitions essentially compute something you can do with 0 function calls, 0 custom datatypes, and fewer lines of code.
The abstraction hides how simple the thing you want is.
Problem is that all open source code looks like the bloat described above, so LLMs have no idea how to actually write code that is without boilerplate. The only place where I've seen it work is in shaders, which are usually written to avoid common pitfalls of abstraction.
LLMs are incapable of writing a big program in 1 function and 1 file, that does what you want. Splitting the program into functions or even multiple files, is a step you do after a lot of time, yet all open source looks nothing like that.
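A minimal sketch of the collapse described above, with made-up names and data: several small helpers and an intermediate type that together compute something a single expression could do directly.

```python
# Hypothetical "before": layers of small abstractions for a simple task.
class UserRecord:
    def __init__(self, name, age):
        self.name = name
        self.age = age

def load_users(rows):
    return [UserRecord(name, age) for name, age in rows]

def adults(users):
    return [u for u in users if u.age >= 18]

def names(users):
    return [u.name for u in users]

def adult_names_layered(rows):
    return names(adults(load_users(rows)))

# "After": the abstraction hid how simple the thing is --
# zero custom types, zero helper functions, one comprehension.
def adult_names(rows):
    return [name for name, age in rows if age >= 18]

rows = [("Ada", 36), ("Tim", 12), ("Grace", 45)]
assert adult_names_layered(rows) == adult_names(rows) == ["Ada", "Grace"]
```

The layered version is what most open-source code (and therefore most LLM output) looks like; the one-liner is what the task actually was.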
from foundations import ConcreteStrip
ConcreteStrip(x,y,z)
Doesn't work for houses
It's weird to look at something that recent and think how dated it reads today. I also wrote about the Turing test as some major milestone of AI development, when in fact the general response to programs passing the Turing test was to shrug and minimize it.
To me, a function is a single sentence within a book. It may approach the larger picture, but that sentence can be reviewed, changed, switched around, killed by an editor.
Some programmers believe they're fantastic sentence writers. They brag about how good their sentences are; their entire worldview has been built on being good sentence creators. Especially within enterprises, you may spend your entire life writing sentences without ever really understanding the whole book.
If your worldview has been built on sentence creation, and suddenly there's a sentence creator AI, you're going to be deathly afraid of it replacing you as a sentence writer.
This is not the same when it comes to books and music.
Because our ways of programming computers are still woefully inadequate and rudimentary. This is why we have tons of techniques for code reuse, yet we keep reinventing the wheel, because those techniques shatter on contact with reality. OOP was supposed to save us all in the 1990s; we've seen how that went.
In other fields we've had a lot of time to figure out basic patterns and components that can be endlessly reused. Imagine if car manufacturers had to reinvent the screw, the piston, the gear and lubricants for every new car model.
One example that has bugged me for a decade: we've been in the Internet era for decades at this point, yet we spend a lot of time reinventing communication. An average programmer can't go two days without having to deal with JSON serialization, or connectivity, or sending notifications about the state of a process. What about adding authentication and authorization? There is a whole cottage industry to simplify something that should be, by now, almost as basic as multiplying two integers. Isn't that utter madness? It is a miracle we can build complex systems at all when we have to focus on these minutiae that pop up in every single application.
Now we have intelligences that can create code, using the same inadequate language of grunts and groans we use ourselves in our day to day.
Sometimes it has. The amount of code that a one-liner like `select count(distinct id) from customers` replaces is huge.
Because a lot of programmers don't know how to copy-paste or make packages for themselves? We have boilerplate at my work, which comprises some ready-made packages that we can drop in and tweak as needed. No LLMs required.
Probably the original sin here is that we started calling them programming languages instead of just 'computer code'.
Also - most of your work is far more than mere novelty! There are intangibles like your intellectual labor and time.
There is also the cost reason, somebody trying to sell an abstraction will try to monetize it and this means not everyone will want/be able to use it (or it will take forever/be unfinished if it's open/free).
There's also the platform lockin/competition aspect...
However, LLMs destroy this economic incentive utterly. It now seems most productive to code in fairly low level TypeScript and let the machines spew tons of garbage code for you.
I also don't know what work you do, but I would not characterize the codebases I work in as "small bits of novelty" on boilerplate. Software engineering is always a holistic systems undertaking, where every subcomponent and the interactions between them have to be considered.
FORTRAN ("formula translation") was one of the first programming languages, and it was supposed to make coding obsolete. Scientists would now be able to just type in formulas and the computer would just calculate the result, imagine that!
Yes, it is. Literally every programming innovation claims to "make coding obsolete". I've seen a half dozen in my own lifetime.
Care to share some examples that prove your point?
I still think LLMs as fancy autocomplete is the truth and not even a dig. Autocomplete is great. It works best when there’s one clear output desired (even if you don’t know exactly what it is yet). Nobody is surprised when you type “cal” and California comes up in an address form, why should we be surprised when you describe a program and the code is returned?
Knowledge has the same problem as cosmology: the part we can observe doesn't seem to account for the vast majority of what we know is out there. Symbolic knowledge encompasses unfathomable multitudes and will eventually be solved by AI, but the "dark matter" of knowledge that can't easily be expressed in language or math is still out in the wild.
Cue the smug Lisp weenies.
I don't care if LLMs are good at coding or bad at it (in my experience the answer is "it depends"). I don't care how good they are at anything else. What matters in the end is that this tech is not here to empower the common person (although it could). It is not here to make our lives better, more worthwhile, more satisfying (it could do that as well). It is here to reduce our agency, to make it easier to fire us, to put us in an even more precarious position, and to funnel even more wealth from those that have little to those that have a lot.
Yet what I see are pigs discussing the usefulness of bacon-making machine just because it also happens to be able to produce tasty soybean feed. They forget that it is not soybean feed that their owner bought this machine for, and that their owner expects a return from such investment.
Could be. It could also end up freeing us from every commercial dependency we have. Write your own OS, your own mail app, design your own machinery to farm with.
It’s here, so I don’t know where you’re going with “I’m unhappy this is happening and someone should do something”
Yeah, companies that develop and push this tech definitely have this in mind.
> I don’t know where you’re going with “I’m unhappy this is happening and someone should do something
I am not surprised because I didn't write anything like it.
Another distraction is the claim that AGI is a danger to humanity - the only danger is people...
The period is now. Just add "be a great teacher but don't attempt to write code" in the prompt.
(yes, it's a teacher who gets things wrong from time to time. You still need to refer to the source and ground truth just like when you're taught by a human teacher.)
I'm not sure if you ever had a teacher or instructor that you didn't trust, because they were a compulsive liar, had an addiction, or some other issue. I didn't (at least not that I can remember), but I know I would be VERY on guard about it. I imagine I would consequently be quite stressed learning with them, even if they were brilliant, kind, etc.
It would feel a bit like walking on thin ice to get to a beautiful island. Sure, it's not infeasible and if you somehow make it, it might be worth the risk, but honestly wouldn't you prefer a slower boat?
I think you can build a very easy workflow that reinforces rather than replaces learning. I've used a citation flow to link and put into practice a ton of more advanced programming techniques that I found incredibly difficult to locate and research before AI.
I'd say the comparison is faulty; it's more akin to swimming to an island (no AI) vs using a boat. You control the speed and direction of the boat, which also means you have the responsibility of directing it to the correct location.
PS: sorry if the analogy is a bit wonky but it's quite dear to me as I do ice skating on frozen lakes and it's basically a life or death information "game" that I can relate to. It might not be a great analogy for others.
I guess in my view - the main alternative you'd have beforehand is just to drown.
For me, AI sits in a space where if you know how to use it, it can tell you all the thin spots of the ice accurately. You can then verify those spots, but there's a level of personal responsibility of verification.
I'd agree there's currently a ton of people that are using these tools to essentially just find the specific route - but i'd argue those people probably shouldn't be skating in the first place, and would've fallen one way or the other.
Right, but AFAICT most people just venture over the ice and don't bother to check. In fact a lot of people venture there, do check once or twice, then check less and less frequently. The fact that you do it is great but others seem a lot less careful, until cracks start to show and then it might be too late.
I'd only argue that people were doing this before AI, slop development was just copy pasting from the first stack overflow issue that matched the question rather than thinking
So i'd argue there's a part of it that is just personal responsibility with how these tools are used
Before, most people who didn't know the ice didn't go out on it; today a lot of people who shouldn't be there go far out on the ice.
I think that's actually deeply different. If a human keeps on apologizing because they are being caught in a lie, or just a mistake, you distrust them a LOT more. It's not normal to shrug off a problem then REPEAT it.
I imagine the cost of a mistake is exponential, not linear. So when somebody says "oops, you got me there!" I don't mistrust them just marginally more, I distrust them a LOT more and it will take a ton of effort, if even feasible, to get back to the initial level of trust.
I do not think it's at all equivalent to what "real humans" do. Yes, we make mistakes, but the humans you trust and want to partner with are precisely the ones who are accountable when they make mistakes.
You seem to have a different understanding of what it means in the context of neural networks.
Real humans will not make up a non-existent API and implement a solution with it (unless they do it on purpose).
This has upvotes?
Anyway, as someone who trains people in LLMs/AI, I unapologetically will say "DON'T LISTEN TO IT, IT LIES!" and send commands like "Try again, try harder".
In order to lie, one needs to understand what truth and objective reality are.
Even with people, when a flat-earther tells you the earth is flat, they're not lying, they're just wrong.
All LLM output is speculation. All speculation, by definition, has some probability of being incorrect.
---
We can go even deeper in a philosophical sense. If I made the audacious claim that 2 + 2 = 4, I may think it's true, but I'm still speculating that the objective reality I experience is the same one others also experience, and that my senses and mental faculties, and therefore the qualia making up my reality, are indeed intact, correct, and functional. So is there a degree of speculation when I make that claim?
Regardless, I am able to agree upon a shared reality with the rest of the world, and I also share a common understanding of truth and untruth. If I lied, it can only be because of an intention to mislead others. For example, if I claimed to be the president of the United States, of course that would be incorrect (thankfully!), but since we all agree that no one reading this post would actually be misled into thinking I am the POTUS, it isn't a lie. Perhaps sarcasm, a failed attempt at humor, or just trolling. It is untruth, but it isn't a lie; no one was misled. You need intent (which an LLM isn't capable of), and that intent needs to be, at least in part, an intent to mislead.
At least some of them know they're wrong and are thus lying.
Maybe "Artisanal Coding" will be a thing in the future?
Programming via LLMs is just the logical conclusion to this niche of industrialized software development which favours quantity over quality. It's basically replacing human bots which translate specs written by architecture astronauts into code without having to think on their own.
And good riddance to that type of 'spec-in-code-out' type of programming, it should never have existed in the first place. Let the architecture astronauts go wild with LLMs implementing their ideas without having to bother human programmers who actually value their craft ;)
People still pay for hand-knit fabrics (there's one place in Italy that makes silk by hand and it costs 5 figures per foot), but the vast majority is machine made.
Same thing will happen to code, unless the bubble bursts really badly. Most bulk API Glue CRUD stuff and basic web UI work will be mostly automated and churned off automated agentic production lines.
But there will still be a market for that special human touch in code, most likely when you need safety/security or efficiency/speed.
Like you can still make Karelian pies[0] anywhere, but unless you follow the exact recipe, you can't sell them as "Karelian pies". It's good for the heritage and good for the customers.
You can also make any cheeses and wines and whatever you like, it's just how you name them and market them that's regulated.
[0] https://en.wikipedia.org/wiki/Karelian_pasty
But the comment you reply to explicitly points out the process is in fact relevant as it is itself a cultural artifact. You're not replying to their main point.
How are the customers hurt if their pie has not been baked by a babushka in Petrozavodsk using the old original recipe, but by an anonymous migrant worker in a dark kitchen using an optimized recipe if the end result is objectively the same? The packaging doesn't have to say who it was made by.
I also don't see the problem with the heritage. The comment I replied to already said anyone could call their pies Karelian, so there was no restriction that benefitted the residents of a specific region. I can see a PDO-like carveout that goes "we want to preserve the traditional pie-making of Karelia, so we want this activity to remain economically viable. Therefore, only pies baked in Karelia can be sold as Karelian pies." But I don't see how Sysco baking the same pies and distributing them nationwide helps maintain the heritage.
We are only craftsmen to ourselves and each other. To anyone else we are factory workers producing widgets to sell. Once we accept this then there is little surprise that the factory owners want us using a tool that makes production faster, cheaper. I imagine that watchmakers were similarly dismayed when the automatic lathe was invented and they saw their craft being automated into mediocrity. Like watchmakers we can still produce crafted machines of elegance for the customers who want them. But most customers are just going to want a quartz.
It's certainly intellectually stimulating to create it, but I've learned to take joy in discarding vast swathes of it when it's no longer required.
I will just copy-paste my comment from another thread, but it's still very relevant:
Coding isn’t creative, it isn’t sexy, and almost nobody outside this bubble cares
Most of the world doesn’t care about “good code.” They care about “does it work, is it fast enough, is it cheap enough, and can we ship it before the competitor does?”
Beautiful architecture, perfect tests, elegant abstractions — those things feel deeply rewarding to the person who wrote them, but they’re invisible to users, to executives, and, let’s be honest, to the dating market.
Being able to refactor a monolith into pristine microservices will not make you more attractive on a date. What might is the salary that comes with the title “Senior Engineer at FAANG.” In that sense, many women (not all, but enough) relate to programmers the same way middle managers and VCs do: they’re perfectly happy to extract the economic value you produce while remaining indifferent to the craft itself. The code isn’t the turn-on; the direct deposit is.
That’s brutal to hear if you’ve spent years telling yourself that your intellectual passion is inherently admirable or sexy. It’s not. Outside our tribe it’s just a means to an end — same as accounting, law, or plumbing, just with worse dress code and better catering.
So when AI starts eating the parts of the job we insisted were “creative” and “irreplaceable,” the threat feels existential because the last remaining moat — the romantic story we told ourselves about why this profession is special — collapses. Turns out the scarcity was mostly the paycheck, not the poetry.
I’m not saying the work is meaningless or that system design and taste don’t matter. I’m saying we should stop pretending the act of writing software is inherently sexier or more artistically noble than any other high-paying skilled trade. It never was.
Your perspective is a path with only one logical end. That nothing you do or think or believe matters unless someone you're attracted to finds it attractive.
That is not how I or most others live. We take pride in and derive satisfaction from our accomplishments without the need for external validation.
Yeah, only I care whether the solution I found to a problem today was elegant, or whether my kitchen was pristine and well organized after I prepped for next week's lunches, but so what? I care and it injects more than enough meaning into my life to be worth it.
When I charge a customer for a solution they don't care about how elegant my code is. They just care if it works for solving their problem...
Isn't the problem right now that vibe-coded software does not appear to meet these requirements?
This is an absolute chef-kiss double-entendre.
If you don't have the copyright, then you can't license or litigate it under the common rules of software. If someone 'steals' it you can at best go after them with some trade secret case, and I suspect this would be limited if you had already shared the code with them, e.g. because they helped you synthesise it.
While LLMs are surely used to generate a lot of slop-code and overwhelm (open source) code bases, this surely isn't the only thing they can do. I dislike discussing the potential of a technology exclusively by looking at its negative impact.
LLMs in proper hands don't create code which is "stolen", they also shouldn't create unnecessary code and definitely don't remove any of the ownership of the programmer, at least not any more than using a mighty IDE does.
The problem seems to be in the usage of LLMs. These effects definitely do happen when just releasing an agent on a codebase without any oversight. But they can also largely be mitigated by using frameworks such as Openspec or Spec-Kit, properly designing a spec, plan, granular tasks and manually reviewing all code yourself. The LLM should not be responsible for any creative idea, it should at most verify the practicality against the codebase. When doing that, the entire creative control is in the hands of the programmer and so is the mechanical execution. The LLM is reduced to a very powerful autocomplete with a strict harness around it. Obviously this also doesn't lead to 10x or even 100x improvements in speed like some AI merchants promise, but in my personal experience the speedup is still significant enough to make LLMs a very, very useful technology.
Love it. Calling it "Copilot" in itself is a lie. Marketing speak to sell you an idea that doesn't exist. The idea is that you are still in control.
Someone might call LLMs that today, except they've stepped a bit up from steroids.
Guilty until proven innocent will satisfy the author's LLM-specific point of contention, but it is hardly a good principle.
He is proposing not to make a judgement at all. If an AI company CLAIMS something, they have to prove it, like they do in science. Any claim is treated as such: a claim. The trick is to not even claim anything, and let the users come, all on their own, to the conclusion that it's magic. And it's true that LLMs by design cannot cite sources. Thus they cannot, by design, tell you whether they made something up with disregard for whether it makes sense or works, whether they just copy-pasted something that either works or is crap, or whether they somehow created something new that is fantastic.
All we ever see are the success stories. The success after the n-th try and tweaking of the prompt and the process of handling your agents the right way. The hidden cost is out there, barely hidden.
This ambiguity is benefitting the AI companies and they are exploiting it to the maximum. Going even as far as illegally obtaining pirated intellectual property from an entity that is banned in many countries on one end of their utilization pipeline and selling it as the biggest thing ever at the other end. And yes, all the doomsday stories of AI taking over the world are part of the marketing hype.
>AI output should be treated like a forgery
Who's passing this judgement? The author? Civil society?
I’ve seen reluctance to refactor even 10+-year-old garbage long before LLMs were first made available to the broader public.
I've seen a lot of 'fixes' for 10 year old 'garbage' that turned out to be regressions for important use cases that the author of the 'fix' wasn't aware of.
On a philosophical level I do not get the discussions about paintings. I love a painting for what it is, not for being the first or the only one. An artist who paints something that I can't distinguish from a Van Gogh is a very skillful artist, and the painting is very beautiful. Me labeling it "authentic" or not should not affect its artistic value.
For a piece of code you might care about many things: correctness, maintainability, efficiency, etc. I don't care if someone wrote bad (or good) code by hand or uses LLM, it is still bad (or good code). Someone has to take the decision if the code fits the requirements, LLM, or software developer, and this will not go away.
> but also a specific geographic origin. There's a good reason for this.
Yes, but the "good reason" is more probably people's desire to have monopolies and resist change. Same as with the paintings: if the cheese is 99% the same, I don't care whether it was made in a particular region or not. Of course the region is happy, because it means more revenue for them, but I'm not sure it is good.
> To stop the machines from lying, they have to cite their sources properly.
I would be curious how this could be applied to a human. Should we also cite all the courses and articles that we have read on a topic when we write code?
The problem with automated imitation generators is that they can produce thousands of paintings that imitate Van Gogh but do not have the same soul.
It is the same reason why these things cannot create genuinely funny jokes. They cannot assess the funniness of the jokes themselves. They cannot feel, and so cannot do the filtering based on emotion.
It is easy to recognize the emptiness of a joke, but not so easy for a painting, or some other form of art.
This is why it will never work for art. But the sad thing is that that will not stop them from being used to create art. Because it just needs to sell.
I would say that for art, at least for most of the movies, music etc, this was already the case. So nothing much to lose.
Even if you aren't in the group, there is clearly a group of people who appreciate seeing the original, the thing that modified our collective artistic trajectory.
Forgeries and master studies have a long history in art. Every classically trained artist worth their salt has a handful of forgeries under their belt. Remaking work that you enjoy helps you appreciate it further, understand the choices the artist made, and get a better feel for how they wielded the medium. Though these forgeries are for learning and not intended to be pieces in their own right.
Generally you get a much better ‘view’ of the artwork in a museum. It’s higher ‘resolution’ you can view it from multiple angles etc.
There are some exceptions. You’re probably going to get a better look at the Mona Lisa online than if you try and see it at the Louvre.
I go to a museum to see a curated collection with explanations, in a place that prevents distractions (I can't open a new tab), and to go with people who might be interested in talking about what they see and feel. It's a social and personal experience on top of information gathering.
> there is clearly a group of people who appreciate seeing the original,
There are many people interested in many things, do you want to say that "because some people think it is important, it must be important"? There were many people with really weird and despicable ideas along history and while I am neutral to this one, they definitely don't convince me just by their numbers.
> simply looking at a jpg.
Technically a jpg would not work because it is lossy compression. But a png at the correct resolution might do the trick for some things (paintings that you view from afar), though not for others. Museums have many objects that would be hard to put in an image (statues, clothes, bones, tables, etc.). You definitely can't put https://en.wikipedia.org/wiki/Comedian_(artwork) in a jpg, but the discussion surrounding it touches on topics discussed here.
The value of a piece is definitely not completely tied to its physical attributes, but the story around it. The story is what creates its scarcity and generates the value.
It is similar for collectible items. If I had in my possession the original costume that Michael Jackson wore in Thriller, I am sure I could sell it for thousands of dollars. I can also buy a copy for less than a hundred.
Same with luxury brands. Their price is not necessarily linked to their quality, but to the status they bring and the story they tell (i.e. wearing this transforms me into somebody important).
It can seem quite silly, but I think we are all doing it to some extent. While you said that a good forgery shouldn't affect one's opinion of the object (and I agree with you), what about AI-generated content? If I made a novel painting in the style of Van Gogh, you might find it beautiful. What if I told you I merely prompted it rather than painted it? What if I just printed it? There are levels of involvement that we are all willing to accept differently.
There are a lot such artists who can do that after having seen Van Gogh's paintings before. Only Van Gogh (as far as we know) did paint those without having seen anything like it before - in other words, he had a new idea.
Should we also say "if you can implement Dijkstra's algorithm" it's irrelevant because "you did not have the idea"?
It's great to credit people that have an idea first. I fail to see how using an idea is that "bad" or "not worthy", ideas should be spread and used, not locked by the first one that had them (except some small time period maybe).
Yea this is the kind of BS and counter-productiveness that irrational radicals try to push the crowd towards.
The idea that one owns your observations of their work and can collect rent on it is absurd.
btw you can make git commits with the AI as author and you as committer, which makes git blame easier
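Concretely, git has kept author and committer as separate roles since day one; a sketch (the AI identity string here is just a placeholder):

```shell
# Set up a throwaway repo with "you" as the committer identity.
git init -q demo && cd demo
git config user.name "Human"
git config user.email "human@example.com"

echo "generated code" > helper.py
git add helper.py

# Record the AI as the *author* while you remain the *committer*.
git commit -q --author="Claude <noreply@example.com>" -m "Add generated helper"

# git log (and git blame) keep the two roles distinct:
git log -1 --format='author: %an, committer: %cn'
# prints: author: Claude, committer: Human
```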
A Private (system) Investigator. :)
Claude makes me mad: even when I ask for small code snippets to be improved, it increasingly starts to comment on "what I could improve" in the code, instead of generating the embarrassingly easy code with the improvement itself.
If I point it to that by something like "include that yourself", it does a decent job.
That's so _L_azy.
I have accepted that reading 100% of the generated code is not possible.
I am attempting to find methods to allow clean code to be generated nonetheless.
I am using extremely strict DDD architecture. Yes it is totally overkill for a one man project.
Now i only have to be intimate with 2 parts of the code:
* the public facade of the modules, which also happens to be the place where authorization is checked.
* the orchestrators, where multiple modules are tied together.
If the innards of a module are a little sloppy (code duplication and the like), it is not really an issue, as they do not have an effect at a distance on the rest of the code.
I have to be on the lookout though. It happens that the agent tries to break the boundaries between the modules, cheating its way with stuff like direct SQL queries.
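A minimal sketch of that layout, with illustrative names (nothing here is from a real codebase): each module exposes one public facade where authorization is checked, an orchestrator composes modules only through facades, and module internals stay private.

```python
class Forbidden(Exception):
    pass

def _fetch_invoice(invoice_id):
    # Module-internal helper: may be sloppy, but it has no
    # effect at a distance on the rest of the code.
    return {"id": invoice_id, "total": 42}

class BillingFacade:
    """The only public entry point into the billing module."""
    def get_invoice(self, user, invoice_id):
        # Authorization is checked here, at the facade, and nowhere else.
        if "billing:read" not in user["roles"]:
            raise Forbidden(invoice_id)
        return _fetch_invoice(invoice_id)

def invoice_report(user, invoice_id, billing=BillingFacade()):
    """Orchestrator: ties modules together via their facades only --
    no direct SQL, no reaching into _fetch_invoice."""
    invoice = billing.get_invoice(user, invoice_id)
    return f"Invoice {invoice['id']}: {invoice['total']}"

print(invoice_report({"roles": ["billing:read"]}, 7))  # Invoice 7: 42
```

The "cheating" failure mode mentioned above is an agent calling `_fetch_invoice` (or raw SQL) directly from the orchestrator, which silently bypasses the authorization check in the facade.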
A short design note and tribute to Richard Stallman (RMS) and St. IGNUcius for the term Pretend Intelligence (PI) and the ethic behind it: don’t overclaim, don’t over-trust, and don’t let marketing launder accountability.
https://github.com/SimHacker/moollm/blob/main/designs/PRETEN...
1. What PI Is
Richard Stallman proposes the term Pretend Intelligence (PI) for what the industry calls “AI”: systems that pretend to be intelligent and are marketed as worthy of trust. He uses it to push back on hype that asks people to trust these systems with their lives and control.
From his January 2026 talk at Georgia Tech (YouTube, event, LibreTech Collective):
https://www.youtube.com/watch?v=YDxPJs1EPS4
> "So I've come up with the term Pretend Intelligence. We could call it PI. And if we start saying this more often, we might help overcome this marketing hype campaign that wants people to trust those systems, and trust their lives and all their activities to the control of those systems and the big companies that develop and control them." — Richard Stallman, Georgia Tech, 2026-01-23. Source: YouTube (full talk) — "Dr. Richard Stallman @ Georgia Tech - 01-23-2026," Alex Jenkins, CC BY-ND 4.0; transcript in video description.
So PI is both a label (call it PI, not AI) and a stance: resist the campaign to make people trust and hand over control to systems and vendors that don’t deserve that trust. In MOOLLM we use the same framing: we find models useful when we don’t overclaim — advisory guidance, not a guarantee (see MOOAM.md §5.3).
[...]
Richard Stallman critiques AI, connected cars, smartphones, and DRM (slashdot.org): 42 points by MilnerRoute, 38 days ago
https://news.ycombinator.com/item?id=46757411
https://news.slashdot.org/story/26/01/25/1930244/richard-sta...
Gnu: Words to Avoid: Artificial Intelligence:
https://www.gnu.org/philosophy/words-to-avoid.html#Artificia...
...currently not responding... archive.org link:
https://web.archive.org/web/20260303004610/https://www.gnu.o...
Has this really been people's experience?
I develop and maintain several small FOSS projects, some of which are moderately popular (e.g. 90,000-user Thunderbird extension; a library with 850 stars on GitHub). So, I'm no superstar or in the center of attention but also not a tumbleweed. I've not received a single AI-slop pull request, so far.
Am I an exception to the rule? Or is this something that only happens for very "fashionable" projects?