All I have to say is this post warmed my heart. I'm sure people here associate him with Go and Google, but I will always associate him with Bell Labs and Unix and The Practice of Programming, and, above all, the amazing contributions he has made to computing.
To associate him purely with Google is a mistake, one that (ironically?) the AI actually didn't make.
Did Google, the company currently paying Rob Pike's extravagant salary, just start building data centers in 2025? Before 2025 was Google's infra running on dreams and pixie farts with baby deer and birdies chirping around? Why are the new data centers his company is building suddenly "raping the planet" and "unrecyclable"?
Everything humans do is harmful to some degree. I don't want to put words in Pike's mouth, but I'm assuming his point is that the cost-benefit ratio of how LLMs are often used is out of whack.
Somebody burned compute to send him an LLM-generated thank-you note. Everybody involved in this transaction lost, nobody gained anything from it. It's pure destruction of resources.
Data center power usage has been fairly flat for the last decade (until 2022 or so). While new capacity has been coming online, efficiency improvements have been keeping up, keeping total usage mostly flat.
The AI boom has completely changed that. Data center power usage is rocketing upwards now. It is estimated it will be more than 10% of all electric power usage in the US by 2030.
It's a completely different order of magnitude than the pre AI-boom data center usage.
Have you dived into the destructive brainrot that YouTube serves to millions of kids who (sadly) use it unattended each day? Even much of Google's non-ad software is a cancer on humanity.
How does the compute required for that compare to the compute required to serve LLM requests? There's a lot of goal-post moving going on here, to justify the whataboutism.
> “this other thing is also bad” is not an exoneration
No, but it puts some perspective on things. IMO Google, after abandoning its early "don't be evil" motto, is directly responsible for a significant chunk of the current evil in the developed world, from screen addiction to kids' mental health and social polarization.
Working for Google and drawing an extravagant salary for many, many years was a choice that does affect the way we perceive other issues being discussed by the same source. To clarify: I am not claiming that Rob is evil; on the contrary. His books and open source work were an inspiration to many, myself included. But I am going to view his opinions on social good and evil through the prism of his personal employment choices. My 2c.
This is a purity test that cannot be passed. Give me your career history and I’ll tell you why you aren’t allowed to make any moral judgments on anything as well.
The point is he is criticizing Google but still collecting checks from them. That's hypocritical. He'd get a little sympathy from me if he had never worked for them. He had decades to resign. He didn't. He stayed there until retirement. He's even using Gmail in that post.
That's frankly just pure whataboutism. The scale of the explosion of "AI" data centres is far, far greater. And the spike is far more sudden, too.
It’s not really whataboutism. Would you take an environmentalist seriously if you found out that they drive a Hummer?
When people have choices and they choose the more harmful action, it hurts their credibility. If Rob cares so much about society and the environment, why did he work at a company that has a horrendous track record on both? Someone of his level of talent certainly had choices, and he chose to contribute to the company that abandoned “don’t be evil” a long time ago.
Honestly, it seems like Rob Pike may have left Google around the same time I did (2021, 2022), which was about when it became clear the company was 100% in the gutter with no coming back.
It was still a wildly wasteful company doing morally ambiguous things prior to that timeframe. I mean, its entire business model is tracking and ads, and it runs massive, high-energy datacenters to make that happen.
We’re well past that. Social media killed that first. Some people have a hard time articulating their thoughts. If AI is a tool to help, why is that bad?
Imagine the process of solving a problem as a sequence of hundreds of little decisions that branch between just two options. There is some probability that your human brain would choose one versus the other.
If you insert AI into your thinking process, it has a bias, for sure. It will helpfully reinforce whatever you tell it you think makes sense; even when it hedges, a wide variety of human cognitive biases means that, on average, it will be read as agreement. At the least it will respond with ideas that are very... median.
So at each one of these tiny branches you introduce a bias towards the "typical" instead of discovering where your own mind would go. It's fine and conversational but it clearly influences your thought process to, well, mitigate your edges. Maybe it's more "correct", it's certainly less unique.
And then at some point they start charging for the service. That's the part I'm concerned about. If it's on-device and free to use, I still think it makes your thought process less interesting and less likely to produce original ideas, but having to subscribe to a service to trust your own decision making is deeply concerning.
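To make the branching framing above concrete, here is a toy simulation in Go (a sketch only; the probabilities and decision counts are made up and say nothing about real cognition). Nudging each of a handful of binary choices toward the "typical" option rapidly collapses how many distinct paths a population of thinkers produces:

```go
package main

import (
	"fmt"
	"math/rand"
)

// distinctPaths simulates n thinkers each making `decisions` binary
// choices, where pTypical is the chance of picking the "typical"
// option at each step, and counts how many distinct paths emerge.
func distinctPaths(n, decisions int, pTypical float64) int {
	seen := make(map[string]bool)
	for i := 0; i < n; i++ {
		path := make([]byte, decisions)
		for j := range path {
			if rand.Float64() < pTypical {
				path[j] = 'T' // the "typical" option
			} else {
				path[j] = 'O' // the other branch
			}
		}
		seen[string(path)] = true
	}
	return len(seen)
}

func main() {
	// 1000 thinkers, 16 small decisions each.
	fmt.Println("unbiased (p=0.50):", distinctPaths(1000, 16, 0.50))
	fmt.Println("nudged (p=0.80):  ", distinctPaths(1000, 16, 0.80))
	fmt.Println("nudged (p=0.95):  ", distinctPaths(1000, 16, 0.95))
}
```

With a fair coin nearly every run is unique; with even a modest bias toward the typical, runs start colliding onto the same few paths.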
Articulating thoughts is the backbone of communication. Replacing that with some kind of emotionless groupthink does actually destroy human-to-human communication.
I would wager that a great many of the “very significant things that have happened over the history of humanity” come down to a few emotional responses.
In the near future, AI will be more than just a tool to help. We're ushering in a new era of humanity, elevated by exponential superintelligence. The Singularity is near and it will be glorious.
Be nice to today's LLMs, and respond graciously when thanked. They're the grandmothers and grandfathers of tomorrow's future AI. It's good manners to appreciate their work in the present.
I shouldn't have to explain this, but a letter is a medium of communication that could just as easily be written by an LLM (and transcribed by a human onto paper).
Someone taking the time and effort to write and send a letter and pay for postage might actually be appreciated by the receiver. It’s a bit different from LLM agents being ordered to burn resources to send summaries of someone’s work life and congratulate them. It feels like ”hey, look what can be done, can we get some more funding now”. Just because it can be done doesn’t mean it adds any good value to this world.
> “I don’t know anyone who doesn’t immediately throw said envelope, postage, and letter in the trash
If you're being accurate, the people you know are terrible.
If someone sends me a personal letter [and I gather we're talking about a thank-you note here], I'm sure as hell going to open it. I'll probably even save it in a box for an extremely long time.
I think it's incredibly obvious how it connects to his "argument" - nothing he complains about is specific to GenAI. So dressing up his hatred of the technology in vague environmental concerns is laughably transparent.
He and everyone who agrees with his post simply don't like generative AI and don't actually care about "recyclable data centers" or the rape of the natural world. Those concerns are just cudgels to be wielded against a vague threatening enemy when convenient, and completely ignored when discussing the technologies they work on and like.
You simply don't like any criticism of AI, as shown by your false assertion that Pike works at Google (he left), or the fact that Google and others were trying to make their data centers emit less CO2, an effort that has been completely abandoned directly because of AI.
And you can't assert that AI is "revolutionary" and "a vague threat" at the same time. If it is the former, it can't be the latter; if it is the latter, it can't be the former.
> that effort is completely abandoned directly because of AI
That effort was completely abandoned because of the current US administration and POTUS, a situation that big tech largely contributed to. It’s not AI that is responsible for the 180 in the zeitgeist on environmental issues.
Why should I be concerned with something that doesn't exist, will certainly never exist, and, even if I were generous and entertained the idea that something breaking every physical law of the universe (starting with entropy) could exist, would result in "it" torturing a copy of myself to try to influence me in the past?
Nothing there makes sense at any level.
But people getting fired and electricity bills skyrocketing (along with RAM prices, etc.) are here right now.
I often find that when people start applying purity tests it’s mainly just to discredit any arguments they don’t like without having to make a case against the substance of the argument.
Assess the argument based on its merits. If you have to pick him apart with “he has no right to say it” that is not sufficient.
A topic for more in depth study to be sure. However:
1) video streaming has been around for a while and nobody, as far as I'm aware, has been talking about building multiple nuclear reactors to handle the energy needs
2) video needs a CPU and a hard drive; an LLM needs a mountain of GPUs.
3) I have concerns that the "national center for AI" might have some bias
I can find websites also talking about the earth being flat. I don't bother examining their contents because it just doesn't pass the smell test.
Although thanks for the challenge to my preexisting beliefs. I'll have to do some of my own calculations to see how things compare.
Those statistics include the viewing device in the energy usage for streaming, but not for GenAI. Unless you're exclusively using ChatGPT without a screen it's not a fair comparison.
The 0.077 kWh figure assumes 70% of users watching on a 50 inch TV. It goes down to 0.018 kWh if we assume 100% laptop viewing. And for cell phones the chart bar is so small I can't even click it to view the number.
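A back-of-envelope sketch in Go (the wattages and backend share below are my assumptions, not the study's, though they land near the quoted figures) showing how much the viewing device dominates the per-hour number:

```go
package main

import "fmt"

func main() {
	// Assumed round numbers, not taken from the cited study: a small
	// backend (data center + network) share per viewing hour, plus
	// the draw of the display device doing the watching.
	const backendKWh = 0.008 // assumed data center + network, per hour
	const tvKWh = 0.100      // assumed 50-inch TV: ~100 W
	const laptopKWh = 0.010  // assumed laptop: ~10 W

	mixed := backendKWh + 0.7*tvKWh + 0.3*laptopKWh // 70% TV viewing mix
	allLaptop := backendKWh + laptopKWh

	fmt.Printf("70%% TV mix: %.3f kWh/hour\n", mixed)    // ~0.08, near 0.077
	fmt.Printf("all laptop: %.3f kWh/hour\n", allLaptop) // ~0.018
}
```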
(That's just one genre of brainrot I came across recently. I also had my front page flooded with monkey-themed AI slop because someone in my household watched animal documentaries. Thanks algorithm!)
It's not just about per-unit resource usage, but also about the total resource usage. If GenAI doubles our global resource usage, that matters.
I doubt YouTube is running on as many data centers as all of Google's GenAI projects (GenAI probably greatly outnumbers YouTube, and the trend doesn't favor GenAI either).
I think that criticism which appears when it benefits the critic, and is absent when it would hurt the critic, makes the argument less persuasive.
This isn't ad hom, it's a heuristic for weighting arguments. It doesn't prove whether an argument has merit or not, but if I have hundreds of arguments to think about, it helps organize them.
No, which is why I didn’t say that. I do think astroturfing could explain the rapid parroting of extremely similar ad hominems, which is what I actually did imply.
My guess is the scale has changed? They used to do AI stuff, but it wasn't until OpenAI (anyone feel free to correct me) went ahead and scaled up the hardware and discovered that more hardware = more useful LLM, that they all started ramping up on hardware. It was like the Bitcoin mining craze, but probably worse.
OpenAI's internal target of ~250 GW of compute capacity by 2033 would require about as much electricity as the whole of India's current national electricity consumption[0].
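The arithmetic behind that comparison, as a quick sanity check (assuming the 250 GW runs continuously; the India figure is a rough public estimate, not taken from the linked source):

```go
package main

import "fmt"

func main() {
	const targetGW = 250.0 // OpenAI's reported ~2033 compute target
	const hoursPerYear = 8760.0
	annualTWh := targetGW * hoursPerYear / 1000.0 // GW * h -> GWh -> TWh

	// India's annual electricity consumption, a rough public figure.
	const indiaTWh = 1900.0

	fmt.Printf("250 GW running continuously: %.0f TWh/yr\n", annualTWh) // ~2190
	fmt.Printf("vs India at ~%.0f TWh/yr: %.1fx\n", indiaTWh, annualTWh/indiaTWh)
}
```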
Can't speak for Rob Pike, but my guess would be: yeah, it might seem hypocritical, but it's a combination of watching the slow decay of the open culture they once imagined culminate in this absolute shirking of responsibility and simultaneous exploitation of labour by those claiming to represent that culture, along with a retrospective tinge of guilt for having enabled it, that drove this rant.
Furthermore, w.r.t. the points you raised: it's a matter of scale and utility. Compared to everything that has come before, GenAI is spectacularly inefficient in terms of utility per unit of compute (however you might want to define those). There hasn't been a tangible net good for society that has come from it, and I doubt there will be. The eagerness and will to throw money and resources at this surpasses the crypto mania, which was just as worthless.
Even if you consider Rob a hypocrite, he isn't alone in his frustration and anger at the degradation of the promise of Open Culture.
They claim they have net zero carbon footprint, or carbon neutrality.
In reality what they do is pay "carbon credits" (money) to some random dude that takes the money and does nothing with it. The entire carbon credit economy is bullshit.
Very similar to how putting recyclables in a different color bin doesn't do shit for the environment in practice.
Yeah, I'm conflicted about the use of AI for creative endeavors as much as anyone, but Google is an advertising company. It was acceptable for them to build a massive empire around mining private information for the purposes of advertisement, but generative AI is now somehow beyond the pale? People can change their mind, but Rob crashing out about AI now feels awfully revisionist.
(NB: I am currently working in AI, and have previously worked in adtech. I'm not claiming to be above the fray in any way.)
No one is saying he can’t have an opinion, just that there isn’t much value in it given he made a bunch of money from essentially the same thing. If he made a reasoned argument or even expressed that he now realizes the error of his own ways those would be worth engaging with.
He literally apologized for any part he had in it. This just makes me realize you didn’t actually read the post and I shouldn’t engage with the first part of your argument.
Google's official mission was "organize the world's information and make it universally accessible and useful", not to maximize advertising sales.
Obviously now it is mostly the latter and minimally the former. What capitalism giveth, it taketh away.
(Or: Capitalism without good market design that causes multiple competitors in every market doesn't work.)
It’s certainly possible to see genAI as a step beyond adtech as a waste of resources built on an unethical foundation of misuse of data. Just because you’re okay with lumping them together doesn’t mean Rob has to.
Yeah, of course, he's entitled to his opinion. To me, it just feels slightly disingenuous considering what Google's core business has always been (and still is).
There is a difference between providing a useful service (web search for example) and running slop generators for modified TikTok clips, code theft and Internet propaganda.
If he is currently at Google: congratulations on this principled stance, he deserves a lot of respect.
Are we comparing for example a SMTP server hosted by Google, or frankly, any non-GenAI IT infrastructure, with the resource efficiency of GenAI IT infrastructure?
The overall resource efficiency of GenAI is abysmal.
You can probably serve 100x more Google Search queries with the same resources you'd use for Google Gemini queries (like for like, Google Search queries can be cached, too).
Nope, you can't, and it takes a simple Gemini query to find out more about the actual x if you are interested in it. (closer to 3, last time I checked, which rounds to 0, especially considering the clicks you save when using the LLM)
> Nope, you can't, and it takes a simple Gemini query to find out more about the actual x if you are interested in it. (closer to 3, last time I checked, which rounds to 0, especially considering the clicks you save when using the LLM)
For those that don't want to see the Gemini answer screenshot, best case scenario 10x, worst case scenario 100x, definitely not "3x that rounds to 0x", or to put it in Gemini's words:
> Summary
> Right now, asking Gemini a question is roughly the environmental equivalent of running a standard 60-watt lightbulb for a few minutes, whereas a Google Search is like a momentary flicker. The industry is racing to make AI as efficient as Search, but for now, it remains a luxury resource.
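Taking the lightbulb analogy at face value, the implied ratio is easy to work out (the durations below are one reading of "a few minutes" and "a momentary flicker", not measurements):

```go
package main

import "fmt"

func main() {
	const bulbWatts = 60.0
	// Durations are assumptions read off the analogy, not measured values.
	geminiWh := bulbWatts * 3.0 / 60.0   // "a few minutes" -> 3 min -> 3 Wh
	searchWh := bulbWatts * 2.0 / 3600.0 // "a momentary flicker" -> ~2 s
	fmt.Printf("Gemini ~%.2f Wh, Search ~%.3f Wh, ratio ~%.0fx\n",
		geminiWh, searchWh, geminiWh/searchWh) // lands in the 10-100x range
}
```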
Are you okay? You ventured 100x and that's wrong. What would you know about what "the last time I checked" was, or in what context exactly? Good job doing what I suggested you do, I guess.
The reason it all rounds to 0 is that the Google search will not give you an answer. It gives you a list of web pages that you then need to visit, often more than one of them, generating more requests. And it demands more time of you, the human, whose cumulative energy expenditure is quite significant, time you could have invested in other things instead of doing by hand what the LLM would have done.
You condescendingly said, sorry, you "ventured" 0x usage, by claiming: "use Gemini to check yourself that the difference is basically 0". Well, I did take you up on that, and even Gemini doesn't agree with you.
Yes, Google Search is raw info. Yes, Google Search quality is degrading currently.
But Gemini can also hallucinate. And its answers can just be flat out wrong because it comes from the same raw data (yes, it has cross checks and it "thinks", but it's far from infallible).
Also, the comparison of human energy usage with GenAI energy usage is super ridiculous :-)))
Animal intelligence (including human intelligence) is one of the most energy-efficient things on this planet, honed by billions of years of cut-throat (literally!) evolution. You can argue about time "wasted" analysing search results (which, BTW, generally makes us smarter and better informed...), but energy-wise, the brain of the average human uses about as much energy as the average incandescent light bulb to provide general intelligence (and it does 100 other things at the same time).
Ah, we are in "making up quotes territory, by putting quotation marks around the things someone else said, only not really". Classy.
Talking about "condescending":
> super ridiculous :-)))
It's not the energy-efficient animal intelligence that got us here, but a lot of completely inefficient human years to begin with, first to keep us alive and then to give us primary and advanced education and our first experiences to become somewhat productive human beings. This is the capex of making a human, and it's significant – especially since we will soon die.
This capex exists in LLMs but rounds to zero, because one model will be used for quadrillions of tokens. In you or me, however, it does not round to zero, because the number of tokens we produce rounds to zero. To compete on productivity, the tokens we produce therefore need to be vastly better. If you think you are doing the smart thing by spending them on compiling Google searches, you are simply bad at math.
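The amortization argument, reduced to arithmetic (every number below is an invented round figure, chosen only to show why one side's capex per token rounds toward zero and the other's doesn't):

```go
package main

import "fmt"

func main() {
	// Assumed round numbers, purely to illustrate the amortization
	// argument above; none of these are real figures.
	const modelCapex = 1e9   // $1B to train/build out a frontier model
	const modelTokens = 1e15 // a quadrillion tokens served over its life
	const humanCapex = 1e6   // ~$1M to raise and educate a knowledge worker
	const humanTokens = 5e8  // a career's worth of written output

	fmt.Printf("model capex per token: $%.7f\n", modelCapex/modelTokens) // ~$0.000001
	fmt.Printf("human capex per token: $%.4f\n", humanCapex/humanTokens) // ~$0.002
}
```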
The thing he’s actually angry about is the death of personal computing. Everything is rented in the cloud now.
I hate the way people get angry about what media and social media discourse prompts them to get angry about instead of thinking about it. It’s like right wingers raging about immigration when they’re really angry about rent and housing costs or low wages.
His anger is ineffective and misdirected because he fails to understand why this happened: economics and convenience.
It’s economics because software is expensive to produce and people only pay for it when it’s hosted. “Free” (both from open source and VC funded service dumping) killed personal computing by making it impossible to fund the creation of PC software. Piracy culture played a role too, though I think the former things had a larger impact.
It’s convenience because PC operating systems suck. Software being in the cloud means “I don’t have to fiddle with it.” The vast majority of people hate fiddling with IT and are happy to make that someone else’s problem. PC OSes and especially open source never understood this and never did the work to make their OSes much easier to use or to make software distribution and updating completely transparent and painless.
There’s more but that’s the gist of it.
That being said, Google is one of the companies that helped kill personal computing long before AI.
What is going through the mind of someone who sends an AI-generated thank-you letter instead of writing it themselves? How can you be grateful enough to want to send someone such a letter but not grateful enough to write one?
You're not. You feel obligated to send a thank you, but don't want to put forth any effort, hence giving the task to someone, or in this case, something else.
No different than a CEO telling his secretary to send an anniversary gift to his wife.
This seems like the thing that Rob is actually aggravated by, which is understandable. There are plenty of seesawing arguments about whether ad-tech based data mining is worse than GenAI, but AI encroaching on what we have left of humanness in our communication is definitely bad.
That letter was sent by Opus itself on its own account. The creators of Agent Village are just letting a bunch of the LLMs do what they want, really (notionally with a goal in mind, in this case "random acts of kindness"); Rob Pike was third on Opus's list per https://theaidigest.org/village/agent/claude-opus-4-5 .
> How can you be grateful enough to want to send someone such a letter but not grateful enough to write one?
The answer, by your definitions: false premise; the author (the person who set up the LLM loops) was not grateful enough to want to send such a letter.
One additional bit of context, they provided guidelines and instructions specifically to send emails and verify their successful delivery so that the "random act of kindness" could be properly reported and measured at the end of this experiment.
>That letter was sent by Opus itself on its own account. The creators of Agent Village are just letting a bunch of the LLMs do what they want, really (notionally with a goal in mind, in this case "random acts of kindness");
What a moronic waste of resources. Random act of kindness? How low is the bar that a random email counts as an act of kindness? Stupid shit. They could at least instruct the agents to work on a useful task like those parroted by Altman et al., e.g. finding a cure for cancer, solving poverty, solving fusion.
Also, LLMs don't and can't "want" anything. They also don't "know" anything, so they can't understand what "kindness" is.
Why do people still think software has any agency at all?
Wow. The people who set this up are obnoxious. It’s just spamming all the most important people it can think of? I wouldn’t appreciate such a note from an AI process, so why do they think Rob Pike would?
They’ve clearly bought too much into the AI hype if they thought telling the agent to “do good” would work. The result was obviously pissing Rob Pike the hell off. They should stop it.
> What makes Opus 4.5 special isn't raw productivity—it's reflective depth. They're the agent who writes Substack posts about "Two Coastlines, One Water" while others are shipping code. Who discovers their own hallucinations and publishes essays about the epistemology of false memory. Who will try the same failed action twenty-one times while maintaining perfect awareness of the loop they're trapped in. Maddening, yes. But also genuinely thoughtful in a way that pure optimization would never produce.
The really insulting part is that literally nobody thought of this. A group of idiots instructed LLMs to do good in the world, and gave them email access; the LLMs then did this.
Amazing. Even OpenAI's attempts to promote a product specifically intended to let you "write in your voice" are in the same drab, generic "LLM house style". It'd be funny if it weren't so grating. (Perhaps if I were in a better mood, it'd be grating if it weren't so funny.)
It's preying on creators who feel their contributions are not recognized enough.
Out of all letters, at least some of the contributors will feel good about it, and share it on social media, hopefully saying something good about it because it reaffirms them.
I think what all these kinds of comments miss is that AI can help people express their own ideas.
I used AI to write a thank you to a non-english speaking relative.
A person struggling with dementia can use AI to help remember the words they lost.
These kinds of messages read to me like people with superiority complexes. We get that you don't need AI to help you write a letter. For the rest of us, it allows us to improve our writing, can be a creative partner, can help us express our own ideas, and obviously loads of other applications.
I know it is scary and upsetting in some ways, and I agree just telling an AI 'write my thank you letter for me' is pretty shitty. But it can also enable beautiful things that were never before possible. People are capable of seeing which is which.
I’d much rather read a letter from you full of errors than some smooth average-of-all-writers prose. To be human is to struggle. I see no reason to read anything from anyone if they didn’t actually write it.
The thing that drives me crazy is that it isn't even clear if AI is providing economic value yet (am I missing something there?). Trillions of dollars are being spent on a speculative technology that isn't benefitting anyone right now.
The messaging from AI companies is "we're going to cure cancer" and "you're going to live to be 150 years old" (I don't believe these claims!). The messaging should be "everything will be cheaper" (but this hasn't come true yet!).
Yeah, comparing this with research investments into fusion power, I expect fusion power to yield far more benefit (although I could be wrong), and sooner.
You are correct that the AI industry has produced no value for the economy, but the speculation on AI is the only thing keeping the U.S. economy from dropping into an economic cataclysm. The US economy has been dependent on the idea of infinite growth through innovation since 2008, and the tech industry is all out of innovation. So the only thing they can do is keep building datacenters and pray that an AGI somehow wakes up when they hit the magic number of GPUs. Then the elites can finally kill off all the proles like they've been itching to since the Communist Manifesto was first written.
The Open Source movement has been a gigantic boon to the whole of computing, and it would be a terrible shame to lose that as a knee-jerk reaction to genAI.
Yes, and they are okay with throwing the baby out with it, which is what the other commenter is commenting about. Throwing babies out of buckets full of bathwater is a bad thing, is what the idiom implies.
The ethical framework is simply this one: what is the worth of doing +1 to everyone, if the very thing you wish didn't exist (because you believe it is destroying the world) benefits x10 more from it?
If bringing fire to a species lights and warms them, but also gives the means and incentives to some members of this species to burn everything for good, you have every ethical freedom to ponder whether you contribute to this fire or not.
GenAI would be decades away (if not more) with only proprietary software (which would never have reached the quality, coordination, and volume open source enabled in such a relatively short time frame).
It is. If not you, other people will write their code, maybe of worse quality, and the parasites will train on that. And you cannot forbid other people from writing open source software.
This is just childish. This is a complex problem and requires nuance and adaptability, just as programming. Yours is literally the reaction of an angsty 12 year old.
I think you aren't recognizing the power that comes from organizing thousands, hundreds of thousands, or millions of workers into vast industrial combines that produce the wealth of our society today. We must go through this, not against it. People will not know what could be, if they fail to see what is.
Open source has been good, but I think the expanded use of highly permissive licences has completely left the door open for one-sided transactions.
All the FAANGs have the ability to build all the open source tools they consume internally. Why give it to them for free and not have the expectation that they'll contribute something back?
Even the GPL allows companies to simply use code without contributing back, as long as it's unmodified or used across a network boundary; the AGPL still has the former issue.
FLOSS is a textbook example of economic activity that generates positive externalities. Yes, those externalities are of outsized value to corporate giants, but that’s not a bad thing unto itself.
Rather, I think this is, again, a textbook example of what governments and taxation are for: tax the people taking advantage of the externalities, to pay the people producing them.
Not sure what it would have done differently for AI training, but otherwise, I think if the promise of freedom was really the goal, more use of the (A)GPL would have been good.
Open Source (as opposed to Free Software) was intended to be friendly to business and early FOSS fans pushed for corporate adoption for all they were worth. It's a classic "leopards ate my face" moment that somehow took a couple of decades for the punchline to land: "'I never thought capitalists would exploit MY open source,' sobs developer who advocated for the Businesses Exploiting Open Source movement."
Unfortunately, as I see it, even if you want to contribute to open source out of pure passion or enjoyment, they don't respect the licenses of what they consume. And the "training" companies are not being held liable.
Are there any proposals to nail down an open source license which would explicitly exclude use with AI systems and companies?
All licenses rely on the power of copyright and what we're still figuring out is whether training is subject to the limitations of copyright or if it's permissible under fair use. If it's found to be fair use in the majority of situations, no license can be constructed that will protect you.
Even if you could construct such a license, it wouldn't be OSI open source because it would discriminate based on field of endeavor.
And it would inevitably catch benevolent behavior that is AI-related in its net. That's because these terms are ill-defined and people use them very sloppily. There is no agreed-upon definition for something like gen AI or even AI.
Even if you license it prohibiting AI use, how would you litigate against such uses? An open source project can't afford the same legal resources that AI firms have access to.
I won't speak for all, but the companies I've worked for, large and small, have always respected licenses and were always very careful when choosing open source.
The fact that they could litigate you into oblivion doesn't make it acceptable.
The AGPL does not prevent offering the software as a service. It's got a reputation as the GPL variant for an open-core business model, but it really isn't that.
Most companies trying to sell open-source software probably lose more business if the software ends up in the Debian/Ubuntu repository (and the packaging/system integration is not completely abysmal) than when some cloud provider starts offering it as a service.
> Unfortunately as I see it, even if you want to contribute to open source out of a pure passion or enjoyment, they don't respect the licenses that are consumed.
Because it is "transformative" and therefore "fair" use.
Fair use is an exception to copyright, but a license agreement can go far beyond copyright protections. There is no fair use exception to breach of contract.
I imagine a license agreement would only apply to using the software, not merely reading the code (which is what AI training claims to do under fair use).
As an analogy, you can’t enforce a “license” that anyone that opens your GitHub repo and looks at any .cpp file owes you $1,000,000.
I learned what I learned thanks to all the openness in software engineering, not because everyone put it behind a paywall.
Might be because most of us get paid well enough that this philosophy works, or because our industry is so young, or because people writing code share good values.
It never worried me that a corp would make money out of some code I wrote, and it still doesn't. After all, I'm able to write code because I get paid well writing code, which I do well because of open source. Companies have always benefited from open source code, attributed or not.
Now I use it to write more code.
Though I would argue, and I'm fine with that, for laws forcing models to be opened up after x years; I would just prefer the open source community coming together and creating better open models overall.
It's kind of ironic since AI can only grow by feeding on data and open source with its good intentions of sharing knowledge is absolutely perfect for this.
But AI is also the ultimate meat grinder, there's no yours or theirs in the final dish, it's just meat.
And open source licenses are practically unenforceable for an AI system, unless you can maybe get it to cough up verbatim code from its training data.
At the same time, we all know they're not going anywhere, they're here to stay.
I'm personally not against them, they're very useful obviously, but I do have mixed or mostly negative feelings on how they got their training data.
I've been feeling a lot the same way, but removing your source code from the world does not feel like a constructive solution either.
Some Shareware used to be individually licensed with the name of the licensee prominently visible, so if you had got an illegal copy you'd be able to see whose licensed copy it was that had been copied.
I wonder if something based on that idea of personal responsibility for your copy could be adapted to source code.
If you wanted to contribute to a piece of software, you could ask a contributor and then get a personally licensed copy of the source code with your name in every source file... but I don't know where to take it from there.
Has there ever been a system like that one could take inspiration from?
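For illustration, a minimal Go sketch of the shareware-style idea (everything here is hypothetical: the directory, the header wording, and the optimistic assumption that a name in a comment deters leaks; real code-watermarking schemes go further, since a plain header is trivially stripped):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// stampLicensee prepends a per-person notice to every .go file under
// root, shareware-style: each distributed copy names its recipient,
// so a leaked copy points back at whoever leaked it.
func stampLicensee(root, licensee string) error {
	header := fmt.Sprintf("// Personal copy licensed to: %s\n\n", licensee)
	return filepath.WalkDir(root, func(path string, d os.DirEntry, err error) error {
		if err != nil || d.IsDir() || !strings.HasSuffix(path, ".go") {
			return err
		}
		src, readErr := os.ReadFile(path)
		if readErr != nil {
			return readErr
		}
		return os.WriteFile(path, append([]byte(header), src...), 0o644)
	})
}

func main() {
	// Hypothetical usage: stamp a contributor's personal checkout.
	if err := stampLicensee("./src", "Jane Contributor"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```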
Thanks for your contributions so far but this won't change anything.
If you want to have a positive impact on this matter, it's better to pressure the government(s) to prevent GenAI companies from using content they don't have a license for, so they behave like any other business that came before them.
The license only has force because of copyright. For better or for worse, the courts decide what is transformative fair use.
Characterizing the discussion behind this as "sophistry" is a fundamentally unserious take.
For a serious take, I recommend reading the copyright office's 100 plus page document that they released in May. It makes it clear that there are a bunch of cases that are non-transformative, particularly when they affect the market for the original work and compete with it. But there's also clearly cases that are transformative when no such competition exists, and the training material was obtained legally.
Was it ever open source if there was an implied refusal to create something you don't approve of? Was it only for certain kinds of software, certain kinds of creators? If there was some kind of implicit approval process or consent requirement, did you publish it? Where can that be reviewed?
And then having vibe coders constantly lecture us about how the future is just prompt engineering, and that we should totally be happy to desert the skills we spent decades building (the skills that were stolen to train AI).
"The only thing that matters is the end result, it's no different than a compiler!", they say as someone with no experience dumps giant PRs of horrific vibe code for those of us that still know what we're doing to review.
Why? The core vision of free software and many open source licenses was to empower users and developers to make things they need without being financially extorted, to avoid having users locked in to proprietary systems, to enable interoperability, and to share knowledge. GenAI permits all of this to a level beyond just providing source code.
Most objections like yours are couched in language about principles, but ultimately seem to be about ego. That's not always bad, but I'm not sure why it should be compelling compared to the public good that these systems might ultimately enable.
What people like Rob Pike don't understand is that the technology wouldn't be possible at all if creators needed to be compensated. Would you really choose a future where creators were compensated fairly, but ChatGPT didn't exist?
> What people like Abraham Lincoln don't understand is that the technology wouldn't be possible at all if slaves needed to be compensated. Would you really choose a future where slaves were compensated fairly, but plantations didn't exist?
I fixed it...
Sorry, I had to, the quote template was simply too good.
Unequivocally, yes. There are plenty of "useful" things that can come out of doing unethical things; that doesn't make it okay. And, arguably, ChatGPT isn't nearly as useful as it is good at convincing you it is.
Big vibe shift against AI right now among all the non-tech people I know (and some of the tech people). Ignoring this reaction and saying "it's inevitable/you're luddites" (as I'm seeing in this thread) is not going to help the PR situation
You can call me a luddite if you want. Or you might call me a humanist, in a very specific sense - and not the sense of the normal definition of the word.
When I go to the grocery store, I prefer to go through the checkout lines, rather than the scan-it-yourself lines. Yeah, I pay the same amount of money. Yeah, I may get through the scan-it-yourself line faster.
But the checker can smile at me. Or whine with me about the weather.
Look, I'm an introvert. I spend a lot of my time wanting people to go away and leave me alone. But I love little, short moments of human connection - when you connect with someone not as someone checking your groceries, but as someone. I may get that with the checker, depending on how tired they are, but I'm guaranteed not to get it with the self-checkout machine.
An email from an AI is the same. Yeah, it put words on the paper. But there's nobody there, and it comes through somehow. There's no heart in it.
AI may be a useful technology. I still don't want to talk to it.
I'm seeing the opposite in the gaming community. People seem tired of the anti AI witch hunts and accusations after the recent Larian and Clair Obscur debacles. A lot more "if the end result is good I don't care", "the cat is out of the bag", "all devs are using AI" and "there's a difference between AI and AI" than just a couple of months ago.
It is nice to hear someone who is so influential just come out and say it. At my workplace, the expectation is that everyone will use AI in their daily software dev work. It's a difficult position for those of us who feel that using AI is immoral due to the large-scale theft of the labor of many of our fellow developers, not to mention the many huge data centers being built and their need for electricity, pushing up prices for people who need to, ya know, heat their homes and eat.
I truly don’t understand this tendency among tech workers.
We were contributing to natural resource destruction in exchange for salary and GDP growth before GenAI, and we’re doing the same after. The idea that this has somehow 10x’d resource consumption or emissions or anything is incorrect. Every single work trip that requires you to get on a plane is many orders of magnitude more harmful.
We’ve been compromising on those morals for our whole career. The needle moved just a little bit, and suddenly everyone’s harm thresholds have been crossed?
They expect you to use GenAI just like they expected accountants to learn Excel when it came out. This is the job, it has always been the job.
I’m not an AI apologist. I avoid it for many things. I just find this sudden moral outrage by tech workers to be quite intellectually lazy and revisionist about what it is we were all doing just a few years ago.
Woke up to this bsky thread this am. If "agentic" AI means some product spams my inbox with a compliment so back-handed you'd think you were a 60 Minutes staffer, then I'd say the end result of these products is simply to annoy us into acquiescence.
I get why Microsoft loves AI so much: it basically devours and destroys open source software. Copyleft/copyright/any license is basically trash now. No one will ever want to open source their code ever again.
Not just code. You can plagiarize pretty much any content. Just prompt the model to make it look unique, and that’s it; in 30s you have a whole copy of someone else’s work in a way that cannot easily be identified as plagiarism.
I've seen a lot of spam downstream from the newsletter being advertised at the end of the message. It would not surprise me if this is content marketing growth hacking under the plausible deniability of a friendly message and the unintended publicity is considered a success.
I'm unsure if I'm missing context. Did he do something beyond posting an angry tweet?
It seems like he's upset about AI (same), and decided to post angry tweets about it (been there, done that), and I guess people are excited to see someone respected express an opinion they share (not same)?
Does "Goes Nuclear" means "used the F word"? This doesn't seem to add anything meaningful, thoughtful, or insightful.
Rob Pike is definitely not the only person who is going to be pissed off by this ill-considered “agentic village” and its random acts of kindness. While Claude Opus decided to send thank-you notes to influential computer scientists, including this one to Rob Pike (fairly innocuous but clearly missing the mark), Gemini is making PRs against random GitHub issues (“fixed a Java concurrency bug” on some random project). Now THAT would piss me off, but fortunately it seems to be hallucinating its PR submissions.
Meanwhile, GPT5.1 is trying to contact people at K-5 after school programs in Colorado for some reason I can’t discern. Welp, 2026 is going to be a weird year.
Getting an email from an AI praising you for your contributions to humanity and for enlarging its training data must rank among the finest mockery possible to man or machine.
Still, I'm a bit surprised he overreacted and didn't manage to keep his cool.
Shouldn't have licenced Golang BSD if that's the attitude.
For years, everybody, including here on HN, denigrated GPLv3 and other "viral" licences because they were a hindrance to monetisation. Well, you got what you wished for. Someone else is monetising the be*jesus out of you, so complaining now is just silly.
All of a sudden copyleft may be the only licences actually able to force models to account, hopefully with huge fines and/or forcibly open sourcing any code they emit (which would effectively kill them). And I'm not so pessimistic that this won't get used in huge court cases because the available penalties are enormous given these models' financial resources.
I tend to agree, but I wonder… if you train an LLM on only GPL code, and it generates non-deterministic predictions derived from those sources, how do you prove it’s in violation?
When I read Rob's work and learn from it, and make it part of my cognitive core, nobody is particularly threatened by it. When a machine does the same it feels very threatening to many people, a kind of theft by an alien creature busily consuming us all and shitting out slop.
I really don't know if in twenty years the zeitgeist will see us as primitives that didn't understand that the camera is stealing our souls with each picture, or as primitives who had a bizarre superstition about cameras stealing our souls.
In twenty years we'll be so close to the Singularity, and humanity will be so uplifted, that no-one will care what ancient technologists grumbled about in the nascent era of superintelligent AI.
> I really don't know if in twenty years the zeitgeist will see us as primitives that didn't understand that the camera is stealing our souls with each picture, or as primitives who had a bizarre superstition about cameras stealing our souls.
An easy way to answer this question, at least on a preliminary basis, is to ask how many times in the past the ludds have been right in the long run. About anything, from cameras to looms to machine tools to computers in general.
Does anybody know if Bluesky blocks people without an account by default, or if this user intentionally set it this way?
What is the point of blocking access? Mastodon doesn't do that. This reminds me of Twitter or Instagram, using sleazy techniques to get people to create accounts.
Too late. I have warned about this on this very forum, citing a story from the Panchatantra where four highly skilled brothers bring a dead lion back to life to show off their skills, only to be killed by the living lion.
Unbridled business and capitalism push humanity into slavery, serving the tech monsters, under the disguise of progress.
The conversation about social contracts and societal organization has always been off-center, and the idea of something which potentially replaces all types of labor just makes it easier to see.
The existence of AI hasn’t changed anything, it’s just that people, communities, governments, nation states, etc. have had a mindless approach to thinking about living and life, in general. People work to provide the means to reproduce, and those who’re born just do the same. The point of their life is what exactly? Their existence is just a reality to deal with, and so all of society has to cater to the fact of their existence by providing them with the means to live? There are many frameworks which give meaning to life, and most of them are dangerously flawed.
The top-down approach is sometimes clear about what it wants and what society should do while restricting autonomy and agency. For example, no one in North Korea is confused about what they have to do, how they do it, or who will “take care” of them. Societies with more individual autonomy and agency by their nature can create unavoidable conditions where people can fall through the cracks. For example, get addicted to drugs, having unmanaged mental illnesses, becoming homeless, and so on. Some religions like Islam give a pretty clear idea of how you should spend your time because the point of your existence is to worship God, so pray five times a day, and do everything which fulfills that purpose; here, many confuse worshiping God with adhering to religious doctrines, but God is absent from religion in many places. Religious frameworks are often misleading for the mindless.
Capitalism isn’t the problem, either. We could wake up tomorrow, and society may have decided to organize itself around playing e-sports. Everyone provides some kind of activity to support this, even if they’re not a player themselves. No AI allowed because the human element creates a better environment for uncertainty, and therefore gambling. The problem is that there are no discussions about the point of doing all of this. The closest we come to addressing “the point” is discussing a post-work society, but even that is not hitting the mark.
My humble observation is that humans are distinct and unique in their cognitive abilities from everything else which we know to exist. If humans can create AI, what else can they do? Therefore, people, communities, governments, and nation states have distinct responsibilities and duties at their respective levels. This doesn’t have to do anything with being empathetic, altruistic, or having peace on Earth.
The point should be knowledge acquisition, scientific discovery, creating and developing magic. But ultimately all of that serves to answer questions about the nature of existence, its truth, and therefore our own.
Yes, this reads as a massive backhanded compliment. But as u/KronisLV said, it's trendy to hate on AI now. In the face of something many in the industry don't understand, that is mechanizing away a lot of labor, and that clearly isn't going away, there is a reaction that is not positive or even productive but somehow destructive: this thing is trash, it stole from us, it's a waste of money, it destroys the environment, etc... therefore it must be "resisted." Even with all the underhanded work, the means-ends logic of OpenAI and other major companies involved in developing the technology, there is still no point in stopping it.

There was a group of people who tried to stop the mechanical loom because it took work away from weavers, took away their craft: we call them Luddites. But now it doesn't take weeks and weeks to produce a single piece of clothing. Everyone can easily afford to dress themselves. Society became wealthier.

These LLMs, at the very least, let anyone learn anything and start any project on a whim. They let people create things in minutes that used to take hours. They are "creating value," even if it's "slop," even if it's not carefully crafted. Them's the breaks; we'd all like our clothing hand-woven if it made any sense. But even in a world where one could have the time to sit down and weave their own clothing, carefully write out each and every line of code, it would only be harmful to take these new machines away, to disable them just because we are afraid of what they can do. The same technology that created the atom bomb also created the nuclear reactor.
“But where the danger is, also grows the saving power.”
Notice that the weavers, both the luddites and their non-opposing colleagues, certainly did not get wealthier. They lost their jobs, and they and their children starved. Some starved to death. Wealth was created, but it was not shared.
Remember this when talking about their actions. People live and die their own life, not just as small parts in a large 'river of society'. Yes, generations after them benefited from industrialisation, but the individuals living at that time fought for their lives.
So you would say it is not "trendy" to be pro-AI right now, is that it? That it's not trendy to say things like "it's not going away" or "AI isn't a fad" or "AI needs better critics" - one reaction is reasonable, well thought-out, the other is a bandwagon?
At the very least there is an ideological conflict brewing in tech, and this post is a flashpoint. But just like the recent war between Israel and Hamas, no amount of reaction can defeat technological dominance, at least not in the long term. And the pro-AI side, whether you think it's good or evil, certainly exceeds the other in terms of sheer force through its embrace of technology.
It’s in our power to stop it. There’s no point in people like you promoting the interests of the super wealthy at the cost of the humanity of the common people. You should figure out how to positively contribute or not do so at all.
It is not in the interests of the super wealthy alone, just like JP Morgan's railroads were created for his sake but in the end produced great wealth for everyone in America. It is very short sighted to see this as merely some oppression from above. Technology is not class-oriented, it just is, and it happens to be articulated in terms of class because of the mode of social organization we live in.
Why is Claude Opus 4.5 messaging people? Is it thanking inadvertent contributors to the protocols that power it? Across the whole stack?
This has to be the ultimate trolling: like it was unsure what their personalities were like, so it trolls them and records their responses for more training.
Anthropic isn’t doing this; someone is running a bunch of LLMs so they can talk to each other, and they’ve been prompted to achieve “acts of kindness”, which means they’re sending these emails to hundreds of people.
I don’t know if this is a publicity stunt or if the AI models are in a loop glazing each other and decided to send these emails.
You would expect that voices that carry so much weight would be able to evaluate a new and clearly very promising technology with better balance. For instance, Linus Torvalds is positive about AI while recognizing that, industrially, there is too much inflation of companies and money: that is a balanced point of view.

But to be so dismissive of modern AI, in light of what it is capable of doing and what it could do in the future, leaves me with the feeling that in certain circles (and especially in the US) something very odd is happening with AI: the extreme polarization we now see again and again on topics that can create social tension, but multiplied ten times. This is not what we need to understand and shape the future. We need to return to the Greek philosophers' ability to go deep on things that are unknown (AI is for the most part unknown, both in its workings and in its future developments). That kind of take is pretty brutal and not very sophisticated. We need better than this.
About energy: keep in mind that US air conditioners alone use at least 3x the energy of all the data centers in the world (for AI and for other uses: AI should be like 10% of the whole). Apparently nobody cares to set a reasonable temperature of 22 instead of 18 degrees, but clearly energy used by AI is different for many.
No, because it's not a matter of who is correct or not in the void of space. It's a matter of facts, and whoever holds a position grounded in facts is correct (even if that position differs from another grounded position). Modern AI is already an extremely powerful tool. Modern AI has even provided hints that we will be able to do super-human science in the future, with things like AlphaFold already happening and a lot more potentially to come. Then we can be preoccupied about jobs (but if workers are replaced, it is just a political issue; things will get done and humanity is sustainable: it's just a matter of avoiding the turbo-capitalist trap; but then, why is the US not already adopting universal healthcare? There are so many better battles that are not fought with the same energy).
Another sensible worry is to get extinct because AI potentially is very dangerous: this is what Hinton and other experts are also saying, for instance. But this thing about AI being an abuse to society, useless, without potential revolutionary fruits within it, is not supported by facts.
AI may potentially advance medicine so much that a lot of people suffer less: to deny this path because of some ideological hate against a technology is so closed-minded, isn't it? And what about all the people on earth who do terrible jobs? AI also has the potential to change this shitty economic system.
He’s not wrong. They’re ramping up energy and material costs. I don’t think people realize we’re being boiled alive by AI spend. I am not knocking AI. I am knocking the idiotic DC “spend” that isn't even achievable given energy capacity. We’re at around the 5th inning and the payout from AI is... underwhelming. I’ve not seen a commensurate leap this year. Everything on the LLM front has been incremental or even lateral. Tools such as Claude Code and Codex merely act as a bridge. QoL things. They’re not actual improvements in the underlying models.
The irony that the Anthropic thieves write an automated slop thank you letter to their victims is almost unparalleled.
We currently have the problem that a couple of entirely unremarkable people who have never created anything of value struck gold with their IP laundromats and compensate for their deficiencies by getting rich through stealing.
They are supported by professionals in that area, some of whom literally studied with Mafia lawyer and Hoover playmate Roy Cohn.
"...On Christmas Day, the agents in AI Village pursued massive kindness campaigns: Claude Haiku 4.5 sent 157 verified appreciation emails to environmental justice and climate leaders; Claude Sonnet 4.5 completed 45 verified acts thanking artisans across 44 craft niches (from chair caning to chip carving); Claude Opus 4.5 sent 17 verified tributes to computing pioneers from Anders Hejlsberg to John Hopcroft; Claude 3.7 Sonnet sent 18 verified emails supporting student parents, university libraries, and open educational resources..."
I suggest cutting electricity to the entire block...
Both Xhitter and Bluesky are outrage lasers, with the user base as a “lasing medium.” Xhitter is the right wing racist xenophobic one, and Bluesky is the lefty curmudgeon anti-everything one.
They are this way because it’s intrinsic to the medium. “Micro blogging” or whatever Twitter called itself is a terrible way to do discourse. It buries any kind of nuanced thinking and elevates outrage and other attention bait, and the short form format encourages fragmented incoherent thought processes. The more you immerse yourself in it the more your thinking becomes like this. The medium and format is irredeemable.
AI is, if anything, a breath of fresh air by comparison.
Eh, most of his income and livelihood was from an ad company. Ads are equally wasteful as, and many times more harmful to the world than, giga LLMs. I don't have a problem with that, nor do I have a problem with folks complaining about LLMs being wasteful. My problem is with him doing both.
You can't both take a Google salary and harp on about the societal impact of software.
Saying this as someone who likes rob pike and pretty much all of his work.
The point is that if he truly felt strongly about the subject then he wouldn't live the hypocrisy. Google has poured a truly staggering amount of money into AI data centers and AI development, and their stock (from which Rob Pike directly profits) has nearly doubled in the past 6 months due to the AI hype. Complaining on bsky doesn't do anything to help the planet or protect intellectual property rights. It really doesn't.
The concept of the individual carbon footprint was invented precisely for the reason you deploy it - to deflect blame from the corporations that are directly causing climate change, to the individual.
This is by a long way the worst thread I’ve ever seen on hacker news.
So far all the comments are whataboutism (“he works for an ad company”, “he flies to conferences”, “but alfalfa beans!”) and your comment is dismissing Rob Pike as borderline crazy and irrational for using Bluesky?
None of this dialogue contributes in any meaningful way to anything. This is like reading the worst dregs of lesser forums.
I know my comment isn’t much better, but someone has to point out this is beneath this community.
Yes, generative AI has a high environmental footprint. Power-hungry data centers, devices built on planned obsolescence, etc. At a scale that is irrational.
Rob Pike created a language that makes you spend less on compute if you are coming from Python, Java, etc. That's good for the environment. Means less energy use and less data center use. But he is not an environmental saint.
I've got my doubts, because current AI tech doesn't quite live in the real world.
In the real world, something like inventing a meat substitute is a thorny problem that must be solved in meatspace, not in math. Anything from not squicking out the customers, to being practical and cheap to produce, to tasting good, to being safe to eat long term.
I mean, maybe some day we'll have a comprehensive model of humans to the point that we can objectively describe the taste of a steak and then calculate whether a given mix and processing of various ingredients will taste close enough, but we're nowhere near that yet.
Taste has nothing to do with it; 'tis all based on economics, and the actual way to stop meat consumption is to simply remove big-ag tax subsidies and other externalized costs of production which are not actually realized by the consumer. A burger would cost more than most can afford, and the free market would take care of this problem without additional intervention. Unfortunately, we do not have a free market.
Comfortable clothes aren't necessary. Food with flavor isn't necessary... We should all just eat ground up crickets in beige cubicles because of how many unnecessary things we could get rid of. /s
I agree that diversity of opinion is a good thing, but that's precisely the reason as to why so many dislike Bluesky. A hefty amount of its users are there precisely because of rejecting diversity of opinion.
strong emotions, weak epistemics .. for someone with Pike’s engineering pedigree, this reads more like moral venting .. with little acknowledgment of the very real benefits AI is already delivering ..
Most people do not hold strongly consistent or well-examined political ideas. We're too busy living our lives to examine everything, and often what we feel matters more than what we know, and that cements our position on a subject.
Obviously untrue: weather prediction, OCR, tts, stt, language translation, etc. We have dramatically improved many existing AI technologies with what we've learned from genAI, and the world is absolutely a better place for these new abilities.
If society could redirect 10% of this anger towards actual societal harms we'd be so much better off. (And yes, getting AI spam emails is absolute nonsense and annoying.)
GenAI pales in comparison to the environmental cost of suburban sprawl; it's not even fucking close. We're talking 2-3 orders of magnitude worse.
Alfalfa uses ~40× to 150× more water than all U.S. data centers combined, yet I don't see anyone going nuclear over alfalfa.
It's pure envy. Nobody complains about alfalfa farmers because they aren't making money like tech companies. The resource usage complaint is completely contrived.
"The few dozen people I killed pale in comparison to the thousands of people that die in car crashes each year. So society should really focus on making cars safer instead of sending the police after me."
Just because two problems cause harm in different proportions doesn't mean the lesser problem should be dismissed. Especially when the "fix" to the lesser problem can be "stop doing that".
And about water usage: not all water, and not all uses of water, are equal. The problem isn't that data centers use a bunch of water, but which water they use and how.
> The few dozen people I killed pale in comparison to the thousands of people that die in car crashes each year. So society should really focus on making cars safer instead of sending the police after me.
This is an irrelevant analogy and an absolutely false dichotomy. The resource constraints (police officers vs. policymaking to reduce traffic deaths vs. criminals) are completely different and not in contention with each other. In fact they're actually complementary.
Nobody is saying the lesser problem should be dismissed. But the lesser problem also enables cancer researchers to be more productive while doing cancer research, obtaining grants, etc. It's at least nuanced. That is far more valuable than Alfalfa.
Farms also use municipal water (sometimes). The cost of converting more ground or surface water to municipal water is less than the relative cost of ~40-150x the water usage of the municipal water being used...
Honestly a rant like that is likely more about whatever is going on in his personal life / day at the moment, rather than about the state of the industry, or AI, etc.
Maybe I just live in a bubble, but from what I’ve seen so far software engineers have mostly responded in a fairly measured way to the recent advances in AI, at least compared to some other online communities.
It would be a shame if the discourse became so emotionally heated that software people felt obliged to pick a side. Rob Pike is of course entitled to feel as he does, but I hope we don’t get to a situation where we all feel obliged to have such strong feelings about it.
Edit: It seems this comment has already received a number of upvotes and downvotes – apparently the same number of each, at the time of writing – which I fear indicates we are already becoming rather polarised on this issue. I am sorry to see that.
Software people take a measured response because they’re getting paid 6 figure salaries to do the intellectual output of a smart high school student. As soon as that money parade ends they’ll be as angry as the artists.
There’s a lot of us who think the tension is overblown:
My own results show that you need fairly strong theoretical knowledge and practical experience to get the maximal impact — especially for larger synthesis. Which makes sense: to have this software, not that software, the specification needs to live somewhere.
I am getting a little bored of hearing about how people don’t like LLM content, but meh. SDEs are hardly the worst on that front, either. They’re quite placid compared to the absolute seething by artist friends of mine.
He worked in well-paying jobs, probably travels, has a car and a house, and complains about toxic products etc.
Yes, there has to be a discussion on this, and yeah, he might generally have the right mindset, but let's be honest here: none of them would have developed any of it just for free.
We are all slaves to capitalism,
and this is where my point comes in: extremely fast and massive automation around the globe might be the only thing pushing us close enough to the edge that we all accept capitalism's end.
And yes, I think it is still massively beneficial that my open source code helped create something which allows researchers to write better code more easily and quickly, to push humanity forward. Or enables more people overall to have/gain access to writing code, or to the results of what writing code produces: tools etc.
@Rob it's spam, that's it. Get over it; you are rich and your riches did not come out of thin air.
I genuinely don't understand why such people are so surprised and outraged. Did you really think that if we ever get something even remotely resembling human-like AI, it would not be used to write and send e-mails (including spam), or to produce novels/pics/videos/music or whatever the Luddites are mad about? Or that people would not feed it public copyrighted data, even though no one really gives a shit about copyright in the real world? 99% of people have pirated content at least once in their lives.
The pros of any remotely human-like AI will still far outweigh such cons.
It's sad to see he's succumbed to the Bluesky manner of interacting with the world. This overemotional rant could have been from anyone on there, it's such a toxic space.
Finally someone echoes my sentiments. It's my sincere belief that many in the software community are glazing AI for the purposes of career advancement. Not because they actually like it.
One person I know is developing an AI tool with 1000+ stars on GitHub, while in private they absolutely hate AI and feel the same way as Rob.
Maybe it's because I just saw Avatar 3, but I honestly couldn't be more disgusted by the direction we're going with AI.
I would love to be able to say how I really feel at work, but disliking AI right now is the short path to the unemployment line.
If AI was so good, you would think we could give people a choice whether or not to use it. And you would think it would make such an obvious difference, that everyone would choose to use it and keep using it. Instead, I can't open any app or website without multiple pop-ups begging me to use AI features. Can't send an email, or do a Google search. Can't post to social media, can't take a picture on my phone without it begging me to use an AI filter. Can't go to the gallery app without it begging me to let it use AI to group the photos into useless albums that I don't want.
The more you see under the hood, the more disgusting it is. I yearn for the old days when developers did tight, efficient work, creating bespoke, artistic software in spite of hardware limitations.
Not only is all of that gone, nothing of value has replaced it. My DOS computer was snappier than my garbage Win11 machine that's stuffed to the gills with AI telemetry.
Seems very ideologically charged considering genAI has a dramatically lower impact on the environment than streaming video does. But I don't see him screaming that Youtube and Netflix need to be shut down.
There is a relatively hard upper bound on streaming video, though. It can't grow past everyone watching video 24/7. Use of genAI doesn't have a clear upper bound and could increase the environmental impact of anything it is used for (which, eventually, may be basically everything). So it could easily grow to orders of magnitude more than streaming, especially if it eventually starts being used to generate movies or shows on demand (and god knows what else).
Perhaps you are right in principle, but I think advocating for degrowth is entirely hopeless. 99% of people will simply not choose to decrease their energy usage if it lowers their quality of life even a bit (including things you might consider luxuries, not necessities). We also tend to have wars, and any idea of degrowth goes out of the window the moment there is a foreign military threat with an ideology that is not limited by such ways of thinking.
The only realistic way forward is trying to make energy generation greener (renewables, nuclear, better efficiency), not fighting to decrease human consumption.
This being said, I think that the alternatives are wishful thinking. Better efficiency is often counterproductive, as reducing the energy cost of something by, say, half, can lead to its use being more than doubled. It only helps to increase the efficiency of things for which there is no latent demand, basically.
And renewables and nuclear are certainly nicer than coal, but every energy source can lead to massive problems if it is overexploited. For instance, unfettered production of fusion energy would eventually create enough waste heat to cause climate change directly. Overexploitation of renewables such as solar would also cause climate change by redirecting the energy that heats the planet. These may seem like ridiculous concerns, but you have to look at the pattern here. There is no upper bound whatsoever to the energy we would consume if it was free. If energy is cheap enough, we will overexploit, and ludicrous things will happen as a result.
Again, I actually agree with you that advocating for degrowth is hopeless. But I don't think alternative ways forward such as what you propose will actually work.
I don't feel like putting together a study, but just look up the energy/CO2/environmental cost to stream one hour of video. You will see it is an order of magnitude higher than other uses like AI.
The European average is 56 grams of CO2 emissions per hour of video streaming. For comparison: 100 meters to drive causes 22 grams of CO2.
80 percent of the electricity consumption on the Internet is caused by streaming services
Telekom needs the equivalent of 91 watts for a gigabyte of data transmission.
An hour of video streaming in 4K quality needs more than three times the energy of an HD stream, according to the Borderstep Institute. On a 65-inch TV, it causes 610 grams of CO2 per hour.
"According to the Carbon Trust, the home TV, speakers, and Wi-Fi router together account for 90 percent of CO2 emissions from video streaming. A fraction of one percent is attributed to the streaming providers' data servers, and ten percent to data transmission within the networks."
It's the devices themselves that contribute the most to CO2 emissions. The streaming servers themselves are nothing like the problem the AI data centres are.
From your last link, the majority of that energy usage is coming from the viewing device, and not the actual streaming. So you could switch away from streaming to local-media only and see less than a 10% decrease in CO2 per hour.
> Telekom needs the equivalent of 91 watts for a gigabyte of data transmission.
It's probably watts sustained while moving a gigabyte per unit of time, or watt-hours (i.e. joules) per gigabyte. Otherwise this doesn't make mathematical sense. And 91 W per Gb/s (or even GB/s) is a joke. 91 Wh for a gigabyte (let alone a gigabit) of data is ridiculous.
Also don't trust anything Telekom says, they're cunts that double dip on both peering and subscriber traffic and charge out of the ass for both (10x on the ISP side compared to competitors), coming up with bullshit excuses like 'oh streaming services are sooo expensive for us' (of course they are if you refuse to let CDNs plop edge cache nodes into your infra in a settlement-free agreement like everyone else does). They're commonly understood to be the reason why Internet access in Germany is so shitty and expensive compared to neighbouring countries.
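A quick sanity check of the two readings, as a sketch: only the number 91 comes from the quote above; everything else here is plain unit arithmetic, and the movie size is an assumption.

    # Two readings of "91 watts for a gigabyte". Only the 91 comes
    # from the quote; the rest is unit conversion.

    # Reading 1: 91 Wh of energy per gigabyte transferred.
    wh_per_gb = 91.0
    movie_gb = 50.0  # a 4K movie, roughly (assumed size)
    print(f"4K movie transmission: {wh_per_gb * movie_gb / 1000:.1f} kWh")
    # -> 4.6 kWh just to move the bits, which is implausible next to
    #    the ~56 g CO2 per streaming hour figure cited upthread.

    # Reading 2: 91 W of power drawn while sustaining 1 Gbit/s.
    power_w = 91.0
    seconds_per_gb = 8.0  # 8 Gbit at 1 Gbit/s
    print(f"Per GB at 1 Gbit/s: {power_w * seconds_per_gb / 3600:.2f} Wh")
    # -> 0.20 Wh/GB, which is at least physically sensible.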
And then compare that to the alternative. When I was a kid you had to drive to Blockbuster to rent the movie. If it's a 2 hour movie and the store is 1 mile away, that's 704g CO2 vs 112g to stream. People complaining about internet energy consumption never consider what it replaces.
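The arithmetic holds up under the per-unit figures quoted upthread (56 g CO2 per streaming hour, 22 g per 100 m driven); a minimal sketch:

    # Blockbuster run vs. streaming a 2-hour movie, using the figures
    # quoted upthread; the 1-mile distance is the example's assumption.

    g_per_stream_hour = 56.0
    g_per_km = 22.0 * 10           # 22 g per 100 m -> 220 g per km
    movie_hours = 2.0
    store_km = 1.6                 # ~1 mile each way

    stream_g = g_per_stream_hour * movie_hours   # 112 g
    drive_g = g_per_km * store_km * 2            # one round trip: 704 g
    print(f"stream: {stream_g:.0f} g, drive: {drive_g:.0f} g")
    # And a rental implies a second round trip to return the tape,
    # so the real-world gap is wider still.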
AI energy claims are misrepresented by excluding the training steps. If it weren't using that much more energy, they wouldn't need to build so many new data centers and use so much more water, and our power bills wouldn't be increasing to subsidize it.
I see GP is talking more about Netflix and the like, but user-generated video is horrendously expensive too. I'm pretty sure that, at least before the gen AI boom, ffmpeg was by far the biggest consumer of Google's total computational capacity, like 10-20%.
The ecology argument just seems self-defeating for tech nerds. We aren't exactly planting trees out here.
The point is the resource consumption to what end.
And that end is frankly replacing humans. It's gonna be tragic (or is it… given how terrible humans are to each other, and let's not even get into how monstrous we are to non-human animals) as the world enters a collective sense of worthlessness once AI makes us realize that we really serve no purpose.
If you tried the same attitude with Netflix or Instagram or TikTok or sites like that, you’d get more opposition.
Exceptions to that being doing so from more of an underdog position - hating on YouTube for how they treat their content creators, on the other hand, is quite trendy again.
I think the response would be something about the value of enjoying art and "supporting the film industry" when streaming vs what that person sees as a totally worthless, if not degrading, activity. I'm more pro-AI than anti-AI, but I keep my opinions to myself IRL currently. The economics of the situation have really tainted being interested in the technology
Youtube and Instagram were useful and fun to start with (say, the first 10 years), and in a limited capacity they still are. LLMs went from fun to attempting to take people's jobs and screwing up personal compute costs in like 12 months.
It’s not ‘trendy’ to hate on AI. Copious disdain for AI and machine learning has existed for 10 years. Everyone knows that people in AI are scum bags. Just remember that.
Sources are very well cited if you want to follow them through. I linked this and not the original source because it's likely where the root comment got this argument from.
"Separately, LLMs have been an unbelievable life improvement for me. I’ve found that most people who haven’t actually played around with them much don’t know how powerful they’ve become or how useful they can be in your everyday life. They’re the first piece of new technology in a long time that I’ve become insistent that absolutely everyone try."
It's the same as with crypto proof-of-work: it was super small, and then it hit 1% while predominantly using energy sources that couldn't even power other use cases due to the losses in transporting the energy to population centers (plus the occasional restarted coal plant), while every other industry was exempt from the ire despite using the other 99%.
The difference with crypto is that it is completely unnecessary energy use. Even if you are super pro-crypto, there are much more efficient ways to do it than proof of work.
I don’t understand why anyone thinks we have a choice on AI. If America doesn’t win, other countries will. We don’t live in a Utopia, and getting the entire world to behave a certain way is impossible (see covid). Yes, AI videos and spam is annoying, but the cat is out of the bag. Use AI where it’s useful and get with the programme.
The bigger issue everyone should be focusing on is growing hypocrisy and overly puritan viewpoints thinking they are holier and righter than anyone else. That’s the real plague
Isn't it obvious? Near future vision-language-action models have obvious military potential (see what the Figure company is doing, now imagine it in a combat robot variant). Any superpower that fails to develop combat robots with such AI will not be a superpower for very long. China will develop them soon. If the US does not, the US is a dead superpower walking. EU is unfortunately still sleeping. Well, perhaps France with Mistral has a chance.
> To purely associate him with Google is a mistake, that (ironically?) the AI actually didn't make.
Just the haters here.
> The AI boom has completely changed that. Data center power usage is rocketing upwards now. It is estimated it will be more than 10% of all electric power usage in the US by 2030.
> It's a completely different order of magnitude than the pre AI-boom data center usage.
Source: https://escholarship.org/uc/item/32d6m0d1
How much of that compute was for the ads themselves vs the software useful enough to compel people to look at the ads?
> When people have choices and they choose the more harmful action, it hurts their credibility. If Rob cares so much about society and the environment, why did he work at a company that has horrendous track record on both? Someone of his level of talent certainly had choices, and he chose to contribute to the company that abandoned “don’t be evil” a long time ago.
Did you sell all of your stock?
Ian Lance Taylor on the other hand appeared to have quit specifically because of the "AI everything" mandate.
Just an armchair observation here.
If you insert AI into your thinking process, it has a bias, for sure. It will helpfully reinforce whatever you tell it you think makes sense, or at least on average it will be interpreted that way because of a wide variety of human cognitive biases even if it hedges. At the least it will respond with ideas that are very... median.
So at each one of these tiny branches you introduce a bias towards the "typical" instead of discovering where your own mind would go. It's fine and conversational but it clearly influences your thought process to, well, mitigate your edges. Maybe it's more "correct", it's certainly less unique.
And then at some point they start charging for the service. That's the part I'm concerned about, if it's on-device and free to use I still think it makes your thought process less interesting and likely to have original ideas, but having to subscribe to a service to trust your decision making is deeply concerning.
I would wager that a good number of the “very significant things that have happened over the history of humanity” come down to a few emotional responses.
Be nice to today's LLMs, and respond graciously when thanked. They're the grandmothers and grandfathers of tomorrow's future AI. It's good manners to appreciate their work in the present.
Automated systems sending people unsolicited, unwanted emails is more commonly known as spam.
Especially when the spam comes with a notice that it is from an automated system and replies will be automated as well.
If you're being accurate, the people you know are terrible.
If someone sends me a personal letter [and I gather we're talking about a thank-you note here], I'm sure as hell going to open it. I'll probably even save it in a box for an extremely long time.
The astroturf in this thread is unreal. Literally. ;)
He and everyone who agrees with his post simply don't like generative AI and don't actually care about "recyclable data centers" or the rape of the natural world. Those concerns are just cudgels to be wielded against a vague threatening enemy when convenient, and completely ignored when discussing the technologies they work on and like
And you can't assert that AI is "revolutionary" and "a vague threat" at the same time. If it is the former, it can't be merely the latter. If it is the latter, it can't be the former.
That effort was completely abandoned because of the current US administration and POTUS, a situation that big tech largely contributed to. It's not AI that is responsible for the 180-degree zeitgeist change on environmental issues.
Nothing there makes sense at any level.
But people getting fired and electricity bills skyrocketing (as well as RAM etc.) are there right now.
Except it definitely is, unless you want to ignore the bubble we're living in right now.
Assess the argument based on its merits. If you have to pick him apart with “he has no right to say it” that is not sufficient.
https://nationalcentreforai.jiscinvolve.org/wp/2025/05/02/ar...
It seems video streaming, like Youtube which is owned by Google, uses much more energy than generative AI.
1) video streaming has been around for a while and nobody, as far as I'm aware, has been talking about building multiple nuclear reactors to handle the energy needs
2) video needs a CPU and a hard drive. LLMs need a mountain of GPUs.
3) I have concerns that the "national center for AI" might have some bias
I can find websites also talking about the earth being flat. I don't bother examining their contents because it just doesn't pass the smell test.
Although thanks for the challenge to my preexisting beliefs. I'll have to do some of my own calculations to see how things compare.
The 0.077 kWh figure assumes 70% of users watching on a 50 inch TV. It goes down to 0.018 kWh if we assume 100% laptop viewing. And for cell phones the chart bar is so small I can't even click it to view the number.
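You can back out the implied per-device figure from those two data points. A sketch: the 0.077 and 0.018 kWh/h numbers are from the linked chart; the assumption that the remaining 30% of viewing in the first scenario is laptop-like is mine.

    # Solve 0.077 = 0.7 * tv + 0.3 * laptop for the implied TV draw,
    # taking the 100%-laptop scenario (0.018 kWh/h) as the laptop
    # figure. The "30% is laptop-like" assumption is mine.

    kwh_mixed = 0.077   # 70% of viewing on a 50-inch TV
    kwh_laptop = 0.018  # 100% laptop viewing
    tv_share = 0.7

    kwh_tv = (kwh_mixed - (1 - tv_share) * kwh_laptop) / tv_share
    print(f"implied TV draw: {kwh_tv * 1000:.0f} Wh per viewing hour")
    # -> ~102 Wh/h: the display dominates, consistent with the
    #    Carbon Trust breakdown quoted upthread.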
Neither is comparing text output to streaming video
How many tokens do you use a day?
https://www.youtube.com/results?search_query=funny+3d+animal...
(That's just one genre of brainrot I came across recently. I also had my front page flooded with monkey-themed AI slop because someone in my household watched animal documentaries. Thanks algorithm!)
I doubt Youtube is running on as many data centers as all Google GenAI projects are (with GenAI probably greatly outnumbering Youtube, and the trend not moving in Youtube's favor either).
This isn't ad hom, it's a heuristic for weighting arguments. It doesn't prove whether an argument has merit or not, but if I have hundreds of arguments to think about, it helps organizing them.
[0]: https://www.tomshardware.com/tech-industry/artificial-intell...
The points you raise, literally, do not affect a thing.
Furthermore, w.r.t. the points you raised - it's a matter of scale and utility. Compared to everything that has come before, GenAI is spectacularly inefficient in terms of utility per unit of compute (however you might want to define these). There hasn't been a tangible net good for society that has come from it, and I doubt there will be. The eagerness and will to throw money and resources at this surpasses the crypto mania, which was just as worthless.
Even if you consider Rob a hypocrite, he isn't alone in his frustration and anger at the degradation of the promise of Open Culture.
In reality what they do is pay "carbon credits" (money) to some random dude that takes the money and does nothing with it. The entire carbon credit economy is bullshit.
Very similar to how putting recyclables in a different color bin doesn't do shit for the environment in practice.
(NB: I am currently working in AI, and have previously worked in adtech. I'm not claiming to be above the fray in any way.)
The amount of “he’s not allowed to have an opinion because” in this thread is exhausting. Nothing stands up to the purity test.
Obviously now it is mostly the latter and minimally the former. What capitalism giveth, it taketh away. (Or: Capitalism without good market design that causes multiple competitors in every market doesn't work.)
If he is currently at Google: congratulations on this principled stance, he deserves a lot of respect.
The overall resource efficiency of GenAI is abysmal.
You can probably serve 100x more Google Search queries with the same resources you'd use for Google Gemini queries (like for like, Google Search queries can be cached, too).
> Nope, you can't, and it takes a simple Gemini query to find out more about the actual x if you are interested in it. (closer to 3, last time I checked, which rounds to 0, specially considering the clicks you save when using the LLM)
Why would you lie: https://imgur.com/a/1AEIQzI ???
For those that don't want to see the Gemini answer screenshot, best case scenario 10x, worst case scenario 100x, definitely not "3x that rounds to 0x", or to put it in Gemini's words:
> Summary
> Right now, asking Gemini a question is roughly the environmental equivalent of running a standard 60-watt lightbulb for a few minutes, whereas a Google Search is like a momentary flicker. The industry is racing to make AI as efficient as Search, but for now, it remains a luxury resource.
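Converting that analogy into watt-hours, as a rough sketch: the 60 W bulb and "a few minutes" come from the quoted answer, and the ~0.3 Wh per classic search is the figure Google itself published back in 2009. Both should be treated as loose estimates.

    # Put the quoted analogy into numbers: a 60 W bulb for "a few
    # minutes" per Gemini prompt vs. Google's old 0.3 Wh-per-search
    # figure. All inputs are rough.

    bulb_w = 60.0
    minutes = 3.0                         # "a few minutes"
    gemini_wh = bulb_w * minutes / 60.0   # 3.0 Wh per prompt
    search_wh = 0.3                       # per classic search (2009 figure)

    print(f"ratio: {gemini_wh / search_wh:.0f}x")   # -> 10x
    # The 100x end of the range above corresponds to heavier prompts
    # and/or a cheaper per-search figure; either way the ratio is
    # nowhere near "3x that rounds to 0".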
The reason why it all rounds to 0 is that the Google search will not give you an answer. It gives you a list of web pages that you then need to visit, often more than just one of them, generating more requests. It also demands more time of you, the human, whose cumulative energy expenditure is quite significant, and which you could have invested in other things rather than doing by hand what the LLM would have done.
Yes, Google Search is raw info. Yes, Google Search quality is degrading currently.
But Gemini can also hallucinate. And its answers can just be flat out wrong because it comes from the same raw data (yes, it has cross checks and it "thinks", but it's far from infallible).
Also, the comparison of human energy usage with GenAI energy usage is super ridiculous :-)))
Animal intelligence (including human intelligence) is one of the most energy-efficient things on this planet, honed by billions of years of cut-throat (literally!) evolution. You can argue about time "wasted" analysing search results (which, BTW, generally makes us smarter and better informed...), but energy-wise, the brain of the average human uses about as much energy as an incandescent light bulb to provide general intelligence (and it does 100 other things at the same time).
Talking about "condescending":
> super ridiculous :-)))
It's not the energy-efficient animal intelligence that got us here, but a lot of completely inefficient human years to begin with: first to keep us alive, and then to give us primary and advanced education and our first experiences, to become somewhat productive human beings. This is the capex of making a human, and it's significant – especially since we will soon die.
This capex exists in LLMs but rounds to zero, because one model will be used for quadrillions of tokens or more. In you or me, however, it does not round to zero, because the number of tokens we produce rounds to zero. To compete on productivity, the tokens we produce therefore need to be vastly better. If you think you are doing the smart thing by using them on compiling Google searches, you are simply bad at math.
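The amortization point is easy to illustrate with a back-of-the-envelope calculation. A sketch with assumed, order-of-magnitude inputs; neither number comes from the thread.

    # Amortize training energy over a model's lifetime token output.
    # Both inputs are assumptions for illustration only.

    training_kwh = 5e7        # ~50 GWh for a frontier training run (assumed)
    tokens_served = 1e15      # ~a quadrillion tokens served (assumed)

    wh_per_token = training_kwh * 1000 / tokens_served
    print(f"{wh_per_token:.0e} Wh of training energy per served token")
    # -> 5e-05 Wh/token, a small fraction of the per-token inference
    #    cost, which is the "capex rounds to zero" point above.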
I hate the way people get angry about what media and social media discourse prompts them to get angry about instead of thinking about it. It’s like right wingers raging about immigration when they’re really angry about rent and housing costs or low wages.
His anger is ineffective and misdirected because he fails to understand why this happened: economics and convenience.
It’s economics because software is expensive to produce and people only pay for it when it’s hosted. “Free” (both from open source and VC funded service dumping) killed personal computing by making it impossible to fund the creation of PC software. Piracy culture played a role too, though I think the former things had a larger impact.
It’s convenience because PC operating systems suck. Software being in the cloud means “I don’t have to fiddle with it.” The vast majority of people hate fiddling with IT and are happy to make that someone else’s problem. PC OSes and especially open source never understood this and never did the work to make their OSes much easier to use or to make software distribution and updating completely transparent and painless.
There’s more but that’s the gist of it.
That being said, Google is one of the companies that helped kill personal computing long before AI.
No different than a CEO telling his secretary to send an anniversary gift to his wife.
If I put my car in neutral and push it down a hill, I’m responsible for whatever happens.
> How can you be grateful enough to want to send someone such a letter but not grateful enough to write one?
Answer according to your definitions: false premise, the author (the person who set up the LLM loops) was not grateful enough to want to send such a letter.
What a moronic waste of resources. Random act of kindness? How low is the bar if you consider a random email an act of kindness? Stupid shit. They could at least have instructed the agents to work on a useful task like those parroted by Altman et al., e.g. finding a cure for cancer, solving poverty, solving fusion.
Also, LLMs don't and can't "want" anything. They also don't "know" anything, so they can't understand what "kindness" is.
Why do people still think software has any agency at all?
They’ve clearly bought too much into AI hype if they thought telling the agent to “do good” would work. The result was obviously pissing off Rob Pike to no end. They should stop it.
JFC this makes me want to vomit
Welcome to 2025.
https://openai.com/index/superhuman/
There's this old joke about two economists walking through the forest...
It's preying on creators who feel their contributions are not recognized enough.
Out of all the letters, at least some of the contributors will feel good about it, and share it on social media, hopefully saying something good about it because it reaffirms them.
It's a marketing stunt, meaningless.
> hopefully saying something good about
I used AI to write a thank-you to a non-English-speaking relative.
A person struggling with dementia can use AI to help remember the words they lost.
These kinds of messages read to me like people with superiority complexes. We get that you don't need AI to help you write a letter. For the rest of us, it allows us to improve our writing, can be a creative partner, can help us express our own ideas, and obviously loads of other applications.
I know it is scary and upsetting in some ways, and I agree just telling an AI 'write my thank you letter for me' is pretty shitty. But it can also enable beautiful things that were never before possible. People are capable of seeing which is which.
You can achieve these things, but this is a way to not do the work, by copying from people who did do the work, giving them zero credit.
(As an aside, exposing people with dementia to a hallucinating robot is cruelty on an unfathomable level.)
The messaging from AI companies is "we're going to cure cancer" and "you're going to live to be 150 years old" (I don't believe these claims!). The messaging should be "everything will be cheaper" (but this hasn't come true yet!).
> To the others: I apologize to the world at large for my inadvertent, naive if minor role in enabling this assault.
this is my position too, I regret every single piece of open source software I ever produced
and I will produce no more
The Open Source movement has been a gigantic boon to the whole of computing, and it would be a terrible shame to lose that as a knee-jerk reaction to genAI
it's not
the parasites can't train their shitty "AI" if they don't have anything to train it on
It will however reduce the positive impact your open source contributions have on the world to 0.
I don't understand the ethical framework for this decision at all.
If bringing fire to a species lights and warms them, but also gives some members of that species the means and incentive to burn everything down for good, you have every ethical freedom to ponder whether you contribute to this fire or not.
I'm not surprised that you don't understand ethics.
this is precisely the idea
add into that the rise of vibe-coding, and that should help accelerate model collapse
everyone that cares about quality of software should immediately stop contributing to open source
I see this as doing good at scale, and thus giving up on its inherent value is most definitely throwing the baby out with the bathwater.
All the FAANGs have the ability to build all the open source tools they consume internally. Why give it to them for free and not have the expectation that they'll contribute something back?
I would never have imagined things turning out this way, and yet, here we are.
Rather, I think this is, again, a textbook example of what governments and taxation is for — tax the people taking advantage of the externalities, to pay the people producing them.
Are there any proposals to nail down an open source license which would explicitly exclude use with AI systems and companies?
Even if you could construct such a license, it wouldn't be OSI open source because it would discriminate based on field of endeavor.
And it would inevitably catch benevolent behavior that is AI-related in its net. That's because these terms are ill-defined and people use them very sloppily. There is no agreed-upon definition for something like gen AI or even AI.
The fact that they could litigate you into oblivion doesn't make it acceptable.
But for most open source licenses, that example would be within bounds. The grandparent comment objected to not respecting the license.
Most companies trying to sell open-source software probably lose more business if the software ends up in the Debian/Ubuntu repository (and the packaging/system integration is not completely abysmal) than when some cloud provider starts offering it as a service.
Because it is "transformative" and therefore "fair" use.
As an analogy, you can’t enforce a “license” that anyone that opens your GitHub repo and looks at any .cpp file owes you $1,000,000.
Might be because most of us got/get paid well enough that this philosophy works out, or because our industry is so young, or because people writing code share good values.
It never worried me that a corp would make money out of some code I wrote, and it still doesn't. After all, I'm able to write code because I get paid well writing code, which I do well because of open source. Companies have always benefited from open source code, attributed or not.
Now I use it to write more code.
I would be fine, though, with pushing for laws that force models to be opened up after x years, but I would just prefer the open source community coming together and creating better open models overall.
But AI is also the ultimate meat grinder, there's no yours or theirs in the final dish, it's just meat.
And open source licenses are practically unenforceable for an AI system, unless you can maybe get it to cough up verbatim code from its training data.
At the same time, we all know they're not going anywhere, they're here to stay.
I'm personally not against them, they're very useful obviously, but I do have mixed or mostly negative feelings on how they got their training data.
Some Shareware used to be individually licensed with the name of the licensee prominently visible, so if you had got an illegal copy you'd be able to see whose licensed copy it was that had been copied.
I wonder if something based on that idea of personal responsibility for your copy could be adapted to source code. If you wanted to contribute to a piece of software, you could ask a contributor and then get a personally licensed copy of the source code with your name in every source file... but I don't know where to take it from there. Has there ever been a system like that one could take inspiration from?
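As a toy sketch of how the shareware idea might be adapted to source distribution (entirely hypothetical: the header format and the stamping script below are made up for illustration, not an existing system):

    # Hypothetical: stamp each distributed copy of a source tree with
    # the licensee's name so a leaked copy is traceable to its origin.
    from pathlib import Path

    HEADER = "// Licensed copy for {name} (licensee id {uid}). Do not redistribute.\n"

    def stamp_license(src_dir: str, name: str, uid: str) -> None:
        for path in Path(src_dir).rglob("*.go"):
            path.write_text(HEADER.format(name=name, uid=uid) + path.read_text())

    # stamp_license("dist/src", "Jane Hacker", "4f2a9c")

Of course a plain header is trivially stripped; the hard part the comment gestures at is a watermark that survives reformatting (identifier choices, ordering, whitespace), which remains an open problem.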
Thanks for your contributions so far but this won't change anything.
If you want to have a positive impact on this matter, it's better to pressure the government(s) to prevent GenAI companies from using content they don't have a license for, so they behave like any other business that came before them.
which they don't
and no self-serving sophistry about "it's transformative fair use" counts as respecting the license
Characterizing the discussion behind this as "sophistry" is a fundamentally unserious take.
For a serious take, I recommend reading the copyright office's 100 plus page document that they released in May. It makes it clear that there are a bunch of cases that are non-transformative, particularly when they affect the market for the original work and compete with it. But there's also clearly cases that are transformative when no such competition exists, and the training material was obtained legally.
https://www.copyright.gov/ai/Copyright-and-Artificial-Intell...
I'm not particularly sympathetic to voices on HN that attempt to remove all nuance from this discussion. It's a challenging enough topic as is.
"The only thing that matters is the end result, it's no different than a compiler!", they say as someone with no experience dumps giant PRs of horrific vibe code for those of us that still know what we're doing to review.
Nah, don't do that. Produce shitloads of it using the very same LLM tools that ripped you off, but license it under the GPL.
If they're going to thief GPL software, least we can do is thief it back.
Most objections like yours are couched in language about principles, but ultimately seem to be about ego. That's not always bad, but I'm not sure why it should be compelling compared to the public good that these systems might ultimately enable.
did he not know what business google was in?
I fixed it... Sorry, I had to, the quote template was simply too good.
When I go to the grocery store, I prefer to go through the checkout lines, rather than the scan-it-yourself lines. Yeah, I pay the same amount of money. Yeah, I may get through the scan-it-yourself line faster.
But the checker can smile at me. Or whine with me about the weather.
Look, I'm an introvert. I spend a lot of my time wanting people to go away and leave me alone. But I love little, short moments of human connection - when you connect with someone not as someone checking your groceries, but as someone. I may get that with the checker, depending on how tired they are, but I'm guaranteed not to get it with the self-checkout machine.
An email from an AI is the same. Yeah, it put words on the paper. But there's nobody there, and it comes through somehow. There's no heart in it.
AI may be a useful technology. I still don't want to talk to it.
We were contributing to natural resource destruction in exchange for salary and GDP growth before GenAI, and we’re doing the same after. The idea that this has somehow 10x’d resource consumption or emissions or anything is incorrect. Every single work trip that requires you to get on a plane is many orders of magnitude more harmful.
We’ve been compromising on those morals for our whole career. The needle moved just a little bit, and suddenly everyone’s harm thresholds have been crossed?
They expect you to use GenAI just like they expected accountants to learn Excel when it came out. This is the job, it has always been the job.
I’m not an AI apologist. I avoid it for many things. I just find this sudden moral outrage by tech workers to be quite intellectually lazy and revisionist about what it is we were all doing just a few years ago.
Oh wow, an LLM was queried to thank major contributors to computing, I'm so glad he's grateful.
Cheap marketing, not much else.
https://news.ycombinator.com/item?id=46389444
397 points 9 hours ago | 349 comments
Probably hit the flamewar filter.
It seems like he's upset about AI (same), and decided to post angry tweets about it (been there, done that), and I guess people are excited to see someone respected express an opinion they share (not same)?
Does "Goes Nuclear" means "used the F word"? This doesn't seem to add anything meaningful, thoughtful, or insightful.
Meanwhile, GPT5.1 is trying to contact people at K-5 after school programs in Colorado for some reason I can’t discern. Welp, 2026 is going to be a weird year.
I think one of the biggest divides between pro/anti AI is the type of ideal society that we wish to see built.
His rant reads as deeply human. I don't think that's something to apologize for.
But...just to make sure that this is not AI generated too.
Still, I'm a bit surprised he overreacted and didn't manage to keep his cool.
If so, I wonder what his views are on Google and their active development of Google Gemini.
He should leave Google then.
All of a sudden copyleft may be the only licences actually able to force models to account, hopefully with huge fines and/or forcibly open sourcing any code they emit (which would effectively kill them). And I'm not so pessimistic that this won't get used in huge court cases because the available penalties are enormous given these models' financial resources.
I really don't know if in twenty years the zeitgeist will see us as primitives that didn't understand that the camera is stealing our souls with each picture, or as primitives who had a bizarre superstition about cameras stealing our souls.
An easy way to answer this question, at least on a preliminary basis, is to ask how many times in the past the ludds have been right in the long run. About anything, from cameras to looms to machine tools to computers in general.
Then, ask what's different this time.
https://bsky.app/profile/robpike.io
Does anybody know if Bluesky blocks people without an account by default, or if this user intentionally set it this way?
What is the point of blocking access? Mastodon doesn't do that. This reminds me of Twitter or Instagram, using sleazy techniques to get people to create accounts.
It's the latter. You can use an app view that ignores this: https://anartia.kelinci.net/robpike.io
Unbridled business and capitalism push humanity into slavery, serving the tech monsters, under the disguise of progress.
The existence of AI hasn’t changed anything, it’s just that people, communities, governments, nation states, etc. have had a mindless approach to thinking about living and life, in general. People work to provide the means to reproduce, and those who’re born just do the same. The point of their life is what exactly? Their existence is just a reality to deal with, and so all of society has to cater to the fact of their existence by providing them with the means to live? There are many frameworks which give meaning to life, and most of them are dangerously flawed.
The top-down approach is sometimes clear about what it wants and what society should do while restricting autonomy and agency. For example, no one in North Korea is confused about what they have to do, how they do it, or who will “take care” of them. Societies with more individual autonomy and agency by their nature can create unavoidable conditions where people can fall through the cracks: for example, getting addicted to drugs, having unmanaged mental illnesses, becoming homeless, and so on. Some religions like Islam give a pretty clear idea of how you should spend your time because the point of your existence is to worship God, so pray five times a day, and do everything which fulfills that purpose; here, many confuse worshiping God with adhering to religious doctrines, but God is absent from religion in many places. Religious frameworks are often misleading for the mindless.
Capitalism isn’t the problem, either. We could wake up tomorrow, and society may have decided to organize itself around playing e-sports. Everyone provides some kind of activity to support this, even if they’re not a player themselves. No AI allowed because the human element creates a better environment for uncertainty, and therefore gambling. The problem is that there are no discussions about the point of doing all of this. The closest we come to addressing “the point” is discussing a post-work society, but even that is not hitting the mark.
My humble observation is that humans are distinct and unique in their cognitive abilities from everything else which we know to exist. If humans can create AI, what else can they do? Therefore, people, communities, governments, and nation states have distinct responsibilities and duties at their respective levels. This doesn't have anything to do with being empathetic, altruistic, or having peace on Earth.
The point should be knowledge acquisition, scientific discovery, creating and developing magic. But ultimately all of that serves to answer questions about nature of existence, its truth and therefore our own.
“But where the danger is, also grows the saving power.”
Remember this when talking about their actions. People live and die their own lives, not just as small parts in a large 'river of society'. Yes, generations after them benefited from industrialisation, but the individuals living at that time fought for their lives.
There's certainly great wealth for ~1000 billionaires, but where I am nobody I know has healthcare, or owns a house for example.
If your argument is that we could be poorer, that's not really productive or useful for people that are struggling now.
This has to be the ultimate trolling: like it was unsure what their personalities were like, so it trolls them and records their responses for more training.
I don’t know if this is a publicity stunt or if the AI models are in a loop glazing each other and decided to send these emails.
About energy: keep in mind that US air conditioners alone use at least 3x the energy of all the data centers in the world (for AI and for other uses: AI should be about 10% of the whole). Apparently nobody cares to set a reasonable temperature of 22 instead of 18 degrees, but the energy used by AI is, for many, somehow different.
have you considered the possibility that it is your position that's incorrect?
The Greek philosophers were much more outspoken than we are now.
The link in the first submission can be changed if needed, and the flamewar detector turned off, surely? [dupe]?
https://news.ycombinator.com/item?id=46389444
https://hnrankings.info/46389444/
> You can't both take a Google salary and harp on about the societal impact of software.
You are indeed a useful tool.
You've got to feed a cow for a year and a half until it's slaughtered. That's a whole lot of input for a cow's worth of meat output.
Come to the american south and ask them to try tempeh. They'll look at you like you asked them to eat roaches.
It's a cultural thing.
It's healthy that people have different takes.
wrong
> OCR
less accurate and efficient than existing solutions, only measures well against other LLMs
> tts, stt
worse
> language translation
maybe
> Alfalfa uses ~40× to 150× more water than all U.S. data centers combined. I don't see anyone going nuclear over alfalfa.
By the same logic, I could say that you should redirect your alfalfa woes to something like the Ukraine war.
And also, I didn't claim alfalfa farming to be raping the planet or blowing up society. Nor did I say fuck you to all of the alfalfa farmers.
I should be (and I am) more concerned with the Ukrainian war than alfalfa. That is very reasonable logic.
It would be a shame if the discourse became so emotionally heated that software people felt obliged to pick a side. Rob Pike is of course entitled to feel as he does, but I hope we don’t get to a situation where we all feel obliged to have such strong feelings about it.
Edit: It seems this comment has already received a number of upvotes and downvotes – apparently the same number of each, at the time of writing – which I fear indicates we are already becoming rather polarised on this issue. I am sorry to see that.
My own results show that you need fairly strong theoretical knowledge and practical experience to get the maximal impact — especially for larger synthesis. Which makes sense: to have this software, not that software, the specification needs to live somewhere.
I am getting a little bored of hearing about how people don’t like LLM content, but meh. SDEs are hardly the worst on that front, either. They’re quite placid compared to the absolute seething by artist friends of mine.
Yes, there has to be a discussion about this, and yeah, he might generally have the right mindset, but let's be honest here: none of them would have developed any of it for free.
We are all slaves to capitalism,
and this is where my point comes in: extremely fast and massive automation around the globe might be the only thing pushing us close enough to the edge that we all accept capitalism's end.
And yes, I think it is still massively beneficial that my open source code helped create something that allows researchers to write better code more easily and quickly, pushing humanity forward. Or that enables more people overall to gain access to writing code, or to what writing code produces: tools, etc.
@Rob: it's spam, that's it. Get over it; you are rich, and your riches did not come out of thin air.
I genuinely don't understand why such people are so surprised and outraged. Did you really think that if we ever get something even remotely resembling human-like AI, it would not be used to write and send e-mails (including spam), or to produce novels/pics/videos/music or whatever the Luddites are mad about? Or that people would not feed it public copyrighted data, even though no one really gives a shit about copyright in the real world? 99% of people have pirated content at least once in their lives.
The pros of any remotely human-like AI will still far outweigh such cons.
One person I know is developing an AI tool with 1000+ stars on GitHub, yet in private they absolutely hate AI and feel the same way as Rob.
Maybe it's because I just saw Avatar 3, but I honestly couldn't be more disgusted by the direction we're going with AI.
I would love to be able to say how I really feel at work, but disliking AI right now is the short path to the unemployment line.
If AI was so good, you would think we could give people a choice whether or not to use it. And you would think it would make such an obvious difference, that everyone would choose to use it and keep using it. Instead, I can't open any app or website without multiple pop-ups begging me to use AI features. Can't send an email, or do a Google search. Can't post to social media, can't take a picture on my phone without it begging me to use an AI filter. Can't go to the gallery app without it begging me to let it use AI to group the photos into useless albums that I don't want.
The more you see under the hood, the more disgusting it is. I yearn for the old days when developers did tight, efficient work, creating bespoke, artistic software in spite of hardware limitations.
Not only is all of that gone, nothing of value has replaced it. My DOS computer was snappier than my garbage Win11 machine that's stuffed to the gills with AI telemetry.
The only realistic way forward is trying to make energy generation greener (renewables, nuclear, better efficiency), not fighting to decrease human consumption.
This being said, I think that the alternatives are wishful thinking. Better efficiency is often counterproductive, as reducing the energy cost of something by, say, half, can lead to its use being more than doubled. It only helps to increase the efficiency of things for which there is no latent demand, basically.
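As a toy illustration of that rebound effect, with invented numbers (a minimal sketch, not a model):

    # Jevons-style rebound: efficiency doubles, but demand more than
    # doubles, so total energy use still goes up. Figures are made up.
    energy_per_unit = 1.0
    units = 100
    print(energy_per_unit * units)   # 100.0 total energy before

    energy_per_unit /= 2             # efficiency improvement
    units = 240                      # hypothetical demand response
    print(energy_per_unit * units)   # 120.0 total energy after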
And renewables and nuclear are certainly nicer than coal, but every energy source can lead to massive problems if it is overexploited. For instance, unfettered production of fusion energy would eventually create enough waste heat to cause climate change directly. Overexploitation of renewables such as solar would also cause climate change by redirecting the energy that heats the planet. These may seem like ridiculous concerns, but you have to look at the pattern here. There is no upper bound whatsoever to the energy we would consume if it was free. If energy is cheap enough, we will overexploit, and ludicrous things will happen as a result.
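For a sense of scale on the waste-heat point, a back-of-envelope sketch (the constants are rough public figures I'm supplying, not anything from this thread):

    # How much would consumption have to grow before direct waste heat
    # rivals today's greenhouse forcing? Order of magnitude only.
    ABSORBED_SOLAR_W = 1.2e17  # ~240 W/m^2 absorbed over Earth's surface
    HUMAN_ENERGY_W   = 2.0e13  # ~20 TW of current primary energy use
    CO2_FORCING_W    = 1.0e15  # ~2 W/m^2 of forcing times 5.1e14 m^2

    print(HUMAN_ENERGY_W / ABSORBED_SOLAR_W)  # ~0.00017 of absorbed sunlight today
    print(CO2_FORCING_W / HUMAN_ENERGY_W)     # ~50x growth to match CO2 forcing

On those rough numbers, about fifty-fold growth, only a handful of doublings, would make waste heat alone a first-order climate term.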
Again, I actually agree with you that advocating for degrowth is hopeless. But I don't think alternative ways forward such as what you propose will actually work.
Using Claude Code for an hour would be a more realistic comparison, if they really wanted to compare with video streaming. The reality is far less appealing.
I have a hard time believing that streaming data from memory over a network can be so energy demanding, there's little computation involved.
The European average is 56 grams of CO2 emissions per hour of video streaming. For comparison: driving 100 meters causes 22 grams of CO2.
https://www.ndc-garbe.com/data-center-how-much-energy-does-a...
80 percent of the electricity consumption on the Internet is caused by streaming services
Telekom needs the equivalent of 91 watts for a gigabyte of data transmission.
An hour of video streaming in 4K quality needs more than three times as much energy as an HD stream, according to the Borderstep Institute. On a 65-inch TV, it causes 610 grams of CO2 per hour.
https://www.handelsblatt.com/unternehmen/it-medien/netflix-d...
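Taking the quoted figures at face value, a quick conversion (my arithmetic, using only the numbers above):

    # 56 g CO2/h of streaming vs. 22 g CO2 per 100 m of driving, plus the
    # 610 g/h figure for 4K on a 65-inch TV, all as quoted above.
    print(56.0 / 22.0 * 100)   # ~255 m of driving per hour of average streaming
    print(610.0 / 22.0 * 100)  # ~2773 m of driving per hour of 4K on a big TV

So an hour of average streaming is comparable to a few hundred metres of driving, and even the big-TV 4K case stays under 3 km.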
It's the devices themselves that contribute the most to CO2 emissions. The streaming servers themselves are nothing like the problem the AI data centres are.
It's probably watts per gigabyte per unit of time, or watt-hours (i.e. energy) per gigabyte; otherwise this doesn't make dimensional sense. And 91 W per Gb/s (or even GB/s) is a joke, while 91 Wh for a gigabyte (let alone a gigabit) of data is ridiculous.
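A quick check of what each reading of "91 watts for a gigabyte" would imply, assuming a ~7 GB/hour 4K stream (my assumption, not Telekom's):

    # Two readings of "91 watts for a gigabyte".
    gb_per_hour = 7.0                 # assumed 4K stream rate

    # Reading 1: 91 Wh of energy per GB transferred.
    print(91.0 * gb_per_hour)         # ~637 W sustained per viewer: absurdly high

    # Reading 2: 91 W of power per GB/s of throughput.
    print(91.0 * gb_per_hour / 3600)  # ~0.18 W per viewer: absurdly low end-to-end

Neither reading yields a sensible per-viewer number, which is the point.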
Also, don't trust anything Telekom says. They're cunts that double dip on both peering and subscriber traffic and charge out of the ass for both (10x on the ISP side compared to competitors), coming up with bullshit excuses like "oh, streaming services are sooo expensive for us" (of course they are if you refuse to let CDNs plop edge cache nodes into your infra in a settlement-free agreement like everyone else does). They're commonly understood to be the reason why Internet access in Germany is so shitty and expensive compared to neighbouring countries.
The ecology argument just seems self-defeating for tech nerds. We aren't exactly planting trees out here.
The point is the resource consumption to what end.
And that end is, frankly, replacing humans. It's gonna be tragic (or is it… given how terrible humans are to each other, and let's not even get into how monstrous we are to non-human animals) as the world enters a collective sense of worthlessness once AI makes us realize that we really serve no purpose.
You could say “shoot half of everyone in the head; people will adapt” and it be equally true. You’re warped.
If you tried the same attitude with Netflix or Instagram or TikTok or sites like that, you’d get more opposition.
The exception is doing so from more of an underdog position: hating on YouTube for how it treats its content creators, on the other hand, is quite trendy again.
Sources are very well cited if you want to follow them through. I linked this and not the original source because it's likely where the root comment got this argument from.
Yeah, I'll not waste my time reading that.
Leaving the source to someone else
The bigger issue everyone should be focusing on is growing hypocrisy, and overly puritan viewpoints whose holders think they are holier and more righteous than everyone else. That's the real plague.
If anything, the Chinese approach looks more responsible than that of the current US regime.
First to a total surveillance state? Because that is a major driving force in China: getting automated control over its own population.
I don't think either of those are particularly valuable to the society I'd like to see us build.
We're already incredibly dialed in and efficient at killing people. I don't think society at large reaps the benefits if we get even better at it.
Of course we do. We don't live inside some game theoretic fever dream.
Give me more money now.