It is an unusual list. Along with a list of AI websites, it also blocks a handful of Instagram, X and Pinterest profiles. It also blocks a number of specific products on Amazon, such as a colouring book that was presumably generated with AI.
This kind of reminds me of Steam, where indie devs need to loudly declare that they are not using AI or face backlash. Meanwhile a significant percentage of devs are using GenAI for better tab completion, better search or generating tests, all things that do not negatively impact the end-user experience.
I think AI as a tool versus AI as a product are different things. Even in coding you can see it with tab completion/agents vs vibe coding. It's a spectrum, and people are trying to find their personal divider on it. Additionally, there are those out there who decry anything involving AI as heresy (no thinking machines!).
It's an old saying. The ability of submarines to move through water has nothing to do with swimming, and an AI's ability to generate content has nothing to do with thinking.
The quote (from Dijkstra) is that asking whether machines think is as uninteresting as asking whether submarines swim. He's not saying machines don't think, he's saying it's a pointless thing to have an opinion about. That is, an argument about whether AIs think is an argument about word usage, not about AIs.
Are you hitting tab because it's what you were about to type, or did it "generate" something you don't understand? Seems like a personal distinction to me.
Even if GenAI is helpful, it's okay to morally reject using it. There are plenty of things that give you an advantage in your career but are morally wrong. Complaints include putting people out of jobs, inflating a financial bubble, filling GitHub and the internet in general with AI slop, using tons of energy, and driving up DRAM and GPU prices.
And it's not even that apparent how much GenAI improves overall development speed beyond making toy apps. Hallucinations, bugs, misread intentions, and getting stuck in loops waste your time on debugging and testing, and it still doesn't help with the actual hard problems of dev work. Even the examples you mention are fallible.
On top of all that, is AI even profitable? It might be fine now, but what happens when it's priced to reflect its actual costs? Anecdotally it already feels like models are being quantised and dumbed down: I find them noticeably less useful, and I'm hitting usage limits quicker than before. Once the free ride is over, only rich people from rich countries will have access to them, and of course only big tech companies control the models. It could be peer pressure, but many people genuinely object to AI across the board: you can't get the useful parts without the rest of it.
Given the political comments in what's supposed to be a filter, and how everything is labelled with "shit", like "Pinterest shit", I bet the author had a personal political disagreement with those accounts.
The list is also too specific to be useful in some cases. Is it really important to you to add 12 entries for specific Amazon products, like `duckduckgo.com,bing.com##a[href*="amazon.com/Rabbit-Coloring-Book-Rabbits-Lovers/dp/B0CV43GKGZ"]:upward(li):remove()`?
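For anyone not fluent in the syntax, here is that rule broken down; the annotations are my reading of standard uBlock Origin cosmetic-filter syntax, not anything from the list's authors:

```
! The rule from the list, annotated (uBlock Origin cosmetic filter syntax)
! duckduckgo.com,bing.com  -> the rule only applies on these two search engines
! ##a[href*="..."]         -> select <a> links whose href contains that Amazon product path
! :upward(li)              -> walk up the DOM to the enclosing <li> search result
! :remove()                -> delete that whole result node instead of merely hiding it
duckduckgo.com,bing.com##a[href*="amazon.com/Rabbit-Coloring-Book-Rabbits-Lovers/dp/B0CV43GKGZ"]:upward(li):remove()
```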
You're right, it's about paying customers. No one is going to waste time campaigning against a $1.99 Squid Game knockoff on Steam if it uses AI (many are just Unity asset flips already).
The backlash I've seen is against large studios leaving AI slop in $60+ games. Sure, it might just be some background textures or items at the moment, but the reasoning is that if studios know they can get away with it, quality decline is inevitable. I tend to agree. AI tooling is useful, but it can't come at the expense of product quality.
If a "C+++" was created that was so efficient that it would allow teams to be smaller and achieve the same work faster, would that be anti-worker?
If an IDE had powerful, effective hotkeys and shortcuts and refactoring tools that allowed devs to be faster and more efficient, would that be anti-worker?
What part of C++ is inefficient? I can write it pretty quickly without having some cloud service hallucinate stuff.
And no, a faster way to write or refactor code is not anti-worker. Corporations gobbling up taxpayer money to build power-hungry datacenters so billionaires can replace workers is.
I don't know why people say this. I look at the front page and it's just interesting articles and blog posts on a variety of subjects. You must be either actively seeking out stuff you don't like and wasting your time hating it, or just imagining it.
Yes, it is an unpopular opinion around here, and pretty much throughout the tech world.
I think this is because most users and praisers of GenAI can only see it as a tool to improve productivity (see sibling comment). And yes, as of the end of 2025, it's becoming harder to argue that GenAI is not a productivity booster across many industries.
The vast majority of people in tech are totally missing the question of morality. Missing it, or ignoring it, or hiding it.
I agree. The goal of AI is to reduce payroll costs. It has nothing to do with IDEs or writing code or making "art". It's meant to allow the owning class to pay the working class less, nothing more. What it *can* do is irrelevant in the face of what it is for.
You've pretty much described the "what it is for" for a large percentage of industrial inventions. Clearly, however, the world would be worse off without many of them.
Surprisingly neurotic files full of strange comments and odd blocking choices[1]. Feels very much like the pet project of a few wannabe activist teenagers in a Discord chatroom with too much time on their hands.
[1] Numerous individual pages on places like digital storefronts and social media sites appear in the blocklist. Do the people behind this think they can create a list of every single AI-adjacent thing on the entire internet?
99% of the "main" list entries would be made redundant by simply blocking all .ai domains.
> Surprisingly neurotic files full of strange comments
1. Have you looked at block lists before?
2. Do you have a specific example of what in these blocklists is strange/neurotic? I swear I've skimmed all of them a few times now and although I won't be using them, I'm struggling to understand what's odd about them.
This reminds me of the final scene in The Conversation, when Caul sits alone in his deconstructed apartment after becoming convinced he is being eavesdropped on - a victim of his own paranoia.
Yeah, I hoped it would blacklist all those spammy autogenerated SEO sites from search results, but it looks more like a vendetta against anything AI or machine learning in general.
It's a so-called hot-button topic and unfortunately HN isn't quite the paragon of pragmatic technical discussion that it was in the past. C'est la vie.
The reactionary response against the new technology, even on HN, is pretty strong. If I ran an AI company, I'd take it as a signal that I'm doing something right: people see how powerful our product is, and they care about it. Hate and love aren't too different; we'll end up with plenty of dedicated users who will have forgotten they ever hated it.
First they laugh at you
Then they tell you it violates the orthodoxy
Then they think they knew it all along
“But the fact that some geniuses were laughed at does not imply that all who are laughed at are geniuses. They laughed at Columbus, they laughed at Fulton, they laughed at the Wright brothers. But they also laughed at Bozo the Clown.”
Yes, and so are the worst. The problem is that 95% of ideas that sound stupid aren't secretly genius, just stupid. As Peter Thiel used to say, it's not enough to be a contrarian, that's easy; you need to be contrarian and correct.
Right, and let’s not forget that the VC game that YC plays in assumes that the vast majority of their ventures will fail.
It's way more exploitative than it gets credit for; even those who criticize VC firms aren't conveying the full scope of the issue:
Startup incubators prey on young and ambitious people’s willingness to have zero life outside of work in order to set 90% of them up to fail and make huge profits off of the 10% Airbnb-type success stories.
These VC firms have money but no talent or time of their own so they basically steal it from founders in exchange for a Hollywood or pro sports-style superstar pipe dream where most are statistically guaranteed to fail, and even those who succeed don’t keep the majority of the fruits of their labor.
These failed startup founders end up with skills that are supposedly transferable to future ventures or what have you, but I bet if someone actually tracked a lot of these people down, they'd find plenty of sob stories of early-stage founders who burned out early in their careers, with the whole startup-founder experience ending up a net negative in their lives.
If it can be generated with a prompt, it has infinite supply and finite demand. It’s literally worthless in all senses of the term.
What worries me is that it's reducing the value of actual engineering work (or good-quality art). It's like lemons in the used-car market: their existence drags down the value of the good-quality work too.
I don’t think anyone decrying the current crop of “AI” is against “thinking machines”. We’re not there yet, LLMs don’t think, despite the marketing.
A small, loud minority is screaming a lot, but actual paying customers don't care as long as the game is not trash.
> Corporations gobbling up taxpayer money to build power-hungry datacenters so billionaires can replace workers is.
Which part of this is important? If there were no taxpayer funding, would it be okay? If the power consumption were low, would it be okay?
I just want to understand what the precise issue is.
Now it's full of SBF and Scam Altman wannabes.
I was trying to use an obscure CLI tool the other day. It had almost no documentation, and one wrong argument would have bricked an expensive embedded device.
Somehow Google gave me the right arguments in its AI generated answer to my search, and it worked.
I first tried every forum post I could find, but nobody seemed to be doing exactly what I was attempting to do.
I think this is a clear and moral win for AI. I am not in a position to hire embedded development consultants for personal DIY projects.
I doubt it
Edit: On a second look the list is kind of weird. I'd love to block AI stuff but the blocklist is far, far too broad.
https://news.ycombinator.com/item?id=39771742