Why is it that whenever there is news about AI, it's either a new scam or something vile? All this harm being done to the environment, people's sanity, and lives, just so companies can pay their employees less. Great work.
Are you suggesting people shouldn't develop AI because it basically just produces unemployment and scams? As in, they should just be good people and stop, or the government should ban the development of AI?
I mean, you are clearly equating AI with unemployment and scams, which I think is a very incomplete picture. What do you think should be done in light of that?
Blaming the technology for bad human behavior seems like an error, and it's not clear that the GP made it.
People could, and likely will, also increase economic activity and flexibility and evolve how we participate in the world. The alternative would get pretty ugly pretty quickly. My pitchfork is sharp, and the powers that be prefer that it keep being used on straw.
>I mean, you are clearly equating AI with unemployment and scams, which I think is a very incomplete picture.
What else? Let me guess: slop in software, AI psychosis, environmental concerns, growing wealth inequality. And yes, maybe we can write some crappy software faster. That should cover it.
I have no suggestions on how to solve it. The only way is to watch OpenAI/Claude lose more money, and then hopefully models get cheaper or become completely useless.
I mean, from your perspective it just sounds like it should be stopped somehow, either because people collectively decide it's a waste of time or something. I guess I'm very surprised to hear someone thinks it brings no value. I can relate to some of the negative outcomes, but not to see any significant value seems kind of crazy to me.
Yes, but I am talking about slop in all the software I use, not just what I make. Every app is trying to do everything. Everywhere there's a summarise button or some cobbled-together AI-generated feature. Software continuously fails, and companies provide no support, as that is all automated to save money.
If they reported on heart disease, people might get healthy. But there's an instinctual understanding that people dying all over just improves journalists' odds in our society. Keep them anxious with crime stats!
Just about all of the good news, once you read a little more, is due to traditional ML, and nearly all of it is in the medical imaging field. Then OpenAI tries to take credit and say, "Oh look, AI is doing that too," which is not true. Go ahead and read deeper into any of those stories and you will quickly find LLMs haven't done much good.
They helped me make some damn good brownies and be a better parent in the last month. Maybe I should write a blog about all of the great things LLMs are doing for me.
Oh yeah, and one rewrote the 7-minute-workout app for me without the porn ads before and after the workout so I can enjoy working out with one of my kids.
What makes you think you couldn't have made brownies without LLMs? Go to Google and just scroll 20cm, and there it is: a recipe, the same one ChatGPT gave you. I won't comment on rewriting an app, because LLMs can definitely do that.
Because, "Why are the edges burnt and the middle is too soft? How are these supposed to actually look? I used a clear 8"x8" pan, and I'm in Utah, which is at 4,600 ft elevation"
Oh, it's a higher elevation, so I need to adjust the recipe and lower the temperature. Oh, after it looked at the picture: the top is supposed to be crackly and shiny. Now I know what to look for. It's okay if it's a little soft while still in the oven, because it'll firm up after taking them out? Great!
Another one, "Uh oh, I don't have Dutch-processed baking power. Can I still use the normal stuff for this recipe?" Yeah, Google can answer that, but so can an LLM.
What makes you think you couldn't have made brownies without Google? Just go to your local library and grab the first baking cookbook you find. And there it is: a better recipe than Google's, without all the SEO blog spam.
To avoid my comment just being snarky: I agree that comparing Google to LLMs is different from comparing the library to Google... but I still hope you can acknowledge that LLMs can do a lot more than Google, such as answering questions about recipe alterations or baking theory, which a simple recipe website can't or won't.
fwiw modern recipe sites are awful - you have to scroll down for literal minutes until you get to the recipe. LLMs give you the answer you want in seconds.
I’m certainly no LLM enthusiast, but pretending they are useless won’t make the issues with them go away.
I doubt this bonanza is gonna last... These chatbots, feeding from the very source that can't seem to surface quality stuff, by the way, will likely degrade just like those searches have for the last 20 years. There will be ads, there will be manipulation and deception, there will be pointless preambles, and they will spit out even more wrong instructions and unusable garbage. And on top of it all, it won't take 20 years to degrade this time; it's rather likely that it will take less than 5.
Maybe open source models will hold these accountable, or maybe they will degrade too somehow. Or maybe the world will be going through too hard a collapse for any of us to care.
BUT, notice the absolutely opposite approach to AI and Web3 on HN. Things that highlight Web3 scams are upvoted and celebrated. But AI deepfakes and scams at scale are always downvoted, flagged and minimized with a version of the comment:
“This has always been the case. AI doesn’t do anything new. This is a nothingburger, move on.”
You can probably see multiple versions in this thread or in the sibling post just next to it on the HN front page: https://news.ycombinator.com/item?id=46603535
It comes up so often as to seem systematic. Both downvoting Web3 and upvoting AI. Almost like there is brigading, or even automation.
Why?
I kept saying for years that AI has far larger downsides than Web3, because in Web3 you can only lose what you voluntarily put in, but AI can cause many, many, many people to lose their jobs, their reputations, etc., and even lives if weaponized. Web3 and blockchain can… enforce integrity?
At this point I think HN is flooded with wannabe founders who think this is "their" gold rush, and any pushback against AI is against them personally, against their enterprise, against their code. This is exactly what happens on every vibe-coding thread, every AI-adjacent thread.
> BUT, notice the absolutely opposite approach to AI and Web3 on HN. Things that highlight Web3 scams are upvoted and celebrated. But AI deepfakes and scams at scale are always downvoted, flagged and minimized…
The era of new technologies working for us rather than net against us is something we took for granted, and it's in the past. Those who'd scam or enshittify have the most power now. This new era of AI isn't unique in that, but it's a powerful force multiplier, and more for the predatory than the good.
What's worse is a significant number of folks here seem to be celebrating it. Or trivializing what makes us human. Or celebrating the death of human creativity.
What is it, do you think, that has attracted so many misanthropes into tech over the last decade?
A cultural problem too... even before AI, in recent years there's been more of a societal push toward treating it as fair game to just lie to people. Not that it didn't always happen, but it's more shameless now. Like... I don't know, just to pick one: actors pretending to be romantically involved for PR for their upcoming movie. That seems way more common than I remember in the past.
Do you have any data to back that "it is more socially acceptable to lie"? I looked a bit and could not find anything either way.
The impression can be a bias of growing up. Adults will generally teach and insist that children tell the truth. As one grows, one is less constrained and can tell many "white lies" (low-impact lies).
We do see more impact from some people (well-known people, influencers, etc.) than before because of network effects.
I think it's the speed with which it can do harm. Whatever efficiency gains we get from AI for good causes will also be seen by nefarious ones. Tools need safety mechanisms to ensure they aren't symmetrically supporting good and bad actors. If we can't sufficiently minimize the latter, the benefits the former group gains may not be worth it.
Text only, no ads, and aggressive downmodding of self-promotion.
Edit: On the other hand, here we are looking at it and talking about it. Some number of us followed links in that article. Some number of them followed those to an OnlyFans page.
Another kind of protection is Reddit and Twitter remaining alive as quarantines, rather than collapsing and having the newer, better places absorb the refugees.
Nothing is bulletproof, but more hands-on moderation tends to be better at making pragmatic judgement calls when someone is being disruptive without breaking the letter of the law, or breaks the rules in ways that take non-trivial effort to prove. That approach can only scale so far though.
Essentially, gatekeeping. Places that are hard to access without the knowledge or special software, places that are invite-only, places that need special hardware...
Or a place that can influence a captive audience. Bots have been known to play a part in convincing people of one thing over another via the comments section. No direct money to be made there but shifting opinions can lead to sales, eventually. Or prevent sales for your competitors.
Or places with a terminally uncool reputation. I'm still on Tumblr, and it's actually quite nice these days, mostly because "everyone knows" that Tumblr is passé, so all the clout-chasers, spammers, and angry political discoursers abandoned it. It's nice living under the radar.
This is a technique that will absolutely be used by those reputation management companies.
I predict that within three years we'll be discussing a story about how a celebrity hired a company to produce pictures of them doing intimate things with people, to head off the imminent release of sexual assault allegations.
There's something about the way terminology is used in this article that feels off to me.
First of all, I'm not sure it makes sense to refer to these AI-generated characters as AI 'influencers'. Did these characters actually have followers prior to these fake videos being generated in December 2025? Do they even have followers now? I don't know, maybe they did or do, but I get the impression that they are just representing influencer-ish characteristics as part of the scheme. Don't get me wrong, the last thing I want is to gatekeep such an asinine term as 'influencer'. However, just like I would not be an influencer just by posting a video acting like one, neither do AI characters get a free pass at becoming one.
Second, there's the way the article is subjectifying the AI-generated characters. I can forgive the headline for impact, but by consistently using 'AI influencers' throughout the article as the subject of these actions, it is not only contributing to the general confusion as to what characters in AI-generated videos actually are, but also subtly removing the very real human beings who are behind this from the equation. See for instance these two sentences from the article, UPPERCASE mine:
1- 'One AI influencer even SHARED an image of HER in bed with Venezuela’s president Nicolás Maduro'
2- 'Sometimes, these AI influencers STEAL directly from real adult content creators by faceswapping THEMSELVES into their existing videos.'
No, there is no 'her' sharing an image of herself in bed with anyone. No, there is no 'them' stealing and faceswapping themselves into videos of real people. The 'AI influencers' are not real. They are pure fictions, as fictional as the fictional Nicolás Maduro, Mike Tyson and Dwayne Johnson representations that appear in the videos. The sharing and the faceswapping are being done by real, dishonest individuals and organisations out there in the real world.
When you see what Z-Image Turbo with a LoRA added does in mere seconds on a 4090, locally, you know it's a lost fight. And that's not even the best model: just a very good one that everybody can run.
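For the curious, here is a minimal sketch of what "locally with a LoRA" looks like in practice, assuming the model ships as a standard Hugging Face diffusers pipeline; the repo id, LoRA filename, and step count below are hypothetical placeholders, not the actual model's settings:

    # Minimal sketch: run a turbo image model with a LoRA on one GPU.
    # The repo id and LoRA path are hypothetical placeholders.
    import torch
    from diffusers import DiffusionPipeline

    pipe = DiffusionPipeline.from_pretrained(
        "some-org/z-image-turbo",      # hypothetical repo id
        torch_dtype=torch.bfloat16,
    ).to("cuda")                       # a single consumer 4090 suffices

    # Apply a style/subject LoRA on top of the base weights.
    pipe.load_lora_weights("./my_style_lora.safetensors")

    # Turbo-style distilled models need only a handful of steps,
    # which is why generation takes mere seconds.
    image = pipe(
        "portrait photo, studio lighting",
        num_inference_steps=8,
        guidance_scale=1.0,
    ).images[0]
    image.save("out.png")

The point is less the exact API than how little code and hardware this now takes.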
Not only is the cat out of the bag, but this is just the beginning. For example, porn videos where people can swap in their favorite celebrity as the actress in real time are imminent.
I think most people know that these aren't real. They are just for laughs or titillation, and a way to get attention/followers and (ultimately) paying customers. Celebrity impersonations in advertising are not at all new.
You may be surprised to learn that headlines actually lead to an even larger body of text called an "article", which contains, among other things, references to the scale of the issue named in the headline.
you suggested it:
> government should ban the development of AI?
works for me!
Are you a developer? If so, does this mean you have not been able to employ AI to increase the speed or quality of your work?
https://flowingdata.com/2025/10/08/mortality-in-the-news-vs-...
Such an unserious joke of a society.
I've read statistics to the effect that bad news (fear or rage bait) often gets as much as 10,000X the engagement vs good news.
Expecting tech bros to take responsibility for what they have unleashed is asking too much I suppose.
https://www.ycombinator.com/companies?batch=Winter%202026
What you're noticing is a form of selection bias:
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
It took a few years for that to happen.
Plenty of folks here were all-in on NFTs.
can't wait until he figures out AI
We are now talking about AI in terms of how it enables porn, spam, and scams...
But I do agree. It is more socially acceptable to just lie, as long as you're trying to make money or win an argument or something. It's out of hand.
There's no fighting this.
This is rage bait and we are above it. Flagged.