I heard someone on a podcast call social media algorithms "the modern-day cigarette" and that really resonated with me. These companies know their product is addictive and bad for users, but they keep pushing it anyway. Like cigarettes, it's bad for everyone, not just kids. I made an algorithm blocker for Safari because of that, and it's actually crazy how much more pleasant social media is when you don't have recommendation algorithms at all. I think the EU and other jurisdictions should really look beyond just limiting this stuff to kids, but I understand why it's starting there...
If you didn’t notice, this comment is an ad for a paid app trying to capitalize on social media anger. I respect the hustle, but this is not a neutral comment on the topic due to the financial interest. There are many free browser plugins that filter social media feeds if someone wants an alternative.
I was going to make a similar accusation to the above, but I skimmed your comments and it didn't seem like you were the sort to have ill intent behind bringing it up. Next time you might want to include one of those stuffy "Disclosure" notices.
That's a good idea, thank you for the feedback. I have a hard time finding the line between "advertising" and "sharing something I built" on this site sometimes.
If you haven't been on HN much, you'd believe this was some aberration as opposed to the norm. This is a YC-run forum, so it's pretty normal for comments to double as software advertisements.
The modern-day cigarette is such a perfect metaphor for social media. A cabal of unfathomably wealthy companies spreading their harmful products across the world; making them as addictive as possible while actively burying the research which proves how harmful they are. I truly hope one day we'll look back on social media and smartphone use the same way we regard smoking.
I hope you're right but I think you're dead wrong. Social media has not only affected the mental health of millions of people negatively, it has brought about social, political and economic harms that will affect the planet for generations.
> The effects of social media usage are surely reversible by stopping using it and then some retraining of the brain
This is a reasonable, but optimistic take. The effects of social media on developing brains will need to be studied to be sure the effects are reversible. Furthermore, how extensive is the damage and how long does it take to reverse? Are older people less likely to recover?
Neuroplasticity. Seems better than the damage caused to your lungs and cells from smoking.
I mean, do you have any evidence that the brain is irreversibly damaged by social media? I have not seen any, but I have seen evidence that there is permanent cell damage from smoking.
I see what you're saying, I should have been more specific. I more so mean recommendation algorithms that are artificially created by platforms to drive more traffic. I think the HN method of user votes without manipulation by the platform is better but not ideal; the best method is 100% user-curated content (i.e. following specific accounts on instagram/twitter, RSS feeds, etc), which I would argue is not really a recommendation algorithm. I think that people don't realize how much the content they see influences their thoughts, and how much that content is chosen for profitability over anything else.
Look up images in Google with `eu cigarettes boxes`. Banning is a thin wedge, but I think we need something like these warning labels for social media.
Glad to hear a false comparison to something that's actually physically/chemically addictive really resonated with you (a.k.a. affirmed your already existing beliefs in this moral panic).
If we step back and look at this rationally though, can anybody point me to any peer reviewed studies (the actual studies, not clickbait articles written based off the studies) showing that social media is anywhere near as physically harmful or addictive as cigarettes?
I'm totally open to the idea that engagement algorithms are inflaming social division. I'm less convinced that the children are the ones being harmed, however. I think it's the adults, who grew up in a media monoculture where the default was trust, who are more susceptible to negative outcomes.
When things change, the young are the ones more likely to adapt.
This is a clear line and we fought Europe already over this in the last century. There is absolutely no world where we need a group of people telling us how long we are allowed to be on TikTok. It is inexcusable to think this way in a free democracy.
Democracy depends on shared rules and institutions that limit what you can do in pursuit of those beliefs. There is certainly a line to be drawn of where our freedoms begin and end. TikTok is nowhere near that line.
Smoking has definite physiological effects. Molecules bind to receptors or neurons and initiate cascades/responses.
I don't see this with a user interface in a browser at all. If you want to reason that way, why are regular ads allowed? They piss me off. Why do I have to see them? They cause my brain to crave buying crappy products. So why is there no ban here?
Let's face it - the EU is on a path of "Minority Report" here.
> I think the EU and other jurisdictions should really look beyond just limiting this stuff to kids
Yeah they try to restrict what we can do. We oldschool people call this fascism. See the EU trying to destroy VPNs. And there's a meta-strategy at work here: many lobbyists are activated to "sync" laws that never made any sense into as many countries as possible. I can see where the corruption happens. And I don't buy the "we protect kids" line for a moment.
Already Hippocrates was linking the mind to the physical brain, and if you've never felt a physical reaction from looking at the fairer sex I feel bad for you son, yet if you got ninety-nine problems at least sex ain't one.
It's just so tedious to see this "information cannot harm anyone" theory in a context where a huge fraction of the people spend their entire day jobs trying to make phishing less effective.
That's why I make the cigarette comparison. They know it's bad, but it's profitable for people to be addicted to it. I think it's bad for adults for a different reason: I've seen adults in my own life get influenced by things they see online (conspiracy theories, pseudo-science around health and nutrition, political radicalization). And this happens because it's profitable for people to be hooked on these topics with false or misleading information, not because it's true. That's not to say this never happened before recommendation algorithms, but it's a difference in magnitude. I think that's the reason we are seeing such a dramatic rise in political polarization: because it's profitable.
To hold this view you have to think of information as "not real", not like "real" molecules and receptors, the mind as distinct from the body, and then restrict the legal definition of harm to only "real" things.
This is an odd thing to do, because:
- information is real, it exists in the universe.
- the harm of social media is real, as measured by many of the same measures as the harm of smoking
Why not do something about ads? That's a good thought; we should do that too.
I think a decent conceptualization here is "psychic damage", as in a video game. These things deal a lot of it.
The other side of the view is "information is real and I don't like some of it ("it's harmful/addictive/blasphemous") so it must be controlled and regulated."
I don't think it's an odd thing to be opposed to that line of thinking.
People in here are casually linking social media to cigarettes, a product that kills half its users, and in previous iterations I've seen people compare social media to using heroin. It's completely hysterical.
I expect tabloid journalists and grandstanding politicians to do this, it really scares me when HN users that should know better do it.
Depression is real, I'm experiencing it right now reading these comments.
You know what, why don't you go buy a carton of cigarettes and some heroin, and go use that for a few months. Since it's the same thing as looking at a news feed you shouldn't have to worry about addiction because you've already done that and not gotten addicted to it, so you should be fine, right?
> Depression is real, I'm experiencing it right now reading these comments.
No, you aren't. You are trivializing what Depression actually is by making flippant comments like that. You're also letting everyone know that you are utterly ignorant of what Depression actually is.
> Yeah they try to restrict what we can do. We oldschool people call this fascism.
Come on, this is an absurd statement. Governments regulate what people can do, yes. It’s part of their role. It’s why I can’t sell tainted meat on the street. It’s a good thing.
Of course there is a line you can cross where the control becomes excessive but “the government sets rules around what people can do, that’s fascism!” is absurd.
This is pretty easy to solve. If you present data by algorithm, you are no longer an impartial common carrier and are liable for the content you present. If the user decides how the data is presented, you aren't, à la social media 1.0.
> If you present data by algorithm, you are no longer an impartial common carrier and are liable for the content you present
Hacker News is a site that presents data by algorithm. Under your definition, Hacker News goes away, too.
A more accurate framing would be that they’re going after personalized recommendation algorithms. It’s not obvious that offering a recommendation algorithm would mean that the site is no longer an impartial common carrier.
But still an algorithm. The difference is that we (at least some of us) place a greater trust in the integrity behind how information surfaces on HN. I think that some parts of it are open source, the moderators are transparent enough about what isn't public, and there is a mix of folk knowledge that explains how HN works under the hood.
Depersonalized algorithms or recommender systems aren't inherently better than personalized ones. HN is an exceptional example of the former but I think at scale people would come up with a different crop of complaints for them.
Yes it's still an algorithm. Cable TV programming is another example. Everyone sees the same content. The ads are changed at the local broadcaster level but are not tailored to the individual, and are not harmful in the ways the EU is regulating. If anything, everyone watching the same thing is good for social cohesion. Everyone discusses the latest TV episode the next day at the office.
Right. Setting aside the fact that cable television doesn't appear to be the typical distribution method anymore, how do broadcasters select/schedule their programming?
Goes away, or is liable for the content promoted to the frontpage under the OP's take?
But I'd agree, that it's personalisation rather than just curation that's the issue.
I think even requiring sites to have a "bring your own algo" version (and where ads are targeted to the algorithm, rather than the person) would cure a lot of ills.
As is, even with something like Spotify where you _are_ paying, there's no easy way to "reset" your profile to neutral recommendations.
> Goes away, or is liable for the content promoted to the frontpage under the OP's take?
Same thing. There is no Hacker News if Y Combinator becomes liable for user submitted content.
It’s an obvious backdoor play to make sites go away. If a site becomes liable for content posted, you cannot allow users to post content without having the site review and take responsibility for every comment and every post.
The people proposing it haven’t considered how damaging that would be for the ability of individuals to share ideas and their content. When every site with “an algorithm” is liable for content posted, nobody is going to allow you to post something. It’s back to only reading content produced and curated by companies for us. Total own-goal for the individual internet user.
I agree with what OOP said. But it’s not my intent to “shut sites down.” I have this view to try to increase diversity of media consumption and break people out of echo chambers. If your business model is so shit you have to exploit weaknesses in human brains to keep people viewing ads and can’t adapt, then that’s your problem.
If you have an algorithm whose sole purpose is to drive "engagement" with your own platform (by intentionally and purposely pushing clickbait, ragebait, and media that keeps reinforcing your clicks), you should no longer get Section 230 protections - you are no longer a neutral party. These algorithms exist to create echo chambers and keep you clicking so you can consume more ads.
I would love to hear other ways of solving the problems of social media.
Have you ever browsed by New and seen the firehose of shit which doesn’t make it to the front page? HN sorted by new is effectively useless and you might as well shut the site down at that point.
“Chronological only” might work for something like Twitter where you’re choosing to follow specific individuals to see their posts, it can’t work for curation sites like HN/Reddit.
The majority of terminally addicted people I have interacted with at length have both recognized the terminal nature of their addiction and been unable to do anything about it.
In the case of Instagram: you show the videos from the people the user follows, and no other short videos at all. Possibly a search box.
If you search on youtube then it can rank any way it wants, just not use e.g. anything from the viewing history. No "related videos" column. That's what YouTube used to be. But YouTube (unlike TikTok) worked well before it had rabbit holes.
For TikTok the situation is worse. Their whole app just doesn't exist without the custom feeds. This would make YouTube 2010 YouTube, Instagram 2010 Instagram (great!), but it would effectively be a ban of TikTok's whole functionality (again, great!).
I think it would be great if all of these apps had an option to function like you propose: Your feed is a simple view of people you’ve chosen to follow. The end.
Then all of the people who have trouble with self-control on infinite feeds can enable this mode, and everyone who wants the recommendation algorithm can leave it on.
This is the optimal outcome that actually serves everyone’s personal goals for using these platforms. If we get into a conversation where some are demanding we don’t allow anyone to use a recommendation algorithm because they feel the need to control what other people see, that’s a different conversation. That conversation usually reveals other motives, like when people defend the algorithm sites they use (Hacker News, Reddit, whatever) but target sites they don’t, like TikTok.
I don’t endorse using these apps, but for what it’s worth, Instagram actually does have this feature (tap “Instagram” at the top and select “Following”). You get a chronological feed with no ads and no Reels. Of course they don’t provide an option to make that the default as far as I know.
Instagram and Facebook both have such features. They’re hidden, though. With Instagram you tap the logo in the top middle of the app and choose “Following”. With Facebook it’s hidden away under the “Feeds” section in the app.
I’d love for there to be an option to have them as default. It’s obvious ($$$) why they won’t do that unless forced to by regulators.
Why do you assume the recommendation algorithm should be the default? The algorithm is the dangerous thing, THAT should be the opt-in mode not the other way around.
IMO they should not only be opt-in, but should actually be required to publicly list the parameters and weights they’re using and allow users to tune those weights.
Sure, if that makes the angry mob happy then let’s make it default. Then every new user can click the button once and be back to the app they expect.
> IMO they should not only be opt-in, but should actually be required to publicly list the parameters and weights they’re using and allow users to tune those weights.
I wonder how many people here know that many of the popular apps have rolled out finer controls for recommendation algorithms so you can do this. On Instagram you can go in and see the topics your recommendation algorithm picked up and modify them manually if you like.
I think the goalposts will just continue to move, though.
No they should have to pick every time whether they want to be in follower mode or discovery mode. Dismissing concerns as “the angry mob” is richly ironic considering the entire objection is that recommendation algorithms seem precisely tuned to foster angry mob dynamics. So yeah it will make the angry mob happy because it will be removing the primary mechanism for inciting angry mobs.
People here know that they have finer controls (which are still not actually that fine and also don’t really make the parameters auditable). The problem is these settings are hidden away in places most people will never look. And also, I stress again, none of this is actually auditable because they treat these as some kind of trade secret special sauce and there’s really no reason society should feel obligated to support or enable this business model.
Not sure what confiscation would accomplish that regulation couldn’t? I mean we’re all aware that if regulators target TikTok then a new app would pop up and take its place.
But the thing about regulation is that it doesn’t need to be water tight. You can just target a small handful of large players and it will improve the situation in practice. It doesn’t matter if 998/1000 apps use addictive feeds if the largest two apps don’t and they have 90% of users/views.
It’s naive to think that regulation is going to cover the entire global internet.
If you regulated domestic companies out of existence, global options would pop up in their place. You could try to block them all in app stores but people would go to the web views.
I think that's still mostly fine. YouTube is already not so much an app as a web site (it has apps too, but I think it's less app-centric than e.g. Instagram).
Obviously we need the ability to regulate global options as well. Typically if these actors truly become big, they have a presence in their "target" countries, such as ad sales.
Do it like a library. When a person walks into a library, they're presented with a short curated list of books suggested from the librarian. All visitors to the library see the same books. From there, the visitor can go about their business searching for what they want.
If they don't know what they want, perhaps a good use case for the newfangled LLM-search we have now would be "What's an interesting or popular topic I haven't searched for before?" to which the AI will respond with a list of newly searchable terms.
The first unwatched video from the user's followed/subscribed channels. Chronological, reverse chronological, sorted alphabetically, by the user's channel prioritisation, by likes, by views... whatever the user chooses. And then an end of feed.
For new users? A search bar and a set of (human? AI?) curated seed recommendations that the platform is comfortable with being held liable for.
If they just signed up they have no followings or subscriptions. So now what, you need to show accounts to follow first? That's the same problem as deciding what the first video to show is. How do you decide who they should follow? Or is the vision that you can only have friends, as if it's 2005, and you can't discover anything serendipitously?
I don't consume any content from my friends on something like tiktok where I'm interested in discovering people that have good content under topics I'm interested in. I don't know who those people are and I want to discover new ones that come up not just follow some already popular accounts.
Undoubtedly the change needed here will introduce friction, will reduce viewing time, and society will be better off for it.
The whole idea here is to make content consumption more deliberate and mindful rather than just opening the app and vegging out to an endless feed of slop.
That’s also an algorithm. An unsophisticated one, but an algorithm nonetheless.
You can (and should) argue that such a simple algorithm doesn’t “count”, but fundamentally the exact wording of the grandparent post never works, legislatively.
> That’s also an algorithm. An unsophisticated one, but an algorithm nonetheless.
The problem always has been "(personalized) opaque algorithms". Time sorted by followers isn't really opaque, nor is "sorted by likes" or whatever. The problem is always pulling in parameters that a user either has no active control over or that are so variable they effectively could be random.
The internet solved the problem of millions upon millions of users in its implementation details: you share a URL. You follow people, they share URLs, it grows organically, the same way every website worked pre... Instagram? I'm not sure who moved to the algorithmic feed first.
I would say, no *personalised* algorithms other than those based on deliberate user choices would solve the problem. So, what user chooses to follow, or the same for everyone in the country.
This seems to be consciously dishonest. Show them "most recent" or "most upvoted" or "A to Z." Pretending like this is hard is bizarre. People have always selected sort and filter algorithms, until companies started taking them away.
Of course it's easy: such decisions were taken _before_ the feeds were algorithmically built.
You rely on unambiguous, "physical" properties of the videos.
There is a physical property of all the videos: the time of publication.
There is a physical property of all the channels: did you subscribe to it, or not?
So, you show, in (reverse) chronological order of publication, the list of videos published by the channels you subscribed to.
Now, of course, a brand new user would have no subscription - you show them a search box.
But then, now, your search algorithm has to weigh the various channels that match - but your algo can be relatively transparent, relatively auditable, and the same for all users (unless given explicit preferences, and of course national laws, etc, etc...)
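The subscriptions-only feed described above can be sketched in a few lines. This is an illustrative model only (the `Video` type and `build_feed` function are made up, not any platform's real API): the feed is decided entirely by "physical" properties of the videos, namely subscription status and a sort attribute the user explicitly picks.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Video:
    channel: str
    title: str
    published: datetime
    likes: int = 0

def build_feed(videos, subscriptions, sort_attr="published", newest_first=True):
    """Show only videos from subscribed channels, ordered by a
    transparent, user-chosen attribute (no personalization)."""
    mine = [v for v in videos if v.channel in subscriptions]
    return sorted(mine, key=lambda v: getattr(v, sort_attr), reverse=newest_first)
```

Any sort key works the same way (`likes`, `title`, ...), and the result is identical for every user with the same subscriptions, which is exactly what makes it auditable.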
I'm sorry, but I have a "subscriptions" page in YouTube and Substack, and they're chronological, and they show me what I want to watch. You keep that.
There is a "home" page in both services that is algorithmically built, and it shows me crap that the algo wants me to watch. You get rid of that.
Do this, and I can consider you a "neutral" actor, and accept that you shift the blame to content producers.
Or, keep the algo feed, but don't take money from advertisers when I watch yet another flat-earther video because YOU decided it was trending.
If you want to decide what I watch, and make money from that decision - congrats, you are an editor. You get the earnings, and the responsibility.
Please don't tell me, with a straight face, that the people who build the algo don't "decide" what I watch. If they want to tweak the algo to downgrade the flamewars and outrage and conspiracy theories and violence and abuse, they can. They do not want to, for business reasons. [1]
That's fair, up to a point - we need publications with editors that agree on having "edgy" content. I'm not advocating for blanket censorship.
I did not like social networks preventing me from _sharing_ articles about Biden's son's laptop (this was actually beyond the law, but somehow they managed to find the resources and programmers to implement _that_, because, at the time, the execs were cozying up to a different administration.)
I'm advocating for "accepting your responsibility as an editor".
This kind of complex legislation already exists in many areas of the law: revenue collection being the most obvious one. We could choose to treat "societal harm" the way we treat "tax collection".
I'm not saying there aren't infinite edge cases and second-order effects - but we tolerate those already for many things. I'm not pretending this is simple or even desirable - I'm merely stating it's possible if we want to do it.
My biggest fear is that (like the UK Online safety act) this acts to favour the huge corporations because they are the only ones that can afford a team of lawyers. Any legislation should aim to carve out exceptions to avoid indirectly helping monopolies.
Great example. These companies are already experts at circumventing taxes, what makes you think they can’t weasel their way around some arbitrary written law?
Just look at the malicious compliance that Apple and Google have around the App Store stuff, they’ll find a way to comply with the law and implement different addictive dark patterns.
I’m not saying that I disagree that these companies need to be regulated, I absolutely do. I just think it’s going to be a complicated process, and not “oh just ban everything that’s an algorithm”.
And I have absolutely 0 faith in companies like Meta willfully complying.
I have a feeling taxes are possible to circumvent only because a government tends to have one arm that wants to collect taxes, and another that wants to reduce them to encourage certain outcomes (like having a business setting up shop within its borders).
The US may have this dual incentive structure since it wants to build its tech giants while limiting their control, but the EU doesn't. The arrival of a foreign tech social media giant might make the legislation a bit more palatable to pass.
It will undoubtedly be complex to regulate all dark patterns away. But there are a few obvious, easy wins. It'd be a shame to make perfect the enemy of good.
But here’s the real problem: people don’t care. And I say that as someone who hasn’t used social media since 2014.
My observation of people’s behavior indicates that when all is said and done, people don’t care—they would rather get the endorphins from posting, liking, following, etc.
But the solution is to allow people to control their own algorithm, and to have open source solutions where communities manage their own social network.
It’s not the algorithm that is the problem it is that people don’t have the choice to curate their own content.
This is some kind of a meme where people believe things can’t be defined in legal terms and therefore can’t be regulated. These people are usually not lawyers.
Does anyone know where it’s coming from? I can certainly believe that incompetent jurisdictions have a ton of issues with people misapplying the law and using loopholes.
Albert Hirschman wrote a great book about the rhetoric people use to stifle policy proposals 35 years ago. "It's futile; it won't ever work" is one common argument. It's not a meme so much as a cynical reflexive intuition.
The point isn't that it can't be regulated. What the original comment said was
> This is pretty easy to solve. If you present data by algorithm, you are no longer an impartial common carrier and are liable for the content you present.
But this is not in fact easy. It's hard to define what "present data by algorithm" means in a coherent way, and it's hard to extend liability for the content you present to liability for the manner in which you present it. You could make it work, if for some reason you really wanted to, but it's easier to pursue the strategy described in the source article of regulating specific abusive patterns.
> This is some kind of a meme where people believe things can’t be defined in legal terms and therefore can’t be regulated. These people are usually not lawyers.
No they’re engineers who think rules have to function as rigidly in every field as they do in programming.
They either can’t or don’t want to accept that the law is a social construct and what it actually means to you is determined by the weight of precedent, as applied by judges and regulatory bodies. Things are vaguely worded in the law all the time. If people want to dispute how the enforcement is done they sue and judge decides how the rule should be applied.
The easy benchmark to setup can easily be, that any feed that displays the data in a way other than the following is considered an editorial choice and thus the platform is liable as a publisher:
1. In a chronological order, and only filtered based on user selected options.
2. In any other order explicitly selected by the user.
An exception can be made to allow filtering out content that violates the platforms terms and conditions.
Alternatively there can be no exception, effectively making these platforms unworkable. This is also a choice. We do not need these platforms, including this one.
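The benchmark above is simple enough to write down as a predicate. This is a toy model, not legal language, and the function and parameter names are invented for illustration: a platform is treated as a publisher (and thus liable) unless the feed order is chronological or explicitly user-selected, and any filtering goes no further than removing content that violates the platform's terms.

```python
def is_publisher(order, user_selected_order=False, filters_beyond_tos=False):
    """Toy model of the two-point benchmark: chronological or
    user-selected ordering is neutral; anything else, or any
    filtering beyond ToS enforcement, is an editorial choice."""
    neutral_order = (order == "chronological") or user_selected_order
    return (not neutral_order) or filters_beyond_tos
```

So an engagement-ranked feed is always "editorial" under this model, while a chronological feed stays neutral only as long as the platform filters nothing beyond ToS violations.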
If the user selects "sort by algorithm" then I don't see how you've changed anything other than the default. I think it's pretty obvious just changing the default won't work.
That's because the default is 99% the way the app is designed to be used. If the default is regulated, then they will just say "sorry the default is boring, click here to bring back the feed" and everyone will just click.
Instead, a regulation could mandate the administration of an anonymized, unbiased eval test at the end of every week/two weeks/month, just like instruments used for psych evaluation (e.g. "Do you feel your <mental-health-metric> has become worse in the last <time-period>, on a <scale>?", "Did you have <mental-health-marker> after watching content on social media?").
The said regulation can then mandate that, after calibration and correction, the feed be pulled back by retraining the algorithm to adjust it in a rapid A/B test.
This is all doable by the companies themselves, but since they won't, the key is to mandate it and publish the aggregate results regularly, e.g. make it part of the quarterly shareholder SEC reporting requirements.
"Algorithm" is a method of selecting the content to display. You're listing presentation types, not selection types. Presentation has nothing to do with supervised selection. Selecting the next video in the infinite scroll would be the algorithm, not the infinite scrolling mechanism itself.
Everything other than sorting the list of entities by a standard measurement unit (time, length, mass, temperature, amount) needs to be covered by this law.
The moment you add other entities to the list (e.g. ads inbetween posts), then it's also subject to the same restrictions.
This effectively means “every online platform ever” and would also have included MySpace and the OG Yahoo etc., and as such would not really single out the truly bad actors.
And then we’ll end up with with another cookie-banner style law which had good intentions but actually missed the point entirely.
Maybe MySpace should be covered. I mean, MySpace probably(?) had the technical capacity to act maliciously in the manner that modern social media sites do; the business model just hadn’t evolved to the modern toxic state yet.
The cookie banner law is fine for the most part. Sites that do the malicious-compliance thing of over-prompting the user for permissions are providing a strong signal that they are bad actors. It’s about as much as we can expect without banning them entirely…
I stopped using facebook around 2015-ish, when they stopped allowing sorting by date. Prior to this, hi5 and the likes also allowed sorting by date. So no, not every online platform ever.
This doesn't differ much from the legal reality that I've seen. Terms need to be defined, yes. It will require work to do so. And that work should be done even if it's a bother.
Ok so then the "algorithm" must be made available to authorities (or even better, the public at large) and be approved or rejected based on a court or a law. Obviously an algorithm based on "engagement" or "narrative" should be rejected with prejudice every time.
I don't see a single difficult example here. The answer is "NO." It's strange that you couldn't even find one.
I mean "Is including likes an algorithm?" You might as well ask if having a dog in the video is an algorithm. Any question about "likes" would be if you're manipulating the video selection based on likes, or is the user given a control to manipulate the video selection based on likes. If it's you it's an algorithm. If it's the user, it's a control. If you lie about the likes, then it's an algorithm. If you're transparent about the likes, then it is a control.
The other ones aren't even worth discussing. You might as well ask if having a blue logo is an algorithm, or if Comic Sans is an algorithm. "It's all so complicated!"
-----
edit: that being said, the EU does not care about this issue at all, and has had plenty of mandate and plenty of time to have done something about it if it did. They are also going to say "it's all so complicated." Because their problem is the unpopularity of center-left neolib governments that are just barely holding on with extreme minority support through bureaucratic means, because they wrote the regulations. They want to keep what came for British Labour during the recent council elections from coming for them.
So I guarantee that content will somehow become an "algorithm." The goal is to keep people who don't like them from speaking to each other.
The conversation has iterated a couple of times, and one point that people (on this site at least) are stuck on is “well, however you rank things—latest, most popular—you’ll need to use some kind of algorithm, maybe quicksort.” This isn’t what the general public or politicians mean when they say “an algorithm,” but it does make something of a point: what exactly the general public and politicians mean when they say that is a bit ambiguous.
I think the EU has fully digested this point, and is focusing on the “addictive design” phrase instead, for good reason. It makes it obvious that the problem is a bit fuzzy and related to the behaviors induced, not some cut-and-dry algorithmic thing.
There's another angle to this besides the algorithm. When it comes to kids today, there seems to be peer pressure and the need to maintain a social media presence, be cool online among your peers, and so on. Beyond that, some kids have their lives devastated by others secretly (or not) recording and publicly sharing their vulnerable moments in life. That can happen in a night and profoundly damage someone.
The mechanism would be that if the user has chosen to follow an account, then posts from that account fall under common carrier. If the platform chooses to show you other posts, then they're under its responsibility.
This is a bit of a systems difference. Under a French law system you would write laws to regulate the harms away. Under English common law, liability court cases about the harm would lead to precedents and then to common law derived from them. Though I'm not an expert on this.
Why would anyone go to a new platform if they didn't know anyone to follow there? I don't see a problem. I download TikTok, search for SexyDancingDinosaur, who I heard was on there, and press follow.
How does this specific horrible take rank so highly on HN whenever something adjacent to big tech gets posted? "Impartial common carrier" is not even an extant legal concept.
It's been argued to death already, I just have to express shock that I'm still seeing this non-starter constantly here.
Alternative suggestion: Force them to open up the service and allow third party clients. Take Art. 20 GDPR "Right to data portability" and extend it to public content.
"this" - you mean, engagement optimization? i think it would be different content. i don't know how much liability matters, people spend all day watching netflix too, and it is "liable."
ironically, i'm only reading this kind of low brow take because people upvote it, not because it makes any sense.
A lot of adults need this too. The addictive apps are very well designed, while most blockers are either too easy to ignore or too annoying to keep using.
I built a small iOS blocker because I had the same problem. Making it strict enough to actually work without making people hate it is the main challenge.
On the radio I heard a reporter talking about things China does during school exams. Apparently all schools have exams at the same time and during that period, social media shuts down at night. I forget the exact hours (10pm - 6am maybe). I'm starting to think that would be a great policy in general for everybody.
I think they also said AI companies go offline during exam hours, but I may have got that wrong.
Absolutely wild that we’re seeing proposals to shut down parts of the internet and regulate when people can talk to each other on social platforms as a real suggestion on HN.
I feel like we’ve completely lost the plot when we’re starting to invite government partial Internet shutdowns as a good idea. This is a totalitarian government play.
> I feel like we’ve completely lost the plot when we’re starting to invite government partial Internet shutdowns as a good idea. This is a totalitarian government play.
There's been criticism about the culture surrounding platforms like Mastodon/Bluesky that anticipated this.
I think it speaks to the complete lack of government regulation in the area that people see such extreme answers as positive. If any government had seen fit to engage in light regulation of what social media can do people might be happier.
I can only imagine these people have never experienced such censorship.
Maybe they'll feel differently when they have to upload their ID and face scan (which later gets leaked) just to be able to read a recipe for beer or whatever.
But it's kind of a logical, if misguided, consequence of regulators being completely corrupt and letting those feudal lords do whatever the hell they want.
Toast notifications were the big mistake. Also badges. In my perfect world, the only thing that would retain the ability to alert the user that someone tried to contact them would be voicemail, subject to the same spam laws as everything else.
Personally, yes, that kind of choice should belong to the individual, not the government. Besides that, though, the laws are nonsensical: why is a seatbelt required in a car but not in a bus? Why are motorcycles even allowed at all?
This argument falls apart for countries with socialized healthcare.
As long as all people are paying for your dumb decisions, it is reasonable to expect the government to reduce the frequency of dumb decisions by adequate means.
Yes, I do. It's just another way that cops can pull you over for bullshit charges and revenue enhancement.
I remember in my state, it was initially only a citation that you couldn't be pulled over for. Then they flipped that and started pulling people over for it. Why? Pure fucking money grab.
Me not wearing a seatbelt means I risk getting splattered. Not you, or anyone else.
> Me not wearing a seatbelt means I risk getting splattered. Not you, or anyone else.
Except who pays for your million-dollar reconstructive surgery and rehab? I don't suppose you will cover that out of pocket to avoid burdening your fellow insurance payers with your reckless behavior?
>Me not wearing a seatbelt means I risk getting splattered. Not you, or anyone else.
Physics says otherwise. In a collision you don't decide where your body is yeeted, and your skull could end up inside the skull of a passenger wearing their seatbelt. Don't be a moron.
https://youtube.com/shorts/n2yLMGA_YSA?si=AlvRgfpb-PJxGCBw
You might have the self-awareness and impulse control to stop yourself from getting addicted to these apps, but the majority of the world's population does not.
These giant companies pour millions upon millions of dollars into engineering their services to be as "engaging" (read: addictive) as possible with the specific goal of making users spend more time on them.
Against that, the average person has no chance. The power balance is hugely uneven.
A responsible government which actually cares for its people has a duty to protect them from abuse like that.
Same as for the cigarette: it's a lot easier to regulate stuff for kids, because we as a society tend to agree that they need to be protected. Much harder to do with adults, because it is much less of a consensus.
Because, in general, we see adults making bad choices as a price worth paying in a free society, but we recognize that children lack the maturity and judgment to make those choices for themselves.
Most adults also lack the maturity and judgement, but allowing adults to make bad decisions is usually less dangerous than giving someone else the power to decide which decisions are too bad to permit.
The other thing I really love about HN is that titles are all supposed to be boring and to the point. The guidelines[1] for titles are excellent and I wish more of the web and honestly legacy media too would behave that way. Things that are of no interest to me are not trying to waste my time and attention.
> I think especially restricting endless scrolling
The actual point is that they are designed to be addictive. "endless scrolling" is just an implementation detail. If you "ban endless scrolling", they'll still be using every other trick to make it addictive.
FWIW, social media use is mediated by ∆FosB expression, so the less you use social media, the less you want to use social media. Timeline of ~3 months.
I don't agree with this. Addictive, unless we're talking about a chemical substance or something like that, is a subjective thing. At some point, books, movies, comics, etc, etc might have been considered addictive.
Social networks in general should be banned for underage people, that's the thing. And the social network itself should be liable for verifying the age of its users, like a nightclub is liable for the people who enter it. No bullshit operating-system age verification, which is, trust me, totally intended to protect kids and not to spy on you.
> Addictive, unless we're talking about a chemical substance or something like that, is a subjective thing.
What makes you say that? It's well known that the addictive patterns in these apps trigger dopamine the same way drugs do. In a sense, dopamine is the "chemical substance" central to the addiction. Heroin and algorithms are just different ways to get it.
Everything you do “triggers dopamine”. Reading HN triggers dopamine. Eating breakfast triggers dopamine. Dopamine is also important for movement and many other things.
This is a lame reduction of brain chemistry that has been used to push agendas. Dopamine is not equivalent to addiction.
It's well known, but I'm not convinced it's true. Dopamine levels are measurable by blood test, and some drug abuse studies perform that measurement. Why does the literature on social media and dopamine exclusively talk in vague and general terms, rather than pointing to specific studies where researchers measured dopamine before and after 30 minutes of TikTok scrolling?
Addictiveness is measured by ∆FosB gene expression. The 'addictiveness' of a substance or activity is qualified by how much ∆FosB is expressed. It's decidedly not just a completely subjective thing. Books, movies, comics, etc. can all still be measured on this scale. Everything is addictive in some capacity, generally.
Addiction at least is quite straightforward to differentiate from otherwise engaging things by whether it causes significant harmful effects. E.g. per Wikipedia "Addiction is a neuropsychological disorder characterized by a persistent and intense urge to use a drug or engage in a behavior that produces an immediate psychological reward, despite substantial harm and other negative consequences."
Addictive would be then something that (for a substantial portion of population) has a tendency to cause addiction.
But they are so profitable, and we need them to track people around and create a police state efficiently. Ah let's keep them but just fine them as well for the show.
Either what defines an "adult" is going to be raised exponentially or what defines a "kid" is going to be lowered to determine who is allowed access to information in transit and who needs to be "safeguarded" from it.
At what point should the responsibility fall on the parent to protect their children from harm?
Don’t get me wrong, if I had my way TikTok wouldn’t exist for anyone, adults included. It’s just so strange to me that so many parents hand their 7 year olds unrestricted access to TikTok and expect someone else to keep their kid safe.
It's not so easy; they need phones and social media to communicate with their friends. They also need to fit in and find an identity. The algorithms, which are basically all engagement engines, are harmful for humanity as a whole. They are marketed as recommendation engines but it's 100% about engagement, and that is why the content you see is mostly creating dopamine from being fun, or rage from being provocative. It's built to serve one purpose: to keep people using the platform as much as possible. Not because the platform is good, but because it serves content that maximizes engagement.
I read a post about someone saying his wife worked for a snack company. They used MRI scans to see how much salt (or sugar) they should put in the snacks to maximize the response in the brain. Sounds disturbing, right?
Well engagement engines are the same thing. It's artificial intelligence optimized to get people to react and stay addicted. Basically AI doing harm. It's not what is best for the individual in terms of health. It's what generates most money to the owner of the platform.
It should not be allowed to build a business around something that exploits human brains. Basically biohacking our brains for profit.
I am from Eastern Europe and I’ve been living for many years in Western Europe. Where I come from, kids get their first phones when they start school at 6 (there’s a pre-school year) simply because every other kid has one. I keep coming back in my mind to two examples from my birth country. The first: a friend’s kid carrying an 8-inch smartphone in his hand everywhere, because the phone was as big as half his thigh and he would otherwise have had to carry a bag for it. The second was on a visit to the zoo: I was on a bench next to a family with two young children in a cart. And both children, who couldn’t have been older than 4 or 5, were scrolling TikTok, which was showing them children’s content!
In contrast, in Western Europe, my son is now in the sixth grade, more than half his class doesn’t have phones, phones are absolutely forbidden on school grounds and at school activities, and they are now doing a class trip where they were told that there’s a pay phone at the hotel, in case they want to call their parents. Our son promptly informed us that he’d rather buy a pack of Pokémon cards than call us, and 3 days is not so much anyway.
And it is not only at school; he travels for tournaments with his team every other week, and mobile phones are absolutely forbidden on the team bus. The children read, play games (including chess on a magnetic board), sing, and exchange stories for hours at a time.
Replace TikTok with cigarettes, and it'll hopefully make sense to you. There was a time when people had no idea that smoking was bad for you, which is where we are now with these apps.
And since they're addictive, kids will find a way to get them even if their parents don't allow it. That's why it's most effective to require ID when you're buying cigarettes than it is to shame people for not being perfectly vigilant parents.
BTW, I'm not saying age verification is the solution here. IMO, we should instead ban addictive social media completely. Eg, target specific design patterns/features, require companies to disclose how their algorithms work to regulators, etc.
Apparently parents are spending more time with their children than ever, dads especially. Yet paradoxically, what you're describing still happens.
Personally, I think some parents are afraid of their children growing to resent them for infringing upon their "freedom" by keeping them away from the dangers that social media and other technologies present.
Imagine the pressure on Instagram and Tiktok to serve better content if they were forced to pick out, say, 100 short videos per person per day. And not just for kids, adults need a break from this addiction machine as well.
They are going to put kids on a drip basis. The addiction is still there, just a limited amount per session. Intermittent rewards are actually the perfect schedule for an advertising company; you don't want people making unmonetizable page views.
I do not buy this "holy knight war" by the EU at all. It also makes no real sense to me. Nothing against US mega-corporations paying fines, mind you, but I equally do not trust the EU bureaucrats either. There has to be a limit to what politicians can do, what corporations can do, and what bureaucrats can do, while retaining a democratic base system at all times. If you go against addictive design, then why not against ALL ads? I don't want to see any ads. uBlock Origin made me change my mind here: I literally see no reason why I would ever want to burden my brain cells with irrelevant content. This is a bit different from website layout, though. I equally fail to see why the EU should meta-regulate what is permissible in regards to design and what is not. Why would I have to accept any random EU bureaucrat here? If a user interface sucks, I'd rather expect uBlock Origin to kill it off. This could also be community maintained. No need for the EU to waste taxpayers' money. After the EU wanting to sniff for age data and then also declaring its holy war against VPNs, I do not trust anything coming from Brussels. Even less so with Ms. von der Leyen in charge; can't the anti-corruption offices in Germany get rid of such lobbyists?
You know, yeah, you can crack down "addictive design", but then what?
If you don't provide a better alternative, the "kids" (and please, stop using "kids" as an excuse, because everybody can see through it now) will just stick to these platforms because, believe it or not, these platforms are much, MUCH safer than the alternatives.
Do you know that if you go outside, there's this huge risk of having to PAY for stuff you don't actually need to live? Like transportation to places that don't bring you wealth, like drinks you buy even when you're not that thirsty, like movie tickets just so it won't be too awkward after all the dialogue options are exhausted? Did these politicians just somehow forget that all of this costs money, in this economy that they helped to create?
And that is not to mention the REAL risks, such as drugs (the bad ones), rude or crazy drivers, and unpleasant adults whose only life purpose is to earn enough money to keep them going a little bit longer, just to name a few.
..... ORRRR, you can just stay in your comfortable home, sit on your soft and warm sofa/couch, and swipe your life away on TikTok or Instagram for free, safely.
You see the problem here?
I'm really sick and tired of these politicians putting up this act of pretending to "love children", when in reality what they do is put up easy patches to hide the real problem, which is poverty and inequality. That's the real problem.
Just do what China does, how fucking hard is it? They have 4x, almost 5x the population of the US.
STEM or verifiable educational content only. Have a review team and an AI that moderates content. No politics, no stupid dances, no monetization of content, no slop, and only credentialed people can post on certain topics (i.e. a delivery driver shouldn't make posts on theoretical mathematics).
In the modern world: any tech proposition that starts with protection of children as a goal can be dismissed out of hand, since it's emotional manipulation masquerading as tech policy. When I hear "protect kids", all I see is a sleazy politician bowing to their respective Security State apparatus.
Makes it an easier sell politically. If you position it as dangerous to kids in particular, your opposition then looks like they're encouraging child harm.
Yeah yeah, virtue signaling, and most EU online services are now gated by the use of one of the WHATWG cartel web engines (IRL, Google Blink), namely EU web sites are broken in ways that favor web apps.
They have to restore interop with noscript/basic html web engines (past/present/and future).
Then, they have to be careful with their file formats. For instance, you never give "carte blanche" to such a disgusting format as PDF; you are very careful about defining an as-simple-as-possible subset of it (with some internal software for validation).
I can't help noticing that every time, but really every time, the EU moves a pinky finger against the tech industry, a sizeable chunk of the comments here are like the one above. I wonder, is it about a general sentiment against the EU? Or a general sentiment against restricting technology? Or a general sentiment against humans? Or what?
I think it's easier and safer to complain about everything than to actually have a nuanced and informed stance.
Look at age verification: it's very easy and very safe to say "nobody sane would think that it is a good idea to force people to show their ID to every website they want to access, it will obviously leak the IDs, that is very bad!". While it is not wrong, it is manipulative: that is not the only way to implement age verification. In fact, there is technology that exists that would allow age verification in a privacy-preserving manner: some service that already have access to your ID can give you a token that proves your age, and you can then use this token to access a website. The service cannot know where you use the token, the website cannot know your ID, and they cannot collude.
So the constructive debate around age verification is this: assuming we implement it properly (i.e. in a privacy-preserving manner), is that something that we want or not? Does it solve a problem, or at least does it help?
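The privacy-preserving token scheme described above is real cryptography, not hand-waving. A textbook RSA blind signature shows the core idea: the ID service signs a token without ever seeing it, so the signature it issues cannot be linked to the place the token is later spent. The parameters below are tiny toy values for illustration only, never for real use:

```python
# Toy RSA blind signature illustrating unlinkable age tokens.
# Textbook-sized key (p=61, q=53) -- purely illustrative, never
# use parameters like these for real cryptography.

n, e, d = 3233, 17, 2753  # issuer's RSA modulus, public and private exponents

def blind(token: int, r: int) -> int:
    """User hides the token with random factor r before sending it."""
    return (token * pow(r, e, n)) % n

def sign_blinded(blinded: int) -> int:
    """ID service signs without ever seeing the real token."""
    return pow(blinded, d, n)

def unblind(blinded_sig: int, r: int) -> int:
    """User strips the blinding factor, leaving a valid signature."""
    return (blinded_sig * pow(r, -1, n)) % n

def verify(token: int, sig: int) -> bool:
    """Any website checks the signature using only the public key."""
    return pow(sig, e, n) == token

token, r = 42, 7  # r must be coprime to n
sig = unblind(sign_blinded(blind(token, r)), r)
assert verify(token, sig)  # site learns "age verified", not who you are
```

The ID service only ever sees the blinded value, which is statistically unrelated to the token the website sees, so even if the two collude after the fact they cannot match a verification event to an identity check.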
But we cannot ever elevate the debate to that level, because nobody can be arsed to get informed about it.
Boiling kid's (and adult's) brains probably makes them a decent chunk of money, either directly via salary or indirectly via stocks. Ensuring kids remain healthy makes no money. An unfortunately large slice of the tech sector doesn't give the tiniest shit about the health of our broader society or any group in it if it means their lines stop going up, or even go up slightly less fast.
> The sentiment that having to present our ID to use tiktok gives us the heebie-jeebies, and for good reason.
So push for privacy-preserving age verification, such that you don't need to leak your ID to anyone but TikTok can still prevent kids from accessing it?
That's my problem with the debate: people like you seem very proud to be uninformed. It exists as much as end-to-end encryption exists. It's cryptography, it's not up to debate.
But people who have no clue are very vocal about their belief that it does not exist.
Imo, both. The more right wing people started to have aggressively anti-EU stance once Vance openly stood on the side of Orban and against EU and democracies in general.
And some people see tech companies as worship worthy and trying to restrict them is kind of a blasphemy.
The Vance thing is far too recent, and inconsequential across Europe?
The sentiment precedes all that and mostly stems from the EU being in some ways originally lib-left dominated, and still being seen as facilitating non-EU migration.
Regular right wing people (aka not one of the many parties potentially receiving donations) don't tend to love giant webtech companies. Especially since they feel like they're often used as a tool against them and aren't a local thing that draws nationalists either.
A focus on privacy also isn't a very left-right defined thing, though I have noticed that the most far-reaching expressions of it come a bit more from the further ends of that spectrum (you'll see some very left-leaning people at FOSDEM's privacy-focused/related stands, for example).
The most on-brand solution for the EU would be to require mobile phone users to upload brain scans in real-time so the state can check for neural activity associated with addiction.
The effects of social media usage are surely reversible by stopping using it and then some retraining of the brain.
The effects of years of smoking are not so reversible in terms of what it does to your body.
This is a reasonable, but optimistic take. The effects of social media on developing brains will need to be studied to be sure the effects are reversible. Furthermore, how extensive is the damage and how long does it take to reverse? Are older people less likely to recover?
I mean, do you have any evidence that the brain is irreversibly damaged by social media? I have not seen any, but I have seen evidence that there is permanent cell damage from smoking.
If we step back and look at this rationally though, can anybody point me to any peer reviewed studies (the actual studies, not clickbait articles written based off the studies) showing that social media is anywhere near as physically harmful or addictive as cigarettes?
I'm totally open to the idea that engagement algorithms are inflaming social division. I'm less convinced that the children are the ones being harmed, however. I think it's the adults who grew up in a media monoculture where the default was trust who are more susceptible to negative outcomes.
When things change, the young are the ones more likely to adapt.
At least you are consistent
at least we can use VPNs, for now
protecting the children is always a good pretext, but the real goal with this addiction law is to have extra leverage over the platforms
Where is the line drawn?
Smoking has definite physiological effects. Molecules bind to receptors or neurons and initiate cascades/responses.
I don't see this with a user interface in a browser at all. If you wish to reason that way, why are regular ads allowed? They piss me off. Why do I have to see them? They cause my brain an addiction to want to buy crappy products. So why is there no ban here?
Let's face it - the EU is on a path of "Minority Report" here.
> I think the EU and other jurisdictions should really look beyond just limiting this stuff to kids
Yeah, they try to restrict what we can do. We oldschool people call this fascism. See the EU trying to destroy VPNs. And there is a meta-strategy we see here: many lobbyists are activated and try to "sync" laws that never made any sense into as many countries as possible. I see where corruption happens. And I don't buy the "we protect kids" lie for a moment.
It's just so tedious to see this "information cannot harm anyone" theory in a context where a huge fraction of the people spend their entire day jobs trying to make phishing less effective.
That's why I make the cigarette comparison. They know it's bad, but it's profitable for people to be addicted to it. I think it's bad for adults for a different reason: I've seen adults in my own life get influenced by things they see online (conspiracy theories, pseudo-science around health and nutrition, political radicalization). And this happens because it's profitable for people to be hooked on these topics with false or misleading information, not because it's true. That's not to say this never happened before recommendation algorithms, but it's a difference in magnitude. I think that's the reason we are seeing such a dramatic rise in political polarization: because it's profitable.
This is an odd thing to do, because:
- information is real, it exists in the universe.
- the harm of social media is real, as measured by many of the same measures as the harm of smoking
Why not do something about ads? No, that's a good thought, we should do that too.
I think a decent conceptualization here is "psychic damage", as in a video game. These things deal a lot of it.
I don't think it's an odd thing to be opposed to that line of thinking.
I expect tabloid journalists and grandstanding politicians to do this, it really scares me when HN users that should know better do it.
You know what, why don't you go buy a carton of cigarettes and some heroin, and go use that for a few months. Since it's the same thing as looking at a news feed you shouldn't have to worry about addiction because you've already done that and not gotten addicted to it, so you should be fine, right?
No, you aren't. You are trivializing what Depression actually is by making flippant comments like that. You're also letting everyone know that you are utterly ignorant of what Depression actually is.
Do better.
Come on, this is an absurd statement. Governments regulate what people can do, yes. It’s part of their role. It’s why I can’t sell tainted meat on the street. It’s a good thing.
Of course there is a line you can cross where the control becomes excessive but “the government sets rules around what people can do, that’s fascism!” is absurd.
Fascism isn't government making laws, fascism is "we're the superior race, kill anyone who disagrees".
I wouldn't call this move fascism, even if can be considered a bit heavy handed.
Hacker News is a site that presents data by algorithm. Under your definition, Hacker News goes away, too.
A more accurate framing would be that they’re going after personalized recommendation algorithms. It’s not obvious that offering a recommendation algorithm would mean that the site is no longer an impartial common carrier.
Depersonalized algorithms or recommender systems aren't inherently better than personalized ones. HN is an exceptional example of the former but I think at scale people would come up with a different crop of complaints for them.
But I'd agree that it's personalisation rather than just curation that's the issue.
I think even requiring sites to have a "bring your own algo" version (where ads are targeted to the algorithm, rather than the person) would cure a lot of ills.
As is, even with something like Spotify, where you _are_ paying, there's no easy way to "reset" your profile to neutral recommendations.
Same thing. There is no Hacker News if Y Combinator becomes liable for user submitted content.
It’s an obvious backdoor play to make sites go away. If a site becomes liable for content posted, you cannot allow users to post content without having the site review and take responsibility for every comment and every post.
The people proposing it haven’t considered how damaging that would be for the ability of individuals to share ideas and their content. When every site with “an algorithm” is liable for content posted, nobody is going to allow you to post something. It’s back to only reading content produced and curated by companies for us. Total own-goal for the individual internet user.
If you have an algorithm whose sole purpose is to drive "engagement" with your own platform (by intentionally and purposely pushing clickbait, ragebait, and media that keeps reinforcing your clicks), you should no longer get Section 230 protections: you are no longer a neutral party. These algorithms exist to create echo chambers and keep you clicking so you can consume more ads.
I would love to hear other ways of solving the problems of social media.
Oh no.
It doesn’t have to go away, just switch to chronological sorting.
“Chronological only” might work for something like Twitter, where you’re choosing to follow specific individuals to see their posts, but it can’t work for curation sites like HN/Reddit.
so be it.
That's the nature of addiction.
If the user can search like in Youtube then how do you rank the results? That's also an algorithm.
It isn't pretty easy to solve at all.
If you search on youtube then it can rank any way it wants, just not use e.g. anything from the viewing history. No "related videos" column. That's what YouTube used to be. But YouTube (unlike TikTok) worked well before it had rabbit holes.
For TikTok the situation is worse. Their whole app just doesn't exist unless you have the custom feeds. This would make YouTube be 2010 YouTube, Instagram be 2010 Instagram (great!), but it would effectively be a ban of TikTok's whole functionality (again, great!).
Then all of the people who have trouble with self-control on infinite feeds can enable this mode, and everyone who wants the recommendation algorithm can leave it on.
This is the optimal outcome that actually serves everyone’s personal goals for using these platforms. If we get into a conversation where some are demanding we don’t allow anyone to use a recommendation algorithm because they feel the need to control what other people see, that’s a different conversation. That conversation usually reveals other motives, like when people defend the algorithm sites they view (Hacker News, Reddit, whatever) but target sites they don’t, like TikTok.
I’d love for there to be an option to have them as default. It’s obvious ($$$) why they won’t do that unless forced to by regulators.
IMO they should not only be opt-in, but should actually be required to publicly list the parameters and weights they’re using and allow users to tune those weights.
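What that could look like in practice: a ranker whose every parameter is published and whose weights the user can override. A minimal sketch, where the weight names and post fields are my own illustrative assumptions, not any real platform's API:

```python
# Toy sketch of a fully transparent, user-tunable ranker.
# Weight names and post fields are made up for illustration.
DEFAULT_WEIGHTS = {"recency": 1.0, "likes": 0.5, "followed_author": 2.0}

def score(post, weights):
    # Every parameter is public; every weight is user-adjustable.
    return sum(weights[name] * post[name] for name in weights)

def rank(posts, weights=DEFAULT_WEIGHTS):
    # Highest score first; no hidden inputs, so anyone can reproduce the order.
    return sorted(posts, key=lambda p: score(p, weights), reverse=True)
```

A user who doesn't care about likes could pass `weights` with `"likes": 0.0` and get a verifiably different feed, which is the point: the ordering is auditable because nothing feeds into it except the listed parameters.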
> IMO they should not only be opt-in, but should actually be required to publicly list the parameters and weights they’re using and allow users to tune those weights.
I wonder how many people here know that many of the popular apps have rolled out finer controls for recommendation algorithms so you can do this. On Instagram you can go in and see the topics your recommendation algorithm picked up and modify them manually if you like.
I think the goalposts will just continue to move, though.
People here know that they have finer controls (which are still not actually that fine and also don’t really make the parameters auditable). The problem is these settings are hidden away in places most people will never look. And also, I stress again, none of this is actually auditable because they treat these as some kind of trade secret special sauce and there’s really no reason society should feel obligated to support or enable this business model.
But the thing about regulation is that it doesn’t need to be water tight. You can just target a small handful of large players and it will improve the situation in practice. It doesn’t matter if 998/1000 apps use addictive feeds if the largest two apps don’t and they have 90% of users/views.
If you regulated domestic companies out of existence, global options would pop up in their place. You could try to block them all in app stores but people would go to the web views.
Obviously we also need the ability to regulate global options. Typically, if these actors truly become big, they have a presence in their "target" countries, such as ad sales operations.
If they don't know what they want, perhaps a good use case for the newfangled LLM-search we have now would be "What's an interesting or popular topic I haven't searched for before?" to which the AI will respond with a list of newly searchable terms.
Any ordering is technically an algorithm, so yes, just "banning algorithms" doesn't work.
A better alternative could be "the algorithm must be public and reproducible by the user".
"Sort the posts of the people I follow in chronological order" you're good
"Sort the posts by the output of a blackbox trained on user data" too bad you're a publisher and are responsible for what people post.
For new users? A search bar and a set of (human? AI?) curated seed recommendations that the platform is comfortable with being held liable for.
Rank them by best keyword match from their search query; if matches are equal, order them from newest posted to oldest posted.
Done.
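That rule really is only a few lines. A sketch, assuming each post is a dict with a "text" string and a "posted_at" unix timestamp (both field names are assumptions):

```python
def rank_results(posts, query):
    """Rank by keyword overlap with the query; break ties newest-first."""
    terms = set(query.lower().split())

    def match(post):
        # Count how many query terms appear in the post's text.
        return len(terms & set(post["text"].lower().split()))

    # Negate both keys so sorted() puts best match first, then newest first.
    return sorted(posts, key=lambda p: (-match(p), -p["posted_at"]))
```

Nothing about the user's history enters the function, so two people typing the same query see the same results.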
Whatever is latest posted across their followings/subscriptions?
I don't consume any content from my friends on something like TikTok, where I'm interested in discovering people who have good content on topics I'm interested in. I don't know who those people are, and I want to discover new ones as they come up, not just follow some already-popular accounts.
The whole idea here is to make content consumption more deliberate and mindful rather than just opening the app and vegging out to an endless feed of slop.
You can (and should) argue that such a simple algorithm doesn’t “count”, but fundamentally the exact wording of the grandparent post never works, legislatively.
Lawyers will lawyer.
The problem always has been "(personalized) opaque algorithms". Time-sorted by followers isn't really opaque, nor is "sorted by likes" or whatever. The problem is always pulling in parameters that a user either has no active control over or that are so variable they effectively could be random.
Like social media 1.0.
https://news.ycombinator.com/item?id=37053817
"So the user opens the app - what is the first video you show them?"
You don't. How about that?
Its okay if they have some hard problems to solve.
You rely on unambiguous, "physical" properties of the videos.
There is a physical property of all the videos: the time of publication.
There is a physical property of all the channels: did you subscribe to it, or not?
So, you show, in (reverse) chronological order of publication, the list of videos published by the channels you subscribed to.
Now, of course, a brand new user would have no subscription - you show them a search box.
But then, now, your search algorithm has to weight the various channels that match - but your algo can be relatively transparent, relatively auditable, and the same for all users (unless given explicit preferences, and of course national laws, etc, etc...)
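The subscription feed described above is trivially auditable code. A sketch, assuming each video is a dict with a "published" timestamp and each channel's upload list is already sorted newest-first:

```python
import heapq

def subscription_feed(channels):
    """Merge per-channel upload lists (each already newest-first) into one
    reverse-chronological feed. Same output for every user with the same
    subscriptions: nothing personalized, nothing hidden."""
    return list(heapq.merge(*channels, key=lambda v: v["published"], reverse=True))
```

With `reverse=True`, `heapq.merge` lazily interleaves the descending-sorted inputs into one descending stream, so this scales to many subscriptions without re-sorting everything.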
I'm sorry, but, I have a "subscriptions" page in youtube or substack, and they're chronological, and they show me what I want to watch. You keep that.
There is a "home" page in both services that is algorithmically built, and it shows me crap that the algo wants me to watch. You get rid of that.
Do this, and I can consider you a "neutral" actor, and accept that you shift the blame to content producers.
Or, keep the algo feed, but don't take money from advertisers when I watch yet another flat-earther video because YOU decided it was trending.
If you want to decide what I watch, and make money from that decision - congrats, you are an editor. You get the earnings, and the responsibility.
Please don't tell me, with a straight face, that the people who build the algo don't "decide" what I watch. If they want to tweak the algo to downgrade the flamewars and outrage and conspiracy theories and violence and abuse, they can. They do not want to, for business reasons. [1]
That's fair, up to a point - we need publications with editors that agree on having "edgy" content. I'm not advocating for blanket censorship.
I did not like social networks preventing me from _sharing_ articles about Biden's son's laptop (this actually went beyond the law, but somehow they managed to find the resources and programmers to implement _that_, because, at the time, the execs were cozying up to a different administration).
I'm advocating for "accepting your responsibility as an editor".
[1] https://en.wikipedia.org/wiki/Frances_Haugen#October_5,_2021...
Is adding advertisements an algorithm?
Is including likes an algorithm?
Is automatically starting the next video after a previous one has finished an algorithm?
Is infinite scroll an algorithm?
Etc
I'm not saying there aren't infinite edge cases and second-order effects - but we tolerate those already for many things. I'm not pretending this is simple or even desirable - I'm merely stating it's possible if we want to do it.
My biggest fear is that (like the UK Online safety act) this acts to favour the huge corporations because they are the only ones that can afford a team of lawyers. Any legislation should aim to carve out exceptions to avoid indirectly helping monopolies.
Just look at the malicious compliance that Apple and Google have around the App Store stuff, they’ll find a way to comply with the law and implement different addictive dark patterns.
I’m not saying that I disagree that these companies need to be regulated, I absolutely do. I just think it’s going to be a complicated process, and not “oh just ban everything that’s an algorithm”.
And I have absolutely 0 faith in companies like Meta willfully complying.
The US may have this dual incentive structure since it wants to build its tech giants while limiting their control, but the EU doesn't. The arrival of a foreign tech social media giant might make the legislation a bit more palatable to pass.
It will undoubtedly be complex to regulate all dark patterns away. But there are a few obvious, easy wins. It'd be a shame to make perfect the enemy of good.
But here’s the real problem: people don’t care. And I say that as someone who hasn’t used social media since 2014.
My observation of people’s behavior indicates that when all is said and done, people don’t care—they would rather get the endorphins from posting, liking, following, etc.
But the solution is to allow people to control their own algorithm, and to have open source solutions where communities manage their own social network.
It’s not the algorithm that is the problem; it’s that people don’t have the choice to curate their own content.
There’s no political organization (yes, Mamdani actually out-raised Cuomo, so let that sink in) that isn’t being actively bribed.
Does anyone know where it’s coming from? I can certainly believe that incompetent jurisdictions have a ton of issues with people misapplying the law and using loopholes.
> This is pretty easy to solve. If you present data by algorithm, you are no longer an impartial common carrier and are liable for the content you present.
But this is not in fact easy. It's hard to define what "present data by algorithm" means in a coherent way, and it's hard to extend liability for the content you present to liability for the manner in which you present it. You could make it work, if for some reason you really wanted to, but it's easier to pursue the strategy described in the source article of regulating specific abusive patterns.
No they’re engineers who think rules have to function as rigidly in every field as they do in programming.
They either can’t or don’t want to accept that the law is a social construct and what it actually means to you is determined by the weight of precedent, as applied by judges and regulatory bodies. Things are vaguely worded in the law all the time. If people want to dispute how the enforcement is done they sue and judge decides how the rule should be applied.
An easy benchmark to set up: any feed that displays the data in a way other than the following is considered an editorial choice, and thus the platform is liable as a publisher:
1. In a chronological order, and only filtered based on user selected options.
2. In any other order explicitly selected by the user.
An exception can be made to allow filtering out content that violates the platform's terms and conditions.
Alternatively there can be no exception, effectively making these platforms unworkable. This is also a choice. We do not need these platforms, including this one.
Said regulation could then mandate that, after calibration and correction, the feed be pulled back by retraining the algorithm and adjusting it via rapid A/B tests.
This is all doable by the companies themselves, but since they won't, the key is to mandate it and publish the aggregate results regularly, like making it part of the quarterly SEC shareholder reporting requirements or something.
The moment you add other entities to the list (e.g. ads in between posts), then it's also subject to the same restrictions.
And then we’ll end up with another cookie-banner style law which had good intentions but actually missed the point entirely.
The cookie banner law is fine for the most part. Sites that do the malicious-compliance thing of over-prompting the user for permissions are providing a strong signal that they are bad actors. It’s about as much as we can expect without banning them entirely…
I mean "Is including likes an algorithm?" You might as well ask if having a dog in the video is an algorithm. Any question about "likes" would be if you're manipulating the video selection based on likes, or is the user given a control to manipulate the video selection based on likes. If it's you it's an algorithm. If it's the user, it's a control. If you lie about the likes, then it's an algorithm. If you're transparent about the likes, then it is a control.
The other ones aren't even worth discussing. You might as well ask if having a blue logo is an algorithm, or if Comic Sans is an algorithm. "It's all so complicated!"
-----
edit: that being said, the EU does not care about this issue at all, and has had plenty of mandate and plenty of time to have done something about it if it did. They are also going to say "it's all so complicated." Because their problem is the unpopularity of center-left neolib governments that are just barely holding on with extreme minority support through bureaucratic means because they wrote the regulations. They want to keep what came for British Labour during the recent council elections from coming for them.
So I guarantee that content will somehow become an "algorithm." The goal is to keep people who don't like them from speaking to each other.
I think the EU has fully digested this point, and is focusing on the “addictive design” phrase instead, for good reason. It makes it obvious that the problem is a bit fuzzy and related to the behaviors induced, not some cut-and-dry algorithmic thing.
I suppose the answer could be that only platforms that do indeed allow spam or worse are impartial, but that is a tricky position to be in.
It's been argued to death already, I just have to express shock that I'm still seeing this non-starter constantly here.
ironically, i'm only reading this kind of low brow take because people upvote it, not because it makes any sense.
A lot of adults need this too. The addictive apps are very well designed, while most blockers are either too easy to ignore or too annoying to keep using.
I built a small iOS blocker because I had the same problem. Making it strict enough to actually work without making people hate it is the main challenge.
I think they also said AI companies go offline during exam hours, but I may have got that wrong.
I feel like we’ve completely lost the plot when we’re starting to invite government partial Internet shutdowns as a good idea. This is a totalitarian government play.
There's been criticism about the culture surrounding platforms like Mastodon/Bluesky that anticipated this.
Maybe they'll feel differently when they have to upload their ID and face scan (which later gets leaked) just to be able to read a recipe for beer or whatever.
As long as all people are paying for your dumb decisions, it is reasonable to expect the government to reduce the frequency of dumb decisions by adequate means.
I remember in my state, it was initially only a citation that you couldn't be pulled over for. Then they flipped that and started pulling over for it. Why? Pure fucking money grab.
Me not wearing a seatbelt means I risk getting splattered. Not you, or anyone else.
Except who pays for your million-dollar reconstructive surgery and rehab? I don't suppose you will cover that out of pocket to avoid burdening your fellow insurance payers with your reckless behavior?
Physics says otherwise. In a collision you don't decide where your body is yeeted, and your skull could end up inside the skull of a passenger using his seatbelt. Don't be a moron. https://youtube.com/shorts/n2yLMGA_YSA?si=AlvRgfpb-PJxGCBw
These giant companies pour millions upon millions of dollars into engineering their services to be as "engaging" (read: addictive) as possible with the specific goal of making users spend more time on them.
Against that, the average person has no chance. The power balance is hugely uneven.
A responsible government which actually cares for its people has a duty to protect them from abuse like that.
They are bad for everyone and if you’re willing to regulate them, make them illegal to be used on anyone.
It just says that platforms which use such methods often target kids.
Most adults also lack the maturity and judgement, but allowing adults to make bad decisions is usually less dangerous than giving someone else the power to decide which decisions are too bad to permit.
HN having pages instead of a feed or endless list is one of the things I really like about it.
The other thing I really love about HN is that titles are all supposed to be boring and to the point. The guidelines[1] for titles are excellent and I wish more of the web and honestly legacy media too would behave that way. Things that are of no interest to me are not trying to waste my time and attention.
[1] https://news.ycombinator.com/newsguidelines.html
The actual point is that they are designed to be addictive. "endless scrolling" is just an implementation detail. If you "ban endless scrolling", they'll still be using every other trick to make it addictive.
Social networks in general should be banned for underage people, that's the thing. And the social network itself should be liable for verifying the age of its users, like a nightclub is liable for people who enter it. No bullshit operating-system age verification that's, trust me, totally intended to protect kids and not to spy on you.
What makes you say that? It's well known that the addictive patterns in these apps trigger dopamine the same way drugs do. In a sense, dopamine is the "chemical substance" central to the addiction. Heroin and algorithms are just different ways to get it.
https://med.stanford.edu/news/insights/2021/10/addictive-pot...
This is a lame reduction of brain chemistry that has been used to push agendas. Dopamine is not equivalent to addiction.
> posts a lame reduction of the argument
"Addictive" would then be something that (for a substantial portion of the population) has a tendency to cause addiction.
The difference compared to a book is that a book is not personalized for each individual reader, so the example is not a good one IMHO.
Don’t get me wrong, if I had my way TikTok wouldn’t exist for anyone, adults included. It’s just so strange to me that so many parents hand their 7 year olds unrestricted access to TikTok and expect someone else to keep their kid safe.
I read a post about someone saying his wife worked for a snack company. They used MRI scans to see how much salt (or sugar) they should have in the snacks to maximize the response in the brain. Sounds disturbing, right?
Well engagement engines are the same thing. It's artificial intelligence optimized to get people to react and stay addicted. Basically AI doing harm. It's not what is best for the individual in terms of health. It's what generates most money to the owner of the platform.
It should not be allowed to build a business around something that exploits human brains. Basically biohacking our brains for profit.
In contrast, in Western Europe, my son is now in the sixth grade, more than half his class doesn’t have phones, phones are absolutely forbidden on school grounds and at school activities, and they are now doing a class trip where they were told that there’s a pay phone at the hotel, in case they want to call the parents - our son promptly informed us that he’ll rather buy a pack of Pokémon cards than call us and 3 days is not so much anyway.
And it is not only at school; he travels for tournaments with his team every other week, and mobile phones are absolutely forbidden on the team bus. Children read, play games (including chess on a magnetic board), sing and exchange stories for hours at a time.
And since they're addictive, kids will find a way to get them even if their parents don't allow it. That's why it's most effective to require ID when you're buying cigarettes than it is to shame people for not being perfectly vigilant parents.
BTW, I'm not saying age verification is the solution here. IMO, we should instead ban addictive social media completely. Eg, target specific design patterns/features, require companies to disclose how their algorithms work to regulators, etc.
Personally, I think some parents are afraid of their children growing to resent them for infringing upon their "freedom" by keeping them away from the dangers that social media and other technologies present.
I agree with you, but only in theory. Because that's where we are now and it does not seem to work that well.
Maybe through more education? But then again, I think reducing addictive tactics like endless scrolling could be part of a two-pronged attack.
With alcohol we have education on what happens, but we also have laws that regulate it.
It also makes no real sense to me.
Nothing against US mega-corporations paying fines, mind you, but I equally do not trust the EU bureaucrats either. There has to be a limit to what politicians can do, what corporations can do and what bureaucrats can do, while retaining a democratic base system at all times. If you go against addictive design, then why not against ALL ads? I don't want to see any ads. Ublock origin made me change my mind here - I literally see no reason as to why I would ever want to burden my brain cells with irrelevant content.
This is a bit different to website layout though. I equally fail to see why the EU should meta-regulate what is permissible in regards to design and what is not. Why would I have to accept any random EU bureaucrat here? If a user interface sucks, I'd rather expect ublock origin to kill it off. This could also be community maintained. No need for the EU to waste taxpayers' money. After the EU wanted to sniff for age data and then also declared its holy war against VPNs, I do not trust anything coming from Brussels. Even less so with Ms. Leyen in charge - can't the anti-corruption offices in Germany get rid of such lobbyists?
Which also makes it a matter of parents and grandparents setting bad examples.
If you don't provide a better alternative, the "kids" (and please, stop using "kids" as an excuse, because everybody can see through it now) will just stick to these platforms because, believe it or not, these platforms are much MUCH safer than the alternatives.
How about, let's see the real problem here: 24% of EU children at poverty risk or social exclusion (2024), see https://ec.europa.eu/eurostat/web/products-eurostat-news/w/d.... That's not just a statistic about children, it's also about their parents.
Do you know that if you go outside, there's this huge risk of having to PAY for stuff you don't actually need to live? Like transportation to places that don't bring you wealth, like drinks you buy even when you're not that thirsty, like movie tickets just so it won't be too awkward after all the dialogue options are exhausted? Did these politicians somehow forget that all of these cost money, in this economy that they helped to create?
And that is not to mention the REAL risks, such as drugs (the bad ones), rude or crazy drivers, and unpleasant adults whose only purpose in life is to earn enough money to keep themselves going a little bit longer, just to name a few.
..... ORRRR, you can just stay in your comfortable home, sit on your soft and warm sofa/couch, and swipe your life away on TikTok or Instagram for free, safely.
You see the problem here?
I'm really sick and tired of these politicians putting up this act of pretending to "love children", when in reality what they do is put up easy patches to hide the real problem, which is poverty and inequality. That's the real problem.
STEM or verifiable educational content only. Have a review team and an AI that moderates content. No politics, no stupid dances, no monetization of content, no slop, and only credentialed people can post on certain topics (i.e. a delivery driver shouldn't make posts on theoretical mathematics).
Like adults spending their hours scrolling through infinite feed is somehow beneficial to the society?
I have a hard time understanding this.
We have plenty of adults with terrible social media addiction that is destroying their lives, and nothing being done about it.
They have to restore interop with noscript/basic HTML web engines (past, present, and future).
Then, they have to be careful with their file formats: for instance, you never give "carte blanche" to a disgusting format like PDF; you are very careful to define an as-simple-as-possible subset of it (with some internal software for validation).
I'm very happy they're taking a stance. I've seen too many messed up kids and there's no doubt the addictive design plays a big role in the problem.
Look at age verification: it's very easy and very safe to say "nobody sane would think that it is a good idea to force people to show their ID to every website they want to access; it will obviously leak the IDs, and that is very bad!". While it is not wrong, it is manipulative: that is not the only way to implement age verification. In fact, there is technology that would allow age verification in a privacy-preserving manner: some service that already has access to your ID can give you a token that proves your age, and you can then use this token to access a website. The service cannot know where you use the token, the website cannot know your ID, and they cannot collude.
So the constructive debate around age verification is this: assuming we implement it properly (i.e. in a privacy-preserving manner), is that something that we want or not? Does it solve a problem, or at least does it help?
But we cannot ever elevate the debate to that level, because nobody can be arsed to get informed about it.
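For the curious, the unlinkable-token idea can be illustrated with a textbook blind RSA signature. Toy parameters, deliberately not real cryptography, and the framing of who runs which step is my own sketch of such a protocol:

```python
# Toy blind-RSA sketch of privacy-preserving age verification.
# Textbook RSA with tiny primes: illustration only, NOT real crypto.
n, e = 3233, 17          # issuer's public key (p=61, q=53)
d = 2753                 # issuer's private key

ATTESTATION = 42         # stands in for the claim "holder is over 18"

# --- User: blind the message so the issuer can't link it to the token later.
r = 99                   # random blinding factor, gcd(r, n) == 1
blinded = (ATTESTATION * pow(r, e, n)) % n

# --- Issuer: checks your ID out-of-band, then signs without seeing the claim.
blinded_sig = pow(blinded, d, n)

# --- User: unblind; the result is a plain RSA signature on ATTESTATION.
token = (blinded_sig * pow(r, -1, n)) % n

# --- Website: verify the token against the issuer's public key,
#     learning only "over 18", never the user's identity.
assert pow(token, e, n) == ATTESTATION
```

Because the issuer only ever saw the blinded value, it cannot match the final token to any signing request, which is the unlinkability property the comment above relies on.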
Also, nobody voted for the Commission.
So push for privacy-preserving age verification, such that you don't need to leak your ID to anyone, but TikTok can still prevent kids from accessing it?
No such thing.
But people who have no clue are very vocal about their belief that it does not exist.
And some people see tech companies as worship worthy and trying to restrict them is kind of a blasphemy.
The sentiment precedes all that and mostly stems from the EU being in some ways originally lib-left dominated, and still being seen as facilitating non-EU migration.
Regular right wing people (aka not one of the many parties potentially receiving donations) don't tend to love giant webtech companies. Especially since they feel like those companies are often used as a tool against them, and aren't a local thing that appeals to nationalists either.
A focus on privacy also isn't a very left-right defined thing, though I have noticed that the most far-reaching expressions of it come a bit more from the further ends of that spectrum. (You'll see some very left-leaning people at FOSDEM's privacy-focused/related stands, for example.)
That’s a bit outdated since Musk bought Twitter.
I'm posting from the EU.