26 comments

  • jrflo 1 hour ago
    I heard someone on a podcast call social media algorithms "the modern-day cigarette" and that really resonated with me. These companies know their product is addictive and bad for users, but they keep pushing it anyways. Like cigarettes, it's bad for everyone, not just kids. I made an algorithm blocker for Safari because of that and it's actually crazy how much more pleasant social media is if you don't have recommendation algorithms at all. I think the EU and other jurisdictions should really look beyond just limiting this stuff to kids, but I understand why it's starting there...
    • Aurornis 44 minutes ago
      If you didn’t notice, this comment is an ad for a paid app trying to capitalize on social media anger. I respect the hustle, but this is not a neutral comment on the topic, given the financial interest. There are many free plugins for filtering social media feeds if someone wants one.
      • jrflo 41 minutes ago
        I removed the link, just thought it was relevant to the discussion.
        • tolerance 29 minutes ago
          I was going to make the same accusation as the one above, but I skimmed your comments and it didn't seem like you were the sort to have ill intent behind bringing it up. Next time you might want to include one of those stuffy "Disclosure" notices.
          • jrflo 18 minutes ago
            That's a good idea, thank you for the feedback. I have a hard time finding the line between "advertising" and "sharing something I built" on this site sometimes.
      • cyanydeez 19 minutes ago
        If you haven't been on HN long, you'd believe this was some aberration as opposed to the norm. This is a YC-run forum, so it's pretty normal for comments to contain software advertisements.
    • wackget 48 minutes ago
      The modern-day cigarette is such a perfect metaphor for social media. A cabal of unfathomably wealthy companies spreading their harmful products across the world; making them as addictive as possible while actively burying the research which proves how harmful they are. I truly hope one day we'll look back on social media and smartphone use the same way we regard smoking.
      • ErigmolCt 0 minutes ago
        I think the smoking comparison works best when applied to the engagement mechanics rather than "social media" as a whole
      • SirMaster 24 minutes ago
        But if you stop using social media, you don't still have a risk of lung cancer in the future.

        The effects of social media usage are surely reversible by stopping use and then some retraining of the brain.

        The effects of years of smoking are not so reversible in terms of what it does to your body.

        • wussboy 20 minutes ago
          I hope you're right but I think you're dead wrong. Social media has not only affected the mental health of millions of people negatively, it has brought about social, political and economic harms that will affect the planet for generations.
        • somebehemoth 16 minutes ago
          > The effects of social media usage are surely reversible by stopping use and then some retraining of the brain

          This is a reasonable but optimistic take. The effects of social media on developing brains will need to be studied before we can be sure they are reversible. Furthermore, how extensive is the damage, and how long does it take to reverse? Are older people less likely to recover?

        • enedil 14 minutes ago
          Why are you so sure that the effects of social media are reversible?
          • SirMaster 2 minutes ago
            Neuroplasticity. Seems better than the damage caused to your lungs and cells from smoking.

            I mean, do you have any evidence that the brain is irreversibly damaged by social media? I have not seen any, but I have seen evidence that there is permanent cell damage from smoking.

    • ErigmolCt 2 minutes ago
      The problem is when the product becomes an optimization machine for attention
    • LPisGood 39 minutes ago
      This is still a recommendation algorithm, just a less enjoyable/addictive one. Any process by which you decide what to show to a user is an algorithm.
      • jrflo 12 minutes ago
        I see what you're saying, I should have been more specific. I mean recommendation algorithms that are artificially created by platforms to drive more traffic. I think the HN method of user votes without manipulation by the platform is better but not ideal; the best method is 100% user-curated content (e.g. following specific accounts on Instagram/Twitter, RSS feeds, etc.), which I would argue is not really a recommendation algorithm. I think people don't realize how much the content they see influences their thoughts, and how much that content is chosen for profitability over anything else.
    • p2detar 1 hour ago
      Look up images in Google with `eu cigarettes boxes`. Banning is a thin wedge, but I think we need something like these warning labels for social media.
    • pembrook 24 minutes ago
      Glad to hear a false comparison to something that's actually physically/chemically addictive really resonated with you (a.k.a. affirmed your already existing beliefs in this moral panic).

      If we step back and look at this rationally though, can anybody point me to any peer reviewed studies (the actual studies, not clickbait articles written based off the studies) showing that social media is anywhere near as physically harmful or addictive as cigarettes?

      I'm totally open to the idea that engagement algorithms are inflaming social division. I'm less convinced that the children are the ones being harmed, however. I think it's the adults who grew up in a media monoculture where the default was trust who are more susceptible to negative outcomes.

      When things change, the young are the ones more likely to adapt.

    • kjeksfjes 44 minutes ago
      [flagged]
    • mrits 55 minutes ago
      fascism for the greater good?
      • tgv 51 minutes ago
        What are you trying to imply (while hiding behind a rather unsuitable form of irony)? Not that the EU is taking away essential freedom, I hope?
      • toasty228 51 minutes ago
        People stuck in 1940 and not able to imagine new words for new things should not be allowed to discuss these topics online.
        • mrits 18 minutes ago
          "should not be allowed to discuss these topics online."

          At least you are consistent

        • rvnx 48 minutes ago
          the correct modern word is censorship

            DNS_PROBE_FINISHED_NXDOMAIN
          
          there has to be a bug in Europe on some news websites...

          at least we can use VPNs, for now

          protecting the children is always a good pretext, but the real goal with this addiction law is to have extra leverage over the platforms

      • LPisGood 27 minutes ago
        Is any form of government regulation of corporate actions fascism in your view?

        Where is the line drawn?

        • mrits 23 minutes ago
          This is a clear line and we fought Europe already over this in the last century. There is absolutely no world where we need a group of people telling us how long we are allowed to be on TikTok. It is inexcusable to think this way in a free democracy.
          • vrganj 8 minutes ago
            Part of a free democracy is that we get to think in whichever way we want, actually.
            • mrits 2 minutes ago
              Democracy depends on shared rules and institutions that limit what you can do in pursuit of those beliefs. There is certainly a line to be drawn of where our freedoms begin and end. TikTok is nowhere near that line.
      • vrganj 9 minutes ago
        Please share the definition of fascism you used to come to this conclusion.
    • shevy-java 52 minutes ago
      That rationale never convinced me.

      Smoking has definite physiological effects. Molecules bind to receptors or neurons and initiate cascades/responses.

      I don't see this with a user interface in a browser at all. If you want to argue that it works that way, why are regular ads allowed? They piss me off. Why do I have to see them? They condition my brain to want to buy crappy products. So why is there no ban here?

      Let's face it - the EU is on a path of "Minority Report" here.

      > I think the EU and other jurisdictions should really look beyond just limiting this stuff to kids

      Yeah they try to restrict what we can do. We oldschool people call this fascism. See the EU trying to destroy VPNs. And there is a meta-strategy here: many lobbyists are activated and try to "sync" laws that never made any sense into as many countries as possible. I see where the corruption happens. And I don't buy the "we protect kids" lie for a moment.

      • SiempreViernes 38 minutes ago
        Hippocrates was already linking the mind to the physical brain, and if you've never felt a physical reaction from looking at the fairer sex I feel bad for you, son; if you got ninety-nine problems, at least sex ain't one.

        It's just so tedious to see this "information cannot harm anyone" theory in a context where a huge fraction of people spend their entire day jobs trying to make phishing less effective.

      • jrflo 42 minutes ago
        With some of the legal discovery happening at Facebook, we know that the company did internal research showing that its products can be addictive and detrimental to kids: https://www.wsj.com/tech/personal-tech/facebook-knows-instag...

        That's why I make the cigarette comparison. They know it's bad, but it's profitable for people to be addicted to it. I think it's bad for adults for a different reason: I've seen adults in my own life get influenced by things they see online (conspiracy theories, pseudo-science around health and nutrition, political radicalization). And this happens because it's profitable for people to be hooked on these topics with false or misleading information, not because the information is true. That's not to say this never happened before recommendation algorithms, but it's a difference in magnitude. I think that's the reason we are seeing such a dramatic rise in political polarization: because it's profitable.

      • sixo 46 minutes ago
        To hold this view you have to think of information as "not real", not like "real" molecules and receptors, the mind as distinct from the body, and then restrict the legal definition of harm to only "real" things.

        This is an odd thing to do, because:

        - information is real, it exists in the universe.

        - the harm of social media is real, as measured by many of the same measures as the harm of smoking

        Why not do something about ads? That's a good thought; we should do that too.

        I think a decent conceptualization here is "psychic damage", as in a video game. These things deal a lot of it.

        • akersten 44 minutes ago
          The other side of the view is "information is real and I don't like some of it ("it's harmful/addictive/blasphemous") so it must be controlled and regulated."

          I don't think it's an odd thing to be opposed to that line of thinking.

          • kyledrake 35 minutes ago
            People in here are casually linking social media to cigarettes, a product that kills half its users, and in previous iterations I've seen people compare social media to using heroin. It's completely hysterical.

            I expect tabloid journalists and grandstanding politicians to do this, it really scares me when HN users that should know better do it.

            • ambicapter 24 minutes ago
              This sounds like a "depression isn't real" and "if you're addicted, just stop" type of comment.
              • kyledrake 20 minutes ago
                Depression is real, I'm experiencing it right now reading these comments.

                You know what, why don't you go buy a carton of cigarettes and some heroin, and use them for a few months. Since it's the same thing as looking at a news feed, you shouldn't have to worry about addiction; you've already done that and not gotten addicted, so you should be fine, right?

                • AshleyGrant 3 minutes ago
                  > Depression is real, I'm experiencing it right now reading these comments.

                  No, you aren't. You are trivializing what Depression actually is by making flippant comments like that. You're also letting everyone know that you are utterly ignorant of what Depression actually is.

                  Do better.

      • afavour 48 minutes ago
        > Yeah they try to restrict what we can do. We oldschool people call this fascism.

        Come on, this is an absurd statement. Governments regulate what people can do, yes. It’s part of their role. It’s why I can’t sell tainted meat on the street. It’s a good thing.

        Of course there is a line you can cross where the control becomes excessive but “the government sets rules around what people can do, that’s fascism!” is absurd.

        • achenet 1 minute ago
          Yes.

          Fascism isn't government making laws, fascism is "we're the superior race, kill anyone who disagrees".

          I wouldn't call this move fascism, even if it can be considered a bit heavy-handed.

  • conception 2 hours ago
    This is pretty easy to solve. If you present data by algorithm, you are no longer an impartial common carrier and are liable for the content you present. If the user decides what they see, you aren't, à la social media 1.0.
    • Aurornis 1 hour ago
      > If you present data by algorithm, you are no longer an impartial common carrier and are liable for the content you present

      Hacker News is a site that presents data by algorithm. Under your definition, Hacker News goes away, too.

      A more accurate framing would be that they’re going after personalized recommendation algorithms. It’s not obvious that offering a recommendation algorithm would mean that the site is no longer an impartial common carrier.

      • tencentshill 1 hour ago
        The algorithm is not personalized. It's the same for every user. No issue there.
        • tolerance 37 minutes ago
          But still an algorithm. The difference is that we (at least some of us) place a greater trust in the integrity behind how information surfaces on HN. I think that some parts of it are open source, and the moderators are transparent enough about what isn't public + there is a mix of folk knowledge that explains how HN works under the hood.

          Depersonalized algorithms or recommender systems aren't inherently better than personalized ones. HN is an exceptional example of the former but I think at scale people would come up with a different crop of complaints for them.

          • tencentshill 17 minutes ago
            Yes it's still an algorithm. Cable TV programming is another example. Everyone sees the same content. The ads are changed at the local broadcaster level but are not tailored to the individual, and are not harmful in the ways the EU is regulating. If anything, everyone watching the same thing is good for social cohesion. Everyone discusses the latest TV episode the next day at the office.
            • tolerance 5 minutes ago
              Right. Setting aside the fact that cable television doesn't appear to be the typical distribution method anymore, how do broadcasters select/schedule their programming?
      • another-dave 1 hour ago
        Goes away, or is liable for the content promoted to the frontpage under the OP's take?

        But I'd agree that it's personalisation rather than just curation that's the issue.

        I think even requiring sites to have a "bring your own algo" version (and where ads are targeted to the algorithm, rather than the person) would cure a lot of ills.

        As is, even with something like Spotify, where you _are_ paying, there's no easy way to "reset" your profile to neutral recommendations.

        • Aurornis 1 hour ago
          > Goes away, or is liable for the content promoted to the frontpage under the OP's take?

          Same thing. There is no Hacker News if Y Combinator becomes liable for user submitted content.

          It’s an obvious backdoor play to make sites go away. If a site becomes liable for content posted, you cannot allow users to post content without having the site review and take responsibility for every comment and every post.

          The people proposing it haven’t considered how damaging that would be for the ability of individuals to share ideas and their content. When every site with “an algorithm” is liable for content posted, nobody is going to allow you to post something. It’s back to only reading content produced and curated by companies for us. Total own-goal for the individual internet user.

          • andrewjf 59 minutes ago
            I agree with what OOP said. But it’s not my intent to “shut sites down.” I have this view to try to increase diversity of media consumption and break people out of echo chambers. If your business model is so shit you have to exploit weaknesses in human brains to keep people viewing ads and can’t adapt, then that’s your problem.

            If you have an algorithm whose sole purpose is to drive "engagement" with your own platform (by intentionally and purposely pushing clickbait, ragebait, and media that keeps reinforcing your clicks), you should no longer get Section 230 protections - you are no longer a neutral party. These algorithms exist to create echo chambers and keep you clicking so you can consume more ads.

            I would love to hear other ways of solving the problems of social media.

          • buellerbueller 1 hour ago
            >It’s an obvious backdoor play to make sites go away.

            Oh no.

      • xigoi 20 minutes ago
        > Under your definition, Hacker News goes away, too.

        It doesn’t have to go away, just switch to chronological sorting.

        • Analemma_ 17 minutes ago
          Have you ever browsed by New and seen the firehose of shit which doesn’t make it to the front page? HN sorted by new is effectively useless and you might as well shut the site down at that point.

          “Chronological only” might work for something like Twitter where you’re choosing to follow specific individuals to see their posts, it can’t work for curation sites like HN/Reddit.

          • xigoi 8 minutes ago
            That could be solved by allowing users to filter by score or number of comments.
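            For what it's worth, a minimal sketch of that kind of user-controlled filtering (the field names are hypothetical):

              # Hypothetical sketch: a chronological "new" feed where the only
              # ranking signal is time, plus thresholds the user picked.
              def new_feed(posts, min_score=0, min_comments=0):
                  visible = [p for p in posts
                             if p["score"] >= min_score
                             and p["comments"] >= min_comments]
                  # Newest first; no platform-chosen ranking involved.
                  return sorted(visible, key=lambda p: p["created_at"], reverse=True)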
      • jackdoe 1 hour ago
        > Hacker News goes away, too.

        so be it.

        • vasco 1 hour ago
          This is a strange thing to comment on HN. If you truly believed it why would you be here?
          • buellerbueller 1 hour ago
            The majority of terminally addicted people I have interacted with at length have both recognized the terminal nature of their addiction and been unable to do anything about it.

            That's the nature of addiction.

    • schnitzelstoat 1 hour ago
      So the user opens the app - what is the first video you show them? How does 'the user decide' from the millions upon millions of videos there are?

      If the user can search like in Youtube then how do you rank the results? That's also an algorithm.

      It isn't pretty easy to solve at all.

      • alkonaut 1 hour ago
        In the case of Instagram: you show the videos from the people the user follows, and after that no more short videos at all. Possibly a search box.

        If you search on YouTube then it can rank any way it wants, just without using e.g. anything from the viewing history. No "related videos" column. That's what YouTube used to be. But YouTube (unlike TikTok) worked well before it had rabbit holes.

        For TikTok the situation is worse. Their whole app just doesn't exist without the custom feeds. This would make YouTube into 2010 YouTube and Instagram into 2010 Instagram (great!), but it would effectively be a ban on TikTok's whole functionality (again, great!).

        • Aurornis 1 hour ago
          I think it would be great if all of these apps had an option to function like you propose: Your feed is a simple view of people you’ve chosen to follow. The end.

          Then all of the people who have trouble with self-control on infinite feeds can enable this mode, and everyone who wants the recommendation algorithm can leave it on.

          This is the optimal outcome that actually serves everyone's personal goals for using these platforms. If we get into a conversation where some are demanding we don't allow anyone to use a recommendation algorithm because they feel the need to control what other people see, that's a different conversation. That conversation usually reveals other motives, like when people defend the algorithm sites they use (Hacker News, Reddit, whatever) but target sites they don't like, such as TikTok.

          • gibspaulding 46 minutes ago
            I don't endorse using these apps, but for what it's worth, Instagram actually does have this feature (tap "Instagram" at the top and select "Following"). You get a chronological feed with no ads and no Reels. Of course they don't provide an option to make that the default, as far as I know.
          • hapticmonkey 55 minutes ago
            Instagram and Facebook both have such features. They’re hidden, though. With Instagram you tap the logo in the top middle of the app and choose “Following”. With Facebook it’s hidden away under the “Feeds” section in the app.

            I’d love for there to be an option to have them as default. It’s obvious ($$$) why they won’t do that unless forced to by regulators.

          • naravara 48 minutes ago
            Why do you assume the recommendation algorithm should be the default? The algorithm is the dangerous thing, THAT should be the opt-in mode not the other way around.

            IMO they should not only be opt-in, but should actually be required to publicly list the parameters and weights they’re using and allow users to tune those weights.

            • Aurornis 41 minutes ago
              Sure, if that makes the angry mob happy then let’s make it default. Then every new user can click the button once and be back to the app they expect.

              > IMO they should not only be opt-in, but should actually be required to publicly list the parameters and weights they’re using and allow users to tune those weights.

              I wonder how many people here know that many of the popular apps have rolled out finer controls for recommendation algorithms so you can do this. On Instagram you can go in and see the topics your recommendation algorithm picked up and modify them manually if you like.

              I think the goalposts will just continue to move, though.

              • naravara 19 minutes ago
                No, they should have to pick every time whether they want to be in follower mode or discovery mode. Dismissing concerns as "the angry mob" is richly ironic considering the entire objection is that recommendation algorithms seem precisely tuned to foster angry-mob dynamics. So yes, it will make the angry mob happy, because it will remove the primary mechanism for inciting angry mobs.

                People here know that they have finer controls (which are still not actually that fine and also don’t really make the parameters auditable). The problem is these settings are hidden away in places most people will never look. And also, I stress again, none of this is actually auditable because they treat these as some kind of trade secret special sauce and there’s really no reason society should feel obligated to support or enable this business model.

        • HPsquared 1 hour ago
          There is no going back to the 2010 internet unless you confiscate everyone's phones.
          • alkonaut 1 hour ago
            Not sure what confiscation would accomplish that regulation couldn’t? I mean we’re all aware that if regulators target TikTok then a new app would pop up and take its place.

            But the thing about regulation is that it doesn’t need to be water tight. You can just target a small handful of large players and it will improve the situation in practice. It doesn’t matter if 998/1000 apps use addictive feeds if the largest two apps don’t and they have 90% of users/views.

            • Aurornis 53 minutes ago
              It’s naive to think that regulation is going to cover the entire global internet.

              If you regulated domestic companies out of existence, global options would pop up in their place. You could try to block them all in app stores but people would go to the web views.

              • alkonaut 3 minutes ago
                I think that's still mostly fine. YouTube is already more of a web site than an app (it has apps too, but I think it's less app-centric than e.g. Instagram).

                Obviously we need the ability to regulate global options too. Typically, if these actors truly become big, then they have a presence in their "target" countries, such as ad sales.

      • butlike 1 hour ago
        Do it like a library. When a person walks into a library, they're presented with a short curated list of books suggested from the librarian. All visitors to the library see the same books. From there, the visitor can go about their business searching for what they want.

        If they don't know what they want, perhaps a good use case for the newfangled LLM-search we have now would be "What's an interesting or popular topic I haven't searched for before?" to which the AI will respond with a list of newly searchable terms.

      • mrighele 21 minutes ago
        > If the user can search like in Youtube then how do you rank the results? That's also an algorithm.

        Any ordering is technically an algorithm, so yes, just "banning algorithms" doesn't work.

        A better alternative could be "the algorithm must be public and reproducible by the user".

        "Sort the posts of the people I follow in chronological order" you're good

        "Sort the posts by the output of a blackbox trained on user data" too bad you're a publisher and are responsible for what people post.

      • denismi 1 hour ago
        The first unwatched video from the user's followed/subscribed channels. Chronological, reverse chronological, sorted alphabetically, by the user's channel prioritisation, by likes, by views... whatever the user chooses. And then an end of feed.

        For new users? A search bar and a set of (human? AI?) curated seed recommendations that the platform is comfortable with being held liable for.

      • SirMaster 20 minutes ago
        Don't show any video. Make the user search for what they want to see.

        Rank them by best keyword match to the search query; if matches are equal, order them newest to oldest.

        Done.
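
        A rough sketch of that ranking rule, assuming simple whole-word matching and a numeric post timestamp (names are illustrative):

          # Hypothetical sketch: rank by keyword overlap with the query,
          # breaking ties by recency (newest first).
          def search(videos, query):
              terms = set(query.lower().split())
              def key(v):
                  matches = len(terms & set(v["title"].lower().split()))
                  # More matching terms first; ties go to the newer video.
                  return (-matches, -v["posted_at"])
              return sorted(videos, key=key)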

      • noprocrasted 1 hour ago
        > what is the first video you show them

        Whatever is latest posted across their followings/subscriptions?

        • vasco 1 hour ago
          If they just signed up they have no followings or subscriptions. So now what, you need to show accounts to follow first? That's the same problem as deciding what the first video to show is. How do you decide who they should follow? Or is the vision that you can only have friends, as if it's 2005, and you can't discover anything serendipitously?

          I don't consume any content from my friends on something like TikTok, where I'm interested in discovering people who have good content on topics I'm interested in. I don't know who those people are, and I want to discover new ones as they come up, not just follow some already-popular accounts.

          • Gigachad 1 hour ago
            Undoubtedly the change needed here will introduce friction, will reduce viewing time, and society will be better off for it.

            The whole idea here is to make content consumption more deliberate and mindful, rather than just opening the app and vegging out to an endless feed of slop.

        • hug 1 hour ago
          That’s also an algorithm. An unsophisticated one, but an algorithm nonetheless.

          You can (and should) argue that such a simple algorithm doesn’t “count”, but fundamentally the exact wording of the grandparent post never works, legislatively.

          Lawyers will lawyer.

          • NekkoDroid 40 minutes ago
            > That’s also an algorithm. An unsophisticated one, but an algorithm nonetheless.

            The problem has always been "(personalized) opaque algorithms". Time-sorted posts from people you follow aren't really opaque, nor is "sorted by likes" or whatever. The problem is always pulling in parameters that a user either has no active control over or that are so variable they effectively could be random.

          • xigoi 22 minutes ago
            It’s not about whether there is an algorithm, but whether it’s controlled by the user.
      • irusensei 19 minutes ago
        Enumerate, in descending order of creation time, the unwatched videos posted by the accounts the user follows.

        Like social media 1.0.

      • threetonesun 50 minutes ago
        The internet solved the problem of millions upon millions in its implementation details: you share a URL. You follow people, they share URLs, it grows organically, the same way every website worked pre... Instagram? I'm not sure who moved to the algorithmic feed first.
      • graemep 48 minutes ago
        I would say no *personalised* algorithms other than those based on deliberate user choices would solve the problem. So: what the user chooses to follow, or the same feed for everyone in the country.
      • Computer0 1 hour ago
        I made a new YouTube account recently and my homepage was blank.

        https://news.ycombinator.com/item?id=37053817

        • walthamstow 40 minutes ago
          Same thing happens on LinkedIn if you don't follow anybody, the feed is just blank
      • mc32 50 minutes ago
        You know old reddit, Flickr, etc., had ways of presenting content based on different things besides impulsive engagement.
      • thinkingtoilet 59 minutes ago
        It's very easy.

        "So the user opens the app - what is the first video you show them?"

        You don't. How about that?

      • pessimizer 1 hour ago
        This seems to be consciously dishonest. Show them "most recent" or "most upvoted" or "A to Z." Pretending like this is hard is bizarre. People have always selected sort and filter algorithms, until companies started taking them away.
      • wyre 1 hour ago
        These are multi-billion dollar companies.

        It's okay if they have some hard problems to solve.

        • SecretDreams 1 hour ago
          Won't anyone think of the multibillion dollar technolords? They are people too!
      • phtrivier 54 minutes ago
        Of course it's easy: such decisions were taken _before_ the feeds were algorithmically built.

        You rely on unambiguous, "physical" properties of the videos.

        There is a physical property of all the videos: the time of publication.

        There is a physical property of all the channels: did you subscribe to it, or not?

        So, you show, in (reverse) chronological order of publication, the list of videos published by the channels you subscribed to.

        Now, of course, a brand new user would have no subscription - you show them a search box.

        But then your search algorithm has to weight the various channels that match - but your algo can be relatively transparent, relatively auditable, and the same for all users (unless given explicit preferences, and of course national laws, etc., etc...).

        I'm sorry, but I have a "subscriptions" page in YouTube and Substack; they're chronological, and they show me what I want to watch. You keep that.

        There is a "home" page in both services that is algorithmically built, and it shows me crap that the algo wants me to watch. You get rid of that.

        Do this, and I can consider you a "neutral" actor and accept that you shift the blame to content producers.

        Or, keep the algo feed, but don't take money from advertisers when I watch yet another flat-earther video because YOU decided it was trending.

        If you want to decide what I watch, and make money from that decision - congrats, you are an editor. You get the earnings, and the responsibility.

        Please don't tell me, with a straight face, that the people who build the algo don't "decide" what I watch. If they want to tweak the algo to downgrade the flamewars and outrage and conspiracy theories and violence and abuse, they can. They do not want to, for business reasons. [1]

        That's fair, up to a point - we need publications with editors that agree on having "edgy" content. I'm not advocating for blanket censorship.

        I did not like social networks preventing me from _sharing_ articles about Biden's son's laptop (this was actually beyond the law, but somehow they managed to find the resources and programmers to implement _that_, because, at the time, the execs were cozying up to a different administration).

        I'm advocating for "accepting your responsibility as an editor".

        [1] https://en.wikipedia.org/wiki/Frances_Haugen#October_5,_2021...

    • stingraycharles 2 hours ago
      This is one of those things that don’t translate to legal reality very well, as then you have to define “what is an algorithm”.

      Is adding advertisements an algorithm?

      Is including likes an algorithm?

      Is automatically starting the next video after a previous one has finished an algorithm?

      Is infinite scroll an algorithm?

      Etc

      • andybak 2 hours ago
        This kind of complex legislation already exists in many areas of the law: revenue collection being the most obvious one. We could choose to treat "societal harm" the way we treat "tax collection".

        I'm not saying there aren't infinite edge cases and second-order effects - but we tolerate those already for many things. I'm not pretending this is simple or even desirable - I'm merely stating it's possible if we want to do it.

        My biggest fear is that (like the UK Online safety act) this acts to favour the huge corporations because they are the only ones that can afford a team of lawyers. Any legislation should aim to carve out exceptions to avoid indirectly helping monopolies.

        • stingraycharles 2 hours ago
          Great example. These companies are already experts at circumventing taxes; what makes you think they can't weasel their way around some arbitrary written law?

          Just look at the malicious compliance that Apple and Google have around the App Store stuff, they’ll find a way to comply with the law and implement different addictive dark patterns.

          I’m not saying that I disagree that these companies need to be regulated, I absolutely do. I just think it’s going to be a complicated process, and not “oh just ban everything that’s an algorithm”.

          And I have absolutely 0 faith in companies like Meta willfully complying.

          • soVeryTired 1 hour ago
            I have a feeling taxes are possible to circumvent only because a government tends to have one arm that wants to collect taxes, and another that wants to reduce them to encourage certain outcomes (like having a business setting up shop within its borders).

            The US may have this dual incentive structure, since it wants to build its tech giants while limiting their control, but the EU doesn't. The arrival of a foreign social media giant might make the legislation a bit more palatable to pass.

            It will undoubtedly be complex to regulate all dark patterns away. But there are a few obvious, easy wins. It'd be a shame to make perfect the enemy of good.

          • bootsabota 1 hour ago
            Yeah it’s a tough situation to figure out.

            But here’s the real problem: people don’t care. And I say that as someone who hasn’t used social media since 2014.

            My observation of people’s behavior indicates that when all is said and done, people don’t care—they would rather get the endorphins from posting, liking, following, etc.

            But the solution is to allow people to control their own algorithm, and to have open source solutions where communities manage their own social network.

            It's not the algorithm that is the problem; it's that people don't have the choice to curate their own content.

          • AndrewKemendo 1 hour ago
            Regulated by who?

            There's no political organization (yes, Mamdani actually out-raised Cuomo, so let that sink in) that isn't being actively bribed.

            • bee_rider 30 minutes ago
              Although it should be noted that Mamdani’s average donation size skewed much smaller than Cuomo’s, so it is possible that Mamdani was “bribed” by the general public.
      • kubb 2 hours ago
        This is some kind of a meme where people believe things can’t be defined in legal terms and therefore can’t be regulated. These people are usually not lawyers.

        Does anyone know where it’s coming from? I can certainly believe that incompetent jurisdictions have a ton of issues with people misapplying the law and using loopholes.

        • biophysboy 1 hour ago
          Albert Hirschman wrote a great book 35 years ago about the rhetoric people use to stifle policy proposals. "It's futile; it won't ever work" is one common argument. It's not a meme so much as a cynical reflexive intuition.
          • AdamN 1 hour ago
            One that's reinforced by those against whichever legislation or regulation is being proposed.
        • SpicyLemonZest 27 minutes ago
          The point isn't that it can't be regulated. What the original comment said was

          > This is pretty easy to solve. If you present data by algorithm, you are no longer an impartial common carrier and are liable for the content you present.

          But this is not in fact easy. It's hard to define what "present data by algorithm" means in a coherent way, and it's hard to extend liability for the content you present to liability for the manner in which you present it. You could make it work, if for some reason you really wanted to, but it's easier to pursue the strategy described in the source article of regulating specific abusive patterns.

        • naravara 43 minutes ago
          > This is some kind of a meme where people believe things can’t be defined in legal terms and therefore can’t be regulated. These people are usually not lawyers.

          No, they're engineers who think rules have to function as rigidly in every field as they do in programming.

          They either can't or don't want to accept that the law is a social construct and that what it actually means to you is determined by the weight of precedent, as applied by judges and regulatory bodies. Things are vaguely worded in the law all the time. If people want to dispute how the enforcement is done, they sue, and a judge decides how the rule should be applied.

        • owebmaster 1 hour ago
          It probably comes from the same pockets that influence legislation
      • throwawayffffas 1 hour ago
        "By algorithm" can be easily defined.

        The benchmark can simply be that any feed that displays the data in a way other than the following is considered an editorial choice, and thus the platform is liable as a publisher:

        1. In a chronological order, and only filtered based on user selected options.

        2. In any other order explicitly selected by the user.

        An exception can be made to allow filtering out content that violates the platform's terms and conditions.

        Alternatively there can be no exception, effectively making these platforms unworkable. This is also a choice. We do not need these platforms, including this one.
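
        For illustration, a sketch of what the benchmark above permits: the platform only dispatches on sorts and filters the user explicitly picked (names are hypothetical):

          # Hypothetical sketch: only user-selected sorts and filters,
          # so every feed is reproducible from the user's own choices.
          USER_SORTS = {
              "newest": lambda p: -p["created_at"],
              "oldest": lambda p: p["created_at"],
              "most_liked": lambda p: -p["likes"],
          }

          def feed(posts, sort_choice, user_filters):
              shown = [p for p in posts if all(f(p) for f in user_filters)]
              return sorted(shown, key=USER_SORTS[sort_choice])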

        • tmvphil 30 minutes ago
          If the user selects "sort by algorithm" then I don't see how you've changed anything other than the default. I think it's pretty obvious just changing the default won't work.
          • xigoi 18 minutes ago
            Changing the default makes a huge difference because 99% of people don’t change settings.
            • tmvphil 15 minutes ago
              That's because the default is, 99% of the time, the way the app is designed to be used. If the default is regulated, then they will just say "sorry, the default is boring, click here to bring back the feed" and everyone will just click.
      • itissid 17 minutes ago
        Instead, a regulation could mandate the administration of an anonymized, unbiased evaluation at the end of every week/bi-week/month, just like instruments for psych evaluation (e.g. "Do you feel your <mental-health-metric> has become worse in the last <time-period>?" on <scale>; "Did you have <mental-health-marker> after watching content on social media?").

        The regulation could then mandate that, after calibration and correction, the feed pull back, with the algorithm retrained to adjust it in rapid A/B tests.

        This is all doable by the companies themselves, but since they won't, the key is to mandate it and publish the aggregate results regularly — like making it part of the quarterly shareholder SEC reporting requirements or something.

        • itissid 14 minutes ago
          I would say it's naive to regulate the algorithm rather than its effects. The effects are all that matter in the end.
      • orbital-decay 2 hours ago
        "Algorithm" is a method of selecting the content to display. You're listing presentation types, not selection types. Presentation has nothing to do with supervised selection. Selecting the next video in the infinite scroll would be the algorithm, not the infinite scrolling mechanism itself.
      • randunel 2 hours ago
        Everything other than sorting the list of entities by a standard measurement unit (time, length, mass, temperature, amount) needs to be covered by this law.

        The moment you add other entities to the list (e.g. ads inbetween posts), then it's also subject to the same restrictions.

        • stingraycharles 2 hours ago
          This effectively means “every online platform ever” and would also have included MySpace and the OG Yahoo etc, and as such would not really single out the truly bad actors.

          And then we'll end up with another cookie-banner-style law which had good intentions but actually missed the point entirely.

          • bee_rider 2 hours ago
            Maybe MySpace should be covered. I mean, MySpace probably(?) had the technical capacity to act maliciously in the manner that modern social media sites do; the business model just hadn't evolved to the modern toxic state yet.

            The cookie banner law is fine for the most part. Sites that do the malicious-compliance thing of over-prompting the user for permissions are providing a strong signal that they are bad actors. It’s about as much as we can expect without banning them entirely…

          • randunel 2 hours ago
            I stopped using Facebook around 2015-ish, when they stopped allowing sorting by date. Prior to this, hi5 and the like also allowed sorting by date. So no, not every online platform ever.
          • progval 2 hours ago
            It even includes email providers with a spam filter.
      • 3form 2 hours ago
        This doesn't differ much from the legal reality that I've seen. Terms need to be defined, yes. It will require work to do so. And that work should be done even if it's a bother.
      • baggachipz 1 hour ago
        OK, so then the "algorithm" must be made available to authorities (or even better, the public at large) and be approved or rejected by a court or under a law. Obviously an algorithm based on "engagement" or "narrative" should be rejected with prejudice every time.
      • pessimizer 1 hour ago
        I don't see a single difficult example here. The answer is "NO." It's strange that you couldn't even find one.

        I mean, "Is including likes an algorithm?" You might as well ask if having a dog in the video is an algorithm. Any question about "likes" would be whether you're manipulating the video selection based on likes, or whether the user is given a control to manipulate the video selection based on likes. If it's you, it's an algorithm. If it's the user, it's a control. If you lie about the likes, then it's an algorithm. If you're transparent about the likes, then it is a control.

        The other ones aren't even worth discussing. You might as well ask if having a blue logo is an algorithm, or if Comic Sans is an algorithm. "It's all so complicated!"

        -----

        edit: that being said, the EU does not care about this issue at all, and has had plenty of mandate and plenty of time to have done something about it if it did. They are also going to say "it's all so complicated." Their problem is the unpopularity of center-left neolib governments that are just barely holding on, with extreme minority support, through bureaucratic means, because they wrote the regulations. They want to keep what came for British Labour during the recent council elections from coming for them.

        So I guarantee that content will somehow become an "algorithm." The goal is to keep people who don't like them from speaking to each other.

    • bee_rider 1 hour ago
      The conversation has iterated a couple times and one point that people (on this site at least) are stuck on is “well however you rank things—latest, most popular—you’ll need to use some kind of algorithm, maybe quicksort.” This isn’t what the general public or politicians mean when they say “an algorithm” but it does make something of a point, what exactly the general public and politicians mean when they say that… it’s a bit ambiguous.

      I think the EU has fully digested this point, and is focusing on the “addictive design” phrase instead, for good reason. It makes it obvious that the problem is a bit fuzzy and related to the behaviors induced, not some cut-and-dry algorithmic thing.

    • p2detar 51 minutes ago
      There's another angle to this besides the algorithm. When it comes to kids today, there seems to be peer pressure and the need to maintain a social media presence, to be cool online among your peers, and so on. Beyond that, some kids have their lives devastated by others secretly (or not) recording and publicly sharing their vulnerable moments. That can happen in a night and profoundly damage someone.
    • shiandow 2 hours ago
      And when does the user decide? Must a platform do nothing to stymie spam, or even illegal content, to qualify as impartial?

      I suppose the answer could be that only platforms that do indeed allow spam or worse are impartial, but that is a tricky position to be in.

      • leogiertz 2 hours ago
        The mechanism would be that if the user has chosen to follow an account, then posts from that account fall under common carrier. If the platform chooses to show you other posts, then that's under its responsibility.
    • luke5441 1 hour ago
      This is a bit of a systems difference. Under a French-law system you would write laws to regulate the harms away. Under English common law, liability court cases about the harm would lead to precedents and then to common law derived from them. Though I'm not an expert on this.
    • rwmj 1 hour ago
      You'll need to solve the dark pattern where a new account opens on a blank page with a box saying "Would you like us to suggest what you watch here?"
      • PokemonNoGo 1 hour ago
        Why would anyone go to a new platform if they didn't know anyone to follow there? I don't see a problem there. I download TikTok and search for SexyDancingDinosaur I heard was on there and press follow.
    • walthamstow 41 minutes ago
      Back in my day, they used to be called social networks
    • an0malous 1 hour ago
      It’s so elegant that there’s zero chance the EU will do it since this is all performative for them
    • akersten 1 hour ago
      How does this specific horrible take rank so highly on HN whenever something adjacent to big tech gets posted? "Impartial common carrier" is not even an extant legal concept.

      It's been argued to death already, I just have to express shock that I'm still seeing this non-starter constantly here.

    • grumbel 1 hour ago
      Alternative suggestion: Force them to open up the service and allow third party clients. Take Art. 20 GDPR "Right to data portability" and extend it to public content.
    • doctorpangloss 35 minutes ago
      "this" - you mean, engagement optimization? i think it would be different content. i don't know how much liability matters, people spend all day watching netflix too, and it is "liable."

      ironically, i'm only reading this kind of low brow take because people upvote it, not because it makes any sense.

  • anzerarkin 2 hours ago
    I don’t think this is only a kids issue.

    A lot of adults need this too. The addictive apps are very well designed, while most blockers are either too easy to ignore or too annoying to keep using.

    I built a small iOS blocker because I had the same problem. Making it strict enough to actually work without making people hate it is the main challenge.

    • criddell 1 hour ago
      On the radio I heard a reporter talking about things China does during school exams. Apparently all schools have exams at the same time and during that period, social media shuts down at night. I forget the exact hours (10pm - 6am maybe). I'm starting to think that would be a great policy in general for everybody.

      I think they also said AI companies go offline during exam hours, but I may have got that wrong.

      • Aurornis 1 hour ago
        Absolutely wild that we’re seeing proposals to shut down parts of the internet and regulate when people can talk to each other on social platforms as a real suggestion on HN.

        I feel like we've completely lost the plot when we're starting to welcome partial government Internet shutdowns as a good idea. This is a totalitarian government play.

        • tolerance 1 minute ago
          > I feel like we’ve completely lost the plot when we’re starting to invite government partial Internet shutdowns as a good idea. This is a totalitarian government play.

          There's been criticism about the culture surrounding platforms like Mastodon/Bluesky that anticipated this.

        • afavour 45 minutes ago
          I think it speaks to the complete lack of government regulation in the area that people see such extreme answers as positive. If any government had seen fit to engage in light regulation of what social media can do, people might be happier.
          • zombot 25 minutes ago
            Light regulation won't cut it any more for companies that are too big to jail.
        • schnitzelstoat 55 minutes ago
          I can only imagine these people have never experienced such censorship.

          Maybe they'll feel differently when they have to upload their ID and face scan (which later gets leaked) just to be able to read a recipe for beer or whatever.

        • zombot 22 minutes ago
          But it's kind of a logical, if misguided, consequence of regulators being completely corrupt and letting those feudal lords do whatever the hell they want.
        • wackget 52 minutes ago
          [dead]
      • dgellow 1 hour ago
        I can understand regulating dark/abusive patterns, but at the end of the day I should be allowed to doomscroll at night if I want to
        • buellerbueller 1 hour ago
          >I can understand regulating dark/abusive patterns, but at the end of the day I should be allowed to doomscroll at night if I [am an addict]
        • redsocksfan45 1 hour ago
          [dead]
    • butlike 1 hour ago
      Toast notifications were the big mistake. Also badges. In my perfect world, the only thing that would retain the ability to alert the user that someone tried to contact them would be voicemail, subject to the same spam laws as everything else.
      • djyde 31 minutes ago
        [dead]
    • kgwxd 1 hour ago
      As an adult, who despises all those apps, I don't want to grant government the power to make that decision for me.
      • criddell 1 hour ago
        As an adult, do you also believe seat belt laws are a bad thing?
        • jayGlow 1 hour ago
          Personally, yes; that kind of choice should belong to the individual, not the government. Besides that, the laws are nonsensical: why is a seatbelt required in a car but not in a bus? Why are motorcycles even allowed at all?
          • moooo99 44 minutes ago
            This argument falls apart for countries with socialized healthcare.

            As long as all people are paying for your dumb decisions, it is reasonable to expect the government to reduce the frequency of dumb decisions by adequate means.

        • nekusar 1 hour ago
          Yes, I do. It's just another way that cops can pull you over for bullshit charges and revenue enhancement.

          I remember in my state, it was initially only a citation that you couldn't be pulled over for. Then they flipped that and started pulling people over for it. Why? Pure fucking money grab.

          Me not wearing a seatbelt means I risk getting splattered. Not you, or anyone else.

          • foobarian 1 hour ago
            > Me not wearing a seatbelt means I risk getting splattered. Not you, or anyone else.

            Except who pays for your million-dollar reconstructive surgery and rehab? I don't suppose you will cover that out of pocket to avoid burdening your fellow insurance payers with your reckless behavior?

          • aeve890 1 hour ago
            >Me not wearing a seatbelt means I risk getting splattered. Not you, or anyone else.

            Physics says otherwise. In a collision you don't decide where your body is yeeted, and your skull could end up inside the skull of a passenger wearing his seatbelt. Don't be a moron. https://youtube.com/shorts/n2yLMGA_YSA?si=AlvRgfpb-PJxGCBw

          • wackget 46 minutes ago
            This cannot be a genuine take from a real person.
          • chinathrow 1 hour ago
            Is this satire?
      • wackget 42 minutes ago
        You might have the self-awareness and impulse control to stop yourself from getting addicted to these apps, but the majority of the world's population does not.

        These giant companies pour millions upon millions of dollars into engineering their services to be as "engaging" (read: addictive) as possible with the specific goal of making users spend more time on them.

        Against that, the average person has no chance. The power balance is hugely uneven.

        A responsible government which actually cares for its people has a duty to protect them from abuse like that.

    • actionfromafar 1 hour ago
      If we afford the same protections to adults, we don't need age verification either. Just a thought.
  • Pesthuf 2 hours ago
    Tell me: why are these algorithms suddenly okay when the victim turns 18?

    They are bad for everyone and if you’re willing to regulate them, make them illegal to be used on anyone.

    • Mashimo 2 hours ago
      Just from this article it's not clear whether methods like endless scrolling or "watch next video" are going to be regulated based on user age or not.

      It just says that the platforms which use such methods often target kids.

    • palata 2 hours ago
      Same as with cigarettes: it's a lot easier to regulate stuff for kids, because we as a society tend to agree that they need to be protected. Much harder to do with adults, because there is much less of a consensus.
    • poszlem 50 minutes ago
      Because, in general, we see adults making bad choices as a price worth paying in a free society, but we recognize that children lack the maturity and judgment to make those choices for themselves.

      Most adults also lack the maturity and judgement, but allowing adults to make bad decisions is usually less dangerous than giving someone else the power to decide which decisions are too bad to permit.

  • ErigmolCt 4 minutes ago
    I'm usually skeptical of "protect the children" regulation, but addictive design feels like a real and concrete target
  • FinnKuhn 2 hours ago
    I think especially restricting endless scrolling is a good thing overall to reduce the addictiveness of social media and its harmful effects.

    HN having pages instead of a feed or endless list is one of the things I really like about it.

    • nanapipirara 2 hours ago
      For sure.

      The other thing I really love about HN is that titles are all supposed to be boring and to the point. The guidelines[1] for titles are excellent, and I wish more of the web, and honestly legacy media too, would behave that way. Things that are of no interest to me aren't trying to waste my time and attention.

      [1] https://news.ycombinator.com/newsguidelines.html

    • ekjhgkejhgk 2 hours ago
      > I think especially restricting endless scrolling

      The actual point is that they are designed to be addictive. "endless scrolling" is just an implementation detail. If you "ban endless scrolling", they'll still be using every other trick to make it addictive.

  • spockz 6 minutes ago
    They should also tackle YouTube. Yesterday my YouTube app opened into "Shorts" automatically. Shorts are of the same addictive variety as TikToks.
  • yipbub 2 hours ago
    Thanks, I'm an adult and I need it too
    • butlike 1 hour ago
      FWIW, social media use is mediated by ∆FosB expression, so the less you use social media, the less you want to use social media. Timeline of ~3 months.
    • mrosenbjerg 2 hours ago
      Had the exact same thought
  • lp4v4n 1 hour ago
    I don't agree with this. "Addictive", unless we're talking about a chemical substance or something like that, is a subjective thing. At some point books, movies, comics, etc. might all have been considered addictive.

    Social networks in general should be banned for underage people, that's the thing. And the social network itself should be liable for verifying the age of its users, like a nightclub is liable for the people who enter it. No bullshit operating-system age verification, which is, trust me, totally intended to protect kids and not to spy on you.

    • bogwog 1 hour ago
      > Addictive, unless we're talking about a chemical substance or something like that, is a subjective thing.

      What makes you say that? It's well known that the addictive patterns in these apps trigger dopamine the same way drugs do. In a sense, dopamine is the "chemical substance" central to the addiction. Heroin and algorithms are just different ways to get it.

      https://med.stanford.edu/news/insights/2021/10/addictive-pot...

      • Aurornis 56 minutes ago
        Everything you do “triggers dopamine”. Reading HN triggers dopamine. Eating breakfast triggers dopamine. Dopamine is also important for movement and many other things.

        This is a lame reduction of brain chemistry that has been used to push agendas. Dopamine is not equivalent to addiction.

        • bogwog 38 minutes ago
          > calls it a lame reduction of brain chemistry

          > posts a lame reduction of the argument

      • SpicyLemonZest 40 minutes ago
        It's well known, but I'm not convinced it's true. Dopamine levels are measurable by blood test, and some drug abuse studies perform that measurement. Why does the literature on social media and dopamine exclusively talk in vague and general terms, rather than pointing to specific studies where researchers measured dopamine before and after 30 minutes of TikTok scrolling?
    • butlike 1 hour ago
      Addictiveness is measured by ∆FosB gene expression: how addictive a substance or activity is can be quantified by how much ∆FosB it causes to be expressed. It's decidedly not just a completely subjective thing. Books, movies, comics, etc. can all still be measured on this scale. Everything is addictive in some capacity, generally.
    • bootsmann 1 hour ago
      The reason why it is done this way is that “social media” is much harder to delineate and also not what is generally considered harmful.
    • jampekka 1 hour ago
      Addiction at least is quite straightforward to differentiate from otherwise engaging things by whether it causes significant harmful effects. E.g. per Wikipedia "Addiction is a neuropsychological disorder characterized by a persistent and intense urge to use a drug or engage in a behavior that produces an immediate psychological reward, despite substantial harm and other negative consequences."

      "Addictive" would then be something that (for a substantial portion of the population) has a tendency to cause addiction.

    • simion314 1 hour ago
      >At some point, books, movies, comics, etc, etc might have been considered addictive

      The difference compared to a book is that a book is not personalized for each individual reader, so the example is not a good one IMHO.

  • hnthrowaway0315 2 hours ago
    But they are so profitable, and we need them to track people around and create a police state efficiently. Ah let's keep them but just fine them as well for the show.
    • boringg 1 hour ago
      What else will fund the AI boom but computationally expensive video AI?
  • tolerance 1 hour ago
    Either what defines an "adult" is going to keep being raised or what defines a "kid" is going to be lowered, to determine who is allowed access to information in transit and who needs to be "safeguarded" from it.
  • garrettjoecox 2 hours ago
    At what point should the responsibility fall on the parent to protect their children from harm?

    Don’t get me wrong, if I had my way TikTok wouldn’t exist for anyone, adults included. It’s just so strange to me that so many parents hand their 7 year olds unrestricted access to TikTok and expect someone else to keep their kid safe.

    • perarneng 2 hours ago
      It's not so easy: kids need phones and social media to communicate with their friends. They also need to fit in and find an identity. The algorithms, which are basically all engagement engines, are harmful for humanity as a whole. They are marketed as recommendation engines, but they are 100% about engagement, and that is why the content you see mostly generates dopamine by being fun or rage by being provocative. It's built to serve one purpose: keeping people on the platform as much as possible. Not because the platform is good, but because it serves content that maximizes engagement.

      I read a post from someone whose wife worked for a snack company. They used MRI scans to see how much salt (or sugar) the snacks should have to maximize the response in the brain. Sounds disturbing, right?

      Well, engagement engines are the same thing: artificial intelligence optimized to get people to react and stay addicted. Basically AI doing harm. It's not about what is best for the individual's health; it's about what generates the most money for the owner of the platform.

      It should not be allowed to build a business around something that exploits human brains. Basically biohacking our brains for profit.
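
      To make that concrete, here is a toy ranking sketch. Everything in it is hypothetical and purely illustrative (made-up names, weights, and scores, not any real platform's code), but it shows the shape of the objective: clicks and rage are rewarded, and "did the user learn anything" never enters the score.

        # Toy sketch of an engagement-first ranker. Illustrative only:
        # every name and weight here is made up, not any platform's code.
        from dataclasses import dataclass

        @dataclass
        class Item:
            title: str
            p_click: float    # predicted chance the user taps it
            p_rage: float     # predicted chance it provokes an angry reaction
            p_informs: float  # predicted chance the user learns something

        def engagement_score(item: Item) -> float:
            # Fun and rage both keep people scrolling, so both count as
            # "engagement". Note what is absent: any benefit to the user.
            return 0.6 * item.p_click + 0.4 * item.p_rage

        feed = [
            Item("Calm explainer", p_click=0.2, p_rage=0.0, p_informs=0.9),
            Item("Outrage bait",   p_click=0.5, p_rage=0.8, p_informs=0.1),
            Item("Cute animals",   p_click=0.7, p_rage=0.0, p_informs=0.0),
        ]

        # The explainer loses to the bait, not by accident but by objective.
        for item in sorted(feed, key=engagement_score, reverse=True):
            print(f"{engagement_score(item):.2f}  {item.title}")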

    • kioleanu 2 hours ago
      I am from Eastern Europe and I've been living for many years in Western Europe. Where I come from, kids get their first phones when they start school at 6 (there's a pre-school year), simply because every other kid has one. I keep coming back in my mind to two examples from my birth country. The first: a friend's kid carrying an 8-inch smartphone in his hand everywhere, because the phone was as big as half his thigh and he would otherwise have needed a bag for it. The second was on a visit to the zoo: I was on a bench near a family with two young children in a stroller, and both children, who couldn't have been older than 4 or 5, were scrolling TikTok, which was showing them children's content!

      In contrast, in Western Europe, my son is now in the sixth grade. More than half his class doesn't have phones, phones are absolutely forbidden on school grounds and at school activities, and they are now going on a class trip where they were told there's a pay phone at the hotel in case they want to call their parents. Our son promptly informed us that he'd rather buy a pack of Pokémon cards than call us, and that 3 days isn't that long anyway.

      And it is not only at school: he travels to tournaments with his team every other week, and mobile phones are absolutely forbidden on the team bus. The children read, play games (including chess on a magnetic board), sing, and swap stories for hours at a time.

    • bogwog 58 minutes ago
      Replace TikTok with cigarettes, and it'll hopefully make sense to you. There was a time when people had no idea that smoking was bad for you, which is where we are now with these apps.

      And since they're addictive, kids will find a way to get them even if their parents don't allow it. That's why requiring ID when buying cigarettes is more effective than shaming people for not being perfectly vigilant parents.

      BTW, I'm not saying age verification is the solution here. IMO, we should instead ban addictive social media completely: target specific design patterns/features, require companies to disclose how their algorithms work to regulators, etc.

    • tolerance 1 hour ago
      Apparently parents are spending more time with their children than ever, dads especially. Paradoxically, the problem you're describing persists anyway.

      Personally, I think some parents are afraid of their children growing to resent them for infringing upon their "freedom" in the ways that keeping them away from the dangers of social media and other technologies would require.

    • Mashimo 2 hours ago
      > the responsibility of a parent to protect their children from harm

      I agree with you, but only in theory, because that's where we are now and it does not seem to work that well.

      Maybe through more education? But then again, I think reducing addictive tactics like endless scrolling could be part of a two-pronged attack.

      With alcohol we have education on what happens, but we also have laws that regulate it.

    • kubb 1 hour ago
      When it works.
  • bschwarz 2 hours ago
    Imagine the pressure on Instagram and TikTok to serve better content if they were forced to pick out, say, 100 short videos per person per day. And not just for kids; adults need a break from this addiction machine as well.
  • seydor 1 hour ago
    They are going to put kids on a drip feed. The addiction is still there, just in a limited amount per session. Intermittent reward is actually the perfect schedule for an advertising company: you don't want people making unmonetizable page views.
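
    A minimal sketch of that variable-ratio drip, with a hypothetical per-session cap and hit rate (illustrative numbers only, not any real platform's):

      # Toy model of a capped feed on a variable-ratio reward schedule.
      import random

      SESSION_CAP = 20   # items a capped session is allowed to serve
      HIT_RATE = 0.15    # chance any given item is a dopamine "hit"

      def session(rng: random.Random) -> list[bool]:
          """One capped session: which of the served items were 'hits'."""
          return [rng.random() < HIT_RATE for _ in range(SESSION_CAP)]

      rng = random.Random(0)
      for i in range(3):
          hits = session(rng)
          # The user can't predict which swipe pays off, so every swipe
          # feels like it might; the cap limits exposure, not the schedule.
          print(f"session {i}: hits at {[j for j, h in enumerate(hits) if h]}")
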
  • shevy-java 54 minutes ago
    I do not buy this "holy knight war" by the EU at all.

    It also makes no real sense to me.

    Nothing against US mega-corporations paying fines, mind you, but I equally do not trust the EU bureaucrats. There has to be a limit to what politicians, corporations, and bureaucrats can do, while retaining a democratic base system at all times. If you go against addictive design, then why not against ALL ads? I don't want to see any ads. uBlock Origin changed my mind here: I literally see no reason why I would ever want to burden my brain cells with irrelevant content.

    Website layout is a bit different, though. I equally fail to see why the EU should meta-regulate what is permissible in design and what is not. Why would I have to accept any random EU bureaucrat here? If a user interface sucks, I'd rather expect uBlock Origin to kill it off. This could also be community-maintained; no need for the EU to waste taxpayers' money. After the EU's attempts to sniff age data and its declared holy war against VPNs, I do not trust anything coming from Brussels. Even less so with Ms. von der Leyen in charge: can't the anti-corruption offices in Germany get rid of such lobbyists?

  • epolanski 1 hour ago
    I've never understood the focus on kids; the 50+ crowd looks by far the most addicted to me.

    Which also makes it a matter of parents and grandparents setting bad examples.

  • thedetailsguy 1 hour ago
    Isn’t it more of “emotional” design than “addictive” design?
  • nirui 1 hour ago
    You know, yeah, you can crack down on "addictive design", but then what?

    If you don't provide a better alternative, the "kids" (and please, stop using "kids" as an excuse, because everybody can see through it now) will just stick to these platforms because, believe it or not, these platforms are much, MUCH safer than the alternatives.

    How about we look at the real problem here: 24% of EU children were at risk of poverty or social exclusion in 2024, see https://ec.europa.eu/eurostat/web/products-eurostat-news/w/d.... That's not just a statistic about children; it's also about their parents.

    Do you know that if you go outside, there's this huge risk of having to PAY for stuff you don't actually need to live? Like transportation to places that don't bring you wealth, like drinks you order even when you're not that thirsty, like movie tickets bought just so it won't be too awkward after all the conversation topics are exhausted? Did these politicians somehow forget that all of this costs money, in this economy they helped to create?

    And that is not to mention the REAL risks, such as drugs (the bad ones), rude or crazy drivers, and unpleasant adults whose only purpose in life is to earn enough money to keep going a little bit longer, just to name a few.

    ..... ORRRR, you can just stay in your comfortable home, sit on your soft and warm sofa/couch, and swipe your life away on TikTok or Instagram for free, safely.

    You see the problem here?

    I'm really sick and tired of these politicians putting on this act of pretending to "love children", when in reality what they do is put up easy patches to hide the real problem: poverty and inequality.

  • 1vuio0pswjnm7 33 minutes ago
    Imagine if Big Tobacco had something like Section 230
  • NickC25 15 minutes ago
    Just do what China does, how fucking hard is it? They have 4x, almost 5x the population of the US.

    STEM or verifiable educational content only. Have a review team and an AI that moderates content. No politics, no stupid dances, no monetization of content, no slop, and only credentialed people can post on certain topics (i.e. a delivery driver shouldn't be making posts on theoretical mathematics).

  • nalekberov 1 hour ago
    Why, is it always okay to harm adults?

    As if adults spending hours scrolling through an infinite feed were somehow beneficial to society?

  • caaqil 1 hour ago
    In the modern world: any tech proposition that starts with protection of children as a goal can be dismissed out of hand, since it's emotional manipulation masquerading as tech policy. When I hear "protect kids", all I see is a sleazy politician bowing to their respective Security State apparatus.
  • thiago_fm 2 hours ago
    Why should only kids be protected from addiction?

    I have a hard time understanding this.

    We have plenty of adults with terrible social media addiction that is destroying their lives, and nothing being done about it.

    • indymike 1 hour ago
      This is the best question of all. Why are we allowing this?
    • gib444 2 hours ago
      Makes it an easier sell politically. If you position it as dangerous to kids in particular, your opposition then looks like they're encouraging child harm.
      • palata 2 hours ago
        Well, and if you tell adults that they need to be regulated, they get pissed very, very quickly.
    • Mashimo 2 hours ago
      [dead]
  • LuckyBuddy 3 hours ago
    [dead]
  • sylware 3 hours ago
    Yeah yeah, virtue signaling. Meanwhile most EU online services are now gated behind one of the WHATWG cartel's web engines (in practice, Google's Blink); that is, EU web sites are broken in favor of web apps.

    They have to restore interop with noscript/basic HTML web engines (past, present, and future).

    Then they have to be careful with their file formats: for instance, you never give carte blanche to a format as disgusting as PDF; you very carefully define as simple a subset of it as possible (with some internal software for validation).

    • Mashimo 2 hours ago
      Is ending endless scrolling really virtue signaling? Don't you think it will have a measurable effect?
    • nanapipirara 3 hours ago
      Yeah yeah, whataboutism.

      I'm very happy they're taking a stance. I've seen too many messed up kids and there's no doubt the addictive design plays a big role in the problem.

      • soco 2 hours ago
        I can't help noticing that every time, really every time, the EU moves a pinky finger against the tech industry, a sizeable chunk of the comments here are like the one above. I wonder: is it a general sentiment against the EU? Or against restricting technology? Or against humans? Or what?
        • palata 1 hour ago
          I think it's easier and safer to complain about everything than to actually have a nuanced and informed stance.

          Look at age verification: it's very easy and very safe to say "nobody sane would think it is a good idea to force people to show their ID to every website they want to access; it will obviously leak the IDs, and that is very bad!". While that is not wrong, it is manipulative: it is not the only way to implement age verification. In fact, technology already exists that would allow age verification in a privacy-preserving manner: a service that already has access to your ID can give you a token that proves your age, and you can then use this token to access a website. The service cannot know where you use the token, the website cannot know your ID, and they cannot collude. (A toy sketch at the end of this comment shows the idea.)

          So the constructive debate around age verification is this: assuming we implement it properly (i.e. in a privacy-preserving manner), is that something that we want or not? Does it solve a problem, or at least does it help?

          But we can never elevate the debate to that level, because nobody can be arsed to get informed about it.
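
          Here is the promised toy sketch, using RSA blind signatures. It is only an illustration of the unlinkability idea, under assumptions: toy key size, no padding, a hypothetical claim string; real deployments use proper blind-signature or anonymous-credential schemes.

            # Toy RSA blind-signature age token. NOT production crypto:
            # tiny key, no padding; only shows the unlinkability idea.
            import hashlib
            import math
            import secrets

            # Issuer key pair (e.g. an ID service). Real keys: 2048+ bits.
            p, q = 61, 53
            n = p * q                          # public modulus
            e = 17                             # public exponent
            d = pow(e, -1, (p - 1) * (q - 1))  # issuer's private exponent

            def digest(msg: bytes) -> int:
                return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

            # 1. The user blinds the claim, so the issuer never sees it.
            m = digest(b"holder-is-over-18")
            while True:
                r = secrets.randbelow(n - 2) + 2
                if math.gcd(r, n) == 1:
                    break
            blinded = (m * pow(r, e, n)) % n

            # 2. The issuer checks the user's real ID out of band, then
            #    signs the blinded value; it cannot link it to the token.
            blind_sig = pow(blinded, d, n)

            # 3. The user unblinds: a plain RSA signature on the claim.
            token = (blind_sig * pow(r, -1, n)) % n

            # 4. The website verifies with the issuer's public key alone,
            #    learning only that a trusted issuer attested "over 18".
            assert pow(token, e, n) == m
            print("age attestation verified; no ID shown to the site")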

        • ToucanLoucan 2 hours ago
          Boiling kids' (and adults') brains probably makes them a decent chunk of money, either directly via salary or indirectly via stocks. Ensuring kids remain healthy makes no money. An unfortunately large slice of the tech sector doesn't give the tiniest shit about the health of our broader society, or any group in it, if caring would mean their lines stop going up, or even go up slightly less fast.
        • eowln 2 hours ago
          The sentiment is that having to present our ID to use TikTok gives us the heebie-jeebies, and for good reason.

          Also, nobody voted for the Commission.

          • palata 2 hours ago
            > The sentiment is that having to present our ID to use TikTok gives us the heebie-jeebies, and for good reason.

            So push for privacy-preserving age verification, such that you don't need to leak your ID to anyone, but TikTok can still prevent kids from accessing it?

            • eowln 1 hour ago
              >privacy-preserving age verification

              No such thing.

              • palata 1 hour ago
                That's my problem with the debate: people like you seem very proud to be uninformed. It exists as much as end-to-end encryption exists. It's cryptography; it's not up for debate.

                But people who have no clue are very vocal about their belief that it does not exist.

                • eowln 1 hour ago
                  There are no active implementations that do not suffer from severe issues.
                  • soco 1 hour ago
                    There are two ongoing implementations: a weaker one in the EU (boooo) and a good one in Switzerland. Neither has severe issues. Questions?
                    • eowln 30 minutes ago
                      I have none; I'm already aware of how the deployed systems function, hence my statements of fact.
                      • palata 26 minutes ago
                        You obviously have no idea, and this really looks like a new account made for trolling (negative karma).
        • watwut 2 hours ago
          Imo, both. The more right-wing people adopted an aggressively anti-EU stance once Vance openly stood on the side of Orbán and against the EU and democracies in general.

          And some people see tech companies as worship-worthy, so trying to restrict them is a kind of blasphemy.

          • modo_mario 1 hour ago
            Isn't the Vance thing far too recent, and inconsequential across Europe?

            The sentiment precedes all that and mostly stems from the EU having been, in some ways, originally lib-left dominated, and from it still being seen as facilitating non-EU migration.

            Regular right-wing people (i.e. not one of the many parties potentially receiving donations) don't tend to love giant webtech companies, especially since they feel those companies are often used as a tool against them, and they aren't a local thing that draws nationalists either.

            A focus on privacy also isn't a very left-right defined thing, though I have noticed that the most far-reaching expressions of it come a bit more from the further ends of the spectrum (you'll see some very left-leaning people at FOSDEM's privacy-focused stands, for example).

            • dgellow 1 hour ago
              > don't tend to love giant webtech companie

              That's a bit outdated since Musk bought Twitter.

  • evanjrowley 2 hours ago
    The most on-brand solution for the EU would be to require mobile phone users to upload brain scans in real-time so the state can check for neural activity associated with addiction.
    • buellerbueller 1 hour ago
      The most on-brand solution for a kneejerk reactionary American would be to satirize the EU for its consumer protections.
      • irusensei 6 minutes ago
        Pretty sure the candy-colored EU paradise is, even today, discussing breaking encryption and privacy for its citizens.

        I'm posting from the EU.