ChatGPT serves ads. Here's the full attribution loop

(buchodi.com)

100 points | by lmbbuchodi 1 hour ago

16 comments

  • WD-42 58 minutes ago
    Since they are served as distinct events, I would think they should be easy to block.

    Once the ads are injected directly into the main response is when things get interesting.

    • kardos 7 minutes ago
      > Once the ads are injected directly into the main response is when things get interesting.

      This would be where you post-process the LLM response with a second LLM to remove the ad..
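      A minimal sketch of that idea (hypothetical: it assumes injected ads arrive as delimited segments, and uses a regex pass standing in for the second-LLM call, which in practice you'd need since real ads wouldn't be so conveniently tagged):

```python
import re

# Hypothetical: assumes ads are wrapped in <ad>...</ad> delimiters.
# A real filter would replace this pattern with a classifier or a
# second LLM call to spot unmarked ad segments.
AD_PATTERN = re.compile(r"<ad>.*?</ad>", flags=re.DOTALL)

def strip_ads(response_text: str) -> str:
    """Remove delimited ad segments and tidy leftover blank lines."""
    cleaned = AD_PATTERN.sub("", response_text)
    return re.sub(r"\n{3,}", "\n\n", cleaned).strip()

print(strip_ads("Here is your answer.\n<ad>Try MegaVPN!</ad>\nDone."))
```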

    • lmbbuchodi 47 minutes ago
      You can block these URLs: `||bzrcdn.openai.com^` and `||bzr.openai.com^`. It won't block everything, but it will significantly reduce the telemetry collected.
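      As a filter list for uBlock Origin / AdGuard-style blockers, that's (assuming the two domains above are the only ad endpoints):

```
! Block ChatGPT ad/attribution telemetry endpoints
||bzrcdn.openai.com^
||bzr.openai.com^
```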
  • Aurornis 21 minutes ago
    The ads are in the free tier and the new ad-supported $8/month plan.

    Every time this comes up there are comments assuming that ads are being injected into the normal plans, but these are for the free tier and the new Go plan which warns you that it includes ads when you sign up.

    • darepublic 14 minutes ago
      Wouldn't it require a lot of training to blend ads into the convo without it being too obvious or messing up the results?
  • benleejamin 34 minutes ago
    I'd always thought that ChatGPT ads would be indistinguishable from actual content.
    • irjustin 22 minutes ago
      This would be a breach of trust; it would work great short term, but long term it's too detrimental.

      The same thing could've been said for search results, so at least that part is still "safe".

      • SchemaLoad 5 minutes ago
        Long term all of the major LLM platforms will have invisible ads, influences, and propaganda woven into the content. The temptation will be irresistible for these companies.
      • bix6 20 minutes ago
        Oh, you think trust matters? This is capitalism, not trustism.
        • PradeetPatel 4 minutes ago
          Long-term retention is built on brand trust and usability; then ensh*ttification happens.
        • nalekberov 3 minutes ago
          No, this is late stage capitalism without regulation.
  • infinite_spin 39 minutes ago
    I see OpenAI making a significantly larger amount from defense contracts than from advertisements pumped into chats. So I wonder whose bright idea it was to create a public perception risk.
    • peddling-brink 33 minutes ago
      Maybe the negative press from ads is better than the negative press from powering murderbots?
      • tayo42 18 minutes ago
        Bad press from a contract like that happens once and everyone forgets. Ads are in your face every time.
    • Larrikin 28 minutes ago
      Every single MBA can show that revenue is up for at least one quarter after they introduce ads. They do not care what happens after that, as long as they can plan their career around it.
  • djmips 1 hour ago
    And it begins.
  • keyle 57 minutes ago
    Can't wait for "watch this ad for 90s to use xxhigh on your next prompt!"
  • dankwizard 19 minutes ago
    Really well written, technical post. Good read.
  • vicchenai 46 minutes ago
    Figured this was inevitable once they started the free tier. The attribution loop being a separate event stream is actually kind of clever engineering, though -- it means they can A/B test ad formats without touching the core model response.
  • mock-possum 9 minutes ago
    Not to me they don’t, cause I canceled my account and stopped using their products when they made the announcement.
  • avaer 39 minutes ago
    Remember that ads are the "last resort" for OpenAI, and they're doing this despite the fact that it's "uniquely unsettling", according to Sam.

    Was he lying, or has OpenAI given up hope that this train wreck works economically without enshittification? Neither option is good, but I don't really see a third.

    • Aurornis 20 minutes ago
      The ads are only for the free and $8/month plans. They basically added an ad-supported super discount level that you can ignore if you’re paying for the normal plans.
      • RussianCow 3 minutes ago
        But the fact that they've added an ad-supported tier this early into their life as a company means they're desperate for revenue. You start inserting ads when you're optimizing for profit, not when you're still growing. It took how long for Netflix to introduce an ad-supported plan?
  • singingtoday 1 hour ago
    I don't like anything about this.
  • BoredPositron 26 minutes ago
    I don't get what's wrong with charging for your product. Like, get rid of the free tier and make a small tier with an easy-to-serve model for like 5 bucks. Is it still the DAU race of the 2010s that's driving the money burning?
    • teaearlgraycold 20 minutes ago
      How do you pick up new paying users without letting people use the service for free for a while first? Freemium is popular because it works well.
  • uriahlight 49 minutes ago
    Let the enshittification commence!
  • gxs 1 hour ago
    This is gross

    It feels like we’ve been in the golden age and the window is coming to a close

    Let the enshittification begin, I guess

    • dannyw 45 minutes ago
      How do you expect the spend & COGS for free LLM inference to be funded? For users who don't want to pay, or maybe can't pay?
      • derektank 28 minutes ago
        Perhaps it’s a glib and easy thing to say, but after a teaser period, I would simply not offer free LLM inference. Agreeing to serve ads just completely re-aligns your interests away from providing the best possible user experience to something else entirely.
      • infinite_spin 35 minutes ago
        From things like defense/private contracts

        e.g. colleges pay for institutional subscriptions

        • 2ndorderthought 27 minutes ago
          The average person doesn't benefit from defense contracts ... Like ever.
    • iammrpayments 34 minutes ago
      It began when they nerfed GPT-4 before releasing 4o.
    • 2ndorderthought 1 hour ago
      In the past month, local models have been ramping up in a major way, while the name-brand providers have upped prices, gone offline randomly, and started doing slimier and slimier things.

      I really think the future is local compute. Or at least self hosted models.

      • SchemaLoad 59 minutes ago
        The hosted ones still have the advantage of being able to search the internet for live info rather than being limited to a knowledge cut off date.
        • gbear605 58 minutes ago
          I’m not sure why a model needs to be hosted in order to make network calls?
          • hansvm 56 minutes ago
            Is there a library of good tools for LLMs to call? I have to imagine the bot-detection avoidance mechanisms are a major engineering effort and not likely to work out of the box with a simple harness and random local LLM.
            • ossa-ma 48 minutes ago
              Even the hosted ones are blocked from searching certain sites, for example Claude is banned from searching Reddit:

              `Error: "The following domains are not accessible to our user agent: ['reddit.com']."`

            • wyre 43 minutes ago
              Tavily, Exa, Firecrawl, Perplexity, and Linkup are all tools for agents to search the web.

              I’ve been building a harness for the past few months, and it supports them all out of the box with an API key.
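              A minimal sketch of the dispatch side of such a harness (all names here are hypothetical; a real `web_search` would call one of the services above, and the loop would feed tool results back to a local model over an OpenAI-compatible API):

```python
def web_search(query: str) -> str:
    # Stub: a real implementation would call Tavily, Exa, SearXNG, etc.
    return f"[top results for: {query}]"

# Registry of tools the model is allowed to call.
TOOLS = {"web_search": web_search}

def run_tool_call(name: str, arguments: dict) -> str:
    """Dispatch a tool call requested by the model."""
    if name not in TOOLS:
        return f"error: unknown tool {name!r}"
    return TOOLS[name](**arguments)

# In the real loop you would POST the conversation to the local endpoint
# (e.g. llama.cpp or Ollama), check the response for tool calls, run them,
# append the results as tool messages, and call the model again until it
# answers in plain text.
print(run_tool_call("web_search", {"query": "Qwen release notes"}))
```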

              • goosejuice 6 minutes ago
                Kagi also has an API. People who hate ads are probably the same folk that should be paying for Kagi. That's the sane alternative world where companies respect their users.
        • darepublic 58 minutes ago
          Local ones that support tool use can do the same
        • eightysixfour 58 minutes ago
          You can do that locally too!
      • CSMastermind 57 minutes ago
        What's the rough equivalent of a local model? Are we talking GPT-4?
        • 2ndorderthought 32 minutes ago
          Qwen 3.6, released this month, is large but still on the smaller end. Supposedly it's at about Sonnet level when configured correctly, and it can be run on commodity hardware without purchasing a data center. https://www.reddit.com/r/LocalLLaMA/comments/1so1533/qwen36_...

          Then there are mid-size ones, which require multiple GPUs and are comparable to GPT's latest flagships.

          Then there is Kimi 2.6, a monster that is beating Opus in some benchmarks. https://www.reddit.com/r/LocalLLaMA/comments/1sr8p49/kimi_k2...

          It's basically whatever you can afford. Any trash-heap laptop can run code-autocomplete models locally, no problem. The rest require some level of investment: an idle gaming PC at the low end, or serious hardware beyond that.

        • Terretta 52 minutes ago
          Depends on your VRAM or "unified" memory for how smart it is, and CPU/GPU for how quick it is.

          128GB of RAM? Sure, the early to mid 4s releases, except maybe 4o. And on an M5 Max, about the same speed.

          I wouldn't really bother under 64GB (meaning 32GB or less) except for entertainment value (chats, summaries, tasky read-only agent things).

        • kay_o 51 minutes ago
          GLM 5.1 and DeepSeek 4 are acceptable, but the hardware and energy costs are such that, depending on your use case, you may as well just purchase tokens. They get useless and stupid rapidly if you quantize them enough to run on a single 16-24GB GPU.
    • rnxrx 55 minutes ago
      The arc of the technological universe is short, but it bends toward enshittification.
  • jesse_dot_id 35 minutes ago
    That's cool, I'll never see them.