LLMs are a 400-year-long confidence trick

(tomrenner.com)

73 points | by Growtika 2 hours ago

16 comments

  • krystofee 1 hour ago
    I disagree with the "confidence trick" framing completely. My belief in this tech isn't based on marketing hype or someone telling me it's good – it's based on cold reality of what I'm shipping daily. The productivity gains I'm seeing right now are unprecedented. Even a year ago this wouldn't have been possible, it really feels like an inflection point.

    I'm seeing legitimate 10x gains because I'm not writing code anymore – I'm thinking about code and reading code. The AI facilitates both. For context: I'm maintaining a well-structured enterprise codebase (100k+ lines Django). The reality is my input is still critically valuable. My insights guide the LLM, my code review is the guardrail. The AI doesn't replace the engineer, it amplifies the intent.

    Using Claude Code Opus 4.5 right now and it's insane. I love it. It's like being a writer after Gutenberg invented the printing press rather than the monk copying books by hand before it.

    • consp 44 minutes ago
      > It's like being a writer after Gutenberg invented the printing press rather than the monk copying books by hand before it.

      That's not how book printing works, and I'd argue the monk can far more easily create new text and devise new interpretations. And they did, in the margins of books. It takes a long time to prepare one print, but hardly longer to print 100, which is where the value of the printing press comes from. It's not the ease of changing or producing large amounts of text, it's the ease of reproducing it, and since copy/paste exists it is a very poor analogue in my opinion.

      I'd also argue the 10x is subject/observer bias, since they are the same person. My experience at this point is that boilerplate is fine with LLMs, and if that's all you write, good for you; otherwise it will hardly speed up anything, as the code is the easy part.

    • vanderZwan 55 minutes ago
      Even assuming all of what you said is true, none of it disproves the arguments in the article. You're talking about the technology, the article is about the marketing of the technology.

      The LLM marketing exploits fear and sympathy. It pressures people into urgency. Those things can be shown and have been shown. Whether or not the actual LLM based tools genuinely help you has nothing to do with that.

      • remus 47 minutes ago
        The point of the article is to paint LLMs as a confidence trick, the keyword being trick. If LLMs do actually deliver very real, tangible benefits then can you say there is really a trick? If a street performer was doing the cup and ball scam, but I actually won and left with more money than I started with then I'd say that's a pretty bad trick!

        Of course it is a little more nuanced than this and I would agree that some of the marketing hype around AI is overblown, but I think it is inarguable that AI can provide concrete benefits for many people.

        • latexr 39 minutes ago
          > If LLMs do actually deliver very real, tangible benefits then can you say there is really a trick?

          Yes, yes you can. As I’ve mentioned elsewhere on this thread:

          > When a con man sells you a cheap watch for a high price, what you get is still useful—a watch that tells the time—but you were also still conned, because what you paid for is not what was advertised. You overpaid because you were tricked about what you were buying.

          LLMs are being sold as miracle technology that does way more than it actually can.

      • carpo 40 minutes ago
        But saying it's a confidence trick is saying it's a con. That they're trying to sell someone something that doesn't work. The OP is saying it makes them 10x more productive, so how is that a con?

        • trimethylpurine 22 minutes ago
          The marketing says it does more than that. This isn't a problem unique to LLMs either. We have laws about false advertising for a reason, and it's going on all the time. In this case the tech is new, so the lines are blurry. But to the technically inclined, it's very obvious where they are. LLMs are artificial, but they are not literally intelligent. Calling them "AI" is a scam. I hope it's only a matter of time until that definition is clarified and we can stop the bullshit.

          The longer it goes on, the worse it will be when the bubble bursts. Not to be overly dramatic, but economic downturns have real physical consequences. People somewhere will literally starve to death. That number of deaths depends on how well the marketers lied. Better lies lead to bigger bubbles, which when burst lead to more deaths. These are facts. (Just ask ChatGPT, it will surely agree with me, if it's intelligent. ;p)

      • amelius 48 minutes ago
        Yeah, but that should have been in the title; otherwise it itself uses a centuries-old trick.

      • latexr 44 minutes ago
        Exactly. It’s like if someone claimed to be selling magical fruit that cures cancer, and they’re just regular apples. Then people like your parent commenter say “that’s not a con, I eat apples and they’re both healthy and tasty”. Yes, apples do have great things about them, but not the exaggerations they were being sold as. Being conned doesn’t mean you get nothing, it means you don’t get what was advertised.
        • JacoboJacobi 26 minutes ago
          The claims being cited are not really in that camp, though.

          It may be extremely dangerous to release. True. Even search engines had the potential to be deemed too dangerous in the nuclear Pandora's box arguments of modern times. Then there are high-speed phishing opportunities, etc.

          It may be an essential failure to miss the boat. True. If calculators had been upgraded, produced, and disseminated at modern Internet speed, someone who did accounting by hand and refused to learn for a few years would have been fired.

          Its communication builds an unhealthy, parasitic relationship. True. But the Internet and the way content is critiqued on it are a source of this too, even if it is not intentionally added.

          I don't like many of the people involved, and I don't think they will be financially successful on merit alone, given that anyone can create an LLM. But LLM technology is being sold by the same organic "con" through which all technology, calculators included, ends up spreading for individuals to evaluate and adopt. A technology everyone is primarily brutally honest about is a technology that has died, because no one bothers to check whether the brutal honesty has anything to do with their own possible uses.

          • latexr 16 minutes ago
            > The claims being cited are not really in that camp, though.

            They literally are. Sam Altman has literally said multiple times this tech will cure cancer.

    • keyle 33 minutes ago
      It's fine for a Django app that doesn't innovate and just follows the same patterns for the hundred already-solved problems it addresses.

      The line becomes a lot blurrier when you work on non-trivial issues.

      A Django app is not particularly hard software; it's hardly software at all but a conduit from database to screens and vice versa, which has been basic software since the days of terminals. I'm not judging your job; if you get paid well for doing that, all power to you. I had a well-paying Laravel job at some point.

      What I'm raising, though, is that AI is not that useful for applications that aren't solving what has been solved 100 times before. Maybe it will be, some day, reasoning so well that it anticipates and solves problems that don't exist yet. But it will always be an inference over already-solved problems.

      Glad to hear you're enjoying it; personally, I enjoy solving problems more than the end result.

      • danielbln 25 minutes ago
        I think the 'novelty' goalpost is being moved here. The notion that agentic LLMs can't handle novel or non-trivial problems needs to die. They don't merely retrieve solutions from the training data; they synthesize a solution path from the context that is built up in the agentic loop. You could make up some obscure DSL out of whole cloth, one that has therefore never been in the training data, feed it the docs, and it will happily use them to create output in said DSL.

        Also, almost all problems are composite problems where each part is either prior art or in itself somewhat trivial. If you can onboard the LLM onto the problem domain and help it decompose the problem, it can tackle a whole lot more than what it has seen during pre- and post-training.
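
        To make that concrete, here's a minimal sketch of the kind of loop I mean (call_llm and run_tool are hypothetical stubs, not any particular vendor's API): the context grows with every tool result, so later decisions are conditioned on evidence gathered at runtime rather than on training data alone.

            # Hypothetical stand-ins for a model API call and a tool executor.
            def call_llm(context: str) -> dict:
                # A real loop would send `context` to an LLM and parse the
                # reply into an action; here we fake two turns.
                if "DSL docs" not in context:
                    return {"action": "tool", "cmd": "read dsl_docs.txt"}
                return {"action": "done", "answer": "output written in the DSL"}

            def run_tool(cmd: str) -> str:
                # Tool executor stub (file reader, shell, etc.).
                return "DSL docs: <grammar and examples would go here>"

            def agent(task: str, max_steps: int = 20) -> str:
                context = f"Task: {task}\n"
                for _ in range(max_steps):
                    step = call_llm(context)          # model picks the next action
                    if step["action"] == "done":
                        return step["answer"]
                    result = run_tool(step["cmd"])    # gather evidence at runtime
                    context += f"\n$ {step['cmd']}\n{result}\n"  # feed it back in
                return "step budget exhausted"

            print(agent("write a program in my made-up DSL"))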

    • falloutx 37 minutes ago
      Are you actually reading the code? I have noticed most of the gains go away once you start reading the code the machine outputs. And sometimes I do have to fix it by hand, and then the agent says "Oh, you changed that file, let me fix it".
    • ManuelKiessling 56 minutes ago
      This. By now I don’t understand how anyone can still argue in the abstract while it’s trivial to simply give it a try and collect cold, hard facts.

      It’s like arguing that the piano in the room is out of tune and not bothering to walk over to the piano and hit its keys.

      • 112233 2 minutes ago
        So I tried it, and it is worse than having a random dude from Fiverr write your code — it is actively malicious and goes out of its way to deceive and to subtly sabotage existing working code.

        Do I now get the right to talk badly about all LLM coding, or is there another exercise I need to take?

      • ozim 26 minutes ago
        The downside is that a lot of those who argue try out some stuff in ChatGPT or another chat interface without digging any further, expecting "general AI" and asking general questions, where LLMs are most prone to hallucinations. The other part is cheaped-out setups where multiple people share the same subscription and pollute each other's history.

        They don't have time to check more stuff as they are busy with their life.

        People who did check the stuff don't have the time in life to prove it to the ones who argue, "in exactly whatever way the person arguing would find useful".

        Personally, about a year ago I was the person who tried out some ChatGPT and didn't have time to dabble, because all the hype was off-putting, and of course I was finding more important and interesting things to do in my life than chatting with some silly bot that I could easily fool with trick questions, or that I'd consider useless because it hallucinated something I wanted in a script.

        I did take the plunge for a really deep dive into AI around April last year, and I saw it with my own eyes ... and only that convinced me. Using the API, I built my own agent loop: getting details out of images and PDF files, iterating on code, turning unstructured "human" input into structured output I can handle in my programs.

        *Data classification is easy for an LLM. Data transformation is a bit harder but still great. Creating new data is hard, so for things like answering questions where it has to generate stuff from thin air, it will hallucinate like a madman.*

        With data classification like "is it a cat, answer with yes or no", it is hard to get even the latest models to hallucinate.
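
        Pinning the model to a constrained output is what makes classification reliable. A minimal sketch, assuming the OpenAI Python SDK (the model name and prompt are illustrative):

            from openai import OpenAI

            client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # illustrative model choice
                messages=[
                    {"role": "system",
                     "content": "Answer strictly with 'yes' or 'no'."},
                    {"role": "user",
                     "content": "Is this animal a cat? "
                                "Small, furry, whiskers, says meow."},
                ],
            )
            # Constrained output leaves little room to hallucinate.
            print(resp.choices[0].message.content)  # "yes"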

      • demorro 17 minutes ago
        It's like arguing that the piano goes out of tune randomly: even if you get through 1, 2, or even 10 songs without that happening, I'm not interested in playing that piano on stage.
      • satisfice 44 minutes ago
        I am hitting the keys, and I call bullshit.

        Yes, the technology is interesting and useful. No, it is not a “10x” miracle.

        • ozim 25 minutes ago
          I call "AGI" or a "100x miracle" bullshit, but the existing stuff is definitely a "10x miracle".
    • abricq 29 minutes ago
      > My belief in this tech isn't based on marketing hype or someone telling me it's good – it's based on cold reality of what I'm shipping daily

      Then why are half of the big tech companies using Microsoft Teams and sending emails with .docx files embedded in them?

      Of course marketing matters.

      And of course the hard facts also matter, and I don't think anybody is saying that AI agents are purely marketing hype. But regardless, it is still interesting to take a step back and observe what marketing pressures we are subject to.

    • energy123 43 minutes ago
      > I'm maintaining a well-structured enterprise codebase (100k+ lines Django)

      How do you avoid this turning into spaghetti? Do you understand/read all the output?

    • satisfice 46 minutes ago
      You are speculating. You don’t know. You are not testing this technology— you are trusting it.

      How do I know? Because I am testing it, and I see a lot of problems that you are not mentioning.

      I don’t know if you’ve been conned or you are doing the conning. It’s at least one of those.

  • schnitzelstoat 1 hour ago
    I agree that all the AI doomerism is silly (by which I mean concern about some Terminator-style machine uprising; the economic issues are quite real).

    But it's clear the LLMs have some real value. Even if we always need a human in the loop to prevent hallucinations, they can still massively reduce the amount of human labour required for many tasks.

    NFTs felt like a con, and in retrospect were a con. LLMs are clearly useful for many things.

    • latexr 1 hour ago
      Those aren’t mutually exclusive; something can be both useful and a con.

      When a con man sells you a cheap watch for a high price, what you get is still useful—a watch that tells the time—but you were also still conned, because what you paid for is not what was advertised. You overpaid because you were tricked about what you were buying.

      LLMs are useful for many things, but they’re also not nearly as beneficial and powerful as they’re being sold as. Sam Altman, while entirely ignoring the societal issues raised by the technology (such as the spread of misinformation and unhealthy dependencies), repeatedly claims it will cure all cancers and other kinds of diseases, eradicate poverty, solve the housing crisis, democracy… Those are bullshit, thus the con description applies.

      https://youtu.be/l0K4XPu3Qhg?t=60

      • BoxOfRain 1 hour ago
        I think the following things can both be true at the same time:

        * LLMs are a useful tool in a variety of circumstances.

        * Sam Altman is personally incentivised to spout a great deal of hyped-up rubbish, both about what LLMs are capable of and about what they could be capable of.

        • latexr 57 minutes ago
          Yes, that’s the point I’m making. In the scenario you’re describing, that would make Sam Altman a con man. Alternatively, he could simply be delusional and/or stupid. But given his history of deceit with Loopt and Worldcoin, there is precedent for the former.
        • runarberg 39 minutes ago
          These are not independent hypotheses. If the second is true, it decreases the probability that the first is true, and vice versa.

          The dependency here is that if Sam Altman is indeed a con man, it is reasonable to assume that he has in fact conned many people, who then report over-inflated metrics on the usefulness of the stuff they just bought (people don't like to believe they were conned; cognitive dissonance).

          In other words, if Sam Altman is indeed a con man, it is very likely that most metrics of the usefulness of his product are heavily biased.

    • ACCount37 1 hour ago
      LLMs of today advance in incremental improvements.

      There is a finite number of incremental improvements left between the performance of today's LLMs and the limits of human performance.

      This alone should give you second thoughts on "AI doomerism".

      • latexr 52 minutes ago
        That is not necessarily true. That would be like arguing there is a finite number of improvements between the rockets of today and Star Trek ships. To get warp technology you can’t simply improve combustion engines, eventually you need to switch to something else.

        That could also apply to LLMs: there could be a hard wall that the current approach can't breach.

        • ACCount37 36 minutes ago
          If that's the case, then, what's the wall?

          The "walls" that stopped AI decades ago stand no more. NLP and CSR were thought to be the "final bosses" of AI by many - until they fell to LLMs. There's no replacement.

          The closest thing to a "hard wall" LLMs have is probably online learning? And even that isn't really a hard wall, because LLMs are good at in-context learning, which does many of the same things, and they can do things like set up fine-tuning runs on themselves using a CLI.

          • myrmidon 16 minutes ago
            Agree completely with your position.

            I do think, though, that the lack of online learning is a bigger drawback than a lot of people believe, because it can often be hidden/obfuscated by training for the benchmarks, basically.

            This becomes very visible when you compare performance on more specialized tasks that LLMs were not trained for specifically, e.g. playing games like Pokemon or Factorio: General purpose LLMs are lagging behind a lot in those compared to humans.

            But it's only a matter of time until we solve this IMO.

          • latexr 21 minutes ago
            > If that's the case, then, what's the wall?

            I didn’t say that is the case, I said it could be. Do you understand the difference?

            And if it is the case, it doesn’t immediately follow that we would know right now what exactly the wall would be. Often you have to hit it first. There are quite a few possible candidates.

            • ACCount37 5 minutes ago
              And there could be a teapot in an orbit around the Sun. Do we have any evidence for that being the case though?

              So far, there's a distinct lack of "wall" to be seen - and a lot of the proposed "fundamental" limitations of LLMs were discovered to be bogus with interpretability techniques, or surpassed with better scaffolding and better training.

      • falloutx 26 minutes ago
        AI doomerism was sold by the AI companies as some sort of "learn it or you'll fall behind". But they didn't think it through, now that AI is widely seen as a bad thing by the general public (except programmers who think they can deliver slop faster). Who would be buying a $200/month sub when they get laid off? I am not sure the strategy of spreading fear was worth it. I also don't think this tech can ever be profitable. I hope it burns more money at this rate.
    • bodge5000 40 minutes ago
      > The LLM's are clearly useful for many things

      I don't think that's in any doubt. Even beyond programming, imo especially beyond programming, there are a great many things they're useful for. The question is: is that worth the enormous cost of running them?

      NFTs were cheap enough to produce, and the cost didn't really scale with the "quality" of the NFT. With an LLM, if you want to produce something at the same scale as OpenAI or Anthropic, the amount of money you need just to run it is staggering.

      This has always been the problem: LLMs (as we currently know them) being a "pretty useful tool" is frankly not good enough for the investment put into them.

      • falloutx 34 minutes ago
        All of the professions it's trying to replace are very much at the bottom end of the tree: programmers, designers, artists, support, lawyers, etc. Meanwhile you could already replace management and execs with it and save 50% of the costs, but no one is talking about that.

        At this point the "trick" is to scare white collar knowledge workers into submission with low pay and high workload with the assumption that AI can do some of the work.

        And do you know a better way to increase your output without giving OpenAI/Claude thousands of dollars? It's morale: improving morale would increase output in a much more holistic way. Scare the workers and you end up with a spaghetti mess of everyone merging their crappy LLM-enhanced code.

        • ACCount37 15 minutes ago
          "Just replace management and execs with AI" is an elaborate wagie cope. "Management and execs" are quite resistant to today's AI automation - and mostly for technical reasons.

          The main reason being: even SOTA AIs of today are subhuman at highly agentic tasks and long-horizon tasks - which are exactly the kind of tasks the management has to handle. See: "AI plays Pokemon", AccountingBench, Vending-Bench and its "real life" test runs, etc.

          The performance at long-horizon tasks keeps going up, mind - "you're just training them wrong" is in full force. But that doesn't change that the systems available today aren't there yet. They don't have the executive function to be execs.

      • ACCount37 25 minutes ago
        Yeah. Obviously. Duh. That's why we keep doing it.

        Opus 4.5 saved me about 10 hours of debugging stupid issues in an old build system recently - by slicing through the files like a grep ninja and eventually narrowing down onto a thing I surely would have missed myself.

        If I were to pay for the tokens I used at API pricing, I'd pay about $3 for that feat. Now, come up with your best estimate: what's the hourly wage of a developer capable of debugging an old build system?
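
        Back-of-the-envelope, in Python (the hourly rate is my assumption; the other numbers are from above):

            hours_saved = 10    # debugging time saved, from above
            dev_rate = 100      # USD/hour, assumed senior-dev rate
            token_cost = 3      # USD at API pricing, from above
            print(hours_saved * dev_rate / token_cost)  # ~333x: labour cost vs. token cost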

        For reference: by now, the lifetime compute use of frontier models is inference-dominated, at a ratio of 1:10 or more. And API pricing at all major providers represents selling the model at a good profit margin.

    • runarberg 47 minutes ago
      > it can still massively reduce the amount of human labour required for many tasks.

      I want to see some numbers before I believe this. So far my feeling is that the best-case scenario is that it reduces the time needed for bureaucratic tasks, tasks that were not needed anyway and could have just been removed for an even greater boost in productivity. Maybe it is automating tasks away from junior engineers, tasks which they need to perform in order to gain experience and develop their expertise. Although I'd need to see the numbers before I believe even that.

      I have a suspicion that AI is not increasing productivity by any meaningful metric which couldn’t be increased by much much much cheaper and easier means.

    • dgxyz 1 hour ago
      [dead]
  • leogao 1 hour ago
    > The purpose here is not to responsibly warn us of a real threat. If that were the aim there would be a lot more shutting down of data centres and a lot less selling of nuclear-weapon-level-dangerous chatbots.

    you're lumping together two very different groups of people and pointing out that their beliefs are incompatible. of course they are! the people who think there is a real threat are generally different people from the ones who want to push AI progress as fast as possible! the people who say both do so generally out of a need to compromise rather than there existing many people who simultaneously hold both views.

    • BoxOfRain 57 minutes ago
      > nuclear-weapon-level-dangerous chatbots

      I feel this framing in general says more about our attitudes to nuclear weapons than it does about chatbots. The 'Peace Dividend' era which is rapidly drawing to a close has made people careless when they talk about the magnitude of effects a nuclear war would have.

      AI can be misused, but it can't be misused to the point an enormously depopulated humanity is forced back into subsistence agriculture to survive, spending centuries if not millennia to get back to where we are now.

  • grumbel 31 minutes ago
    > GPT-3 was supposedly so powerful OpenAI refused to release the trained model because of “concerns about malicious applications of the technology”. [...] This has, of course, not happened.

    What parallel world are they living in? Every single online platform has been flooded with AI-generated content and has had to enact countermeasures, or went the other way, embraced it, and replaced humans with AI. AI use in scams has also become commonplace.

    Everything they warned about with the release of GPT-2 did in fact happen.

  • mossTechnician 1 hour ago
    "AI safety" groups are part of what's described here: you might assume from the general "safety" label that organizations like PauseAI or ControlAI would focus things like data center pollution, the generation of sexual abuse material, causing mental harm, or many other things we can already observe.

    But they don't. Instead, "AI safety" organizations all appear to exclusively warn of unstoppable, apocalyptic, and unprovable harms that seem tuned exclusively to instill fear.

    • iNic 1 hour ago
      We should do both and it makes sense that different orgs have different focuses. It makes no sense to berate one set of orgs for not working on the exact type of thing that you want. PauseAI and ControlAI have each received less than $1 million in funding. They are both very small organizations as far as these types of advocacy non-profits go.
      • mossTechnician 1 hour ago
        If it makes sense to handle all of these issues, then couldn't these organizations just acknowledge all of these issues? If reducing harm is the goal, I don't see a reason to totally segregate the different issues, especially not by drawing a dividing line between the ones OpenAI already acknowledges and the ones it doesn't. I've never seen any self-described "AI safety" organization that tackles any of the present-day issues AI companies cause.
        • iNic 20 minutes ago
          If you've never seen it then you haven't been paying attention. For example, Anthropic (the biggest AI org which is "safety"-aligned) released a big report last year on mental well-being [1]. Also, here is their page on societal impacts [2]. And here is PauseAI's list of risks [3]; it has deepfakes as its second issue!

          The problem is not that no one is trying to solve the issues that you mentioned, but that it is really hard to solve them. You would probably have to bring large class-action lawsuits, which is expensive and risky (if it fails, it becomes harder to sue again). Anthropic can make their own models safe, and PauseAI can organize some protests, but neither can easily stop Grok from producing endless CSAM.

          [1] https://www.anthropic.com/news/protecting-well-being-of-user...

          [2] https://www.anthropic.com/research/team/societal-impacts

          [3] https://pauseai.info/risks

    • ACCount37 56 minutes ago
      I'd rather the "AI safety" of the kind you want didn't exist.

      The catastrophic AI risk isn't "oh no, people can now generate pictures of women naked".

      • mossTechnician 41 minutes ago
        Why would you rather it not exist?

        In a vacuum, I agree with you that there's probably no harm in AI-generated nudes of fictional women per se; it's the rampant use to sexually harass real women and children[0], while "causing poor air quality and decreasing life expectancy" in Tennessee[1], that bothers me.

        [0]: https://arstechnica.com/tech-policy/2026/01/x-blames-users-f...

        [1]: https://arstechnica.com/tech-policy/2025/04/elon-musks-xai-a...

        • ACCount37 5 minutes ago
          Because it's just a vessel for the puritans and the usual "cares more about feeling righteous than about being right" political activists. I have no love for either.

          The whole thing with "AI polluting the neighborhoods" falls apart on a closer examination. Because, as it turns out, xAI put its cluster in an industrial area that already has: a defunct coal power plant, an operational steel plant, and an operational 1 GW grid-scale natural gas power plant that powers the steel plant - that one being across the road from xAI's cluster.

          It's quite hard for me to imagine a world where it's the AI cluster that moves the needle on local pollution.

    • rl3 1 hour ago
      It's almost like there's enough people in the world that we can focus on and tackle multiple problems at once.
    • ltbarcly3 1 hour ago
      You are the masses. Are you afraid?
      • das_keyboard 1 hour ago
        They don't need to instill fear in everyone, but only a critical mass and most importantly _regulators_.

        So there will be laws, because not everyone can be trusted to host and use this "dangerous" new tech.

        And then you have a few "trusted" big tech firms forming an oligopoly of AI, with all of the drawbacks.

      • Xss3 1 hour ago
        HN commenters are not representative.
  • mono442 58 minutes ago
    I don't think it's true. It is probably overhyped but it is legitimately useful. Current agents can do around 70% of coding stuff I do at work with light supervision.
    • latexr 33 minutes ago
      > It is probably overhyped

      That’s exactly what a con is: selling you something as being more than what it actually is. If you agree it’s overhyped by its sellers, you agree it’s a con.

      > Current agents can do around 70% of coding stuff I do

      LLMs are being sold as capable of significantly more than coding. Focusing on that singular aspect misses the point of the article.

  • falcor84 33 minutes ago
    > We should be afraid, they say, making very public comments about “P(Doom)” - the chance the technology somehow rises up and destroys us.

    > This has, of course, not happened.

    This is so incredibly shallow. I can't think of even a single doomer who ever claimed that AI would have destroyed us by now. P(doom) is about the likelihood of it destroying us "eventually". And I haven't seen anything in this post or in any recent developments to make me reduce my own p(doom), which is not close to zero.

    Here are some representative values: https://pauseai.info/pdoom

    • Meneth 9 minutes ago
      > This has, of course, not happened.

      And that's the anthropic fallacy. In the worlds where it has happened, the author is dead.

      • falcor84 6 minutes ago
        A very good point too.

        Though I personally hope that we'll have enough of a warning to convince people that there is a problem and give us a fighting chance. I grew up on Terminator and would be really disappointed if the AI kills me in an impersonal way.

  • vegabook 31 minutes ago
    The other urgency trick that is not mentioned is "oooh, China!!", which is used to short-circuit all kinds of regulation and ethics, especially concerning fair access to energy for actual humans, and to plunder the public balance sheet with requests for government guarantees for their wild spending plans.
  • lxgr 58 minutes ago
    Considerations around current events aside, what exactly is the supposed "confidence trick" of mechanical or electronic calculators? They're labor-saving devices, not arbiters of truth, and as far as I can tell, they're pretty good at saving a lot of labor.
  • baq 1 hour ago
    "People are falling in love with LLMs" and "P(Doom) is fearmongering" so close to each other is some cognitive dissonance.

    The 'are LLMs intelligent?' discussion should be retired at this point, too. It's academic; the answer doesn't matter for businesses and consumers, it matters for philosophers (which everyone is, at least a little bit). 'Are LLMs useful for a great variety of tasks?' gets a resounding 'yes'.

  • petesergeant 25 minutes ago
    Reading AI-denier articles in 2026 is almost as boring as reading crypto-booster articles was 10 years ago. You may not like LLMs, you may not want LLMs, but pretending they're not doing anything clever or useful is bizarre, however flowery you make your language.
  • lyu07282 1 hour ago
    I think it's interesting how gamers have developed a pretty healthy aversion to generative AI in video games. Steam and Itch both now make it mandatory for games to disclose generative AI use, and recently even the beloved Larian Studios came under fire for using AI for concept art. Gamers hate that shit.

    I think that's good, but the whole "AI is literally not doing anything" idea, that it's just some mass hallucination, has to die. Gamers argue it takes jobs away from artists; programmers seem to have to argue it doesn't actually do anything, for some reason. Isn't that telling?

    • timschmidt 1 hour ago
      > programmers seem to have to argue it doesn't actually do anything for some reason.

      It's not really hard to see... spend your whole life defining yourself around what you do that others can't or won't, then an algorithm comes along which can do a lot of the same. Directly threatens the ego, understandings around self-image and self-worth, as well as future financial prospects (perceived). Along with a heavy dose of change scary, change bad.

      Personally, I think the solution is to avoid building your self-image around material things, and to welcome and embrace new tools which always bring new opportunities, but I can see why the polar opposite is a natural reaction for many.

    • bandrami 1 hour ago
      IDK, I think it's at least reasonable to look at the fact that there isn't a ton of new software available out there and conclude "AI isn't actually making software creation any faster". I understand the counterarguments to that but it's hardly an unreasonable conclusion.
    • falloutx 18 minutes ago
      That is consumer choice: a consumer has a right to know whether something is made using a tech which could make them unemployed. I wouldn't pay $70, or even $10, for a game that I know someone didn't put effort into.
    • Chance-Device 1 hour ago
      I think this is probably a trend that will erode with time, even now it’s probably just moved underground. How many human artists are using AI for concepts then laundering the results? Even if it’s just idea generation, that’s a part of the process. If it speeds up throughput, then maybe that’s fewer jobs in the long run.

      And if AI assisted products are cheaper, and are actually good, then people will have to vote with their wallets. I think we’ve learned that people aren’t very good at doing that with causes they claim to care about once they have to actually part with their money.

      • HWR_14 47 minutes ago
        A huge issue with voting with your wallet is fraud. It's easy to lie about having no AI in your process. Especially if the final product is laundered by a real artist.
      • lyu07282 1 hour ago
        Because voting with your wallet is nonsense; we can decide what society we want to live in, and we don't have to accept one in which human artists can't make a living. Capitalism isn't a force of nature we discovered like gravity; it's deliberate choices we made.
        • Chance-Device 14 minutes ago
          Which I assume is why you pay someone to hand-paint scenes from your holidays instead of taking photographs? And why you employ someone to wash your clothes on a scrubbing board instead of using a machine?

          Or would you prefer these things be outlawed to increase employment?

    • Al-Khwarizmi 1 hour ago
      I haven't gamed much in the last few years due to severe lack of time so I'm out of touch, but I used to play a lot of CRPGs and I always dreamed of having NPCs who could talk and react beyond predefined scripted lines. This seems to finally be possible thanks to LLMs and I think it was desired by many (not only me). So why are gamers not excited about generative AI?
    • danielbln 59 minutes ago
      > Gamers hate that shit.

      Unless AI is used for code (which it is, surely, almost everywhere), gamers don't give a damn. Also, Larian didn't use it for concept art; they used it to generate the first mood board to give to the concept artist as a guideline. And then there is Ark Raiders, which uses AI for all its VO, and that game is a massive hit.

      This is just a breathless bubble; the wider gaming audience couldn't give two shits whether studios use AI or not.

    • lpcvoid 59 minutes ago
      I think the costs of LLMs (huge energy hunger, people being fired because of them, a hostile takeover of human creativity, and computer hardware rising in cost exponentially) are by far larger than the uses (generating videos of fish with arms, programming slightly faster, writing slop emails to talented people).

      I know LLMs won't vanish again magically, but I wish they would every time I have to deal with their output.

  • self_awareness 40 minutes ago
    > If your answer doesn’t match the calculator’s, you need to redo your work.

    Hm... is it wrong to think like this?

  • Traubenfuchs 43 minutes ago
    Yeah, there is overhyped marketing, but at this point AI has revolutionized software engineering, is writing the majority of code worldwide whether you like it or not, and is still improving.
  • ltbarcly3 1 hour ago
    I think anyone who thinks that LLMs are not intelligent in any sense is simply living in denial. They might not be intelligent in the same way a human is intelligent, they might make mistakes a person wouldn't make, but that's not the question.

    Any standard of intelligence devised before LLMs is passed by LLMs relatively easily. They do things that 10 years ago people would have said are impossible for a computer to do.

    I can run claude code on my laptop with an instruction like "fix the sound card on this laptop" and it will analyze my current settings, determine what might be wrong, devise tests to have me gather information it can't gather itself, run commands to probe the hardware for its capabilities, offer a menu of solutions, give the commands to implement one, and finally test that the solution works perfectly. Can you do that?

    • qsera 5 minutes ago
      It is the imitation of intelligence.

      It works because people have answered similar questions a million times on the internet and the LLMs are trained on them.

      So it will work for a while. When the human-generated stuff stops appearing online, LLMs will quickly fall in usefulness.

      But that is enough time for the people who think it is going to last forever to make huge investments in it, and for the AI companies to get away with the loot.

      Actually it is the best kind of scam...

    • dependency_2x 1 hour ago
      I'm vibe coding now, after work. I am able to explore the landscape of a problem much more quickly, getting into and out of dead ends in minutes instead of wasting an evening. At some point I need to go in and fix things, but the benefit of the tool is there. It is like an electric screwdriver vs. a normal one. Sometimes the normal one can do things the electric can't, but hell, if you get an IKEA delivery you want the electric one.
      • HWR_14 45 minutes ago
        Bad example. IKEA assembles better with a manual screwdriver.
        • Traubenfuchs 42 minutes ago
          You wouldn't say that anymore if you had ever assembled PAX doors.
          • HWR_14 36 minutes ago
            Maybe? I'm not familiar with every IKEA product. But it looks like it takes a dozen small screws into soft wood.
      • hexbin010 1 hour ago
        Got any recent specific examples of it saving you an entire evening?
        • Traubenfuchs 28 minutes ago
          0. Claude, have a look at frontend project A and backend project B.

          1. create a skeleton clone of frontend A, named frontend B, which is meant to be the frontend for backend project B, including the oAuth configuration

          2. create the kubernetes yaml and deployment.sh, it should be available under b.mydomain.com for frontend B and run it, make sure the deployment worked by checking the page on b.mydomain.com

          3. in frontend B, implement the UI for controller B1 from backend B, create the necessary routing to this component and add a link to it to the main menu, there should be a page /b1 that lists the entries, /b1/xxx to display details, /b1/xxx/edit to edit an entry and /b1/new to create one

          4. in frontend B, implement the UI for controller B2 from backend B, create the necessary routing to this component and add a link to it to the main menu, etc.

          etc.

          All of this is done in 10 minutes. Yeah I could do all of this myself, but it would take longer.

          • falloutx 11 minutes ago
            Did you need it, though? Most projects I see being done with Claude Code are just people's personal projects, which they wouldn't have wasted their time on in the past, but now they get pulled into the terminal thinking it's only gonna take 20 minutes and end up burning hundreds of subscription dollars on it. If there is no other maintainer and the project is all yours, I don't see any harm in doing it.
      • SwoopsFromAbove 1 hour ago
        And is the electric one intelligent? :p
    • slg 58 minutes ago
      > I can run claude code on my laptop with an instruction like "fix the sound card on this laptop" and it will analyze my current settings, determine what might be wrong, devise tests to have me gather information it can't gather itself, run commands to probe the hardware for its capabilities, offer a menu of solutions, give the commands to implement one, and finally test that the solution works perfectly. Can you do that?

      Yes, I have worked in small enough companies in which the developers just end up becoming the default IT help desk. I never had any formal training in IT, but most of that kind of IT work can be accomplished with decent enough Google skills. In a way, it worked the same as you and the LLM. I would go poking through settings, run tests to gather info, run commands, and overall just keep trying different solutions until either one worked or it became reasonable to give up. I'm sure many people here have had similar experiences doing the same thing in their own families. I'm not too impressed with an LLM doing that. In this example, it's functionally just improving people's Googling skills.

    • kusokurae 1 hour ago
      It's incredible that on Hacker News we still encounter posts by people who will not or cannot differentiate mathematics from magic.
      • adrianN 1 hour ago
        Intelligence is not magic though. The difference between intelligence and mathematics could plausibly be the same kind of difference as that between chemistry and intelligence.
        • qsera 14 minutes ago
          There is Intelligence and there is Imitation of Intelligence. LLMs do the latter.

          Talk to any model about deep subjects. You'll understand what I am saying. After a while it will start going around in circles.

          FFS, ask it to make an original joke, and be amused.

          • adrianN 1 minute ago
            Many animals are clearly intelligent, but I can't talk to them at all.
          • obsoleetorr 10 minutes ago
            > After a while it will start going around in circles.

            so like your average human

            > FFS ask it to make an original joke, and be amused..

            let's try this one on you - say an original joke

            oh, right, you don't respond to strangers' prompts, thus you have agency, unlike an LLM

      • energy123 1 hour ago
        Human intelligence is chemistry and biology, not magic. OK, now what?
      • ACCount37 52 minutes ago
        Your brain is just math implemented in wet meat.
      • obsoleetorr 1 hour ago
        it's also incredible we find people who can't differentiate physics/mathematics from the magic of the human brain
    • jaccola 1 hour ago
      There are dozens of definitions of "intelligence", we can't even agree what intelligence means in humans, never mind elsewhere. So yes, by some subset of definitions it is intelligent.

      But by some subset of definitions my calculator is intelligent. By some subset of definitions a mouse is intelligent. And, more interestingly, by some subset of definitions a mouse is far more intelligent than an LLM.

    • SwoopsFromAbove 1 hour ago
      I also cannot calculate the square root of 472629462.

      My pocket calculator is not intelligent. Nor are LLMs.

      • HWR_14 41 minutes ago
        You'd be surprised. You could probably get three digits of the square root in under a minute if you tried.
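
        A couple of Newton iterations is enough (a sketch; the starting guess is just eyeballed from 20000^2 = 4.0e8):

            # Newton's method for square roots: x_next = (x + N/x) / 2
            N = 472629462
            x = 20000.0            # eyeballed: 20000^2 = 4.0e8, near 4.73e8
            for _ in range(3):
                x = (x + N / x) / 2
            print(x)               # ~21740.04, i.e. 2.17e4 to three digits
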
    • techpression 1 hour ago
      I did that when I was 14 because I had no other choice, damn you SoundBlaster! I didn't get any menu but I got sound in the end.

      I don't think conflating intelligence with "what a computer can do" makes much sense though. I can't calculate the Xth digit of pi in less than Z time; I'm still intelligent (or I pretend to be).

      But the question is not about intelligence, that's a red herring; it's just about utility, and LLMs are useful.

    • TeriyakiBomb 1 hour ago
      Everything is magic when you don't understand how things work.
    • dgxyz 1 hour ago
      [dead]
    • exceptione 1 hour ago
      In a way LLMs are intelligence tests indeed.
  • huflungdung 1 hour ago
    [dead]