
  • woolion 1 hour ago
    > We assert that artificial intelligence is a natural evolution of human tools developed throughout history to facilitate the creation, organization, and dissemination of ideas, and argue that it is paramount that the development and application of AI remain fundamentally human-centered.

    While this is a noble goal, it seems obvious that this isn't how it usually goes. For instance, "free market" is often invoked as a dogma to defend companies that are actively harmful to society, as "globalization" might be: an unstoppable force, so any form of opposition is "luddite behavior". Another example is easier transport and remote communication, which generally broke down the social fabric. Or social media wreaking havoc on teens' minds. From there, it's easy to see why the technological system might be seen as an inherent evil. In 1872's Erewhon, Butler already described the technological system as a force that human society could no longer contain once it had tolerated it. There are already many companies persecuting their employees for not using AI enough, even when the employee's objection is that the quality of its output is not good enough for the work at hand, rather than anything ideological.

    I'm neither optimistic nor pessimistic about the changes that AI might bring, but hoping for it to become "human-centered" seems almost as optimistic as hoping for "humane wars".

    • cowpig 57 minutes ago
      > "free market" is often used as a dogma against companies that are actively harmful to society

      This is a predominantly America-specific piece of propaganda, and it's pretty recent.

      Adam Smith's ideas are primarily arguments against mercantilism (e.g. things like using tariffs to wield self-interested state power), something he showed to be against the common good. The "invisible hand" concept is used to show how self-interested action can, under conditions of *competitive markets*, lead to unintentional alignment with the common good.

      Obviously that's a significant departure from the way it's commonly used today, where Thiel's book Zero to One has influenced so many entrepreneurs into believing Monopolies are Good.

      But the history here is heavily Cold War-influenced: "free markets" were politically positioned as the alternative to the USSR's "planned economy", and the term was pushed slowly further and further from Adam Smith's original argument about moral philosophy.

    • Izikiel43 52 minutes ago
      Globalization was great for poor countries, not so much for developed economies.
      • js8 43 minutes ago
        No it wasn't. Look at Joseph Stiglitz (Globalization and Its Discontents) and Ha-Joon Chang (Bad Samaritans, Kicking Away the Ladder) for counter-examples.
      • nutjob2 23 minutes ago
        This isn't correct. The deal is that the poor countries get development and increased employment, and the rich countries get lower prices. Generally speaking both types of countries get richer.

        That some workers lost their jobs is a symptom of any change. I don't know why people always get upset about people losing their jobs. It's like death: if no one died, relatively few people would be born. If you resist job losses, you reduce overall employment and economic development.

        • chromacity 13 minutes ago
          Are you serious? People get upset about losing jobs because they need jobs to pay their bills. Further, we often build our life identities around work; if you're a good car mechanic or a successful restaurant owner, you're proud of that. It's a part of you.

          Having to repeatedly restart your career is risky, painful, and demoralizing. I have no problem seeing why people don't like that and why it can lead to populist backlash or even violent revolutions (as it did in the past).

          By the way, to address your closing comment: people don't like dying either and tend to get upset when others die?

  • sendes 59 minutes ago
    > We assert that artificial intelligence is a natural evolution of human tools.

    While this is actually asserted nowhere in the paper but the abstract, a Whiggish narrative about a genuinely unprecedented technology, one that can replace and supersede human "labour" altogether (one is reminded of "The Evolution of Human Science" by Ted Chiang), sounds naive at best, dangerous at worst.

    • jebarker 55 minutes ago
      I don’t see why “natural evolution of human tools” implies “such that it can replace and supersede human labor altogether”. Can you clarify?
      • sendes 48 minutes ago
        A common error in historical thinking is to see human tools as a simple linear plot of progress against time. But until AI, these tools shared the common property of enhancing human cognition, because they couldn't do the thinking _for you_. AI can do just that, and for all the benefit it brings, seeing it simply as the next step in the "natural evolution of human tools" is alarmingly disarming coming from frontier thinkers.
        • nutjob2 18 minutes ago
          * For certain speculative definitions of AI
        • lovelearning 10 minutes ago
          > these tools until AI had the common property of being enhancing of human cognition, because they couldn't do the thinking for you

          I have a different take, centered around this idea: Not everyone was into thinking about everything all the time even before AI. I'd say most people most of the time outsourced actual thinking to someone else.

          1) Reading non-fiction books:

          Not all books, even the non-fiction ones, necessarily require any thinking by the reader. A book that narrates history, for example, requires much less thinking than something like "The Road to Reality" or "Gödel, Escher, Bach."

          Most of us outsourced the thinking and historical method to the authors of the history book and just passively consumed some facts or factoids. Some of us memorize and remember these factoids well, but that's not thinking, just knowledge storage.

          Philosophically, what's the difference between consuming books this way and reading an LLM's output?

          2) Reading research papers:

          Most people don't read any research papers at all. No thinking there. Most people don't head to some forum to ask about the latest research either. Also, researchers in most fields don't come out and do outreach regularly.

          Indeed, an LLM may actually be the only pathway for a lot of people to get at least _some_ knowledge of and awareness about the latest research.

          Those of us in scientific, engineering, humanities, or healthcare fields may read some to many papers. But only a small subset reads very critically, looking for data errors, inconsistencies, etc. For most of us, the knowledge and techniques may be beyond our current understanding, and we may have no interest in understanding them in the future either.

          Most of us are just interested in the observations or conclusions or applications. Those may involve some thinking but also may not involve any thinking, just blind acceptance of the paper's claims and possible applications.

          3) Coding:

          Again, deep thinking is only done by a small set of programmers. Like the ones who write kernels, compilers, distributed algorithms, complex libraries.

          But most are just passive consumers who read some examples online or ask Stack Overflow or Reddit for direct answers. Some even outsource all their coding entirely to gig sites. Not much thinking there except about pricing and scheduling. What's the difference between that and asking an LLM or copying an LLM's answers? At least the LLMs patiently explain their code, unlike salty SO users!

          ----

          IMO, most people weren't doing much thinking even pre-AI.

          Post-AI, it's true that some people who did do some thinking may reduce it.

          But it's equally true that those people who weren't doing much thinking due to access or language barriers can actually start doing some thinking now with the help of AI.

    • Zigurd 47 minutes ago
      I'm glad I can still count on HN to come across the correct use of a lesser-known definition of a word.
    • nutjob2 20 minutes ago
      > supersede human "labour" altogether

      For certain types of labor this has always been the case.

      The idea that AI will entirely replace all, or most, human labor makes no sense and is just AI hype.

      Like all technology before it, AI will improve most people's lives.

  • gradstudent 1 hour ago
    I skimmed the paper a couple of times, hoping to find the promised (from the abstract)

    > pathway to integrating AI into our most challenging and intellectually rigorous fields to the benefit of all humankind.

    There's very little insight here, though. It seems mostly a retread of conversations we've been having in the academic community for a few years now. In particular, I was hoping to see some discussion of how we might restructure our educational institutions around this technology, when the machines rob students of the opportunity to develop critical thinking skills. Right now our best idea seems to be a retreat to oral and written examinations, an idea which doesn't scale and which ignores the supposed benefits of human+AI reasoning. The alternative suggestion I've seen is to teach prompt engineering, which seems (a) hard for foundational subjects and (b) again, to outsource much of the thinking to the AI instead of extending the reach of human thought.

    • nutjob2 14 minutes ago
      > when the machines rob students of the opportunity to develop critical thinking skills

      This is a fundamental misunderstanding of human nature. Machines don't rob people of critical thinking skills; people do. Mostly people do it to themselves, often inheriting the tendency from their parents or social environment.

    • BDPW 1 hour ago
      Physical classrooms don't really scale either; is that really a fundamental problem?
      • bonoboTP 30 minutes ago
        Yes. Tools like Khan Academy help lots of talented kids progress through the curriculum beyond what's on offer in the physical classrooms available to them.
      • lo_zamoyski 1 hour ago
        Indeed. Education isn't supposed to "scale". We've mucked around with education so much and subjected it to tech fad after tech fad that we hardly have anything resembling education.

        Because this has been going on so long, most people's reference point for what constitutes "education" is simply off, mistaking "training" or something like that for it. But the purpose of education is intellectual formation, the ability to reason competently, and the comprehension of basic reality, which enables genuine intellectual freedom (there are moral presuppositions, too; immorality deranges the mind). This is what the classical liberal arts were about.

        The very bare minimum criterion (and it is a very bare minimum) for someone to be able to claim to be educated is not only knowledge of their field, but knowledge of the intellectual nature, foundations, and basis of their field in the greater intellectual scope. I would not hold someone with only that bare minimum in especially high esteem vis-a-vis education, but even that bar is higher than what education today provides.

        • bonoboTP 32 minutes ago
          There are simply not enough teachers who can provide such an ideal, imagined education, at least not at current teacher salaries (and the required level is very far off). The educational strategy has to scale to real people, real teachers and real students as they are in the flesh, not some ivory tower pipe dream. We've had decades of this "we should teach how to think, not what to think".

          Alternatively, if you don't care about scale, as in rolling out a system to the population at large, then yeah, this kind of advanced education exists; it's just very selective, found in advanced extracurricular programs or obtained through private tutors.

  • GodelNumbering 52 minutes ago
    > Today, unlike in the Luddites’ time, we are already seeing skilled workers replaced not with lower-wage human labor, but with AI.

    To me this is the weakest claim of the article. This claim has been thrown around endlessly without proof.

    https://fred.stlouisfed.org/series/IHLIDXUSTPSOFTDEVE

    Software engineer job openings, for instance, are at a two-year high (still far lower than during the COVID dislocations, though), but arguably all enterprise AI was built or deployed in the last two years. We should have seen a crash in job openings if the AI job-replacement claim were correct.
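
    As a rough sanity check, anyone can pull the series straight from FRED's public CSV export and compare the latest reading against the past two years. Here's a minimal sketch in Python, assuming the fredgraph.csv endpoint and renaming the columns positionally since FRED's header labels vary:

      # Sketch: is the Indeed software-development postings index
      # (FRED series IHLIDXUSTPSOFTDEVE) at a two-year high?
      import io
      import urllib.request

      import pandas as pd

      URL = "https://fred.stlouisfed.org/graph/fredgraph.csv?id=IHLIDXUSTPSOFTDEVE"

      with urllib.request.urlopen(URL) as resp:
          df = pd.read_csv(io.StringIO(resp.read().decode("utf-8")))

      # First column is the observation date, second is the index value
      # (the Indeed series is indexed so that Feb 1, 2020 = 100).
      df.columns = ["date", "value"]
      df["date"] = pd.to_datetime(df["date"])
      df["value"] = pd.to_numeric(df["value"], errors="coerce")
      df = df.dropna()

      # Restrict to the trailing two years and compare the latest reading.
      cutoff = df["date"].max() - pd.DateOffset(years=2)
      recent = df[df["date"] >= cutoff]
      latest = recent.iloc[-1]

      print(f"Latest ({latest['date']:%Y-%m-%d}): {latest['value']:.1f}")
      if latest["value"] >= recent["value"].max():
          print("At a two-year high")
      else:
          print("Below the two-year high")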

    This is something I've spent some time thinking about (personally written article, not AI slop): https://www.signalbloom.ai/posts/why-task-proficiency-doesnt...

  • zaikunzhang 3 hours ago
    • mchinen 23 minutes ago
      He was also on Dwarkesh's podcast last week (https://www.youtube.com/watch?v=Q8Fkpi18QXU).

      I enjoyed the human->depth vs AI->breadth discussion, and the image of the waterline rising slowly to cover the 50 lowest-hanging Erdős problems but struggling on the next few.

    • anotherpaulg 2 hours ago
      Recorded 10 February 2026. Terence Tao of the University of California, Los Angeles, presents "Machine assistance and the future of research mathematics" at IPAM's AI for Science Kickoff.
  • bluecheese452 2 hours ago
    Enough Terence Tao spam.
    • ancillary 23 minutes ago
      So much of HN is half-baked anecdotes about and by LLMs, or philosophizing from VCs who talked to an LLM about René Girard for twenty minutes, or pop-sci articles that appear to be posted so that some bored developer can read the abstract and one experiment and dunk on it. Tao is uniquely positioned as a mathematician who has made enormous contributions to many areas, and he is old enough to contextualize it all against the past and young enough to be open to its possible futures. More Tao spam sounds good to me!
    • mchinen 26 minutes ago
      I haven't seen any negative sentiment toward Terence Tao before. Coming from outside the academic math sphere, I'm genuinely curious whether there's a real issue or whether this comment is just spam itself.