LLMs can be exhausting

(tomjohnell.com)

128 points | by tjohnell 7 hours ago

27 comments

  • cglan 4 hours ago
    I find LLMs so much more exhausting than manual coding. It’s interesting. With modern LLMs you bump into the limit of how much a single human can feasibly keep track of pretty fast.

    I assume that until LLMs are 100% better than humans in all cases, there will be a pretty hard upper bound on what I can do as long as I have to be in the loop, and it seems like we’ve roughly hit that limit.

    Funny enough, I get this feeling with a lot of modern technology. iPhones, all the modern messaging apps, etc. make it much too easy to fragment your attention across a million different things. It’s draining. Much more draining than the old days.

    • superfrank 24 minutes ago
      > I find LLMs so much more exhausting than manual coding

      I do as well, so I totally know what you're talking about. There's part of me that thinks it will become less exhausting with time and practice.

      In high school and college I worked at an Italian place that did dine-in, to-go, and delivery orders. I got hired as a delivery driver and loved it. A couple of years in, there was a spell of really high turnover, so the owners asked me to be a waiter for a little while. The first couple of months I found the small talk and the need to always be "on" absolutely exhausting, but over time I found my routine and it became less so. I definitely loved being a delivery driver far more, but eventually I did hit a point where I didn't feel completely drained after every shift of waiting tables.

      I can't help but think coding with LLMs will follow a similar pattern. I don't think I'll ever like it more than writing the code myself, but I have to believe at some point I'll have done it enough that it doesn't feel completely draining.

    • hombre_fatal 4 hours ago
      I think the upper limit is your ability to decide what to build among infinite possibilities. How should it work, what should it be like to use it, what makes the most sense, etc.

      The code part is trivial, and in some ways a waste of time compared to the time spent deciding what to build. Sometimes it's even procrastination to avoid thinking about what to build, like people who polish their game engine (easy) to avoid putting in the work to plan a fun game (hard).

      The more clarity you have about what you’re building, the larger the blocks of work you can delegate or outsource.

      So I think one overwhelming part of LLMs is that you don’t get the downtime of working on implementation since that’s now trivial; you are stuck doing the hard part of steering and planning. But that’s also a good thing.

      • SchemaLoad 4 hours ago
        I've found that writing the code massively helps your understanding of the problem and of what you actually need or want. Most times I go into a task with a certain idea of how it should work, and then reevaluate once I've started. An LLM will just do what you ask without questioning it, leaving you with none of the learnings you would have gained by doing it yourself. And the LLM certainly didn't learn or remember anything from it either.
        • jeremyjh 4 hours ago
          In some cases, yes. But I’ve been doing this a while now, and there is a lot of code that has to be written that I will not learn anything from. Now I have a choice not to write it.
          • orbisvicis 1 hour ago
            Ehh, I find that the most tedious code is also the most sensitive to errors, stuff that blurs the divide between code and data.
            • jeremyjh 1 hour ago
              I doubt we're talking about the same sort of things at all. I'm talking about stuff like generic web CRUD: too custom to be generated deterministically, but recent models crush it and make fewer errors than I do. And that is not even all they can do. Yes, once you get into a large, complicated codebase it's not always worth it, but even there one benefit is that it will develop more test cases - and more complicated ones - than I would realistically bother with.
        • stavros 3 hours ago
          It depends on how you use them. In my workflow, I work with the LLM to get the desired result, and I'm familiar with the system architecture without writing any of the code.

          I've written it up here, including the transcript of an actual real session:

          https://www.stavros.io/posts/how-i-write-software-with-llms/

          • jeremyjh 1 hour ago
            Thanks for writing this up.

            I only recently woke up myself and found out these tools were actually becoming really, really good. I use a similar prompt system, but with less focus on review - I've found the review bots to be really good already, but it is more efficient to work locally.

            One question, since you mention using lots of different models: do you ever have to tweak prompts for a specific model, or are they pretty universal?

      • galaxyLogic 4 hours ago
        Right - when you're coding with an LLM, it's not you asking the LLM questions, it's the LLM asking you questions about what to build, how exactly it should work, and whether it should do this or that under which conditions. Because the LLM does the coding, you have to do more of the thinking. :-)

        And when you make the decisions, it is you who is responsible for them. Whereas when you did the coding yourself, the decisions about the code were left largely to you and nobody much saw them, only how they affected the outcome. Now the LLM is in that role, and you are responsible only for what the code does, not how it does it.

      • clickety_clack 4 hours ago
        I’d love to see what you’ve built. Can you share?
      • grey-area 3 hours ago
        Maintenance is the hard part, not writing new code or steering and planning.
    • raincole 4 hours ago
      If you care about code quality, of course it is exhausting. It's supposed to be: there is now more code whose quality you have to assure in the same length of time.
    • akomtu 2 hours ago
      You used to be a Formula 1 driver. Now you are an instructor for a Formula 1 autopilot. You have to watch it at all times with full attention for it's a fast and reckless driver.
    • senectus1 4 hours ago
      I imagine code reviewing is a very different sort of skill than coding. When you vibe code (assuming you're reading the code that is written for you), you become a code reviewer... I suspect you're learning a new skill.
      • qudat 3 hours ago
        It’s easier to write code than read it.
        • j3k3 3 hours ago
          I'd argue the reading and writing happen simultaneously as one goes along, writing code by hand.
      • pessimizer 3 hours ago
        The way I've tried to deal with it is by forcing the LLM to write code that is clear, well-factored and easy to review i.e. continually forcing it to do the opposite of what it wants to do. I've had good outcomes but they're hard-won.

        The result is that I could say that it was code that I myself approved of. I can't imagine a time when I wouldn't read all of it, when you just let them go the results are so awful. If you're letting them go and reviewing at the end, like a post-programming review phase, I don't even know if that's a skill that can be mastered while the LLMs are still this bad. Can you really master Where's Waldo? Everything's a mess, but you're just looking for the part of the mess that has the bug?

        I'm not reviewing after I ask it to write some entire thing. I'm getting it to accomplish a minimal function, then layering features on top. If I don't understand where something is happening, or I see it's happening in too many places, I have to read the code in order to tell it how to refactor the code. I might have to write stubs in order to show it what I want to happen. The reading happens as the programming is happening.

  • rednafi 4 hours ago
    I have always enjoyed the feeling of aporia during coding. Learning to embrace the confusion and the eventual frustration is part of the job. So I don’t mind running in a loop alongside an agent.

    But I absolutely loathe reviewing these generated PRs - more so when I know the submitter themselves has barely looked at the code. Now corporate has mandated AI usage and is asking people to do 10k LOC PRs every day. Reviewing this junk has become exhausting.

    I don’t want to read your code if you haven’t bothered to read it yourself. My stance is: reviewing this junk is far more exhausting. Coding is actually the fun part.

    • bmurphy1976 2 hours ago
      > Now corporate has mandated AI usage and is asking people to do 10k LOC PRs every day.

      That's a big red flag if I ever saw one. Corporate should be empowering the engineering team to use AI tooling to improve their own process organically. Is this true or an exaggeration? If it's true, I'd start looking for a more balanced position at a more disciplined org.

      • rednafi 50 minutes ago
        True at DoorDash, Amazon, and Salesforce - speaking from experience.
    • anonzzzies 4 hours ago
      I always wonder where HNers work or have worked. We do ERP and troubleshooting on legacy systems for medium to large corps. PRs by humans were always pretty random and barely looked at, even though a human wrote them (copy/pasted from SO and changed somewhat); if you ask what the code does, they cannot tell you. This is not an exception - it is the norm as far as I can see outside HN. People who talk a lot, don't understand anything, and write code that is almost alien. LLMs, for us, are a huge step up. There is a 40-deep nested if, with a loop to keep it from failing on a missing case, in a critical ERP system at Shell (the company); an LLM would not do that. It is a nightmare, but keeping things like that running makes us a lot of money.
      • sarchertech 3 hours ago
        I currently work at one of the biggest tech companies. I’ve been doing this for over 20 years, and I’ve worked at scrappy startups, unicorns, and medium size companies.

        I’ve certainly seen my share of what I call slot-driven development, where a developer just throws things at the wall until something mostly works. And plenty of cut-and-paste development.

        But it’s far from the majority. It’s usually the same few developers at a company doing it, while the people who know what they’re doing furiously work to keep things from falling apart.

        If the majority of devs were doing this nothing would work. My worry is that AI lets the bad devs produce this kind of work on a massive scale that overwhelms the good devs ability to fight back or to even comprehend the system.

        • rednafi 42 minutes ago
          I also work at a huge company, and this observation is true. The way AI is being rammed down our throats is burning out the best engineers. OTOH, the mediocre simian army “empowered” by AI is pushing slop like there’s no tomorrow. The expectation from leadership, who tried Claude for a single evening, is that you should be able to deliver everything yesterday.

          The resilience of the system has taken a massive hit, and we were told that it doesn’t matter. Managers, designers, and product folks are being asked to make PRs. When things cause Sev0 or Sev1 incidents, engineers are being held responsible. It’s a huge clown show.

        • anonzzzies 3 hours ago
          Tech companies, sure. How about massive companies whose product isn't software? I don't know where it is not the norm, and I have been inside very many of them as a supplier for the past 30 years. Tech companies are a bit different, as they usually have leadership that prioritizes these things.
          • sarchertech 1 hour ago
            Non-tech companies too. You can’t build large-scale software with everyone merging PRs like that. My guess is that as a supplier you are getting a pretty severe sampling bias.
      • nightpool 3 hours ago
        I would hope that most people who are technically competent enough to be on HN are technically competent enough to quit orgs with coding standards that bad. Or they're masochists who have taken on the challenge of working to fix them.
        • heromal 39 minutes ago
          Neither of those. The pay is great and if all leadership cares about is making the whole company "AI Native" and pushing bullshit diffs, I'll play ball.
    • shiandow 3 hours ago
      The one thing I don't quite get is how running a loop alongside an agent is any different from reviewing those PRs.
    • bsjshshsb 2 hours ago
      Use AI to review.
  • P-MATRIX 15 minutes ago
    I think the fatigue is specifically about opacity. When you review agent output, you're not just checking correctness—you're trying to reconstruct what state the agent was in when it made each call. That reconstruction is the expensive part. If you already know the agent's tool pattern and drift trajectory while it ran, review shifts from guessing to confirming. Still work, but a different kind.
  • olejorgenb 3 hours ago
    I find working more asynchronously with the agents helps. I've disabled the in-your-face agent-is-done/needs-input notifications [1]. I work across a few different tasks at my own pace. It works quite well, and when/if I find a rhythm to it, it's absolutely less intense than normal programming.

    You might think that the "constant" task switching is draining, but I don't switch that frequently. Often I keep the main focus on one task and use the waiting time to draft some related ideas/thoughts/next prompt, or to browse through the code for light review/understanding. It also helps to have one big/complex task and a few simpler things going concurrently. And since the number of details you have to keep "loaded" in your head per task is smaller, switching has less cost, I think. You can also "reload" much quicker by simply chatting with the agent for a minute or two if some detail has faded.

    I think a key thing is NOT to chase keeping the agents running at max efficiency. It's OK to let them sit idle while you finish up what you're doing. (Perhaps bad for KV-cache efficiency, though - I'm not sure how long they keep the cache.)

    (And obviously you should run the agent in a sandbox to limit how many approvals you need to consider)

    [1] I use the urgent-window hint to get a subtle indication of which workspace contains an agent ready for input.

    EDIT: disclaimer - I'm relatively new to using them, and have so far not used them for super complex tasks.

    • skybrian 3 hours ago
      Yes, I briefly felt like I needed to keep agents busy, but I got over it. The point of having multiple things going on is just so you have another task to work on.
  • rsanheim 48 minutes ago
    I’ve found LLM development expands the scope of what I can do to an absurd level. This is what exhausts me.

    My limits are now many of the same things that have always been core to software dev, but they are now even more obvious:

    - what is the thing we are building? What is the core product or bug fix or feature?

    - what are we _not_ building? What do we not care about?

    - do I understand the code enough to guide design and architecture?

    - can I guide dev and make good choices when it’s far outside my expertise, knowing just enough to “smell” when things are going off the rails?

    It’s a weird time

  • nanobuilds 2 hours ago
    Your human context also needs compacting at some point. After hours of working with an LLM, your prompts tend to become less detailed, you tend to trust the LLM more, and it's easier to go down a path that is not necessarily the best one. It becomes more of a brute-force, LLM-assisted "solve this issue" flow. What's funny is that sometimes it feels like the LLM is as exhausted as the human, and then context compaction makes it even worse.

    It's like with regular non-llm assisted coding. Sometimes you gotta sleep on it and make a new /plan with a fresh direction.

  • 193572 4 hours ago
    It looks like Stockholm syndrome, or a traditional abusive relationship from 100 years ago where the woman tries to figure out how to best prompt her husband to do something.

    You know you can leave abusive relationships. Ditch the clanker and free your mind.

  • razorbeamz 4 hours ago
    LLMs do not actually make anything better for anyone. You have to constantly correct them. It's like having a junior coder under your wing that never learns from its mistakes. I can't imagine anyone actually feeling productive using one to work.
    • bmurphy1976 3 hours ago
      I don't know what to think about comments like this. So many of them come from accounts that are days or at most weeks old. I don't know if this is astroturfing, or you really are just a new account and this is your experience.

      As somebody who has been coding for just shy of 40 years and has gone through the actual pain of learning to run a high-level, productive dev team, your experience does not match mine. Even great devs will forget some of the basics and make mistakes, and I wish every junior (hell, even seniors) were as effective as the LLMs are turning out to be. Put an LLM in the hands of a seasoned engineer who also has the skills to manage projects and mentor junior devs, and you have a powerful accelerator. I'm seeing the outcome of that every day on my team. The velocity is up AND the quality is up.

      • qudat 3 hours ago
        > The velocity is up AND the quality is up.

        This is not my experience on a team of experienced SWEs working on a product worth 100m/year.

        Agents are a great search engine for a codebase and really nice for debugging, but any time we have one write feature code it makes too many mistakes. We end up spending more time tuning the process than it would take to just write the code, AND you are trading human context for agent context that gets wiped.

        • bmurphy1976 3 hours ago
          I can't speak to your experience. I can only speak to mine.

          We've spent years reducing old debt and modernizing our application and processes. The places where we've made that investment are where we are currently seeing the additional acceleration. The places where we haven't are still stuck in the mud, but per your "search engine for a codebase" comment our engineers are starting to engage with systems they would not have previously touched.

          There are areas for sure where LLMs would fall down. That's where we need the experts to guide them and restructure the project so that it is LLM friendly (which also just happens to be the same things that make the app better for human engineers).

          And I'm serious about the quality comment. Maybe there's a difference in how your team is using the tools, but I have individuals on my team who are learning to leverage the tools to create better outputs, not just pump out features faster.

          I'm not saying LLMs solve everything, FAR from it. But it's giving a master weapon to an experienced warrior.

          • ccosky 2 hours ago
            Your experience matches mine too. Experienced devs are increasing their output while maintaining quality. I'm personally writing better-quality code than before, because it's trivial to tell the AI to refactor or rename something. I care about good code, but I'm also lazy, so I have my Claude skills set up to have the AI do it for me. (Of course, I always keep a human in the loop and review the outputs.)

            You said that you're restructuring the project to be LLM friendly, which also makes the app better for humans. I 100% agree with this. Code that is unreadable and unmaintainable for humans is much more difficult for AI to understand. I think companies that practiced or prioritized code hygiene will be ahead of the game when it comes to getting good results with agentic AI.

      • razorbeamz 3 hours ago
        Who would I possibly be astroturfing for? The entire industry is all-in on LLMs.
        • bmurphy1976 3 hours ago
          I can't speak for you specifically; it's just a trend I'm seeing, and unfortunately your 2-day-old account falls into that bucket. There are a lot of people who have a lot to lose or who are very afraid of what LLMs will do, so there's plenty of incentive to do this.

          I would be curious to know whether I'm just imagining this or it really is a trend.

          • j3k3 3 hours ago
            At the same time you have astro-turfing from LLM producers though, so...
            • bmurphy1976 3 hours ago
              Agreed, but I find that astro-turfing far more obvious.
    • jatora 3 hours ago
      You need to learn to use the tool better, clearly, if you have such an unhinged take as this.
      • Sirental 3 hours ago
        No, to be fair, I do see what he's saying. I see a major difference between the more expensive models and the cheaper ones. The cheaper (usually default) ones make mistakes all the damn time. You can be as clear as day with them and they simply don't have the context window or specs to make accurate, well-reasoned decisions, and it is a bit like having a terrible junior working alongside you, fresh out of university.
      • voxl 3 hours ago
        It's not unhinged at all, it's a lack of imagination on both of your parts.
      • razorbeamz 3 hours ago
        The only people who use LLMs "as a tool" are those who are incapable of doing it without using it at all.
  • codance 1 hour ago
    The shift from creation to verification is real, but I think the bigger issue is people treating LLM output as a black box to review. What works better: write specs and tests first, then let the LLM implement against them. You're no longer "reviewing code" — you're checking if tests pass. The cognitive load drops dramatically when verification is automated rather than manual.
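
    A minimal sketch of what that spec-first loop looks like (the function and strings here are invented for illustration): the human writes the assertions first, the LLM fills in the body, and "review" collapses to running the tests.

```python
import re

# Spec-first workflow sketch: the assertions at the bottom are the
# human-authored spec, written before any generation; the function body
# is the part delegated to the LLM. All names here are illustrative.

def slugify(title: str) -> str:
    # A plausible LLM-generated implementation against the spec below:
    # lowercase, collapse runs of non-alphanumerics to "-", trim dashes.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# Human-authored spec/tests, written before the implementation:
assert slugify("Hello, World!") == "hello-world"
assert slugify("  LLMs  are   exhausting ") == "llms-are-exhausting"
```

    The point isn't this particular function; it's that once the assertions exist, checking the LLM's work is a command, not a close reading.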
  • simonw 4 hours ago
    I wonder if it's more or less tiring to work with LLMs in YOLO/--dangerously-skip-permissions mode.

    I mostly use YOLO mode which means I'm not constantly watching them and approving things they want to do... but also means I'm much more likely to have 2-3 agent sessions running in parallel, resulting in constant switching which is very mentally taxing.

    • anilgulecha 1 hour ago
      It's orthogonal IMO. YOLO or not is simply a sign of trust for the harness or not. Trust slightly affects cognition, but not much. My working hypothesis: exhaustion is the residue of use of cognition.

      What impacts cognition for me, and IMO for a lot of folks, is how well we end up defining our outcomes. Agents are tremendous at working toward an outcome (hence why TDD red-green works wonderfully), but if you point them at a goal that's slightly off, you'll have to do the work of getting them back on track, which demands cognition.

      So the better you are at your initial research/plan phase, where you document all of your direction and constraints, the less effort is needed in the review.

      The other thing impacting cognition is how many parallel threads you're running. I have defaulted to a major/minor system: at any time I have one major project (higher cognition) and one minor agent (lower cognition) going. That's the level at which managing this stays comfortable.

  • sigbottle 3 hours ago
    I am rewriting an agent framework from scratch because another agent framework, combined with my prompting, led to 2023-level regressions in alignment (completely faking tests: echoing "completed" and then validating the test by grepping for the string "completed", when it was supposed to bootstrap a UDP tunnel over SSH for that test...).

    Many top labs [1][2] already have heavily automated code review, and it's not slowing down. That doesn't mean I'm trusting everything blindly, but yes, over time I should have to handle fewer and fewer "lower level" tasks myself, and it's a good thing if the automation can take them on.

    [1] https://openai.com/index/harness-engineering/ [2] https://claude.com/blog/code-review

    Further I want to vent about two things:

    - Things can be improved.

    - You are allowed to complain about anything, while not improving things yourself.

    I think the mid-2010s really popularized self-improvement in a way that you can't really argue with (if you disagree with "put in more effort and be more focused", you're obviously just lazy!). It's funny, because the point of engineering is to find better solutions, but technically, yes, an always-valid solution is just "suck it up".

    But moreover, if you do not allow these two premises, what ends up happening in practice for a lot of people is that any slight pushback can be read as "oh, they're just a whiner", and if they're not doing something to fix their problem this instant, that "obviously" validates your claim (and even if they are, it doesn't count; they should still not be a "debbie downer", etc.).

    Sometimes a premise can sound extreme, but people forget that premises do not exist in a complete logical vacuum: you actually live out and believe said premises, and in taking on a certain position, what matters is often what follows downstream from the behavior more than the actual words themselves.

  • jeremyjh 4 hours ago
    Most people reading this have probably had the experience of wasting hours debugging when exhausted, only to find it was a silly issue you’ve seen multiple times, or maybe you solve it in a few minutes the next morning.

    Working with an agent coding all day can be exhilarating but also exhausting - maybe it’s because consequential decisions are packed more tightly together. And yes cognition still matters for now.

  • anthonySs 4 hours ago
    LLMs aren’t exhausting; it’s the hype and all the people around them.

    same thing happened with crypto - the underlying technology is cool but the community is what makes it so hated

  • chalupa-supreme 4 hours ago
    I wanna say that it is indeed a “skill issue” when it comes to debugging and getting the agent in your editor of choice to move forward. Sometimes it takes an instruction to step back and evaluate the current state; other times it’s about establishing the test cases.

    I think the exhausting part is probably more tied to evaluating the work the agent is doing; understanding its thought process and catching the hang-up can be tedious in the current state of AI reasoning.

  • owentbrown 1 hour ago
    I really appreciate the author for writing this.

    I learned years ago that when I write code after 10 PM, I go backward instead of forward. It was easy to see, because the tests just wouldn't pass, or I'd introduce several bugs that each took 30 minutes to fix.

    I'm learning now that it's no different, working with agents.

  • iainctduncan 50 minutes ago
    Every time I read articles here describing the LLM prompt-engineering workflow, all I can think is, "This sounds like such a fucking awful job".

    I imagine I will greatly reduce my job prospects as a holdout, but honestly, from what I've read I think I'd rather take a hefty pay hit and not go there. It sounds like a mental health disaster and a fast track to serious burnout.

    YMMV, I realize I'm in the minority, this is unproductive ranting, yada yada yada

  • babas03 2 hours ago
    This is exactly what was needed. Seamlessly transitioning from manual inspection in the Elements/Network panels to agent-led investigation is going to save so much 'context-setting' time.
  • otterley 3 hours ago
    One way to help, I think, is to take advantage of prompt libraries. Claude makes this easy via Skills (which can be augmented via Plugins). Since skills themselves are just plain text with some front matter, they're easy to update and improve, and you can reuse them as much as you like.

    There's probably a Codex equivalent, but I don't know what it is.
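
    For reference, a skill file is just markdown with a small YAML front-matter block; something like this (the name, description, and checklist here are invented for illustration):

```markdown
---
name: review-checklist
description: Apply the team's review checklist to a diff before merging.
---

When asked to review a diff:
1. Summarize what the change does in one paragraph.
2. Check naming and factoring against the project's conventions.
3. List any test cases that appear to be missing.
```

    Because it's plain text, it can live in the repo and be reviewed and improved like any other file.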

  • veryfancy 4 hours ago
    In agent mode, IMO, the sweet spot is 2-3 concurrent tasks/sessions. You don’t want to sit waiting for the agent, but you don’t want to context-switch across more than a couple of contexts yourself.
    • SchemaLoad 4 hours ago
      That sounds exhausting - having to prompt and review non-stop without a second to stop and think.
      • colecut 4 hours ago
        There is nothing dictating how long you stop and think for.
        • bombdailer 4 hours ago
          Until that becomes the metric measured in performance reviews.
  • siliconc0w 4 hours ago
    I mostly run 2-3 agents YOLOing, with a "fresh eyes" self-review afterward.
  • somewhereoutth 4 hours ago
    Of course. Any scenario where you are expected to deliver results using non-deterministic tooling is going to be painful and exhausting. Imagine driving a car that might swerve one way or the other of its own accord, with controls that frequently changed how they worked. At the end of any decently sized journey you would be an emotional wreck - perhaps even an actual wreck.
  • quantum_state 3 hours ago
    It seems to me that the LLM is a tool, after all. One needs to learn to use it effectively.
  • j3k3 3 hours ago
    There's nothing more annoying than the feeling of "oh FFS why you doing that?!".

    It's amazing how right and wrong LLMs can be in the output they produce. Personally, the variance is too much for me... I can't stand it when they get the most basic stuff wrong. I much prefer doing things without output from an LLM.

  • dinkumthinkum 4 hours ago
    Does anyone else see this as dystopian? Someone is unironically writing about how exhausted they are, up at night thinking about how they can be a better good-boy at prompting the LLM, and reminding us that we shouldn't cope by blaming the AI or its supposed limitations (context size, etc.). This is not a dig at the author. It just seems crazy that this is an unironic post. It's like we are gleefully running to the "Laughterhouse", each reminding our smiling fellow passengers not to be annoyed at the driver if he isn't getting us there fast enough, without realizing it's really the Slaughterhouse (yes, I am stealing the reference).

    Another way you can read this is as a new cult member chiding himself whenever he has an intrusive thought that Dear Leader may not be perfect after all.

    • foolserrandboy 29 minutes ago
      Yup, and we are wasting our weekends worrying about keeping pace in an imagined red queen's race. Another similar post today:

      https://news.ycombinator.com/item?id=47388646

    • coffeefirst 3 hours ago
      Oh, entirely. But the hype cycle is such that if you find a legitimate criticism or run into the hard limits of human cognition (there are real limits to multitasking), a lot of people blame themselves.

      My pet theory is that we haven't figured out the best way to use these tools, or even seen all the options yet. But that's a bigger topic for another day.

      • ccosky 1 hour ago
        With the trend going toward devs coordinating multiple agents at once, I am very curious to see how cognitive load increases due to the multitasking. We know multitasking reduces productivity and increases the likelihood of mistakes. Cal Newport talked about how important it is to engage in "deep work". We're going in the opposite direction.
    • adi_kurian 3 hours ago
      Not at all.
    • Apocryphon 4 hours ago
      I mean, how often do we feel the same thing about the compiler?
      • layer8 2 hours ago
        What the compiler will do is highly predictable. What an LLM will produce considerably less so. That is the problem.
      • jplusequalt 3 hours ago
        I don't feel this? When my code breaks, I'm more likely to get frustrated with myself.

        The only time I've felt something akin to this with a compiler is when I was learning Rust. But that went away after a week or two.

  • stainlu 2 hours ago
    [flagged]
    • orbital-decay 1 hour ago
      LLM spam, ironically
    • keeda 32 minutes ago
      Actually I find verification pretty lightweight, because I tend to decompose tasks intended for AI to a level where I already know the "shape" of the code in my head, as well as what the test cases should look like. So reviewing the generated code and tests for me is pretty quick because it's almost like reading a book I've already read before, and if something is wrong it jumps out quickly.

      That said I have a different theory for why AI coding can be exhausting: the part where we translate concrete ideas into code, where the flow state usually occurs, is actually somewhat meditative and relaxing. But with that offloaded to AI, we're left mostly alternating between the cognitively intense idea-generation / problem-solving phases, and the quick dopamine hits of seeing things work: https://news.ycombinator.com/item?id=46938038

    • kbmr 2 hours ago
      >Reviewing LLM output requires constant context-switching between "what does this code do" and "is this what I actually wanted."

      Best way I've seen it framed

    • flir 1 hour ago
      I've always preferred brownfield work. In the past I've said "it's easier to be an editor than an author" to describe why. I think you're on to something. For me the new structure's cognitively easier, but it's not faster. Might even be slightly slower.
      • zahlman 1 hour ago
        It takes all kinds, I suppose.
    • j3k3 2 hours ago
      Great post.

      So how are the people claiming huge jumps in workplace productivity dealing with this 'review fatigue'?

      • cgh 1 hour ago
        What we once called “vibe coding” is increasingly known as just coding. There’s no reasonable way to review thousands of lines of code a day and many organizations simply aren’t. No review fatigue there! Just a black box of probable spaghetti.
      • dbalatero 1 hour ago
        I notice myself not reviewing in depth, and I assume many many others are not either.
      • steve_adams_86 1 hour ago
        My intuition is that they aren't really doing it.
      • c0brac0bra 1 hour ago
        Somatic experiencing techniques.
  • rubyrfranklin2 3 hours ago
    [flagged]
  • diven_rastdus 36 minutes ago
    The shift from generation to verification is the key insight here. Writing code is flow-state work — you build a model in your head and express it. Reviewing LLM output is interrupt-driven work — you must context-switch into someone else's model repeatedly. Those cognitive modes don't mix well, which explains why a full day of agentic coding feels more draining than a full day of writing code yourself, even if the output volume is much higher. The fix I've found: write the spec and tests first so verification becomes mechanical rather than judgment-heavy.
    • jawarner 31 minutes ago
      Is this AI? You've copied the best comment and put into AI-speak.