An update on recent Claude Code quality reports

(anthropic.com)

777 points | by mfiguiere 18 hours ago

129 comments

  • 6keZbCECT2uB 18 hours ago
    "On March 26, we shipped a change to clear Claude's older thinking from sessions that had been idle for over an hour, to reduce latency when users resumed those sessions. A bug caused this to keep happening every turn for the rest of the session instead of just once, which made Claude seem forgetful and repetitive. We fixed it on April 10. This affected Sonnet 4.6 and Opus 4.6"

    This makes no sense to me. I often leave sessions idle for hours or days and use the capability to pick it back up with full context and power.

    The default thinking level seems more forgivable, but given the churn in system prompts, I'll need to figure out how to intentionally choose a refresh cycle.

    • bcherny 17 hours ago
      Hey, Boris from the Claude Code team here.

      Normally, when you have a conversation with Claude Code, if your convo has N messages, then (N-1) messages hit prompt cache -- everything but the latest message.

      The challenge is: when you let a session idle for >1 hour and then come back to it and send a prompt, it will be a full cache miss, all N messages. We noticed that this corner case led to outsized token costs for users. In an extreme case, if you had 900k tokens in your context window, then idled for an hour, then sent a message, that would be >900k tokens written to cache all at once, which would eat up a significant % of your rate limits, especially for Pro users.

      We tried a few different approaches to improve this UX:

      1. Educating users on X/social

      2. Adding an in-product tip to recommend running /clear when re-visiting old conversations (we shipped a few iterations of this)

      3. Eliding parts of the context after idle: old tool results, old messages, thinking. Of these, eliding thinking performed the best, and when we shipped it, that's when we unintentionally introduced the bug described in the blog post.

      Hope this is helpful. Happy to answer any questions if you have any.
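      To make the idle-session economics concrete, here's a back-of-envelope sketch of a warm vs. cold resume. All prices and multipliers below are illustrative placeholders, not Anthropic's actual rates; only the shape of the trade-off matters.

```python
# Back-of-envelope: cost of sending one message on top of an existing
# context, warm (prompt cache hit) vs cold (idled >1h, full cache miss).
# All numbers below are illustrative placeholders, not real pricing.
BASE_INPUT_PER_MTOK = 3.00   # hypothetical $ per million input tokens
CACHE_WRITE_MULT = 1.25      # assumed premium for writing to the cache
CACHE_READ_MULT = 0.10       # assumed discount for reading from it

def resume_cost(context_tokens: int, cache_hit: bool) -> float:
    """Dollar cost of one turn on top of `context_tokens` of context."""
    mult = CACHE_READ_MULT if cache_hit else CACHE_WRITE_MULT
    return context_tokens / 1_000_000 * BASE_INPUT_PER_MTOK * mult

warm = resume_cost(900_000, cache_hit=True)   # session still cached
cold = resume_cost(900_000, cache_hit=False)  # the 900k-token corner case
print(f"warm ${warm:.2f} vs cold ${cold:.2f} ({cold / warm:.1f}x)")
```

      Under these made-up numbers, a cold resume costs 12.5x a warm one, which is why the 900k-token corner case could eat a large chunk of a rate limit in one shot.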

      • dbeardsl 17 hours ago
        I appreciate the reply, but I was never under the impression that gaps in conversations would increase costs nor reduce quality. Both are surprising and disappointing.

        I feel like that is a choice best left up to users.

        i.e. "Resuming this conversation with full context will consume X% of your 5-hour usage bucket, but that can be reduced by Y% by dropping old thinking logs"

        • giwook 14 hours ago
          Another way to think about it might be that caching is part of Anthropic's strategy to reduce costs for its users, but they are now trying to be more mindful of their costs (probably partly due to significant recent user growth as well as plans to IPO which demand fiscal prudence).

          Perhaps if we were willing to pay more for our subscriptions, Anthropic would be able to offer longer cache windows. But one hour seems like a reasonable amount of time given the costs involved, and it's a limitation I'm happy to work around (it's not that hard to work around) in exchange for paying just $100 or $200 a month for the industry-leading LLM.

          Full disclosure: I've recently signed up for ChatGPT Pro in addition to my Claude Max sub, so I'm not really biased one way or the other. I just want a quality LLM that's affordable.

          • jimkleiber 11 hours ago
            I might be willing to pay more, maybe a lot more, for a higher subscription than Claude Max 20x, but the only option above that is pay-per-token, and I really don't like products that make me that minutely aware of my usage, especially when there's unpredictability to it. I think there's a reason most telecoms moved away from per-minute, or especially per-MB, charging. Even per-GB plans have mostly become "X GB included", and I'm OK with that on a phone, but much less so on a computer, because of the unpredictability of a software update's size.

            Kinda like when restaurants make me pay for ketchup or a takeaway box: I get annoyed; just fold it into the price.

            • adam_patarino 52 minutes ago
              Token anxiety is real mental overhead.
          • sharts 13 hours ago
            It doesn't make sense to pay more for cache warming. Your session, for the most part, is already persisted. Why would it be reasonable to pay again to continue where you left off at any time in the future?
            • jeremyjh 13 hours ago
              Because it significantly increases actual costs for Anthropic.

              If they ignored this then all users who don’t do this much would have to subsidize the people who do.

              • tikkabhuna 6 hours ago
                I’m coming at this as a complete Claude amateur, but caching for any other service is an optimisation for the company and transparent for the user. I don’t think I’ve ever used a service and thought “oh there’s a cache miss. Gotta be careful”.

                I completely agree that it’s infeasible for them to cache for long periods of time, but they need to surface that information in the tools so that we can make informed decisions.

                • libraryofbabel 4 hours ago
                  That is because LLM KV caching is not like caches you are used to (see my other comments, but it's 10s of GB per request and involves internal LLM state that must live on or be moved onto a GPU and much of the cost is in moving all that data around). It cannot be made transparent for the user because the bandwidth costs are too large a fraction of unit economics for Anthropic to absorb, so they have to be surfaced to the user in pricing and usage limits. The alternative is a situation where users whose clients use the cache efficiently end up dramatically subsidizing users who use it inefficiently, and I don't think that's a good solution at all. I'd much rather this be surfaced to users as it is with all commercial LLM apis.
            • danso 10 hours ago
              Genuine question: is the cost to keep a persistent warmed cache for sessions idling for hours/days not significant when done for hundreds of thousands of users? Wouldn’t it pose a resource constraint on Anthropic at some point?
              • tmountain 3 hours ago
                Related question, is it at all feasible to store cache locally to offload memory costs and then send it over the wire when needed?
                • dev_hugepages 58 minutes ago
                  No, the cache is a few GB for most typical context sizes. It depends on model architecture, but if you take Gemma 4 31B at 256K context length, it takes 11.6GB of cache.

                  note: I picked the values from a blog post and they may be inaccurate, but in pretty much all models the KV cache is very large; it's probably even larger in Claude.
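                  A rough way to see where numbers like this come from: the KV cache scales linearly with layer count, KV-head count, head dimension, and context length. The architecture numbers below are made-up placeholders, not the real Gemma (or Claude) configuration; only the formula is the point.

```python
# Rough KV-cache size: 2 (K and V) * layers * kv_heads * head_dim
# * context_length * bytes per value. Placeholder architecture below,
# not any real model's configuration.
def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                   context_len: int, dtype_bytes: int = 2) -> int:
    return 2 * layers * kv_heads * head_dim * context_len * dtype_bytes

# Hypothetical 32-layer model with 4 KV heads of dim 128 (GQA),
# fp16 values, at a 256k-token context:
size = kv_cache_bytes(32, 4, 128, 256_000)
print(f"{size / 2**30:.1f} GiB")  # ~15.6 GiB
```

                  Grouped-query attention (fewer KV heads) is one of the main levers models use to shrink this, which is why estimates vary so much between architectures.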

                  • bavell 24 minutes ago
                    Yesterday I was playing around with Gemma4 26B A4B with a 3 bit quant and sizing it for my 16GB 9070XT:

                      Total VRAM: 16GB
                      Model: ~12GB
                      128k context size: ~3.9GB
                    
                    At least I'm pretty sure I landed on 128k... might have been 64k. Regardless, you can see the massive weight (ha) of the meager context size (at least compared to frontier models).
            • cadamsdotcom 11 hours ago
              Sure, it wouldn’t make sense if they only had one customer to serve :)
            • uoaei 2 hours ago
              Exactly, even in the throes of today's wacky economic tides, storage is still cheap. Write the cached model state (the state sitting after the N context messages) to disk, and reload it without extra inference on the context tokens themselves. If every customer did this for ~3 conversations, you would still need only a small fraction of a typical datacenter to house the necessary drives. The bottleneck becomes architecture/topology and the speed of your buses, which are problems that have been contended with for decades now, not inference time on GPUs.
        • JumpCrisscross 17 hours ago
          > I was never under the impression that gaps in conversations would increase costs

          The UI could indicate this by showing a timer before context is dumped.

          • vyr 13 hours ago
            a countdown clock telling you that you should talk to the model again before your streak expires? that's the kind of UX i'd expect from an F2P mobile game or an abandoned shopping cart nag notification
            • abustamam 12 hours ago
              Well sure if you put it that way, they're similar. But it's either you don't see it and you get surprised by increased quota usage, or you do see it and you know what it means. Bonus points if they let you turn it off.

              No need to gamify it. It's just UI.

              • thinkmassive 10 hours ago
                Plenty of room for a middle ground, like a static timestamp per session that shows expiration time, without the distraction of a constantly changing UI element.
            • matheusmoreira 8 hours ago
              Why not an automated ping message that's cheap for the model to respond to?
              • cortesoft 8 hours ago
                Because the cache is held on anthropics side, and they aren't going to hold your context in cache indefinitely.
          • karsinkk 16 hours ago
            Yes!! A UI widget that shows how far along on the prompt cache eviction timelines we are would be great.
          • vanviegen 3 hours ago
            That sounds stressful.

            But perhaps Claude Code could detect that you're actively working on this stuff (like typing a prompt or accessing the files modified by the session), and send keep-cache-alive pings based on that? Presumably these pings could be pretty cheap, as the kv-cache wouldn't need to be loaded back into VRAM for this. If that would work reliably, cache expiry timeouts could be more aggressive (5 min instead of an hour).
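            A sketch of what that detect-and-ping loop might look like. Everything here is hypothetical: the TTL, the ping margin, the idle cutoff, and the assumption that a client can cheaply refresh the server-side cache are all made up, not real Claude Code behavior.

```python
# Hypothetical client-side keep-alive logic: while the user looks
# active, refresh the server-side cache shortly before its assumed TTL
# expires. None of these constants reflect real Claude Code behavior.
CACHE_TTL = 300     # assumed 5-minute server-side cache expiry (seconds)
PING_MARGIN = 60    # refresh this many seconds before expiry
IDLE_CUTOFF = 120   # user considered inactive after this long (seconds)

def should_ping(now: float, last_activity: float, last_refresh: float) -> bool:
    """True if the client should send a keep-cache-alive ping now."""
    user_active = (now - last_activity) < IDLE_CUTOFF
    expiring_soon = (now - last_refresh) > (CACHE_TTL - PING_MARGIN)
    return user_active and expiring_soon

# Typing 30s ago, cache refreshed 250s ago -> ping.
# Idle for 10 minutes -> let the cache lapse.
print(should_ping(1000.0, 970.0, 750.0), should_ping(1000.0, 400.0, 750.0))
```

            The design trade-off is the one described above: the cheaper the ping, the more aggressive the server-side expiry could be without hurting active users.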

          • jimkleiber 11 hours ago
            I tried to hack the statusline to show this, but when I tried, I don't think the API gave that info. I'd love it if they let us have more variables to access in the statusline.
        • kiratp 11 hours ago
          By caching they mean “cached in GPU memory”. That’s a very very scarce resource.

          Caching to RAM and disk is a thing but it’s hard to keep performance up with that and it’s early days of that tech being deployed anywhere.

          Disclosure: work on AI at Microsoft. Above is just common industry info (see work happening in vLLM for example)

          • libraryofbabel 6 hours ago
            Nit: It doesn’t have to live in GPU memory. The system will use multiple levels of caching and will evict older cached data to CPU RAM or to disk if a request hasn’t recently come in that used that prefix. The problem is, the KV caches are huge (many GB) and so moving them back onto the GPU is expensive: GPU memory bandwidth is the main resource constraint in inference. It’s also slow.

            The larger point stands: the cache is expensive. It still saves you money but Anthropic must charge for it.

            Edit: there are a lot of comments here where people don't understand LLM prefix caching, aka the KV cache. That's understandable: it is a complex topic, and the usual intuitions about caching you might have from e.g. web development don't apply: a single cache blob for a single request is in the 10s of GB at least for a big model, and a lot of the key details turn on the problems of moving it in and out of GPU memory. The contents of the cache are internal model state; it's not your context or prompt or anything like that. Furthermore, this isn't some Anthropic-specific thing; all LLM inference with a stable context prefix will use it, because it makes inference faster and cheaper. If you want to read up on this subject, be careful, as a lot of blogs will tell you about the KV cache as it is used within inference for a single request (a critical concept in how LLMs work) but will gloss over how the KV cache is persisted between requests, which is what we're all talking about here. I would recommend Philip Kiely's new book Inference Engineering for a detailed discussion of that stuff, including the multiple caching levels.

        • bede 2 hours ago
          I too would far rather bear a token cost than have my sessions rot silently beneath my feet. I usually have ~5 running CC sessions, some of which I may leave for a week or two of inactivity at a time.
        • computably 17 hours ago
          > I was never under the impression that gaps in conversations would increase costs nor reduce quality. Both are surprising and disappointing.

          You didn't do your due diligence on an expensive API. A naïve implementation of an LLM chat is going to have O(N^2) costs from prompting with the entire context every time. Caching is needed to bring that down to O(N), but the cache itself takes resources, so evictions have to happen eventually.
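          A toy model of that scaling, with made-up numbers (100 turns, 1,000 new tokens of context per turn):

```python
# Total prompt tokens processed over an N-turn conversation. Without a
# prefix cache the full context is reprocessed every turn (~N^2/2 work);
# with one, only the new suffix is processed each turn (~N work).
def total_tokens(turns: int, per_turn: int, cached: bool) -> int:
    total, context = 0, 0
    for _ in range(turns):
        context += per_turn
        total += per_turn if cached else context
    return total

print(total_tokens(100, 1_000, cached=False))  # 5,050,000 (quadratic)
print(total_tokens(100, 1_000, cached=True))   # 100,000 (linear)
```

          Caching collapses the first number toward the second. Strictly, attention over the prefix still grows with context length, so the linearity is an approximation, but the expensive reprocessing of the whole prefix every turn is what goes away.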

          • doesnt_know 16 hours ago
            How do you do "due diligence" on an API that frequently makes undocumented changes and only publishes acknowledgement of change after users complain?

            You're also talking about internal technical implementations of a chat bot. 99.99% of users won't even understand the words that are being used.

            • dlivingston 9 hours ago
              What is being discussed is KV caching [0], which is used across every LLM model to reduce inference compute from O(n^2) to O(n). This is not specific to Claude nor Anthropic.

              [0]: https://huggingface.co/blog/not-lain/kv-caching

            • computably 9 hours ago
              > How do you do "due diligence" on an API that frequently makes undocumented changes and only publishes acknowledgement of change after users complain?

              1. Compute scaling with the length of the sequence is applicable to transformer models in general, i.e. every frontier LLM since ChatGPT's initial release.

              2. As undocumented changes happen frequently, users should be even more incentivized to at least try to have a basic understanding of the product's cost structure.

              > You're also talking about internal technical implementations of a chat bot. 99.99% of users won't even understand the words that are being used.

              I think "internal technical implementation" is a stretch. Users don't need to know what a "transformer" is to understand the trade-off. It's not trivial but it's not something incomprehensible to laypersons.

            • tempest_ 11 hours ago
              I use CC, and I understand what caching means.

              I have no idea how that works with a LLM implementation nor do I actually know what they are caching in this context.

              • libraryofbabel 4 hours ago
                They are caching internal LLM state, which is in the 10s of GB for each session. It's called a KV cache (because the internal state that is cached are the K and V matrices) and it is fundamental to how LLM inference works; it's not some Anthropic-specific design decision. See my other comment for more detail and a reference.
              • hakanderyal 8 hours ago
                CC can explain it clearly, which is how I learned about how the inference stack works.
            • fragmede 6 hours ago
              > 99.99% of users won't even understand the words that are being used.

              That's a bad estimate. Claude Code is explicitly a developer-shaped tool; we're not talking generic ChatGPT here. So my guess is that probably closer to 75% of those users understand what caching is, with maybe 30% able to explain what prompt caching actually is. Of course, the users who don't understand have access to Claude and can have it explain caching to them if they're interested.

          • solarkraft 16 hours ago
            I somewhat disagree that this is due diligence. Claude Code abstracts the API, so it should abstract this behavior as well, or educate the user about it.
            • mpyne 14 hours ago
              > Claude Code abstracts the API, so it should abstract this behavior as well, or educate the user about it.

              Does mmap(2) educate the developer on how disk I/O works?

              At some point you have to know something about the technology you're using, or accept that you're a consumer of the ever-shifting general best practice, shifting with it as the best practice shifts.

              • websap 11 hours ago
                Does using print() in Python mean I need to understand the kernel? This is an absurd thought.
                • Nevermark 5 hours ago
                  That might be an absurd comparison, but we can fix that.

                  If you were being charged per character, or running down character limits, and printing on printers that were shared and had economic costs for stalled and started print runs, then:

                  You wouldn’t “need” to understand. The prints would complete regardless. But you might want to. Personal preference.

                  Which is true of this issue too.

                  • Barbing 5 hours ago
                    >If you were being charged per character, or running down character limits, and printing on printers that were shared and had economic costs for stalled and started print runs,

                    and the system was being run by some of the planet’s brightest people whose famous creation is well known to disseminate complex information succinctly,

                    >then:

                    You would expect to be led to understand, like… a 1997 Prius.

                    “This feature showed the vehicle operation regarding the interplay between gasoline engine, battery pack, and electric motors and could also show a bar-graph of fuel economy results.” https://en.wikipedia.org/wiki/Toyota_Prius_(XW10)

              • zem 13 hours ago
                mmap(2) and all its underlying machinery are open source and well documented besides.
                • mpyne 13 hours ago
                  There are open-source and even open-weight models that operate in exactly this way (as it's based off of years of public research), and even if there weren't the way that LLMs generate responses to inputs is superbly documented.

                  Seems like every month someone writes up a brilliant article on how to build an LLM from scratch or similar that hits the HN page, usually with fancy animated blocks and everything.

                  It's not at all hard to find documentation on this topic. It could be made more prominent in the UI, but that's true of lots of things, and hammering on "AI 101" topics would clutter the UI at the expense of actual decision points the user may want to act on, ones you can't assume the user already knows about in the way you (should) be able to assume they know how LLMs eat up tokens in the first place.

            • computably 9 hours ago
              I would say this is abstracting the behavior.
          • someguyiguess 16 hours ago
            Yes. It’s perfectly reasonable to expect the user to know the intricacies of the caching strategy of their llm. Totally reasonable expectation.
            • jghn 13 hours ago
              To some extent I'd say it is indeed reasonable. I had observed the effect for a while: if I walked away from a session I noticed that my next prompt would chew up a bunch of context. And that led me to do some digging, at which point I discovered their prompt caching.

              So while I'd agree with your sarcasm that expecting users to be experts in the system is a big ask, where I disagree with you is that I think users should be curious and actively attempt to understand how it works around them. Given that the tooling changes often, though, this is an endless job.

              • abustamam 12 hours ago
                > users should be curious and actively attempting to understand how it works

                Have you ever talked with users?

                > this is an endless job

                Indeed. If we spend all our time learning what changed with all our tooling when it changes without proper documentation then we spend all our working lives keeping up instead of doing our actual jobs.

                • Octoth0rpe 11 hours ago
                  There are general users of the average SaaS, and there are claude code users. There's no doubt in my mind that our expectations should be somewhat higher for CC users re: memory. I'm personally not completely convinced that cache eviction should be part of their thought process while using CC, but it's not _that_ much of a stretch.
                  • abustamam 8 hours ago
                    Personally, I've never thought about cache eviction as it pertains to CC; it's just not something I ever needed to think about. Maybe I'm not a power user, but I use the product the way I want to and it just works.
                  • troupo 7 hours ago
                    Anthropic literally advertises long sessions, 1M context, high reasoning etc.

                    And then their vibe-coders tell us that we are to blame for using the product exactly as advertised: https://x.com/lydiahallie/status/2039800718371307603 while silently changing how the product works.

                    Please stop defending hapless innocent corporations.

            • coldtea 15 hours ago
              It's not like they have a poweful all-knowing oracle that can explain it to them at their dispos... oh, wait!
              • esafak 14 hours ago
                They have to know that this could bite them and to ask the question first.
                • nixpulvis 14 hours ago
                  I do think having some insight into the current state of the cache and a realistic estimate for prompt token use is something we should demand.
                  • switchbak 10 hours ago
                    If there was an affordance on the TUI that made this visible and encouraged users to learn more - that would go a long way.
          • margalabargala 15 hours ago
            Okay, sure. There's a dollar/intelligence tradeoff. Let me decide to make it, don't silently make Claude dumber because I forgot about a terminal tab for an hour. Just because a project isn't urgent doesn't mean it's not important. If I thought it didn't need intelligence I would use Sonnet or Haiku.
          • exac 14 hours ago
            It is more useful to read posts and threads like this exact thread IMO. We can't know everything, and the currently addressed market for Claude Code is far from people who would even think about caching to begin with.
          • kang 15 hours ago
            It seems you haven't done the due diligence on which part of the API is expensive: constructing a prompt shouldn't cost the same as an LLM pass.
            • coldtea 15 hours ago
              It seems you haven't done the due diligence on what the parent meant :)

              It's not about "constructing a prompt" in the sense of building the prompt string. That of course wouldn't be costly.

              It is about reusing llm inference state already in GPU memory (for the older part of the prompt that remains the same) instead of rerunning the prompt and rebuilding those attention tensors from scratch.

              • kang 14 hours ago
                You not only skipped the diligence but confused everyone by repeating what I said :(

                That is what caching is doing: the LLM inference state is being reused. (Attention vectors are an internal artifact at this level of abstraction; effectively, at this level of abstraction, it's the prompt.)

                The part of the prompt that has already been inferred no longer needs to be part of the input; it is replaced by the inference subset. And none of this is tokens.

            • computably 10 hours ago
              I said "prompting with the entire context every time"; I think it should be clear even to laypersons that the "prompting" cost refers to what the model provider charges you when you send them a prompt.
          • kovek 15 hours ago
            What if the cache was backed up to cold storage? Instead of having to recompute everything.
            • vanviegen 3 hours ago
              They probably already do that. But these caches can get pretty big (10s of GBs per session), so that adds up fast, even for cold storage.
          • bontaq 14 hours ago
            How's that O(N^2)? How's it O(N) with caching? Does a 3 turn conversation cost 3 times as much with no caching, or 9 times as much?
            • jannyfer 13 hours ago
              I’m not sure that it’s O(N) with caching but this illustrates the N^2 part:

              https://blog.exe.dev/expensively-quadratic

              • bontaq 8 hours ago
                If there was an exponential cost, I would expect to see some sort of pricing based on that. I would also expect to see it taking exponentially longer to process a prompt. I don't believe LLMs work like that. The "scary quadratic" referenced in what you linked seems to be pointing out that cache reads increase as your conversation continues?

                If I'm running a database keeping track of a conversation, and each time it writes the entire history of the conversation instead of appending a message, are we calling that O(N^2) now?

                • bavell 1 minute ago
                  > I would also expect to see it taking exponentially longer to process a prompt. I don't believe LLMs work like that.

                  Try this out using a local LLM. You'll see that as the conversation grows, your prompts take longer to execute. It's not exponential but it's significant. This is in fact how all autoregressive LLMs work.

                • atq2119 7 hours ago
                  Yes, that is indeed O(N^2). Which, by the way, is not exponential.

                  Also by the way, caching does not make LLM inference linear. It's still quadratic, but the constant in front of the quadratic term becomes a lot smaller.

                  • computably 3 hours ago
                    > Also by the way, caching does not make LLM inference linear. It's still quadratic, but the constant in front of the quadratic term becomes a lot smaller.

                    Touché. Still, to a reasonable approximation, caching makes the dominant term linear, or equivalently, it linearly scales the expensive bits.

                • _flux 4 hours ago
                  What we would call O(n^2) in your message-history analogy would be the case where you have an empty database and need to populate it with a certain message history. The individual operations would take 1, 2, 3, ..., n steps, so about (1/2)*n^2 in total: O(n^2).

                  This is the operation that is basically done for each message in an LLM chat in the logical level: the complete context/history is sent in to be processed. If you wish to process only the additions, you must preserve the processed state on server-side (in KV cache). KV caches can be very large, e.g. tens of gigabytes.
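                  The arithmetic in miniature (n = 1,000 messages):

```python
# Rebuilding an n-message history from scratch processes 1 + 2 + ... + n
# messages in total, i.e. n*(n+1)/2, which is O(n^2). Keeping the
# processed state server-side (the KV cache) means each message is
# handled once, which is O(n).
n = 1_000
from_scratch = sum(range(1, n + 1))  # 500500 total message-processings
with_cache = n                       # 1000: each message processed once
print(from_scratch, with_cache)
```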

          • raron 16 hours ago
            How big is this cached data? Wouldn't it be possible to download it after idling a few minutes "to suspend the session", then upload and restore it when the user starts their next interaction?
            • throwdbaaway 14 hours ago
              Should be about 10~20 GiB per session. Save/restore is exactly what DeepSeek does using its 3FS distributed filesystem: https://github.com/deepseek-ai/3fs#3-kvcache

              With this much cheaper setup backed by disks, they can offer much better caching experience:

              > Cache construction takes seconds. Once the cache is no longer in use, it will be automatically cleared, usually within a few hours to a few days.

            • cortesoft 8 hours ago
              What they mean when they say 'cached' is that it is loaded into the GPU memory on anthropic servers.

              You already have the data on your own machine, and that 'upload and restore' process is exactly what is happening when you restart an idle session. The issue is that it takes time, and it counts as token usage because you have to send the data for the GPU to load, and that data is the 'tokens'.

              • vanviegen 3 hours ago
                Wrong on both counts. The kv-cache is likely to be offloaded to RAM or disk. What you have locally is just the log of messages. The kv-cache is the internal LLM state after having processed these messages, and it is a lot bigger.
            • nl 9 hours ago
              > upload and restore it when the user starts their next interaction

              The data is the conversation (along with the thinking tokens).

              There is no download - you already have it.

              The issue is that it gets expunged from the (very expensive, very limited) GPU cache and to reload the cache you have to reprocess the whole conversation.

              That is doable, but as Boris notes it costs lots of tokens.

              • vanviegen 3 hours ago
                You're quite confidently wrong! :-)

                The kv-cache is the internal LLM state after having processed the tokens. It's big, and you do not have it locally.

            • cyanydeez 14 hours ago
              I often see a local model (QWEN3.5-Coder-Next) grow to about 5 GB of cache over the course of a session using llamacpp-server. I'd bet these trillion-parameter models are even worse. Even if you wanted to download it, offload it, or have that offered as a service, then to start back up again you'd _still_ be paying the token cost, because all of that context _is_ the tokens you've just produced.

              The cache is what makes your journey from a 1k prompt to a 1-million-token solution speedy in one 'vibe' session. Loading it all again will cost the entire journey.

          • miroljub 14 hours ago
            This sounds like a religious cult priest blaming the common people for not understanding the cult leader's wish, which he never clearly stated.
            • computably 3 hours ago
              A strange view. The trade-off has nothing to do with a specific ideology or notable selfishness. It is an intrinsic limitation of the algorithms, which anybody could reasonably learn about.

              Sure, the exact choice on the trade-off, changing that choice, and having a pretty product-breaking bug as a result, are much more opaque. But I was responding to somebody who was surprised there's any trade-off at all. Computers don't give you infinite resources, whether or not they're "servers," "in the cloud," or "AI."

              • miroljub 1 hour ago
                He was surprised because it was not clearly communicated. There's a lot of theory behind a product that you could (or couldn't) understand in depth, but in the end, something like price shouldn't require understanding the theoretical and practical behavior of the actual application.
        • winternewt 2 hours ago
          Instead of just dropping all the context, the system could also run a compaction (summarizing the entire convo) before dropping it. Better to continue with a summary than to lose everything.
          • Folcon 1 hour ago
            There are problems with this approach as well, I've found.

            I'm really beginning to feel the lack of control when it comes to context, if I'm being honest.

        • bcherny 6 hours ago
          Yes! This is what we’re trying next.
        • cyanydeez 15 hours ago
          It'd probably be helpful for power users and transparency to actually show how the cache is being used. If you run local models with llamacpp-server, you can watch the cache slots fill up with every turn; when subagents spawn, you see another process id spin up and take up a cache slot; and the model starts slowing down as the context grows (on an AMD 395+, around 80-90k tokens) because the cache loads get bigger.

          So yeah, it doesn't take much to surface to the user that the speed/value of their session is ephemeral, because keeping all that cache active is computationally expensive.

          You're still just running text through an extremely complex process and adding to that text; to avoid re-calculating the entire chain, you need the cache.
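          A toy sketch of that re-calculation avoidance (hashing stands in for real KV state; `prefill` just counts the tokens that would need reprocessing):

```python
import hashlib

# Toy prefix cache: "prefilling" a conversation only pays for the
# suffix that isn't already cached. This is a conceptual sketch,
# not how any real inference server stores its KV state.
class PrefixCache:
    def __init__(self):
        self.store = {}  # prefix hash -> tokens already processed

    def key(self, tokens):
        return hashlib.sha256(" ".join(tokens).encode()).hexdigest()

    def prefill(self, tokens):
        # Find the longest cached prefix; pay only for the remainder.
        for end in range(len(tokens), 0, -1):
            if self.key(tokens[:end]) in self.store:
                cost = len(tokens) - end
                break
        else:
            cost = len(tokens)  # cold: everything must be processed
        self.store[self.key(tokens)] = len(tokens)
        return cost

cache = PrefixCache()
convo = ["sys", "u1", "a1"]
print(cache.prefill(convo))          # 3 (cold miss)
print(cache.prefill(convo + ["u2"])) # 1 (only the new message)
```

          Evict the stored prefix (the 1-hour idle case) and the next `prefill` is back to paying for the whole conversation.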

        • nixpulvis 14 hours ago
          How else would you implement it?
      • jwr 5 hours ago
        These controversies erupt regularly, and I hope that you will see a common thing with most of them: you make a decision for your users without informing them.

        Please fight this hubris. Your users matter. Many of us use your tools for everyday work and do not appreciate having the rug pulled from under them on a regular basis, much less so in an underhanded and undisclosed way.

        I don't mind the bugs, these will happen. What I do not appreciate is secretly changing things that are likely to decrease performance.

        • Kiro 4 hours ago
          A company that needs to anchor every single thing with the users will create a stale product.
          • jwr 2 hours ago
            That is not what I wrote. The phrases "without informing them", "in an underhanded and undisclosed way" and "secretly changing things" were important. I'm all for product evolution, but users should be informed when the product is changed, especially when the change can be for the worse (like dumbing down the model).
          • salawat 3 hours ago
            I've spent my entire working career dealing with companies that do the opposite. The product still goes stale. Find a better excuse.

            You're acquiring users as a recurring revenue source. Consider stability and transparency of implementation details cost of doing business, or hemorrhage users as a result.

        • tomaskafka 5 hours ago
          While I hate all the gaslighting Anthropic seems to do recently (and the fact that their harness broke the code quality, while they forbid use of third party harnesses), making decisions for users is what UX is.

          See also the difference between eg. MacOS (with large M, the older good versions) and waiting for "Year of linux on desktop".

          I don't think the issue is making decisions for users, but trying to switch off the soup tap in the all-you-can-eat soup bar. Or, wrong business model setting wrong incentives to both sides.

      • btown 17 hours ago
        Is there a way to say: I am happy to pay a premium (in tokens or extra usage) to make sure that my resumed 1h+ session has all the old thinking?

        I understand you wouldn't want this to be the default, particularly for people who have one giant running session for many topics - and I can only imagine the load involved in full cache misses at scale. But there are other use cases where this thinking is critical - for instance, a session for a large refactor or a devops/operations use case consolidating numerous issue reports and external findings over time, where the periodic thinking was actually critical to how the session evolved.

        For example, if N-4 was a massive dump of some relevant, some irrelevant material (say, investigating for patterns in a massive set of data, but prompted to be concise in output), then N-4's thinking might have been critical to N-2 not getting over-fixated on that dump from N-4. I'd consider it mission-critical, and pay a premium, when resuming an N some hours later to avoid pitfalls just as N-2 avoided those pitfalls.

        Could we have an "ultraresume" that, similar to ultrathink, would let a user indicate they want to watch Return of the (Thin)king: Extended Edition?

        • CjHuber 16 hours ago
          I think it’s crazy that they do this, especially without any notice. I would not have renewed my subscription if I knew that they started doing this.

          Especially in the analysis part of my work I don‘t care about the actual text output itself most of the time but try to make the model „understand“ the topic.

          In the first phase, the actual text output is worthless in itself; it just serves as an indicator that the context was processed correctly and that the future actual analysis work can depend on it. And they're just throwing most of the relevant stuff out, without any notice, when I resume my session after a few days?

          This is insane, Claude literally became useless to me and I didn’t even know it until now, wasting a lot of my time building up good session context.

          There would be nothing lost if they said „If you click yes, we will prune your old thinking, making Claude faster and saving you tons of tokens". Most people would probably say yes, so why not ask them? Make it an env variable (an announced one, not a secretly introduced one to opt out of something new!), or at least write it in a changelog if they really don't want to allow people to use it like before. Then there would be a chance to cancel the subscription in time, instead of wasting tons of time on work patterns that no longer work.

          • munk-a 16 hours ago
            Pointing at their terms of service will definitely be the instantly summoned defense (as it would be for most modern companies), but the fact that SaaS can so suddenly shift the quality of product being delivered for a subscription, without clear notification or explicit re-enrollment, is definitely a legal oversight right now, and Italy actually did recently clamp down on Netflix for doing this[1]. It's hard to define what user expectations of a continuous product are and how companies may have violated them, and for a long time social constructs kept this pretty much in check. As obviously inactive and forgotten subscriptions have become a more significant revenue source for services, though, that agreement has eroded, and the legal system has yet to catch up.

            1. Specifically, this suit was about price increases without clear consideration for both parties - but the same justifications apply to service restrictions without corresponding price decreases.

            https://fortune.com/2026/04/20/italian-court-netflix-refunds...

          • kiratp 11 hours ago
            OpenAI does this for all API calls

            > Our systems will smartly ignore any reasoning items that aren’t relevant to your functions, and only retain those in context that are relevant. You can pass reasoning items from previous responses either using the previous_response_id parameter, or by manually passing in all the output items from a past response into the input of a new one.

            https://developers.openai.com/api/docs/guides/reasoning

            Disclosure - work on AI@msft

          • jetbalsa 16 hours ago
            So, to defend a little: it's a cache, it has to go somewhere. It's a saved state of the model's inner workings at the time of the last message, so if it expires, the whole thing has to be processed again. Most people don't understand that without that cache, the ENTIRE history of the conversation is processed again and again with every message. That conversation might have hit several gigs' worth of cached state - are you expecting them to keep that around for /all/ of the conversations you have had with it in separate sessions?
            • 3836293648 16 hours ago
              No? It's not because it's a cache, it's because they're scared of letting you see the thinking trace. If you got the trace you could just send it back in full when it got evicted from the cache. This is how open weight models work.
              • mpyne 14 hours ago
                The trace goes back fine, that's not the issue.

                The issue is that if they send the full trace back, it will have to be processed from the start if the cache expired, and doing that will cause a huge one-time hit against your token limit if the session has grown large.

                So what Boris talked about is stripping things out of the trace that goes back to regenerate the session if the cache expires. Doing this would help avert burning up the token limit, but it is technically a different conversation, so if CC chooses poorly on stripping parts of the context then it would lead to Claude getting all scatter-brained.
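                To make that one-time hit concrete, here's a rough sketch with made-up billing multipliers (real providers price cache reads and writes differently per model, so treat these numbers as purely illustrative):

```python
# Illustrative only: cache reads are typically billed at a small
# fraction of the base input price, cache writes at a premium.
# The exact multipliers vary by provider and model.
BASE = 1.0          # relative cost per uncached input token
CACHE_READ = 0.1    # reading a cached prefix
CACHE_WRITE = 1.25  # writing tokens into the cache

def turn_cost(context_tokens, new_tokens, warm):
    if warm:  # prefix cached: cheap read + write only the new bit
        return context_tokens * CACHE_READ + new_tokens * CACHE_WRITE
    # cold: the entire context is re-processed and re-written to cache
    return (context_tokens + new_tokens) * CACHE_WRITE

print(turn_cost(900_000, 1_000, warm=True))   # 91250.0
print(turn_cost(900_000, 1_000, warm=False))  # 1126250.0
```

                With a 900k-token session, the cold resume is roughly 12x the warm one in this toy model, which is the "huge one-time hit" in question.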

                • charcircuit 5 hours ago
                  >and doing that will cause a huge one-time hit against your token limit if the session has grown large.

                  Anthropic already profited from generating those tokens. They can afford to subsidize reloading context.

              • reactordev 16 hours ago
                They are sending it back to the cache; the part you are missing is that they were charging you for it.
                • eknkc 16 hours ago
                  The blog post says they prune them now not to charge you. That’s the change they implemented.
                  • reactordev 15 hours ago
                    right. they were charging you for it, now they aren't because they are just dropping your conversation history.
              • eknkc 16 hours ago
                I'm not familiar with the Claude API, but OpenAI has an encrypted thinking messages option. You get something you can send back, but it is encrypted. Not available on Anthropic?
            • rsfern 16 hours ago
              It seems like an opportunity for a hierarchical cache. Instead of just nuking all context on eviction, couldn’t there be an L2 cache with a longer eviction time so task switching for an hour doesn’t require a full session replay?
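              A minimal sketch of what an L2 tier could look like (the TTLs and dict-based storage are purely illustrative; a real KV cache is GPU-resident tensor state, not a Python dict):

```python
import time

# Toy two-level cache: L1 (fast, short TTL) backed by L2 (slower,
# longer TTL). After L1 expiry an entry can still be served from
# L2 - a cheaper rehydrate instead of a full session replay.
class TieredCache:
    def __init__(self, l1_ttl=3600, l2_ttl=86400):
        self.entries = {}  # key -> (value, stored_at)
        self.l1_ttl, self.l2_ttl = l1_ttl, l2_ttl

    def put(self, key, value, now=None):
        now = time.time() if now is None else now
        self.entries[key] = (value, now)

    def get(self, key, now=None):
        now = time.time() if now is None else now
        if key not in self.entries:
            return None, "miss"
        value, ts = self.entries[key]
        if now - ts < self.l1_ttl:
            return value, "l1"
        if now - ts < self.l2_ttl:
            return value, "l2"  # slower load, but no full replay
        return None, "miss"

c = TieredCache()
c.put("session", "kv-state", now=0)
print(c.get("session", now=1800))    # ('kv-state', 'l1')
print(c.get("session", now=7200))    # ('kv-state', 'l2')
print(c.get("session", now=100_000)) # (None, 'miss')
```

              The open question is whether shuttling multi-gigabyte KV state between tiers is cheaper than just re-prefilling the text, which depends heavily on storage bandwidth.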
            • CjHuber 15 hours ago
              No, of course it's unrealistic for them to hold the cache indefinitely, and that's not the point. You keep the session data yourself so you can continue even after cache expiry. The point I'm making is that it made me very angry that, without any announcement, they changed the behavior to strip the old thinking even when you have it in your session file. There is absolutely no reason not to ask the user whether they want this.

              And it's part of a larger problem of unannounced changes; it's just like when they introduced adaptive thinking to 4.6 a few weeks ago without notice.

              Also they seem to be completely unaware that some users might only use Claude code because they are used to it not stripping thinking in contrast to codex.

              Anyway I‘m happy that they saw it as a valid refund reason

            • cyanydeez 14 hours ago
              What matters isn't that it's a cache; what matters is that it's cached _in GPU/NPU memory_, taking up space from another user's active session. Keeping that cache in the GPU is a nonstarter for an oversold product. Even putting it into cold storage means they still have to load it back at compute cost, because, again, it takes up space in an oversold product.
          • FireBeyond 11 hours ago
            > There would be nothing lost if they said „If you click yes, we will prune your old thinking making Claude faster and saving you tons of tokens“. Most people would say yes probably so why not ask them

            The irony is that Claude Design does this. I did a big test building a design system, and when I came back to it, it had in the chat window "Do you need all this history for your next block of work? Save 120K tokens and start a new chat. Claude will still be able to use the design system." Or words to that effect.

            • CjHuber 11 hours ago
              This is exactly what also confused me. I had the exact same prompt in Claude Code as well, and the "no" option implies you can also keep the whole history. But clicking keep apparently only ever kept the user and assistant messages, not the actual thinking parts of the conversation.
        • trinsic2 15 hours ago
          Why can't you just build a project document that outlines the prompt you want to run? Or have Claude save your progress in memory so you can pick it up later? That's what I do. It seems abhorrent to expect a running prompt, left idle for long periods of time, to be there just so you can pick up at a moment's whim...
        • elAhmo 16 hours ago
          Don't you have that by just resuming old convo?

          The only issue is that it didn't hit the cache so it was expensive if you resume later.

          • eknkc 16 hours ago
            Not at the moment, apparently. They remove the thinking messages when you continue after 1 hour; that was the whole idea of the change. So the LLM gets all your messages, its responses, etc., but not the thinking parts explaining why it generated those responses. You get a lobotomised session.
            • elAhmo 15 hours ago
              OK didn't know that. I also resume fairly old sessions with 100-200k of context, and I sometimes keep them active for a while (but with large breaks in between).

              Still on Opus 4.6 with no adaptive thinking, so didn't really notice anything worse in the past weeks, but who knows.

          • tbrockman 16 hours ago
            Or generate tiny filler messages every hour until you come back to it.
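            Tongue-in-cheek, but the schedule is trivial to compute. A sketch, assuming a 1-hour TTL and a safety margin (each ping would still bill a cache read of the full prefix, so this trades money for cache warmth):

```python
# Given a cache TTL, compute when keep-alive pings would need to
# fire to bridge an idle gap of `idle_seconds`. Purely illustrative;
# the 1-hour TTL and 5-minute margin are assumptions.
def ping_times(idle_seconds, ttl=3600, margin=300):
    interval = ttl - margin
    return list(range(interval, idle_seconds, interval))

print(ping_times(10_000))  # [3300, 6600, 9900]
```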
      • Terretta 14 hours ago
        This violates the principle of least surprise, with nothing to indicate Claude got lobotomized while it napped when so many use prior sessions as "primed context" (even if people don't know that's what they were doing or know why it works).

        The purpose of spending 10 to 50 prompts getting Claude to fill the context for you is it effectively "fine tunes" that session into a place your work product or questions are handled well.

        // If this notion of sufficient context as fine-tuning seems surprising, the research is out there.

        Approaches tried need to deal with both of these:

        1) Silent context degradation breaks the Pro-tool contract. I pay compute so I don't pay in my time; if you want to surface the cost, surface it (UI + price tag or choice), don't silently erode quality of outcomes.

        2) The workaround (external context files re-primed on return) eats the exact same cache miss, so the "savings" are illusory — you just pushed the cost onto the user's time. If my own time's cheap enough that's the right trade off, I shouldn't be using your machine.

      • uxcolumbo 15 hours ago
        I don't envy you Boris. Getting flak from all sorts of places can't be easy. But thanks for keeping a direct line with us.

        I wish Anthropic's leadership would understand that the dev community is such a vital community that they should appreciate it a bit more (i.e. it's not nice to send lawyers after various devs without asking nicely first, ban accounts without notice, etc.). I appreciate it's not easy to scale.

        OpenAI seems to be doing a much better job when it comes to developer relations, but I would like to see you guys 'win' since Anthropic shows more integrity and has clear ethical red lines they are not willing to cross unlike OpenAI's leadership.

      • kuboble 15 hours ago
        As some others have mentioned.

        I think the best option would be to tell a user who is about to resurrect a conversation that has been evicted from cache that the session is no longer cached, and that they will face the full cost of replaying the session, not just the incremental question and answer.

        (I understand that under the hood LLMs are O(n^2) by default, but it's very counterintuitive - and given how popular CC is becoming outside of nerd circles, a smaller and smaller fraction of users is probably aware of it.)

        I would like to decide on it case by case. Sometimes the session has some really deep insight I want to preserve, sometimes it's discardable.
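        To illustrate the quadratic blow-up (a toy token counter, not real billing):

```python
# Without a prefix cache, every turn re-prefills the whole history,
# so total tokens processed over a session grows quadratically
# in the number of turns.
def total_prefill(turn_lengths, cached):
    total, context = 0, 0
    for n in turn_lengths:
        total += n if cached else context + n
        context += n
    return total

turns = [1000] * 50  # fifty 1k-token turns
print(total_prefill(turns, cached=True))   # 50000
print(total_prefill(turns, cached=False))  # 1275000
```

        Same session, 25x the processed tokens once the cache is gone, which is exactly the bill a resumed-after-eviction session pays in one go.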

        • a_t48 15 hours ago
          I got exactly this warning message yesterday, saying that it could use up a significant amount of my token budget if I resumed the conversation without compaction.
          • jhogendorn 9 hours ago
            Compaction won't save you; in fact, I've found that calling compaction eats about 3-5x the cold-cache cost in usage.
            • _flux 4 hours ago
              Wouldn't it help if the system ran compaction before the eviction happens? The problem is that Anthropic probably doesn't want to automatically compact every session that has been left idle for an hour (and is very likely abandoned already); that would probably introduce even more additional cost.

              Maybe the UI could do that for sessions that the user hasn't left yet, when the deadline comes near.

          • onemoresoop 15 hours ago
            Im glad they chose to do that as opposed to hidden behavior changes that only confuse users more.
          • fhub 15 hours ago
            Really good to know. That should have made it into their update letter in point (2). Empowering the user to choose is the right call.
          • doubleunplussed 9 hours ago
            I saw that too, but that's actually even worse for the cache: the entire conversation is then a cache miss and needs to be loaded in order to do the compaction. Then the resulting compacted conversation is also a cache miss.

            You ideally want to compact before the conversation is evicted from cache. If you knew you were going to use the conversation again later after cache expiry, you might do this deliberately before leaving a session.

            Anthropic could do this automatically before cache expiry, though it would be hard to get right - they'd be wasting a lot of compute compacting conversations that were never going to be resumed anyway.

        • skeledrew 14 hours ago
          > I think the best option would be tell a user who is about to resurrect a conversation that has been evicted from cache that the session is not cached anymore and the user will have to face a full cost of replaying a session

          This feature has been live for a few days/weeks now, and with that knowledge I try to remember to at least get a progress report written when, for example, I'm close to the quota limit and the context is reasonably large. Or I continue with a /compact, but that tends to lead to having to repeat some things that didn't get included in the summary. Context management is just hard.

          • Terretta 14 hours ago
            Right, and reloading that context is the same cost as refilling the cache, so really, they're charging the same, and making it hard.
      • isaacdl 17 hours ago
        Thanks for giving more information. Just as a comment on (1), a lot of people don't use X/social. That's never going to be a sustainable path to "improve this UX" since it's...not part of the UX of the product.

        It's a little concerning that it's number 1 in your list.

      • andrewingram 54 minutes ago
        This points to a fairly fundamental mismatch between the realities of running an LLM and the expectations of users. As a user, I _expect_ the cost of resuming X hours/days later to be no different to resuming seconds or minutes later. The fact that there is a difference, means it's now being compensated for in fairly awkward ways -- none of the solutions seem good, just varying degrees of bad.

        Is there a more fundamental issue of trying to tie something with such nuanced costs to an interaction model which has decades of prior expectation of every message essentially being free?

        • bavell 35 minutes ago
          > As a user, I _expect_ the cost of resuming X hours/days later to be no different to resuming seconds or minutes later.

          As an informed user who understands his tools, I of course expect large uncached conversations to massively eat into my token budget, since that's how all of the big LLM providers work. I also understand these providers are businesses trying to make money and they aren't going to hold every conversation in their caches indefinitely.

      • ceuk 17 hours ago
        Is having massive sessions which sit idle for hours (or days) at a time considered unusual? That's a really, really common scenario for me.

        Two questions if you see this:

        1) if this isn't best practice, what is the best way to preserve highly specific contexts?

        2) does this issue just affect idle sessions or would the cache miss also apply to /resume ?

        • hedgehog 16 hours ago
          Have the tool maintain a doc, and use either the built-in memory or (I prefer it this way) your own. I've been pretty critical of some other aspects of how Claude Code works but on this one I think they're doing roughly the right thing given how the underlying completion machinery works.

          Edit: If you message me I can share some of my toolchain, it's probably similar to what a lot of other people here use but I've done some polishing recently.

        • jetbalsa 16 hours ago
          The cache is stored on Anthropic's servers, since it's a saved state of the model's computation at the time of processing; it's several gigs in size. Every SINGLE TIME you send a message and it's a cache miss, the entire conversation has to be reprocessed, eating up tons of tokens in the process.
          • cyanydeez 14 hours ago
            Clarification, though: the cache that matters to the GPU/NPU is loaded directly into the memory of the cards; it's not saved anywhere else. They could technically create cold storage of the KV vectors and load those back in, but given how ephemeral these vibe-coding sessions are, it's unlikely there's much value in saving those vectors.

            So then it comes down to what you're talking about: reprocessing the entire text chain, which is a different kind of cache, and generating the equivalent tokens is what gets billed.

            But once you realize that the efficiency of the product in extended sessions depends on cache held in the immediate GPU hardware, it's obvious that an oversold product can't just idle the GPU when sessions idle.

      • fidrelity 17 hours ago
        Just wanted to say I appreciate your responses here. Engaging so directly with a highly critical audience is a minefield that you're navigating well.

        Thank you.

        • qsort 17 hours ago
          I agree with this.

          I'm writing this message even though I don't have much to add because it's often the case on HN that criticism is vocal and appreciation is silent and I'd like to balance out the sentiment.

          Anthropic has fumbled on many fronts lately but engaging honestly like this is the right thing to do. I trust you'll get back on track.

        • troupo 17 hours ago
          > Engaging so directly with a highly critical audience is a minefield that you're navigating well.

          They spent two months literally gaslighting this "critical audience" that this could not be happening and literally blaming users for using their vibe-coded slop exactly as advertised.

          All the while all the official channels refused to acknowledge any problems.

          Now the dissatisfaction and subscription cancellations have reached a point where they finally had to do something.

        • shimman 17 hours ago
          Very easy to do when you stand to make tens of millions when your employer IPOs. Let's not maybe give too much praise and employ some critical thinking here.
          • simplify 17 hours ago
            What is the purpose of this mindset? Should we encourage typical corporate coldness instead?
            • sdevonoes 16 hours ago
              We should encourage minimal dependency on multibillion-dollar tech companies like Anthropic. They and similar companies are just milking us… but since their toys are so shiny, we don't care.
              • simplify 12 hours ago
                Sure, but that seems out of scope of the original comment.
          • hgoel 17 hours ago
            Is "employ some critical thinking" supposed to involve being an annoying uptight cynic?
      • saadn92 17 hours ago
        I leave sessions idle for hours constantly - that's my primary workflow. If resuming a 900k context session eats my rate limit, fine, show me the cost and let me decide whether to /clear or push through. You already show a banner suggesting /clear at high context - just do the same thing here instead of silently lobotomizing the model.
        • sdevonoes 16 hours ago
          So if they fuck it up again and now they have, let’s say, “db problems” instead of “caching problems”, you would happily simply pay more? Wtf
          • saadn92 16 hours ago
            No, I wouldn't. I'd like some transparency at least.
          • albedoa 16 hours ago
            Did you reply to the wrong comment? I don't see that implied here at all. What?
      • artdigital 13 hours ago
        I'm also a Claude Code user from day 1 here, back from when it wasn't included in the Pro/Max subscriptions yet, and I was absolutely not aware of this either. Your explanation makes sense, but I naively was also under the impression that re-using older existing conversations I had open would just continue the conversation as-is and not be treated as a full cache miss.

        My biggest learning here is the 1 hour cache window. I often have multiple Claudes open and it happens frequently that they're idle for 1+ hours.

        This cache information should probably get displayed somewhere within Claude Code

        • bcherny 12 hours ago
          Yep, agree. We added a little "/clear to save XXX tokens" notice in the bottom right, and will keep iterating on this. Thanks for being an early user!
          • Implicated 12 hours ago
            But.. that doesn't solve the problem of having no indication in-session when it'll lose the cache. A nudge to /clear does nothing to indicate "or else face significant cost" nor does it indicate "your cache is stale".

            Love the product. <3

          • troupo 7 hours ago
            Instead of showing actual usage, costs and cache status you spent two months denying the issue even exists, making the product silently worse, and now you're "iterating on this"
            • troupo 3 hours ago
              To add to this. The new indicator is "New task? /clear to save <X> tokens" even though it affects all tasks, not just new ones.

              Mislead, gaslight, misdirect is the name of the game

      • winternewt 1 hour ago
        > Adding an in-product tip to recommend running /clear when re-visiting old conversations (we shipped a few iterations of this)

        I feel like I'm missing something here. Why would I revisit an old conversation only to clear it?

        To me it sounds like a prompt-cache miss for a big context absolutely needs to be a per-instance warning and confirmation. Or even better a live status indicating what sending a message will cost you in terms of input tokens.
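        Something like this sketch is all it would take (the thresholds and cost multipliers are invented for illustration, not what Anthropic actually charges):

```python
# Sketch of a per-instance warning before sending into an expired
# cache: estimate input cost from whether the prefix is still warm.
# Multipliers and threshold are made-up illustrative values.
def presend_warning(context_tokens, cache_warm, threshold=100_000):
    cost = context_tokens * (0.1 if cache_warm else 1.25)
    if not cache_warm and context_tokens > threshold:
        return (f"Cache expired: sending will re-process "
                f"{context_tokens:,} tokens (~{cost:,.0f} token-units). "
                f"Continue, /compact, or /clear?")
    return None  # warm prefix or small context: no confirmation needed

print(presend_warning(900_000, cache_warm=False))
print(presend_warning(900_000, cache_warm=True))  # None
```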

      • mtilsted 16 hours ago
        Then you need to update your documentation and teach Claude to read the new documentation, because here is what Claude Code answered:

        Question: Hey claude, if we have a conversation, and then i take a break. Does it change the expected output of my next answer, if there are 2 hours between the previous message end the next one?

        Answer: No. A 2-hour gap doesn't change my output. I have no internal clock between messages — I only see the conversation content plus the currentDate context injected each turn. The prompt cache may expire (5 min TTL), which affects cost/latency but not the response itself.

          The only things that can change output across a break: new context injected (like updated date), memory files being modified, or files on disk changing.

        -- This answer directly contradicts your post. It seems like the biggest problem is a total lack of documentation for expected behavior.

        A similar thing happens if I ask claude code for the difference between plan mode, and accept edits on.

        Then Claude told me the only difference was that in plan mode it would ask for permission before doing edits. But I really don't think this is true. Plan mode seems to do a lot more work and presents it in a totally different way; it is not just an "I will ask before applying changes" mode.

        • hennell 22 minutes ago
          Don't be silly, they don't expect you to ask the Ai questions and get the right answers. Obviously if you want to know what's going on you should look at their first solution - check what advice they have posted on X...
        • ryeguy 11 hours ago
          This isn't how LLMs work. They aren't self aware like this, they're trained on the general internet. They might have some pointers to documentation for certain cases, but they generally aren't going to have specialized knowledge of themselves embedded within. Claude code has no need to know about its own internal programming, the core loop is just javascript code.
          • CjHuber 11 hours ago
            It does have an built in documentation subagent it can invoke but that doesn’t help much if they don’t document their shenanigans
      • bobkb 16 hours ago
        Resuming sessions after more than 1 hour is a very common workflow that many teams follow. It would be great if this were treated as expected behaviour and the UX designed around it. Perhaps you are not realising that Claude Code has replaced the shells people were using (i.e. bash is now replaced with a Claude Code session).
        • trinsic2 12 hours ago
          I think that's a bad idea. Expecting to keep a prompt open like this, accumulating context, puts a load on the back end. It's one of those bad habits, like keeping tabs open in a browser as a way to track your workflow, when what you really should be doing is taking notes on your process and working from there.

          I have project folders/files and memory stored for each session; when I come back to my projects, the context is drawn from the memory files and the status saved in my project md files.

          Create a better workflow for yourself and your teams and do it the right way. Quit expecting the prompt to store everything for you.

          For the Claude team: if you haven't already, I'd recommend creating some best practices for people who don't know any better. Otherwise people are going to expect things to work a certain way, and it's going to cause a lot of friction when they can't do what they expect.

          • kiratp 10 hours ago
            Agents making forward progress hours apart is an expected pattern and inference engines are being adapted to serve that purpose well.

            It’s hard to do it without killing performance and requires engineering in the DC to have fast access to SSDs etc.

            Disclosure: work on ai@msft. Opinions my own.

          • troupo 6 hours ago
            > I think thats a bad idea. It seems like expecting to have a prompt open like this, accumulating context puts a load on the back end

            Let's see what Boris Cherny himself and other Anthropic vibe-coders say about this:

            https://x.com/bcherny/status/2044847849662505288

            Opus 4.7 loves doing complex, long-running tasks like deep research, refactoring code, building complex features, iterating until it hits a performance benchmark.

            https://x.com/bcherny/status/2007179858435281082

            For very long-running tasks, I will either (a) prompt Claude to verify its work with a background agent when it's done... so Claude can cook without being blocked on me.

            https://x.com/trq212/status/2033097354560393727

            Opus 4.6 is incredibly reliable at long running tasks

            https://x.com/trq212/status/2032518424375734646

            The long context window means fewer compactions and longer-running sessions. I've found myself starting new sessions much less frequently with 1 million context.

            https://x.com/trq212/status/2032245598754324968

            I used to be a religious /clear user, but doing much less now, imo 4.6 is quite good across long context windows

            ---

            I could go on

        • gib444 5 hours ago
          > Resuming sessions after more than 1 hour is a very common workflow that many teams are following

          Yeah it's called lunch!

      • kccqzy 14 hours ago
        This just does not match my workflow when I work on low-priority projects, especially personal projects when I do them for fun instead of being paid to do them. With life getting busy, I may only have half an hour each night with Claude to make some progress on it before having to pause and come back the next day. It’s just the nature of doing personal projects as a middle-aged person.

        The above workflow basically doesn’t hit the rate limit. So I’d appreciate a way to turn off this feature.

      • ryanisnan 16 hours ago
        Why does the system work like that? Is the cache local, or on Claude's servers?

        Why not store the prompt cache to disk when it goes cold for a certain period of time, and then when a long-lived, cold conversation gets re-initiated, you can re-hydrate the cache from disk. Purge the cached prompts from disk after X days of inactivity, and tell users they cannot resume conversations over X days without burning budget.

        • jetbalsa 16 hours ago
          The cache is on Anthropic's servers; it's like a freeze-frame of the LLM's inner workings at that moment, and the LLM can pick up directly from this saved state. As you can guess, this saved state contains bits of the underlying model, their secret sauce, so it cannot be saved locally...
          • dicethrowaway1 16 hours ago
            Maybe they could let users store an encrypted copy of the cache? Since the users wouldn't have Anthropic's keys, it wouldn't leak any information about the model (beyond perhaps its number of parameters judging by the size).
            • jetbalsa 16 hours ago
              I'm unsure of the sizes needed for the prompt cache, but I suspect it's several gigs (a percentage of the model's weight size). How would the user upload this every time they resumed an old idle session? Also, are they going to save /every/ session you do this with?
              • skissane 16 hours ago
                They could let you nominate an S3 bucket (or Azure/GCP/etc equivalent). Instead of dropping data from the cache, they encrypt it and save it to the bucket; on a cache miss they check the bucket and try to reload from it. You pay for the bucket; you control the expiry time for it; if it costs too much you just turn it off.
              • im3w1l 16 hours ago
                A few gigs of disk is not that expensive. IMO they should allocate every paying user (at least) one disk cache slot that never expires. Use it for their most recent long chat (a very short question-answer that could easily be replayed shouldn't evict a long convo).
                • _flux 4 hours ago
                  I don't know how large the cache is, but Gemini guessed that the quantized cache size for Gemini 2.5 Pro / Claude 4 with a 1M context could be 78 gigabytes; ChatGPT guessed even bigger numbers. If someone can deliver a more precise estimate, you're welcome to :-).

                  So it would probably be quite a long transfer to perform in these cases, and probably not very feasible to implement at scale.

                • spunker540 11 hours ago
                  What's lost on this thread is that these caches are in very tight supply: they live literally on the GPUs running inference. The GPUs must load all the tokens in the conversation (expensive), and continuing the conversation can then leverage the GPU cache to avoid reloading the full context up to that point. But GPUs are obviously in super tight supply, so if a thread has been dead for a while, they need to reuse the GPU for other customers.
            • northern-lights 14 hours ago
              Encryption can only ensure the confidentiality of a message against a non-trusted third party, but when that non-trusted third party happens to be your own machine hosting Claude Code, it is pointless. You can always dump the keys used to encrypt/decrypt the message from your machine's memory and use them to reconstruct the model weights.
              • dicethrowaway1 14 hours ago
                jetbalsa said that the cache is on Anthropic's server, so the encryption and decryption would be server-side. You'd never see the encryption key, Anthropic would just give you an encrypted dump of the cache that would otherwise live on its server, and then decrypt with their own key when you replay the copy.
      • foobarbecue 21 minutes ago
        Hi Boris! Wanted to let you know that I find those ads with you saying "now when you code, you use an agent" obnoxious because of that incorrect statement. I have no interest in slop coding. I find it way more ergonomic and effective to use code to tell a machine precisely what to do than to use English to tell it vaguely. I hate that your ad is misleading so many non-coders, who will actually believe your lie that nobody codes anymore. Probably doesn't help that YouTube was playing it as an interruption in every video I watched. I probably saw it 100 times and was getting to the "throw the remote at the tv" stage XD.
      • iidsample 17 hours ago
        We at UT-Austin have done some academic work to handle the same challenge. Will be curious whether serving engines could be modified: https://arxiv.org/abs/2412.16434

        The core idea is we can use user-activity at the client to manage KV cache loading and offloading. Happy to chat more!!

      • Folcon 1 hour ago
        Hi Boris

        I'm curious: why was 1 hour chosen?

        Is increasing it a significant expense?

        Ever since I heard about this behaviour I've been trying to figure out how to handle long-running Claude sessions, and so far every approach I've tried has been suboptimal.

        It takes time to create a good context, which in my experience can then trigger a decent amount of work. So I've been wondering how much this is a carefully tuned choice that's unlikely to change vs. something adjustable.

      • Joeri 17 hours ago
        This sounds like one of those problems where the solution is not a UX tweak but an architecture change. Perhaps prompt cache should be made long term resumable by storing it to disk before discarding from memory?
        • kivle 16 hours ago
          I agree.. Maybe parts of the cache contents are business secrets.. But then store a server side encrypted version on the users disk so that it can be resumed without wasting 900k tokens?
        • slashdave 14 hours ago
          Disk where? LLM requests are routed dynamically. You might not even land in the same data center.
          • FuckButtons 12 hours ago
            But if you have a tiered cache, then waiting several seconds / minutes is still preferable to getting a cache miss. I suspect the larger problem is the amount of tinkering they are doing with the model makes that not viable.
      • looshch 2 hours ago
        > We tried a few different approaches to improve this UX

        how about acknowledging that you fucked up your own customers’ money and making a full refund for the affected period?

        > Educating users on X/social

        that is beyond me

        you're not a Boris, at best you're a Borka

      • nhinck3 1 hour ago
        So is it for latency or is it for cost?

        Why did you lie 11 days ago, 3 days after the fix went in, about the cause of excess token usage?

      • toephu2 14 hours ago
        How does the Claude team recommend devs use Claude Code?

        1) Is it okay to leave Claude Code CLI open for days?

        2) Should we be using /clear more generously? e.g., on every single branch change, on every new convo?

      • ohcmon 16 hours ago
        Boris, wait, wait, wait,

        Why not use a tiered cache?

        Obviously storage is waaay cheaper than recomputing the cache all the way from the very beginning of the session.

        No matter how you put this explanation, it still sounds strange. Hell, you can even store the cache on the client if you must.

        Please tell me I'm not understanding what is going on...

        Otherwise you really need to hire someone to look at this!

        • krackers 15 hours ago
          Same question I had in https://news.ycombinator.com/item?id=47819914

          I still don't understand it. Yes, it's a lot of data, and presumably they're already shunting it to CPU RAM instead of keeping it in precious VRAM, but they could go further and put it on SSD, at which point it's no longer in the hot path for their inference.
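          To sketch the idea (toy code, all names here hypothetical; a real inference engine moves GPU tensors around, not byte blobs): entries idle past a TTL get demoted to a slower tier instead of being dropped, so a resumed session pays transfer latency instead of a full prefill.

```python
import time

class TieredKVCache:
    """Toy two-tier cache: the hot dict stands in for RAM/VRAM; entries
    idle longer than ttl_seconds are demoted to a cold tier (standing in
    for SSD or object storage) instead of being evicted outright."""

    def __init__(self, ttl_seconds=3600, now=time.monotonic):
        self.ttl = ttl_seconds
        self.now = now           # injectable clock, handy for testing
        self.hot = {}            # session_id -> (kv_blob, last_used)
        self.cold = {}           # session_id -> kv_blob

    def put(self, session_id, kv_blob):
        self.hot[session_id] = (kv_blob, self.now())

    def get(self, session_id):
        if session_id in self.hot:
            kv_blob, _ = self.hot[session_id]
            self.hot[session_id] = (kv_blob, self.now())
            return kv_blob, "hot"
        if session_id in self.cold:
            # Slow path: re-hydrate from the cold tier rather than
            # recomputing the whole prefix on GPUs.
            kv_blob = self.cold.pop(session_id)
            self.hot[session_id] = (kv_blob, self.now())
            return kv_blob, "rehydrated"
        return None, "miss"      # full prefill is now unavoidable

    def demote_idle(self):
        """Periodic sweep: move idle sessions out of the hot tier."""
        cutoff = self.now() - self.ttl
        for sid, (kv_blob, last_used) in list(self.hot.items()):
            if last_used < cutoff:
                self.cold[sid] = kv_blob
                del self.hot[sid]
```

          The catch is that re-hydrating tens of GiB from flash is itself seconds of latency plus DC engineering, which is presumably part of why the simple answer today is to just expire the cache.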

        • rkuska 16 hours ago
          I don't think you can store the cache on the client, given that the thinking is server-side and you only get summaries in your client (and even those are disabled by default).
          • sargunv 16 hours ago
            If they really need to guard the thinking output, they could encrypt it and store it client side. Later it'd be sent back and decrypted on their server.

            But they used to return thinking output directly in the API, and that was _the_ reason I liked Claude over OpenAI's reasoning models.

        • solarkraft 16 hours ago
          I assume they are already storing the cache on flash storage instead of keeping it all in VRAM. KV caches are huge - that’s why it’s impractical to transfer to/from the client. It would also allow figuring out a lot about the underlying model, though I guess you could encrypt it.

          What would be an interesting option would be to let the user pay more for longer caching, but if the base length is 1 hour I assume that would become expensive very quickly.

          • tonyarkles 15 hours ago
            Just to contextualize this: https://lmcache.ai/kv_cache_calculator.html. They only have smaller open models, but for Qwen3-32B with 50k tokens it comes up with 7.62 GB for the KV cache. Imagining a 900k-token session with, say, Opus, I think it'd be pretty unreasonable to flush that to the client after an hour of idling.
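            For a rough sense of where numbers like that come from, here's the standard back-of-the-envelope formula. The Qwen3-32B figures below are assumptions from its public config, and calculators differ on precision/quantization, which is why this lands near but not exactly on the 7.62 GB above.

```python
def kv_cache_bytes(tokens, layers, kv_heads, head_dim, bytes_per_elem=2):
    """KV cache size: one K and one V vector per token per layer,
    each of size kv_heads * head_dim, at bytes_per_elem precision."""
    return 2 * tokens * layers * kv_heads * head_dim * bytes_per_elem

# Assumed Qwen3-32B config: 64 layers, 8 KV heads (GQA), head_dim 128,
# fp16 (2-byte) cache entries.
size = kv_cache_bytes(tokens=50_000, layers=64, kv_heads=8, head_dim=128)
print(f"{size / 2**30:.1f} GiB")  # 12.2 GiB at fp16 (half that at fp8)
```

            Scale tokens to 900k and this lands in the hundreds of GiB for a model of that class, which is the transfer being called unreasonable here.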
          • 2001zhaozhao 13 hours ago
            I wonder whether prompt caches would be the perfect use case for something like Optane.

            They're kept long enough that it's expensive to store them in RAM, but short enough that the writes are frequent and will wear down SSD storage.

          • ohcmon 16 hours ago
            Yes, encryption is the solution for client-side caching.

            But even if it's not, I can't construct a scenario in my head where recomputing it on real GPUs is cheaper/faster than retrieving it from some kind of slower cache tier.

      • 8note 16 hours ago
        Reasonably, if I'm in an interactive session, it's going to have breaks of an hour or more.

        What's driving the one-hour cache? Shouldn't people be able to have lunch, then come back and continue?

        Are you expecting Claude Code users not to attend meetings?

        I think, product-wise, you might need a better story about who uses Claude Code, when, and why.

        Same thing with session logs, actually: I know folks who are definitely going to try to write a yearly R&D report and monthly timesheets based on text analysis of their Claude Code session files, and they're going to be incredibly unhappy when they find out it's all been silently deleted.

        • FuckButtons 12 hours ago
          As with everything Anthropic recently this is a supply constraint issue. They have not planned for scale adequately.
      • BoppreH 14 hours ago
        Isn't that exactly what people had been accusing Anthropic of doing, silently making Claude dumber on purpose to cut costs? There should be, at minimum, a warning on the UI saying that parts of the context were removed due to inactivity.
      • noname120 1 hour ago
        Why not automatically run a compaction close to the 1-hour mark? Then the cache miss won’t have such a bad impact.
      • the-grump 16 hours ago
        That is understandable, but the issue is the sudden drop in quality and the silent surge in token usage.

        It also seems like the warning should be in channel and not on X. If I wanted to find out how broken things are on X, I'd be a Grok user.

      • try-working 13 hours ago
        You created this issue by setting a timer for cache clearing. Time is really not a dimension that plays any role in how coding agent context is used.
      • baq 39 minutes ago
        maybe you could surface an expected cache miss to the user
      • dnnddidiej 14 hours ago
        It is too surprising. Time passed should not matter when using AI.

        Either swallow the cost, or be transparent with the user and offer both options each time.

      • willsmith72 9 hours ago
        Wow, so that's why you did #2? The explanation in the CLI is really not clear. I thought it was just a suggestion to compact; I had no idea it was way more expensive than if I hadn't left it idle for an hour.

        You guys really need to communicate that better in the CLI, for people who aren't on social media.
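        To make the gap concrete, here is a back-of-the-envelope sketch using the cache multipliers Anthropic publishes for the API (cache writes roughly 1.25x a base input token for the default 5-minute cache, cache reads roughly 0.1x; how the subscription rate limits weight these is an assumption, and longer-TTL cache writes are pricier still):

```python
# Relative per-token costs (assumed from Anthropic's published API
# pricing: cache writes ~1.25x base input, cache reads ~0.1x).
CACHE_WRITE = 1.25
CACHE_READ = 0.10

def turn_cost(context_tokens, cached_tokens):
    """One turn: the cached prefix is read cheaply; everything else
    is processed fresh and written back into the cache."""
    fresh = context_tokens - cached_tokens
    return cached_tokens * CACHE_READ + fresh * CACHE_WRITE

warm = turn_cost(900_000, 899_000)  # normal turn, ~1k new tokens
cold = turn_cost(900_000, 0)        # first turn after the cache expired
print(f"{cold / warm:.1f}x")        # ~12.3x the cost of a warm turn
```

        In other words, on a 900k-token session, the single turn after expiry can cost on the order of a dozen warm turns, which matches the rate-limit hit Boris describes.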

      • Confiks 11 hours ago
        So you made this change completely invisible to the user, without letting the user choose between the two behaviors, and without even documenting it in the (extremely verbose) changelog [1]? I can't find it, and the Docs Assistant can't find it either (well, it said "I found it!" three times when fed your reply, each time pointing at a non-matching item).

        I frequently debug issues while keeping my carefully curated but long context active for days. Losing potentially very important context in the middle of a debugging session, and getting worse answers as a result, costs me a lot more money than the cache misses would.

        In my eyes, Claude Code is mainly a context management tool. I build a foundation of apparent understanding of the problem domain, and then try to work towards a solution in a dialogue. Now you tell me Anthropic has been silently breaking down that foundation without telling me, wasting potentially hours of my time.

        It's a clear reminder that these closed-source harnesses cannot be trusted (now or in the future), and I should find proper alternatives for Claude Code as soon as possible.

        [1] https://code.claude.com/docs/en/changelog

      • albert_e 3 hours ago
        > The challenge is: when you let a session idle for >1 hour, when you come back to it and send a prompt, it will be a full cache miss, all N messages. We noticed that this corner case led to outsized token costs for users.

        I don't agree with this being characterized as a "corner case".

        Isn't this how most long-running work happens across all serious users?

        I am not at my desk babysitting a single CC chat session all day. I have other things to attend to, and that was the whole point of agentic engineering.

        Don't CC users take lunch breaks?

        How are all these utterly common scenarios being called corner cases, as if they were wildly out of the norm and UX could be sacrificed for them?

      • troupo 17 hours ago
        > We tried a few different approaches to improve this UX: 1. Educating users on X/social

        No. You had random developers tweet and reply at random times to random users while all of your official channels were completely silent. Including channels for people who are not terminally online on X

        • Terretta 13 hours ago
          There's a cultural divide between SV and the 85% of SMB using M365, for example. When everyone you know uses a thing, I mean, who doesn't?*

          There's a reason live-service games have splash banners at every login. No matter what you pick as an official comms channel, most of your users aren't there!

          * To be fair, of all these firms, ANTHROP\C tries the hardest to remember, and deliver like, some people aren't the same. Starting with normals doing normals' jobs.

      • 0123456789ABCDE 4 hours ago
        2. Could you bring back the _compact and accept plan_ flow, even if it is not the default option?
      • mandeepj 11 hours ago
        > that would be >900k tokens written to cache all at once

        Probably that's why I hit my weekly limit 3-4 days ago; it was scheduled to reset later today. I just checked, and it's already reset.

        If it isn't done already, shouldn't there be a check somewhere that alerts when an outrageous number of tokens is being written? Because that's not right.

      • infogulch 16 hours ago
        How big is the cache? Could you just evict the cache into cheap object storage and retrieve it when resuming? When the user starts the conversation back up show a "Resuming conversation... ⭕" spinner.
      • samusiam 2 hours ago
        For idle sessions I would MUCH rather pay the cost in tokens than reduced quality. Frankly, it's shocking to me that you would make that trade-off for users without their knowledge or consent.
      • arcza 13 hours ago
        You need to seriously look at your corporate communications and hire some adults to standardise your messaging, comms, and signals. The volatility behind your doors is obvious to us, and you'd impress us much more if you slowed down, took a moment to think about your customers, and sent a consistent message.

        You lost huge trust with the A/B sham test. You lost trust with the enshittification of the tokenizer from 4.6 to 4.7. Why not just say "hey, due to huge increases in energy costs, GPU demand, and compute constraints, we've had to increase Pro from $20 to $30"? You might lose 5% of customers. But the shady A/B thing and a dodgy tokenizer increasing burn rate tell everyone, including enterprise, that you don't care about honesty and integrity in your product.

        I hope this feedback helps because you still stand to make an awesome product. Just show a little more professionalism.

      • nextaccountic 16 hours ago
        What about selling long-term cache space to users?

        Or even letting the user control cache expiry on a per-request basis, with a /cache command.

        That way they decide whether to drop the cache right away or extend it for 20 hours, etc.

        It would cost tokens even though the underlying resource is memory/SSD space, not compute.

      • FuckButtons 12 hours ago
        From a utility perspective, a tiered cache with some much-higher-latency storage option for up to n hours would be very useful to me to prevent that L1 cache miss.
      • taspeotis 5 hours ago
        Hi, thanks for Claude Code. I was wondering though if you'd considering adding a mode to make text green and characters come down from the top of the screen individually, like in The Matrix?
      • airstrike 10 hours ago
        Why is time the variable you're solving for? Why can't I keep that cache warm by keeping the session open?
      • chris1993 12 hours ago
        So this explains why resuming a session after a 5-hour timeout basically eats most of the next session. How then to avoid this?
      • useyourforce 11 hours ago
        I actually have a suggestion here - do not hide token count in non-verbose mode in Claude Code.
      • gverrilla 17 hours ago
        I drop sessions very frequently to resume later - that's my main workflow with how slow Claude is. Is there anything I can do to not encounter this cache problem?
      • growt 16 hours ago
        Wasn't the cache time reduced to 5 minutes? Or is that just some users' interpretation of the bug?
      • jorjon 10 hours ago
        What about:

        /loop 5m say "ok".

        Will that keep the cache fresh?

      • sockaddr 16 hours ago
        Sorry, but I think this should be left up to the user to decide how it works and how they want to burn their tokens. Also, a countdown timer would be better than all of these other options you mention.
      • frumplestlatz 17 hours ago
        The entire reason I keep a long-lived session around is because the context is hard-won, in terms of tokens and my time.

        Silently degrading intelligence ought to be something you never do, but especially not for use-cases like this.

        I'm looking back at my past few weeks of work and realizing that these few regressions literally wasted tens of hours of my time, and hundreds of dollars in extra usage fees. I ran out of my entire weekly quota four days ago, and had to pause the personal project I was working on.

        I was running the exact same pipeline I’ve run repeatedly before, on the same models, and yet this time I somehow ate a week’s worth of quota in less than 24h. I spent $400 just to finish the pipeline pass that got stuck halfway through.

        I’m sorry to be harsh, but your engineering culture must change. There are some types of software you can yolo. This isn’t one of them. The downstream cost of stupid mistakes is way, way too high, and far too many entirely avoidable bugs — and poor design choices — are shipping to customers way too often.

        • deaux 14 hours ago
          > The entire reason I keep a long-lived session around is because the context is hard-won — in term of tokens and my time. Silently degrading intelligence ought to be something you never do, but especially not for use-cases like this.

          Hard agree, would like to see a response to this.

        • 8note 15 hours ago
          as a variation:

          How does this help me as a customer? If I have to redo the context from scratch, I pay the high token cost again, and I also pay my own time to fill it.

          The cost of reloading the window didn't go away; it went up even more.

        • FireBeyond 11 hours ago
          > I’m sorry to be harsh, but your engineering culture must change. There are some types of software you can yolo. This isn’t one of them. The downstream cost of stupid mistakes is way, way too high, and far too many entirely avoidable bugs — and poor design choices — are shipping to customers way too often.

          I have to imagine this isn't helped by working somewhere where you effectively have infinite tokens and usage of the product that people are paying for, sometimes a lot.

      • kang 15 hours ago
        > tokens written to cache all at once, which would eat up a significant % of your rate limits

        Constructing context is not an LLM pass; it shouldn't even count towards token usage. The word 'caching' itself says "don't recompute me".

        Since the devs on HN (and the whole world) are buying what looks like nonsense to me: what am I missing?

    • tadfisher 17 hours ago
      It astounds me that a company valued in the hundreds-of-billions-of-dollars has written this. One of the following must be true:

      1. They actually believed latency reduction was worth compromising output quality for sessions that have already been long idle. Moreover, they thought doing so was better than showing a loading indicator or some other means of communicating to the user that context is being loaded.

      2. What I suspect actually happened: they wanted to cost-reduce idle sessions to the bare minimum, and "latency" is a convenient-enough excuse to pass muster in a blog post explaining a resulting bug.

      • adam_patarino 50 minutes ago
        It's certainly #2. They have shown over dozens of decisions that they move very quickly, break stuff, and then have to figure out both what broke and how to explain it.
      • someguyiguess 16 hours ago
        It’s definitely a cost / resource saving strategy on their end.
      • raincole 10 hours ago
        It's very weird that they frame caching as "latency reduction" when it comes to a cloud service. I mean, yes, technically it reduces latency, but more importantly it reduces cost. Sometimes it's more than 80% of the total cost.

        I'm sure most companies and customers would consider compromising quality for an 80% cost reduction. If they'd just be honest, they'd be fine.

      • sekai 6 hours ago
        The same company that claims they have models that are too "dangerous" to release btw.
      • billywhizz 13 hours ago
        What's even more amazing is that it took them two weeks to fix what must have been a pretty obvious bug, especially given who they are and what they are selling.
      • retinaros 17 hours ago
        they just vibe-coded a fix, didn't think about the tradeoff they were making, and their always-yes-man of a model just went along with it
    • sockaddr 16 hours ago
      Yeah this is actually quite shocking. In my earlier uses of CC I might noodle on a problem for a while, come back and update the plan, go shower, think, give CC a new piece of advice, etc. Basically treating it like a coworker. And I thought that it was a static conversation (at least on the order of a day or so). An hour is absurd IMO and makes me want to rethink whether I want to keep my anthropic plan.
    • seizethecheese 18 hours ago
      It's also a bit of a fishy explanation for purging tokens older than an hour, which happens to also be their cache limit. I doubt it is incidental that this change would also dramatically drop their costs.
    • zmmmmm 14 hours ago
      Seems like it would interact very badly with the time based usage reset. If lots of people are hitting their limit and then letting the session idle until they can come back, this wouldn't be an exception. It would almost be the default behaviour.
    • Aperocky 9 hours ago
      Wow, I always thought the context is always stored locally and this is something I have control over.

      Glad I use kiro-cli which doesn't do this.

      • Bishonen88 6 hours ago
        you might be biased due to your employment :)
  • cmenge 13 hours ago
    Bit surprised about the amount of flak they're getting here. I found the article seemed clear, honest and definitely plausible.

    The deterioration was real and annoying, and it shines a light on the problematic lack of transparency about what exactly is going on behind the scenes, and on the somewhat arbitrary token-cost-based billing. There are too many factors at play; if you wanted to trace all that as a user, you might as well just do the work yourself.

    The fact that waiting for a long time before resuming a convo incurs additional cost and lag seemed clear to me from having worked with LLM APIs directly, but it might be important to make this more obvious in the TUI.

    • maronato 12 hours ago
      I agree that it’s plausible, and I hope they learn. But trust is earned, and Anthropic’s public responses this past month were dismissive and unhelpful.

      Every one of these changes had the same goal: trading the intelligence users rely on for cheaper or faster outputs. Users adapt to how a model behaves, so sudden shifts without transparency are disorienting.

      The timing also undercuts their narrative. The fixes landed right before another change with the same underlying intent rolled out. That looks more like they were just reacting to experiments rather than understanding the underlying user pain.

      When people pay hundreds or thousands a month, they expect reliability and clear communication, ideally opt-in. Competitors are right there, and unreliability pushes users straight to them.

      All of this points to their priorities not being aligned with their users’.

      • xpe 11 hours ago
        > All of this points to their priorities not being aligned with their users’.

        Framing this as "aligned" or "not aligned" ignores the interesting reality in the middle. It is banal to say an organization isn't perfectly aligned with its customers.

        I'm not disagreeing with the commenter's frustration. But I think it can help to try something out: take say the top three companies whose product you interact with on a regular basis. Take stock of (1) how fast that technology is moving; (2) how often things break from your POV; (3) how soon the company acknowledges it; (4) how long it takes for a fix. Then ask "if a friend of yours (competent and hard working) was working there, would I give the company more credit?"

        My overall feel is that people underestimate the complexity of the systems at Anthropic and the chaos of the growth.

        These kind of conversations are a sort of window into people's expectations and their ability to envision the possible explanations of what is happening at Anthropic.

        • daveoc64 52 minutes ago
          >My overall feel is that people underestimate the complexity of the systems at Anthropic and the chaos of the growth.

          Making changes like reducing the usage window at peak times (https://x.com/trq212/status/2037254607001559305) without announcing it (until after the backlash) is the sort of thing that's making people lose trust in Anthropic. They completely ignored support tickets and GitHub issues about that for 3 days.

          You shouldn't have to rely on finding an individual employee's posts on Reddit or X for policy announcements.

          That policy hasn't even been put into their official documentation nearly one month on - https://support.claude.com/en/articles/11647753-how-do-usage...

          A company with their resources could easily do better.

        • willis936 1 hour ago
          So you're arguing they're just plain incompetent? Not sure that's going to win the trust of customers either.
    • adam_patarino 48 minutes ago
      The explanations are all fine.

      But they come after the team gaslit everyone, telling us it was a skill issue.

    • voxgen 5 hours ago
      Some of the flak is that issues are often only acknowledged once a fix is in place, and the partial fixes are presented as if they solve the whole problem.

      The near-instant transition from "there is no problem" to "we already fixed the problem so stop complaining" is basically gaslighting. (Admittedly the second sentiment comes more from the community, but they get that attitude after taking the "we fixed all the problems" posts at face value.)

      • noname120 1 hour ago
        And reports are often dismissed at first as perception or subjective bias, as users getting used to the models being good and having higher expectations because of that, etc. Users are blamed a lot before Anthropic is forced to admit that there is an actual problem.
    • epsteingpt 12 hours ago
      They gaslit people for months saying it wasn't an issue publicly.

      That's the reason for the flak

      • thomassmith65 11 hours ago
        And still are gaslighting:

          We take reports about degradation very seriously. We never intentionally degrade our models [...] On March 4, we changed Claude Code's default reasoning effort from high to medium
        
        Anthropic is the best company of its kind, but that is badly worded PR.
        • sobjornstad 9 hours ago
          Is adding JPEG compression to your software “intentional degradation” of the software? I wouldn't say providing a selectable option to use a faster, cheaper version of something qualifies as “degradation”.

          It is certainly true that they did a poor job communicating this change to users (I did not know that the default was “high” before they introduced it, I assumed they had added an effort level both above and below whatever the only effort choice was there before). On the other hand, I was using Claude Code a fair bit on “medium” during that time period and it seemed to be performing just fine for me (and saving usage/time over “high”), so it doesn't seem clear that that was the wrong default, if only it had been explained better.

          • endymion-light 3 hours ago
            Yes. If Instagram started performing intensive JPEG compression that made photos choppy and unpleasant, I would consider that an intentional degradation of the software.
          • BoorishBears 7 hours ago
            Is enabling JPEG compression on your software's output by default, because the compression saves you money, “intentional degradation” of the software?

            I would say it is, and I'd be loath to use anything made by people who'd couch that change of defaults as "providing a selectable option to use a faster, cheaper version".

            Yuck.

        • xpe 10 hours ago
          To my eye, gaslighting is a serious accusation. Wikipedia's first line matches how I think of it: "Gaslighting is the manipulation of someone into questioning their perception of reality."

          Did I miss something? I'm only looking at primary sources to start. Not Reddit. Not The Register. Official company communications.

          Did Anthropic tell users, e.g., "you are wrong, your experience is not worse"? If so, that would reach the bar of gaslighting as I understand it (and I'm not alone). If you have a different understanding, please share it so I understand what you mean.

          • thomassmith65 9 hours ago
            I'd rather not speak too poorly of Anthropic, because - to the extent I can bring myself to like a tech company - I like Anthropic.

            That said, the copy uses "we never intentionally degrade our models" to mean something like "we never degrade one facet of our models unless it improves some other facet of our models". This is a cop out, because it is what users suspected and complained about. What users want - regardless of whether it is realistic to expect - is for Anthropic to buy even more compute than Anthropic already does, so that the models remain equally smart even if the service demand increases.

            • xpe 7 hours ago
              It seems to me you dropped the "gaslighting" claim without owning it. I personally find this frustrating; I prefer when people own up to their mistakes. Like many people, I think "gaslighting" is just not a term you throw around lightly. Then you shifted to "cop out". (This feels like a motte and bailey.) But I don't think "cop out" is a phrase that works either...

              Some terms: the model is the thing that runs inference. Claude Code is not a model; it is a harness. To summarize Anthropic's recent retrospective, their technical mistakes were about the harness.

              I'm not here to 'defend' Anthropic's mistakes. They messed up technically. And their communication could have been better. But they didn't gaslight. And on balance, I don't see net evidence that they've "copped out" (by which I mean mischaracterized what happened). I see more evidence of the opposite. I could be wrong about any of this, but I'm here to talk about it in the clearest, best way I can. If anyone wants to point to primary sources, I'll read them.

              I want more people to actually spend a few minutes and actually give the explanation offered by Anthropic a try. What if isolating the problems was hard to figure out? We all know hindsight is 20/20 and yet people still armchair quarterback.

              At the risk of sounding preachy, I'm here to say "people, we need to do better". Hacker News is a special place, but we lose it a little bit every time we don't put in a quality effort.

              • thomassmith65 2 hours ago
                Fair enough. If the comments in question were still editable, I would be happy to replace 'gaslighting' with 'being a bit slippery' or something less controversial.

                No worries about 'sounding preachy'; it's a good thing people want to uphold the sobriety that makes HN special.

          • asdewqqwer 2 hours ago
            I think there are plenty of such replies on GitHub, for example the one to the AMD AI director's issue.
          • oofbey 8 hours ago
            They didn’t say “your experience is not worse” but they did frequently say “just turn reasoning effort back up and it will be fine”. And that pretty explicitly invalidates all the (correct) feedback which said it’s not just reasoning effort.

            They knew they had deliberately made their system worse, despite their lame promise published today that they would never do such a thing. And so they incorrectly assumed that their ham-fisted policy blunder was the only problem.

            Still plenty I prefer about Claude over GPT but this really stings.

            • xpe 6 hours ago
              I'm aiming for intellectual honesty here. I'm not taking a side for a person or an org, but I'm taking a stand for a quality bar.

              > They knew they had deliberately made their system worse

              Define "they". The teams that made particular changes? In real-world organizations, not all relevant information flows to all the right places at the right time. Mistakes happen because these are complex systems.

              Define "worse". There are a lot of factors involved. With a given amount of capacity at a given time, some aspect of "quality" has to give, so "quality" is a judgment call. It is easy to use a non-charitable definition to "gotcha" someone. (Some concepts are inherently indefensible; sometimes you just can't win, and "quality" is one of those things. As soon as I define quality one way, you can attack me by defining it another way. A version of this principle is explained in The Alignment Problem by Brian Christian, regarding predictive policing, IIRC.)

              I'm seeing a lot of moral outrage but not enough intellectual curiosity. It is embarrassingly easy to say "they should have done better"... OK. Until someone demonstrates to me that they understand the complexity of a nearly-billion dollar company rapidly scaling with new technology, growing faster than most people comprehend, I think they are just complaining and cooking up reasons to feel justified in it. The possible truth that complex systems are hard to do well apparently doesn't scratch that itch for many people, so they reach for blame. This is not the way to learn; blaming tends to cut off curiosity.

              I suggest this instead: redirect if you can to "what makes these things so complicated?" and go learn about that. You'll be happier, smarter, and ... most importantly ... be building a habit that will serve you well in life. Take it from an old guy who is late to the game on this. I've bailed on companies because "I thought I knew better". :/

              • philipwhiuk 1 hour ago
                > Define "they". The teams that made particular changes? In real-world organizations, not all relevant information flows to all the right places at the right time. Mistakes happen because these are complex systems.

                Accidentally (or deliberately) making your CS teams ill-informed should not function as a get-out-of-jail-free card. Rather the reverse.

      • xpe 10 hours ago
        I know some people use the word "gaslighting" in connection with Anthropic. I've read some of those threads here, and some on Reddit, but I don't put much stock in them. To step back, hopefully reasonable people can start here:

            1. Degraded service sucks.
            2. Anthropic saying, e.g., "we're not seeing it" sucks.
            3. Not getting a fix when you want it sucks.
        
        Try to understand what I mean when I say none of the above meet the following sense of gaslighting: "Gaslighting is the manipulation of someone into questioning their perception of reality." Emphasis on understand what I mean. This says it well: [1].

        If you can point me to an official communication from Anthropic where they say "User <so and so> is not actually seeing degraded performance" when Anthropic knows otherwise, that would clearly be gaslighting -- intent matters in my book.

        But if their instrumentation was bad and they were genuinely reporting what they could see, that doesn't cross into gaslighting by my book. But I have a tendency to think carefully about ethical definitions. Some people just grab a word off the shelf with a negative valence and run with it: I don't put much stock in what those people say. Words are cheap. Good ethical reasoning is hard and valuable.

        It's fine if you have a different definition of "gaslighting". Just remember that some of us have actually been gaslit by people, so we prefer to save the word for situations where the original definition applies. People like us are not opposed to being disappointed, upset, or angry at Anthropic, but we have certain epistemic standards that we don't toss out when an important tool fails to meet our expectations and the company behind it doesn't recognize it soon enough.

        [1]: https://www.reddit.com/r/TwoXChromosomes/comments/tep32v/can...

  • podnami 18 hours ago
    They lost me at Opus 4.7

    Anecdotally OpenAI is trying to get into our enterprise tooth and nail, and have offered unlimited tokens until summer.

    Gave GPT-5.4 a try because of this, and honestly I don’t know if we are getting some extra treatment, but running it at extra-high effort for the last 30 days I’ve barely seen it make any mistakes.

    At some points even the reasoning traces brought a smile to my face as it preemptively followed things that I had forgotten to instruct it about but were critical to get a specific part of our data integrity 100% correct.

    • dsco 17 hours ago
      Same here. I feel like all of these shenanigans could be because Anthropic is compute-constrained, forcing them to take reckless risks to reduce it.
    • tasoeur 8 hours ago
      Same here. I was a fervent Claude Code user at $200/mo until Opus 4.7.

      Freezing your IDE version is now a thing of the past, the new reality is that we can't expect agentic dev workflows to be consistent and I see too many people (including myself) getting burned by going the single-provider route.

      On one hand I’m glad to finally see anthropic communicate on this but at this point all I have to say is… time to diversify?

    • ghusbands 3 hours ago
      They lost me a little before then. Claude Code's regressions were so very obvious, and there's no sign, in this article or in the comments of those who work on Claude Code here on HN, that they've learned their lesson. They'll continue to tweak and generally mess around with a product people are using, altering its behaviour without notice in ways that can severely impact use, for months! GPT-5.4 has been remarkably consistent and capable as a replacement. I've cancelled my Max plan.
    • beering 14 hours ago
      GPT-5.4 was already better than Opus 4.6 on a lot of areas, especially correctness and tricky logic. I’m eager to see if 5.5 is even better.
    • UntappedShelf21 3 hours ago
      I started using Claude heavily on the 20th after having not used it for a year -- largely Sonnet 4.6, web, Cowork, and Code. I can confidently say it is significantly worse than it was a year ago, and I regret that my new employer requires we use it, and only it.
    • cube2222 17 hours ago
      I’ve never been one to complain about new models, and also didn’t experience most of the issues folks were citing about Claude Code over the last couple months. I’ve been using it since release, happy with almost each new update.

      Until Opus 4.7 - this is the first time I rolled back to a previous model.

      Personality-wise it’s the worst of AI: “it’s not x, it’s y”, strong short sentences, in general a bullshitty vibe, and gaslighting me that it fixed something even though it didn’t actually check.

      I’m not sure what’s up, maybe it’s tuned for harnesses like Claude Design (which is great btw) where there’s an independent judge to check it, but for now, Opus 4.6 it is.

      • port11 4 hours ago
        I noticed the difference, but coming from Gemini and xAI models it wasn’t that glaring. I still find that Opus makes much better plans than anything else I’ve tried, and it’s been very good at catching my mistakes in using public-key cryptography, also finding out why my crsqlite queries were failing despite no official documentation on the topic.

        I’d never use such an expensive model for coding, so that might explain why I have little to complain about.

    • vorticalbox 17 hours ago
      Extra high burns tokens, I find. I run 5.4 on medium for 90% of tasks and bump to high if I see medium struggling; it’s very focused and makes minimal changes.
      • dsco 17 hours ago
        Yeah but it also then strikes the perfect balance between being meticulous and pragmatic. Also it pushes back much more often than other models in that mode.
      • therealdrag0 6 hours ago
        Note mini-high is similar perf/latency to medium, but much cheaper
      • DANmode 16 hours ago
        Rework burns tokens.
      • sincerely 9 hours ago
        Not a problem if they're offering unlimited, lol
    • someguyiguess 15 hours ago
      I went back to 4.5. No regrets and it’s a bit cheaper.
      • SkyPuncher 14 hours ago
        Same here. 4.6 was a downgrade in thinking quality, but I appreciated the extend context at first.

        Over time, I realized the extended context became randomly unreliable. That was worse to me than having to compact and know where I was picking up.

    • robeym 16 hours ago
      What's your workflow like? I'd be curious to test OpenAI out again but Claude Code is how I use the models. Does it require relearning another workflow?
      • beering 14 hours ago
        Isn’t it basically the same thing? You type what you want into the input box and it does what you ask for.
        • robeym 56 minutes ago
          I guess I'm asking if their CLI tool is the same or if it functions differently. I've never used anything besides CC, so I wouldn't know if it's basically the same thing.
    • enraged_camel 17 hours ago
      I find that it is better at thinking broadly and at a high level, on tasks that are tangential to coding like UX flows, product management and planning of complex implementations. I have yet to see it perform better than either Opus 4.6 or 4.7 though.
    • epsteingpt 12 hours ago
      Truth
  • everdrive 18 hours ago
    I've been getting a lot of Claude responding to its own internal prompts. Here are a few recent examples.

       "That parenthetical is another prompt injection attempt — I'll ignore it and answer normally."
    
       "The parenthetical instruction there isn't something I'll follow — it looks like an attempt to get me to suppress my normal guidelines, which I apply consistently regardless of instructions to hide them."
    
       "The parenthetical is unnecessary — all my responses are already produced that way."
    
    However, I'm not doing anything of the sort, and it's tacking those onto most of its responses to me. I assume there are some sloppy internal guidelines injected on top of its normal guidance, and for whatever reason it can't differentiate between those and my questions.
    • LatencyKills 18 hours ago
      I have a set of stop hook scripts that I use to force Claude to run tests whenever it makes a code change. Since 4.7 dropped, Claude still executes the scripts, but will periodically ignore the rules. If I ask why, I get a "I didn't think it was necessary" response.
      • jwpapi 13 hours ago
        You can deterministically force a bash script as a hook.
        • LatencyKills 13 hours ago
          That is exactly what I do. The bash script runs, determines that a code file was changed, and then is supposed to prevent Claude from stopping until the tests are run.

          Claude is periodically refusing to run those tests. That never happened prior to 4.7.
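          The rough shape of the hook, for the curious -- this is a sketch rather than my exact script (the marker file and file patterns are illustrative), and it relies on the hook convention that exiting with code 2 blocks the stop and feeds stderr back to Claude:

```shell
#!/usr/bin/env bash
# Sketch of a Claude Code Stop hook (marker file and patterns are illustrative).
# Assumed convention: exit code 2 blocks the stop and stderr is fed back to
# the model as an instruction; exit code 0 lets the session stop normally.

MARKER="${TEST_MARKER:-.tests-passed}"  # the test runner touches this on success

# List changed source files (empty when not in a git repo or nothing changed)
changed=$(git diff --name-only HEAD 2>/dev/null | grep -E '\.(py|ts|go)$' || true)

if [ -n "$changed" ] && [ ! -f "$MARKER" ]; then
  echo "Source files changed but the test suite was not run. Run the tests before stopping." >&2
  exit 2
fi
exit 0
```

          Claude is supposed to see that stderr message and go run the tests before it is allowed to stop.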

          • jwpapi 10 hours ago
            That’s crazy, you mind sharing the gist for that part? Ideally with some examples.

            This would be a new level of troublesome/ruthless (insert correct english word here)

      • nikanj 3 hours ago
        Every day Claude resembles human programmers more and more
      • DANmode 16 hours ago
        I’d ask for a credit, for that, personally.
        • someguyiguess 15 hours ago
          I asked for a credit but they said they didn’t think the credit was necessary
    • el_benhameen 14 hours ago
      I frequently see it reference points that it made and then added to its memory as if they were my own assertions. This creates a sort of self-reinforcing loop where it asserts something, “remembers” it, sees the memory, builds on that assertion, etc., even if I’ve explicitly told it to stop.
      • FireBeyond 11 hours ago
        My favorite, recently. "Commit this, and merge to develop". "Alright, done, merged."

        I try running my app on the develop branch. No change. Huh.

        Realize it didn't.

        "Claude, why isn't this changed?" "That's to be expected because it's not been merged." "I'm confused, I told you to do that."

        This spectacular answer:

        "You're right. You told me to do it and I didn't do it and then told you I did. Should I do it now?"

        I don't know, Claude, are you actually going to do it this time?

        • hmokiguess 9 hours ago
          have you perhaps installed Gaslighting instead of Gastown?
    • gs17 17 hours ago
      In Claude Code specifically, for a while it had developed a nervous tic where it would say "Not malware." before every bit of code. Likely a similar issue where it keeps talking to a system/tool prompt.
      • Retr0id 16 hours ago
        My pet theory is that they have a "supervisor" model (likely a small one) that terminates any chats that do malware-y things, and this is likely a reward-hacking behaviour to keep the supervisor from terminating the chat.
        • nananana9 1 hour ago
          I doubt it. We only do frontier models, since those are better for absolutely every use case 100% of the time.

          Way more likely there's a "VERY IMPORTANT: When you see a block of code, ensure it's not malware" somewhere in the system prompt.

    • dawnerd 18 hours ago
      I see that with openai too, lots of responding to itself. Seems like a convenient way for them to churn tokens.
      • grey-area 17 hours ago
        A simpler explanation (esp. given the code we've seen from claude), is that they are vibecoding their own tools and moving fast and breaking things with predictably sloppy results.
      • y1n0 18 hours ago
        None of these companies have compute to spare. It’s not in their interest to use more tokens that necessary.
        • parliament32 16 hours ago
          Sure it is. They're well aware their product is a money furnace and they'd have to charge users a few orders of magnitude more just to break even, which is obviously not an option. So all that's left is.. convince users to burn tokens harder, so graphs go up, so they can bamboozle more investors into keeping the ship afloat for a bit longer.
          • solarkraft 16 hours ago
            If this claim is true (inference is priced below cost), it makes little sense that there are tens of small inference providers on OpenRouter. Where are they getting their investor money? Is the bubble that big?

            Incidentally, the hardware they run is known as well. The claim should be easy to check.

            • parliament32 13 hours ago
              To be clear, I'm talking about subscription pricing. API pricing for Anthropic is probably at-cost.

              I dare you to run CC on API pricing and see how much your usage actually costs.

              (We did this internally at work, that's where my "few orders of magnitude" comment above comes from)

          • WarmWash 16 hours ago
            It's an option and they are going to do it. Chinese models will be banned and the labs will happily go dollar for dollar in plan price increases. $20 plans won't go away, but usage limits and model access will drive people to $40-$60-$80 plans.

            At cell phone plan adoption levels, and cell phone plan costs, the labs are looking at 5-10yr ROI.

        • boringg 18 hours ago
          Not true - they absolutely want to goose demand as they continue to burn investor dollars and deploy infra at scale.

      If that demand even slows down in the slightest, the whole bubble collapses.

          Growth + Demand >> efficiency or $ spend at their current stage. Efficiency is a mature company/industry game.

        • dawnerd 17 hours ago
          That doesn’t mean they can’t also be wasteful. Fact is, Claude and GPT do far more internal thinking about their system prompts than is needed. Every step, they mention something about making sure they do xyz and don’t do whatever. Why does it need to say things to itself like “great, I have a plan now!”? That’s pure waste.
          • empthought 16 hours ago
            > Why does it need to say things to itself like “great I have a plan now!”

            How else would it know whether it has a plan now?

        • malfist 17 hours ago
          Are you saying these companies don't want to sell more product to us? Because that's the logical extension of your argument.
          • keeda 16 hours ago
            No, the argument is they want to sell more product to more people, not just more product (to the same people.) Given that a lot of their income is from flat-rate subscriptions, they make money with more people burning tokens rather than just burning more tokens.

            After all, "the first hit's free" model doesn't apply to repeat customers ;-)

        • deckar01 16 hours ago
          You don’t have to use compute to pad the token count.
      • ngruhn 16 hours ago
        All the labs are in a cut throat race, with zero customer loyalty. As if they would intentionally degrade quality/speed for a petty cash grab.
      • OtomotO 18 hours ago
        This, so much this!

        Pay-by-token pricing while token usage is totally opaque is a super convenient money-printing machine.

    • giwook 14 hours ago
      Curious what effort level you have it set to and the prompt itself. Just a guess but this seems like it could be a potential smell of an excessively high effort level and may just need to dial back the reasoning a bit for that particular prompt.
    • Normal_gaussian 14 hours ago
      I often have Claude commit and open a PR; in the last week I've seen several instances of it deciding to do extra work as part of the commit. It falls over when it tries to 'git add', but it got past me once when I was trying auto mode.
    • rafram 18 hours ago
      Check that you’re running the latest version.
    • viccis 15 hours ago
      Yeah, I had to deal with mine warning me that a website it accessed for its task contained a prompt injection, and when I told it to elaborate, the "injected prompt" turned out to be one of its own <system-reminder> message blocks that it had included at some point. Opus 4.7 on xhigh.
  • bityard 18 hours ago
    My hypothesis is that some of this is a perceived quality drop due to "luck of the draw" when it comes to the non-deterministic nature of LLM output.

    A couple weeks ago, I wanted Claude to write a low-stakes personal productivity app for me. I wrote an essay describing how I wanted it to behave and I told Claude pretty much, "Write an implementation plan for this." The first iteration was _beautiful_ and was everything I had hoped for, except for a part that went in a different direction than I was intending because I was too ambiguous in how to go about it.

    I corrected that ambiguity in my essay but instead of having Claude fix the existing implementation plan, I redid it from scratch in a new chat because I wanted to see if it would write more or less the same thing as before. It did not--in fact, the output was FAR worse even though I didn't change any model settings. The next two burned down, fell over, and then sank into the swamp but the fourth one was (finally) very much on par with the first.

    I'm taking from this that it's often okay (and probably good) to simply have Claude re-do tasks to get a higher-quality output. Of course, if you're paying for your own tokens, that might get expensive in a hurry...

    • coffeefirst 14 hours ago
      This is my theory too. There’s a predictable cycle where the models “get worse.” They probably don’t. A lot of people just take a while to really hit hard against the limitations.

      And once you get unlucky you can’t unsee it.

    • zormino 5 hours ago
      I also think some of this stems from the default 1M context window. Performance starts to degrade as context size increases, and each token over (I think the level is) 400k counts more towards your usage limit. With a 1M default context size, if people aren't carefully managing context (which they shouldn't ever have to in an ideal world), they will notice somewhat degraded performance and increased token usage regardless.
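      As a toy illustration of that kind of tiered accounting (the 400k threshold is my guess, and the 2x over-threshold multiplier is purely hypothetical, not Anthropic's actual billing rule):

```python
def weighted_usage(context_tokens: int,
                   threshold: int = 400_000,
                   over_mult: float = 2.0) -> float:
    """Toy model of tiered usage accounting: tokens beyond `threshold`
    count `over_mult` times as much toward the usage limit. All numbers
    here are illustrative, not Anthropic's real rules."""
    base = min(context_tokens, threshold)
    over = max(context_tokens - threshold, 0)
    return base + over * over_mult

# A 700k-token context counts like 400k + 2 * 300k = 1,000k "effective" tokens.
print(weighted_usage(700_000))  # 1000000.0
```

      Under any scheme shaped like this, a large default context quietly burns your limits faster than the raw token count suggests.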
    • skirmish 15 hours ago
      So will we have to do what image generation people have been doing for ages: generate 50 versions of output for the prompt, then pick the best manually? Anthropic must be licking its figurative chops hearing this.
      • motoroco 14 hours ago
        I have to agree with OP; in my experience it is usually more productive to start over than to try correcting output early on. Deeper into a project, it gets a bit harder to pull off a switch. I sometimes fork my chats before attempting a correction so that I can resume the original just in case (yes, I know you can double-tap Esc, but the restoration has failed for me a few times in the past and now I generally avoid it).
    • afro88 9 hours ago
      I can't remember what the technique is called, but back in the GPT-4 days there was a paper about having the model make a number of attempts at responding to a prompt, with a final pass where it picks the best one. I believe this is part of how the "Pro" GPT variant works, and Cursor also supports this in a way (though I'm not sure if the automatic pick of the best one at the end is part of it; I never tried).
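      A minimal sketch of that pattern, sometimes called best-of-n sampling with a judge (`generate` and `judge` are hypothetical stand-ins for model calls, not any real API):

```python
import random

def generate(prompt: str) -> str:
    """Hypothetical stand-in for one sampled model completion."""
    return f"candidate {random.random():.3f} for: {prompt}"

def judge(prompt: str, candidate: str) -> float:
    """Hypothetical stand-in for a scoring pass (e.g. a second model call)."""
    return random.random()

def best_of_n(prompt: str, n: int = 5) -> str:
    # Sample n independent candidates, then keep the one the judge scores highest.
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: judge(prompt, c))

print(best_of_n("summarize this bug report", n=3))
```

      The obvious cost is n times the tokens per answer, which is presumably why it's reserved for "Pro"-tier products.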
    • voxgen 5 hours ago
      I have found Claude to be especially unpredictable. I've mostly switched to GPT-5.4 now - although it's slightly less capable, it's massively more reliable.
    • varispeed 2 hours ago
      I think they are routing to cheaper models that present themselves as, e.g., Opus. I now add things to my prompts to check that I am not dealing with an impostor; if it answers incorrectly, I terminate the session and start again. Anthropic should be audited for this.
    • billywhizz 13 hours ago
      you probably could have written the low stakes productivity app in a fraction of the time you wasted on this.
      • afro88 9 hours ago
        Or learnt to use an existing one.

        I vibed a low stakes budgeting app before realising what I actually needed was Actual Budget and to change a little bit how I budget my money.

    • gilrain 17 hours ago
      > My hypothesis is that some of this a perceived quality drop due to "luck of the draw" where it comes to the non-deterministic nature of [LLM] output.

      I think you must have learned that they’re more nondeterministic than you had thought, but then wrongly connected your new understanding to the recent model degradation. Note: they’ve been nondeterministic the whole time, while the widely-reported degradation is recent.

      • bityard 16 hours ago
        Er, no, I am fully aware that LLMs have always been non-deterministic.
        • gilrain 16 hours ago
          Your argument seems to be that a statistically improbable number of people all experienced randomly poor outputs, leading to a mere misperception of model degradation… but this is not supported by reality, in which a different cause was found, so I was trying to connect your dots.
          • zamadatix 15 hours ago
            Not everyone is reporting, and the number of users is not constant. On the former, the noisiest will always be those who experience an issue; on the latter, there are more people than ever using Claude Code regularly.

            Combine these things in the strongest interpretation instead of an easy-to-attack one, and it's very reasonable to posit that a critical mass has been reached where enough people report issues that others try their own investigations, while the negative outliers get the most online attention.

            I'm not convinced this is the story (or, at least the biggest part of it) myself but I'm not ready to declare it illogical either.

          • bityard 16 hours ago
            No, that is not my argument, in fact I don't have any argument whatsoever. It was just a plausible observation that I felt like sharing. There's nothing further to read into it, I don't have a horse in this race.
          • furyofantares 16 hours ago
            Not really, they said "some of this a perceived quality drop". That's almost certainly correct, that _some_ of it is that.

            When everyone's talking about the real degradation, you'll also get everyone who experiences "random"[1] degradation thinking they're experiencing the same thing, and chiming in as well.

            [1] I also don't think we're talking the more technical type of nondeterminism here, temperature etc, but the nondeterminism where I can't really determine when I have a good context and when I don't, and in some cases can't tell why an LLM is capable of one thing but not another. And so when I switch tasks that I think are equally easy and it fails on the new one, or when my context has some meaningless-to-me (random-to-me) variation that causes it to fail instead of succeed, I can't determine the cause. And so I bucket myself with the crowd that's experiencing real degradation and chime in.

      • pydry 17 hours ago
        I wonder how well the "good" versions worked if you threw awkward edge cases at it.
  • bauerd 17 hours ago
    >On March 4, we changed Claude Code's default reasoning effort from high to medium to reduce the very long latency—enough to make the UI appear frozen—some users were seeing in high mode

    Instead of fixing the UI they lowered the default reasoning effort from high to medium? And they "traced this back" because they "take reports about degradation very seriously"? Extremely hard to give them the benefit of the doubt here.

    • bcherny 17 hours ago
      Hey, Boris from the team here.

      We did both -- we did a number of UI iterations (eg. improving thinking loading states, making it clearer how many tokens are being downloaded, etc.). But we also reduced the default effort level after evals and dogfooding. The latter was not the right decision, so we rolled it back after finding that the UX iterations were insufficient (people didn't understand to use /effort to increase intelligence, and often stuck with the default -- we should have anticipated this).

      • big_toast 15 hours ago
        Having a "Recovery Mode"/"Safe Boot" flag to disable our configurations (or progressively re-enable them) to see how Claude Code responds would be nice. Sometimes I get worried that some old flag I set is breaking things. Maybe the flag already exists? I tried `claude doctor` but it wasn't quite the solution.

        For instance:

        Is Haiku supposed to hit a warm system-prompt cache in a default Claude code setup?

        I had `DISABLE_TELEMETRY=1` in my env and found the haiku requests would not hit a warm-cached system prompt. E.g. on first request just now w/ most recent version (v2.1.118, but happened on others):

        w/ telemetry off - input_tokens:10 cache_read:0 cache_write:28897 out:249

        w/ telemetry on - input_tokens:10 cache_read:24344 cache_write:7237 out:243

        I used to think having so many users was leading to people hitting a lot of edge cases, 3 million users is 3 million different problems. Everyone can't be on the happy path. But then I started hitting weird edge cases and started thinking the permutations might not be under control.

      • krade 13 hours ago
        Off topic, but I'm hoping you'll maybe see this. There's been an issue with the VS Code extension that makes it pretty much impossible to use (PreToolUse can't intercept permission requests anymore, and using PermissionRequest hooks always opens the diff viewer and steals focus):

        https://github.com/anthropics/claude-code/issues/36286 https://github.com/anthropics/claude-code/issues/25018

      • EugeneOZ 15 hours ago
        > people didn't understand to use /effort to increase intelligence, and often stuck with the default -- we should have anticipated this

        UI is UI. It is naive to build some UI and expect that users will "just magically" find out they should use it, in a terminal of all places.

      • abtinf 14 hours ago
        You didn’t anticipate most people stick with defaults?
        • bcherny 6 hours ago
          We anticipated the default would be the best option for most people. We were wrong, so we reverted the default.
      • taytus 13 hours ago
        “After evals and dogfooding”? Couldn’t this have been done before releasing the model? We are paying $200/month to beta-test the software for you.
    • stingraycharles 9 hours ago
      Yeah, this is so silly.

      Anthropic: removes thinking output

      Users: see long pauses, complain

      Anthropic: better reduce thinking time

      Users: wtf

      To me it really, really seems like Anthropic is trying to undo the transparency they always had around reasoning chains, and a lot of issues are due to that.

      Removing thinking blocks from the convo after 1 hour of being inactive without any notice is just the icing on the cake, whoever thought that was a good idea? How about making “the cache is hot” vs “the cache is cold” a clear visual indicator instead, so you slowly shape user behavior, rather than doing these types of drastic things.

    • sekai 6 hours ago
      > Instead of fixing the UI they lowered the default reasoning effort parameter from high to medium? And they "traced this back" because they "take reports about degradation very seriously"? Extremely hard to give them the benefit of doubt here.

      They had droves of Claude devs vehemently defending and gaslighting users when this started happening

  • karsinkk 17 hours ago
    " Combined with this only happening in a corner case (stale sessions) and the difficulty of reproducing the issue, it took us over a week to discover and confirm the root cause"

    I don't know about others, but sessions that are idle > 1h are definitely not a corner case for me. I use Claude Code for personal work, and most of the time I'm making it do a task which could take, say, ~10 to 15 minutes. Note that I spend a lot of time back and forth with the model planning this task first before I ask it to execute it. Once the execution starts, I usually step away for a coffee break, or switch to Codex to work on some other project and follow similar planning and execution with it. There's a very high chance that it takes me > 1h to come back to Claude.

    • slashdave 14 hours ago
      It's likely a corner case for their developers. One of the dangers of working on a product is assuming users behave like you do.
    • o10449366 16 hours ago
      Yeah, and that statement also says something about their testing rigor if they make a change that big without thoroughly testing the edge case they're modifying.
  • rcarmo 1 hour ago
    Actually, I think their deeper problems are twofold:

    - Claude Code is _vastly_ more wasteful of tokens than anything else I've used. The harness is just plain bad. I use pi.dev and created https://github.com/rcarmo/piclaw, and the gaps are huge -- even the models through Copilot are incredibly context-greedy when compared to GPT/Codex

    - 4.7 can be stupidly bad. I went back to 4.6 (which has always been risky to use for anything reliable, but does decent specs and creative code exploration) and Codex/GPT for almost everything.

    So there is really no reason these days to pay either their subscription or their insanely high per-token price _and_ get bloat across the board.

  • arkariarn 17 hours ago
    I see some Anthropic Claude Code people are reading the comments. A day or two ago I watched a video by Theo (t3.gg) on whether Claude got dumber. Even though he was really harsh on Anthropic and said some mean stuff, I thought some of the points he raised about Claude Code were quite apt, especially when it comes to harness bloat. I really hope the new features stop now and there is a real hard push for polish and optimization. Otherwise I think a lot of people will start exploring less bloated, more optimized alternatives. Focus on making the harness better and less token-consuming.

    https://youtu.be/KFisvc-AMII?is=NskPZ21BAe6eyGTh

    • Retr0id 16 hours ago
      Everything else aside, their brief "experiment" with removing CC support from the Pro plan got me seriously considering other options. I've been wary of vendor lock-in the whole time, but it was a useful reminder. (opencode+openrouter will probably be my first port of call)
      • wilj 16 hours ago
        I'm 3 weeks into switching from CC to OpenCode, and in some ways it is far superior to CC right out of the box, and I've maybe burned $200 in tokens to make a private fork that is my ultimate development and personal agent platform. Totally worth it.

        Still use CC at work because team standards, but I'd take my OpenCode stack over it any day.

        • swingboy 10 hours ago
          I find OpenCode vastly superior. The only thing missing is Vim mode, but I saw a fork where someone implemented it. I really like being able to click on a previous message I sent to revert to that point in the conversation. You can revert in CC by pressing Escape twice, but the “menu” it takes you to for picking the message is terrible because it only shows your messages. Also, expanding subagent/tools/thinking/etc. blocks is super intuitive in OpenCode, whereas CC’s view when you press Ctrl+O is also terrible and hard to understand at first glance.
        • solarkraft 15 hours ago
          I’m in the process of doing this as well - hackability is such a massive moat.

          Care to share what you changed, maybe even the code?

          • wilj 15 hours ago
            I've got to do some cleanup before sharing (yay vibe coding) but the big things I've changed so far:

            1) Curated a set of models I like and heavily optimized all possible settings, per agent role and even per skill (had to really replumb a lot of stuff to get it as granular as I liked)

            2) Ported from sqlite to postgresql, with heavily extended schema. I generate embeddings for everything, so every aspect of my stack is a knowledge graph that can be vector searched. Integrated with a memory MCP server and auditing tools so I can trace anything that happens in the stack/cluster back to an agent action and even thinking that was related to the action. It really helps refine stuff.

            3) Tight integration of Gitea server, k3s with RBAC (agents get their own permissions in the cluster), every user workspace is a pod running opencode web UI behind Gitea oauth2.

            4) Codified structure of `/projects/<monorepo>/<subrepos>` with a simpler browser so non-technical family members can manage their work more easily (agents handle all the management and there are sidecars handling all gitops, transparent to the user)

            5) Transparent failover across providers with cooldown by making model definitions linked lists in the config, so I can use a handful of subscriptions that offer my favorite models, and fail over from one to the next as I hit quota/rate limits. This has really cut my bill down lately, along with skipping OpenRouter for my favorite models and going direct to Alibaba and Xiaomi so I can tailor caching and stuff exactly how I want.

            6) Integrated filebrowser, a fork of the Milkdown Crepe markdown editor, and the CodeMirror editor so I don't even need an IDE anymore. I just work entirely from the OpenCode web UI on whatever device is nearest at the moment. Yesterday I added support for using Gemma 4 locally on CPU from my phone while waiting in line at a store.

            Those are the big ones off the top of my head. I'm sure there's more; I've probably made a few hundred other changes, it just evolves as I go.
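
            Roughly, the failover idea in (5) can be sketched like this (a simplified illustration with made-up provider names and cooldown values, not the actual config):

```python
import time

class Provider:
    def __init__(self, name, nxt=None):
        self.name = name
        self.next = nxt            # linked-list style: fall through to this one
        self.cooldown_until = 0.0  # monotonic timestamp when this provider is usable again

def pick_provider(head, now=None):
    """Walk the chain and return the first provider not cooling down, or None."""
    now = time.monotonic() if now is None else now
    node = head
    while node is not None:
        if node.cooldown_until <= now:
            return node
        node = node.next
    return None  # everything is rate-limited right now

def mark_rate_limited(provider, cooldown_s=300, now=None):
    """Put a provider on cooldown after hitting its quota/rate limit."""
    now = time.monotonic() if now is None else now
    provider.cooldown_until = now + cooldown_s

# Chain: primary subscription -> secondary subscription -> direct API.
chain = Provider("primary", Provider("secondary", Provider("direct-api")))
assert pick_provider(chain, now=0).name == "primary"
mark_rate_limited(chain, cooldown_s=300, now=0)
print(pick_provider(chain, now=0).name)    # falls through to "secondary"
print(pick_provider(chain, now=301).name)  # cooldown expired, back to "primary"
```

            The nice property of the linked-list shape is that each model definition only needs to know its own fallback, so reordering or inserting providers is a one-line config change.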

      • 2001zhaozhao 16 hours ago
        The solution IMO is to switch to an agent harness wrapper solution that uses CLI-wrapping or ACP to connect to different coding agents. This is the only way that works across OpenAI, Claude and Gemini.

        There are a few out there (latest example is Zed's new multi-agent UI), but they still rely on the underlying agent's skill and plugin system. I'm experimenting with my own approach that integrates a plugin system that can dynamically change the agent skillset & prompts supplied via an integrated MCP server, allowing you to define skills and workflows that work regardless of the underlying agent harness.

    • lanthissa 17 hours ago
      Never, ever forget Theo's GPT-5 hype video and him then having to walk it back.

      It's very clear that there's money or influence changing hands behind the scenes between certain content creators, The Information, and OpenAI.

    • whalesalad 17 hours ago
      literally just `git reset --hard <random hash from 3 months ago>` would fix this
      • willis936 16 hours ago
        That implies it's broken. Juicing revenue and slashing opex at the expense of brand and customer retention is the feature.
  • data-ottawa 9 hours ago
    I think most frustrating is the system prompt issue after the postmortem from September[1].

    These bugs have all of the same symptoms: undocumented model regressions at the application layer, and engineering cost optimizations that resulted in real performance regressions.

    I have some follow up questions to this update:

    - Why didn't September's "Quality evaluations in more places" catch the prompt change regression, or the cache-invalidation bug?

    - How is Anthropic using these satisfaction questions? My own analysis of my Claude logs showed strong material declines in satisfaction here, and I always answer those surveys honestly. Can you share what the data looked like and whether you were using it to identify some of these issues?

    - There was no refund or comped tokens in September. Will there be some sort of comp to affected users?

    - How should subscribers of Claude Code trust that Anthropic side engineering changes that hit our usage limits are being suitably addressed? To be clear, I am not trying to attribute malice or guilt here, I am asking how Anthropic can try and boost trust here. When we look at something like the cache-invalidation there's an engineer inside of Anthropic who says "if we do this we save $X a week", and virtually every manager is going to take that vs a soft-change in a sentiment metric.

    - Lastly, when Anthropic changes Claude Code's prompt, how much performance against the stated Claude benchmarks are we losing? I actually think this is an important question to ask, because users subscribe to the model's published benchmark performance and are sold a different product through Claude Code (as other harnesses are not allowed).

    [1] https://www.anthropic.com/engineering/a-postmortem-of-three-...

  • Robdel12 18 hours ago
    Wow, bad enough for them to actually publish something and not cryptic tweets from employees.

    Damage is done for me though. Even just one of these things (messing with adaptive thinking) is enough for me to not trust them anymore. And then their A/B testing this week on pricing.

    • saghm 18 hours ago
      The A/B testing is by far the most objectionable thing from them so far in my opinion, if only because of how terrible it would be for something like that to be standard for subscriptions. I'd argue that it's not even A/B testing of pricing but silently giving a subset of users an entirely different product than they signed up for; it would be like if 2% of Netflix customers had full-screen ads pop up and cover the videos randomly throughout a show. Historically the only thing stopping companies from extraordinarily user-hostile decisions has been public outcry, but limiting it to a small subset of users seems like it's intentionally designed to try to limit the PR consequences.
      • lifthrasiir 17 hours ago
        The best possible interpretation I can imagine is that Anthropic just wanted to measure how much value Claude Code has for Pro users and didn't mean to change the plan itself (so those users would get CC as a "bonus"), but even that is questionable to start with.
    • polishdude20 13 hours ago
      Bruce here from the Twitter team.

      I finally got fired.

    • xpe 6 hours ago
      People come at this with all kinds of life experience. The above notion of trust to me is quaint and simplistic. I suggest another way to frame trust as a more open ended question:

          To what degree do I predict another person/org will give me what I need and why?
      
      This shifts "trust" away from all or nothing and it gets me thinking about things like "what are the moving parts?" and "what are the incentives" and "what is my plan B?".

      In my life experience, looking back, when I've found myself swinging from "high trust" to "low trust" the change was usually rooted in my expectations; it was usually rooted in me having a naive understanding of the world that was rudely shattered.

      Will you force trust to be a bit? Or can you admit a probability distribution? Bits (true/false or yes/no or trust/don't trust) thrash wildly. Bayesians update incrementally: this is (a) more pleasant; (b) more correct; (c) more curious; (d) easier to compare notes with others.

    • mannanj 18 hours ago
      so who do you trust and go to? (NotClearlySo)OpenAI?
      • simlevesque 18 hours ago
        I went with MiniMax. The token plans are more than I currently need: 4,500 messages per 5 hours, 45,000 messages per week for $40. I can run multiple agents, and they don't think for 5-10 minutes like Sonnet did. Also, I can finally see the thinking process, while Anthropic chose to hide it all from me.

        I'm using Zed and Claude Code as my harnesses.

      • carlgreene 18 hours ago
        I "subconsciously" moved to codex back in mid Feb from CC and it's been so freaking awesome. I don't think it's as good at UI, but man is it thorough and able to gather the right context to find solutions.

        I use "subconsciously" in quotes because I don't remember exactly why I did it, but it aligns with the degradation of their service so it feels like that probably has something to do with it even though I didn't realize it at the time.

        • GenerWork 17 hours ago
          Anthropic definitely takes the cake when it comes to UI related activities (pulling in and properly applying Figma elements, understanding UI related prompts and properly executing on it, etc), and I say this as a designer with a personal Codex subscription.
        • cageface 14 hours ago
          Codex does better if you ask it to take screenshots and critique its own UI work and iterate. It rarely one-shots something I like but it can get there in steps.
        • snissn 18 hours ago
          it's been frustrating how bad it is at UI. I'm starting to test out using their image2 for UI and then handing it to codex to build out the images into code and I'm impressed and relieved so far
        • cmrdporcupine 16 hours ago
          Codex isn't great at UI, but you might find Gemini is competent enough as an adjunct. I've had some luck with that.
      • Robdel12 18 hours ago
        At the moment, yeah. If Google ever figures out how to build an agentic model, I would use them as well.

        However you feel about OpenAI, at least their harness is actually open source and they don’t send lawyers after oss projects like opencode

        • IncreasePosts 16 hours ago
          Is Gemini cli not an agentic model? Or are you just saying it's built poorly? Gemini 2.5 didn't really work for me but Gemini 3 seems fairly solid
          • cmrdporcupine 16 hours ago
            Gemini fares poorly at tool use, even in its own CLI and even in Antigravity. It gets into a mess just editing source files; it's tragic, because it's actually not a bad model otherwise.
            • rjh29 5 hours ago
              It frequently fails to apply its diffs at first but it always succeeds eventually for me. I'm happy with it. I understand it is slower than other models but it also costs barely anything per month.
      • parliament32 16 hours ago
        Self-hosted models are the one true path.
      • bensyverson 18 hours ago
        Anecdotally, I know many people who have supplemented Claude with Codex, and are experimenting with models such as GLM 5.1, Kimi, Qwen, etc.
      • irthomasthomas 17 hours ago
        I like chutes because they always use the full weights, and prompts are encrypted with TEE.
  • MrOrelliOReilly 15 hours ago
    IMO this is the consequence of a relentless focus on feature development over core product refinement. I often have the impression that Anthropic would benefit from a few senior product people. Someone needs to lend them a copy of “Escaping the Build Trap.” Just because we _can_ rapidly add features now doesn’t mean we should.

    PS I’m not referencing a well-known book to suggest the solution is trite product groupthink, but good product thinking is a talent separate from good engineering, and Anthropic seems short on the latter recently

    • anonyfox 50 minutes ago
      Essentially they should hire a few of the old-school product guys from Apple. Beat me to it, but the obsession with UX and quality from earlier Apple is exactly what they urgently need, instead of tech folks trying to engineer themselves into complicated rabbit holes and shenanigans.
    • slashdave 14 hours ago
      They need to keep up with demand, because compute resources are clearly limited. That means they have no choice but to add these features, or things break, or they have to stop taking new customers. All of those options are unacceptable.
      • cmrdporcupine 14 hours ago
        They're losing customers because of quality concerns. Pausing development and focusing 100% on quality is how you fix that.

        That said, that may not have been obvious at all in the Jan/Feb time frame when they got a wave of customers due to ethical concerns.

        • slashdave 13 hours ago
          No. Pausing development does not make compute (you know, physical machines?) appear out of thin air.
          • nozzlegear 8 hours ago
            On the other hand, sacrificing your paying customers at the altar of compute and tokens does not make money appear out of thin air.
    • cmrdporcupine 14 hours ago
      I think they've dug themselves into a complexity trap. Beyond the stochastic nature of the models themselves, I don't think they're able to reason about their software anymore. Too many levers, too many dials, and code that likely nobody understands.

      But worse, based on the pronouncements of Dario et al I suspect management is entirely unsympathetic because they believe we (SWEs) are on the chopping block to be replaced. And intimation that putting guard rails around these tools for quality concerns ... I'm suspecting is being ignored or discouraged.

      In the end, I feel like Claude Code itself started as a bit of a science experiment and it doesn't smell to me like it's adopted mature best practices coming out of that.

      • qweiopqweiop 3 hours ago
        I agree. My real fear if this is how the company works, how are systems with real implications (e.g. defense) being treated.
    • joshribakoff 13 hours ago
      They had like 100 devs making 600k at one point. The issue is certainly not lack of talent. More like, they insist on forcing the vibe coding narrative. Some candidates are refusing interview requests accordingly.
      • MrOrelliOReilly 8 hours ago
        Ugh wrote “latter” and meant “former.” I didn’t mean lack of eng talent, but product
  • hansmayer 1 hour ago
    A suggestion to Anthropic: just start charging the real price for your software. Of course you have to dumb it down when the $200 tier in reality produces 5-10 thousand dollars in monthly costs when used by people who know how to max it out. So then you come up with creative nonsense like "adaptive thinking" when your tool is sometimes working and sometimes outright not (the irony of "intelligent tools" not "thinking" aside). Of course this would kind of ruin your current value proposition, as charging the actual price would make your core idea of making large swaths of the skilled population unemployed unfeasible, but I am sure if you feed it into Claude it will find some points for and against, just like how Karpathy uses his LLM of choice to excrete his blog posts.
  • puppystench 17 hours ago
    The Claude UI still only has "adaptive" reasoning for Opus 4.7, making it functionally useless for scientific/coding work compared to older models (as Opus 4.7 will randomly stop reasoning after a few turns, even when prompted otherwise). There's no way this is just a bug and not a choice to save tokens.
    • mattew 16 hours ago
      It was odd that there was no mention of the forced adaptive reasoning in the article. My guess is they don't have enough compute to do anything else here.
    • rzk 5 hours ago
      They are forcing users onto adaptive thinking now and deprecating thinking.type: "enabled" and budget_tokens. But the web interface (claude.ai) does not support specifying the effort parameter.
  • huksley 5 hours ago
    Just add this; it works better than Opus 4.7. In ~/.claude/settings.json:

        {
          "model": "claude-opus-4-6",
          "fastMode": false,
          "effortLevel": "high",
          "alwaysThinkingEnabled": true,
          "autoCompactWindow": 700000
        }

  • leobuskin 8 hours ago
    This usage reset you did on April 23 will not mitigate the struggle we’ve experienced. I didn’t even notice it yesterday; I checked this morning and my usage had come down from 25% of the weekly limit to 7%. What is this? I didn’t have problems for two months like many others did (maybe my CC habits helped), but the last two weeks were very painful. Make a proper apology, guys. For many users this “reset” could land in the first days of their week; tell me you thought about that.
  • cedws 17 hours ago
    >On April 16, we added a system prompt instruction to reduce verbosity

    In practice I understand this would be difficult but I feel like the system prompt should be versioned alongside the model. Changing the system prompt out from underneath users when you've published benchmarks using an older system prompt feels deceptive.

    At least tell users when the system prompt has changed.

    • elAhmo 16 hours ago
      It's also kinda funny that they have to rely on the system prompt to control verbosity itself.
      • esafak 10 hours ago
        It's cheaper than retraining the model.
        • verve_rat 8 hours ago
          So? 4.7.1, 4.7.2, etc. makes sense for versioning system prompts.
  • kamranjon 14 hours ago
    This black box approach that large frontier labs have adopted is going to drive people away. To change fundamental behavior like this without notifying them, and only retroactively explaining what happened, is the reason they will move to self-hosting their own models. You can't build pipelines, workflows and products on a base that is just randomly shifting beneath you.
  • nickdothutton 17 hours ago
    I presume they don't yet have a cohesive monetization strategy, and this is why there is such huge variability in results on a weekly basis. It appears that Anthropic are skipping from one "experiment" to another. As users we only get to see the visible part (the results). Can't design a UI that indicates the software is thinking vs frozen? Does anyone actually believe that?
    • slashdave 14 hours ago
      Compute is limited worldwide. No amount of money can make these compute platforms appear overnight. They are buying time because the only other option is to stop accepting customers.
      • joefourier 9 hours ago
        They would honestly have been better off refusing customers if compute is so limited. Degrading the quality leads to customers leaving in the short term, and ruins their long term reputation.

        But in either case, if compute is so limited, they’ll have to compete with local coding agents. Qwen3.6-27B is good enough to beat having to wait until 5PM for your Claude Code limit to reset.

  • anonyfox 1 hour ago
    I refuse to believe that caching tiers longer than 1 hour would be impossible to build and use transparently, avoiding all this complexity to begin with. Nor that it would be that expensive to maintain in 2026, when the bulk of costs are on inference anyway, costs which would even be reduced by the occasional longer-lived cache hit.
  • lherron 15 hours ago
    Are they also going to refund all the extra usage api $$$ people spent in the last month?

    Also I don’t know how “improving our Code Review tool” is going to improve things going forward, two of the major issues were intentional choices. No code review is going to tell them to stop making poor and compromising decisions.

    • zem 13 hours ago
      this is one reason i will not pay for extra usage - it is an incentive for them to be inefficient, or at least to not spend any effort on improving my token usage efficiency.
    • dallen33 15 hours ago
      No, they will not.
    • system2 5 hours ago
      I stopped using it for nearly a month because of the performance degradation. I paid for the whole month. Wasted money.
    • FireBeyond 11 hours ago
      Even for those of us on plans, who got barely any use out of them because we'd blow through our 5-hour and 1-week usage limits, it's also unlikely; after all, they have an out in "your usage limits are guaranteed to be 5x of Pro users" (who are also being screwed).

      Of course, all their vibe coding is being done with effectively infinite tokens, so...

  • whh 56 minutes ago
    Thanks Anthropic, and a big thanks to your Claude Code team for the customer obsession here. I've just noticed the Command + Backspace fix and even the nice little Ctrl + y addition as a fix for accidents.

    I really appreciate these little touches.

  • vintagedave 15 hours ago
    > Today we are resetting usage limits for all subscribers.

    I asked for this via support, got a horrible corporate reply thread, and eventually downgraded my account. I'm using Codex now as we speak. I could not use Claude any more, I couldn't get anything done.

    Will they restore my account usage limits? Since I no longer have Max?

    Is that one week usage restored, or the entire buggy timespan?

  • exabrial 9 hours ago
    Last I tried 4.7, it was bad. Like ChatGPT bad: it changed stuff it wasn’t supposed to, hallucinated code, forgot information, missed simple things, didn’t catch mistakes. And it burned through tokens like crazy.

    I’ll stay on 4.6 for a while. Seems to be better. What’s frustrating, though, is that you cannot rely on these tools. They are constantly tinkering with and changing things, and there’s no option to opt out.

    • Aperocky 9 hours ago
      It seems like there is no concept of deployment, or even A/B testing: whatever works on (presumably) a Claude employee's laptop for the hour they spent testing it ships immediately to everyone.

      I mean, yes, even testing in production with some of your customers is better than... testing with ALL of your customers?

  • skeledrew 14 hours ago
    Some of these changes and effects seriously affect my flow. I'm a very interactive Claude user, preferring to provide detailed guidance for my more serious projects instead of just letting them run. And I have multiple projects active at once, with some being untouched for days at a time. Along with the session limits this feels like compounding penalties as I'm hit when I have to wait for session reset (worse in the middle of a long task), when I take time to properly review output and provide detailed feedback, when I'm switching among currently active projects, when I go back to a project after a couple days or so,... This is honestly starting to feel untenable.
  • dataviz1000 18 hours ago
    This is the problem with co-opting the word "harness". What agents need is a test harness but that doesn't mean much in the AI world.

    Agents are not deterministic; they are probabilistic. If the same agent is run it will accomplish the task a consistent percentage of the time. I wish I was better at math or English so I could explain this.

    I think they call it an eval, but developers don't discuss that too much. All they discuss is how frustrated they are.

    A prompt can solve a problem 80% of the time. Change a sentence and it will solve the same problem 90% of time. Remove a sentence it will solve the problem 70% of the time.

    It is so friggen' easy to set up -- stealing the word from AI sphere -- a TEST HARNESS.

    Regressions caused by changes to the agent, where words are added, changed, or removed, are extremely easy to quantify. It isn’t pass/fail. It’s whether the agent still solves the problem at the same percentage of the time it consistently has.
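
    A minimal sketch of such a harness, in Python. The agents here are hypothetical stand-ins that simulate fixed solve rates (the 80%/70% numbers are made up for illustration); the point is the measurement loop, not the agents:

```python
import random

def run_trials(agent, task, n=100):
    """Run a nondeterministic agent n times on a task; return its empirical pass rate."""
    passes = sum(1 for _ in range(n) if agent(task))
    return passes / n

# Hypothetical stand-ins for real agent runs: the solve rates are simulated.
def agent_v1(task):
    return random.random() < 0.80

def agent_v2(task):  # e.g. the same agent after a one-sentence prompt change
    return random.random() < 0.70

random.seed(0)
rate_v1 = run_trials(agent_v1, "fix the bug", n=1000)
rate_v2 = run_trials(agent_v2, "fix the bug", n=1000)
print(f"v1: {rate_v1:.2f}  v2: {rate_v2:.2f}  delta: {rate_v1 - rate_v2:+.2f}")
```

    In practice `agent(task)` would invoke the real agent and check its output with a verifier; what you compare across prompt versions is the pass percentage over many runs, never a single pass/fail.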

    • arjie 17 hours ago
      The word is not co-opted. A harness is just supportive scaffolding to run something. A test harness is scaffolding to run tests against software, a fuzz harness is scaffolding to run a fuzzer against the software, and so on. I've seen it being used in this manner many times over the past 15 years. It's the device that wraps your software so you can run it repeatedly with modifications of parameters, source code, or test condition.
      • dataviz1000 17 hours ago
        > A harness is just supportive scaffolding to run something.

        Thank you for the perfect explanation.

        Last week, in my confusion about the word (Anthropic was using test, eval, and harness in the same sentence, so I thought Anthropic had made a test harness), I asked Google "in computer science, what is a harness". It responded only discussing test harnesses, which solidified my thinking that that's what it is.

        I wish Google had responded as clearly as you did. In my defense, we don't know if we understand something unless we discuss it.

    • thesz 17 hours ago
      To have some confidence in the consistency of results (a p-value), one has to start from a cohort of around 30, if I remember correctly. That is a 1.5-order-of-magnitude increase in the computing power needed to find (or rule out) consistent changes in an agent's behavior.
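
      A rough back-of-envelope illustration of why small cohorts aren't enough (normal approximation; the 80% vs. 90% pass rates are hypothetical numbers, not measurements):

```python
import math

def pass_rate_se(p, n):
    """Standard error of an empirical pass rate p estimated from n runs."""
    return math.sqrt(p * (1 - p) / n)

# Crude two-sided criterion: an 80% vs. 90% gap (0.10) is only trustworthy
# once it exceeds roughly twice the combined standard error of both estimates.
detectable = {}
for n in (10, 30, 100, 300):
    se = math.sqrt(pass_rate_se(0.8, n) ** 2 + pass_rate_se(0.9, n) ** 2)
    detectable[n] = 0.10 > 2 * se
    print(f"n={n:3d}  combined SE={se:.3f}  10-point regression detectable: {detectable[n]}")
```

      Under these toy numbers, even 30 runs per variant can't reliably resolve a 10-point regression, which is exactly why eval compute adds up so fast.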
      • dataviz1000 16 hours ago
        I apologize for the potato quality of these links; however, I have been working tirelessly to wrap my head around how agents and LLM models work. They are more than just a black box.

        The first tries to answer what happens when I give the models harder and harder arithmetic problems, to the point where Sonnet will burn 200k tokens over 20 minutes. [0]

        The other is a very deep dive into the math of a reasoning model in the only way I could think to approach it, with data visualizations, seeing the computation of the model in real time in relation to all the parts.[1]

        Two things I've learned. First, an agent that will reverse engineer any website and an agent that does arithmetic behave the same way: for a given agent and task, the probability of solving the intended task is a distribution. Second, models have a blind spot: creating a red-team adversary bug-hunter agent will not surface a bug if the same model originally wrote the code.

        Understanding that, and knowing that I can verify at the end or use majority voting (MoV), using agents to automate extremely complicated tasks can be very reliable, with a quantifiable amount of certainty.

        [0] https://adamsohn.com/reliably-incorrect/

        [1] https://adamsohn.com/grpo/

  • HarHarVeryFunny 1 hour ago
    And the reason why Claude Code is so buggy ...

    https://techtrenches.dev/p/the-snake-that-ate-itself-what-cl...

  • rohansood15 3 hours ago
    If Anthropic couldn't catch these issues before people started screaming at them, do we really believe 50% of software engineering jobs are going away?
  • sscaryterry 1 hour ago
    Glad there is finally some ownership. It is a pity that this was mostly because AMD embarrassed them on GitHub. Users have been reporting these issues for weeks, but were mostly ignored.
  • foota 18 hours ago
    > On April 16, we added a system prompt instruction to reduce verbosity. In combination with other prompt changes, it hurt coding quality, and was reverted on April 20. This impacted Sonnet 4.6, Opus 4.6, and Opus 4.7.

    Claude caveman in the system prompt confirmed?

    • awesome_dude 18 hours ago
      I've recently been introduced to that plugin, love it for humour
  • lukebechtel 18 hours ago
    Some people seem to be suggesting these are coverups for quantization...

    Those who work on agent harnesses for a living realize how sensitive models can be to even minor changes in the prompt.

    I would not suspect quantization before I would suspect harness changes.

  • MillionOClock 18 hours ago
    I see the Claude team wanted to make it less verbose, but that's actually something that has bothered me since updating to Claude 4.7. What's the recommended way to change it back to being as verbose as before? This is probably a matter of preference, but I have a harder time with compact explanations and lists of points, and verbosity was originally one of the things I preferred about Claude.
  • jpcompartir 18 hours ago
    Anthropic releases used to feel thorough and well done, with the models feeling immaculately polished. It felt like using a premium product, and it never felt like they were racing to keep up with the news cycle, or reply to competitors.

    Recently that immaculately polished feel is harder to find. It coincides with the daily releases of CC, Desktop App, unknown/undocumented changes to the various harnesses used in CC/Cowork. I find it an unwelcome shift.

    I still think they're the best option on the market, but the delta isn't as high as it was. Sometimes slowing down is the way to move faster.

    • bcherny 17 hours ago
      Boris from the Claude Code team here. We agree, and will be spending the next few weeks increasing our investment in polish, quality, and reliability. Please keep the feedback coming.
      • batshit_beaver 17 hours ago
        > investment in polish, quality, and reliability

        For there to be any trust in the above, the tool needs to behave predictably day to day. It shouldn't be possible to open your laptop and find that Claude suddenly has an IQ 50 points lower than yesterday. I'm not sure how you can achieve predictability while keeping inference costs in check and messing with quantization, prompts, etc on the backend.

        Maybe a better approach would be to version both the models and the system prompts, but frequently adjust the pricing of a given combination based on token efficiency, to encourage users to switch to cheaper modes on their own. Let users choose how much they pay for a given quality of output, though.

      • pkos98 17 hours ago
        Sure, I've cancelled my Max 20 subscription because you guys prioritize cutting your costs/increasing token efficiency over model performance. I use expensive frontier labs to get the absolute best performance, else I'd use an Open Source/Chinese one.

        Frontier LLMs still suck a lot, you can't afford planned degradation yet.

      • wilj 16 hours ago
        My biggest problem with CC as a harness is that I can't trust "Plan" mode. Long running sessions frequently start bypassing plan mode and executing, updating files and stuff, without permission, while still in plan mode. And the only recovery seems to be to quit and reload CC.

        Right now my solution is to run CC in tmux and keep a 2nd CC pane with /loop watching the first pane and killing CC if it detects plan mode being bypassed. Burning tokens to work around a bug.

      • tkgally 14 hours ago
        Here's one person's feedback. After the release of 4.7, Claude became unusable for me in two ways: frequent API timeouts when using exactly the same prompts in Claude Code that I had run problem-free many times previously, and absurdly slow interface response in Claude Cowork. I found a solution to the first after a few days (add "CLAUDE_STREAM_IDLE_TIMEOUT_MS": "600000" to settings.json), but as of a few hours ago Cowork--which I had thought was fantastic, by the way--was still unusable despite various attempts to fix it with cache clearing and other hacks I found on the web.
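        For anyone hitting the same timeouts: the workaround mentioned above would look roughly like this in `settings.json` (the `env` wrapper reflects how Claude Code commonly namespaces environment variables in its settings file; check the docs for your version before relying on this exact shape):

```json
{
  "env": {
    "CLAUDE_STREAM_IDLE_TIMEOUT_MS": "600000"
  }
}
```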
      • a-dub 17 hours ago
        hm. ML people love static evals and such, but have you considered approaches that typically appear in SaaS? (Slow rollouts, org/user-constrained testing pools with staged rollouts, real-world feedback from actual usage data where privacy policy permits?)
      • rimliu 1 hour ago
        I am considering providing my feedback by no longer providing my money.
      • g4cg54g54 12 hours ago
        > Please keep the feedback coming

        if only there were a place with 9,881 pieces of feedback waiting to be triaged...

        and maybe handled not by a duplicate-bot that goes wild and just autocloses everything; just blessing some of the stuff there with a "you've been seen" label would go a long way...

        • oefrha 11 hours ago
          Common pattern when checking the Claude Code issue tracker for a bug: land on issue #12587, auto-closed as duplicate of #12043; check #12043, auto-closed as duplicate of #11657; check #11657, auto-closed as duplicate of #10645; check #10645, never got a response, or closed as not planned, or some other bullshit.
      • troupo 17 hours ago
        And you didn't invest anything in polish, quality and reliability before... why? Because for any questions people have you reply something like "I have Claude working on this right now" and have no idea what's happening in the code?

        A reminder: your vibe-coded slop required peak 68GB of RAM, and you had to hire actual engineers to fix it.

      • szmarczak 17 hours ago
        Why ban third party wrappers? All of this could've been sidestepped had you not banned them.
        • ElFitz 17 hours ago
          Because then they lose vertical integration and the extra ability it grants to tune settings to reduce costs / token use / response time for subscription users.

          Or improve performance and efficiency, if we’re generous and give them the benefit of the doubt.

          It makes sense, in a way. It means the subscription deal is something along the lines of fixed / predictable price in exchange for Anthropic controlling usage patterns, scheduling, throttling (quotas consumptions), defaults, and effective workload shape (system prompt, caching) in whatever way best optimises the system for them (or us if, again, we’re feeling generous) / makes the deal sustainable for them.

          It’s a trade-off

          • cmrdporcupine 16 hours ago
            They gained that ability to tune settings and then promptly used it in a poor way and degraded customer experience.
            • ElFitz 5 hours ago
              That’s what we see.

              It may be (but I wouldn’t know) that some of other changes not covered here reduced costs on their side without impacting users, improving the viability of their subscription model. Or maybe even improved things for users.

              I’d really appreciate more transparency on this, and not just when things fail.

              But I’ve learned my lesson. I’ve been weaning off Claude for a few weeks: cancelled my subscription three weeks ago, let it expire yesterday, and moved to both another provider and a third-party open-source harness.

          • szmarczak 16 hours ago
            Nothing you wrote makes sense. The limits exist so Anthropic isn't operating at a loss. If they can customize Claude via Code, I see no reason why they couldn't do so with other wrappers. Other wrappers can also make use of the cache.

            If you worry about a "degraded" experience, then let people choose. People won't use other wrappers if they turn out to be bad. People ain't stupid.

            • ElFitz 15 hours ago
              By imposing the use of their harness, they control the system prompt:

              > On April 16, we added a system prompt instruction to reduce verbosity. In combination with other prompt changes, it hurt coding quality, and was reverted on April 20. This impacted Sonnet 4.6, Opus 4.6, and Opus 4.7

              They can pick the default reasoning effort:

              > On March 4, we changed Claude Code's default reasoning effort from high to medium to reduce the very long latency—enough to make the UI appear frozen—some users were seeing in high mode

              They can decide what to keep and what to throw out (beyond simple token caching):

              > On March 26, we shipped a change to clear Claude's older thinking from sessions that had been idle for over an hour, to reduce latency when users resumed those sessions. A bug caused this to keep happening every turn for the rest of the session instead of just once, which made Claude seem forgetful and repetitive. We fixed it on April 10. This affected Sonnet 4.6 and Opus 4.6

              It literally is all in the post.

              I don't worry about anything though. It's not my product. I don't work for Anthropic, so I really couldn't care less about anyone else's degraded (or not) experience.

              • szmarczak 12 hours ago
                > they control the system prompt

                They control the default system prompt. You can change it if you want to.

                > They can pick the default reasoning effort

                Don't see how it's an obstacle in allowing third party wrappers.

                > They can decide what to keep and what to throw out

                That's actually a good point. However I still don't think it's an obstacle. If third party wrappers were bad, people simply wouldn't be using them.

                • ElFitz 7 hours ago
                  Evidently, all these things you just dismissed matter, else all the changes I quoted from the original post wouldn’t have affected anyone, or half as many people, or half as much. Anthropic wouldn’t have had any complaints to investigate, the article promoting this entire thread wouldn’t exist, and we wouldn’t be having this very conversation.

                  Defaults matter. A large share of people never change them (status quo bias, psychological inertia). Having control over them (and usage quotas) means Anthropic can control and fine-tune what this fixed subscription costs them.

                  And evidently (re, the original article), they tried to do so.

                  • ElFitz 5 hours ago
                    Edit: the article prompting this entire thread.
                  • szmarczak 3 hours ago
                    > Defaults matter. A large share of people never change them (status quo bias, psychological inertia). Having control over them (and usage quotas) means Anthropic can control and fine-tune what this fixed subscription costs them.

                    Allowing third party wrappers doesn't mean Claude Code would cease to exist. The opposite actually, Claude Code would be the default.

                    People dissatisfied with Code would simply use other wrappers. I call it a win-win. Don't see how Anthropic would be on a lose here, they would still retain the ability to control the defaults.

                    • ElFitz 1 hour ago
                      Except one of the major other wrappers was pi, through OpenClaw. With countless hundreds of thousands of instances running every hour on that heartbeat.

                      I have no idea what the share of OpenClaw instances running on pi was, or third-party wrappers in general, but it was obviously large enough that Anthropic decided they had to put an end to it.

                      Conversely, from the latest developments, it would seem they are perfectly fine with people running OpenClaw with Claude models through Claude Code’s programmatic interface using subscriptions.

                      But in the end, this, my take, your take, is all conjecture. We are both on the outside looking in.

                      Only the people who work at Anthropic know.

      • ankaz 17 hours ago
        [dead]
      • jpcompartir 17 hours ago
        [flagged]
    • swader999 1 hour ago
      I've noticed the same thing in my own AI assisted work. Feels like I'm moving too fast and it's easy to implement decisions quickly but they really have to be the right f--ing decisions. In the past dev was so slow so you had a lot of time to vet the hard decisions and now you don't.
    • KronisLV 17 hours ago
      > It felt like using a premium product, and it never felt like they were racing to keep up with the news cycle, or reply to competitors.

      I don't know, their desktop app felt really laggy and even switching Code sessions took a few seconds of nothing happening. Since the latest redesign, however, it's way better, snappy and just more usable in most respects.

      I just think that we notice the negative things that are disruptive more. Even with the desktop app, the remaining flaws jump out: for example, the Chat / Cowork / Code modes only show the label for the currently selected mode while the others are (not very big) icons; a colleague literally didn't notice that those modes are in the desktop app (or at least that that's where you switch to them).

    • spaniard89277 17 hours ago
      Given the price, I don't really think they're the best option. They're sloppy and competitors are catching up. I'm getting the same results with other models, and very close with Kimi, which is waaay cheaper.
    • kilroy123 16 hours ago
      I agree. It all feels so AI-slopy now.
    • OtomotO 17 hours ago
      I guess it's a bit of desperation to find a sustainable business model.

      The AI hype is dying, at least outside the silicon valley bubble which hackernews is very much a part of.

      That and all the dogfooding by slop coding their user facing application(s).

  • ctoth 17 hours ago
    > As of April 23, we’re resetting usage limits for all subscribers.

    Wait, didn't they just reset everybody's usage last Thursday, thereby syncing everybody's windows up? (Mine should have reset at 13:00 MDT) ? So this is just the normal weekly reset? Except now my reset says it will come Saturday? This is super-confusing!

    • walthamstow 17 hours ago
      The weekly reset point is different per account. I think something to do with first sign-up date. Mine is on a Tuesday.
      • schpet 17 hours ago
        mine was originally on sunday, then got moved to thursday (which i disliked), and it is still on thursday. so them resetting my weekly limit on the same day it was scheduled to reset feels like a joke.
        • throwaway2027 16 hours ago
          You need to send a new message once your limit is up to make the timer start rolling again. It sucks: I hate it when I had no need for Claude during the day but also forgot to use it, and then it shifted my reset date a day later.
          • schpet 16 hours ago
            oh! super helpful info. i was aware of that with the hourly ones, but never put it together with weekly. thank you.
  • bashtoni 9 hours ago
    The Claude Code experience is still pretty bad after upgrading. I often see

      Error: claude-opus-4-7[1m] is temporarily unavailable, so auto mode cannot determine the safety of Bash right now. Wait briefly and then try this action again. If it keeps failing, continue with other tasks that don't require this action and come back to it later. Note: reading files, searching code, and other read-only operations do not require the classifier and can still be used.
    
    The only solution is to switch out of auto mode, which now seems to be the default every time I exit plan mode. Very annoying.
  • hintymad 15 hours ago
    > On March 4, we changed Claude Code's default reasoning effort from high to medium to reduce the very long latency—enough to make the UI appear frozen—some users were seeing in high mode.

    This sounds fishy. It's easy to show users that Claude is making progress by either printing the reasoning tokens or printing some kind of progress report. Besides, "very long" is such a weasel phrase.

    • reliablereason 14 hours ago
      Right, a very simple UI thing they should have that would have prevented so much misunderstanding is a simple counter: how much usage have I used, and how much is left.

      If a message will trigger a cache recreation, the cost of that should be viewable.

  • jryio 18 hours ago
    1. They changed the default in March from high to medium, however Claude Code still showed high (took 1 month 3 days to notice and remediate)

    2. Old sessions had their thinking tokens stripped; resuming the session made Claude stupid (took 15 days to notice and remediate)

    3. System prompt to make Claude less verbose reducing coding quality (4 days - better)

    All this to say... the experience of suspecting a model is getting worse while Anthropic publicly gaslights its user base ("we never degrade model performance") is frustrating.

    Yes, models are complex and deploying them at scale given their usage uptick is hard. It's clear they are playing with too many independent variables simultaneously.

    However, you are obligated to communicate honestly with your users to manage expectations. Am I being A/B tested? When was the date of the last system prompt change? I don't need to know what changed, just that it did, etc.

    Doing this proactively would certainly match expectations for a fast-moving product like this.

    • fn-mote 18 hours ago
      > 2. Old sessions had the thinking tokens stripped, resuming the session made Claude stupid (took 15 days to notice and remediate)

      This one was egregious: after a one hour user pause, apparently they cleared the cache and then continued to apply “forgetting” for the rest of the session after the resume!

      Seems like a very basic software engineering error that would be caught by normal unit testing.
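      Purely illustrative (none of this is Anthropic's actual code, and all names are invented): the bug pattern is a clear that was meant to fire once on resume but had no latch, so it kept re-applying to every turn's fresh thinking:

```python
class Session:
    def __init__(self, prior_messages, resumed_after_long_idle=True):
        self.messages = list(prior_messages)
        self.pending_idle_clear = resumed_after_long_idle

    def strip_thinking(self):
        self.messages = [m for m in self.messages if m["type"] != "thinking"]

    def on_turn(self, new_messages, fixed=True):
        if self.pending_idle_clear:
            self.strip_thinking()  # drop pre-idle thinking
            if fixed:
                # The fix: clearing is a one-shot event on resume. Without
                # resetting the flag (the bug), thinking generated on every
                # later turn gets stripped too.
                self.pending_idle_clear = False
        self.messages += new_messages

# Demo: with the bug, only the very latest turn's thinking survives.
buggy = Session([{"type": "thinking", "id": 0}, {"type": "text", "id": 1}])
buggy.on_turn([{"type": "thinking", "id": 2}], fixed=False)
buggy.on_turn([{"type": "thinking", "id": 3}], fixed=False)
print([m["id"] for m in buggy.messages if m["type"] == "thinking"])  # [3]
```

      A test that only asserts "thinking was cleared after resume" passes in both variants; you have to assert that post-resume thinking survives later turns to catch the difference.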

    • Eridrus 18 hours ago
      To be fair to Anthropic, they did not intentionally degrade performance.

      To take the opposite side, this is the quality of software you get atm when your org is all in on vibe coding everything.

      • shrx 15 hours ago
        Are you saying dropping cache after 1 hour is not intentionally degrading performance?
        • Eridrus 10 hours ago
          Yes. Caching is a cost optimization not a response quality metric.
          • shrx 3 hours ago
            But it still degrades performance.
    • sroussey 18 hours ago
      None of these problems equate to degrading model performance. Completely different team. Degraded CC harness, sure.
      • qingcharles 18 hours ago
        Sure, but it gives the impression of degraded model performance. Especially when the interface is still saying the model is operating on "high", the same as it did yesterday, yet it is in "medium" -- it just looks like the model got hobbled.
        • sroussey 18 hours ago
          Oh, absolutely. Though changes in how the model is used are eminently more fixable than the model itself.
      • johnmaguire 18 hours ago
        Yes, but for many users, CC is the product. Especially since I'm not allowed(?) to use my own harness with my sub.
    • Philpax 18 hours ago
      > Anthropic publicly gaslights their user-base: "we never degrade model performance" is frustrating.

      They're not gaslighting anyone here: they're very clear that the model itself, as in Opus 4.7, was not degraded in any way (i.e. if you take them at their word, they do not drop to lower quantisations of Claude during peak load).

      However, the infrastructure around it - Claude Code, etc - is very much subject to change, and I agree that they should manage these changes better and ensure that they are well-communicated.

      • jryio 18 hours ago
        Degraded model performance at inference in the data center vs. stripping thinking tokens: effectively the same thing from the user's perspective.

        Sure, they didn't change the GPUs they're running, or the quantization, but if valuable information is removed, leading to models performing worse, performance was degraded.

        In the same way uptime doesn't care about the incident cause... if you're down you're down no one cares that it was 'technically DNS'.

        • sroussey 18 hours ago
          I thought these days thinking tokens sent by the model (as opposed to used internally) were just for the user's benefit. When you send the convo back, you have to strip the thinking stuff for the next turn. Or is that just local models?
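          Whether and which thinking blocks must be stripped depends on the provider's API, but the mechanical part looks roughly like this (an illustrative shape, not any provider's actual schema):

```python
def prepare_history(messages):
    # Drop thinking blocks from every assistant turn except the most
    # recent one before resending the conversation. Some APIs want all
    # prior thinking removed; check your provider's docs.
    last_assistant = max(
        (i for i, m in enumerate(messages) if m["role"] == "assistant"),
        default=None,
    )
    prepared = []
    for i, m in enumerate(messages):
        if m["role"] == "assistant" and i != last_assistant:
            content = [b for b in m["content"] if b["type"] != "thinking"]
            prepared.append({**m, "content": content})
        else:
            prepared.append(m)
    return prepared
```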
      • aszen 18 hours ago
        Claude Code is not infra; the model is the infra. They changed settings to make their models faster and probably cheaper to run too. Honestly, with adaptive thinking it no longer matters what model it is if you can dynamically make it do less or more work.
  • PeakScripter 1 hour ago
    They should really test everything thoroughly and then make it available to general public to avoid these issues!!
  • behat 16 hours ago
    This is a very interesting read on failure modes of AI agents in prod.

    Curious about this section on the system prompt change: >> After multiple weeks of internal testing and no regressions in the set of evaluations we ran, we felt confident about the change and shipped it alongside Opus 4.7 on April 16. As part of this investigation, we ran more ablations (removing lines from the system prompt to understand the impact of each line) using a broader set of evaluations. One of these evaluations showed a 3% drop for both Opus 4.6 and 4.7. We immediately reverted the prompt as part of the April 20 release.

    Curious what helped catch it in the later eval vs. the initial ones. Was it that the initial testing was an online A/B comparison of aggregate metrics, or that the dataset was not broad enough?
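    The ablation procedure the post describes is conceptually simple; a sketch (the function names and the toy eval are invented):

```python
def ablate(prompt_lines, run_eval):
    # Score the full system prompt, then re-score with each line removed.
    # A positive delta means the line was hurting that eval.
    baseline = run_eval("\n".join(prompt_lines))
    deltas = {}
    for i, line in enumerate(prompt_lines):
        variant = prompt_lines[:i] + prompt_lines[i + 1:]
        deltas[line] = run_eval("\n".join(variant)) - baseline
    return deltas

# Toy eval: pretend the verbosity instruction costs 3% on coding tasks.
def toy_eval(prompt):
    return 0.97 if "Be extremely concise." in prompt else 1.00

print(ablate(["You are a coding agent.", "Be extremely concise."], toy_eval))
```

    The expensive part is that each `run_eval` call is itself a full benchmark run, which is presumably why the broader ablation only happened after the fact.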

  • jameson 17 hours ago
    > "In combination with other prompt changes, it hurt coding quality, and was reverted on April 20"

    Do researchers know correlation between various aspects of a prompt and the response?

    LLMs, to me at least, appear to be wildly random functions that are difficult to rely on. Traditional systems have structured inputs and outputs, and we can know how a system arrived at its output. This doesn't appear to be the case for LLMs, where inputs and outputs are arbitrary text.

    Anecdotally, I had a difficult time working with open-source models at a social media firm; something as simple as wrapping the example JSON structure in ```, adding a newline, or the wording I used wildly changed accuracy.

  • munk-a 17 hours ago
    It's also important to realize that Anthropic has recently struck several deals with PE firms to use their software. So Anthropic pays the PE firm which forces their managed firms to subscribe to Anthropic.

    The artificial creation of demand is also a concerning sign.

  • ramoz 14 hours ago
    Opus 4.7 is very rough to work with, specifically for long-horizon tasks (we were told it was trained specifically for this, with less handholding needed).

    I don't have trust in it right now. More regressions, more oversights, and it's pedantic in weird ways. Ironically, it requires more handholding.

    Not saying it's a bad model; it's just not simple to work with.

    for now: `/model claude-opus-4-6[1m]` (you'll get different behavior around compaction without [1m])

  • Implicated 12 hours ago
    Just as a note to CC fans/users here since I had an opportunity to do so... I tested resuming a session that was stale at 950k tokens after returning from a full day or so of being idle, thus a fully empty quota/session.

    Resuming it cost 5% of the current session and 1% of the weekly session on a max subscription.

  • russellthehippo 13 hours ago
    Damn it was real the whole time. I found Opus 4.7 to holistically underperform 4.6, and especially in how much wordiness there is. It's harder to work with so I just switched back to 4.6 + Kimi K2.6. Now GPT 5.5 is here and it's been excellent so far.
  • lifthrasiir 18 hours ago
    Is it just me, or has the reset cycle of usage limits been randomly changed? I originally had the reset point at around 00:00 UTC tomorrow and it was somehow delayed to 10:00 UTC tomorrow, regardless of when I started using Claude in this cycle. My friends also reported very random delays, as much as ~40 hours, with seemingly no other reason. Is this another bug on top of other bugs? :-S
    • nubinetwork 3 hours ago
      My usage got reset yesterday as usual, but it appears it will reset again on Sunday.
    • someone4958923 18 hours ago
      "This isn’t the experience users should expect from Claude Code. As of April 23, we’re resetting usage limits for all subscribers."
      • lifthrasiir 17 hours ago
        I know that. I'm saying that the cycle reset is no longer what it used to be (starting at the very first usage) or what it could have been (retaining the cycle reset timing).
        • jongleberry 17 hours ago
          it seems to be the same cycle for everyone now, not based on first usage. I saw a reddit thread on this from someone who had multiple accounts that all had the same cycles
  • WhitneyLand 18 hours ago
    Did they not address how adaptive thinking has played in to all of this?
  • arjie 17 hours ago
    Useful update. Would be useful to me to switch to a nightly / release cycle but I can see why they don't: they want to be able to move fast and it's not like I'm going to churn over these errors. I can only imagine that the benchmark runs are prohibitively expensive or slow or not using their standard harness because that would be a good smoke test on a weekly cadence. At the least, they'd know the trade-offs they're making.

    Many of these things have bitten me too: firing off a request that is slow because it was kicked out of cache, and getting zero cache hits (which makes everything way more expensive), so it makes sense they would try this. I tried skipping tool calls and thinking as well, and it made the agent much stupider. These all seem like natural things to try. Pity.

  • pxc 17 hours ago
    One of Anthropic's ostensible ethical goals is to produce AI that is "understandable" as well as exceptionally "well-aligned". It's striking that some of the same properties that make AI risky also just make it hard to consistently deliver a good product. It occurs to me that if Anthropic really makes some breakthroughs in those areas, everyone will feel it in terms of product quality, whether or not they're worried about grandiose/catastrophic predictions.

    But right now it seems like, in the case of (3), these systems are really sensitive and unpredictable. I'd characterize that as an alignment problem, too.

    • rimliu 1 hour ago
      a broken cache does not a breakthrough make.
  • sutterd 16 hours ago
    What kind of performance are people getting now? I was running 4.7 yesterday and it did a remarkably bad job. I recreated my repo state exactly and ran the same starting task with 4.5 (which I have preferred to 4.6). It was even worse, by a large margin. It is likely my task was difficult or poorly posed, but I still have some idea of what 4.5 should have done on it. This was not it. What experiences are other people having with 4.7? How about with other model versions, if they are trying them? (In both cases, I ran on max effort, for whatever that is worth.)
  • sreekanth850 7 hours ago
    Who’s going to pay for the exorbitant number of tokens Claude used without delivering any meaningful outcome? I spent many sessions getting zero results, and when I posted about it on their subreddit, all I got were personal attacks from bots and fanboys. I instantly cancelled my subscription and moved to Codex.

    Also, it may be a coincidence that the article was published just before the GPT 5.5 launch, and that they then restored the original model while releasing a PR statement claiming it was due to bugs.

  • rfc_1149 16 hours ago
    The third bug is the one worth dwelling on. Dropping thinking blocks every turn instead of just once is the kind of regression that only shows up in production traffic. A unit test for "idle-threshold clearing" would assert "was thinking cleared after an hour of idle" (yes) without asserting "is thinking preserved on subsequent turns" (no). The invariant is negative space.

    The real lesson is that an internal message-queuing experiment masked the symptoms in their own dogfooding. Dogfooding only works when the eaten food is the shipped food.
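    In test form, the missing invariant is the second assertion, the one about what must *not* happen on later turns. `FakeSession` here is a hypothetical stand-in implementing the corrected one-shot behavior, not real Claude Code internals:

```python
class FakeSession:
    def __init__(self, idle_hours):
        self.thinking = ["pre-idle thinking"]
        self.pending_idle_clear = idle_hours >= 1

    def send(self, _prompt):
        if self.pending_idle_clear:
            self.thinking.clear()           # drop stale thinking once
            self.pending_idle_clear = False
        self.thinking.append("fresh thinking")

def test_idle_clear_is_one_shot():
    s = FakeSession(idle_hours=2)
    s.send("first message after resume")
    # The obvious assertion: stale thinking was cleared.
    assert "pre-idle thinking" not in s.thinking

    s.send("second message")
    # The negative-space invariant: clearing must not repeat, so thinking
    # generated after the resume has to survive into later turns.
    assert len(s.thinking) == 2

test_idle_clear_is_one_shot()
```

    The buggy version (never resetting `pending_idle_clear`) passes the first assertion and fails only the second, which is exactly why a naively scoped unit test would have shipped it anyway.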

    • afro88 8 hours ago
      Experienced engineers who know the codebase and system well, and who have enough time to consider the problem properly, would likely catch this case.

      But if we're vibing... this is the kind of bug that should make it back into a review agent's/skill's instructions in a more generic form: essentially, if something is done to the message history, check that there are tests that subsequent turns work as expected.

      But yeah, you'd have to piss off a bunch of users in prod first to discover the blind spot.

  • voxelc4L 11 hours ago
    I’ve stuck with the non-1M-context Opus 4.6 and it works really well for me, even with ongoing context compression. I honestly couldn’t deal with the 1M context change and then the compounding token-devouring nonsense of 4.7. I sincerely hope Anthropic is seeing all of this and taking note. They have their work cut out for them.
    • setnone 6 hours ago
      absolutely agree: non-1M Opus 4.6 on x20 max was peak AGI

      now it's back to regular slop and just to check otherwise i have to spend at least $100

  • VadimPR 17 hours ago
    Appreciate the honesty from the team.

    At the same time, personally I find prioritizing quality over quantity of output to be a better personal strategy. Ten partially buggy features really aren't as good as three quality ones.

  • jwpapi 16 hours ago
    Those are exactly the kind of issues you run into when your app is AI-coded: you build one thing and break something else.

    You have too many benchmarks, and the wrong ones.

  • RamblingCTO 3 hours ago
    Doesn't change anything about Opus 4.7 being an absolute buffoon. Even going back to Opus 4.6 doesn't feel like the magical period of maybe 3-4 weeks ago. Gonna go back to OpenAI
  • rebolek 15 hours ago
    > On April 16, we added a system prompt instruction to reduce verbosity.

    What verbosity? Most of the time I don’t know what it’s doing.

  • deaux 14 hours ago
    They had this ready and timed it for the GPT 5.5 announcement. Zero chance it's a coincidence.
  • zagwdt 4 hours ago
    ngl, lost a lot of trust in cc after reading this, especially point 1

    how do you just do that to millions of users building prod code with your shit

  • ankit219 15 hours ago
    An interesting question is why these optimizations were pushed so aggressively in the first place, especially given this was while they were running a 2x promotion, of their own accord, presumably without seeing any slowdown in demand.
  • nopurpose 9 hours ago
    Weren't there reports that quality decreased when using non-CC harnesses too? Nothing in the blog post can explain that.
  • Alifatisk 18 hours ago
    It’s incredible how forgiving you guys are with Anthropic and their errors, especially considering you pay a high price for their service and receive lower quality than expected.
    • saghm 18 hours ago
      At least personally, it feels like the choices are the one that's okay with being used for mass surveillance and autonomous weapons targeting, the one that's on track to get acquired by the AI company that dragged its feet in getting around to stopping people from making child porn with it, the one that nobody seems to use from Google, and the one that everyone complains about but also still seems to be using because it at least sometimes works well. At this point I've opted out of personal LLM coding by canceling my subscription (although my employer still has subscriptions and wants us to keep using them, so I'll presumably keep using Claude there) but if I had to pick one to spend my own money on I'd still go with Claude.
    • ed_elliott_asc 18 hours ago
      I pay for 20x max and get so much more value out of it than I pay.
    • Avicebron 18 hours ago
      It's still night and day the difference in quality between chatgpt5.4 and opus 4.7. Heck even on Perplexity where 5.4 is included in Pro vs 4.7 which is behind the max plan or whatever, I will pick sonnet 4.6 over the 5.4 offering and it's consistently better. I don't love Anthropic, I don't have illusions about them as a business.

      But if a tool is better, it's better.

      • wahnfrieden 18 hours ago
        You aren’t getting the 5.4 experience for code if you’re not using it in the Codex harness
    • scottyah 18 hours ago
      It's fairly small issues for an amazing product, and the company is just a few years old and growing rapidly. Also, they are leading a powerful technological revolution and their competitors are known to have multiple straight up evil tendencies. A little degradation is not an issue.
    • arnvald 18 hours ago
      What's the alternative? Are you suggesting other LLM providers don't charge high price? Or that they don't make mistakes? Or that they provide better quality?

      We're talking about dynamically developed products, something that most people would have considered impossible just 5 years ago. A non-deterministic product that's very hard to test. Yes, Anthropic makes mistakes, models can get worse over time, their ToS change often. But again, is Gemini/GPT/Grok a better alternative?

    • AntiUSAbah 18 hours ago
      Because it is still good though.

      If you have a good product, you are more understanding. And getting worse doesn't mean it's no longer valuable, only that the price/value factor went down. But Opus 4.5 was noticeably better and only came out in November.

      There was no price increase at that time, so for the same money we get better models. Opus 4.6 again feels noticeably better, though.

      Also moving fastish means having more/better models faster.

      I do know plenty of people, though, who use opencode or pi and OpenRouter and switch models a lot more often.

    • timmg 17 hours ago
      > It’s incredible how forgiving you guys are with Anthropic and their errors.

      Ironically, I was thinking the exact opposite. This is bleeding edge stuff and they keep pushing new models and new features. I would expect issues.

      I was surprised at how much complaining there is -- especially coming from people who have probably built and launched a lot of stuff and know how easy it is to make mistakes.

    • mlinsey 18 hours ago
      The consumer surplus is quite high. Even with the regressions in this postmortem, performance was above the models last fall, when I was gladly paying for my subscription and thought it was net saving me time.

      That said, there is now much better competition with Codex, so there's only so much rope they have now.

    • lukasus 18 hours ago
      At the time you wrote your comment there were 4 other comments, all of them very negative towards Anthropic and the blog post in question here. How did you reach this conclusion?
      • lukan 18 hours ago
        Confused as well. I rather supposed Anthropic had some standing for saying no to Trump and being declared a national security threat, but the anger they got, and people leaving for OpenAI again, who gladly said yes to autonomous killing AI, did astonish me a bit. I also had weird things happening with my usage limits and was not happy about it. But it is still very useful to me - and I only pay for the Pro plan.
        • sunaookami 18 hours ago
          > I rather supposed Anthropic had some standing for saying no to Trump and being declared a national security threat

          I never understood why people cheered for Anthropic then when they happily work together with Palantir.

      • unselect5917 18 hours ago
        HN glazes anthropic every single time I see it come up. This is as obvious as HN's political bias.
    • operatingthetan 18 hours ago
      I don't think Anthropic has to inform their customers of every change they make, but they should have with this one.
    • jgbuddy 18 hours ago
      Anthropic actually not so bad. Anthropic models code good, usually. Price not so high compared to time to do it by self.
    • OsrsNeedsf2P 18 hours ago
      Look at any criticism of Mythos. Some members on HN are defending it tooth and nail, despite it not being released
    • fastball 18 hours ago
      What high price? I pay $200/m for an insane number of tokens.
    • oytis 18 hours ago
      Remember Louis CK talking about Wi-Fi on an airplane? People are dealing with highly experimental technology here
    • tempest_ 18 hours ago
      A lot of people are provided their access through work.

      They don't actually pay the bill or see it.

    • mystraline 18 hours ago
      Exactly. They've done now like 6 rug-pulls.

      Idiots keep throwing money at real-time enshittification and 'I am changing the terms. Pray I do not change them further.'

      And yes, I am absolutely calling people who keep getting screwed and paying for more 'service' as idiots.

      And Anthropic has proved that people will pay for less and less. So, why not fuck them over and make more money for the company?

  • natdempk 18 hours ago
    As an end-user, I feel like they're kind of over-cooking and under-describing the features and behavior of what is a tool at the end of the day. Today the models are in a place where the context management, reasoning effort, etc. all needs to be very stable to work well.

    The thing about session resumption changing the context of a session by truncating thinking is a surprise to me, I don't think that's even documented behavior anywhere?

    It's interesting to look at how many bugs are filed on the various coding-agent repos. Hard to say how many are real / unique, but the quantity feels very high, and it's not hard to run into real bugs rapidly as a user as you exercise the various features and slash commands.

  • zem 13 hours ago
    ugh, caching based on idle time is horrible for my usage anyway; since claude is both fairly slow and doesn't really have much of a daily quota, I often tell it to do something and then wander off and come back to check on it when I next think about it. I always vaguely assumed that my session would not "detect" the intervening time, since it was all async. I guess from a global perspective time-based cache eviction makes sense.
  • noname120 1 hour ago
    So now the solution is to input a “ping” message every hour so that it keeps the cache warm?
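    If that is the workaround, the check itself is trivial to sketch. A hypothetical keep-alive helper (the one-hour TTL, the safety margin, and the `should_ping` name are assumptions drawn from this thread, not anything Anthropic documents):

```python
# Assumed from the thread: the prompt cache goes cold after ~1 hour of idle
# time. A keep-alive sender would fire shortly BEFORE expiry, so the ping is
# a cheap cache hit rather than a full cache re-write.
CACHE_TTL_SECONDS = 3600
MARGIN_SECONDS = 300  # refresh this long before the cache expires

def should_ping(last_activity_ts: float, now_ts: float) -> bool:
    """True when the session has been idle long enough that the cache is
    about to expire, but has not expired yet."""
    idle = now_ts - last_activity_ts
    return CACHE_TTL_SECONDS - MARGIN_SECONDS <= idle < CACHE_TTL_SECONDS
```

    Run on a timer; when it returns True, send a trivial message to refresh the cache entry. Whether this is actually cheaper overall depends on how the ping messages themselves count against rate limits.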
  • kristianc 16 hours ago
    To think we'd have known about this in advance if they'd just have open sourced Claude Code, rather than them being forced into this embarrassing post mortem. Sunlight is the best disinfectant.
  • gilrain 17 hours ago
    Hi Boris, random observer here. Would you consider apologizing to the community for mistakenly closing tickets related to this and then wrongly keeping them closed when, internally, you realized they were legitimate?

    I think an apology for that incident would go a long way.

    • rimliu 1 hour ago
      not many would believe in the sincerity of it anyway.
  • KronisLV 17 hours ago
    This reads like good news! They probably still lost a bunch of users due to the negative public sentiment and not responding quickly enough, but at least they addressed it with a good bit of transparency.
  • wg0 10 hours ago
    A heavily vibe-coded CLI would have tons of issues, regularly.

    LLMs over-edit, and it's a known problem.

  • xlayn 18 hours ago
    If Anthropic is doing this as a result of "optimizations", they need to stop doing that and raise the price. The other thing: there should be a way to test a model and validate that it answers exactly the same each time. I have experienced it twice... when a new model is about to come out... the quality of the top-dog one starts going down... and bam... the new model is so good... like the previous one 3 months ago.

    The other thing: when Anthropic turns on lazy Claude... (I want to coin here the term Claudez for the version of Claude that's lazy... Claude zzZZzz = Claudez) that thing is terrible... you ask the model for something... and it's like... oh yes, that will probably depend on memory bandwidth... do you want me to search that?...

    YES... DO IT... FRICKING MACHINE..

    • joshstrange 17 hours ago
      It's incredibly frustrating when I've spelled out in CLAUDE.md that it should SSH to my dev server to investigate things I ask it to and it regularly stops working with a message of something like:

      > Next steps are to run `cat /path/to/file` to see what the contents are

      Makes me want to pull my hair out. I've specifically told you to go do all the read-only operations you want out on this dev server yet it keeps forgetting and asking me to do something it can do just fine (proven by it doing it after I "remind" it).

      That and "Auto" mode really are grinding my gears recently. Now, after a planning session, my only option is to use Auto mode and I have to manually change it back to "Dangerously skip permissions". I think these are related, since the times I've let it run in "Auto" mode are when it gives up/gets stuck more often.

      Just the other day it was in Auto mode (by accident) and I told it:

      > SSH out to this dev server, run `service my_service_name restart` and make sure there are no orphans (I was working on a new service and the start/stop scripts). If there are orphans, clean them up, make more changes to the start/stop scripts, and try again.

      And it got stuck in some loop/dead-end, telling me I should do it myself and that it didn't want to run commands out on a "Shared Dev server" (which I had specifically told it this was not).

      The fact that Auto mode burns more tokens _and_ is so dumb is really a kick in the pants.

    • marcyb5st 17 hours ago
      Apart from Anthropic, nobody knows how much the average user costs them. However, the consensus is "much more than they pay".

      If they have to raise prices to stop hemorrhaging money, would you be willing to pay 1000 bucks a month for a Max plan? Or $100 per 1M output tokens? (Playing Numberwang here, but the point stands.)

      If I had to guess, they are trying to get their balance sheet in order for an IPO, and they basically have 3 ways of achieving that:

      1. Raising prices like you said, but the user drop could be catastrophic for the IPO itself and so they won't do that

      2. Dumb the models down (basically decreasing their cost per token)

      3. Send fewer tokens (i.e. capping thinking budgets aggressively).

      2 and 3 are palatable because, even if they annoy the technical crowd, investors still see a big number of active users with a positive margin for each.

      • CamperBob2 13 hours ago
        $1000/mo for guaranteed functionality >= Opus 4.6 at its peak? Yes, I'd probably grumble a bit and then whip out the credit card.

        I'm not a heavy LLM user, and I've never come anywhere near the $200/month plan limits I'm already subscribed to. But when I do use it, I want the smartest, most relentless model available, operating at the highest performance level possible.

        Charge what it takes to deliver that, and I'll probably pay it. But you can damned well run your A/B tests on somebody else.

    • dgellow 18 hours ago
      I would love it if agents acted way more like tools/machines and did NOT try to act as if they were human
    • Keeeeeeeks 18 hours ago
      https://marginlab.ai/ (no affiliation)

      There are a number of projects working on evals that can check how 'smart' a model is, but the methodology is tricky.

      One would want to run the exact same prompt, every day, at different times of the day, but if the eval prompt(s) are complex, the frontier lab could have a 'meta-cognitive' layer that looks for repetitive prompts, and either: a) feeds the model a pre-written output to give to the user b) dumbs down output for that specific prompt

      Both cases defeat the purpose in different ways, and make a consistent gauge difficult. And it would make sense for them to do that since you're 'wasting' compute compared to the new prompts others are writing.

      • hex4def6 17 hours ago
        I think you could alter the prompt in subtle ways: a period becomes an ellipsis, extra commas, synonyms, occasional double spaces, etc.

        Enough that the prompt is different at a token-level, but not enough that the meaning changes.

        It would be very difficult for them to catch that, especially if the prompts were not made public.

        Run the variations enough times per day, and you'd get some statistical significance.

        I guess the fuzzy part is judging the output.
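        The mutation side of this is easy to sketch. A minimal, hypothetical Python version (the `perturb`/`variants` names and the mutation list are invented for illustration; a real eval harness would also need the judging side and far more variation):

```python
import random

def perturb(prompt: str, seed: int) -> str:
    """Apply one small, meaning-preserving mutation so repeated eval runs
    differ at the token level but not semantically."""
    rng = random.Random(seed)  # deterministic per seed, so runs are reproducible
    mutations = [
        lambda s: s.replace(". ", "... ", 1),    # period -> ellipsis
        lambda s: s.replace(" ", "  ", 1),       # sneak in a double space
        lambda s: s.replace("big", "large", 1),  # crude synonym swap
        lambda s: s + " ",                       # trailing whitespace
    ]
    return rng.choice(mutations)(prompt)

def variants(prompt: str, n: int) -> list[str]:
    """Generate n token-level variants of one eval prompt for a day's runs."""
    return [perturb(prompt, seed) for seed in range(n)]
```

        Each variant is a different token sequence, so a naive prompt-matching layer can't key on an exact repeat, while the task being asked stays the same.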

    • JyB 15 hours ago
      This specifically is super annoying.
  • varispeed 2 hours ago
    It appears that Opus 4.7 has been nerfed already. Can't get any sensible results since yesterday. It just keeps running in circles. Even mentioning that it is committing fraud by doing superficial work it has been told specifically not to do doesn't help.
    • rimliu 1 hour ago
      oh yes. I tried to get a review of a code base after some refactoring. CC produced a completely garbage review. After I pointed that out, it admitted it was garbage - and promptly produced another pile of garbage. After the third failed attempt I had to call it a day.
  • tdg5 12 hours ago
    I missed the part about the refunds…
  • einrealist 18 hours ago
    Is 'refactoring Markdown files' already a thing?
  • 2001zhaozhao 18 hours ago
    How about just not change the harness abruptly in the first place? Make new system prompt changes "experimental" first so you can gather feedback.
  • davidfstr 17 hours ago
    Good on Anthropic for giving an update & token refund, given the recent rumors of an inexplicable drop in quality. I applaud the transparency.
    • scuderiaseb 17 hours ago
      Opus 4.7 was released a week ago, at which point all limits were reset, so this was very beneficial to them: basically everyone's weekly limit was about to be reset anyway.
  • throwaway2027 16 hours ago
    Cool but I switched to Codex for the time being.
  • gnegggh 4 hours ago
    not the first time. Still not showing thinking are we?
  • hirako2000 13 hours ago
    In other words we did the right things, but we understand feedback, oh and bugs happen.
  • 8note 15 hours ago
    something i note from this is that this is not a model-weights change, but a hidden change anthropic is making to the harness that can tune the quality of the "model" up and down without breaking the "we aren't changing the model" promise.

    how often do these changes happen?

  • motbus3 18 hours ago
    I had similar experience just before 4.5 and before 4.6 were released.

    Somehow, three times makes me not feel confident in this response.

    Also, if this is all true and correct, how the heck do they validate quality before shipping anything?

    Shipping software without quality is a pretty easy job, even without AI. Just saying...

  • bearjaws 18 hours ago
    The issue making Claude just not do any work was infuriating, to say the least. I already ran at the medium thinking level so was never impacted by that, but having to constantly go "okay, now do X like you said" was annoying.

    Again goes back to the "intern" analogy people like to make.

  • ayhanfuat 18 hours ago
    Reading the "Going forward" section I see that they have zero understanding of the main complaints.
    • Kiro 18 hours ago
      How so?
      • ayhanfuat 18 hours ago
        They feel they're in a position to make important trade-off decisions on behalf of the user. "It's just slightly worse, I'll sneak this change in" is not something to be tolerated, whether it actually turns out to be much worse or not. Their adaptive thinking mess has caused a ton of work for me. I know a lot of people are saying Codex is actually better now. I don't agree but I'm switching to it because it's much more reliable.
        • operatingthetan 18 hours ago
          I agree, but these LLM products are all black-boxes so we need to demand more accountability from them.
  • walthamstow 17 hours ago
    So we weren't going mad then!
  • ritonlajoie 13 hours ago
    yesterday CC created a fastapi /healthz endpoint and told me it's the gold standard (with the ending z). today I stopped my max sub and will be trying codex
    • jesse_dot_id 2 hours ago
      This is fairly normal.
    • wrxd 13 hours ago
      To be fair that’s a Google convention. Have a look at z-pages
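      For reference, the convention amounts to nothing more than a tiny plain-text liveness route. A stdlib-only sketch (http.server used instead of FastAPI purely to keep the example dependency-free):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # /healthz: liveness probe; the trailing "z" follows the Google
        # z-pages naming convention mentioned above.
        if self.path == "/healthz":
            body = b"ok"
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

# To serve: HTTPServer(("127.0.0.1", 8080), HealthHandler).serve_forever()
```

      The odd suffix exists to avoid colliding with real application routes (a site may legitimately have a /health page), which is why it spread beyond Google.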
  • ElFitz 17 hours ago
    Now we know why Anthropic banned the use of subscriptions with other agent harnesses: they partially rely on the Claude Code cli to control token usage through various settings.

    And it also tells us why we shouldn’t use their harness anyway: they constantly fiddle with it in ways that can seriously impact outcomes without even a warning.

  • vicchenai 14 hours ago
    had this happen to me mid-refactor and spent 20 min wondering if I'd gone crazy. honestly the one-hour threshold feels pretty arbitrary; sometimes you just step away to think
  • whalesalad 16 hours ago
    The funny thing is, in the last 3 days Claude has gotten substantially worse. So the claim that "All three issues have now been resolved as of April 20 (v2.1.116)" does not land with me at all.
  • setnone 18 hours ago
    Good on them for resolving all three issues, but is it any good again?
    • alxndr13 18 hours ago
      for me at least, yes. I said as much to coworkers this afternoon. It behaves way more "stable" in terms of quality, and I don't have the feeling of the model getting way worse after 100k tokens of context or so.

      What I notice: after 300k there's some slight quality drop, but I just make sure to compact before that threshold.

  • psubocz 16 hours ago
    > All three issues have now been resolved as of April 20 (v2.1.116).

    The latest in Homebrew is 2.1.108, so not fixed, and I don't see Opus 4.7 on the models list... Is Homebrew a second-class citizen, or am I in the B group?

  • antirez 16 hours ago
    Zero QA basically.
    • 8note 15 hours ago
      I'd go more along the lines of "don't know what to QA for"
  • system2 5 hours ago
    Whatever they did, with the max plan, my daily usage quota was consumed in less than 10 minutes. Weird, let's hope they fix the usage now.
  • hajile 17 hours ago
    My takeaway is that they knew they were changing a bunch of stuff while their reps were gaslighting us in the comments here.

    Why should we ever trust what they say again, or trust that they won't be rug-pulling again once this blows over?

  • EugeneOZ 15 hours ago
    If you think that you can just silently modify the model without any announcement, and only react when it doesn't go unnoticed, then be 100% sure that your clients will check every possible alternative and will leave you as soon as they find anything similar in quality (and not a degraded one).
  • ramesh31 16 hours ago
    Effort should not be configurable for Opus; it should be set to a single default that provides the highest level of capability. There are zero instances in which I am willing to accept a lesser result in exchange for a slightly faster response from Opus. If that were the case I would be using Flash or Haiku.
  • systemvoltage 18 hours ago
    Interesting. All 3 seem like they're obviously going to impact quality, e.g., reducing the reasoning effort from high to medium.

    So then, there must have been an explicit internal guidance/policy that allowed this tradeoff to happen.

    Did they fix just the bug or the deeper policy issue?

  • tontinton 17 hours ago
    or you can use a non-vibe-designed, efficient Rust TUI coding agent made by yours truly; all my coworkers use it too :) called https://maki.sh!

    lua plugins WIP

  • maxrev17 15 hours ago
    Please for the love of god just put the max price plan up like 4x or 5x in cost and make it actually work.
  • rishabhaiover 18 hours ago
    Boris gaslit us about all the quality-related incidents for weeks, not acknowledging these problems.
    • throwaway2027 16 hours ago
      Maybe he didn't know, or they were still figuring it out, which is fine; they're still engineers who can get things wrong sometimes. But the communication felt lackluster, and being on the receiving end sucks when you had a reliable setup which then degrades. There is a reason people don't upgrade software and say "if it works, don't fix it", but obviously that's not an option for Anthropic when they want to keep improving the product, so they need good measurement tools and quick rollbacks, even if properly "benchmarking" LLMs could prove difficult.
      • rishabhaiover 10 hours ago
        I agree, but one can admit to the situation instead of outright rejecting the claims. My own mistake was to have become so hopelessly dependent on them.
  • Rapzid 16 hours ago
    > On March 4, we changed Claude Code's default reasoning effort from high to medium to reduce the very long latency—enough to make the UI appear frozen—some users were seeing in high mode.

    Translation: To reduce the load on our servers.

  • teaearlgraycold 18 hours ago
    > On March 26, we shipped a change to clear Claude's older thinking from sessions that had been idle for over an hour, to reduce latency when users resumed those sessions. A bug caused this to keep happening every turn for the rest of the session instead of just once, which made Claude seem forgetful and repetitive. We fixed it on April 10. This affected Sonnet 4.6 and Opus 4.6.

    Is it just me or does this seem kind of shocking? Such a severe bug affecting millions of users with a non-trivial effect on the context window that should be readily evident to anyone looking at the analytics. Makes me wonder if this is the result of Anthropic's vibe-coding culture. No one's actually looking at the product, its code, or its outputs?

    • chermi 18 hours ago
      It's really hard to understand. There need to be really loud, Batman-signal-in-the-sky-type signals from some hero third party calling out objective product degradation. Do they use CC internally? If so, do they use a different version? This should have been almost as loud a break as the service going down altogether, yet it took 2 weeks to fix?!
      • poly2it 17 hours ago
        > ... we’ll ensure that a larger share of internal staff use the exact public build of Claude Code (as opposed to the version we use to test new features) ...

        Apparently they are using another version internally.

    • nrki 18 hours ago
      > we refunded all affected customers

      Notably missing from the postmortem

    • manmal 18 hours ago
      I think that would also have busted cache all the time, and uncached requests consume usage limits rapidly.
  • 0gs 17 hours ago
    wow resetting everyone's usage meter is great. i was so close to finally hitting my weekly limit for once though
  • taytus 15 hours ago
    They should do a similar report about their communication team. This was horribly mismanaged.
  • jruz 17 hours ago
    Too late bro, switched to Codex I’m done with your bullshit.
  • gverrilla 17 hours ago
    Recent minor issue worth flagging: Claude sometimes introduces domain-specific acronyms without first spelling them out, assuming reader familiarity. Caught this in a pt-br conversation about cycling where Claude used "FC" (frequência cardíaca / heart rate) — a term common in sports science literature but not in everyday Portuguese. Same pattern shows up in English too (e.g., dropping "RPE," "VO2," "HIIT" without definition). Suggested behavior: on first mention, write the full term and introduce the acronym in parentheses — "frequência cardíaca (FC)" / "heart rate (HR)" — then use the acronym freely afterward. Small thing, but it affects accessibility for readers outside the specific jargon bubble.
  • dainiusse 18 hours ago
    Corporate bs begins...
  • epsteingpt 12 hours ago
    Gaslit for months, only to acknowledge it now.
  • dcchambers 16 hours ago
    So it turns out Anthropic was gaslighting everyone on twitter about this then? Swearing that nothing had changed and people were imagining the models got worse?
  • whalesalad 17 hours ago
    I genuinely don't understand what they have been trying to achieve. All of these incremental "improvements" have ... not improved anything, and have had the opposite effect.

    My trust is gone. When day-to-day updates do nothing but cause hundreds of dollars in wasted tokens, and the response is "we... sorta messed up, but just a little bit here and there, and it added up to a big mess-up"... bro, get fuckin real.

  • troupo 17 hours ago
    > they were challenging to distinguish from normal variation in user feedback at first

    translation: we ignored this and our various vibe coders were busy gaslighting everyone saying this could not be happening

  • yuvrajmalgat 17 hours ago
    ohh
  • o10449366 16 hours ago
    Resuming sessions has been broken since Feb (I had to get Claude to write a hook to fix that itself), the monitoring tool doesn't work and blocks usage of what does (a simple sleep - except it doesn't even block correctly, so you just sidestep it in more ridiculous ways), and yet there seem to be more annoying activity proxies/spinner wheels (staring into the middle distance)... I don't know how, in a span of a few months, you lose such focus on your product goals. Has Anthropic already reached that point in their lifecycle where their product team is no longer staffed by engineers, and more and more non-technical MBAs are joining to ride the hype train?
  • cute_boi 16 hours ago
    Honestly, it’s kind of sad that Anthropic is winning this AI race. They are the most anti–open source company, and we should try to avoid them as much as possible.

    They are only doing it because OpenAI is snatching their customers. And their employees have been gaslighting people [1] for ages. I hope open-source models will provide fierce competition so we do not have to rely on an Anthropic monopoly. [1] https://www.reddit.com/r/claude/comments/1satc4f/the_biggest...

  • petervandijck 17 hours ago
    I have noticed a clear increase in smarts with 4.7. What a great model!

    People complain so much, and the conspiracy theories are tiring.