Ask HN: What's the latest consensus on OpenAI vs. Anthropic $20/month tier?

I'm considering $20/month variants only.

I've had a Claude subscription for the past year, although I only really started properly using LLMs in the past couple of months. With Opus, I get about 5 messages every 5 hours (fairly small codebase); more with Sonnet. I then cancelled that, since it's practically unusable, and got a ChatGPT sub about a week ago. Currently using it with 5.4 High and I haven't had to worry about limits. But the code it produces is definitely "different" and I need to plan more in advance. Its plan mode is also not as precise as Claude's (it doesn't lay out the method stubs it plans to implement, etc.), so I suppose I may need to change how I work with it? Lastly, for normal chats it produces significantly more verbose output (with personality set to Efficient), and it's fast (with Thinking), but often it feels as though it's not as thorough as I'd like it to be.

My question: is this a "you're holding it wrong" type of situation, where I just need to get used to a different mode of interaction? Or are others noticing a material difference in quality? Ideally I'd like to stick with ChatGPT due to the borderline impractical limits with Anthropic.

3 points | by whatarethembits 4 hours ago

1 comment

  • pcael 2 hours ago
    Have you tried Claude console client?
    • whatarethembits 1 hour ago
      Do you mean Claude Code? If so, that's what I use(d) primarily for development, and Claude Desktop for general chats. My issue with Opus was that, every time I started a new task in Plan mode, it'd use 50k - 100k tokens, and that'd be about 20% of the session limit. A bit of back and forth and it's done for most of the work day. Just not feasible at all. The tasks I wanted it to perform were fairly small and contained: "Look at these three files @@@ and add xxx to @file. DON'T read any other files. If you need more context, ask me." That worked sometimes but not always, and it still burned a lot of tokens.