13 comments

  • myky22 8 minutes ago
    Love it! I have a similar intuition in my use of Gemini (3 and 3.1). Great at "turn 1" tasks, but it degrades faster than Opus or GPT.
  • EwanG 22 minutes ago
    At least until one of the competitors is overheard saying "A strange game. The only winning move is not to play"
  • ph4rsikal 31 minutes ago
    Reminds me of this fantastic series on Game Theory and Agent Reasoning https://jdsemrau.substack.com/p/nemotron-vs-qwen-game-theory...
  • egeozcan 1 hour ago
    This is amazing. What I do is something else: I make AI agents develop AI scripts (good ol' computer player scripts) and try to beat each other:

    https://egeozcan.github.io/unnamed_rts/game/

    I occasionally run my tournament script: https://github.com/egeozcan/unnamed_rts/blob/main/src/script...

    That calculates the Elo rating for each AI implementation, and I feed the results to different agents so they get really creative trying to beat each other. Making rule changes to the game and seeing which scripts get weaker/stronger is also a nice way to measure balance.
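    For reference, the standard Elo update looks like this (a sketch only; the tournament script linked above may compute ratings differently):

```python
def expected_score(r_a, r_b):
    """Probability that player A beats player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a, r_b, score_a, k=32):
    """Return both players' new ratings after one match.

    score_a is 1 if A won, 0 if A lost, 0.5 for a draw.  The winner
    gains more points the more surprising the result was.
    """
    e_a = expected_score(r_a, r_b)
    new_a = r_a + k * (score_a - e_a)
    new_b = r_b + k * ((1 - score_a) - (1 - e_a))
    return new_a, new_b

# Two equally rated scripts: the winner takes exactly k/2 points.
print(elo_update(1000, 1000, 1))  # (1016.0, 984.0)
```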

    Funny thing: Codex gets really aggressive and starts cheating a lot of the time: https://bsky.app/profile/egeozcan.bsky.social/post/3mfdtj5dh...

  • PeterUstinox 1 hour ago
    Wouldn't it be interesting if the LLMs wrote real-time RTS commands instead of code? After all, it is an RTS game.

    This would bring another dimension to it, since then token quality would be one dimension (RTS language: decision making) and token speed the other (RTS language: actions per minute; APM).

    Also, there are already a lot of coding benchmarks; this way it would test something more abstract, similar to AlphaStar https://en.wikipedia.org/wiki/AlphaStar_(software)

    You could just use the exposed APIs of OpenAI, Anthropic, etc. and let them battle.
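    A minimal sketch of that battle loop, where call_model is a hypothetical stub standing in for a real OpenAI/Anthropic API call (a real version would send the game state and return one command), and tokens-per-second serves as a crude APM proxy:

```python
import time

def call_model(name, state):
    # Hypothetical stand-in for a real OpenAI/Anthropic API call.
    # Here it just returns a fixed command so the sketch is runnable.
    return "move scout north"

def battle(players, turns=3):
    """Alternate turns between the models, logging each command along
    with a crude APM proxy (tokens emitted per second of wall time)."""
    log = []
    for turn in range(turns):
        for name in players:
            start = time.perf_counter()
            cmd = call_model(name, state=None)
            elapsed = time.perf_counter() - start
            apm_proxy = len(cmd.split()) / max(elapsed, 1e-9)
            log.append((turn, name, cmd, apm_proxy))
    return log

log = battle(["model-a", "model-b"])
print(len(log))  # 6 entries: 2 players x 3 turns
```

    With real APIs, both decision quality (who wins) and APM (how fast commands arrive) would be measured in the same loop.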

  • wongarsu 1 hour ago
    I know visualization is far from the most important goal here, but it really gets me how there's fairly elaborately rendered terrain, and then the units are just unnamed Roombas with hard-to-read status indicators that have no intuitive meaning. Even in the match viewer I have no clue what's going on; there is no overlay or tooltip when you hover over or click units either. There is a unit list that tries (and mostly fails) to give you some information, but because units don't have names you have to hover over them in the list to have them highlighted on the field (the reverse does not work). Not exactly a spectator sport. Oh, but there is a way to switch from having all units in one sidebar to having one sidebar per player, as if that made a difference.

    I find this pretty funny, because it seems like a perfect representation of what's easy with today's tools and what isn't.

    Love the idea though

    • embedding-shape 1 hour ago
      Yeah, that's what you get when you ask an agent to "build X" without any constraints on how the UI and UX should actually work. Since agents have roughly zero expertise in "how would a human perceive and use this?", you end up with UIs that don't make much sense for humans unless you strictly steer them with what you know.
  • busfahrer 44 minutes ago
    This reminds me of the yearly StarCraft AI competition (running since 2010). However, I think it uses a special API that makes it easy for bots to access the game.

    Edit: Forgot link: https://davechurchill.ca/starcraft/

    • KeplerBoy 8 minutes ago
      Very interesting project. I'm a bit confused about the lack of hardware specifications. The rules make it clear that one's bot has defined deadlines:

      > Make sure that each onframe call does not run longer than 42ms. Entries that slow down games by repeatedly exceeding this time limit will lose games on time.

      But I'm missing something like: "Your program will be pinned to CPU cores 5-8 and your bot has access to a dedicated RTX 5090 GPU." There's also no mention of whether my bot can have network access to offload some high-level, latency-insensitive planning. Maybe that's just a bad idea in general; I haven't played SC in ages.

  • cahaya 56 minutes ago
    Nice. Curious about the 5.3-codex-high results.
  • dakolli 11 minutes ago
    Yay, I love how we just keep coming up with magic tricks, like toddlers playing with Velcro... These magic tricks do nothing but convince people who don't know any better that LLMs are the real deal, when they simply aren't.

    This is just free propaganda for Anthropic && OpenAI who will leverage these (useless) capabilities to convince your boss to give your salary to them, or at least a substantial portion of it.

    • LatencyKills 7 minutes ago
      This technology exists. It isn’t just a toy. I think it is amazing to see people use it for interesting things even if it isn’t groundbreaking.

      I’ve been an engineer for almost 40 years and love seeing what Claude Code can do.

      Like it or not, young people will not know a world where this technology doesn’t exist. It is just part of their toolset now.

    • p-e-w 9 minutes ago
      Yeah, I guess the tens of thousands of PhDs who are working on LLMs full time are just collectively wasting their lives. Everyone except you is simply too dumb to see it.
  • datawars 1 hour ago
    Great project! It would be interesting to have a meta layer of AIs betting on the player LLMs
  • hmontazeri 1 hour ago
    This is actually fun to watch :D
  • xanth 1 hour ago
    Now I'd love to see if fast > smart over time with Mercury 2.