18 comments

  • franze 1 hour ago
    I created "apfel" https://github.com/Arthur-Ficial/apfel a CLI for the Apple on-device local foundation model (Apple Intelligence). Yeah, it's super limited with its 4k context window and super common false-positive guardrails (just ask it to describe a color) ... but still ... using it in bash scripts that just work without calling home / out or incurring extra costs feels super powerful.
    • AbuAssar 17 minutes ago
      nice project, thanks for sharing.

      any plans for providing it through brew for easy installation?

  • babblingfish 4 hours ago
    LLMs on device are the future. It's more secure, it solves the problem of inference demand outstripping data center supply, and it would also use less electricity. It's just a matter of getting the performance good enough. Most users don't need frontier model performance.
    • troad 2 hours ago
      I very recently installed llama.cpp on my consumer-grade M4 MBP, and I've been having loads of fun poking and prodding the local models. There's now a ChatGPT style interface baked into llama.cpp, which is very handy for quick experimentation. (I'm not entirely sure what Ollama would get me that llama.cpp doesn't, happy to hear suggestions!)

      There are some surprisingly decent models that happily fit even into a mere 16 gigs of RAM. The recent Qwen 3.5 9B model is pretty good, though it did trip all over itself to avoid telling me what happened on Tiananmen Square in 1989. (But then I tried something called "Qwen3.5-9B-Uncensored-HauhauCS-Aggressive", which veers so hard the other way that it will happily write up a detailed plan for your upcoming invasion of Belgium, so I guess it all balances out?)

      • theshrike79 13 minutes ago
        Qwen3.5 has tool calling, so you can give it a Wikipedia tool, which it can use to find out what happened in Tiananmen Square without issues =)
      • whackernews 1 hour ago
        Oh, does llama.cpp use MLX or whatever? I had this question too; do you happen to know? A search suggests it doesn't, but I don't really understand.
    • melvinroest 3 hours ago
      I have journaled digitally for the last 5 years with this expectation.

      Recently I built a graphRAG app with Qwen 3.5 4b for small tasks like classifying what type of question I am asking or the entity extraction process itself, as graphRAG depends on extracted triplets (entity1, relationship_to, entity2). I used Qwen 3.5 27b for actually answering my questions.

      It works pretty well. I have to be a bit patient but that’s it. So in that particular use case, I would agree.

      I used MLX and my M1 64GB device. I found that MLX definitely works faster when it comes to extracting entities and triplets in batches.
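
      Roughly, the extraction step can look something like this with mlx-lm (a minimal sketch; the model repo and prompt are illustrative, not the exact setup described above):

        # Minimal sketch of the triplet-extraction step with mlx-lm (illustrative
        # model repo and prompt; assumes `pip install mlx-lm` on Apple Silicon).
        from mlx_lm import load, generate

        model, tokenizer = load("mlx-community/Qwen2.5-7B-Instruct-4bit")

        def extract_triplets(note: str) -> list:
            """Ask a small local model for (entity1, relationship_to, entity2) triplets."""
            messages = [{
                "role": "user",
                "content": "Extract factual triplets from the note below. "
                           "Output one per line as: entity1 | relationship_to | entity2\n\n" + note,
            }]
            prompt = tokenizer.apply_chat_template(
                messages, add_generation_prompt=True, tokenize=False
            )
            out = generate(model, tokenizer, prompt=prompt, max_tokens=300)
            triplets = []
            for line in out.splitlines():
                parts = [p.strip() for p in line.split("|")]
                if len(parts) == 3:
                    triplets.append(tuple(parts))
            return triplets

        print(extract_triplets("I meditate right after waking up; it gives me energy for deep work."))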

      • nkzd 2 hours ago
        Did you get any insights about yourself from this process? I am thinking of doing the same
        • melvinroest 9 minutes ago
          TL;DR: you no longer need to go on a treasure hunt through your notes by typing stuff into the search bar. Having your own graphRAG system + LLM on your notes is basically a "Google", but for your own notes. Any question you have: if you have a note for it, it will bubble up. The annoying thing is that false positives will also bubble up.

          ----

          Full response:

          Yes but perhaps not in a way you might expect. Qwen's reasoning ability isn't exactly groundbreaking. But it's good enough to weave a story, provided it has some solid facts or notes. GraphRAG is definitely a good way to get some good facts, provided your notes are valuable to you and/or contain some good facts.

          So the added value is that you now have a supercharged information retrieval system on your notes, with an LLM that can stitch loose facts together reasonably well, like a librarian would. It's also very easy to spot hallucinations if you recognize your own writing well, which I do.

          The second thing is that I have a hard time rereading all my notes. I write a lot of them and don't have the time to reread any of it, so oftentimes I forget my own advice. Now that I have a supercharged information retrieval system on my notes, whenever I ask a question the graphRAG + LLM searches for the most relevant notes related to it. I've found that 20% of what I wrote is incredibly useful and is stuff I had forgotten.

          And there are nuggets of wisdom in there that are quite nuanced. For me specifically, I've seen insights into how I relate to work that I should do more with. I'll probably forget most things again, but I can reuse my system and at some point I'll remember what I actually need to remember. For example, one thing I read was that work doesn't feel like work for me if I get to dive in, zoom out, dive in, zoom out. Given the way I work as a person, that means I'm always resting and always have energy for the task that I'm doing. Another thing it got me to do was reboot a small meditation practice by using implementation intentions (e.g. "if I wake up, then I meditate for at least a brief amount of time").

          What also helps is to have a bit of a back and forth with your notes and then copy/paste the whole conversation into Claude to see if Claude has anything in its training data that might give some extra insight. It can also help by firing off 10 search queries and finding a blog post that is relevant to the conversation you've had with your local LLM.

    • AugSun 3 hours ago
      "Most users don't need frontier model performance" unfortunately, this is not the case.
      • theshrike79 8 minutes ago
        It depends. If they're using a small/medium local model as a 1:1 ChatGPT replacement as-is, they'll have a bad time. Even ChatGPT refers to external services to get more data.

        But a local model + good harness with a robust toolset will work for people more often than not.

        The model itself doesn't need to know who was the president of Zambia in 1968, because it has a tool it can use to check it from Wikipedia.
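
        In practice, the harness can be as simple as the following (a rough sketch, assuming an OpenAI-compatible local endpoint such as Ollama's /v1 API and a tool-capable model; the model name and endpoint are illustrative):

          # Minimal sketch of a local model + Wikipedia tool loop. Assumes an
          # OpenAI-compatible endpoint (e.g. Ollama at localhost:11434/v1) and a
          # tool-capable model; names are illustrative.
          import json, requests
          from openai import OpenAI

          client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

          def wikipedia_summary(title: str) -> str:
              """Fetch the lead summary of a Wikipedia article via the public REST API."""
              r = requests.get(f"https://en.wikipedia.org/api/rest_v1/page/summary/{title}")
              r.raise_for_status()
              return r.json().get("extract", "")

          tools = [{
              "type": "function",
              "function": {
                  "name": "wikipedia_summary",
                  "description": "Look up the summary of a Wikipedia article by title.",
                  "parameters": {
                      "type": "object",
                      "properties": {"title": {"type": "string"}},
                      "required": ["title"],
                  },
              },
          }]

          messages = [{"role": "user", "content": "Who was the president of Zambia in 1968?"}]
          resp = client.chat.completions.create(model="qwen3", messages=messages, tools=tools)
          msg = resp.choices[0].message

          if msg.tool_calls:  # the model decided to use the tool
              messages.append(msg)
              for call in msg.tool_calls:
                  args = json.loads(call.function.arguments)
                  messages.append({
                      "role": "tool",
                      "tool_call_id": call.id,
                      "content": wikipedia_summary(args["title"]),
                  })
              resp = client.chat.completions.create(model="qwen3", messages=messages, tools=tools)

          print(resp.choices[0].message.content)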

      • helsinkiandrew 1 hour ago
        > unfortunately, this is not the case

        Most users are fixing grammar/spelling, summarising/converting/rewriting text, creating funny icons, and looking up simple facts; none of this requires frontier model performance.

        I've a feeling that if/when Apple releases their onboard LLM/Siri improvements that can call out if needed, the vast majority of people will be happy with what they get for free running on their phone.

      • selcuka 2 hours ago
        Any citations? Because that was my impression, too. I want frontier model performance for my coding assistant, but "most users" could do with smaller/faster models.

        ChatGPT free falls back to GPT-5.2 Mini after a few interactions.

        • lxgr 1 hour ago
          Have you used GPT instant or mini yourself? I think it’s pretty cynical to assume that this is “good enough for most people”, even if they don’t know the difference between that and better models.
        • asutekku 2 hours ago
          Frontier models have much better knowledge and they usually hallucinate less. It's not about the coding capabilities, it's about how much you can trust the model.
          • Barbing 1 hour ago
            re: trust-

            Have you tried the free version of ChatGPT? It is positively appalling. It’s like GPT 3.5 but prompted to write three times as much as necessary to seem useful. I wonder how many people have embarrassed themselves, lost their jobs, and been critically misinformed. All easy with state-of-the-art models but seemingly a guarantee with the bottom sub-slop tier.

            Is the average person just talking to it about their day or something?

            • theshrike79 7 minutes ago
              Even the paid version of ChatGPT tends to use 1,000 words when 10 will do.

              You can try asking it the same question as Claude and compare the answers. I can guarantee you that the ChatGPT answer won't fit on a single screen on a 32" 4k monitor.

              Claude's will.

            • throwaway27448 9 minutes ago
              If someone blindly submits chatbot output they deserve to be embarrassed and fired. But I don't think that's going to improve.
            • jychang 1 hour ago
              The free version of ChatGPT is insanely crippled, so that's not surprising.
      • throwaway27448 10 minutes ago
        Say more. Why do you think this?
      • blitzar 10 minutes ago
        "Hey dingus, set timer for 30 minutes"
      • AugSun 3 hours ago
        [flagged]
        • seanhunter 3 hours ago
          Complaining about downvotes is futile and is also against HN guidelines.
          • AugSun 2 hours ago
            I'm not complaining "about downvotes" LOL I'm explaining why some people will be replaced by LLMs because of their own "context window" length.
    • ZeroGravitas 1 hour ago
      It feels like you'll soon need a local LLM to intermediate with the remote LLM, like an ad blocker for browsers, to stop them from injecting ads or to remind you not to send corporate IP out onto the Internet.
      • tomashubelbauer 1 hour ago
        I'd like to coin the term "user agent" for this
        • blitzar 10 minutes ago
          "copilot" seems a good term

          could also be considered a triage layer

    • jl6 1 hour ago
      Not sure about the using less electricity part. With batching, it’s more efficient to serve multiple users simultaneously.
      • TeMPOraL 1 hour ago
        Indeed. Data centers have so many ways and reasons to be much more energy-efficient than local compute it's not even funny.
    • karimf 2 hours ago
      Depending on the use case, the future is already here.

      For example, last week I built a real-time voice AI running locally on iPhone 15.

      One use case is people learning to speak English. The STT is quite good and the small LLM is enough for basic conversation.

      https://github.com/fikrikarim/volocal

      • Barbing 1 hour ago
        Brilliant. Hope to see you in the App Store!
        • karimf 1 hour ago
          Oh thank you! I wasn’t sure if it was worth submitting to the app store since it was just a research preview, but I could do it if people want it.
    • amelius 27 minutes ago
      LLMs in silicon are the future. It won't be long until you can just plug an LLM chip into your computer and talk to it at 100x the speed of current LLMs. Capability will be lower but the speed will make up for it.
      • theshrike79 11 minutes ago
        I'm expecting someone to come up with an LLM version of the Coral USB Accelerator: https://www.coral.ai/products/accelerator

        Just plug a stick into your USB-C port, or add an M.2 or PCIe board, and you'll get dramatically faster AI inference.

    • thih9 1 hour ago
      > it also would use less electricity

      How would it use less electricity? I’d like to learn more.

      • jychang 1 hour ago
        That's completely not true. LLM on device would use MORE electricity.

        Service providers that do batch>1 inference are a lot more efficient per watt.

        Local inference can only do batch=1 inference, which is very inefficient.

    • pezgrande 3 hours ago
      You could argue that the only reason we have good open-weight models is that companies are trying to undermine the big dogs, and they are spending millions to make sure they don't get too far ahead. If the bubble pops, there won't be an incentive to keep doing it.
      • aurareturn 3 hours ago
        I agree. I can totally see open source LLMs in the future turning into paying a lump sum for the model. Many will shut down. Some will turn into closed source labs.

        When VCs inevitably ask their AI labs to start making money or shut down, those free open source LLMs will cease to be free.

        Chinese AI labs have to release free open source models because they distill from OpenAI and Anthropic. They will always be behind. Therefore, they can't charge the same prices as OpenAI and Anthropic. Free open source is how they can get attention and how they can stay fairly close to OpenAI and Anthropic. They have to distill because they're banned from Nvidia chips and TSMC.

        Before people tell me Chinese AI labs do use Nvidia chips, there is a huge difference between using older gimped Nvidia H100 (called H20) chips or sneaking around Southeast Asia for Blackwell chips and officially being allowed to buy millions of Nvidia's latest chips to build massive gigawatt data centers.

        • pezgrande 2 hours ago
          > have to release free open source models because they distill from OpenAI and Anthropic

          They don't really have to though, they just need to be good enough and cheaper (even if distilled). That being said, it is true they are gaining a lot of visibility (especially Qwen) because of being open-source (well, open-weight).

          Hardware-wise, it seems they will catch up in 3-5 years (Nvidia is kind of irrelevant; what matters is the node).

          • aurareturn 1 hour ago
            I highly doubt they can catch up in 3-5 years to Nvidia.

            Chips take about 3 years to design. Do you think China will have Feynman-level AI systems in 3 years?

            I think in 3 years, they'll have H200-equivalent at home.

        • spiderfarmer 3 hours ago
          “They will always be behind”

          Car manufacturers said the same.

          • aurareturn 2 hours ago
            It did take decades to catch up with and surpass US car makers, right?
            • seanmcdirmid 2 hours ago
              About 2.5 decades from the start of the JVs, but they did it. Semiconductors and jet turbines are really the last two tech trees that China has yet to master.
              • aurareturn 1 hour ago
                Right. When I said "they'll always be behind", I meant in the next 5-10 years. They're gated by EUV tech. And once they have EUV tech, they need to scale up chip manufacturing.
              • Barbing 1 hour ago
                Which might they master first?
      • Lio 2 hours ago
        This seems to be somewhat similar to web browsers.

        I could see the model becoming part of the OS.

        Of course Google and Microsoft will still want you to use their models so that they can continue to spy on you.

        Apple, AMD and Nvidia would sell hardware to run their own largest models.

      • mirekrusin 2 hours ago
        You can have a viable business model around open-weight models where you offer fine-tuning for a fee.
      • Eufrat 3 hours ago
        [dead]
    • overfeed 2 hours ago
      > It's just a matter of getting the performance good enough.

      Who will pay for the ongoing development of (near-)SoTA local models? The good open-weight models are all developed by for-profit companies - you know how that story will end.

    • miki123211 1 hour ago
      > would use less electricity

      Sorry to shatter your bubble, but this is patently false: LLMs are far more efficient on hardware that serves many requests at once.

      There's also the (environmental and monetary) cost of producing overpowered devices that sit idle when you're not using them, in contrast to a cloud GPU, which can be rented out to whoever needs it at a given moment, potentially at a lower cost during periods of lower demand.

      Many LLM workloads aren't even that latency sensitive, so it's far easier to move them closer to renewable energy than to move that energy closer to you.

      • zozbot234 12 minutes ago
        > LLMs are far more efficient on hardware that simultaneously serves many requests at once.

        The LLM inference itself may be more efficient (though this may be impacted by different throughput vs. latency tradeoffs; local inference makes it easier to run with higher latency) but making the hardware is not. The cost for datacenter-class hardware is orders of magnitude higher, and repurposing existing hardware is a real gain in efficiency.

      • ysleepy 38 minutes ago
        I'm actually not sure that's true. Apart from people buying the device with or without the neural accelerator, the perf/watt could be on par with or better than the big iron. The efficiency sweet spot is usually below the peak performance point, see big.LITTLE architectures etc.
      • kortilla 1 hour ago
        Well this is an article about running on hardware I already have in my house. In the winter that’s just a little extra electricity that converts into “free” resistive heating.
    • nikanj 1 hour ago
      That also means sending every user a copy of the model that you spent billions training. The current model (running the models on the vendor's side) makes it much easier to protect that investment.
    • gedy 4 hours ago
      Man, I really hope so. As much as I like Claude Code, I hate the company paying for it and tracking your usage, the bullshit management control, etc. I feel like I'm training my replacement. Things feel like they're tightening up rather than giving us more power and freedom.

      On device, I would gladly pay for good hardware - it's my machine and I'm using it as I see fit, like an IDE.

      • aurareturn 3 hours ago
        When local LLMs get good enough for you to use delightfully, cloud LLMs will have gotten so much smarter that you'll still use it for stuff that needs more intelligence.
        • gedy 3 hours ago
          True, but I'm already producing code/features faster than the company knows what to do with (even though every company says "omg we need this yesterday", etc.). Even before AI, coding was basically the same.

          Coding tools that free up my time are very nice.

    • aurareturn 3 hours ago
      It isn't going to replace cloud LLMs since cloud LLMs will always be faster in throughput and smarter. Cloud and local LLMs will grow together, not replace each other.

      I'm not convinced that local LLMs use less electricity either. Per token at the same level of intelligence, cloud LLMs should run circles around local LLMs in efficiency. If they don't, what are we paying hundreds of billions of dollars for?

      I think local LLMs will continue to grow and there will be a "ChatGPT moment" for them when good enough models meet good enough hardware. We're not there yet though.

      Note, this is why I'm big on investing in chip manufacturing companies. Not only are they completely maxed out due to cloud LLMs, but soon they will be doubly maxed out having to replace local computer chips with ones suited for AI inference. This is a massive transition and will fuel another chip manufacturing boom.

      • raincole 2 hours ago
        Yep. People were claiming DeepSeek was "almost as good as SOTA" when it came out. Local will always be one step away, like fusion.

        It's just wishful thinking (and hatred towards American megacorps). Old as the hills. Understandable, but not based on reality.

        • kortilla 1 hour ago
          Don’t try to draw trend lines for an industry that has existed for <5 years.
      • virtue3 3 hours ago
        We are 100% there already. In the browser.

        The WebGPU model in my browser on my M4 Pro MacBook was as good as ChatGPT 3.5 and doing 80+ tokens/s.

        Local is here.

        • AndroTux 2 hours ago
          Sir, ChatGPT 3.5 is more than 3 years old, it's running on your bleeding-edge M4 Pro hardware, and this only proves the previous commenter's point.
        • AugSun 2 hours ago
          It works really well for "You're a helpful assistant / Hi / Hello there, how may I help you today?" Anything else (especially in a non-English language) and you will see the limitations yourself. Just try it.
      • mirekrusin 2 hours ago
          A local RTX 5090 is actually faster than an A100/H100.
        • aurareturn 1 hour ago
          It's a $4,000 GPU with 32GB of VRAM and needs a 1,000 watt PSU. It's not realistic for the masses.

          If it has something like 80GB of VRAM, it'll cost $10k.

            The actual local LLM chip is Apple Silicon, starting with the M5 generation and its matmul acceleration in the GPU. You can run a good model on an M5 Max 128GB system, with good prompt processing and token generation speeds. Good enough for many things. Apple stumbled upon a huge advantage in local LLMs through its unified memory architecture.

            Still not for the masses, not cheap, and not great though. It's going to take years to slowly bring local LLMs to general mass-market computers.

      • hrmtst93837 2 hours ago
        You're assuming throughput sets the value, but offline use and privacy change the tradeoff fast.
        • aurareturn 1 hour ago
          Yea I get that there will always be demand for local waifus. I never said local LLMs won't be a thing. I even said it will be a huge thing. Just won't replace cloud.
      • AugSun 3 hours ago
        Looking at the downvotes, I feel good about the SDE future in 3-5 years. We will have a swamp of "vibe-experts" who won't be able to pay $100K a month for CC. Meanwhile, people who still remember how to code in Vim will (slowly) get back to pre-COVID TC levels.
        • QuantumNomad_ 3 hours ago
          What is CC and TC? I have not heard these abbreviations (except for CC to mean credit card or carbon copy, neither of which is what I think you mean here).
          • Ericson2314 3 hours ago
            I figured it out from context clues

            CC: Claude Code

            TC: total comp(ensation)

            • AugSun 2 hours ago
              Thank you for clarifying! (I had no idea it needs to be explained, sorry.)
  • Yukonv 1 hour ago
    Good to see Ollama catching up with the times for inference on Mac. MLX-powered inference makes a big difference, especially on M5, as their graphs point out. What has really been a game changer for my workflow is using https://omlx.ai/ which has SSD KV cold caching. I no longer have to worry about a session falling out of memory and needing to prefill again. Combine that with the M5 Max prefill speed and more time is spent on generation than waiting for a 50k+ context window to process.
  • LuxBennu 3 hours ago
    Already running Qwen 70B 4-bit on an M2 Max 96GB through llama.cpp and it's pretty solid for day-to-day stuff. The MLX switch is interesting because Ollama was basically shelling out to llama.cpp on Mac before, so native MLX should mean better memory handling on Apple Silicon. Curious to see how it compares on the bigger models vs the GGUF path.
    • zozbot234 17 minutes ago
      They initially messed up this launch and overwrote some of the GGUF models in their library, making them non-downloadable on platforms other than Apple Silicon. Hopefully that gets fixed.
  • harel 40 minutes ago
    What would be a non-Mac computer to run these models locally at the same performance profile? Any similar Linux ARM-based computers that can reach the same level?
    • sgt 32 minutes ago
      Not even close. If you want to run this on PCs you need to get a GPU like the 5090, but that's still not the same cost per token, and it will be less reliable and use a lot more power. Right now the Apple Silicon machines are the most cost effective per token and per watt.
  • robotswantdata 1 hour ago
    Why are people still using Ollama? Serious question.

    Lemonade or even llama.cpp are much better optimised and arguably just as easy to use.

  • codelion 4 hours ago
    How does it compare to some of the newer MLX inference engines, like optiq, that support turboquantization? https://mlx-optiq.pages.dev/
  • mfa1999 3 hours ago
    How does this compare to llama.cpp in terms of performance?
    • solarkraft 2 hours ago
      MLX is a bit faster (low double digit percentage), but uses a bit more RAM. Worthwhile tradeoff for many.
      • ysleepy 19 minutes ago
        On my M4 Pro, MLX gets almost 2x the tok/s.
  • dial9-1 4 hours ago
    Still waiting for the day I can comfortably run Claude Code with local LLMs on macOS with only 16GB of RAM.
    • gedy 4 hours ago
      How close is this? It says it needs 32GB min?
      • HDBaseT 3 hours ago
        You can run Qwen3.5-35B-A3B on 32GB of RAM, sure, although to get 'Claude Code' performance (by which I assume he means Sonnet- or Opus-level models in 2026), it will likely be a few years before that's runnable locally (with reasonable hardware).
        • Foobar8568 3 hours ago
          I fully agree. I run that one with Q4 on my MBP, and the performance (including quality of response) is a letdown.

          I am wondering how people can rave so much about local "small device" LLMs vs what Codex or Claude Code are capable of.

          Sadly there is too much hype around local LLMs; they look great for 5-minute tests and that's it.

  • AugSun 3 hours ago
    "We can run your dumbed down models faster":

    > The use of NVFP4 results in a 3.5x reduction in model memory footprint relative to FP16 and a 1.8x reduction compared to FP8, while maintaining model accuracy with less than 1% degradation on key language modeling tasks for some models.
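
    (For what it's worth, the numbers roughly check out: assuming NVFP4 stores ~4-bit values plus a shared FP8 scale per 16-element block, that's about 4.5 bits per weight, so 16/4.5 ≈ 3.5x vs FP16 and 8/4.5 ≈ 1.8x vs FP8.)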

  • puskuruk 2 hours ago
    Finally! My local infra has been waiting for this for months!
  • darshanmakwana 1 hour ago
    Really nice to see this!
  • brcmthrowaway 3 hours ago
    What is the difference between Ollama, llama.cpp, ggml and gguf?
    • benob 3 hours ago
      Ollama is a user-friendly UI for LLM inference. It is powered by llama.cpp (or a fork of it) which is more power-user oriented and requires command-line wrangling. GGML is the math library behind llama.cpp and GGUF is the associated file format used for storing LLM weights.
      • redmalang 1 hour ago
        I've found llama.cpp (as I understand it, Ollama now uses their own version of this) to work much better in practice: faster and much more flexible.
    • xiconfjs 3 hours ago
      Ollama on macOS is a one-click solution with stable one-click updates. Happy so far. But MLX support was the only missing piece for me.
      • yard2010 1 hour ago
        Can you please write about your hardware?
  • techpulselab 44 minutes ago
    [dead]
  • charlotte12345 1 hour ago
    [dead]
  • firekey_browser 2 hours ago
    [dead]
  • charlotte12345 1 hour ago
    [flagged]
    • universa1 1 hour ago
      I am curious: is the performance gap here between x86 CPU inference and Apple Silicon, or is it, in an (IMHO) more apples-to-apples comparison, e.g. AMD Strix Halo vs Apple Silicon?

      I would expect "pure" CPU inference to be behind, but an approach like Strix Halo/DGX Spark to be much closer.