Endian wars and anti-portability: this again?

(dalmatian.life)

33 points | by awilfox 1 day ago

13 comments

  • Morpheus_Matrix 27 minutes ago
    The maintenance burden argument is the one that actually lands for me. The "endian port maintainer" model sounds reasonable in theory, but CJefferson's description above is basically how it plays out in practice every single time. You end up owning the bugs even though you never wanted the port.

    That said, I think there's a meaningful distinction between code that never touches wire formats or serialization (where endianness barely matters if you use ntohl/htonl correctly) and code that does its own binary parsing. The second category is where genuine bugs hide and where BE testing actually catches real issues, not just hypothetical ones.
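    That distinction is easy to demonstrate with a minimal Python sketch (the `parse_header_*` names are hypothetical; `struct`'s `>` prefix plays the role that `ntohl`/`htonl` would in C):

```python
import struct

def parse_header_safe(buf: bytes) -> int:
    # Explicit network (big-endian) byte order: same result on any host.
    (length,) = struct.unpack(">I", buf[:4])
    return length

def parse_header_unsafe(buf: bytes) -> int:
    # Native byte order ("=I", like a raw pointer cast in C): the result
    # depends on the host CPU -- this is the code BE testing catches.
    (length,) = struct.unpack("=I", buf[:4])
    return length

wire = struct.pack(">I", 0x01020304)
print(parse_header_safe(wire))    # always 16909060 (0x01020304)
print(parse_header_unsafe(wire))  # 16909060 on BE hosts, 67305985 on LE
```

    The `=` variant is exactly the kind of code that looks fine on the author's machine and only a BE test run will flag.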

    QEMU userspace emulation has gotten pretty turnkey for CI at this point, so the "I don't have BE hardware" excuse is a lot weaker than it was five years ago. A Dockerfile with qemu-user-static and a few extra CI minutes isn't a huge ask. The harder problem is exactly what CJefferson said: when BE CI goes red, you now own that problem whether you want to or not, because nobody can fix it faster than you, since nobody knows the codebase like you do.

    If I were starting a new project today, I'd probably limit it to x86-64 and ARM too. The platform fragmentation of the early 2000s that made portability worth the pain just isn't really there anymore.

  • josephg 2 hours ago
    > If the community is offering you a port to an architecture, whether it is 4 days old or 40 years old, that means the community actively wants to use your software on it – otherwise, nobody would put in the effort. Ports like this are hard, and authors like me already know we are fighting an uphill battle just trying to make upstream projects care.

    I've had plenty of open-source contributions over the years for some feature or other I don't care about. I used to accept these pull requests. But all too often, the person who wrote the patch disappears. Then for years I receive bug reports about a feature I didn't write and don't care about. What do I do with those reports? Ignore them? Fix the bugs myself? Bleh.

    I don't publish open-source projects so I can be a volunteer maintainer, in perpetuity, for someone else's feature ideas.

    If it's a small change, or something I would have done myself eventually, then fine. But there is a very real maintenance burden that comes from maintaining support for weird features and rare computer architectures. As this article points out, you need to actively test on real hardware to make sure code doesn't rust. Unfortunately, I don't have a pile of exotic computers around that I can use to test my software. And you need to test software constantly, or there's a good chance you'll break something and not notice.

    That said, is there an easy way to run software in "big endian" mode on any modern computer? I'd happily run my test suite in big endian mode if I could do so easily.

    • ori_b 1 hour ago
      > What do I do with those reports? Ignore them? Fix the bugs myself? Bleh.

      "I don't have access to a test environment, but if you need help writing a fix, let me know" is a perfectly reasonable response.

    • zamadatix 2 hours ago
      > That said, is there an easy way to run software in "big endian" mode on any modern computer?

      QEMU userspace emulation is usually the easiest for most "normal" programs IMO. Once you set it up you just run the other architecture binaries like normal binaries on your test system (with no need to emulate a full system). Very much the same concept as Prism/Rosetta on Windows/macOS for running x86 apps on ARM systems except it can be any target QEMU supports.
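      For CI specifically, a minimal sketch of this setup (assuming GitHub Actions and the `docker/setup-qemu-action` helper; s390x chosen as a readily available big-endian target, and `make test` as a stand-in for the project's own test command):

```yaml
name: big-endian-tests
on: [push]

jobs:
  test-big-endian:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Registers binfmt_misc handlers so foreign-arch binaries run via QEMU
      - uses: docker/setup-qemu-action@v3
      # debian:stable is a multi-arch image that includes linux/s390x
      - name: Run tests under emulated s390x
        run: |
          docker run --rm --platform linux/s390x -v "$PWD:/src" -w /src \
            debian:stable sh -c "apt-get update -qq && \
            apt-get install -y -qq build-essential && make test"
```

      User-mode emulation like this only traps syscalls rather than emulating a whole machine, so it is much faster than full-system QEMU, though still slower than native.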

  • jandrewrogers 2 hours ago
    The glaring omission from that long post is the term "opportunity cost".

    Ensuring a code base indefinitely supports arbitrary architectures carries a substantial architectural cost. Furthermore, it is difficult to guarantee testing going forward, or that the toolchains available for those architectures will continue to evolve with your code base. I'm old enough to have lived this reality back when it was common. It sucked, hard. I've also written a lot of code that was portable to some very weird silicon, so I know what that entails. It goes far beyond endianness; that is just one aspect of silicon portability.

    The expectation that people should volunteer their time for low ROI unpleasantness that has a high risk of being unmaintainable in the near future is unreasonable. There are many other facets of the code base where that time may be better invested. That's not "anti-portable", it is recognition of the potential cost to a large base of existing users when you take it on. The Pareto Principle applies here.

    Today, I explicitly only support two architectures: 64-bit x86 and ARM (little-endian). It is wonderful that we have arrived at the point where this is a completely viable proposition. In most cases the cost of supporting marginal users on rare architectures in the year 2026 is not worth it. The computing world is far, far less fragmented than it used to be.

    • pjc50 5 minutes ago
      > Today, I explicitly only support two architectures: 64-bit x86 and ARM (little-endian). It is wonderful that we have arrived at the point where this is a completely viable proposition. In most cases the cost of supporting marginal users on rare architectures in the year 2026 is not worth it.

      This - and efforts to reintroduce BE should be resisted in the same way as people who want to drive on the other side of the road for pure whimsy.

      I note that we've mostly converged on one set of floating point semantics as well, although across a range of bit widths.

    • nextaccountic 1 hour ago
      Why not wasm?
      • jltsiren 52 minutes ago
        Wasm is in an awkward place, because Memory64 is widely but not universally supported. Which means that if you want to support Wasm, you probably have to support 32-bit environments in general. Depending on the project, that can be trivial, but it may also require you to rewrite a lot of low-level code in the project and its dependencies.
    • Brian_K_White 1 hour ago
      [flagged]
  • userbinator 2 hours ago
    If you want to keep software working on systems with a 9-bit byte or other weirdness, that's entirely on you. No one else needs or wants the extra complexity. Little endian is logical and won, big endian is backwards and lost for good reason. (Look at how arbitrary precision arithmetic is implemented on a BE system; chances are it's effectively LE anyway.)
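    The parenthetical about bignums is easy to see in code. A sketch in Python (a textbook base-256 addition, not any particular library's implementation): because carries propagate from the least significant digit, bignum limbs are naturally stored least-significant-first, i.e. little-endian, regardless of the host CPU.

```python
def add_bignum(a: list[int], b: list[int]) -> list[int]:
    """Add two arbitrary-precision numbers stored as base-256 digit lists.

    Digits are little-endian: a[0] is the least significant byte. Carry
    propagation walks the list from index 0 upward, which is why bignum
    limbs end up least-significant-first even on BE machines.
    """
    out, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        out.append(s & 0xFF)
        carry = s >> 8
    if carry:
        out.append(carry)
    return out

# 255 + 1 = 256 -> digits [0, 1] (least significant byte first)
print(add_bignum([255], [1]))  # [0, 1]
```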
    • yjftsjthsd-h 1 hour ago
      > Little endian is logical and won, big endian is backwards and lost for good reason.

      No, BE is logical, but LE is efficient (for machines).

      • lowbloodsugar 1 hour ago
        No, BE is intuitive for humans who write digits with the highest power on the left.

        LE is logical which is also why it is more efficient and more intuitive for humans once they get past “how we write numbers with a pencil”.

        • yjftsjthsd-h 1 hour ago
          No, BE is logical because it puts bits and bytes in the same order. That humans use BE is also nice but secondary to that. I don't have strong feelings about whether fifty-one thousand nine hundred sixty-six is written as 0xcafe or 0xefac, but I feel quite comfortable suggesting that 0xfeca is absurd. (FWIW, this is a weak argument for what computers should do; if LE is more efficient for machines then let them use it)

          Edit: switched example to hex

          Edit2: actually this is still slightly out of whack, but I don't feel like switching to binary so take it as a loose representation rather than literal
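          A quick Python `struct` sketch of the example above (0xcafe is fifty-one thousand nine hundred sixty-six; the hex strings are what a byte-by-byte memory dump would show):

```python
import struct

n = 0xCAFE  # fifty-one thousand nine hundred sixty-six

be = struct.pack(">H", n)  # big-endian:    b'\xca\xfe'
le = struct.pack("<H", n)  # little-endian: b'\xfe\xca'

print(be.hex())  # 'cafe' -- dump reads like the written numeral
print(le.hex())  # 'feca' -- dump reads byte-reversed
```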

          • dataflow 26 minutes ago
            > No, BE is logical because it puts bits and bytes in the same order.

            This sounds confused. The "order" of bits is only an artifact of our human notation, not some inherent order. If you look at how an integer is implemented in hardware (say in a register or in combinational logic), you're not going to find the bits being reversed every byte.

          • zephen 34 minutes ago
            Your example is only for dumping memory.

            > this is a weak argument for what computers should do; if LE is more efficient for machines then let them use it

            Computers really don't care. Literally. Same number of gates either way. But for everything besides dumping it makes sense that the least significant byte and the least significant bit are numbered starting from zero. It makes intuitive mathematical sense.

        • zephen 1 hour ago
          > BE is intuitive for humans who write digits with the highest power on the left.

          But only because when they dump memory, they start with the lowest address, lol.

          Why don't these people reverse number lines and Cartesian coordinate systems while they're at it?

  • dataflow 19 minutes ago
    The whole thing rests on this assertion:

    > It is usually easy to write code that is endian-safe.

    And it is false.

    The tower collapses once you remove this base.

  • tjwebbnorfolk 37 minutes ago
    > and you refuse a community port to another architecture, you are doing a huge disservice to your community

    Someone who has a computer that my software can't run on isn't in my community. If they really want to use the software, they have two options: 1) get a different computer, or 2) maintain their own custom special port of my software forever.

    In other words, they have to JOIN the community if they want the BENEFITS of the community. It's not my job to extend my community to encompass every possible use case and hardware platform.

  • chmod775 39 minutes ago
    > Big endian systems store numbers the way us humans do: the largest number is written first.

    Obviously the author was just trying to give a quick example to aid visualization, but here's some nitpicking: I can probably come up with at least IV writing systems used by humans that don't use "big endian" for numbers. Or little endian either, really.

    Examples: Tally marks, Ancient Egyptian numerals, Hebrew and Attic numerals, and obviously Roman numerals.

    Also, lots of languages order words in written form somewhat... randomly (French, Danish, Old English, ...).

  • bastawhiz 2 hours ago
    I think the tricky thing here is that I simply don't have the time, patience, or resources to maintain this stuff. Let's say I have a LE-only project. Someone ports it to work on BE. Now it needs CI for BE. I write a patch in the future and the BE tests fail. Now I need to fix them. Potential contributors need to get the tests to pass. Who's using BE, though? Is the original porter even still using it?

    The author betrays their own point with the anecdote about 586 support: they had tests, the tests passed, but the emulator was buggy, masking the issue. Frankly, if even the Linux kernel can't find hardware to run the tests on an actual device, that says a lot. But it also shows that QEMU isn't a full substitute when the emulation itself is buggy. How is someone who runs a small project supposed to debug a BE issue when a user report comes in, if they might have to debug the emulator as well?

    For me, I'll always welcome folks engaging with my work. But I'll be hesitant to take on maintenance of anything that takes my attention away from delivering value to the overwhelming majority of my users, especially if the value of the effort disappears over time (e.g., because nobody is making those CPUs anymore).

  • smj-edison 2 hours ago
    > I happen to prefer big endian systems in my own development life because they are easier for me to work with, especially reading crash dumps.

    If hex editors were mirrored both left to right and right to left, would it be easier to read little endian dumps?

    • ronsor 1 hour ago
      xxd has the `-e` option for exactly this use case:

          -e          little-endian dump (incompatible with -ps,-i,-r).
  • RcouF1uZ4gsC 2 hours ago
    > In closing, let me reiterate this point so it is crystal clear. If you are a maintainer of a libre software project and you refuse a community port to another architecture, you are doing a huge disservice to your community and to your software’s overall quality. As the Linux kernel has demonstrated, you can accept new ports, and deprecate old ports, as community demands and interest waxes and wanes.

    Every feature has a cost and port to a different architecture has a huge cost in ongoing maintenance and testing.

    This is open source. The maintainer isn’t refusing a port. The maintainer is refusing to accept being a maintainer for that port.

    A person is always free to fork the open source project and maintain the port themselves as a fork.

    • nine_k 2 hours ago
      Hmm, if the author of the port cares, why wouldn't they become the maintainer of that port? This should be a two-way street.
      • CJefferson 2 hours ago
        In my experience, as someone who has gone through this as maintainer of two decent sized projects, that simply doesn't work.

        The author of the 'port' probably doesn't know your whole codebase like you, so they are going to need help to get their code polished and merged.

        For endian issues, the bugs are often subtle and can occur in strange places (it's hard to grep for 'someone somewhere made an endian assumption'), so you often get dragged into debugging.

        Now let's imagine we get everything working, CI set up, I make a PR which breaks the big-endian build. My options are:

        1) Start fixing endian bugs myself -- I have other stuff to do!

        2) Wait for my 'endian maintainer' to find and fix the bug -- might take weeks, they have other stuff to do!

        3) Just disable the endian tests in CI, eventually someone will come complain, maybe a debian packager.

        At the end of the day I have finite hours on this earth, and there are just so few big endian users -- I often think there are more packagers who want to make software work on their machine in a kind of 'pokemon-style gotta catch em all', than actual users.

  • lovich 2 hours ago
    I don’t have an opinion either way on this author's belief around the port being accepted upstream or not.

    I did, however, learn a lot googling some of the terms they dropped and finding out things like the PowerPC architecture getting an update as recently as 2025.

    Several of their references I knew from my first tech lead mentioning their own early career. I am surprised at how much still has active development.

  • zephen 2 hours ago
    > In closing, let me reiterate this point so it is crystal clear. If you are a maintainer of a libre software project and you refuse a community port to another architecture, you are doing a huge disservice to your community and to your software’s overall quality.

    Linus Torvalds disagrees. Vehemently.

    https://www.phoronix.com/news/Torvalds-No-RISC-V-BE

    > For those who don’t know, endianness is simply how the computer stores numbers. Big endian systems store numbers the way us humans do: the largest number is written first.

    Really, what's first? You're so keen on having the big end first, but when it comes to looking at memory, you look... starting at the little end of memory first??? What's up with that?

    > I happen to prefer big endian systems in my own development life because they are easier for me to work with, especially reading crash dumps.

    It always comes back to this. But that's not a good rationale for either the inconsistency of mixed-endianness, where the least significant bit is numbered zero but the most significant byte is numbered zero, or true big endianness, where the least significant bit of a number might be bit number 7 or 15, or even 31 or 63, depending on what size integer it is.

    > (Porting to different endianness can help catch obscure bugs.)

    Yeah, I'm sure using 9 bit bytes would catch bugs, too, but nobody does that either.

    • userbinator 2 hours ago
      BE was a huge mistake. Arabic numerals originated in a right-to-left language too.

      > depending on what size integer it is

      That's the worst part about BE: positional weights pick up a size-dependent term, plus a subtraction: 2^n for LE becomes 2^(l-n), and 256^N becomes 256^(L-N).
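      That size-dependence is easy to make concrete in Python (a sketch, assuming 0-based byte indices):

```python
def value_le(b: bytes) -> int:
    # Little-endian: byte i always has weight 256**i, whatever the width.
    return sum(byte << (8 * i) for i, byte in enumerate(b))

def value_be(b: bytes) -> int:
    # Big-endian: byte i has weight 256**(len(b)-1-i) -- it changes
    # whenever the integer is widened.
    L = len(b)
    return sum(byte << (8 * (L - 1 - i)) for i, byte in enumerate(b))

# Widening a LE value just appends zero bytes; existing bytes keep their
# weights. Widening a BE value re-weights every byte.
assert value_le(b"\x34\x12") == value_le(b"\x34\x12\x00\x00") == 0x1234
assert value_be(b"\x12\x34") != value_be(b"\x12\x34\x00\x00")
```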

      According to Linus, BE has been "effectively dead" for at least a decade: https://news.ycombinator.com/item?id=9451284

  • lowbloodsugar 1 hour ago
    I wrote a big long reply but I realized that there’s really no point arguing with these people. BE is wrong. We all know why. Some people are personally interested in BE and believe they are entitled to have everyone else incorporate their special interest into their code bases. Fuck them.