Optimize for change, not application performance

(echooff.dev)

33 points | by lo1tuma 2 days ago

7 comments

  • po1nt 3 hours ago
    Author fails to acknowledge that there are many fields where we ship only once, and we should strive toward that if we want to avoid running firmware updates on our ultrasonic knives.

    Even while we talk about maintainability, we all admire the Fast Inverse Square Root algorithm.

    Optimize for what best serves your purpose. If you have high team fluctuation, optimize for readability. If you develop a spacecraft, optimize for safety. If you ship audio gear, optimize for latency.

    • bunderbunder 18 minutes ago
      I do wonder if sometimes these things are set up as false dilemmas, though.

      I skimmed through NASA’s coding manual a while back, and one of the things that I took away from it was that optimizing for readability is optimizing for safety.

      It’s just that it’s hard for me to see it as readability because I’m not familiar with the problem domain. For example, their ban on reentrancy would definitely require me to rewire my brain a bit. But, for what they are doing, that is a readability decision: they needed to be able to guarantee that a spacecraft’s firmware couldn’t experience a stack overflow, and reentrant code makes it much harder to reason about stack growth.

    • account42 3 hours ago
      > If you have high team fluctuation, optimize for readability.

      Or better: If you have high team fluctuation, optimize that first so your team is actually effective.

      • po1nt 2 hours ago
        You can't fix faulty management as a developer. You can structure a code base around it.
        • AlotOfReading 13 minutes ago
          A traditional engineer can't force purchasing to buy the right parts, but that doesn't mean they should make do by duct-taping sheet metal onto the reactor as a substitute. Technical workarounds are a poor solution to social problems.
        • 0123456789ABCDE 1 hour ago
          But do we care, if management doesn't?
    • doctorpangloss 14 minutes ago
      > we all admire Fast Inverse Square algorithm.

      I don't. That guy basically made the same game over and over again while nearly everyone else was innovating in game design, reaching new audiences, etc. That's what change is about!

      And then he blows up the next thing he's put in charge of (VR) and blames everyone but himself. How many billions did he get, and he still couldn't figure it out? Every bit of ethos from that guy was bad; it's not just the one little ethos of the hardcore little optimization algorithm, it's every ethos.

    • KptMarchewa 1 hour ago
      > ultrasonic knives

      Wow, TIL.

  • kikimora 1 hour ago
    If done right, optimizing for performance also achieves readability and maintainability. There is an edge case when you rewrite a loop with SIMD or use branchless programming; it is rare, yet it is the focus of so many articles.

    I do see a lot of systems that are both slow and hard to maintain because people focus on maintenance. They create abstractions upon abstractions in the name of maintainability, only to find later that it does not work well with their hardware and infrastructure, prompting more complexity in the name of performance.

    • bunderbunder 11 minutes ago
      I’ve never known towering abstractions to be good for maintainability, anyway. It sounds great on paper, but in practice they often end up being extra mechanism you have to think your way through on your way to understanding a problem. Or they constrain the set of possible solutions you can undertake without major refactoring.

      That isn’t to say abstractions are inherently harmful. But when I see codebases that really go nuts for them, it’s rarely the case that they were all carefully considered before implementation.

    • nijave 1 hour ago
      Nothing like waiting 20 minutes for a test suite that should have taken 2
  • hotfrost 55 minutes ago
    AI slop article with a few words highlighted in color or bold.
  • lo1tuma 2 days ago
    I mostly agree with the author that optimizing a code base for change should be the number 1 priority, but I think it is a different topic than, for example, application performance. And it is not an either-or ... you can actually do both; the question, as always, is whether you should do it at all.

    - Optimizing for change is basically the key principle of agility. Too often it is confused by many people with being fast in delivery by default, just because you apply agile patterns. This is not true. You can be faster than, e.g., waterfall, but most of the time you will be slower. But that is not the point. The point is that you can adapt the plan very quickly. So instead of strictly following a 6-month plan, you can change plans on a daily basis and go in a completely different direction if business demands it.

    - Application performance is actually not a "tech" thing. So I don't understand why so many developers pre-optimize for application performance without being asked to do so. Application performance is part of UX (user experience). There are studies out there that say it is sometimes even beneficial to be slow and show a loading indicator, because it can increase trust from users: they think "Hey look... the application is calculating something to fulfill my needs", instead of seeing the answer instantly. In any case, application performance should be driven by business and user needs, not by engineers who feel a personal obligation to do this. And furthermore, application performance should never be optimized blindly. Always benchmark the application and work on the bottleneck only.

    • account42 3 hours ago
      > There are studies out there that say it is sometimes even beneficial to be slow and show a loading indicator, because it can increase trust from users: they think "Hey look... the application is calculating something to fulfill my needs", instead of seeing the answer instantly.

      Users being susceptible to dark patterns doesn't mean that dark patterns are something an engineer should see as acceptable.

      > Always benchmark the application and work on the bottleneck only.

      That's how you end up with software that's slow due to a million abstractions. Easily benchmarked bottlenecks can give you quick wins, but that doesn't mean you should stop there or have no foresight to optimize things ahead of time where it makes sense. Your cost-benefit calculation also needs to take into account that optimization decisions (both architecture and lower-level implementation details) are much more costly to make after the code has already been written, which is why with today's YOLO software they often don't get done at all.

    • 201984 1 hour ago
      > There are studies out there that say it is sometimes even beneficial to be slow and show a loading indicator, because it can increase trust from users,

      And I as a user absolutely hate programs that do this. Put an "updated" message with a timestamp if you want, but don't pointlessly waste my time.

  • locknitpicker 3 hours ago
    This blog post reads like AI slop.

    I doubt that the author even read the result, as its readability is subpar. In general, AI slop is more readable than this soup of bullet points.

    This feels like eternal September, but powered by LLMs.

    • joaohaas 1 hour ago
      Welcome to modern HN.
    • add-sub-mul-div 1 hour ago
      It's a new account that has only spammed the site with submissions from this one domain that no one else has ever submitted. This, along with it being slop, is becoming the default submission profile.
  • thesuperevil 1 hour ago
    [flagged]
  • asn_tech_2019 1 hour ago
    [dead]