• thenextguy@lemmy.world · 24 hours ago

    Usually people say “premature optimization is the root of all evil” to say “small optimizations are not worth it”

    No, that’s not what people mean. They mean measure first, then optimize. Small optimizations may or may not be worth it. You don’t know until you measure using real data.
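    A minimal Rust sketch of what “measure first, using real data” can look like; `load_real_dataset` and `process` are hypothetical stand-ins for your own code, and a real investigation would normally reach for a profiler or a benchmark harness rather than a one-off timer:

    ```rust
    use std::time::Instant;

    // Hypothetical stand-in: in practice this would load production-like input,
    // not a toy example.
    fn load_real_dataset() -> Vec<u64> {
        (0..1_000_000).collect()
    }

    // The code path whose cost we are unsure about.
    fn process(data: &[u64]) -> u64 {
        data.iter().copied().sum()
    }

    fn main() {
        let data = load_real_dataset();

        let start = Instant::now();
        let result = process(&data);
        let elapsed = start.elapsed();

        // Only after seeing numbers like these do you decide whether an
        // optimization is worth the effort.
        println!("process() took {:?} for {} records (checksum {})", elapsed, data.len(), result);
    }
    ```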

    • boonhet@sopuli.xyz · 21 hours ago

      Exactly. A 10% decrease in run time for a method is a small optimization most of the time, but whether it’s premature depends on whether the optimization has other consequences. Maybe you lose functionality in some edge cases, or maybe it’s actually 10x slower in some edge case. Maybe what you thought was a bit faster is actually slower in most cases. That’s why you measure when you’re optimizing.

      Maybe you spent 3 hours profiling and made a loop 10% faster, when you could have trivially rewritten it to run log n times instead of n times…
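      A concrete, hypothetical Rust illustration of that last point: if the data happens to be kept sorted, a lookup that walks all n elements can be rewritten as a binary search that takes log n steps, which dwarfs a 10% win on the loop body:

      ```rust
      // O(n): checks every element in the worst case.
      fn contains_linear(sorted: &[u32], target: u32) -> bool {
          sorted.iter().any(|&x| x == target)
      }

      // O(log n): halves the remaining range on every step (requires sorted input).
      fn contains_binary(sorted: &[u32], target: u32) -> bool {
          sorted.binary_search(&target).is_ok()
      }

      fn main() {
          let sorted: Vec<u32> = (0..10_000_000).map(|i| i * 2).collect();
          assert!(contains_linear(&sorted, 19_999_998));
          assert!(contains_binary(&sorted, 19_999_998));
          assert!(!contains_binary(&sorted, 3));
      }
      ```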

    • FizzyOrange@programming.dev · 16 hours ago

      They mean measure first, then optimize.

      This is also bad advice. In fact I would bet money that nobody who says that actually always follows it.

      Really there are two things that can happen:

      1. You are trying to optimise performance. In this case you obviously measure using a profiler, because that’s by far the easiest way to find the slow spots in a program. It’s not the only way though! Profiling only really works for micro-optimisations: you can’t profile your way to architectural improvements. Nicholas Nethercote’s posts about speeding up the Rust compiler are a great example of this.

      2. Writing new code. Almost nobody measures code while they’re writing it. At best you’ll have a CI benchmark (the Rust compiler has this). But while you’re actually writing the code it’s mostly fine just to use your intuition. Preallocate vectors. Don’t write O(N^2) code. Use a HashSet, etc. There are plenty of things that good programmers can be sure enough are the right way to do it that you don’t need to constantly second-guess yourself.
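      A small Rust sketch of those intuition-level defaults (the function names are made up for illustration); none of this needs a profiler to justify:

      ```rust
      use std::collections::HashSet;

      // Preallocate when the final size is known, instead of growing the vector
      // repeatedly as it fills up.
      fn squares(n: usize) -> Vec<usize> {
          let mut out = Vec::with_capacity(n);
          for i in 0..n {
              out.push(i * i);
          }
          out
      }

      // Build a HashSet once and do O(1) average membership checks, giving
      // roughly O(N + M) instead of the O(N * M) nested-loop version.
      fn common_items(a: &[String], b: &[String]) -> Vec<String> {
          let lookup: HashSet<&str> = b.iter().map(String::as_str).collect();
          a.iter()
              .filter(|s| lookup.contains(s.as_str()))
              .cloned()
              .collect()
      }

      fn main() {
          assert_eq!(squares(4), vec![0, 1, 4, 9]);
          let a = vec!["x".to_string(), "y".to_string()];
          let b = vec!["y".to_string(), "z".to_string()];
          assert_eq!(common_items(&a, &b), vec!["y".to_string()]);
      }
      ```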

    • ugo@feddit.it · 20 hours ago

      No, that’s what good programmers say (measure first, then optimize). Bad programmers use it to mean it’s perfectly fine to prematurely pessimize code because they can’t be bothered to spend 10 minutes coming up with something that isn’t O(n^2). Their set is only 5 elements long, so it’s “not a big deal” and “it’s fine”, even though under real load the set will be hundreds or thousands of elements long.

      It would be almost funny if it didn’t happen every single time.
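      A hypothetical Rust sketch of that pattern: both functions below are correct and indistinguishable on a 5-element set, but the quadratic one falls over once the input reaches thousands of elements under real load:

      ```rust
      use std::collections::HashSet;

      // O(n^2): compares every pair. Harmless at 5 elements, painful at 100,000.
      fn has_duplicates_quadratic(items: &[u64]) -> bool {
          for i in 0..items.len() {
              for j in (i + 1)..items.len() {
                  if items[i] == items[j] {
                      return true;
                  }
              }
          }
          false
      }

      // O(n) on average: roughly the same ten minutes of effort to write.
      fn has_duplicates_linear(items: &[u64]) -> bool {
          let mut seen = HashSet::with_capacity(items.len());
          items.iter().any(|x| !seen.insert(x))
      }

      fn main() {
          let small = [1, 2, 3, 4, 5];
          let large: Vec<u64> = (0..100_000).collect();
          assert!(!has_duplicates_quadratic(&small));
          // The quadratic version would need roughly 5 billion comparisons here.
          assert!(!has_duplicates_linear(&large));
      }
      ```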