A bit old but still interesting

  • ExperimentalGuy@programming.dev · +2 · 13 hours ago

    Does anyone understand the Pearson coefficient part enough to explain it? I don’t really understand why they’re measuring correlation between memory, energy, and time that way, or how you’d interpret it.
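As a minimal illustration of what the coefficient captures: a Pearson r near +1 just means two metrics rise and fall together across languages. A sketch over hypothetical normalized scores (not the paper's data):

```python
# Pearson correlation by hand, to show what the paper's coefficient measures.
# The values below are hypothetical normalized scores, NOT the paper's data.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

energy = [1.00, 1.03, 1.34, 1.70, 1.98]   # hypothetical normalized energy
time_ms = [1.00, 1.04, 1.56, 1.85, 1.89]  # hypothetical normalized time

r = pearson(energy, time_ms)
# r near +1: languages that rank fast also tend to rank energy-efficient;
# r near 0: no linear relationship between the two metrics.
```

So a high energy/time coefficient supports "faster languages tend to use less energy", while a low energy/memory coefficient would say memory use doesn't track energy use.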

  • MonkderVierte@lemmy.zip · +16 · edited · 13 hours ago

    Normalized global results for Energy, Time, and Memory.

    | Energy (J) | Time (ms) | Memory (Mb) |
    | --- | --- | --- |
    | (c) C 1.00 | (c) C 1.00 | (c) Pascal 1.00 |
    | (c) Rust 1.03 | (c) Rust 1.04 | (c) Go 1.05 |
    | (c) C++ 1.34 | (c) C++ 1.56 | (c) C 1.17 |
    | (c) Ada 1.70 | (c) Ada 1.85 | (c) Fortran 1.24 |
    | (v) Java 1.98 | (v) Java 1.89 | (c) C++ 1.34 |
    | (c) Pascal 2.14 | (c) Chapel 2.14 | (c) Ada 1.47 |
    | (c) Chapel 2.18 | (c) Go 2.83 | (c) Rust 1.54 |
    | (v) Lisp 2.27 | (c) Pascal 3.02 | (v) Lisp 1.92 |
    | (c) Ocaml 2.40 | (c) Ocaml 3.09 | (c) Haskell 2.45 |
    | (c) Fortran 2.52 | (v) C# 3.14 | (i) PHP 2.57 |
    | (c) Swift 2.79 | (v) Lisp 3.40 | (c) Swift 2.71 |
    | (c) Haskell 3.10 | (c) Haskell 3.55 | (i) Python 2.80 |
    | (v) C# 3.14 | (c) Swift 4.20 | (c) Ocaml 2.82 |
    | (c) Go 3.23 | (c) Fortran 4.20 | (v) C# 2.85 |
    | (i) Dart 3.83 | (v) F# 6.30 | (i) Hack 3.34 |
    | (v) F# 4.13 | (i) JavaScript 6.52 | (v) Racket 3.52 |
    | (i) JavaScript 4.45 | (i) Dart 6.67 | (i) Ruby 3.97 |
    | (v) Racket 7.91 | (v) Racket 11.27 | (c) Chapel 4.00 |
    | (i) TypeScript 21.50 | (i) Hack 26.99 | (v) F# 4.25 |
    | (i) Hack 24.02 | (i) PHP 27.64 | (i) JavaScript 4.59 |
    | (i) PHP 29.30 | (v) Erlang 36.71 | (i) TypeScript 4.69 |
    | (v) Erlang 42.23 | (i) Jruby 43.44 | (v) Java 6.01 |
    | (i) Lua 45.98 | (i) TypeScript 46.20 | (i) Perl 6.62 |
    | (i) Jruby 46.54 | (i) Ruby 59.34 | (i) Lua 6.72 |
    | (i) Ruby 69.91 | (i) Perl 65.79 | (v) Erlang 7.20 |
    | (i) Python 75.88 | (i) Python 71.90 | (i) Dart 8.64 |
    | (i) Perl 79.58 | (i) Lua 82.91 | (i) Jruby 19.84 |

    Legend: (c) compiled, (v) virtual machine, (i) interpreted.
  • HelloRoot@lemy.lol · +16/-1 · edited · 18 hours ago

    tldr:

    • faster lang consumes less energy (more or less)

    opinionated tldr:

    • Java surprisingly good lets goooooooo Javaaaaaaa!
    • python so low, ew
    • IllNess@infosec.pub · +9 · 19 hours ago

      Java has been around a really long time and I was still surprised how well it did.

      I am shocked Fortran didn’t do better. I don’t code in Fortran, but I assumed languages closer to the machine would do well.

    • floofloof@lemmy.ca · +5 · edited · 19 hours ago

      Their conclusion in section 3.1:

      No, a faster language is not always the most energy efficient.

      That said, they really seem to be pointing out that there are exceptions to this rule, where the languages that solve a problem most quickly are not the ones that use the least energy doing it. If you step back from the detail of individual algorithms, speed and energy efficiency do seem broadly to vary together, as one would expect.

      • atzanteol@sh.itjust.works · +4/-1 · 18 hours ago

        That was a fascinating discovery. It seems Pascal and Fortran in particular fit into the “faster but less efficient energy-wise” category. I wonder what’s going on there.

      • HelloRoot@lemy.lol · +3 · edited · 18 hours ago

        If your target audience says “too lazy, didn’t read”, I think the bit that applies like a rule of thumb to most cases is more relevant, and has higher practical knowledge value, than the intricate details or an “it depends”.

        (Similar to how you can explain gravity with Newton instead of Einstein to keep it short, even though it is less precise or technically false.)

    • HelloRoot@lemy.lol · +8 · edited · 18 hours ago

      That’s not really their conclusion. That’s the Rosetta Code ranking. Table 4 is the one that summarizes results of many tests and includes a lot more languages.

    • neidu3@sh.itjust.works · +5/-3 · edited · 20 hours ago

      Python should be even further down - this list doesn’t account for the fact that you have to rewrite everything because “That’s not pythonic”.

      Perl should be higher up because it lets you just do the things with some weird smileys without jumping through hoops.

  • Pringles@sopuli.xyz · +5/-1 · 21 hours ago

    Rosetta Code Global Ranking (position, language):

    1 C

    2 Pascal

    3 Ada

    4 Rust

    5 C++, Fortran

    6 Chapel

    7 OCaml, Go

    8 Lisp

    9 Haskell, JavaScript

    10 Java

    11 PHP

    12 Lua, Ruby

    13 Perl

    14 Dart, Racket, Erlang

    15 Python

    • Kissaki@programming.dev · +3 · 15 hours ago

      1 C

      2 Pascal

      When you add a dot after the number it becomes a numbered list, and you don’t have to use a paragraph for each line.

      Alternatively, you can use a backslash (\) or two spaces at the end of a line to force a line break, so you can put one line after the other instead of requiring paragraphs.
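For example, the numbered form and the backslash form both keep the items on adjacent lines instead of separate paragraphs:

```
1. C
2. Pascal

C\
Pascal
```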

  • SuperFola@programming.dev · +8/-5 · 20 hours ago

    I find this paper false/misleading. They just translated one algorithm into many languages, without using the language constructs or specificities to make the algorithm perform decently.

    Also it doesn’t mean anything, as you aren’t just running your code. You are compiling/transpiling it, testing it, deploying it… and all those operations consume even more energy.

    I’d argue that C/C++ projects use the most energy in terms of testing, due to the number of bugs they can present and the amount of CPU time needed just to compile your 10-20k line program. Just my 2 cents.

    • KRAW@linux.community · +13/-1 · edited · 18 hours ago

      The amount of CPU time spent compiling code is usually negligible compared to CPU time at runtime. Your comparison only really works if you are comparing against something like Rust, where fewer bugs are introduced due to certain guarantees by the language.

      Regarding “language constructs”, it really depends on what you mean. For example, using numpy in Python is kind of cheating because numpy is implemented in C. However, using something like the algorithm libraries in Rust would be considered fair game, since they are likely written in Rust itself.
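The numpy point can be sketched concretely: the loop below executes interpreted bytecode for every element, while the numpy call hands the whole reduction to a compiled C loop. The results match; only the machinery (and the energy cost) differs:

```python
import numpy as np

n = 1_000_000

# Pure Python: the interpreter dispatches bytecode per element.
py_total = 0
for x in range(n):
    py_total += x

# numpy: a single call into a compiled C reduction.
np_total = int(np.arange(n).sum())

assert py_total == np_total  # same answer, very different machinery
```

This is why a benchmark where the Python submission leans on numpy is really measuring C, not Python.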

    • atzanteol@sh.itjust.works · +9/-1 · 18 hours ago

      I find this paper false/misleading.

      They presented their methodology in an open and clear way and provide their data for everyone to interpret. You can disagree with conclusions but it’s pretty harsh to say it’s “misleading” simply because you don’t like the results.

      They just translated one algorithm into many languages, without using the language constructs or specificities to make the algorithm perform decently.

      They used two datasets, if you read the paper… It wasn’t “one algorithm”; it was several, from publicly available implementations of those algorithms. They chose an “optimized” set of algorithms from “The Computer Language Benchmarks Game” to produce results for well-optimized code in each language. They then used implementations of various algorithms from Rosetta Code, which contained more… typical implementations that don’t have a heavy focus on performance.

      In fact, using “typical language constructs or specificities” hurt the Java implementations, since List is slower than using arrays. Java performed much better (surprisingly well, actually) in the optimized tests than in the Rosetta Code tests.

      • FizzyOrange@programming.dev · +1/-1 · 10 hours ago

        They chose an “optimized” set of algorithms from “The Computer Language Benchmarks Game” to produce results for well-optimized code in each language.

        Honestly that’s all you need to know to throw this paper away.

          • FizzyOrange@programming.dev · +1 · 33 minutes ago

            It’s a very heavily gamed benchmark. The most frequent issues I’ve seen are:

            • Different uses of multi-threading - some submissions use it, some don’t.
            • Different algorithms for the same problem.
            • Calling into C libraries to do the actual work. Lots of the Python submissions do this.

            They’ve finally started labelling stupid submissions with “contentious” labels at least, but that wasn’t the case when this study was done.