• csm10495@sh.itjust.works · ↑27 · 4 months ago

    It’s exciting, but man, there are a lot of assumptions in native Python code built around the GIL.

    I’ve seen lists and other shared structures modified by threads on the assumption that the GIL locks them. Testing this end-to-end for any production deployment can be a bit of a nightmare.
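
    The classic footgun is a compound check-then-update on shared state with no explicit lock - a minimal sketch of the pattern and its fix (names here are made up for illustration):

    ```python
    import threading

    counts = {}                 # shared mutable state
    lock = threading.Lock()

    def tally_unlocked(key):
        # Read-modify-write is not atomic: two threads can read the same old
        # value and one increment gets lost. The GIL made this race rarer in
        # practice, but it was never actually safe.
        counts[key] = counts.get(key, 0) + 1

    def tally_locked(key):
        # An explicit lock is correct with or without a GIL.
        with lock:
            counts[key] = counts.get(key, 0) + 1

    def worker(n):
        for _ in range(n):
            tally_locked("hits")

    threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counts["hits"])       # 400000 with the locked version
    ```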

    • overcast5348@lemmy.world · ↑38 · edited · 4 months ago

      My company makes it super easy for me - we’re just going to continue on Python 2.7 and add this to the long list of reasons why we’re not upgrading.

      Please send help.

      • Corbin@programming.dev · ↑3 · 4 months ago

        You may be pleased to know that PyPy’s Python 2.7 branch will be maintained indefinitely, since PyPy is also written in Python 2.7. Also, if you can’t leave CPython yet, ActivePython’s team is publishing CPython 2.7 security patches.

        • overcast5348@lemmy.world · ↑3 · 4 months ago

          We already have contracts in place to get security patches. That’s usually the InfoSec team’s problem anyway.

          As a developer, my life gets hard due to library support. We manage internal forks of multiple open source projects just to make them Python 2 compatible. A non-trivial amount of time is wasted on this, and we don’t even have it available for public use. 🤷‍♂️
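
          For context, the kind of shim those forks end up carrying everywhere looks roughly like this (purely illustrative, not from any real codebase):

          ```python
          import sys

          # Import paths that moved between Python 2 and 3.
          try:
              from urllib.parse import urlparse      # Python 3
          except ImportError:
              from urlparse import urlparse          # Python 2

          # The text type differs between the two major versions.
          if sys.version_info[0] >= 3:
              text_type = str
          else:
              text_type = unicode  # noqa: F821 (only defined on Python 2)
          ```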

      • fubarx@lemmy.ml · ↑2 · 4 months ago

        Python 2.7 and iOS mobile programmers stuck on Objective-C could start a support group.

      • verstra@programming.dev · ↑3 ↓1 · 4 months ago

        Why would you not be upgrading due to a new feature of Python? Do you not like new features, or was that a badly worded sentence?

  • BB_C@programming.dev · ↑21 ↓1 · 4 months ago

    While pure Python code should work unchanged, code written in other languages or using the CPython C API may not. The GIL was implicitly protecting a lot of thread-unsafe C, C++, Cython, Fortran, etc. code - and now it no longer does. Which may lead to all sorts of fun outcomes (crashes, intermittent incorrect behavior, etc.).
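
    A hedged sketch of the usual stop-gap on the Python side: serialise calls into a suspect extension behind one coarse lock until it has been audited for free-threading (the extension call here is a stand-in, not a real library):

    ```python
    import threading

    def _extension_transform(data):
        # Stand-in for a call into a C/Cython/Fortran extension that quietly
        # relied on the GIL for its internal thread safety (hypothetical).
        return sorted(data)

    _ext_lock = threading.Lock()

    def safe_transform(data):
        # One coarse lock restores the serialisation the GIL used to provide,
        # at the cost of the parallelism you were hoping to gain.
        with _ext_lock:
            return _extension_transform(data)

    results = []
    threads = [
        threading.Thread(target=lambda: results.append(safe_transform(range(1000, 0, -1))))
        for _ in range(4)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    ```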

    :tabclose

    • vrighter@discuss.tchncs.de · ↑4 · 4 months ago

      Those libraries include pretty much all of the popular ones. It’s just impossible to write performant code in Python.

  • roadrunner_ex@lemmy.ca · ↑8 · 4 months ago

    I’m curious to see how this whole thing shakes out. Like, will removing the GIL be an uphill battle that everyone regrets even suggesting? Will it be so easy that we wonder why we didn’t do it years ago? Or, most likely, somewhere in the middle?

      • roadrunner_ex@lemmy.ca · ↑9 · 4 months ago

        Yes, testing infrastructure is being put in place and some low-hanging fruit bugs have already been squashed. This bodes well, but it’s still early days, and I imagine not a lot of GIL-less production deployments are out there yet - where the real showstoppers will potentially live.

        I’m tentatively optimistic, but threading bugs are sometimes hard to catch.

        • FizzyOrange@programming.dev · ↑4 ↓2 · 4 months ago

          > threading bugs are sometimes hard to catch

          Putting it mildly! Threading bugs are probably the worst class of bugs to debug.

          It’s definitely debatable whether this is worth the risk of near-impossible-to-debug bugs. Python is very slow, and multithreading isn’t going to change that. 4x extremely slow is still extremely slow. If you care remotely about performance, you need to use a different language anyway.

          • Womble@lemmy.world · ↑6 ↓1 · 4 months ago

            Python can be extremely slow, but it doesn’t have to be. I recently rewrote a stats program at work and got a ~500x speedup over the original Python and a 10x speedup over the C++ rewrite of that. If you know how Python works and avoid the performance footguns like nested loops, you can often (though not always) get good performance.
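
            That kind of rewrite usually amounts to replacing per-element Python loops with vectorised calls - a toy illustration of the idea, not the actual stats code:

            ```python
            import numpy as np

            data = np.random.rand(2_000, 2_000)

            def row_means_loops(a):
                # Nested Python loops: the classic performance footgun.
                out = []
                for row in a:
                    total = 0.0
                    for value in row:
                        total += value
                    out.append(total / len(row))
                return out

            def row_means_vectorised(a):
                # One call into numpy's compiled code; typically orders of
                # magnitude faster than the loop version at this size.
                return a.mean(axis=1)
            ```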

            • FizzyOrange@programming.dev · ↑4 ↓3 · 4 months ago

              Unless the C++ code was doing something wrong, there’s literally no way you can write pure Python that’s 10x faster than it. Something else is going on there. Maybe the C++ code was accidentally O(N^2) or something.

              In general Python will be 10-200 times slower than C++. 50x slower is typical.
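
              For reference, an accidentally-O(N^2) slip looks like this in Python terms (illustrative only - obviously not the code in question):

              ```python
              def dedupe_slow(items):
                  # `x not in seen` scans a list: O(n) per check, O(n^2) overall.
                  seen, out = [], []
                  for x in items:
                      if x not in seen:
                          seen.append(x)
                          out.append(x)
                  return out

              def dedupe_fast(items):
                  # A set gives O(1) average membership checks: O(n) overall.
                  seen, out = set(), []
                  for x in items:
                      if x not in seen:
                          seen.add(x)
                          out.append(x)
                  return out
              ```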

              • bitcrafter@programming.dev · ↑6 · 4 months ago

                > Unless the C++ code was doing something wrong there’s literally no way you can write pure Python that’s 10x faster than it. Something else is going on there.

                Completely agreed, but it can be surprising just how often C++ really is written that inefficiently; I have had multiple successes in my career of rewriting C++ code in Python and making it faster in the process, but never because Python is inherently faster than C++.

                • FizzyOrange@programming.dev · ↑1 · 4 months ago

                  Yeah, exactly. You made it faster through algorithmic improvement. Like for like, Python is far, far slower than C++, and it’s impossible to write Python that is as fast as C++.

              • Womble@lemmy.world · ↑5 · 4 months ago

                Nope, if you’re working on large arrays of data you can get significant speedups using well-optimised, vectorised BLAS functions (via numpy), which beat out simply written C++ operating on each array element in turn. There’s also Numba, which uses LLVM to JIT-compile a subset of Python to get compiled performance, though I didn’t reach for that in this case.

                You could link the BLAS libraries into C++, but it’s significantly more work than just importing numpy in Python.
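
                For example, a single matrix multiply in numpy dispatches straight to whatever BLAS library numpy was built against (OpenBLAS, MKL, ...), which is vectorised and usually already multi-threaded; the sizes below are just for illustration:

                ```python
                import numpy as np

                a = np.random.rand(2_000, 2_000)
                b = np.random.rand(2_000, 2_000)

                # One line of Python, but the actual work runs in optimised,
                # vectorised (and typically multi-threaded) BLAS code.
                c = a @ b
                ```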

                • FizzyOrange@programming.dev · ↑3 ↓3 · 4 months ago

                  > numpy

                  Numpy is written in C.

                  > Numba

                  Numba is interesting… But a) it can already do multithreading so this change makes little difference, and b) it’s still not going to be as fast as C++ (obviously we don’t count the GPU backend).
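
                  For reference, the multithreading in question is Numba’s parallel loops, which already parallelise via Numba’s own threading layer once compiled, independent of the GIL - a minimal sketch (assuming numba is installed; the function is made up):

                  ```python
                  import numpy as np
                  from numba import njit, prange

                  @njit(parallel=True)
                  def scaled_sum(x, factor):
                      # The prange iterations are compiled by LLVM and run on
                      # multiple threads by Numba's threading layer, so they
                      # parallelise regardless of the GIL.
                      total = 0.0
                      for i in prange(x.shape[0]):
                          total += x[i] * factor
                      return total

                  print(scaled_sum(np.arange(1_000_000, dtype=np.float64), 0.5))
                  ```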

              • Corbin@programming.dev · ↑1 · 4 months ago

                You’re thinking of CPython. PyPy can routinely compete with C and C++, particularly in allocation-heavy or pointer-heavy scenarios.
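
                A toy allocation- and pointer-heavy workload of the kind where PyPy’s JIT and GC tend to shine - run it under both pypy3 and python3 to compare (the class and numbers are just an illustration, not a benchmark claim):

                ```python
                import time

                class Node:
                    __slots__ = ("value", "next")

                    def __init__(self, value, nxt):
                        self.value = value
                        self.next = nxt

                def build_and_sum(n):
                    # Millions of small, short-lived objects chained by pointers.
                    head = None
                    for i in range(n):
                        head = Node(i, head)
                    total, node = 0, head
                    while node is not None:
                        total += node.value
                        node = node.next
                    return total

                start = time.perf_counter()
                build_and_sum(3_000_000)
                print(f"{time.perf_counter() - start:.2f}s")
                ```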

                • FizzyOrange@programming.dev · ↑1 ↓2 · 4 months ago

                  I am indeed thinking of CPython because a) approximately nobody uses PyPy, and b) this article is about CPython!!

                  In any case, PyPy is only about 4x faster than CPython on average (according to their own benchmarks), so it’s only going to be able to compete with C++ in specific circumstances, not in general.

                  And PyPy still has a GIL! Come on dude, think!

              • nickwitha_k (he/him)@lemmy.sdf.org · ↑1 ↓1 · 4 months ago

                You’re both at least partly right. The only interpreted language that can compete with compiled languages for execution speed is Java, and it has the downside of being Java.

                That being said, you might be surprised at how fast you can make Python code execute, even pre-GIL changes. I certainly was. Using multiprocessing and code architected to run massively parallel, it can be blazingly fast. It would still be blown out of the water by similarly optimized compiled code, but it’s worth serious consideration if you want to optimize for iterative development.
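
                A minimal sketch of that multiprocessing pattern for embarrassingly parallel, CPU-bound work (the simulate function is just a placeholder workload):

                ```python
                import os
                from multiprocessing import Pool

                def simulate(seed):
                    # Placeholder for one independent, CPU-bound unit of work.
                    acc = 0
                    for i in range(1, 200_000):
                        acc = (acc * 31 + seed + i) % 1_000_000_007
                    return acc

                if __name__ == "__main__":
                    # Each worker is a separate process with its own interpreter
                    # (and its own GIL), so this scales across cores even on
                    # builds that still have the GIL.
                    with Pool(processes=os.cpu_count()) as pool:
                        results = pool.map(simulate, range(64))
                    print(len(results), "runs complete")
                ```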

                My view on such workflows would be:

                1. Write iteration of code component in Python.
                2. Release.
                3. Evaluate if any functional changes are required. If so, goto 1.
                4. Port component to compiled language, changing function calls/imports to make use of the compiled binary alongside the other interpreted components.
                5. Release.
                6. Refactor code to optimize for compiled language, features that compiled language enables, and/or security/bug fixes.
                7. Release.
                8. Evaluate if further refactoring is required at this time; if so, goto 6.

                • FizzyOrange@programming.dev · ↑2 ↓1 · 4 months ago

                  > The only interpreted language that can compete with compiled for execution speed is Java

                  “Interpreted” isn’t especially well defined, but it would take a pretty wildly out-there definition to call Java interpreted! Java is JIT-compiled, or these days even AOT-compiled.

                  > it can be blazingly fast

                  It definitely can’t.

                  > It would still be blown out of the water by similarly optimized compiled code

                  Well, yes. So not blazingly fast then.

                  I mean it can be blazingly fast compared to computers from the 90s, or like humans… But “blazingly fast” generally means in the context of what is possible.

                  > Port component to compiled language

                  My extensive experience is that this step rarely happens because by the time it makes sense to do this you have 100k lines of Python and performance is juuuust about tolerable and we can’t wait 3 months for you to rewrite it we need those new features now now now!

                  My experience has also shown that writing Python is rarely a faster way to develop even prototypes, especially when you consider all the time you’ll waste on pip and setuptools and venv…

        • Socsa@sh.itjust.works · ↑1 · 4 months ago

          The reality is just that, moving forward, some kinds of Python code will have the same race conditions as most other languages, and that’s OK.

  • fubarx@lemmy.ml · ↑1 · edited · 4 months ago

    I have a project ready to try this out. It’s a software simulator, and each run (typically 10-10,000 iterations) can be done independently, with the results aggregated and shown at the end. It’s also instrumented to show CPU and memory usage, and on macOS you can watch how busy each core gets (hint: PEGGED in multicore mode).

    I can run it single-threaded, then with multiprocessing, then multi-core, and time each one. Pretty happy with multicore, but as soon as the no-GIL/subinterpreter version is stable, I’ll try it out and see if it makes any difference. Under the hood it uses numpy and scipy, so I’ll have to wait for them.
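
    For anyone curious, the comparison has roughly this shape - one_iteration is a stand-in for a real simulation step, and on a free-threaded build the thread pool should finally scale like the process pool:

    ```python
    import time
    from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

    def one_iteration(seed):
        # Stand-in for a single independent simulation run.
        acc = 0.0
        for i in range(1, 500_000):
            acc += ((seed * i) % 97) ** 0.5
        return acc

    def timed(label, fn):
        start = time.perf_counter()
        fn()
        print(f"{label}: {time.perf_counter() - start:.2f}s")

    if __name__ == "__main__":
        seeds = range(200)
        timed("sequential", lambda: [one_iteration(s) for s in seeds])
        with ProcessPoolExecutor() as procs:
            timed("processes ", lambda: list(procs.map(one_iteration, seeds)))
        with ThreadPoolExecutor() as threads:
            # Only speeds things up on a no-GIL (free-threaded) build; on a
            # standard build the GIL serialises the CPU-bound work.
            timed("threads   ", lambda: list(threads.map(one_iteration, seeds)))
    ```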

    Edit: on my todo list is to try it all out in Mojo. They make pretty big performance gain claims.