• Dr. Wesker@lemmy.sdf.org · edited · 6 hours ago

      This is extremely valid.

      The biggest reason I'm able to use LLMs efficiently and safely is all my prior experience. I'm able to write up the project guard rails and the expected architecture, call out gotchas, etc. Those are the things that actually keep the output in spec (usually).

      If a junior hasn't already built up this knowledge and experience manually, much of the code they produce with AI is gonna be crap, with varying degrees of deviation from spec.

          • justOnePersistentKbinPlease@fedia.io · 3 hours ago

            They use it with heavy oversight from the senior devs. We discourage its use and teach them the basic errors it reliably produces, as a warning not to trust it.

            E.g., ChatGPT will dump all of the event handlers for a form into one massive method.

            We use it within the scope of things we already know about.
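            To make the anti-pattern above concrete, here is a minimal TypeScript sketch (all names, control IDs, and the event type are invented for illustration, not taken from any real codebase): first the monolithic catch-all handler the commenter describes, then the per-control split a reviewer would typically ask for.

            ```typescript
            // A toy event type standing in for a real form/DOM event.
            type FormEvent = { control: string; value: string };

            // The anti-pattern: one massive method that branches on every
            // control in the form. Grows without bound as controls are added.
            function handleEverything(e: FormEvent): string {
              if (e.control === "nameInput") return `name set to ${e.value}`;
              if (e.control === "submitButton") return "form submitted";
              if (e.control === "cancelButton") return "form cancelled";
              return "unhandled";
            }

            // The refactor: one small handler per control, wired up through a
            // lookup table, so each handler can be read and tested in isolation.
            const handlers: Record<string, (e: FormEvent) => string> = {
              nameInput: (e) => `name set to ${e.value}`,
              submitButton: () => "form submitted",
              cancelButton: () => "form cancelled",
            };

            function dispatch(e: FormEvent): string {
              const handler = handlers[e.control];
              return handler ? handler(e) : "unhandled";
            }
            ```

            Both versions behave identically here; the difference only shows up as the form grows, which is exactly the kind of structural judgment a junior without prior experience won't know to apply to the LLM's output.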