  • YesButActuallyMaybe@lemmy.ca
    2 points · 7 minutes ago

    Ah get outta here! Next time they’ll say that Copilot also chooses my furry porn and controls my buttplug while it codes for me.

  • Kissaki@feddit.org
    2 points · 8 minutes ago

    I read “users respond with merciless trolling” in the teaser; I have to open the article.

  • ☂️-@lemmy.ml
    1 point · 10 minutes ago

    where are my penguin boys at? 🐧

    seriously, people. the majority of you don’t have to put up with this, you know that, right?

  • Treczoks@lemmy.world
    28 points · 4 hours ago

    What they forget to mention is that you then spend the rest of the week fixing the bugs it introduced and explaining why your code deleted the production database…

  • Siegfried@lemmy.world
    7 points · 3 hours ago

    If that’s what they’re aiming at, I feel like their AI is actually supposed to be the pilot and the user the copilot.

  • DupaCycki@lemmy.world
    2 points · 3 hours ago

    Technically true, but nobody said the code would be at all functional. I’m pretty sure I can finish about 800,000 coffees before Copilot generates anything usable that is longer than 3 lines.

  • melsaskca@lemmy.ca
    7 points · 4 hours ago

    I would rather paint a portrait myself, spending the time to do it, than ask some computer prompt to spit out a picture for me. The same logic applies to coding for me.

    • jj4211@lemmy.world
      4 points · 5 hours ago

      No, just complete. Whatever the dude does may have nothing to do with what you needed it to do, but it will be “done”.

    • Evotech@lemmy.world
      2 points · 4 hours ago

      Depends. If it’s a script that will, like, cut your video file every 10 seconds with ffmpeg or something simple, yeah, it will one-shot it.
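
      Something like this, roughly: a minimal sketch in Java that just shells out to ffmpeg’s segment muxer (the file names are placeholders, not anything from the article):

      ```java
      import java.io.IOException;

      public class SplitVideo {
          public static void main(String[] args) throws IOException, InterruptedException {
              // Split input.mp4 into roughly 10-second chunks with ffmpeg's segment
              // muxer, copying the streams without re-encoding (so cuts land on
              // keyframes). All names here are made up for the example.
              ProcessBuilder pb = new ProcessBuilder(
                      "ffmpeg", "-i", "input.mp4",
                      "-f", "segment", "-segment_time", "10",
                      "-c", "copy", "chunk_%03d.mp4");
              pb.inheritIO();                        // show ffmpeg's own output
              int exitCode = pb.start().waitFor();
              System.out.println("ffmpeg exited with code " + exitCode);
          }
      }
      ```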

  • katy ✨@piefed.blahaj.zone
    1 point · 3 hours ago

    yeah but then you have to fix everything in the code that they didn’t get right.

    like using it to automate a shell is fine; but trusting it blindly and treating it as the finished product? you’re delusional.

  • dreadbeef@lemmy.dbzer0.com
    8 points · 6 hours ago

    My problem is that the dev and stage environments are giving me 502 gateway errors when hitting only certain API endpoints from the app gateway. My real problem is that devops aren’t answering my support tickets and telling me which Terraform var file I gotta muck with and what to fix in it. I’m sure you’ll be fixed soon though, right Copilot?

  • kreskin@lemmy.world
    36 points · 9 hours ago

    yes, but all the code will be wrong and you will spend your entire day chasing stupid mistakes and hallucinations in the code. I’d rather just write the code myself, thanks.

    • Thorry@feddit.org
      44 points · 8 hours ago

      Also, just because the code works doesn’t mean it’s good code.

      I had to review code the other day that was clearly created by an LLM. Two classes needed to talk to each other in a bit of a complex way, so I would expect one class to create some kind of request data object and submit it to the other class, which then returns some kind of response data object.

      What the LLM actually did was pretty shocking: it used reflection to reach from one class into the other class’s private properties holding the data it needed. It then just straight up stole the data and did the work itself (wrongly as well, I might add). I just about fell off my chair when I saw it.
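
      Roughly the shape of it, with hypothetical class and field names standing in for the real ones (a sketch of the pattern, not the actual code under review):

      ```java
      import java.lang.reflect.Field;

      // The intended design: the two classes talk through a request/response pair.
      class ReportRequest {
          final String customerId;
          ReportRequest(String customerId) { this.customerId = customerId; }
      }

      class ReportResponse {
          final double total;
          ReportResponse(double total) { this.total = total; }
      }

      class BillingService {
          private double runningTotal = 42.0;    // private state, not part of any API

          ReportResponse buildReport(ReportRequest request) {
              return new ReportResponse(runningTotal);
          }
      }

      public class ReflectionShortcut {
          public static void main(String[] args) throws Exception {
              BillingService billing = new BillingService();

              // The encapsulated path you would expect:
              ReportResponse response = billing.buildReport(new ReportRequest("acme"));
              System.out.println("Via the API: " + response.total);

              // The shortcut: pry open the private field with reflection, grab the
              // raw data, and redo the work locally, bypassing the API entirely.
              Field field = BillingService.class.getDeclaredField("runningTotal");
              field.setAccessible(true);
              double stolen = field.getDouble(billing);
              System.out.println("Via reflection: " + stolen);
          }
      }
      ```

      Both paths print the same number, which is exactly why “it seemed to work” in a quick test.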

      So I asked the dev. He said he didn’t fully understand what the LLM did; he wasn’t familiar with reflection. But since it seemed to work in the few tests he did, and the unit tests the LLM generated passed, he thought it would be fine.

      Also, the unit tests were wrong. I explained to the dev that even with humans it’s usually a bad idea to have the person who wrote the code also (exclusively) write the unit tests. Whenever possible, have somebody else write the unit tests, so they don’t share the same assumptions and blind spots. With LLMs this is doubly true: they will just straight up lie in the unit tests, if the tests aren’t complete nonsense to begin with.

      I swear to the gods, LLMs don’t save time or money, they just give the illusion that they do. A task of a few hours takes 20 minutes and everyone claps. But then another task takes twice as long and we just don’t look at that. And the quality suffers a lot, without anyone really noticing.

      • criss_cross@lemmy.world
        1 point · 45 seconds ago

        They’ve been great for me at optimizing bite-sized annoying tasks. They’re really bad at doing anything beyond that. Like astronomically bad.

      • airgapped@piefed.social
        6 points · 5 hours ago

        Great description of a problem I’ve noticed with most LLM-generated code of any decent complexity. It will look fantastic at first, but you will be truly up shit creek by the time you realise it didn’t generate a paddle.

      • WaitThisIsntReddit@lemmy.world
        7 points · 10 hours ago

        After a couple of agent iterations it will compile. It definitely won’t do what you wanted though, and if it does, it will be in the dumbest way possible.

        • TORFdot0@lemmy.world
          8 points · 1 down · edited · 10 hours ago

          Yeah, you can definitely bully AI into giving you something that will run if you yell at it long enough. I don’t have that kind of patience.

          Edit: typically I see it just silently dump errors to /dev/null if you complain about it not working lol
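
          The pattern, sketched with made-up names: the error doesn’t get fixed, it just gets swallowed so nothing complains anymore.

          ```java
          public class SilencedError {
              static int parsePort(String raw) {
                  try {
                      return Integer.parseInt(raw);
                  } catch (NumberFormatException e) {
                      // The "fix": the error goes straight to /dev/null and a
                      // plausible-looking default comes back instead.
                      return 8080;
                  }
              }

              public static void main(String[] args) {
                  // Prints 8080; nothing ever complains about the bad input.
                  System.out.println(parsePort("not-a-port"));
              }
          }
          ```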

          • Darkenfolk@sh.itjust.works
            1 point · 8 hours ago

            And people say that AI isn’t humanlike. That’s peak human behavior right there, having to bother someone out of procrastination mode.

            The edit makes it even better, sweeping things under the rug? Hell yeah!