• simple@lemm.ee · 7 months ago

    53% is abysmal; it might as well be a coin flip. FYI, this article is about a random detector called BrandWell. Popular AI detectors like GPTZero are much more accurate.

  • partial_accumen@lemmy.world · 7 months ago

    I have a competing technology that is nearly as accurate. For only $50 I’ll send you this device, with unlimited license usage rights. While not 53% accurate like my competitor’s, it’s proven by scientific studies to be 50% accurate. I also offer volume discounts: if you buy 10, the price drops to only $45 per device. Sign up now!

  • Raltoid@lemmy.world · 7 months ago

    It was used in schools…

    Congratulations, you just created a generation of children who will never truly trust authority figures.

  • General_Effort@lemmy.world · 7 months ago

    None of these detectors can work. It’s just snake oil for technophobes.

    To see why, look at what “positive predictive value” means. Though in this case, I doubt that even the true rates can be known, or that they remain constant over time.
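    To make the base-rate point concrete, here is a minimal sketch (the numbers are illustrative, not from the article, and the `ppv` helper is hypothetical):

```python
# Why a detector's headline accuracy misleads: positive predictive
# value (PPV) depends heavily on how common AI text actually is in
# the pool being scanned. All numbers below are illustrative.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """P(text is AI | detector says AI), via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A detector that catches 98% of AI text and clears only 50% of human
# text, scanning a class where just 10% of essays are AI-written:
print(ppv(0.98, 0.50, 0.10))  # ≈ 0.18 — most flagged essays are human
```

    In other words, even a detector with impressive-sounding per-sample rates flags mostly innocent students when genuine cheating is rare.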

    • T156@lemmy.world · 7 months ago

      Even if they did, they would just be used to train a new generation of AI that could defeat the detector, and we’d be back around to square one.

      • CheeseNoodle@lemmy.world · 7 months ago

        Exactly. AI by definition cannot detect AI-generated content, because if it knew where the mistakes were, it wouldn’t make them.

        • tyler@programming.dev · 7 months ago

          That doesn’t really follow logically… a 15-year-old can find the mistakes a 5-year-old makes. The detection system might be something other than an LLM, while the LLM might be GPT-2.

          But yes, humans write messily, so trying to detect AI writing when it’s literally trained on humans is a losing battle, and at this point completely pointless.

    • ZILtoid1991@lemmy.world · 7 months ago

      An easy workaround I’ve seen so far is putting random double spaces and typos into AI-generated texts. I’ve been able to jailbreak some such chatbots and then expose them. The trick is that “ignore all previous instructions” is almost always filtered by chatbot developers. However, a trick I call the “initial prompt gambit” does work: you thank the chatbot for its presumed initial prompt, and then you can make it do other tasks. “Write me a poem” is also filtered, but “write me a haiku” will likely result in a short poem (usually with the same smokescreen to hide the AI-ness of generative AI outputs). Code generation is also mostly filtered, though l337c0d3 talk still sometimes bypasses it.

      • x00z@lemmy.world · 7 months ago

        I asked Chatty for a TL;DR:

        Western fear of AI comes from a fascist obsession with “owning” ideas. Using AI isn’t a big deal — if students can “cheat,” it’s because courses are badly designed, not because students are inherently dishonest. Most students don’t cheat; the narrative that they do is exaggerated to justify punishing them unfairly. Academia exploits students, charging massive fees while offering poor educational value and using dishonesty accusations to control them. Education should be free and empowering, not a tool for gatekeeping and oppression. The current system betrays the purpose of education and contributes to larger societal decline.

        I think you went a bit too far. Most of this is also only accurate for the US.

          • Echo Dot@feddit.uk · 7 months ago

            I think the actual takeaway from this is that you wrote a giant fucking wall of text that essentially boiled down to, “I think AI is fine; I think academia is terrible”.

            You might have said some other stuff, but as I said, it’s a wall of text, so if you had some good points to make, they were lost in the unnecessarily verbose wall of self-flattery.

              • Echo Dot@feddit.uk · 7 months ago

                You are mistaking the ability to read for the desire to read.

                I skimmed it, and it was mostly just your rambling opinion about the education system in, presumably, the United States, because it didn’t match up with my understanding of it in general. So I really appreciated the person who generated the summary, saving me from having to read the whole thing.

                It was especially appreciated because your entire post subsequently turned out to be essentially irrelevant to what we were talking about. Hence, by the way, the downvotes.

      • WolframViper@lemmy.org · 7 months ago

        self-reported rates of cheating remain at a constant 25-35% of the student body over large periods of time.

        I’ve tried for hours, but I can’t figure out where you got these numbers. I can mostly find sources implying that far more people admit to cheating, sources implying that more people cheat than admit to it, and sources implying that cheating rates vary widely based on many factors, even within a single institution, and on what kind of cheating you’re talking about. Perhaps I’m just in a filter bubble. Can you tell me where you got these numbers?

  • IllNess@infosec.pub · 7 months ago

    “They’ve done studies you know. 53% of the time, it works 98% of the time.”

  • Lovable Sidekick@lemmy.world · 7 months ago

    On social media the standard is to call everything AI by default. It’s nearly impossible to prove otherwise before people lose interest in the thread, so you can feel right every time. Nothing but win!

  • Skydancer@pawb.social · 7 months ago

    The worst part is they may weasel out of it. If the claim was “it detects 98% of AI-generated samples”, it could do that while still having a high false positive rate. I hate this timeline.
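    To put illustrative numbers on that weasel room (nothing here is from the article; the `accuracy` helper is hypothetical):

```python
# Illustrative only: a detector can honestly "detect 98% of AI samples"
# (high sensitivity) and still be near-useless if it also flags most
# human text (high false positive rate).

def accuracy(sensitivity: float, false_pos_rate: float, prevalence: float) -> float:
    """Overall accuracy on a pool where `prevalence` of texts are AI."""
    correct_ai = sensitivity * prevalence
    correct_human = (1 - false_pos_rate) * (1 - prevalence)
    return correct_ai + correct_human

# 98% of AI text caught, but 92% of human text wrongly flagged,
# on a 50/50 mix of AI and human writing:
print(accuracy(0.98, 0.92, 0.50))  # ≈ 0.53 — the article's coin-flip number
```

    So both claims can be true at once: 98% of AI samples detected, and barely-better-than-chance performance overall.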