Nucleo’s investigation identified accounts with thousands of followers engaging in illegal behavior that Meta’s security systems failed to detect; after being contacted, the company acknowledged the problem and removed the accounts

    • surewhynotlem@lemmy.world · 20 hours ago

      Hey.

      I’ve been in tech for 20 years. I know Python, Java, and C#. I’ve worked with TensorFlow and language models. I understand this stuff.

      You absolutely could train an AI on safe material to do what you’re saying.

      Stable Diffusion and OpenAI have not guaranteed that they trained their models on safe materials.

      It’s like going to buy a burger, and the restaurant says, “We can’t guarantee there’s no human meat in here.” At best it’s lazy. At worst it’s abusive.
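      The claim above, that a model could be trained only on vetted material, amounts to curating by provenance before training rather than filtering a scraped dump afterward. A minimal sketch, where the source names and record fields are hypothetical (a real pipeline would layer a content classifier on top of this):

      ```python
      # Hypothetical provenance filter: only records from vetted sources
      # with a confirmed license ever reach the training set.
      TRUSTED_SOURCES = {"licensed_stock", "public_domain_archive"}

      def curate(records):
          """Split records into (kept, rejected) by provenance.

          Each record is a dict like:
              {"id": ..., "source": ..., "license_ok": bool}
          Anything from an unknown source, or without a confirmed
          license, is excluded before training ever sees it.
          """
          kept, rejected = [], []
          for rec in records:
              if rec["source"] in TRUSTED_SOURCES and rec["license_ok"]:
                  kept.append(rec)
              else:
                  rejected.append(rec)
          return kept, rejected
      ```

      The point of the sketch is that the safe-by-construction route exists; choosing to scrape the open web instead is a cost decision, not a technical necessity.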

      • Captain Aggravated@sh.itjust.works · 20 hours ago

        I mean, there is no photograph of a school bus with pegasus wings diving to the Titanic, but I bet one of these AIs can crank out that picture. If it can do that…?

      • Cryophilia@lemmy.world · 19 hours ago

        Ok, but by that definition Google should be banned because their trawler isn’t guaranteed to not pick up CP.

        In my opinion, if the technology involves casting a huge net and then creating an abstracted product from what is caught in the net, with no human seeing the steps in between, is it really causing any sort of actual harm?