• Sims@lemmy.ml · 12 hours ago

    I don’t think “AI” is the problem here. Watching the watchers doesn’t hurt, but I think the AI-haters are grasping at straws. In fact, when compared to the actual suicide numbers, this “AI is causing suicide!” framing seems a bit contrived/hollow, tbh. Were the haters also as active in noticing the 49 thousand suicide deaths every year, or did they only just now find it a problem?

    Besides, if there’s a criminal here, it would be the private corp that provided the AI service, not a broad category of technology like “AI”. People who hate AI seem to really just hate the effects of Capitalism.

    https://www.cdc.gov/suicide/facts/data.html (This is for the US alone!)

    If image not shown: Over 49,000 people died by suicide in 2023. 1 death every 11 minutes. Many adults think about suicide or attempt suicide. 12.8 million seriously thought about suicide. 3.7 million made a plan for suicide. 1.5 million attempted suicide.

    • Deestan@lemmy.world · edited · 10 hours ago

      Labelling people who make arguments you don’t like as “haters” does not establish credibility for whatever point you proceed to put forward. It signals that you did not attempt to find rationality in their words.

      Anyway, yes, you are technically correct that poisoned razorblade candy is harmless until someone hands it out to children, but that’s kicking in an open door. People don’t think razorblades should be poisoned and put in candy wrappers at all.

      Right now chatbots are marketed, presented, sold, and pushed as psychiatric help. So the argument of separating the stick from the hand holding it is irrelevant.

    • Dekkia@this.doesnotcut.it · 12 hours ago

      While a lot of people die through suicide, it’s not exactly good or helpful when an AI guides some of them through the process and even encourages them to do it.

      • LainTrain@lemmy.dbzer0.com · 12 hours ago

        Actually being shown truthful and detailed information about suicide methods helped me avoid it as a youth. That website has since been taken down due to bs regs or some shit. If I were young now I’d probably ask a chatbot and I’d hope they give me crystal clear, honest details and instructions, that shit should be widely accessible.

        On the other hand all those helplines and social ads are just depressing to see, they feel patronising and frankly gross, if anything it’s them that should be banned.