Americans can become more cynical about the state of society when they see harmful behavior online. Three studies of the American public (n = 1,090) revealed that they consistently and substantially overestimated how many social media users contribute to harmful behavior online. On average, they believed that 43% of all Reddit users have posted severely toxic comments and that 47% of all Facebook users have shared false news online. In reality, platform-level data shows that most of these forms of harmful content are produced by small but highly active groups of users (3–7%). This misperception was robust to different thresholds of harmful content classification. An experiment revealed that overestimating the proportion of social media users who post harmful content makes people feel more negative emotion, perceive the United States to be in greater moral decline, and cultivate distorted perceptions of what others want to see on social media. However, these effects can be mitigated through a targeted educational intervention that corrects this misperception. Together, our findings highlight a mechanism that helps explain how people’s perceptions and interactions with social media may undermine social cohesion.

  • TheAlbatross@lemmy.blahaj.zone · 13 points · 3 days ago (edited)

    I think we gotta shift social media to be wildly hostile to every and any company. We gotta bully companies such that it’s risky business to advertise on social media.

    Social media can never truly belong to the people as long as advertisers wanna use it for their own means. We gotta get mean and indiscriminate.

    Every corporation’s post and page should be filled with hatred and disdain, toxicity heretofore unseen online.

    • RobotToaster@mander.xyz · 9 points · 3 days ago

      I think we gotta shift social media to be wildly hostile to every and any company.

      Especially the social media companies.

      • TheAlbatross@lemmy.blahaj.zone · 3 points · 3 days ago

        I thought about that, and while there’s probably some value in diversifying the approaches, I’d worry about the potential backlash of people thinking anyone trashing a company is a bot.

        I think the key to success is making it memetically hilarious to be incredibly toxic towards any company.

        • Tyrq@lemmy.dbzer0.com · 3 points · 3 days ago

          It’s interesting: language models are typically good enough to fool most people in an offhand way. That seems to have worked well for the propaganda machine, insofar as it can encourage actual people to think an opinion is more popular than it really is.

          And this all runs into the idea of forcing people to have verified online identities to limit the harm the dead internet can do to actual people. Not that I like that, but that’s the genie I see being out of the bottle with or without this route.

          Either way, you’re probably right about meeting the toxicity of these corps with our own toxicity. So I guess it’s still fighting fire with fire; like any memetic event, it just needs to catch on with the right few people.

    • hallettj@leminal.space · 2 points · 3 days ago (edited)

      I think publicly operated social media could be a good thing. If you want to remove profit motive, you need an operator that is not motivated by profit. Take a look at how effective public broadcasting is.

      Lemmy and Mastodon already do much better than the big corporate apps, but they are currently small-scale operations. A state or federal government program would be better suited to scaling up, and better able to resist capitalist takeover.