I’m going to use three examples.

  1. Reddit, high moderation, the absolute worst: I’ve seen many people, myself included, get wrongfully banned from that website. It has the strongest moderation possible, to the point of feeling authoritarian: it tracks your device with an ID and your IP, albeit only for 100 days. I’ve seen people banned for “violence” because they were protesting against ICE. I’ve also known people to get banned on r/suicidewatch; when someone reports you on Reddit, a bot sometimes replies “Hey, we are here for you,” which is crazy ironic given that no human team handles these sorts of issues. Not that it’s their job to do so, but Reddit’s aggressive bots and filters make the whole site feel like hell.

I posted an NSFW-themed meme in an NSFW community, and within seconds the post was removed by Reddit’s filters, leading to a permanent ban. What are Reddit’s filters, and what qualifies as a “filter”? Who knows. I sent an appeal explaining that my alt (on the same email) was banned wrongly, but I know they won’t bother to check. That leaves someone with no choice other than to start clean again, which is against their rules as ban evasion. Still, I believe the ban was a wrong decision, so I think I deserve another chance.

You can argue that after Reddit’s controversies with r/the_donald and a subreddit where people were literally dying on camera, Reddit enforced harsher rules, which is understandable. But what they still don’t understand is that when a mistake happens, there need to be better ways of reaching an actual person; the appeal message is limited to 250 characters, and that’s it. There are literal Nazis there who haven’t been banned, yet I was, just because of a meme.

  2. Lemmy, the perfect middle ground: This website is pretty much in line with what I believe: there should be moderation, but without any stupid filters, karma requirements, or power-tripping mods. Is that because it’s a much smaller community than Reddit? Maybe. Will the rules change if Lemmy gets much more popular? Who knows.

  3. 4chan, the wild west: Almost zero moderation, which to me is a bad thing, because there will be people who abuse that system, post illegal stuff, and act borderline unhinged. I don’t think I need to say more about that website.

To be fair, there is still some moderation; for example, after the GamerGate drama, posts on /v/ about specific people or e-celebrities are prohibited.

  • litchralee@sh.itjust.works · 5 days ago

    Reddit has global scope, and so their moderation decisions are necessarily geared towards trying to be legally and morally acceptable in as many places as possible. Here is Mike Masnick on exactly what challenges any new social media platform faces, and even some which Lemmy et al may have to face in due course: https://www.techdirt.com/2022/11/02/hey-elon-let-me-help-you-speed-run-the-content-moderation-learning-curve/ . Note: Masnick is on the board of BlueSky, since it was his paper on Protocols, Not Platforms that inspired BlueSky. But compared to the Fediverse, BlueSky has not achieved the same level of decentralization yet, having valued scale. Every social media network chooses their tradeoffs; it’s part of the bargain.

    The good news is that the Fediverse avoids any of the problems related to trying to please advertisers. The bad news is that users still do not voluntarily go to “the Nazi bar” if they have any other equivalent option. Masnick has also written about that when dealing at scale. All Fediverse instances must still work to avoid inadvertently becoming the Nazi bar.

    But being small and avoiding scaling issues is not all roses for the Fediverse. Not scaling means fewer resources and fewer people to do moderation. Today, most instances range from individual passion projects to small collectives. The mods and admins are typically volunteers, not salaried staff. A few instances have companies backing them, but that doesn’t mean they’d commit resources as though it were crucial to business success. Thus, the challenge is to deliver the best value to users on a slim budget.

    Ideally, users will behave themselves on most days, but moderation is precisely required on the days they’re not behaving.

    • Tollana1234567@lemmy.today · 4 days ago

      I suspect Reddit is mostly saving money by using AI moderation. They have very few admins, and only those admins can reverse a shadowban; it’s very noticeable that most appeals are ignored.

    • litchralee@sh.itjust.works · 5 days ago

      Related to moderation are the notions of procedural fairness, including 1) the idea that rules should be applied to all users equally, that 2) rules should not favor certain users or content, and 3) that there exists a process to seek redress, to list a few examples. These are laudable goals, but I posit that these can never be 100% realized on an online platform, not for small-scale Lemmy instances nor for the largest of social media platforms.

      The first idea is demonstrably incompatible with the requisite avoidance of becoming a Nazi bar. Nazis and adjoining quislings cannot be accommodated, unless the desire is to become the next Gab. Rejecting Nazis necessarily treats them differently than other users, but it keeps the platform alive and healthy.

      The second idea isn’t compatible with why most people set up instances or join a social media platform. Fediverse instances exist either as an extension of a single person (self-hosting for just themselves) or to promote some subset of communities (eg a Minnesota-specific instance). Meanwhile, large platforms like Meta exist to make money from ads. Naturally, they favor anything that gets more clicks (eg click bait) than adorable cat videos that make zero revenue.

      The third idea would be feasible, except that it is a massive attack vector: even the largest companies cannot staff – if they even wanted to – enough customer service personnel to deal with a 24/7 barrage of malicious, auto-generated campaigns that flood them with invalid complaints, whereas such a denial-of-service attack against a real-life, in-person complaints desk would be relatively easy to manage.

      So once again, social media platforms – and each Fediverse instance is its own small platform – have to make some choices based on practicalities, their values, and their objectives. Anyone who says it should be easy has not looked into it enough.