- cross-posted to:
- [email protected]
- [email protected]
- [email protected]
Authorized Fetch (also referred to as Secure Mode in Mastodon) was recently circumvented by a stupidly easy solution: just sign your fetch requests with some other domain name.
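For context on why the bypass is so trivial: authorized fetch only checks that an incoming request carries a valid HTTP Signature and that the domain behind the signing key isn't blocked. Below is a minimal, hypothetical sketch (Python, with made-up function names, key material and URLs; real servers also send Digest headers, cache keys, and so on) of what a signed ActivityPub fetch looks like. The important part is that nothing ties the keyId to any particular domain.

```python
# Minimal sketch of an HTTP-signed ActivityPub fetch (draft-cavage style
# signatures, as Mastodon uses). All names, paths and URLs here are
# hypothetical; real servers add Digest headers, key caching, retries, etc.
import base64
from datetime import datetime, timezone
from urllib.parse import urlparse

import requests
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def signed_fetch(resource_url: str, key_id: str, private_key_pem: bytes) -> requests.Response:
    """GET an ActivityPub object, signing the request with the given key."""
    parsed = urlparse(resource_url)
    date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")

    # The signed string covers the request target, host and date headers.
    signing_string = (
        f"(request-target): get {parsed.path}\n"
        f"host: {parsed.netloc}\n"
        f"date: {date}"
    )

    key = serialization.load_pem_private_key(private_key_pem, password=None)
    signature = base64.b64encode(
        key.sign(signing_string.encode(), padding.PKCS1v15(), hashes.SHA256())
    ).decode()

    headers = {
        "Host": parsed.netloc,
        "Date": date,
        "Accept": "application/activity+json",
        # keyId is just a URL pointing at a public key. The server being
        # fetched from only checks that the signature verifies against that
        # key and that the key's domain isn't blocked.
        "Signature": (
            f'keyId="{key_id}",algorithm="rsa-sha256",'
            f'headers="(request-target) host date",signature="{signature}"'
        ),
    }
    return requests.get(resource_url, headers=headers)
```

Since the receiving server just fetches whatever public key the keyId names and verifies the signature, a blocked instance can publish an actor (and keypair) on a fresh, unblocked domain and sign its fetches with that instead.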
Vulnerable folk are looking for community, not a soapbox. The goal is to connect with other folk whilst being as free as possible from harassment.
It’s absolutely possible to achieve that without perfect privacy controls.
reasons why i love blahaj.zone 🥹
Privacy and being free of (in-context) harassment aren’t the same thing. Your posts can all be public but your client can filter out any harassment, for example.
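To make that distinction concrete, here's a toy, entirely hypothetical sketch: the posts stay fully public on the server, and only the reader's view of them changes.

```python
# Toy client-side filter: nothing here is private, the server still serves
# every post to anyone; the client just decides what the reader sees.
posts = [
    {"author": "[email protected]", "content": "morning, fedi!"},
    {"author": "[email protected]", "content": "some drive-by abuse"},
]

BLOCKED_AUTHORS = {"[email protected]"}
MUTED_TERMS = {"abuse"}


def visible(post: dict) -> bool:
    if post["author"] in BLOCKED_AUTHORS:
        return False
    return not any(term in post["content"].lower() for term in MUTED_TERMS)


print([p["content"] for p in posts if visible(p)])  # -> ['morning, fedi!']
```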
If the goal is privacy so that people who aren’t in the community don’t know that you’re in the community, and don’t know what the community is even talking about, I’m skeptical that it’s practical. Especially for a decentralized network, I think that the sacrifices needed to make this happen would make the social network unappealing to users. For example, you’d need to make it invite only and restrict who can invite, or turn off any kind of discovery so that you can’t find people who aren’t already in your circle. At that point you might as well just use a group chat.
They’re related. Often, the ability to limit your audience is about making it non-trivial for harassers to access your content rather than impossible.
That’s not the goal. The goal is to make a community that lets vulnerable folk communicate whilst keeping the harassment to a manageable level and making the sensitive content non-trivial to access for random trolls and harassers.
It’s not about stopping dedicated individuals, because they can’t be stopped in this sort of environment for all the reasons you point out. It’s about minimising harassment from the random drive-by bigots.
Hmmm I think I understand the intent. I’ll have to think on it some more.
My gut tells me that protecting people from drive-by bigotry is antithetical to content/community discovery. And what is a social network without the ability to find new communities to join or new content to see?
Perhaps something like reddit where they can raise the bar for commenting/posting until you’ve built up karma within the community? That’s not a privacy thing though.
What would this look like to you, and how does it relate to privacy? I’ve got my own biases that affect how I’m looking at the problem, so I’d be interested in getting another perspective.
You’re thinking about this in an all-or-nothing way. A community in which everyone and everything they post is open to everyone isn’t safe.
A community in which no one can find members or content unless they’re already connected to that community stagnates and dies.
A community where some content and some people are public and where some content and some people are locked down is what we need. Though it’s imperfect, things like authorised fetch bring us closer to that, and that’s the niche that future security improvements on the Fediverse need to address.
No one is looking for perfect, at least not in this space.
I don’t think I’m looking for perfect, I’m looking for “good enough” and while authorized fetch is better than nothing, it’s nowhere near “good enough” to be calling anything “private”.
I’m thinking that maybe we need to reevaluate or reconsider what it looks like to protect people from harassment, in the context of social media. Compare that to how we’re currently using half-functional privacy tools to accomplish it.
I’m not saying existing features are good enough.
I’m saying that they’re better than the alternative that started this conversation.
“Just loudly proclaim that everything is public but clients can filter out shit you don’t wanna see”
That’s what Twitter does right now. It’s also a hate filled cesspit.
The Fediverse though, even though it has hate filled cesspits, gives us tools that put barriers between vulnerable groups and those spaces. The barriers are imperfect, they have holes and can be climbed over by people who put the effort in, but they still block the worst of it.
Right, but what I’m saying is that the problem of privacy is different from the problem of harassment.
I’m not saying that we should give up on anti-harassment tools, just that I think that anti-harassment tools that are bolted onto privacy tools cannot work because those privacy tools will be hamstrung by necessity, and I think there must be better solutions.
Having people think that there is privacy on a social network causes harm, because people change their behavior based on the unfulfilled expectation of privacy. I suspect there is a way to give up privacy and also solve the problem of harassment. That solution doesn’t have to look like Twitter, but I have my own biases that may negatively affect how my ideas would work in practice.
I’m asking you
There’s no such thing. They are mutually exclusive. Take queer folk for example. We need privacy to be able to talk about our experiences without outing ourselves to the world. It’s especially important for queer kids, and folk that are still in the closet. If they don’t have privacy, they can’t be part of the community, because they open themselves to recognition and harassment in offline spaces.
With privacy, they can exist in those spaces. It won’t stop a dedicated harasser, but it provides a barrier and stops casual outing.
An “open network” where everyone can see everything puts the onus on the minority person. Drive-by harassers exist in greater numbers than a vulnerable person can cope with, and when their content is a simple search and a throwaway account away from abuse, it means the vulnerable person won’t be there. Blocking them after the fact means nothing.