As a follow-up to this post in this community: The Future is NOT Self-Hosted
I have thought about how to set up local, community-hosted fediverse servers that respect privacy and anonymity while still guaranteeing that the users joining the server are human beings.
The reasoning behind these requirements is that:
- You want anonymity to guarantee that people won’t face real-life repercussions for the opinions they voice on the internet. (freedom of speech)
- You want to keep the fediverse human, i.e. make sure that bot accounts stay in the minority.
This might sound like an impossible and self-contradictory set of constraints, but it is indeed possible. Here’s how:
Have the local library set up a fediverse server. Once a month, there’s a “crypto party” where participants throw a piece of paper with their fediverse account name into a box. The box is then closed and shaken to mix all the tokens in it. Then each token is drawn out, and the library confirms that the account name is indeed connected to a human. Since humans have to be physically present to throw in a paper, it is guaranteed that no bot army can just open a hundred anonymous accounts. And because the box is shuffled before the draw, the papers are not associated with any particular person.
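The shuffle-and-verify step could be sketched as a toy simulation. Everything here is illustrative: the `crypto_party` function, the account names, and the "verified-human" label are all made up for the example; the point is only that a cryptographically strong shuffle breaks the link between submission order and draw order.

```python
import secrets

def crypto_party(submitted_tokens):
    """Simulate the monthly 'crypto party': account-name tokens go
    into a box, the box is shuffled, then each token is drawn and
    verified with no link back to who threw it in."""
    box = list(submitted_tokens)
    drawn = []
    # Draw tokens in an unpredictable order using a CSPRNG, so the
    # draw order reveals nothing about the submission order.
    while box:
        drawn.append(box.pop(secrets.randbelow(len(box))))
    # The library marks each drawn account name as human-verified.
    return {name: "verified-human" for name in drawn}

accounts = ["@alice@lib.example", "@bob@lib.example", "@carol@lib.example"]
print(crypto_party(accounts))
```

The physical version does the same thing with paper: presence at the party is the proof-of-humanity, and the box is the shuffle.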
Today, yes. But it’s a very simplistic worldview to assume that AI won’t become less distinguishable from humans when writing applications in the future, and I don’t expect it to hold true.
Can’t help but think about this old XKCD from 2010.
You can just make the questions more location- or theme-specific. There is no way a bot won’t slip up on stuff like that, and it doesn’t need to be 100% foolproof either.
We get a lot of LLM bot applications on our instance, and even if it got 10x harder, they would still be really easy to spot.
What if we explicitly tell the user to write something like “I am not a robot and I am creating this account”, along with a bunch of slurs or some freaky stuff? AI will never write that stuff.