Every day we’re inching towards the cyberpunk timeline. Blackwall
A lot of focus is on how LLMs are severing the connection between the user and the source of info. Something I would say is equally if not more concerning is that public services and websites offering free public access to information are being obliterated by LLM agents scraping with absolutely no regard for the site. It’s not just little operations either. Meta takes an approach that I would consider malicious and a threat to anyone hosting anything on the open web as a whole.
It’s intentional; they want their AI to be the only source of information.
The age of ~~information~~ misinformation

The age of ~~information~~ ~~misinformation~~ disinformation

That is actually more accurate.
Makes me wonder if the future of the internet is federated network hardware. There are already efforts in bigger cities to distribute mesh networks (especially to lower-income areas), so it doesn’t seem like a far leap to create an internet by users and for users.
future of the internet is federated network hardware
At the very least, that was its past. It was built as a nuke-resilient self-healing early mesh concept.
Then it got commercialized and peers couldn’t trust each other so much, so we have the drunken-starfish setup we have now.
I know it sucks, but drunken starfish got me lol
Is there anything that prevents a tech bro buying the hardware and accessing the network to post with their LLM the way they do with the internet today?
Are there any current examples of that?
NYC Mesh!
There’s likely others, but this one has been around for about a decade and is still operational.
This is cool to see, are there any examples in the Los Angeles City/County areas? Not sure how common place backups like this are, but it’s a smart idea.
You’re gonna wanna sit down for this…
The title is a bit misleading. The issue here is data scrapers, period. They aren’t being deployed by an AI, and the bots don’t have any AI algorithms in them. It’s stuff deployed by people, using traditional data scraping algorithms.
It’s just the presumed destination of the data being scraped, combined with the fact that the people who run these bots tend to never respect consent, that makes it read as an AI problem.
Yeah, that makes sense.
Also, the Internet is like a living thing, it changes constantly, that’s its nature. Essentially every day is the end of the Internet as we know it, it’s always something else the next day.
Your definition is so vague that it is quite useless in any descriptive context.
My definition of change?
We (soon) have a proofread internet for the paying class.
If all they can do is push ads and slop software then yeah, they suck.
openDemocracy’s website has been repeatedly brought down by an army of bots.
This website uses cookies to give you the best experience.
See, from my point of view, the problem is that the website is up, not that it was brought down?