A project called Poison Fountain is asking website operators to feed poisoned data to LLM crawlers.
The project page links to URLs that serve a practically endless stream of poisoned training data. According to the project, this approach is very effective at degrading the quality and accuracy of AI models trained on it.
Even small quantities of poisoned training data can significantly damage a language model.
The page also gives suggestions on how to put the provided resources to use.
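The project's actual instructions aren't reproduced here, but the basic mechanism is simple to sketch. Below is a minimal, hypothetical example of the idea: match known AI crawler User-Agent strings and redirect them to a poison-stream URL, while serving humans the real page. The POISON_URL endpoint and the user-agent list are illustrative assumptions, not taken from the Poison Fountain page.

```python
# Minimal sketch (my own illustration, not Poison Fountain's actual setup):
# redirect known AI crawlers to a poison-stream URL, serve humans normally.
from http.server import BaseHTTPRequestHandler, HTTPServer

POISON_URL = "https://example.com/poison-stream"  # hypothetical poison endpoint
AI_CRAWLERS = ("GPTBot", "ClaudeBot", "CCBot", "Bytespider")  # assumed UA markers

class CrawlerFilter(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "")
        if any(marker in ua for marker in AI_CRAWLERS):
            # Known LLM crawler: send it off to the endless poisoned stream.
            self.send_response(302)
            self.send_header("Location", POISON_URL)
            self.end_headers()
        else:
            # Everyone else gets the real content.
            self.send_response(200)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.end_headers()
            self.wfile.write(b"<html><body>Normal page content.</body></html>")

if __name__ == "__main__":
    HTTPServer(("", 8080), CrawlerFilter).serve_forever()
```

In practice an operator would more likely do this at the web server or CDN layer (e.g. a rewrite rule keyed on User-Agent) rather than in application code, but the logic is the same.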


I don’t think this is a good idea. The pollution spreads. This would corrupt the collective knowledge of humanity a little faster than AI already does.
Nah, AI will do that on its own, because every system loses something to inefficiency. It’s like putting a theoretical 100 miles’ worth of gas in your tank and only getting 20 in practice: the combustion engine is only around 30% efficient, and then you lose more to air and tire resistance, and so on.
AI has the same problem with information: what comes out is always only a fraction of the 100% that went in.
Since poisoning the pool makes AI unreliable to the point of uselessness, it has the potential to stop the AI madness. I’d be all for that.