

The question is: What is an effective legal framework that focuses on the precise harms, doesn’t allow AI vendors to easily evade accountability, and doesn’t inflict widespread collateral damage?
This is entirely my opinion and I’m likely wrong about many things, but at minimum:
- The model has to be open source: freely downloadable, runnable, and copyleft, satisfying the distribution requirements of the copyleft source material it was trained on. (I’m willing to give a free pass and accept it being copyleft in general, since different copyleft licenses can have different and contradictory distribution requirements; IMO the leap from permissive to copyleft is the more important part.) I suspect this alone would kill the AI bubble, because as soon as vendors can’t exclusively profit off it, they won’t see AI as “the future” anymore.
- All training data needs to be freely downloadable and independently hosted by the AI creator. It goes without saying that only material you can legally copy and host on your own server can be used as training data. This solves the IP theft issue: IMO, if your work is licensed such that it can be redistributed in its entirety, it should logically also be okay to use it as training data, and if you can’t even legally host it on your own server, using it to train AI is off the table. The independently hosted dataset (complete with metadata about where it came from) also serves as attribution, since you can then search the training data for creators (see the first sketch after this list).
- Pay server owners for the use of their resources. If you’re scraping for AI, you at the very least need to give server owners a way to send you bills. And no content can be scraped from the original source more than once; see point 2 (and the second sketch after this list).
- Either have a mechanism for tracking acknowledgement and accurately generating references along with the code, or, if that’s too challenging, I’m personally also okay with a blanket policy where anything AI-generated is public domain. The idea that you can use AI-generated code derived from open source in your proprietary app, and then sue anyone who has the audacity to copy your AI-generated code, is ridiculous and unacceptable.
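As a rough illustration of the second point, here’s a minimal sketch of what an independently hosted training-data manifest could look like, where attribution is just a search over the metadata. The field names, licenses, and sample records are all assumptions made up for the example, not any existing format:

```python
# Hypothetical sketch: a published training-data manifest whose entries
# carry source and license metadata, so attribution becomes a search.
# All field names and sample records are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TrainingRecord:
    record_id: str      # stable identifier within the published dataset
    source_url: str     # where the material was originally obtained
    license: str        # license under which it may be redistributed
    creator: str        # author/creator named in the source metadata
    content_path: str   # path to the hosted copy of the material

manifest = [
    TrainingRecord("rec-0001", "https://example.org/post/42",
                   "CC-BY-SA-4.0", "Alice Example", "data/rec-0001.txt"),
    TrainingRecord("rec-0002", "https://example.org/repo/lib",
                   "GPL-3.0-only", "Bob Example", "data/rec-0002.txt"),
]

def records_by_creator(name: str) -> list[TrainingRecord]:
    """Find everything in the training set attributed to a given creator."""
    return [r for r in manifest if r.creator == name]

def redistributable(record: TrainingRecord, allowed: set[str]) -> bool:
    """Only material whose license permits rehosting may enter the dataset."""
    return record.license in allowed

print(records_by_creator("Alice Example"))
```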
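And for the third point, a sketch of the “scrape once, keep a paper trail” idea: the crawler keeps a ledger of URLs it has already fetched so it never hits the origin twice, and it totals bytes per host so there’s something concrete to bill against. The ledger file name, the billing granularity, and the use of plain urllib are assumptions for illustration only:

```python
# Hypothetical sketch of a "scrape once, leave a paper trail" crawler:
# it keeps a ledger of URLs already fetched (nothing is pulled from the
# original source twice) and totals bytes per host so the crawler's
# operator can be billed for the bandwidth used.
import json
import urllib.request
from urllib.parse import urlparse
from collections import defaultdict

LEDGER_PATH = "fetched_urls.json"   # assumed location for the fetch ledger

def load_ledger() -> set[str]:
    try:
        with open(LEDGER_PATH) as f:
            return set(json.load(f))
    except FileNotFoundError:
        return set()

def save_ledger(ledger: set[str]) -> None:
    with open(LEDGER_PATH, "w") as f:
        json.dump(sorted(ledger), f)

def fetch_once(urls: list[str]) -> dict[str, int]:
    """Fetch each URL at most once ever; return bytes transferred per host."""
    ledger = load_ledger()
    bytes_per_host: dict[str, int] = defaultdict(int)
    for url in urls:
        if url in ledger:
            continue  # already archived; never hit the origin server again
        with urllib.request.urlopen(url) as resp:
            body = resp.read()
        bytes_per_host[urlparse(url).netloc] += len(body)
        ledger.add(url)
    save_ledger(ledger)
    return dict(bytes_per_host)
```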

Kind of off topic, but this reminded me of something I really don’t like about the current paradigm: “intelligence” and “knowledge” being parts of a single monolithic model.
Why aren’t we training models on how to search any generic dataset for information, find patterns, and draw conclusions, rather than baking the knowledge itself into the model? 8 or so GB of pure abstract reasoning strategies would probably be far more intelligent and efficient than even the much larger models we have now.

Imagine being able to give it an arbitrarily sized database whose content you control, which you could fill with the highest quality, ethically obtained, human-expert-moderated data, complete with attributions to the original creators, and have it base all its decisions on that. It could even cite what it used via identifiers in the database, which could then be manually verified. You get a concrete foundation for where it’s getting its information from, and you only need to load what it currently needs into memory, whereas right now you have to load all the AI’s “knowledge,” relevant or not, into your precious and limited RAM. You would also be able to update the individual data separately from the model itself and have it produce updated results from the new data. That would actually be what I consider an artificial “intelligence,” and not a fancy statistical prediction mechanism.
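What’s described here is close to retrieval over an external knowledge store: a small reasoning component that queries a database you control and cites the record identifiers it used. Here’s a minimal sketch of that separation, assuming an in-memory SQLite corpus and a deliberately naive keyword-overlap retriever standing in for whatever a real learned retriever would do:

```python
# Minimal sketch of the "reasoning model + external knowledge store" idea:
# the knowledge lives in a database you control, the system only retrieves
# what it needs, and every answer cites the record identifiers it used.
# The keyword-overlap retriever and the sample corpus are assumptions for
# illustration; a real system would use a learned retriever.
import sqlite3

def build_corpus() -> sqlite3.Connection:
    """Create an in-memory knowledge store with attributed records."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE docs (id TEXT PRIMARY KEY, creator TEXT, text TEXT)")
    db.executemany("INSERT INTO docs VALUES (?, ?, ?)", [
        ("doc-1", "Alice Example", "Water boils at 100 degrees Celsius at sea level."),
        ("doc-2", "Bob Example", "The boiling point of water drops at higher altitude."),
    ])
    return db

def retrieve(db: sqlite3.Connection, query: str, k: int = 2):
    """Score records by naive keyword overlap and return the top k."""
    terms = set(query.lower().split())
    rows = db.execute("SELECT id, creator, text FROM docs").fetchall()
    scored = sorted(rows, key=lambda r: -len(terms & set(r[2].lower().split())))
    return scored[:k]

def answer(db: sqlite3.Connection, query: str) -> str:
    """Compose an answer from retrieved records and cite their identifiers."""
    hits = retrieve(db, query)
    cites = ", ".join(h[0] for h in hits)
    return f"Based on the stored records: {hits[0][2]} [sources: {cites}]"

if __name__ == "__main__":
    db = build_corpus()
    print(answer(db, "at what temperature does water boil"))
```

The point of the sketch is just the division of labor: the corpus can be swapped, updated, or audited independently of the retrieval logic, and every output carries identifiers that can be checked against the store by hand.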