The fabled Blahaj cuddle pile.
A bisexual nonbinary poster of memes and other things • They/Any
Numbers going up does feel nice and a post not gaining any traction can be disappointing.
Nice thing about Lemmy is it is so small that local and global new feeds are actually usable. Even new communities with no subscribers can get plenty of views from those alone.
That’s exactly why it needs to be something you are willing to explain. It makes their stories better.
That’s a possibility. I would be concerned that the false positive rate is so high compared to the rate of actual CSAM that the FBI would just block anyone using this tool for reporting, treating the reports as spam.
What might be done is to track the detection rate of users. If anyone is significantly higher than the average they might be uploading CSAM. Only issue I see with this is the detector doesn’t have an equal false positive rate across all content. It could be that the user just uploads pictures of their kids playing at the park a lot.
A 1% false positive rate is probably going to be too high to reliably report every positive to the FBI. The rate of actual CSAM is likely to be much lower than that. If it’s 1 in 10,000 uploads, you will have roughly 100 false positives for every 1 true positive.
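The base-rate arithmetic above can be sketched out directly. This assumes the illustrative figures from the comment (1% false positive rate, 1 real positive per 10,000 uploads) and, for simplicity, a detector that catches every true positive:

```python
# Base-rate sketch using the assumed figures from the comment above:
# 1% false positive rate, 1 actual positive per 10,000 uploads,
# and a detector that catches every true positive (simplifying assumption).
uploads = 10_000
false_positive_rate = 0.01
true_positives = 1                      # actual CSAM among the uploads
clean_uploads = uploads - true_positives

# Clean images wrongly flagged: about 100 out of ~10,000
false_positives = false_positive_rate * clean_uploads

# Fraction of all flagged uploads that are real: about 1 in 101
precision = true_positives / (true_positives + false_positives)

print(f"false positives: {false_positives:.0f}")   # ~100
print(f"precision: {precision:.1%}")               # ~1.0% of reports are real
```

So even with a detector that never misses, roughly 99 out of every 100 reports would be false alarms, which is why forwarding every hit automatically would likely get treated as spam.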
License changes that will charge developers based on how many installs their game has instead of how many units sold.
I believe that’s officially a variant rule. The system itself works fine without a grid. It can be done completely in the theater of the mind.
The grid is just commonly used because it simplifies movement and positioning greatly.
A YouTube channel called Truth and Foundation.
I was kind of thinking something similar. How close would you be willing to physically get to him knowing that at any moment there might be an assassination attempt?
They’ll just keep giving you shots of Ambien until you stop.
If you want to see how weird it can get, look at blindsight. Your consciousness can be blind but your body can still react to visual stimuli.
Stop looking inside me. It’s mostly just meat.
You are kind of hitting on one of the issues I see. The model and the works created by the model may be considered two separate things. The model may not be infringing in and of itself. It’s not actually substantially similar to any of the individual training data. I don’t think anyone can point to part of it and say this is a copy of a given work. But the model may be able to create works that are infringing.
That is not actually one of the criteria for fair use in the US right now. Maybe that’ll change, but it’ll take a court case or legislation to do so.
NPR reported that a “top concern” is that ChatGPT could use The Times’ content to become a “competitor” by “creating text that answers questions based on the original reporting and writing of the paper’s staff.”
That’s something that can currently be done by a human and is generally considered fair use. All a language model really does is drive the cost of doing that from tens or hundreds of dollars down to pennies.
To defend its AI training models, OpenAI would likely have to claim “fair use” of all the web content the company sucked up to train tools like ChatGPT. In the potential New York Times case, that would mean proving that copying the Times’ content to craft ChatGPT responses would not compete with the Times.
A fair use defense does not have to include noncompetition. That’s just one factor in a fair use defense, and the other factors may be enough on their own.
I think it’ll come down to how “the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes” and “the amount and substantiality of the portion used in relation to the copyrighted work as a whole” are interpreted by the courts. Do we judge a language model by the model itself or by its output? Can a model itself be non-infringing while still being able to produce infringing content?
Whenever you see a post. And remember when you post it shows you your post so you must post after you post.