They’re both nuclear powers.
Oh my god, you suck
You don’t really believe that, do you?
Is this stuff you know or are you guessing?
So what does it do? Cancer?
Voting for Harris is voting for Harris. Voting for Trump is voting for Trump. Voting third party is voting for Trump. Not voting is voting for Trump. Eating spaghetti is voting for Trump. Why won’t you just vote blue!?
Currently it’s just a Lemmy client. It’ll be cool to watch the development, but at present I don’t see how it’s any better than Voyager. At least on mobile, the interface has a lot more dead space than Voyager’s.
Is that why people that smoke every day act like fucking children?
Man that last paragraph is kind of a train wreck isn’t it?
Militants like the Taliban?
Maybe I misunderstood the OP? Idk
People sometimes act like the models can only reproduce their training data, which is what I’m saying is wrong. They do generalise.
During training the models learn to predict the next word, but after training the network is effectively interpolating between the training examples it has memorised. This interpolation doesn’t happen in text space, though, but in a very high-dimensional abstract semantic representation space, a ‘concept space’.
Now imagine that you have memorised two paragraphs that occupy two points in concept space. And then you interpolate between them. This gives you a new point, potentially unseen during training, a new concept, that is in some ways analogous to the two paragraphs you memorised, but still fundamentally different, and potentially novel.
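To make that concrete, here’s a toy sketch (mine, not from any paper): the 4-dimensional vectors and the concept names are made-up stand-ins for a model’s actual learned, very high-dimensional embeddings, and real models interpolate implicitly rather than through an explicit lerp like this:

```python
import numpy as np

# Toy 'concept space': hypothetical hand-picked embeddings.
# Real models learn these, with thousands of dimensions, not 4.
concepts = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.9, 0.1, 0.8, 0.0]),
    "royal": np.array([0.9, 0.45, 0.45, 0.0]),
    "apple": np.array([0.0, 0.1, 0.0, 0.9]),
}

def interpolate(a, b, t=0.5):
    """Linear interpolation between two points in concept space."""
    return (1 - t) * a + t * b

def nearest(point):
    """Concept whose embedding has the highest cosine similarity to point."""
    cos = lambda u, v: u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return max(concepts, key=lambda k: cos(concepts[k], point))

# Interpolating between two memorised points lands on a third concept
# that was neither of the inputs.
midpoint = interpolate(concepts["king"], concepts["queen"])
print(nearest(midpoint))  # -> 'royal'
```

The point of the toy: the midpoint is a perfectly valid point in the space even though it was never an input, which is the sense in which interpolation can give you something ‘new’.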
Not an ELI5, sorry. I’m an AI PhD, and I want to push back against the premises a lil bit.
Why do you assume they don’t know? Like what do you mean by “know”? Are you talking about conscious subjective experience? Or consistency of output? Or an internal world model?
There’s lots of evidence to indicate they are not conscious, although they can exhibit theory of mind. Eg: https://arxiv.org/pdf/2308.08708.pdf
For consistency of output and internal world models, however, there is mounting evidence to suggest convergence on a shared representation of reality. Eg this paper published 2 days ago: https://arxiv.org/abs/2405.07987
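For a feel of how that kind of convergence gets quantified: one standard tool is linear CKA (centered kernel alignment), which scores how similar the geometry of two models’ embeddings of the same inputs is, ignoring rotation and scale. A minimal synthetic sketch of the idea, not necessarily the metric used in that paper:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two representation
    matrices of shape (n_samples, dim). 1.0 means identical geometry."""
    X = X - X.mean(axis=0, keepdims=True)  # centre across samples
    Y = Y - Y.mean(axis=0, keepdims=True)
    num = np.linalg.norm(X.T @ Y, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
shared = rng.normal(size=(100, 64))  # latent structure both 'models' capture
q1, _ = np.linalg.qr(rng.normal(size=(64, 64)))
q2, _ = np.linalg.qr(rng.normal(size=(64, 64)))
emb_a = shared @ q1                                     # 'model A': rotated latent
emb_b = shared @ q2 + 0.1 * rng.normal(size=(100, 64))  # 'model B': rotated + noise
print(linear_cka(emb_a, emb_b))  # ~1.0: same geometry despite different bases
```

Two models with different architectures and training data scoring high on metrics like this is the kind of result people point to as “convergent representations”.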
The idea that these models are just stochastic parrots that only probabilistically repeat their training data isn’t correct, although it is often repeated online for some reason.
A little evidence that comes to my mind is this paper showing models can understand rare English grammatical structures even if those structures are deliberately withheld during training: https://arxiv.org/abs/2403.19827
“The new book has, for lack of a better term, completely screwed over my fanfiction”
“These books live and die by their fanfiction. For the authors not to coordinate with the fanfic writers is a disgrace”
" Several of the researchers are associated with public security authorities in China, a fact that “voids any notion of free informed consent”, said Yves Moreau, a professor of engineering at the University of Leuven, in Belgium, who focuses on DNA analysis. Moreau first raised concerns about the papers with Hart, MGGM’s editor-in-chief, in March 2021.
One retracted paper studies the DNA of Tibetans in Lhasa, the capital of Tibet, using blood samples collected from 120 individuals. The article stated that “all individuals provided written informed consent” and that work was approved by the Fudan University ethics committee.
But the retraction notice published on Monday stated that an ethical review “uncovered inconsistencies between the consent documentation and the research reported; the documentation was not sufficiently detailed to resolve the concerns raised”. "
Weird. So they had written consent forms for the blood samples, but the forms weren’t detailed enough(?), and anyway you can’t trust anyone associated with the Chinese gvmt? Is that what they’re saying?
This seems like weird reactionary virtue signalling.
At least one of them is.