It stands to reason that if you have access to an LLM’s training data, you can influence what’s coming out the other end of the inscrutable AI’s network. The obvious guess is that…
They’ve been doing it since the start. OpenAI fearmongered about how dangerous GPT-2 was as an excuse to avoid releasing the weights, while simultaneously working on much larger models with the intent to commercialize them. The whole “our model is so good even we’re scared of it” shtick has always been marketing, or an excuse to keep secrets.
Even now they continue to use this tactic while actively downplaying their own research showing real social, environmental, and economic harms.