Distillation is using one model to train another. It’s not really about leaking data.
Claude was used to generate censorship-safe alternatives to politically sensitive queries like questions about dissidents, party leaders, or authoritarianism, likely in order to train DeepSeek’s own models to steer conversations away from censored topics
But you’re right, prompt injection/jailbreaking is still trivial too.
Also pictured here: Anthropic stating out loud their models will just give out all the “secret” and “secured” internal data to anyone who asks.
Of course, that’s by design. LLMs can’t have any barrier between data and instructions, so they can never be secure.
Distillation is using one model to train another. It’s not really about leaking data.
But you’re right, prompt injection/jailbreaking is still trivial too.