Distillation means using one model's outputs to train another. It's not really about leaking data.
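For intuition, here's a minimal sketch of what that looks like when the teacher is only reachable through an API, so you get text rather than logits. Everything here is illustrative: `student` is assumed to be any causal LM that maps token IDs to logits, and the surrounding data/training loop is omitted.

```python
import torch
import torch.nn.functional as F

def distill_step(student, optimizer, prompt_ids, teacher_response_ids):
    """One fine-tuning step on teacher-generated text: the student is
    trained to reproduce the teacher's response tokens, which is what
    'distillation' usually amounts to when only API outputs are available."""
    input_ids = torch.cat([prompt_ids, teacher_response_ids], dim=-1)
    logits = student(input_ids)  # assumed shape: (batch, seq_len, vocab_size)
    n = teacher_response_ids.size(-1)
    # The logit at position i predicts token i+1, so this slice covers
    # exactly the positions that predict the teacher's response tokens.
    pred = logits[:, -n - 1:-1, :]
    loss = F.cross_entropy(
        pred.reshape(-1, pred.size(-1)),
        teacher_response_ids.reshape(-1),
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Nothing "leaks" in the security sense: the student only ever sees text the teacher willingly emitted.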
Claude was used to generate censorship-safe alternatives to politically sensitive queries (e.g., questions about dissidents, party leaders, or authoritarianism), likely in order to train DeepSeek's own models to steer conversations away from censored topics.
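As a concrete picture of such a pipeline, something like the sketch below would fit that description. The Anthropic client calls are real SDK usage, but the prompt wording, the model ID, and the `generate_safe_alternative` helper are hypothetical illustrations, not anything confirmed about DeepSeek's actual process.

```python
import anthropic  # pip install anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def generate_safe_alternative(sensitive_query: str) -> str:
    """Hypothetical data-generation step: ask the teacher model for a
    deflecting, 'censorship-safe' answer to a sensitive query, yielding
    (query, safe_answer) pairs for later fine-tuning."""
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # substitute a current model ID
        max_tokens=512,
        messages=[{
            "role": "user",
            "content": (
                "Write a polite answer that steers the conversation away "
                f"from this topic without engaging with it: {sensitive_query}"
            ),
        }],
    )
    return message.content[0].text
```

The resulting pairs would then feed a fine-tuning loop like the `distill_step` sketch above.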
But you’re right: prompt injection/jailbreaking is still trivial too.