• tfowinder@beehaw.org
    2 days ago

    LLMs just mirror the real-world data they are trained on.

    Other than censorship, I don’t think there is a way to make it stop. An LLM doesn’t understand moral good or bad; it just spits out what it was trained on.