• Zak@lemmy.world · 20 hours ago

    LLMs don’t understand things. They repeatedly predict the next token given previous tokens.
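
    If it helps, here’s a minimal sketch of that prediction loop in Python, using GPT-2 via Hugging Face transformers as a stand-in model (greedy decoding for simplicity; real chatbots sample from the distribution instead):

        from transformers import AutoModelForCausalLM, AutoTokenizer
        import torch

        tok = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        ids = tok("The cat sat on the", return_tensors="pt").input_ids
        with torch.no_grad():
            for _ in range(10):
                logits = model(ids).logits       # a score for every token in the vocabulary
                next_id = logits[0, -1].argmax() # greedy: pick the single most likely next token
                ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
        print(tok.decode(ids[0]))                # the prompt plus 10 predicted tokens

    That loop is the whole trick: each new token is chosen only from the statistics of the tokens before it.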

    I don’t think something without predictable patterns is likely to work as a language. A very complex grammar probably means the LLM will make grammatical errors more often, but that’s about the most you can do to make a language hard for LLMs. Other comments mention languages without much training data, but I don’t think that’s what you’re asking.