We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.

Then retrain on that.

Far too much garbage in any foundation model trained on uncorrected data.


  • theneverfox@pawb.social · 6 hours ago

    The other thing that he doesn’t understand (and most “AI” advocates don’t either) is that LLMs have nothing to do with facts or information. They’re just probabilistic models that pick the next word(s) based on context.

    That’s a massive oversimplification; it’s like saying humans don’t remember things, we just have neurons that fire based on context.

    LLMs do actually “know” things. They work on tokens and weights, which are the nodes and edges of a high-dimensional graph, and the LLM traverses this graph as it processes inputs and generates new tokens.

    You can do brain surgery on an LLM and change what it knows, and we have a good understanding of how this works. You can change a single link and the model will believe the Eiffel Tower is in Rome, and it’ll describe how you have a great view of the Colosseum from the top.
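    A toy sketch of what that “brain surgery” looks like mathematically, under the common framing from the model-editing literature: treat one layer’s weight matrix as a key-to-value associative memory and apply a rank-one update so that one chosen key (say, the activation pattern for “Eiffel Tower”) maps to a new value, while everything orthogonal to it is left alone. The dimensions and vectors below are made up for illustration; this is not any real model’s weights.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d = 8  # toy hidden dimension

    # W: one layer's weight matrix, viewed as a key -> value memory
    W = rng.normal(size=(d, d))

    k = rng.normal(size=d)      # key: stand-in activation for "Eiffel Tower"
    v_new = rng.normal(size=d)  # value we want it to map to ("is in Rome")

    # Rank-one edit: W' = W + (v_new - W k) k^T / (k . k)
    # This guarantees W' @ k == v_new exactly.
    W_edit = W + np.outer(v_new - W @ k, k) / (k @ k)
    assert np.allclose(W_edit @ k, v_new)

    # Any direction orthogonal to k is untouched, so unrelated
    # "knowledge" stored in the same matrix is preserved.
    q = rng.normal(size=d)
    q -= (q @ k) / (k @ k) * k  # project out the k component
    assert np.allclose(W_edit @ q, W @ q)
    ```

    Real editing methods have to first locate which layer and which key direction encode the fact, which is where most of the hard research is; the update itself really is this simple.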

    The problem is that it’s complicated; researchers are currently developing new math to let us make these edits in a useful, reliable way.