I assume they all crib from the same training sets, but surely one of the billion-dollar companies behind them can make their own?

  • kromem@lemmy.world · 1 day ago

    It’s more like they’re sophisticated world-modeling programs: they build a world model (or an approximate “bag of heuristics”) of the state of the provided context and the kind of environment that produced it, and then use that model to extend the context one token at a time.
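    A rough sketch of what that “one token at a time” interface looks like from the outside (the model name is an illustrative stand-in, not a claim about any specific product; any causal LM works the same way, and all the interesting modeling happens inside the forward pass):

    ```python
    # Minimal autoregressive decoding loop. The outer loop really is
    # "pick one next token, append, repeat"; everything described above
    # as world modeling happens inside model(...) when it produces the
    # distribution over the next token.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # illustrative stand-in
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    ids = tokenizer("The capital of France is", return_tensors="pt").input_ids
    for _ in range(10):
        logits = model(ids).logits[:, -1, :]    # scores for every possible next token
        next_id = torch.argmax(logits, dim=-1)  # greedy choice (sampling also works)
        ids = torch.cat([ids, next_id[:, None]], dim=-1)

    print(tokenizer.decode(ids[0]))
    ```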

    But the models have been found to plan further ahead than just the next token, and they have all sorts of wild internal mechanisms for modeling the text context, like building full board states to predict board game moves in Othello-GPT, or the number-comparison helices in Haiku 3.5.
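    The Othello-GPT finding comes from probing: you cache the network’s hidden activations and train a small classifier to read the board state back out of them. Here’s a minimal sketch of that technique on synthetic data (in the real experiments, X is the model’s cached activations and y the ground-truth board squares; both are faked here just to show the mechanics):

    ```python
    # Linear probing sketch: if a tiny classifier can recover a property
    # from hidden activations alone, the network is representing that
    # property internally.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n, d = 2000, 512                      # (examples, activation width)
    direction = rng.normal(size=d)        # pretend "this square is occupied" feature
    X = rng.normal(size=(n, d))           # stand-in for cached activations
    y = (X @ direction > 0).astype(int)   # stand-in for true board-square state

    probe = LogisticRegression(max_iter=1000).fit(X[:1500], y[:1500])
    print("held-out probe accuracy:", probe.score(X[1500:], y[1500:]))
    ```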

    The popular reductive “next token” rhetoric is pretty outdated at this point, and is kind of like saying that a calculator just takes numbers that correlate with button presses and displays different numbers on a screen. While yes, technically correct, it glosses over a lot of important complexity between those two steps, and that omission makes the overall explanation misleading.

    • khepri@lemmy.world · 1 day ago (edited)

      I like the analogy. I have a lot of trouble explaining to people that LLMs are anything more than just a “most likely next token” predictor, because that is exactly what an LLM is, but saying it that way is so abstract that it abstracts away everything that’s actually interesting about them lol. It’s like saying a computer is “just” a collection of switches that can be 1 or 0. Which, yeah, at the base level, not wrong, but also not all that useful to someone actually curious about what they are and what they can do.
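      To make the “just switches” point concrete, here’s a one-bit full adder built from nothing but NAND, the classic universal gate. Every claim in it is true at the base level, and it still tells you almost nothing about what a computer can actually do, which is the same problem with the “next token” framing:

      ```python
      # "A computer is just switches that can be 1 or 0": a one-bit full
      # adder composed entirely from NAND. True, and almost uninformative
      # about anything a computer actually does.
      def nand(a: int, b: int) -> int:
          return 0 if (a and b) else 1

      def xor(a, b):
          c = nand(a, b)
          return nand(nand(a, c), nand(b, c))

      def full_adder(a, b, carry_in):
          partial = xor(a, b)
          total = xor(partial, carry_in)
          carry_out = nand(nand(a, b), nand(partial, carry_in))
          return total, carry_out

      print(full_adder(1, 1, 1))  # (1, 1): one plus one plus one is binary 11
      ```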