• skarn@discuss.tchncs.de
    4 days ago

    It’s still leagues ahead of LLMs. I’m not saying it’s entirely impossible to build a computer that surpasses the human brain in actual thinking. But LLMs ain’t it.

    The feature set of the human brain is different, in a way you can’t compensate for just by scaling up. So you get something that almost works, but not quite, while burning several orders of magnitude more power.

    We optimize and learn constantly. We have chunking, whereby a complex idea becomes simpler for our brain once it’s been processed a few times, and this allows us to progressively work on more and more complex ideas without an increase in our working memory. And a lot of other stuff.
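
    A toy sketch of the working-memory point (nothing to do with how the brain actually stores anything, and the digit example is made up): once raw items have been practiced into familiar chunks, the same handful of memory “slots” covers far more content.

    ```python
    # Toy illustration of chunking: the same small working-memory budget
    # covers more raw content once pieces are grouped into familiar chunks.

    raw_digits = list("1491625364964")                          # 13 separate items
    grouped    = ["1", "4", "9", "16", "25", "36", "49", "64"]  # 8 chunks: perfect squares
    practiced  = ["squares of 1 through 8"]                     # 1 chunk, after enough exposure

    for label, items in [("raw digits", raw_digits),
                         ("grouped chunks", grouped),
                         ("practiced idea", practiced)]:
        print(f"{label:15s} -> {len(items)} items to hold in working memory")
    ```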

    If you spend enough time using LLMs, you can’t help noticing how differently they work from the way you do.

    • Zos_Kia@lemmynsfw.com
      3 days ago

      I think the moat is that when a human is born and their world model starts “training”, it’s already pre-trained by millions of years of evolution. Instead of starting from random weights the way an artificial neural network does, it starts with something usable: lessons from scenarios the individual may never encounter, but still benefits from.
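
      To make that concrete with a toy analogy (made-up numbers and a trivial task, not a claim about brains or actual LLM training): plain gradient descent needs far fewer updates when it starts near a working solution than when it starts from nothing, which is roughly the head start evolution provides.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      # Toy task: recover w such that y = X @ w, using plain gradient descent.
      X = rng.normal(size=(200, 8))
      true_w = rng.normal(size=8)
      y = X @ true_w

      def steps_to_fit(w, lr=0.05, tol=1e-3, max_steps=10_000):
          """Count gradient steps until mean squared error drops below tol."""
          for step in range(max_steps):
              grad = X.T @ (X @ w - y) / len(X)
              w = w - lr * grad
              if np.mean((X @ w - y) ** 2) < tol:
                  return step
          return max_steps

      blank_slate = steps_to_fit(np.zeros(8))                        # "newborn network": empty start
      inherited   = steps_to_fit(true_w + 0.1 * rng.normal(size=8))  # start near a working solution

      print(f"from scratch:   {blank_slate} steps")
      print(f"inherited init: {inherited} steps")
      ```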