• JcbAzPx@lemmy.world · 2 days ago

    Also not likely in the lifetime of anyone alive today. It’s a much harder problem than most want to believe.

    • Modern_medicine_isnt@lemmy.world · 2 days ago

      Everything is always 5 to 10 years away until it happens. AGI could happen any day in the next 1,000 years. There is a good chance you won’t see it coming.

      • jj4211@lemmy.world · 2 days ago

        Pretty much this. LLMs came out of left field, going from nothing to what they are now very quickly.

        I’d expect the same of AGI: not correlated with who spent the most or who is best at LLMs. It might happen decades from now or in the next couple of months. It’s a breakthrough that is just going to come out of left field when it happens.

        • JcbAzPx@lemmy.world · edited · 2 days ago

          LLMs weren’t out of left field. Chatbots have been in development since at least the ’90s, probably longer, and word prediction has been around for at least a decade. People just don’t pay attention until something is commercially available.

            • scratchee@feddit.uk · 12 hours ago

            Modern LLMs were a left-field development.

            Most AI research has hit serious and obvious scaling problems: an approach does well at first, but scaling up the training doesn’t significantly improve the results. LLMs went from more of the same to a gold rush the day it was shown that they scaled “well” (relatively speaking); the rough empirical form is sketched at the end of this comment. They then went through orders-of-magnitude improvements very quickly, simply because they could (unlike previous approaches, which wouldn’t have benefited from that kind of scale-up).

            We’ve had chatbots for decades, but they had the same low capability ceiling as most other old techniques; they really were a different beast from modern LLMs and their stupidly excessive training regimes.
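
            For the curious, the “scaled well” part is usually summarized by the empirical fit from Hoffmann et al. 2022 (the Chinchilla paper), which models test loss L as a smooth power law in parameter count N and training tokens D, with E, A, B and the exponents as fitted constants (this is the commonly cited result, not something specific to this thread):

                L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}

            Loss keeps falling predictably as N and D grow, which is what made the scale-up gold rush look rational in a way it never did for earlier techniques.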