Is AI’s hype cycle leading us into a trough of disillusionment? Dive into the reality behind GPT-5’s launch and its impact on the tech world.

A recent MIT report on AI in business found that 95 percent of generative-AI deployments in business settings generated “zero return.”

  • jacksilver@lemmy.world · 24 points · 4 days ago

    That’s the issue: “AI” right now means LLMs, not deep learning/ML.

    The Deep Learning/ML stuff will keep chugging along.

    • chrash0@lemmy.world · +7/−8 · edited · 3 days ago

      but LLMs do represent a significant technological leap forward. i also share the skepticism that we haven’t “cracked AGI” and that a lot of these products are dumb. i think another comment made a better analogy to the dotcom bubble.

      ETA: i’ve been working in ML engineering since 2019, so i can sometimes forget most people didn’t even hear about this hype train until ChatGPT, but i assure you inference hardware and dumb products were picking up steam even then (Tesla FSD being a classic example).

      • jacksilver@lemmy.world · 13 points · 3 days ago

        We definitely haven’t cracked AGI, that’s without a doubt.

        But yeah, LLMs are big (I’d say Transformers were the real breakthrough). My point, though, was that Deep Learning is the underlying technology driving all of this, and we certainly haven’t run out of ideas in that space even if LLMs may be hitting a dead end.

        • CheeseNoodle@lemmy.world · 2 points · 2 days ago

          I feel like LLMs didn’t hit a dead end so much as people started trying to use them in completely inappropriate applications.

          • jacksilver@lemmy.world · 2 points · 2 days ago

            I think it’s a mixture of that and the fact that when OpenAI saw how drastically throwing more data at the models improved them, they assumed they would continue to see jumps like that.

            However, we now know that bad benchmarks were misleading about how steep the improvements were, and, much like with autonomous vehicles, solving 90% of the problem is still a far cry from solving 100%.