Bad news, baby. The New Yorker reports the rapid advance of AI in the workplace will create a “permanent underclass” of everyone not already hitched to the AI train.

The prediction comes from OpenAI employee Leopold Aschenbrenner, who claims AI will “reach or exceed human capacity” by 2027. Once it develops capacity to innovate, AI superintelligence will supersede even a need for its own programmers … and then wipe out the jobs done by everyone else.

Nate Soares, winner of “most sunshine in book title” and co-author of the AI critique If Anyone Builds It, Everyone Dies, suggests “people should not be banking on work in the long term”. Math tutors, cinematographers, brand strategists and journalists are quoted by the New Yorker, freaking out.

The consolation here is that if you are among those panicking about being forced into the permanent underclass, you are already in it. Inherited wealth makes more billionaires than entrepreneurship, and the opportunity gap is growing; if your family don’t have the readies to fund your tech startup, media empire or eventual presidential ambitions, it’s probably because they were in a tech-displaced underclass, too.

  • tal@lemmy.today · edited · 16 hours ago

    The prediction comes from OpenAI employee Leopold Aschenbrenner, who claims AI will “reach or exceed human capacity” by 2027.

    I suppose that it depends on the metric you’re using. There are some tasks at which humans are outperformed now.

    But I am pretty comfortable saying that come January 2027, the great bulk of what humans do will still be beyond existing AI.

    We aren’t going to just tweak an existing LLM somewhere slightly, throw a bit more hardware at it, and get general intelligence.

    • Powderhorn@beehaw.org (OP) · 15 hours ago

      As education and expectations of humans decline, LLMs may come to look like an “improvement” over human drones in the future, not because the tech is getting better but because the human baseline is falling.

  • Scrubbles@poptalk.scrubbles.tech · 22 hours ago

    Leopold Aschenbrenner (born 2001 or 2002[1]) is a German artificial intelligence (AI) researcher and investor. He was part of OpenAI’s “Superalignment” team before he was fired in April 2024 over an alleged information leak, which Aschenbrenner disputes. He has published a popular essay called “Situational Awareness” about the emergence of artificial general intelligence and related security risks.[2] He is the founder and chief investment officer (CIO) of Situational Awareness LP, a hedge fund investing in companies involved in the development of AI technology.

    Wikipedia

    So, I’m calling bullshit. I’ve read the papers and kept up with the field. I run AI models myself, and I’ve built my own agents and my own agentic workflows. It keeps coming back to a few big things that, unless there’s suddenly been another massive breakthrough, I don’t see happening.

    • LLMs have already been trained on the vast majority of available data, and they still hallucinate. The scaling literature says it takes exponentially more data for linear gains in performance; to double performance, we’d need something like the square of the current amount of data.
    • LLMs and agentic flows are very cool, and very useful for me. But they’re incredibly unreliable. And it’s just how models work: it’s a black box. You can say “that didn’t work” and it’ll learn next time that it was a bad option, but the error rate is never going to be zero. Businesses are learning (see Taco Bell and several others) that AI is not code. It is not true or false. It’s probably true or probably false. That doesn’t work when you’re putting in an order or deciding how much money to spend.
    • We’ve certainly plateaued with AI for the time being. There will be more things that come out, but until the next major leap we’re pretty much here. GPT-5 proved that: it was mediocre, it was… the next version. They promised groundbreaking, but there just isn’t any more ground to break with current AI. Like I said, agents were kind of the next thing, and we’re already using them.
  • SpikesOtherDog@ani.social · 20 hours ago

    OpenAI employee Leopold Aschenbrenner, who claims AI will “reach or exceed human capacity” by 2027

    Sure, but is that before or after full self driving cars?

  • CaptainBlinky@lemmy.myserv.one · edited · 12 hours ago

    People without money are replaceable. People with money are not. What a stupid statement. MONEY IS WORTH. You could be the most talented engineer in the world, the modern Einstein in science. Unless you have capital or create capital, you’re an insect who will be squashed. I love the idea, but AI and automation are about to make 90% of the world’s population obsolete. And NOBODY is training LLMs on UBI or universal healthcare, LMAO. We’re cooked and reporters know it, but there’s no money in saying it, so even now the media is trying to milk the last possible money from this dying system.

    I never did expect it to be Orwell x Terminator, but here we are. Enjoy the boot on our neck forever! I figure we give it like 6 years before eugenics comes back, because there’s no need for so many proles in the new automated world. We’ll just be resource drags that would be better replaced with renewable machines. Let’s say 2030 before the mass killings begin. Great job, MAGA.

  • MelodiousFunk@slrpnk.net · 21 hours ago

    if you are among those panicking about being forced into the permanent underclass, you are already in it.

    Well, yeah. And it’s been like that for many generations. AI is just the next big thing. But it’s nice to remind people every once in a while.