Bad news, baby. The New Yorker reports that the rapid advance of AI in the workplace will create a “permanent underclass” of everyone not already hitched to the AI train.

The prediction comes from former OpenAI employee Leopold Aschenbrenner, who claims AI will “reach or exceed human capacity” by 2027. Once it develops the capacity to innovate, AI superintelligence will supersede even the need for its own programmers … and then wipe out the jobs done by everyone else.

Nate Soares, winner of “most sunshine in a book title” and co-author of the AI critique If Anyone Builds It, Everyone Dies, suggests “people should not be banking on work in the long term”. The New Yorker quotes math tutors, cinematographers, brand strategists and journalists, all duly freaking out.

The consolation here is that if you are among those panicking about being forced into the permanent underclass, you are already in it. Inherited wealth makes more billionaires than entrepreneurship, and the opportunity gap is growing; if your family don’t have the readies to fund your tech startup, media empire or eventual presidential ambitions, it’s probably because they were in a tech-displaced underclass, too.

  • Scrubbles@poptalk.scrubbles.tech · 22 hours ago

    Leopold Aschenbrenner (born 2001 or 2002[1]) is a German artificial intelligence (AI) researcher and investor. He was part of OpenAI’s “Superalignment” team before he was fired in April 2024 over an alleged information leak, which Aschenbrenner disputes. He has published a popular essay called “Situational Awareness” about the emergence of artificial general intelligence and related security risks.[2] He is the founder and chief investment officer (CIO) of Situational Awareness LP, a hedge fund investing in companies involved in the development of AI technology.

    Wikipedia

    So, I’m calling bullshit. I’ve read the papers and kept up with the field. I run AI models myself, and I’ve built my own agents and agentic workflows. It keeps coming back to a few big things that I just don’t see happening unless they’ve suddenly had another massive breakthrough:

    • LLMs have already been trained on the vast majority of the data out there, and they still hallucinate. The scaling papers say it takes exponentially more data to improve them on a linear trajectory: to double the performance, we’d need something like the current amount of data squared. (See the toy scaling sketch after this list.)
    • LLMs and agentic flows are very cool, and very useful for me. But they’re incredibly unreliable, and that’s just how the models work: they’re black boxes. You can tell one “that didn’t work” and it’ll weight that as a bad option next time, but the error rate is never going to be zero. Businesses are learning (see Taco Bell and several others) that AI is not code. It is not true or false; it’s probably true or probably false. (See the sampling sketch after this list.) That doesn’t work when you’re putting in an order or deciding how much money to spend.
    • We’ve certainly plateaued with AI for the time being. There will be more things that come out, but until the next major leap we’re pretty much here. GPT-5 proved that: it was mediocre, it was… the next version. They promised groundbreaking, but there just isn’t any more ground to break with current AI. Like I said, agents were kind of the next thing, and we’re already using them.
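    A quick way to see the data-scaling point above: if benchmark performance grows roughly with the logarithm of the training data (a common reading of the scaling-law papers, and an assumption here rather than a fitted result), then doubling the score really does mean squaring the data, since log(d²) = 2·log(d). A minimal sketch in Python:

    ```python
    import math

    def perf(tokens: float, k: float = 1.0) -> float:
        # Toy scaling law: performance grows with log(data).
        # The log form and k are illustrative assumptions,
        # not fitted to any real benchmark.
        return k * math.log(tokens)

    d = 10e12                # hypothetical ~10 trillion training tokens
    print(perf(d))           # baseline score
    print(perf(d ** 2))      # squaring the data merely doubles the score
    ```

    Squaring 10 trillion tokens lands around 10²⁶ tokens, far more text than humanity has ever produced, which is the comment’s point.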
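    The probably-true-or-probably-false point falls straight out of how generation works: the model scores candidate outputs and samples from them, so even a strongly preferred answer only comes out with some probability. A toy temperature-scaled sampler (the options and scores here are made up, not from any real model):

    ```python
    import math
    import random

    def sample(logits: list[float], temperature: float = 1.0) -> int:
        # Temperature-scaled softmax sampling, as LLM decoders do.
        scaled = [l / temperature for l in logits]
        m = max(scaled)  # subtract the max for numerical stability
        weights = [math.exp(s - m) for s in scaled]
        return random.choices(range(len(logits)), weights=weights)[0]

    # Toy "order-taking bot" choosing between three completions.
    options = ["add 1 taco", "add 11 tacos", "cancel order"]
    logits = [2.0, 1.5, 0.2]  # hypothetical scores

    for _ in range(5):
        print(options[sample(logits)])  # varies from run to run
    ```

    You can push the probabilities around with prompts and fine-tuning, but there’s no `if` statement underneath that makes the answer true or false.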