In a small room in San Diego last week, a man in a black leather jacket explained to me how to save the world from destruction by AI. Max Tegmark, a notable figure in the AI-safety movement, believes that “artificial general intelligence,” or AGI, could precipitate the end of human life. I was in town for NeurIPS, one of the largest AI-research conferences, and Tegmark had invited me, along with five other journalists, to a briefing on an AI-safety index that he would release the next day. No company scored better than a C+.

The threat of technological superintelligence is the stuff of science fiction, yet it has become a topic of serious discussion in the past few years. Despite the lack of a clear definition—even OpenAI CEO Sam Altman has called AGI a “weakly defined term”—the idea that powerful AI poses an inherent threat to humanity has gained acceptance among respected cultural critics.

Granted, generative AI is a powerful technology that has already had a massive impact on our work and culture. But superintelligence has become one of several questionable narratives promoted by the AI industry, along with the ideas that AI learns like a human, that it has “emergent” capabilities, that “reasoning models” are actually reasoning, and that the technology will eventually improve itself.

I traveled to NeurIPS, held at the waterfront fortress that is the San Diego Convention Center, partly to understand how seriously these narratives are taken within the AI industry. Do AGI aspirations guide research and product development? When I asked Tegmark about this, he told me that the major AI companies were sincerely trying to build AGI, but his reasoning was unconvincing. “I know their founders,” he said. “And they’ve said so publicly.”

  • spit_evil_olive_tips@beehaw.org · 3 points · 19 hours ago

    In a small room in San Diego last week

    I was in town for NeurIPS, one of the largest AI-research conferences, and Tegmark had invited me, along with five other journalists

    congrats to this author on getting a business trip to San Diego during December. I bet it was nice and warm.

    it seems like this is a pretty typical piece of access journalism:

    The place to be, if you could get in, was the party hosted by Cohere…

    With the help of a researcher friend, I secured an invite to a mixer hosted by the Mohamed bin Zayed University of Artificial Intelligence, the world’s first AI-focused university, named for the current UAE president.

    On the roof of the Hard Rock Hotel…

    leading to a “conclusion” pretty typical of access journalism:

    It struck me that both might be correct: that many AI developers are thinking about the technology’s most tangible problems while public conversations about AI—including those among the most prominent developers themselves—are dominated by imagined ones.

    what if the critics and the people they’re criticizing are both correct? I am a very smart person who gets paid to write for The Atlantic.

    • Powderhorn@beehaw.org (OP) · 1 point · 19 hours ago

      I sort of glossed over the access. One expects that from longform, so it felt like it came with the territory.

    • todotoro@midwest.social · 5 points · 2 days ago

      Great link and good point. Subbed, ty for sharing. (I’ll believe what a NetNavi says about AI any day.)

  • kbal@fedia.io · 13 points · 2 days ago

    The AI bubble is so ridiculously huge at this point that I think we’re all living inside the AI bubble.