• tal@lemmy.today

    Wooldridge sees positives in the kind of AI depicted in the early years of Star Trek. In one 1968 episode, Day of the Dove, Mr Spock quizzes the Enterprise’s computer only to be told in a distinctly non-human voice that it has insufficient data to answer. “That’s not what we get. We get an overconfident AI that says: yes, here’s the answer,” he said. “Maybe we need AIs to talk to us in the voice of the Star Trek computer. You would never believe it was a human being.”

    Hmm. That’s probably a pretty straightforward modification for existing LLMs, at least at the token level.

    You can obtain token probabilities, so you can give some out-of-band estimate of confidence in a response, down to the token level. You don’t really need to change anything for that, just expose some data.
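
    A minimal sketch of what exposing that data could look like, assuming a local Hugging Face causal LM (“gpt2” here purely as a stand-in) and greedy decoding:

    ```python
    # Hypothetical sketch: report the probability the model assigned to each
    # token it generated, as a crude per-token confidence signal.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("The capital of Australia is", return_tensors="pt")

    with torch.no_grad():
        out = model.generate(
            **inputs,
            max_new_tokens=5,
            do_sample=False,
            output_scores=True,          # keep the per-step logits
            return_dict_in_generate=True,
        )

    # out.scores holds one logit tensor per generated token; softmax turns it
    # into the probability of the token that was actually emitted.
    new_tokens = out.sequences[0, inputs["input_ids"].shape[1]:]
    for tok_id, step_logits in zip(new_tokens, out.scores):
        tok_id = int(tok_id)
        prob = torch.softmax(step_logits[0], dim=-1)[tok_id].item()
        print(f"{tokenizer.decode(tok_id)!r}: p = {prob:.3f}")
    ```

    A low per-token probability doesn’t map cleanly onto “this claim is wrong,” but it’s the kind of data that already exists and just isn’t shown to the user.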

    And you could make the AI aware of its own neural net’s confidence level, feed the confidence back into the neural net for subsequent tokens, and see if you can get it to take that information into account.

    https://en.wikipedia.org/wiki/Recurrent_neural_network

    In artificial neural networks, recurrent neural networks (RNNs) are designed for processing sequential data, such as text, speech, and time series,[1] where the order of elements is important. Unlike feedforward neural networks, which process inputs independently, RNNs utilize recurrent connections, where the output of a neuron at one time step is fed back as input to the network at the next time step. This enables RNNs to capture temporal dependencies and patterns within sequences.
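
    For the “fed back as input” part of that excerpt, here’s a toy recurrence step in NumPy (random, untrained weights, purely illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    W_x = rng.normal(size=(8, 4))   # input -> hidden weights
    W_h = rng.normal(size=(8, 8))   # previous hidden state -> hidden weights

    h = np.zeros(8)                 # hidden state carried across time steps
    for x_t in np.eye(4):           # a dummy sequence of four one-hot inputs
        # The recurrence: the new hidden state depends on the previous one,
        # which is how the order of the sequence gets captured.
        h = np.tanh(W_x @ x_t + W_h @ h)
        print(np.round(h[:3], 3))   # peek at the first few hidden units
    ```

    In principle, a confidence value could be appended to the input at each step, which is one way the “feed the confidence back in for subsequent tokens” idea could be wired up.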

  • friend_of_satan@lemmy.world

    Tangent: that pic reminds me of the terrorizing tit in Everything You Always Wanted to Know About Sex (*But Were Afraid to Ask)

  • XLE@piefed.social

    “It’s the classic technology scenario,” he said. “You’ve got a technology that’s very, very promising, but not as rigorously tested as you would like it to be, and the commercial pressure behind it is unbearable.”

    Is it promising though, Michael Wooldridge? Have you recently attended any magic shows and become excited by the potential of invisibility technology?

    • Zink@programming.dev

      Oh touché, not Michael Wooldridge! The technology has created an entire segment of the economy worth many trillions of dollars based on NOTHING BUT promises! We are living in a promise-based economy!

      /s but not really

  • UnspecificGravity@piefed.social

    The difference being that the Hindenburg was a perfectly functioning rigid airship that had a lot of inherent risks due to the nature of its design.

    AI isn’t good enough at its actual job to be in this position. The risk of AI is people pretending that it works when it doesn’t. It would be as if you made a blimp, filled it with carbon dioxide, and people kept buying tickets and just sat there waiting for it to take off.

  • footprint@lemmy.world

    This would be a good comparison if all it took for the Hindenburg to explode was asking it to role-play as a ship that could explode. Conscious effort had to be expended to make that thing fail, but most models start to fail spectacularly if you use them in good faith for more than about 30 minutes.

    • lmr0x61@lemmy.ml

      That’s a good point. The precarity of AI is, as far as I’ve seen, unprecedented in human history. There simply hasn’t been anything that undergirds so much of the world economy and can fail so catastrophically in so many ways.

      I really don’t think we have a good historical analogue to illustrate the scale of the risk. The only possible exception I can think of is mutually assured destruction during the Cold War, but that hinged on only one decision by one of (arguably) two individuals at any given time, both of whom were highly incentivized not to make that decision. That, or the global climate’s collapse, but even that overlaps significantly with the bubble. With AI, compared to MAD at least, each catastrophic outcome isn’t the result of even a small set of actors, but of many unregulated companies with incentives to be reckless (making negative outcomes not only more probable but more numerous). And increasing incentives at that, as the funding starts to dry up (AI hasn’t really proven itself as a proper return on investment).

      Something—and possibly many somethings—will go horribly wrong. Some already have, like AI use by students at all levels robbing them of their education and of their actual value to the workforce, and the acceleration of climate collapse (maybe that’s the only truly analogous crisis). But it remains to be seen what goes wrong (not if), and how much worse it gets.

      But the truth is, I’m still relatively young. I’m just old enough to get a hint of the world’s workings, scale, and stakes. And in my life, nothing has seemed more like a loaded gun pointed at our heads than the AI bubble.

  • ReverendIrreverence@lemmy.world

    Except for the one person on the ground, the only people harmed in the Hindenburg disaster were the ones on board. If you’re not “on board” when the AI bubble pops and burns, I expect you will not be hurt as much as those blindly taking that ride.

    • GreenBeard@lemmy.ca

      Unfortunately, we’re not all the ones who decide whether we’re on board or not. Our employers are. We live in a world where profits are privatized and losses are socialized, so when this goes, it’s going to hurt the general public a lot more than it will ever hurt the Epstein Class.

      • entropicdrift@lemmy.sdf.org

        And if you have a retirement account with investments, you’re on board whether you like it or not. The entire US economy is hinging on AI at this point, to a deranged degree. Almost more than on oil.

  • Sims@lemmy.ml

    …but giving AI technology to Psycho Corporations that have an openly declared goal of not caring about anything but profits is not a problem. Got it…

    Jeebus, “The Guardian” is infested with no/slow-thinking child ‘journalists’…

  • RobotToaster@mander.xyz

    A disaster that caused a lot of bad publicity despite the majority (62 of the 97 people on board) surviving, and that may have been caused by sabotage?

    • XLE@piefed.social

      I appreciate the people who help make sure AI doesn’t receive an ounce of the credit it doesn’t deserve

  • BeigeAgenda@lemmy.ca

    And now we hear stories about how easy it is to hack systems with built-in LLMs. When you think about it, they are basically trained to be as helpful and forthcoming as possible, and then we give them the keys to the system!

  • Lembot_0006@programming.dev

    What? Global interest? Self-driving cars? Hindenburg? Is this professor a cat? Markov chain? The provided info is so crazy that I decided to NOT read the article.