In early June, shortly after the beginning of the Atlantic hurricane season, Google unveiled a new model designed specifically to forecast the tracks and intensity of tropical cyclones.

Part of the Google DeepMind suite of AI-based weather research models, the “Weather Lab” model for cyclones was a bit of an unknown for meteorologists at its launch. In a blog post at the time, Google said its new model, trained on a vast dataset that reconstructed past weather and a specialized database containing key information about hurricane tracks, intensity, and size, had performed well during pre-launch testing.

“Internal testing shows that our model’s predictions for cyclone track and intensity are as accurate as, and often more accurate than, current physics-based methods,” the company said.

Google said it would partner with the National Hurricane Center, an arm of the National Oceanic and Atmospheric Administration that has provided credible forecasts for decades, to assess the performance of its Weather Lab model in the Atlantic and East Pacific basins.

  • t3rmit3@beehaw.org · 6 hours ago

    How precisely does human thought operate that is distinct from pattern-recognition, inference, and pattern output? I ask this rhetorically, because we don’t actually have a proven model of how our own intelligence functions.

    I agree that obviously neural networks are not AGI (which requires consciousness), but I think the visceral “this isn’t intelligence” reactions I see tend to be more about the belief that human intelligence is special or unique. We know now that humans aren’t really that distinct from other animals in our ability to think, even ones that we would normally assume are “reaction-driven” like insects.

    Unless we can prove that we ourselves are not just really, really complex calculators that do pattern-matching, inference, and reproduction, we can’t actually assert that machine learning is not a rudimentary form of intelligence.