Screenshot of this question was making the rounds last week. But this article covers testing against all the well-known models out there.

Also includes outtakes on the ‘reasoning’ models.

  • imetators@lemmy.dbzer0.com · 23 points · 14 hours ago

    Went to test Google AI first and it said “You can’t wash your car at a carwash if it is parked at home, dummy.”

    ChatGPT and DeepSeek say it is dumb to drive because it is fuel inefficient.

    I am honestly surprised that Google AI got it right.

    • locahosr443@lemmy.world · 1 point · 5 hours ago

      I’ve been feeding a bunch of documents I wrote into Gemini over the last week to spit out some validation scripts I couldn’t be arsed to write. It’s done a surprisingly comprehensive job, and when wrong it has been nudged right with just a little abuse…

      I’m still all “fuck this shit” and can’t wait for the pop, but for comparison OpenAI was utterly brain-dead given the same task. I think I actually made the model worse, it was so useless.

    • rumba@lemmy.zip · 73 points · 14 hours ago

      They probably added a system guardrail as soon as they heard about this test. It’s been going around for a while now :)

      • merc@sh.itjust.works · 2 points · 4 hours ago

        I’m pretty sure Google’s AI is fed by the same spider that goes out and finds every new or changed web page (or a variant of that).

        As soon as someone writes an article about how AI gets something wrong and provides a solution, that solution is now in the AI’s training data.

        OTOH, that means it’s probably also ingesting a lot of AI generated slop, which causes its own set of problems.

      • imetators@lemmy.dbzer0.com · 3 points · 13 hours ago

        The article mentions that Gemini 2.0 Flash Lite, Gemini 3 Flash and Gemini 3 Pro passed the test, and all three did it 10 out of 10 times without being wrong. Even Gemini 2.5 shares the highest score in the “below 6 right answers” category. Guess Gemini is the closest to “intelligence” of the bunch.

        • timestatic@feddit.org · 2 points · 6 hours ago

          I mean, if they patch in answers to specific reasoning tests (like the strawberry one), that doesn’t actually make the reasoning better tho. It just optimizes for benchmarks.