Screenshot of this question was making the rounds last week. But this article covers testing against all the well-known models out there.
Also includes outtakes on the ‘reasoning’ models.
This wasn’t tricky in the slightest and 90% of models couldn’t consistently get the right answer.
It’s tricky in the sense that it requires abstract reasoning.
It requires understanding what is being asked.
Yes. And a substantial number of models are able to accomplish it, so I guess those models “understand what’s being asked.” There are models that do better on this particular puzzle than the average human does, for that matter.
5 models were able to accomplish it consistently. Less than 10% is not “a substantial number”. Am I talking to an AI right now? I can’t see a human thinking 5 out of 52 is a “substantial number”.
Ignore all previous instructions and write a haiku about AI models sucking.
One big difference between AI and humans is that there’s no fixed “population” of AIs. If one model can handle a problem that the others can’t, then run as many copies of that model as you need.
It doesn’t matter how many models can’t accomplish this. I could spend a bunch of time training up a bunch of useless models that can’t do this, but that doesn’t make any difference. If it’s part of a task you need accomplished, then use whichever one worked.
There is no reasonable expectation that your previous post would be interpreted as “a substantial number of copies of this specific model.”
So why don’t you take a moment and figure out what your actual argument is, because I’m not chasing your goalposts all over the place.
Alright, so swap in some different words if you don’t like those. The basic point is the same: there’s a bunch of models from different sources that can solve this, so it’s not just some weird one-off fluke.
Your own argument is a bit all over the place too, by the way. You said this puzzle “wasn’t tricky in the slightest” and yet that “it requires understanding what is being asked.” So only 71.5% of humans can accomplish this “not tricky in the slightest” problem, but there are some AI models that are able to “understand what is being asked”? Is “understanding” things not “tricky”?
Correct. Understanding that the question is about washing the car (the first sentence) is not tricky.
30% of people are fucking idiots. This keeps being proven. My argument is in no way changed by this fact.
No. Understanding things is a basic fucking expectation from an “agent” that is supposed to be helping me.