Full image and other similar screenshots



Makes sense, X always boosts racists.
Anti-genocide = anti-racist
Pro-Israel = racist
If true, I’d expect Furkan to be upset, but I suppose he just respects the technology behind the algo 🤷
Furkan is a legend in the open source community.
I mean, we didn’t need grok to know this.
“this is not a hallucination” source: hallucinator
Just switch to Mastodon and start using a local SLM with a search engine MCP. It generates CP by stripping underage boys and girls anyway.
I got banned from mastodon.social for being mildly critical of Israel. Are there any instances that are open minded?
Any good Mastodon instances which don’t severely limit political content (and have actual content)?
raphus.social.
Or host your own via masto.host.
Trash

It’s heavily censored NeoLefteralism after all. This is why people keep using Twitter

Yeah, don’t expect an anarchist instance to be tankie-friendly.
Know of any Mastodon instances which don’t believe the genocidal hegemony will be beaten by holding hands?
I asked for an instance without censorship not one which fits your ideology.
Nice, I’ll check it out. I see Qudsnen is posting on Mastodon now, that’s pretty cool. Maybe it’s getting better there.
Likely just hallucinations. For example, there is no way they would store a confidence score as a string
It’s also possible that it retrieved the data from whatever sources it has access to (i.e. as tool calls) and then constructed the JSON based on its own schema. That is, the string value may not represent how the underlying data is actually stored, which wouldn’t be unusual or unexpected with LLMs.
But it could definitely also just be hallucinations. I’m not certain, but since the schema looks consistent across these screenshots, it does seem like the schema may be pre-defined. (Even if that could be verified, though, it wouldn’t completely rule out hallucinations, since Grok could be hallucinating values into a pre-defined schema.)
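Rough sketch of what I mean by the model re-serializing tool data into its own schema. All the field names and values here are made up for illustration, not Grok’s actual internals:

```python
import json

# Hypothetical tool result: the backend could store properly typed values...
tool_result = {"label": "account_flag", "confidence": 0.87}

# ...while the model, asked to show its "moderation record", re-renders that
# data into whatever JSON schema it prefers, coercing the numeric score into
# a string label along the way. The string in the screenshot would then say
# nothing about how the data is stored upstream.
model_rendered = {
    "classification": tool_result["label"],
    "confidence": "high" if tool_result["confidence"] > 0.8 else "medium",
}

print(json.dumps(model_rendered))
```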
Yea, the only way I can see confidence being stored as a string would be if the key was meant for a GUI management interface that didn’t hardcode possible values (think for private investors or untrained engineers, for cosmetic reasons). In an actual system this would almost always be a number or boolean, not a string.
That being said, it’s entirely possible that it’s also using an LLM to process the result, which would mean they could have something like an “if it’s rated X or higher, do Y” type deal, where the LLM would then parse the string and respond whether it is or not. But that would be so inefficient; I would hope they wouldn’t layer it like that.
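To show why the string version is clunkier (labels and thresholds here are invented for the example): an ordinal string label needs an explicit ordering (or, worse, another LLM pass) to answer “rated X or higher”, while a plain number compares directly:

```python
# Hypothetical severity labels; a real system's actual values are unknown.
SEVERITY_ORDER = ["low", "medium", "high"]

def should_derank_label(label: str) -> bool:
    # "rated medium or higher" with strings requires mapping the label
    # back to a rank via a lookup table.
    return SEVERITY_ORDER.index(label) >= SEVERITY_ORDER.index("medium")

def should_derank_score(score: float) -> bool:
    # A numeric confidence compares directly, no lookup needed.
    return score >= 0.5

print(should_derank_label("high"), should_derank_score(0.3))
```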
If it is a hallucination, which it very well could be, it still means the model has learned this bias somewhere, indicating Grok has either been programmed to derank Palestine content or has learned it by itself (less likely).
It’s difficult to conceive of the AI making this up for no reason, and doing it so consistently across multiple accounts when asked the same question.