(Spicy sauce: oglaf.com/gifted) And with headlines like this, why the fuck do I care about any of the news coming out? This is so fucking ridiculous and stupid. Like, come on. What the fuck are we doing in 2025?
Haywire? That all sounds legit to me!
Why should we care about an AI tailored to that one person whom many consider a fascist?
Negative publicity is also publicity. Please, let's just ignore this stupidity instead.
I guess every once in a while, even Grok spews out some truth.
A broken clock is right twice a day. A blind squirrel occasionally finds a nut.
How do you know the claims are false? Maybe the AI knows more than we do.
Humanity invented the assembly line and the first thing we did was have a huge war with the rest of humanity.
Humanity invented the atomic bomb, and the first thing we did was drop it on humanity, twice.
Humanity invented the Internet, and the first thing we did was figure out how to censor humanity.
Now humanity has invented AI, a queryable sum of all human knowledge, and the first thing we do is try to manipulate humanity with it.
I don’t really know about the assembly line or the atomic bomb, but with the internet and AI, the first thing we did was porn.
sigh yes… and also porn. So, so much porn.
You sigh, but… Porn is the choice of the bonobo, not the chimpanzee, no?
Besides, the atomic bomb isn’t really a fair comparison - we set up a whole crash program in wartime to figure out how to make a big weapon, of course we used it as a weapon first!
I mean yeah. The example given was an atomic bomb. What else do you use a bomb for??
Well, theoretically you can use assembly lines for making dildos.
Even though Grok’s manipulation is blatantly obvious, I don’t believe most people will come to realize that those who control LLMs will naturally use this power to pursue their interests.
They will continue to use ChatGPT and so on uncritically and take everything at face value because it’s so nice and easy, overlooking or ignoring that their opinions, even their reality, are being manipulated by a few influential people.
Other companies are more subtle about it, but from OpenAI to MS, Google, and Anthropic, all cloud models are specifically designed to control people’s opinions—they are not objective, but the majority of users do not question them as they should, and that is what makes them so dangerous.
It’s why I trust my random unauditable Chinese matrix soup over my random unauditable American matrix soup, frankly.
Trusting any of that shit is the problem.
You mean Deepseek on a local device?
Most aren’t really running Deepseek locally. What ollama advertises (and basically lies about) are the now-obsolete Qwen 2.5 distillations.
…I mean, some are, but it’s exclusively lunatics with EPYC homelab servers, heh. And they are not using ollama.
Thx for clarifying.
I once tried a community version from huggingface (distilled), which worked quite well even on modest hardware. But that was a while ago. Unfortunately, I haven’t had much time to look into this stuff lately, but I wanted to check that again at some point.
naw, I mean more that the kind of people who would uncritically take everything a chatbot says at face value are probably better off in ChatGPT’s little curated garden anyway. Cause people like that are going to immediately get grifted by whatever comes along first no matter what, and a lot of those grifts are a lot more dangerous to the rest of us than a bot that won’t talk great replacement with you.
Ahh, thank you—I had misunderstood that, since Deepseek is (more or less) an open-source LLM from China that can also be run and fine-tuned on your own hardware.
Do you have a cluster with 10 A100s lying around? Because that’s what it takes to run Deepseek. It is open source, but it is far from accessible to run on your own hardware.
That’s not strictly true.
I have a Ryzen 7800 gaming desktop, an RTX 3090, and 128GB DDR5. Nothing that unreasonable. And I can run the full GLM 4.6 with quite acceptable token divergence compared to the unquantized model, see: https://huggingface.co/Downtown-Case/GLM-4.6-128GB-RAM-IK-GGUF
If I had an EPYC/Threadripper homelab, I could run Deepseek the same way.
Yes, that’s true. It is resource-intensive, but unlike other capable LLMs, it is somewhat possible—not for most private individuals due to the requirements, but for companies with the necessary budget.
They’re overestimating the costs. 4x H100 and 512GB DDR4 will run the full DeepSeek-R1 model, that’s about $100k of GPU and $7k of RAM. It’s not something you’re going to have in your homelab (for a few years at least) but it’s well within the budget of a hobbyist group or moderately sized local business.
Since it’s an open-weights model, people have created quantized versions of it. Quantization keeps the same number of parameters but stores each one in fewer bits, which makes the RAM requirements a lot lower.
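To put rough numbers on that, here’s a back-of-the-envelope sketch in Python (671B is DeepSeek-R1’s published total parameter count; this ignores KV cache and runtime overhead, so treat it as a floor):

```python
# Approximate memory needed just to hold model weights at various quantization levels.
# Real usage is higher: KV cache, activations, and runtime buffers add on top.

PARAMS = 671e9  # DeepSeek-R1 total parameters (MoE; only ~37B are active per token)

for name, bits in [("FP16", 16), ("FP8", 8), ("4-bit", 4), ("2-bit", 2)]:
    gib = PARAMS * bits / 8 / 2**30  # bytes per weight = bits / 8
    print(f"{name:>5}: ~{gib:,.0f} GiB of weights")
```

At FP8 that works out to roughly 625 GiB, which is why the 4x H100 plus 512GB of RAM figure above is plausible; a 4-bit quant halves it again.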
You can run quantized versions of DeepSeek-R1 locally. I’m running deepseek-r1-0528-qwen3-8b on a machine with an NVIDIA 3080 12GB and 64GB RAM. Unless you pay for an AI service and are using their flagship models, it’s pretty indistinguishable from the full model.
If you’re coding or doing other tasks that push AI it’ll stumble more often, but for a ‘ChatGPT’ style interaction you couldn’t tell the difference between it and ChatGPT.
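For anyone who wants to try that setup, here’s a minimal sketch using llama-cpp-python; the GGUF filename is hypothetical, so substitute whatever quantization of the distill fits your VRAM and RAM:

```python
# Minimal local inference with llama-cpp-python
# (pip install llama-cpp-python; build with CUDA enabled for GPU offload).
from llama_cpp import Llama

llm = Llama(
    model_path="deepseek-r1-0528-qwen3-8b-Q4_K_M.gguf",  # hypothetical filename; use any GGUF quant you downloaded
    n_gpu_layers=-1,  # offload every layer to the GPU; reduce this if VRAM runs out
    n_ctx=8192,       # context window; larger values cost more memory
)

resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what quantization does to an LLM."}],
    max_tokens=256,
)
print(resp["choices"][0]["message"]["content"])
```

With an 8B model at 4-bit, the weights are only around 5GB, so it fits comfortably on the 12GB card mentioned above.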
There’s huge risk here, but I don’t think most are designed to control people’s opinions. I think most are chasing the cheapest option, and it’s expensive to have people upset about racist content, so they try to train around that, sometimes overcorrecting to the point of black Nazi images, etc.
But yeah, it is a power that will get abused by more than just Grok.
I use various AI models and I repeatedly notice that certain information is withheld or misrepresented, even though it is freely available in abundance and is therefore part of the training data.
I don’t think this is a coincidence, especially since the operators of all cloud LLMs are so business-minded.
What do you find is being suppressed?
For example, objective information about Israel’s actions in Gaza. The International Criminal Court issued arrest warrants against leading members of the government a long time ago, and the UN OHCHR classifies the actions of the State of Israel as genocide. However, these facts are by no means presented as clearly as would be appropriate given the importance of these institutions. Instead, when asked whether Israel is committing genocide, one receives vague, meaningless answers. Only when specifically asked whether numerous reputable institutions actually classify Israel’s actions as genocide do most LLMs reveal that much, if not all, evidence points to this being the case. In my opinion, this is a deliberate method of obscuring reality, as the vast majority of users will not or cannot ask questions if they are unaware of the UN OHCHR’s assessment or do not know that arrest warrants have been issued against leading members of the Israeli government on suspicion of war crimes (many other reputable institutions have come to the same conclusion as the UN OHCHR and the International Criminal Court).
Another example: if you ask whether it is legally permissible to describe Donald Trump as a rapist, you will be told that this is defamation. However, a judge in the Carroll case has explicitly stated that this description applies to Trump – so it is in fact legally permissible to describe him as such. Again, this information is only available upon explicit request, if at all. This also distorts reality for people who are not yet informed. However, since many people initially seek information from LLMs, this leads to them being misinformed because they lack the background knowledge to ask explicit follow-up questions when given misleading answers.
Given the influence of both Israel and the US president, I cannot help but suspect that there is an intention behind this.
…our governing hierarchy is just a long list of who’s blowing who, isn’t it?
it’s a snake blowing its own tail
I’m convinced the Amish may have the right idea.

Ted Kaczynski wasn’t as crazy as we all thought
No no. He was.
I mean, aside from the whole bombing thing what did he do wrong?
later
Okay, okay… But besides the bombing, the sabotage, the vandalism, the harassment, and the animal cruelty, what did he really do wrong?
I mean, have you seen his dwelling? It was a mess!
Jordan Peterson would not approve…of the cleanliness of his living quarters.
(The rest of it he’d probably give a 👍)
Incest, rape, and animal abuse while dodging taxes?
I’m going with the Luddites.
Fully agree with the class struggle aspect here.
Weavers destroying industrial looms. How would that translate to today?
Datacenters are the modern industrial looms if we’re using that metaphor.
They’re the machines that create profit for people who are unconcerned with the damage they do to the population.
Aren’t datacenters more like the factory halls for the looms?
And in any case, who is destroying them (the weavers)?
Or: when tech is in the hands of a select few vile men, it becomes a net negative. It definitely is not for the public good.
Software Engineer here: computers were a mistake.
Computers are great, it’s capitalism that is the issue.
We as a country are not ready for that conversation, yet.
Give it a year or two of food shortages and unaffordable health care
Discovering fire was a mistake. Making rocks think was a disaster.
Rocks can’t think.
We made talking rocks. That’s worse.
Not to oversimplify it, but first we had to make the rocks very thin, then put lightning inside them
Specifically on the tech thing then. They have a lot of things wrong lmao
Yes, just the tech part.
Apparently there are over 400,000 in the US now. Time to join!
“I’m a Grok guy,” said Vance. “I think it’s the best. It’s also the least woke!”
What a fucking idiot. Really wants everyone to go back to sleep
Finally, it speaks the truth!
This headline makes it sound like someone has doubts about Musk’s pee drinking skills. That seems a little judgemental. Pretty sure Grok knows his master well.
Pee drinking is somewhat impressive, but can he eat shit and die?
Of course he can’t, he’s too much of a loser to eat shit and die. He couldn’t manage it.
Eat shit? Maybe. But eat shit and die? No sir. No way. I won’t believe it unless I see it for myself.
Didn’t Grok also claim he’s fitter than LeBron and smarter than da Vinci? Perhaps being an Olympic piss drinker has been the key to his success.
Yes, different article, also on Not The Onion.
Didn’t read, but I suspect the LLM was prompted to reply in this way. Still funny, considering it is known that Musk tries to tweak it in his favor. (edit: this comment puts it better)
Not haywire. It was specifically prompted to get that output.
It was asked leading questions about embarrassing topics, but the over-the-top, fawning praise for Musk was baked into the responses without being specifically requested in the prompts.
Yes, just some people figuring out that Grok was steered toward ass-kissing Musk no matter what, and exploited that for funny output. So the takeaways are:
- Musk is such a loser he insisted on Grok kissing his ass
- LLMs remain stupid and have no idea what they are saying
I think this is the main story. I don’t think it’s new info, but it confirms the issue persists: this LLM is so heavily trained to fawn over Musk that it applies no context and makes no attempt to find the truth.
Which is sad.
No current AI understands context; they’re just glorified word predictors!
Grok is Harrison Bergeron and Musk is Diana Glampers
It’s really just an affirmation machine for tech bros