

Fuck yes! It’s about time.
/u/outwrangle before everything went to shit in 2020, /u/emma_lazarus for a while after that, now I’m all queermunist!
Mostly connections and luck based on the trash I hear that’s somehow popular.
China is most definitely in a position to stop this
I know? I literally said “China could stop this.”
That doesn’t tell us why they haven’t, though. The only thing that makes sense is their usual abundance (or excess) of caution. Any actions taken against Israel will be seen as an attack by the West, even something like a drone blacklist, and they’re keen to avoid direct confrontation with the West as long as possible.
This isn’t a justification, just an explanation.
This is a consequence of not cutting off trade with Israel. These are commercial drones from independent sellers that Israel is converting to military use (so it’s not like China is selling them weapons) but China could stop this.
China has been reducing trade with Israel since this phase of the genocide began, but it has been a slow process. I suspect that China is worried about Western retaliation, which doesn’t really excuse trading with Israel but does help to explain it.
I don’t get it. Why do you think this is heavily downvoted?
They aren’t necessarily trying to make us support Israel, they just don’t want us to support Palestinian resistance. They like the status quo of people condemning Hamas.
The recent Medicaid cut to make a few hundred billionaire friends happier/richer was terrible propaganda. Doesn’t that contradict your point?
That’s clearly a siren or some kind of merfolk on the left, I think the sign is telling you this is their territory.
Imagine living in a world without being voluntold to do overtime every week.
He doesn’t kill political opponents just to watch them bleed? My point is that he does things for reasons.
The fact that Westoids can only imagine Russians as subhuman violent idiots is fascinating.
He’s not a demon that craves blood for no reason. There are geostrategic goals at stake.
You say “everyone” but it’s still just other men.
What is the reason you think philosophy of mind exists as a field of study?
In part, it exists so we don’t assign intelligence to mindless, unaware, unthinking things like slime mold. It keeps our definitions clear and useful, so we can communicate about and understand what intelligence even is.
What you’re doing actually creates an unclear and useless definition that makes communication harder and spreads misunderstanding. Your definition of intelligence, which is the one the AI companies use, has made people more confused than ever about “intelligence,” and it only serves the companies’ interest in generating hype and attracting investor cash.
Let me rephrase. If your definition of intelligence includes slime mold then the term is not very useful.
There’s a reason philosophy of mind exists as a field of study. If we just assign intelligence to anything that can solve problems, which is what you seem to be doing, we are forced to assign intelligence to things which clearly don’t have minds and aren’t aware and can’t think. That’s a problem.
If your definition of intelligence doesn’t include awareness it’s not very useful.
My understanding is that the reason LLMs struggle with solving math and logic problems is that those have certain answers, not probabilistic ones. That seems pretty fundamentally different from humans! In fact, we have a tendency to assign too much certainty to things which are actually probabilistic, which leads to its own reasoning errors. But we can also correctly identify actual truth, prove it through induction and deduction, and then hold onto that truth forever and use it to learn even more things.
We certainly do probabilistic reasoning, but we also do axiomatic reasoning, i.e. we are more than probability engines.
Slime mold can solve mazes.
So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.
What? No.
Chatbots can’t think because they literally aren’t designed to think. If you somehow gave a chatbot a body it would be just as mindless because it’s just a probability engine.
Japan in BRICS when???