queermunist she/her

/u/outwrangle before everything went to shit in 2020, /u/emma_lazarus for a while after that, now I’m all queermunist!

  • 6 Posts
  • 3.15K Comments
Joined 2 years ago
Cake day: July 10th, 2023

  • China is most definitely in a position to stop this

    I know? I literally said “China could stop this.”

    That doesn’t tell us why they haven’t, though. The only explanation that makes sense is their usual abundance (or excess) of caution: any action taken against Israel, even something as limited as a drone blacklist, would be read by the West as an attack, and China is keen to avoid direct confrontation with the West for as long as possible.

    This isn’t a justification, just an explanation.


  • This is a consequence of not cutting off trade with Israel. These are commercial drones from independent sellers that Israel is converting to military use (so it’s not like China is selling them weapons), but China could stop this.

    China has been reducing trade with Israel since this phase of the genocide began, but it has been a slow process. I suspect that China is worried about Western retaliation, which doesn’t really excuse trading with Israel but does help to explain it.

  • What is the reason you think philosophy of the mind exists as a field of study?

    In part, so we don’t assign intelligence to mindless, unaware, unthinking things like slime mold. It keeps our definitions clear and useful, so we can communicate about and understand what intelligence even is.

    What you’re doing actually creates an unclear and useless definition, one that makes communication harder and spreads misunderstanding. Your definition of intelligence, which is the one the AI companies use, has made people more confused than ever about “intelligence,” and it only serves those companies’ interest in generating hype and attracting investor cash.

  • My understanding is that the reason LLMs struggle with math and logic problems is that those problems have certain answers, not probabilistic ones. That seems pretty fundamentally different from humans! In fact, we have a tendency to assign too much certainty to things that are actually probabilistic, which leads to its own reasoning errors. But we can also correctly identify actual truth, prove it through induction and deduction, and then hold onto that truth forever and use it to learn even more things.

    We certainly do probabilistic reasoning, but we also do axiomatic reasoning; i.e., we are more than probability engines.
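
    To make the contrast concrete, here is a toy Python sketch (purely illustrative; the token weights are invented, and this is not how any real model works internally). A probability engine samples an answer from a distribution, while axiomatic reasoning derives the same certain answer every time:

        import random

        # A "probability engine" only samples the next token from a learned
        # distribution, so even "2 + 2 =" gets a weighted guess.
        # These weights are made up for illustration.
        next_token_probs = {"4": 0.90, "5": 0.06, "22": 0.04}

        def sampled_answer(probs: dict[str, float]) -> str:
            tokens, weights = zip(*probs.items())
            return random.choices(tokens, weights=weights, k=1)[0]

        # Axiomatic reasoning derives a certain answer from rules instead.
        def derived_answer(a: int, b: int) -> int:
            return a + b  # follows from the rules of arithmetic; no chance involved

        print("sampled:", sampled_answer(next_token_probs))  # usually "4", sometimes not
        print("derived:", derived_answer(2, 2))              # always 4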



  • So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

    What? No.

    Chatbots can’t think because they literally aren’t designed to think. If you somehow gave a chatbot a body, it would be just as mindless, because it’s still just a probability engine.