We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which word – or fragment of a word – is most likely to come next in the sequence, based on the data it’s been trained on.
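
A minimal toy sketch of what “guessing the next word from training data” means: a word-level bigram model built over a made-up one-line corpus. Real LLMs use neural networks trained on billions of tokens rather than raw word counts, but the generation loop is the same idea – sample the next token from estimated probabilities, append it, repeat.

```python
# Toy illustration only: a bigram "language model" over an invented corpus.
# Each generated word is just a weighted guess based on what followed the
# previous word in the training text.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count which word follows which in the training data.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def next_word(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    counts = follows[word]
    if not counts:  # word never appeared mid-corpus; fall back to any word
        return random.choice(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation: every step is probability, not comprehension.
word, output = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the mat and"
```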

This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

https://archive.ph/Fapar

  • Zaleramancer@beehaw.org

    I’m not sure why so many people begin this argument on solid ground and then hurl themselves off into a void of semantics and assertions without any way of verification.

    Saying, “Oh, it’s not intelligent because it doesn’t have senses,” shifts your argument to proving that senses are a prerequisite for intelligence in the first place.

    The problem is that an LLM isn’t made to do cognition. It’s not made for analysis. It’s made to generate coherent human speech. It’s an incredible tool for doing that! Simply astounding, and an excellent example of how powerfully a trained model can adapt to a task.

    It’s remarkable that we managed to build a probabilistic software tool that generates natural-language responses so well that we find it difficult to distinguish them from real human ones.

    …but it’s also an illusion with regards to consciousness and comprehension. An LLM can’t understand things for the same reason your toaster can’t heat up your can of soup. It’s not for that, but it presents an excellent illusion of doing so. Companies that are making these tools benefit from the fact that we anthropomorphize things, allowing them to straight up lie about what their programs can do because it takes real work to prove they can’t.

    Average customers will engage with an LLM as if it were doing a Google search, reading the various articles and then summarizing them, even though it’s actually just completing the prompt you provided. The proper way to respond to a question is with an answer, so they always will, unless a hard-coded limit overrides that. There will never be a way to make an LLM that won’t create fictitious answers to questions, because they can’t tell the difference between truth and fantasy. It’s all just part of their training data on how to respond to people.

    I’ve gotten LLMs to invent books, authors and citations when asking them to discuss historical topics with me. That’s not a sign of awareness; it’s proof that the model is doing what it’s intended to do – which is the problem, because it is being marketed as something that could replace search engines and online research.
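
    To make the “prompt completion” point concrete, here is a minimal sketch using the Hugging Face transformers library with the small public GPT-2 checkpoint (assumed to be installed and downloadable; the prompt is invented for illustration). The model looks nothing up – it only continues the text with whatever is statistically plausible:

    ```python
    # Minimal sketch: the model merely continues the prompt; it has no notion
    # of whether the continuation is true. Requires `pip install transformers torch`
    # and downloads the GPT-2 weights on first run.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    prompt = "The definitive book on this period of history is"
    result = generator(prompt, max_new_tokens=25, do_sample=True)

    # Prints a fluent, confident-sounding continuation (often a plausible-looking
    # title and author), regardless of whether any such book exists.
    print(result[0]["generated_text"])
    ```

    Whether the title and author it produces actually exist is irrelevant to the sampling process, which is why citations come out fabricated.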