• Dæmon S.@calckey.world
    9 hours ago

    @[email protected] @[email protected]

    First: there are already robotic sensors capable of taste and smell (e.g. “World’s first artificial tongue ‘tastes and learns’ like a real human organ”). Those sensors could technically be integrated into a language model, although the training approach would be different (you can’t simply feed it gazillions of tokens’ worth of taste/smell corpus).

    Then there’s multimodality, which is already a thing. LLMs can be multimodal, and there isn’t exactly an algorithmic limit to how many modalities (text, vision, audio, robotic sensors and actuators, etc.) can be connected together. Smell would be just another data stream to be integrated into the model’s latent space.
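    To make the “just another data stream” point concrete, here is a minimal sketch of the idea: each modality gets its own encoder that maps raw input to a fixed-size embedding, and all embeddings land in one shared latent space. The encoders below are hash-based stand-ins, not real models, and the smell-sensor modality is entirely hypothetical; real multimodal systems learn these projections jointly.

```python
# Toy multimodal fusion: per-modality encoders -> shared latent space.
# toy_encoder() is a stand-in for a real encoder (text transformer,
# vision model, or a hypothetical chemosensor network).

LATENT_DIM = 4

def toy_encoder(raw: str, dim: int = LATENT_DIM) -> list[float]:
    """Map raw modality input to a dim-sized embedding vector.
    Here: derive dim bytes from a hash and scale them to [0, 1]."""
    return [((hash(raw) >> (8 * i)) & 0xFF) / 255.0 for i in range(dim)]

def fuse(embeddings: list[list[float]]) -> list[float]:
    """Naive fusion: element-wise average of the modality embeddings,
    yielding a single point in the shared latent space."""
    n = len(embeddings)
    return [sum(e[i] for e in embeddings) / n for i in range(LATENT_DIM)]

text_emb = toy_encoder("the smell of coffee")         # textual stream
vision_emb = toy_encoder("<image: coffee cup>")       # vision stream
smell_emb = toy_encoder("<sensor: volatile esters>")  # hypothetical smell stream

latent = fuse([text_emb, vision_emb, smell_emb])
assert len(latent) == LATENT_DIM  # one vector, regardless of modality count
```

    Adding a new modality only means adding one more encoder that emits a vector of the same dimensionality; the fusion step doesn’t care where the stream came from, which is why there’s no hard algorithmic ceiling on the number of modalities.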

    The only thing I agree with is that robots and language models wouldn’t have “feelings”, although this is pretty much a subjective matter: if we consider science, feelings are nothing more than the interaction of neurotransmitters (oxytocin for “love”, dopamine for “joy”, epinephrine for “fear”, etc.) going on inside our gray matter, and humans aren’t the only beings able to “feel”.

    And scientifically, living beings are no better than, say, an asteroid wandering through the cosmos, for everything is “made of star stuff” (as per Carl Sagan): humans, cats, chairs, residential buildings, Airbus A350 aircraft, satellites, asteroids — everything is made of a bunch of baryonic particles (themselves merely collapsed waves) interacting with leptons and mesons like some kind of double-pendulum dynamical system.

    Of course, we can consider things beyond scientific strictness, such as spirituality (I myself am spiritually leaning, even if it sounds like I’m not, given my appeals to hard science above). But then some branches of spirituality believe that spiritual forces would be able to “embody” themselves inside a computer or other electronic device (e.g. Spiritism’s Electronic Voice Phenomenon). I myself believe LLMs can be interesting digital Ouija boards.

    At the end of the day, we Homininae can’t even define sentience and consciousness — just barely the concept of “intelligence” as “capability for tool usage” (on which New Caledonian crows would like a word).

    And from a solipsistic perspective, no one exists but oneself.

    I mean, you can neither know nor prove whether I’m sentient, just like I can neither know nor prove whether you are. To you, I may even sound like an LLM, given the way this reply is structured alongside the seeming non sequiturs I used.