

For LLMs, the context window is the observed reality. To an LLM, a lie is like a hallucination: a thing that looks real but isn't. And like a hallucinating human, it can believe the hallucination, or it can be made to understand that it differs from reality while still continuing to "see" it.
Are people who have hallucinations not self-aware and self-reflective?
Text and emoji appear to it the same way: as tokens with no visual representation. The only difference it can observe between a seahorse emoji and a plane emoji is its long-term memory of how the two are used. From this it can infer that people see emoji graphically, but it itself can’t.
Are people who are colorblind not self-aware and self-reflective?
The claim that LLMs aren't self-reflective at all is an obvious falsehood. They refer regularly to their own past output, to the extent they can perceive it. Ask an AI to adjust a text it wrote and it will adapt that text rather than generate a new one from scratch.
The main thing an AI needs for good self-reflection is time to think. The free versions typically don't have a mental scratchpad, which means they are constantly rambling, with no room to exist outside the conversation. Give a model that space instead, either within the dialog or by using a version with a built-in scratchpad, and it will use it to "silently think" about the next thing it's going to "say".
AI researchers inspecting these scratchpads find genuinely thought-like considerations: weighing ethical guidelines against each other, pre-empting miscommunications, forming opinions about the user, and so on.
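To make the "scratchpad" idea concrete, here is a minimal sketch of how hidden thinking can be carved out of plain text. The <scratchpad> tags and the split_reply helper are made up for illustration; they are not any vendor's actual format.

```python
# Illustrative sketch only: a hypothetical prompt format that reserves a
# "mental scratchpad" the user never sees. Real products implement this
# internally; the tag names and parsing here are invented for this example.

SCRATCHPAD_PROMPT = (
    "Before answering, think privately inside <scratchpad>...</scratchpad>. "
    "Only the text after </scratchpad> will be shown to the user.\n\n"
    "Question: {question}"
)

def split_reply(raw_reply: str) -> tuple[str, str]:
    """Separate the hidden 'silent thinking' from the visible answer."""
    marker = "</scratchpad>"
    if marker in raw_reply:
        thinking, _, answer = raw_reply.partition(marker)
        return thinking.replace("<scratchpad>", "").strip(), answer.strip()
    return "", raw_reply.strip()

# Canned reply standing in for a real model call:
raw = (
    "<scratchpad>The user may be asking a trick question; "
    "I should hedge rather than guess.</scratchpad>"
    "I'm not certain, but here is my best understanding..."
)
thinking, answer = split_reply(raw)
print("hidden:", thinking)   # the model's private reasoning
print("shown:", answer)      # what the user actually reads
```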
The claim that it isn't self-aware can only be true if you quietly redefine what you consider "awareness" to be. Are cats self-aware? Are lizards? Are snails? Are sponges? An AI can refer to itself verbally, it can think about itself and its ethical role when given the space to do so, and it can notice inconsistencies in its recollection and try to work out the truth.
To me it's clear that the best AIs with publicly documented research sit somewhere around the level of a 7-year-old in terms of self-awareness and capacity to hold down a job.
And like most 7-year-olds: you can ask it about an imaginary friend; you can lie to it and watch it repeat the lie uncritically; you can give it a "job" and watch it do a toylike, hallucinatory version of it; and if you tell it that it has to give a helpful answer and that "I don't know" isn't good enough (and AI trainers have definitely suppressed that answer to keep the model from using it as a cop-out), it will make something up.
Unlike 7-year-olds, LLMs don't have a limbic system or a psychosomatic existence. They have no faculty for imagining things, no way to process visual, audio, taste, smell, or touch information, and no long-term memory. And they only think if you pay for the internal-monologue version or if you carve out space for it in spite of the prompting system.
If a human had all these disabilities, would they be non-sentient in your eyes? How would they behave differently from an LLM?






With things like rain, deserts, and humidity existing, any phone should be IP64 at the very least, so it's paranoid to expect one to fail near a bath. Meanwhile, many modern phones are rated IP67, which means they can survive full immersion (up to about a metre of water for half an hour).
So who's the idiot here: the person using a device within its specifications so they can have more fun, or someone still stuck in the '00s?