Researchers at MIT decided to run a wild experiment by dropping 1,000 AI agents into Minecraft and giving them a simple goal: build a community. What followed feels like science fiction edging into reality. The AI didn’t just stack resources and wander around aimlessly. They organized and formed societies.
I’m sorry, but what evidence do you have that the human brain cannot possibly be modeled mathematically, like literally almost anything else?
That is not as smart a question as you want it to be. Unfortunately for you, not everything can be modeled mathematically, or, if you wish to be extremely precise about it, not everything can currently be modeled mathematically with any efficiency or accuracy, because doing so would require knowledge or resources far eclipsing what we have available. If you just want to push up your glasses and ACKSHUALLY me, then sure, it’s also possible to do anything, hurr hurr.
To even fucking PRETEND that we can model a brain right now is hilarious to me, but to equate brains to LLMs is downright moronic. Human brains are not created, trained, or used in any way similar to LLMs, no matter what anyone says, yet you are insinuating that they are somehow similar??? LLMs are a simulation of a learning algorithm, trained through brute force, and used for pattern completion. That is just not how a brain works!
And yet, in spite of the petabytes of data they fucking jam into these pieces of shit, they still can’t even draw hands correctly. They still can’t figure out the seahorse emoji. They still can’t count the three Rs in strawberry! They just repeat the things they’ve seen, and these errors have to be fixed manually. They don’t know anything. And that’s why they aren’t intelligent. They are fed data points. They produce estimations. But they do not understand what the connections between those points are. And no amount of pointing at humans will fix that.
That’s not what I said. Why the fuck would I think we can model a brain right now?
Probably because you wanted to be pedantic and unserious.
Integrated Information Theory would suggest that phi (Φ) can be used to measure the degree to which a system generates irreducible, integrated cause–effect structure. The irreducible nature of something is exactly as you postulate: it cannot possibly be modeled mathematically, because if it could, that would make it reducible to smaller parts.
You can describe the function of the human brain mathematically, of course… For example, some low-hanging fruit (sketched in code below) might be:

Define the system’s transition probabilities.
Define its cause–effect repertoires.
Define Φ mathematically.
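To make those steps a little more concrete, here’s a deliberately tiny Python sketch. It writes down the transition behavior of a two-node toy network and measures how much predictive power is lost when you cut the nodes apart. To be clear about what’s mine: the network, the cut, and the uniform-noise replacement are all illustrative choices, and the quantity computed is an effective-information-style stand-in from early IIT work, not the full Φ of modern IIT, which minimizes over all possible partitions and compares cause and effect repertoires (the PyPhi library implements the real calculus).

```python
import itertools
import math

# Toy network: two binary nodes that copy each other every tick.
#   A(t+1) = B(t),  B(t+1) = A(t)
def update(state):
    a, b = state
    return (b, a)

STATES = list(itertools.product((0, 1), repeat=2))

def whole_dist(state):
    """P(next state | current state) for the intact system.
    Deterministic here, so all mass sits on one successor."""
    nxt = update(state)
    return {s: 1.0 if s == nxt else 0.0 for s in STATES}

def cut_dist(state):
    """The same prediction after severing the A<->B connections:
    each node loses its real input, which we replace with uniform
    noise, so the cut system predicts nothing about the next state."""
    return {s: 0.25 for s in STATES}

def kl_bits(p, q):
    """KL divergence D(p || q) in bits."""
    return sum(p[s] * math.log2(p[s] / q[s]) for s in STATES if p[s] > 0)

for state in STATES:
    ei = kl_bits(whole_dist(state), cut_dist(state))
    print(f"state {state}: information lost across the cut = {ei:.1f} bits")
```

Every state prints 2.0 bits: the copy loop’s behavior is carried entirely by the connections between its parts, so cutting them destroys all of its predictive structure. That irreducibility-under-partition is the intuition Φ formalizes.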
But that’s not going to model human experience, because the experience isn’t reducible. That, instead, models something closer to the quality of experience. Human rationality is downstream of human experience, so it’s just not a fair argument to say that a tool mimicking only the downstream patterns of human experience will somehow also possess the upstream capacity for experience, or even a relatable sense of rationality at all.
I don’t think we’re going to get a deterministic explanation for human behavior, ever. Most likely just statistical truths. Unless you can somehow mathematically model the entire universe as well. Good luck, because now the endeavor sounds god-like.
What you’re saying sounds extremely interesting. Do you have any recommendations on resources that could help me delve deeper into this topic?
I’m no academic, so apologies for the lack of substance. I mostly just get stuck in rabbit holes reading about philosophy and consciousness when I should be working.
Check out these theories for some interesting ideas:

Integrated Information Theory
Global Workspace Theory
Recurrent Processing Theory
Higher-Order Thought Theory
My summarized take is that modeling consciousness is akin to modeling the three-body problem or the double pendulum. Even if the system is deterministic and capable of being modeled in principle, you’ll forever be bottlenecked by the finite precision of your model, because errors in such systems grow exponentially. For example, tiny differences in a double pendulum’s initial angle (like 0.000001°) rapidly amplify over time to produce wildly different trajectories (see the sketch below). It is computationally intractable without unlimited precision, which is why I said you’d need to model the entire universe. This is deterministic chaos, and we have no reason to think human brains aren’t heavily shaped by it.
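You can watch that blow-up happen in a few lines of Python. This is a minimal sketch of an ideal, frictionless double pendulum integrated with classical RK4; the masses, lengths, time step, and starting angles are arbitrary choices of mine, and only the divergence matters.

```python
import math

G = 9.81           # gravity (m/s^2)
M1 = M2 = 1.0      # bob masses (kg); arbitrary choice
L1 = L2 = 1.0      # rod lengths (m); arbitrary choice

def derivs(state):
    """Equations of motion for an ideal double pendulum.
    state = (theta1, omega1, theta2, omega2), angles from vertical."""
    t1, w1, t2, w2 = state
    d = t1 - t2
    den = 2 * M1 + M2 - M2 * math.cos(2 * d)
    a1 = (-G * (2 * M1 + M2) * math.sin(t1)
          - M2 * G * math.sin(t1 - 2 * t2)
          - 2 * math.sin(d) * M2 * (w2 * w2 * L2 + w1 * w1 * L1 * math.cos(d))
          ) / (L1 * den)
    a2 = (2 * math.sin(d)
          * (w1 * w1 * L1 * (M1 + M2)
             + G * (M1 + M2) * math.cos(t1)
             + w2 * w2 * L2 * M2 * math.cos(d))
          ) / (L2 * den)
    return (w1, a1, w2, a2)

def rk4_step(state, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = derivs(state)
    k2 = derivs(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = derivs(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = derivs(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + e)
                 for s, a, b, c, e in zip(state, k1, k2, k3, k4))

# Two pendulums whose first angle differs by only 0.000001 degrees.
eps = math.radians(1e-6)
p = (math.radians(120.0), 0.0, math.radians(-10.0), 0.0)
q = (math.radians(120.0) + eps, 0.0, math.radians(-10.0), 0.0)

dt = 0.001
for step in range(1, 30001):              # 30 seconds of simulated time
    p, q = rk4_step(p, dt), rk4_step(q, dt)
    if step % 5000 == 0:                  # report every 5 seconds
        print(f"t = {step * dt:4.1f} s   |dtheta1| = {abs(p[0] - q[0]):.3e} rad")
```

Run it and the gap between the two trajectories climbs from roughly 1e-8 radians to order one within tens of seconds of simulated time. Shrinking eps only delays the divergence; it never prevents it, which is the whole point about finite precision.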