• webadict@lemmy.world · 1 day ago

    Holy shit. This is the craziest article to write about one of the shittiest videos I have ever seen.

    That video is glazing the fuck out of LLMs, and the creator knows jackshit about how AIs or even computers work. What a fucking moron.

    So, like, the point of the experiment is that LLMs generate outputs based on their inputs, and then those outputs are interpreted by an intermediary program to do things in games. And the video is trying to pretend that this is LITERALLY a new intelligent species emerging, because you never told it to do anything other than its initial goal! Which… isn't impressive? LLMs generate outputs based on their datasets; like, that's not in question. That isn't intelligence, because it is just one giant mathematics problem.
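
    To make that concrete, here's a bare-bones sketch of the kind of loop being described. Every name in it (call_llm, the ACTION format, the game stub) is made up for illustration; it's not from the actual experiment:

        def call_llm(prompt: str) -> str:
            """Stand-in for whatever model API the experiment used."""
            return "ACTION: mine_block x=10 y=64 z=-3"

        def parse_action(output: str) -> tuple[str, dict]:
            """The 'intermediary program': turn free text into a game command."""
            _, _, rest = output.partition("ACTION:")
            name, *args = rest.split()
            return name, dict(a.split("=") for a in args)

        def execute_in_game(action: str, params: dict) -> str:
            """Stand-in for the game-side binding; returns a new observation."""
            return f"executed {action} with {params}"

        goal = "build an efficient village"
        observation = "you are standing in a plains biome"
        for _ in range(3):  # observe -> generate -> parse -> act, repeated
            output = call_llm(f"Goal: {goal}\nObservation: {observation}\nWhat next?")
            action, params = parse_action(output)
            observation = execute_in_game(action, params)

    All the "new species" magic lives in that middle step: text goes in, text comes out, and a plain old program decides what it means.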

    This article is a giant pile of shit.

    • bleistift2@sopuli.xyz · 1 day ago

      If you argue like that, then neither intelligence nor societies exist. At the fundamental level, every neuron just computes its output from its inputs, quite predictably even. That doesn't mean emergent behaviours cannot exist.

      • partofthevoice@lemmy.zip · 45 minutes ago

        Humans aren’t rational creatures, though… we use rationality as a tool, but tools designed to mimic rationality aren’t actually mimicking humans. Human intelligence has a lot to do with being irrational, arational, and sometimes deciding to use rationality as a means to an end. Societies emerge from the social patterns produced by agents with those particular behaviors: patterns like morals, religion, culture, … It’s really not the same thing as stuffing a bunch of LLMs in a box. The LLMs don’t have the same capacities for growth, failure, awareness thereof, … nor any of the natural pressures that would even incentivize such awareness. They’re just little feedforward algorithms stuck in a feedback loop with each other.
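
        To make that last line concrete, a toy sketch; nothing here comes from any real system, it just shows stateless functions wired into a loop:

            def agent_a(msg: str) -> str:
                # Stateless: the reply depends only on the current input.
                return f"A answers '{msg}'"

            def agent_b(msg: str) -> str:
                return f"B answers '{msg}'"

            msg = "hello"
            for step in range(3):  # feedback: each output becomes the next input
                msg = agent_b(agent_a(msg))
                print(step, msg)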

      • webadict@lemmy.world · 1 day ago

        Just as a brain is not a giant statistics problem, LLMs are not intelligent. LLMs are basically large math problems that take whatever you put into them and calculate what should come next. That isn’t an emergent behavior. That isn’t intelligence at all.

        If I type 20*10 into a calculator and it gives me 200, is it a sign of intelligence that the calculator can do math? I never programmed it to know what 10 or 20 or 200 were! Sure, I did make it know what multiplication is and what digits and numbers are, but those particular things it totally created on its own after that!!!

        When you type a sentence into an LLM and it returns an approximation of what a response sounds like, you should treat it the same way. People programmed these things to do the things they are doing, so what behavior is fucking emergent?

        • deltaspawn0040@lemmy.zip · 2 hours ago

          I’m sorry, but what evidence do you have that the human brain cannot possibly be modeled mathematically, like literally almost anything else can?

          • partofthevoice@lemmy.zip · 29 minutes ago

            Integrated Information Theory (IIT) would suggest that phi (Φ) measures the degree to which a system generates irreducible, integrated cause–effect structure. Irreducibility is exactly what you’re asking about: an irreducible whole cannot be modeled mathematically from its parts, because if it could, it would by definition be reducible to smaller parts.

            You can describe the function of the human brain mathematically, of course… For example, some low-hanging fruit (see the toy sketch after this list) might be:

            • Define the system’s transition probabilities.
            • Define its cause–effect repertoires.
            • Define Φ mathematically.
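
            As a toy illustration of those three steps (and only a toy; this is nowhere near the real IIT calculation, and every number below is invented): take two binary nodes that copy each other, write down the transition probabilities, sever their connection, and use the divergence between the intact and cut systems as a crude Φ-like quantity.

                import itertools
                import math

                # Toy mechanism: each node tends to copy the OTHER node's previous state.
                def node_dist(other_prev):
                    """P(node = x at t+1 | the other node was other_prev at t)."""
                    return {other_prev: 0.9, 1 - other_prev: 0.1}

                def whole_dist(prev):
                    """Transition distribution of the intact two-node system."""
                    return {nxt: node_dist(prev[1])[nxt[0]] * node_dist(prev[0])[nxt[1]]
                            for nxt in itertools.product((0, 1), repeat=2)}

                def cut_dist():
                    """Same mechanism with the connection severed: each node's input
                    from the other is replaced by maximum-entropy noise (a fair coin).
                    The previous state no longer matters, because the cut removes
                    every input this toy network has."""
                    noised = {x: 0.5 * node_dist(0)[x] + 0.5 * node_dist(1)[x] for x in (0, 1)}
                    return {nxt: noised[nxt[0]] * noised[nxt[1]]
                            for nxt in itertools.product((0, 1), repeat=2)}

                def kl_bits(p, q):
                    """Kullback-Leibler divergence in bits."""
                    return sum(p[s] * math.log2(p[s] / q[s]) for s in p if p[s] > 0)

                prev = (0, 1)
                phi_like = kl_bits(whole_dist(prev), cut_dist())
                print(f"phi-like integration at state {prev}: {phi_like:.3f} bits")

            Even for two binary nodes this is fiddly, and the real calculation searches over every possible partition, which blows up combinatorially; that’s part of why exact Φ is only ever computed for tiny systems.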

            But that’s not going to model human experience. The experience isn’t reducible. It models, instead, something closer to the quality of experience. Human rationality derives downstream from human experience. So it’s just not a fair argument to say that a tool mimicking only the downstream patterns of human experience will somehow also possess the upstream capacity for experience, or even a relatable sense of rationality at all.

            I don’t think we’re going to get a deterministic explanation for human behavior, ever. Most likely just statistical truths. Unless you can somehow mathematically model the entire universe as well. Good luck, because now the endeavor sounds god-like.

            • deltaspawn0040@lemmy.zip · 19 minutes ago

              What you’re saying sounds extremely interesting. Do you have any recommendations on resources that could help me delve deeper into this topic?

          • webadict@lemmy.world · 50 minutes ago

            That is not as smart a question as you want it to be. Unfortunately for you, not everything can be modeled mathematically, or, if you want to be extremely pedantic about it, not everything can currently be modeled mathematically, efficiently and precisely, because doing so would require knowledge or resources far eclipsing what we have available. If you just want to push up your glasses and ACKSHUALLY me, then sure, it’s also possible to do anything, hurr hurr.

            To even fucking PRETEND that we can model a brain right now is hilarious to me, but to equate that to LLMs is downright moronic. Human brains are not created, trained, or used in any way similar to LLMs, no matter what anyone says, but you are insinuating that they are somehow similar??? LLMs are a simulation of a learning algorithm, trained through brute-force tactics, and used for pattern completion. That’s just not how brains work!

            And yet, in spite of the petabytes of data they fucking jam into these pieces of shit, they still can’t even draw hands correctly. They still can’t figure out the seahorse emoji. They still can’t count the Rs in strawberry! They continuously repeat only the things they hear, and need to have these errors fixed manually. They don’t know anything. And that’s why they aren’t intelligent. They are fed data points. They create estimations. But they do not understand what the connections between those points are. And no amount of pointing at humans will fix that.

  • MrNesser@lemmy.world · 2 days ago

    As an experiment it’s flawed: the AIs already had directives and simply followed them.

    • webghost0101@sopuli.xyz · 2 days ago

      To be fair, “build a society, make an efficient village” is pretty vague, and there are tons of directives the AI is incapable of fulfilling.

      It is interesting to see how far they got and how the AI interprets “efficient village”, because one could argue that the existence of criminals is part of society, and an individual AI could reason that, in order to create its interpretation of society, it must become a dictator first.

  • vacuumflower@lemmy.sdf.org · 2 days ago

    A “wild experiment” of using N bots in a game.

    Every Jedi Outcast multiplayer melee in my childhood was more interesting than today’s news.

  • SaltySalamander@fedia.io · 1 day ago

    As of this comment, this post has 3 top-level comments. All three are handwaving this away. And ALL THREE didn’t read past the headline.

    • I Cast Fist@programming.dev · 1 day ago

      The entire article is 946 characters, 146 words long. It reads like a super-concise description of the embedded 18-minute YT video, which blurts out that “this is the first glimpse of a new species beginning to think for itself and that could soon, according to Nobel laureates and the godfathers of AI, lead to literal human extinction” before the 1-minute mark.

      Handwaving this away seems like the most responsible thing to do with your own time. The first experiment in the video, Smallville, can be somewhat replicated by Dwarf Fortress, which uses zero AI: dorfs can throw parties, make friends and enemies, have children, etc. The only real difference with this Minecraft experiment is that there are actual chat logs you can check.

    • ThePowerOfGeek@lemmy.world · 1 day ago

      Sounds about right, unfortunately.

      I read a bit of the article and watched most of the YouTube video embedded in it. It’s definitely worth a read/watch.

      The video narrator keeps going back to the argument “they didn’t tell the agents to do XYZ!” Yeah, no shit, that’s the whole point of agents! They are autonomous and extrapolate actions based on the situation they find themselves in. The implication the guy is trying to make is that these agents are sentient, which is a stretch and a bit misleading.

      But… it’s still a really interesting series of exercises. Especially the Minecraft one. And if nothing else it gives researchers pointers about how they can improve agent decision-making, and everyone more insight into how they operate.

      • I Cast Fist@programming.dev · 1 day ago

        The video downplays several explicit instructions and limitations, as if everything came from a single one-line command to the AI, like when the humans added 2 agents primed to think that “taxes are too high”. An actually newsworthy video would have left the agents without any implicit or explicit command to bother with taxes and then, after some time, found that they had started playing with taxes on their own within Minecraft, a game that has no in-game market or currency of any sort.

        The video/experiment should have added 2 or 3 agents with no mention of taxes whatsoever to the group, and then seen whether the others, who were taxed, would have been bothered.