• Rhaedas@fedia.io · 19 hours ago

    Procedural generation with appropriate constraints, plus a connected game that stores and recalls what’s been created, can do this far better than a repurposed LLM. It’s hard work on the front end, but you have a much better idea of what the output will be versus hoping the LLM “understands” and remembers the context as it goes.
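
    To make that concrete, here’s a toy sketch (every name in it is made up): the constraints are explicit, the seed makes generation deterministic, and the store guarantees perfect recall of anything already created.

    ```python
    import random

    # Toy sketch: constrained, seeded generation plus a store that
    # recalls what's already been created instead of regenerating it.
    ALLOWED_BIOMES = ["forest", "desert", "tundra"]  # explicit constraint set
    NAME_PARTS = (["Kar", "Vel", "Tor"], ["ath", "un", "is"])

    class World:
        def __init__(self, seed: int):
            self.seed = seed
            self.store: dict[tuple[int, int], dict] = {}  # persistent memory

        def region(self, x: int, y: int) -> dict:
            if (x, y) in self.store:          # recall, never re-invent
                return self.store[(x, y)]
            rng = random.Random(hash((self.seed, x, y)))  # deterministic per cell
            region = {
                "name": rng.choice(NAME_PARTS[0]) + rng.choice(NAME_PARTS[1]),
                "biome": rng.choice(ALLOWED_BIOMES),  # can never leave the whitelist
            }
            self.store[(x, y)] = region
            return region

    w = World(seed=42)
    assert w.region(3, 7) == w.region(3, 7)  # same place, same answer, no "hoping"
    ```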

    • fonix232@fedia.io · 18 hours ago

      Sorry, but procedural generation will never give you the results a well-tuned small LLM can.

      Also, there’s no “hoping”: context preservation and dynamic memory can easily be fine-tuned even on micro models.
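
      By “dynamic memory” I mean roughly this pattern (a toy sketch; a real setup would rank by embedding similarity and call an actual model): persist facts as they’re created, retrieve the relevant ones, and inject them into the prompt so nothing depends on the model’s own recall.

      ```python
      import re

      # Toy sketch of dynamic memory: store facts, retrieve the most
      # relevant ones, and inject them into the prompt. A real system
      # would rank by embedding similarity, not keyword overlap.
      memory: list[str] = []

      def tokens(text: str) -> set[str]:
          return set(re.findall(r"\w+", text.lower()))

      def remember(fact: str) -> None:
          memory.append(fact)

      def recall(query: str, k: int = 3) -> list[str]:
          q = tokens(query)
          return sorted(memory, key=lambda f: -len(q & tokens(f)))[:k]

      def build_prompt(user_msg: str) -> str:
          facts = "\n".join(recall(user_msg))
          return f"Known facts:\n{facts}\n\nUser: {user_msg}\nAssistant:"

      remember("The blacksmith's name is Kareth.")
      remember("The player burned down the mill in chapter 2.")
      print(build_prompt("What does Kareth the blacksmith think of me?"))
      ```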

      • Rhaedas@fedia.io · 18 hours ago

        I agree that the results will be different, and certainly a very narrowly trained conversational LLM could have some potential if it has proper guardrails. Either way there’s a lot of prep beforehand to make sure the boundaries are very clear, and which approach works better is debatable and depends on the application. I’ve played around with plenty of fine-tuned models, and they will still drift out of context given enough data. LLMs and procedural generation have a lot in common, but with the latter it’s far easier to guarantee predictable outputs, because you control exactly how the probabilities that create them are used.
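
        The kind of guardrail I mean looks something like this (a toy sketch; `call_model` is a stand-in for whatever LLM you’d actually use): nothing the model says reaches the game until it passes explicit boundary checks.

        ```python
        # Toy guardrail sketch: the model's output is validated against
        # explicit boundaries before the game ever sees it. `call_model`
        # is a stub standing in for a real LLM call.
        ALLOWED_MOODS = {"friendly", "wary", "hostile"}
        MAX_LINE_LEN = 200

        def call_model(prompt: str) -> dict:
            # Stub response; a real call would go to your fine-tuned model.
            return {"mood": "friendly", "line": "Welcome back, traveler."}

        def generate_npc_line(prompt: str, retries: int = 3) -> dict:
            for _ in range(retries):
                out = call_model(prompt)
                if out.get("mood") in ALLOWED_MOODS and len(out.get("line", "")) <= MAX_LINE_LEN:
                    return out  # inside the boundaries: accept
            # Retries exhausted: fall back to a safe scripted line.
            return {"mood": "wary", "line": "..."}

        print(generate_npc_line("The player greets the innkeeper."))
        ```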