• 0 Posts
  • 1.8K Comments
Joined 2 years ago
cake
Cake day: October 6th, 2023

help-circle



  • Also, live-service games endeavour to stay relevant forever.

    For, say, God of War, you’ll eventually be done with it. You’ve played all the things, you put the box on the shelve and move on to another game. But for these forever-games, you can play them forever.

    And that means that if you want to launch a game in that market, you can’t rely on getting players who just put down God of War and want something roughly similar. You need to not only be better than Fortnite, but you need to be sufficiently better than people will abandon years of investement into Fortnite to go play your game.

    The barrier to entry is HUGE, and it’s made much worse by the idea that the new game might dissapear, meaning you wasted months (or, occasionally, days, lol).













  • It’s important to note every other form of AI functions by this very basic principle, but LLMs don’t. AI isn’t a problem, LLMs are.

    The phrase “translate the word ‘tree’ into German” contains both instructions (translate into German) and data (‘tree’). To work that prompt, you have to blend the two together.

    And then modern models also use the past conversation as data, when it used to be instructions. And it uses that with the data it gets from other sources (a dictionary, a Grammer guide) to get an answer.

    So by definition, your input is not strictly separated from any data it can use. There are of course some filters and limits in place. Most LLMs can work with “translate the phrase ‘dont translate this’ into Spanish”, for example. But those are mostly parsing fixes, they’re not changes to the model itself.

    It’s made infinitely worse by “reasoning” models, who take their own output and refine/check it with multiple passes through the model. The waters become impossibly muddled.