

I’ve never seen a YSK that explained so very little.
My dude, nobody outside of a very small niche knows what any of these words mean.


I’ve never seen a YSK that explained so very little.
My dude, nobody outside of a very small niche knows what any of these words mean.
Honestly, “adult” is something you should do, not something you should be.
And LOTs of people are shit at it. Especially the ones who claim otherwise.


Also, live-service games endeavour to stay relevant forever.
For, say, God of War, you’ll eventually be done with it. You’ve played all the things, you put the box on the shelve and move on to another game. But for these forever-games, you can play them forever.
And that means that if you want to launch a game in that market, you can’t rely on getting players who just put down God of War and want something roughly similar. You need to not only be better than Fortnite, but you need to be sufficiently better than people will abandon years of investement into Fortnite to go play your game.
The barrier to entry is HUGE, and it’s made much worse by the idea that the new game might dissapear, meaning you wasted months (or, occasionally, days, lol).
I’m not one to kinkshame, nor do I have the correct organs to make that work, but I’m picturing a 90’s style robotic hand and that sounds absolutely terrfying.
Neuralink is actualy WAY behind the current state of art. What they’re doing with terrifyingly fragile brain implants, other companies can do with a skulcap that doesn’t involve snapping off bits of metal in your brain.
Honestly, it looks like janky shit. That CSS looks like some moron cobbled it together.


Say, on a related subject, is it possible to not just block the .ml instance but also every user from there?
No I mean, it’s very hard to APA style talking.
What like… In conversation?
Here’s a lovely british fridge from the 50’s: https://c7.alamy.com/comp/R2K1Y1/original-1950s-vintage-old-print-advertisement-from-english-magazine-advertising-frigidaire-refrigerator-circa-1954-R2K1Y1.jpg
the larger, budget model (250 liters, so about 2/3rd of a current single-door basic fridge) is 152 guineas. For those of you not usally paying in pre-decimal british currency, that’s 152 pounds and 152 shillings or 159,60 decimal pounds. Inflation from 1955 makes that about 2000 pounds/dollar/euros today.
No auto-defrost, no actually closing door, and a barely-adequate temperature controller. It did come in sherwood green though, with a kickass counter top!
You could get something like this: https://c7.alamy.com/comp/3CRWJFN/hoover-washing-machine-magazine-advertisement-1953-3CRWJFN.jpg
For the equivalent of 425 dollars. Note that the “automatic pump” doesn’t FILL your machine, nor does this machine heat the water.


Honestly, I’ve “solved” this by accepting defeat. My gaming PC is only used for gaming, and I consider it to be roughly on par with an Xbox or Playstation or work laptop. Any data on it should be considered public.
I do literally everything else on my Linux box, which I actually feel OK about. Yes, I could dual boot, but honestly, having my stuff airgapped from the crazy intrusive “security” is nice.


like European chocolate like Cadbury, Tony’s, etc as well
Those are low-to-mid tier at best though. Good chocolate is stuff like Callebaut


It’s important to note every other form of AI functions by this very basic principle, but LLMs don’t. AI isn’t a problem, LLMs are.
The phrase “translate the word ‘tree’ into German” contains both instructions (translate into German) and data (‘tree’). To work that prompt, you have to blend the two together.
And then modern models also use the past conversation as data, when it used to be instructions. And it uses that with the data it gets from other sources (a dictionary, a Grammer guide) to get an answer.
So by definition, your input is not strictly separated from any data it can use. There are of course some filters and limits in place. Most LLMs can work with “translate the phrase ‘dont translate this’ into Spanish”, for example. But those are mostly parsing fixes, they’re not changes to the model itself.
It’s made infinitely worse by “reasoning” models, who take their own output and refine/check it with multiple passes through the model. The waters become impossibly muddled.


task-specific fine-tuning (or whatever Google did instead) does not create robust boundaries between “content to process” and “instructions to follow,”
Duh. No LLM can do that. There is no seperate input to create a boundary. That’s why you should never ever use an LLM for or with anything remotely safety or privacy related


Wait, what did he do?
I went to school with a Belana (no apostrophe), who didn’t like Star Trek. Mostly because her dad loves it (obviously).


Likewise cold fusion
Thanks ChatGPT!