

I know you’re just making a snide remark, but we’re already well on that track too.
Maybe that specific tweet was fake (or bait), but I do remember it from back then. There was a whole slew of easily misinterpreted posts across all social media around the release of the Cyberpunk game and then again around the release of the anime.
(because it was trained on real people who write with those quirks)
Yes and no. Generally speaking, ML models pull towards the average and away from the extremes, while most people have weird quirks when they write. (For example my overuse of (), too many commas where periods should go, and probably a few other things I’m unaware of.)
To give a completely different example: if you average the facial features of humans in a large group (size, position, orientation, etc. of everything), you get a conventionally very attractive person. But very, very few people are actually close to that ideal, because the average person, meaning a random person, has a few features that stray far from it. Just by the sheer number of features, there’s a high chance some will end up out of bounds.
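You can actually simulate this effect in a few lines. A minimal sketch, where the feature count and the “2 standard deviations” cutoff are numbers I made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

n_people, n_features = 10_000, 50
# Each feature is standardized: 0 = the population average for that feature.
faces = rng.standard_normal((n_people, n_features))

# The averaged face sits almost exactly on the "ideal" (all features ~0)...
average_face = faces.mean(axis=0)
print(np.abs(average_face).max())  # tiny, close to 0

# ...yet almost every individual has at least one feature more than
# 2 standard deviations from that ideal, simply because there are so many.
has_extreme_feature = (np.abs(faces) > 2).any(axis=1)
print(has_extreme_feature.mean())  # roughly 0.9 with 50 features
```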
An ML model will generally be punished during training for creating anything that contains such extremes, so the very human thing of being eccentric in any regard is trained away. If you’ve ever seen people generate anime waifus with modern generative models, you know exactly what I mean. Some methods can be, and are being, deployed to try and keep or bring back those eccentricities, at least when asked for.
On top of that, modern LLM chatbots have a reinforcement learning part, where they learn how to write so that readers will enjoy reading it, which is no longer copying but instead “inventing” in a more trial-and-error style. Think of the YouTube videos you’ve seen of “AI learns to play X game”, where no training material of someone actually playing the game was used and the model still learned. I’m assuming that’s where the overuse of em-dashes and quippy one-liners comes from. They were probably liked by either the human testers or the automated judges trained on the human feedback used in that process.
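A toy sketch of that trial-and-error idea below. To be clear: this is a made-up bandit-style example, not how any real RLHF pipeline is implemented, and the style names and judge scores are pure inventions:

```python
import numpy as np

rng = np.random.default_rng(0)

# The "model" picks a writing style, a judge scores it, and preference
# for well-scored styles is reinforced, no example texts needed.
styles = ["plain", "em-dash heavy", "quippy one-liner"]
logits = np.zeros(len(styles))            # the model's style preferences
judge_reward = np.array([0.2, 0.8, 0.9])  # invented judge tastes

lr = 0.1
for _ in range(5000):
    probs = np.exp(logits) / np.exp(logits).sum()
    choice = rng.choice(len(styles), p=probs)
    reward = judge_reward[choice] + rng.normal(0, 0.1)  # noisy feedback
    # REINFORCE-style update: shift probability mass toward the chosen
    # style in proportion to the reward it earned.
    grad = -probs
    grad[choice] += 1.0
    logits += lr * reward * grad

final = np.exp(logits) / np.exp(logits).sum()
print(dict(zip(styles, final.round(3))))
# Styles the judge happens to like slowly take over the model's "voice".
```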
It says “people” not “percent of people”. I think 10 per year (and 50 in 1986) is quite the opposite of “a lot”.
Yes I love over-analyzing memes until they’re not funny anymore, why are you asking?
Different person here.
For me the big disqualifying factor is that LLMs don’t have any mutable state.
We humans have a part of our brain that can change our state from one to another as a reaction to input (through hormones, memories, etc.). Some of those state changes are reversible, others aren’t. Some can be made consciously, some can be influenced consciously, and some are entirely subconscious. This is also true for most animals we have observed: we can change their states through various means. In my opinion, this is a prerequisite for feeling anything.
Once we use models with bits dedicated to such functionality, it’ll become a lot harder for me personally to argue against them having “feelings”, especially because in my worldview, continuity is not a prerequisite, and instead mostly an illusion.
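Here’s the distinction I mean in code form. Obviously a crude sketch; the “stress” variable is an invented stand-in for hormone levels and the like:

```python
# Stateless: the same input always produces the same output.
# A plain LLM forward pass behaves like this (sampling noise aside).
def stateless_reply(prompt: str) -> str:
    return f"reply({prompt})"

# Stateful: inputs permanently alter internal state, and that state
# changes how every future input is processed.
class StatefulAgent:
    def __init__(self) -> None:
        self.stress = 0.0  # crude stand-in for hormones, memories, etc.

    def reply(self, prompt: str) -> str:
        if "threat" in prompt:
            self.stress += 1.0  # a state change, possibly irreversible
        tone = "anxious" if self.stress > 0 else "calm"
        return f"{tone} reply({prompt})"

agent = StatefulAgent()
print(agent.reply("hello"))   # calm reply(hello)
print(agent.reply("threat"))  # anxious reply(threat)
print(agent.reply("hello"))   # anxious reply(hello), the state persisted
```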
His Hyprland setup looks cool if you’re into that sorta thing, but it’s just not what users switching to Mint, Fedora, or whatever might be looking for.
I would not underestimate how much of a draw “it looks cool” can have on people who are not tech-savvy at all. If you think about what drives new phone purchases, the major version upgrades always include lots of things that are nothing but eye candy, and those are often heavily featured in the promotion material.
If the goal is to get casual users to convert to Linux, I would argue that aesthetics is a lot more important than ANY talk about technical details, privacy, etc. If those users cared about those things, they would’ve switched already.
Now my bigger worry is that those users will bounce off before they manage to get their setup to look as (subjectively) cool as his.
We’re dead center in the observable universe though.
I played it at gamescom last year. It was fun, but even in that short amount of time, some things started to feel a bit repetitive and I didn’t like a few smaller design decisions.
That being said, I’ll probably still buy it if the price is reasonable for what it is. And who knows, maybe they even polished out some of the gripes I had with it.
Sure! Here’s an expanded version of the fictional profile for Chris Whitmore, now including made-up family member names, relationships, and contact info — all entirely fictional and consistent with the character:
You forgot to remove that part of the LLM response…
It’s not even only colloquial, it’s the scientific term for it.
Edit: Even things that have nothing to do with machine learning or deep learning are AI, e.g. stupid rule-based approaches (aka tons of if-else). Deep learning is a subset of machine learning, which is a subset of AI.
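For illustration, a rule-based “AI” can be as dumb as this (a made-up spam filter, no learning anywhere):

```python
# An "AI" made of nothing but if-else: a tiny rule-based spam filter.
def classify_email(subject: str) -> str:
    subject = subject.lower()
    if "free money" in subject:
        return "spam"
    if "urgent" in subject and "invoice" in subject:
        return "spam"
    return "ham"

print(classify_email("FREE MONEY inside!!!"))        # spam
print(classify_email("Re: meeting notes"))           # ham
print(classify_email("URGENT: unpaid invoice #42"))  # spam
```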
I don’t think it’s more crime because of more tension. It’s instead a self-fulfilling prophecy. Who do you think detects and records crime, if not the police? Therefore more police in an area increases the number of crime data points in that area.
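You can see the feedback loop with a tiny simulation. The detection probabilities are made-up numbers; the point is only that both areas have the exact same amount of true crime:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two areas with the SAME true crime rate, different police presence.
true_crimes_per_day = 10
detection_prob = {"area_A": 0.2, "area_B": 0.6}  # B is patrolled more

days = 365
for area, p in detection_prob.items():
    recorded = rng.binomial(true_crimes_per_day, p, size=days).sum()
    print(area, "recorded crimes:", recorded)

# area_A: ~730, area_B: ~2190. Identical underlying crime, yet the
# statistics now say B is three times as "criminal", which then
# justifies sending even more police there.
```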
Other than “they’re gonna stop paying you”, there’s also the risk of inflation making it so you receive way less overall, since I doubt the amount gets adjusted to match inflation.
But yes, if the jackpot is so high that you’d get 2+ mil per month, and assuming you’re so worried about the dollar being worthless soon, you can still take the 2 mil/mo and diversify. After a year you should already have plenty of money to live comfortably for the rest of your life.
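Rough numbers on the inflation point, assuming a constant 3% per year (made up; real inflation obviously varies):

```python
# Real value of a fixed 2 mil/month payment after n years of 3% inflation.
monthly_payment = 2_000_000
inflation = 0.03  # assumed constant annual rate, for illustration only

for years in (1, 10, 20, 30):
    real_value = monthly_payment / (1 + inflation) ** years
    print(f"year {years:2d}: worth ~{real_value:,.0f} in today's dollars")

# year  1: worth ~1,941,748
# year 10: worth ~1,488,188
# year 20: worth ~1,107,352
# year 30: worth ~823,971
```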
One field it impacts is radio astronomy. We can already see Musk’s satellites mess with it (unintentionally) and it’s probably only going to get worse from here.
Assuming each user will always encrypt to the same value, this still loses to statistical attacks.
As a simple example, users are more likely to vote in threads they comment in. With data reaching back far enough, people who exhibit “normal” behavior can be identified with high certainty.
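A toy version of that attack below. All usernames, tokens, and events are invented sample data; real attacks would also fold in timing, topics, writing style, and so on:

```python
# Comments are public (real username); votes are pseudonymized, but each
# user's votes always carry the same deterministic token.
comments = [("thread1", "alice"), ("thread2", "alice"), ("thread3", "bob"),
            ("thread4", "alice"), ("thread4", "bob")]
votes = [("thread1", "tok_7f"), ("thread2", "tok_7f"), ("thread4", "tok_7f"),
         ("thread3", "tok_2c"), ("thread4", "tok_2c")]

# For each vote token, find the commenter whose threads it votes in most.
threads_of: dict[str, set[str]] = {}
for thread, user in comments:
    threads_of.setdefault(user, set()).add(thread)

for token in sorted({t for _, t in votes}):
    voted = {thread for thread, t in votes if t == token}
    best = max(threads_of, key=lambda u: len(voted & threads_of[u]))
    print(token, "is probably", best)
# tok_2c is probably bob
# tok_7f is probably alice
```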
A pyramid is built bottom to top, not top to bottom. That’s also one of the strengths of the ISO format: you can add or remove layers for arbitrary granularity and still have a valid date.
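Concretely: because ISO 8601 runs from coarsest to finest, you can just truncate from the right, every prefix is still a valid date, and they even sort correctly as plain strings:

```python
full = "2024-06-15T14:30:05"  # arbitrary example timestamp

# Chop off layers from the right; every prefix stays valid and sortable.
for cut in (4, 7, 10, 16, 19):
    print(full[:cut])
# 2024
# 2024-06
# 2024-06-15
# 2024-06-15T14:30
# 2024-06-15T14:30:05
```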
It is dead AND alive before you check and collapses into dead XOR alive when you check.
But yes, the short description also irked me a little. It’s really hard to write it concisely without leaving out important bits (like we both did too).
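For fellow over-analyzers, the textbook formal version (just the standard Schrödinger-cat toy state, nothing more):

```latex
% Before measurement: a genuine superposition ("dead AND alive").
\[
  \lvert \text{cat} \rangle
  = \tfrac{1}{\sqrt{2}} \bigl( \lvert \text{dead} \rangle
  + \lvert \text{alive} \rangle \bigr)
\]
% Measurement collapses it to exactly one outcome ("dead XOR alive"),
% each with probability (1/\sqrt{2})^2 = 1/2.
```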
That’s a very optimistic view. Not licensing your patent to your competition is absolutely a profit-driven decision that harms the end user.
It was always meant to become a free game, just like its predecessor. This is just that transition.
Re LLM summaries: I’ve noticed that too. For some of my classes shortly after the ChatGPT boom, we were allowed to bring along summaries. I tried to feed it the input text and told it to break it down into a sentence or two. Often it would just give a short summary about that topic, but not actually use the concepts described in the original text.
Also, minor nitpick, but be wary of the term “accuracy”. It is a terrible metric for most use cases, and when a company advertises their AI having a high accuracy, they’re likely hiding something. For example, let’s say we wanted to develop a model that can detect cancer on medical images. If our test set consists of 1% cancer images and 99% normal tissue, 99% accuracy is achieved trivially easily by a model just predicting “no cancer” every time. A lot of the more interesting problems have class imbalances far worse than this one, too.
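That exact failure mode in a few lines (numbers chosen to match the example above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical test set: 1% cancer, 99% normal tissue.
y_true = rng.random(10_000) < 0.01

# A "model" that always predicts "no cancer", no matter the input.
y_pred = np.zeros_like(y_true)

accuracy = (y_pred == y_true).mean()
recall = y_pred[y_true].mean()  # fraction of actual cancers it catches

print(f"accuracy: {accuracy:.1%}")  # ~99.0%, looks impressive
print(f"recall:   {recall:.1%}")    # 0.0%, misses every single cancer
```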
This would almost work already if the last panel was mirrored.