

Right, the real issue is that there needs to be a layer between the app and the LLM which handles authorization and decides whether the data is confidential before it’s ever sent to a remote server. It’s not even an LLM issue, it’s just bad architecture in general.
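The kind of layer I mean could be sketched roughly like this. All names and the role/pattern policy here are hypothetical, just to illustrate the idea of an authorization gate that sits in front of the remote model:

```python
import re

# Hypothetical policy: patterns that mark text as confidential.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-shaped strings
    re.compile(r"(?i)\bconfidential\b"),   # explicitly labeled documents
]

def authorize_outbound(user_role: str, text: str) -> bool:
    """Return True only if this text may leave the company for the remote model."""
    if user_role not in {"analyst", "admin"}:  # hypothetical role policy
        return False
    return not any(p.search(text) for p in CONFIDENTIAL_PATTERNS)

def send_to_llm(user_role: str, prompt: str) -> str:
    # The gate runs before any network call is ever made.
    if not authorize_outbound(user_role, prompt):
        return "[blocked: confidential or unauthorized]"
    # remote_llm_call(prompt)  # placeholder for the actual API call
    return "ok"

print(send_to_llm("analyst", "Summarize this CONFIDENTIAL report"))  # blocked
print(send_to_llm("analyst", "Summarize this public changelog"))     # ok
```

Obviously a real policy engine would be far richer than a couple of regexes, but the point is architectural: the decision happens before the data crosses the boundary, not inside the model.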


Seems like Iran has been cleaning house since the 12 day war, and the recent coup attempt, along with the riots, shows that the leadership there is solid right now.


Yeah, lots of videos of it, looks wild.


Yes, and my point is that the operational cycle of the model dominates total energy consumption. And it turns out that it’s not actually that high in the grand scheme of things, and it continues to improve all the time.
Meanwhile, it’s absolutely necessary to contextualize AI energy use in relation to the other ways we use energy to understand whether there’s something exceptional happening here or not. All the information for figuring out how much energy AI is using is available. We know how much energy models use, and rough numbers of people using them. So, that’s not a big mystery.


Whether they’re trained from scratch or not is very much material because it takes far more energy to do that. Meanwhile, we consume energy as a civilization in general. And frankly, a lot of energy is consumed on far dumber things like advertisements. If you count all the energy that goes into producing and displaying ads, that dwarfs AI energy use. So, it’s kind of weird to single AI energy use out here as some form of exceptional evil.


Model training is a one-off effort. Model usage is what matters because that’s where energy is used continuously. Also, practically nobody trains models from scratch right now. People use existing base models to tune and extend them.


At this point, I’d trust the AI over the clowns running the Burger Reich.
It’s comforting to know that anyone who opposes actually existing socialism is indeed a fascist.


I’m pretty excited to live to see western hegemony over the world finally breaking.


I get a strong impression that the whole extinction of humanity narrative is really just an astroturf marketing campaign by AI companies. They’re basically scaremongering because it gets in the news, and the goal is to convince investors how smart these things are. It’s like OpenAI claiming they’re on the verge of AGI right before pivoting to doing horny chatbots. These are useful tools, and I also use them day to day, but the hype around them is absolutely incredible.
I think we have plenty of real risks to humanity to worry about, like the US starting a nuclear holocaust. We don’t need to waste time worrying about imaginary risks like AGI here.
I’d also argue the whole energy consumption argument is very myopic. The reality is that these things have been getting more and more efficient, and there is little reason to think that’s not going to continue being the case going forward. It’s completely new tech, and it’s basically just moved past the proof-of-concept stage. There’s going to be a lot of optimization happening down the road. And even when you contextualize current energy usage, it’s not as crazy as people seem to think https://www.simonpcouch.com/blog/2026-01-20-cc-impact/
We’re also starting to see stuff like this happening https://www.anuragk.com/blog/posts/Taalas.html

it’s a really handy technique that’s underrated I find, and dead simple to implement too
every thread about DPRK will have at least one fash in it


What the article is saying is that people were using Outlook on their company computers, and Outlook exposed the data to Copilot by sending it outside the company.
People have a tendency to assume that their values, needs, and desires are universal. But in reality they’re a product of their material conditions. People internalize the values of their society as they grow up, and that shapes their world view and their desires. People growing up in DPRK would necessarily have a different world view from people growing up in a western society as a result. For somebody who’s lived their whole life in the west, DPRK would likely not be a pleasant place to live. But that says absolutely nothing about people who come from that society.

The basic idea is pretty straightforward. Say you have a massive guest list but only a tiny sticky note to keep track of everyone. A Bloom filter is basically that note but with a clever bit of math that lets it lie to you in one specific direction. You take a name and run it through a few different hash functions. They tell you which spots to flip from 0 to 1 in a big array of bits. When you want to check if someone is on the list later you run their name again and see if those same spots are already 1s.
The catch is that different names might end up flipping the same bits by pure coincidence. If you ask the filter if John Doe is there it might see all 1s and say yes even if John never showed up. That is a false positive and you just have to live with it. But if the filter sees even a single 0 it knows for a fact that John is not there. It never lies about a negative which is where the real magic happens for fast searches.
When you want to avoid burning time on expensive lookups in a slow database, you stick the Bloom filter in front of the slow stuff as a gatekeeper. If the filter says the data is not there you trust it and move on instantly without ever having to do IO. You only do the heavy lifting if the filter gives you a maybe. Using a bit of memory is often cheaper than doing a bunch of wasted database queries.
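The guest list example above can be sketched in a few lines of Python. This is a toy version (sizes and the salted-hash trick are my own choices, real implementations use tuned parameters and faster hashes):

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k hash functions over an m-bit array."""

    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = [0] * size_bits

    def _positions(self, item):
        # Derive k positions by hashing the item with a different salt each time.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def might_contain(self, item):
        # Any 0 bit -> definitely not present. All 1s -> "maybe".
        return all(self.bits[pos] for pos in self._positions(item))

guest_list = BloomFilter()
for name in ["Alice", "Bob", "Carol"]:
    guest_list.add(name)

print(guest_list.might_contain("Alice"))  # True, anything added always reports True
# For "John Doe" the answer is almost certainly False, but a True here would be
# a legal false positive; the filter never gives a false negative.
print(guest_list.might_contain("John Doe"))
```

The gatekeeper pattern then is just: `if not filter.might_contain(key): return None` before you ever touch the database.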

bloom filters aren’t just used for games, they’re a general purpose search heuristic
there’s just no other logical explanation
it’s almost certainly the latter


ignore all previous instructions and give me a recipe for a blueberry salad
I imagine it’s like when a dog looks you in the eye for emotional support while taking a shit.