According to Taiwan tech publication DigiTimes, most AI firms are unwilling to wait two years for HDD supplies to stabilize and are shifting to SSDs instead. To contain costs, they are choosing QLC NAND-based drives over the faster, more durable, and more expensive TLC variants.


“Why does the population hate us? We’re only completely destroying the consumer electronics market, accelerating climate change, aiming to eliminate countless jobs, increasing power costs, and stealing the works of millions of people to feed our system all so we can get even more obscenely wealthy? Please clap.”
It’s a mystery!
Don’t forget draining all their drinking water. Violating everyone’s copyright. Driving teens to suicide. Giving terrible medical advice. And generating CSAM.
The water thing still baffles me. Like… just… cycle it. It’s a heat exchange system.
What do they do with the water? Pump thru once and then dump it? Why can’t they repurpose it? Why can’t they use gray water?
I don’t get it but that’s likely a me problem.
Evaporative cooling. They’re trying to save on their electric bills by not using AC. Or sometimes they’re cooling the AC condensers themselves this way.
The amount of heat generated is so large they can’t properly cool it with a closed loop system.
And clean water just reduces duct maintenance costs.
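To put some rough numbers on why closed loops don’t cut it (my own back-of-the-envelope figures, not from the thread): evaporating water absorbs about 2.26 MJ per kilogram, which is why evaporative cooling is so much cheaper than running chillers, and also why the water genuinely disappears instead of being recycled.

```python
# Back-of-the-envelope sketch: water consumed by a datacenter that
# rejects its heat load evaporatively. The 100 MW figure below is an
# illustrative assumption, not a number from the article.

LATENT_HEAT_J_PER_KG = 2.26e6  # approx. latent heat of vaporization of water

def water_evaporated_kg_per_hour(heat_load_mw: float) -> float:
    """Kilograms of water evaporated per hour to reject the given heat load."""
    heat_j_per_hour = heat_load_mw * 1e6 * 3600  # MW -> joules per hour
    return heat_j_per_hour / LATENT_HEAT_J_PER_KG

# Hypothetical 100 MW facility rejecting all heat by evaporation:
kg_per_hour = water_evaporated_kg_per_hour(100)
print(f"{kg_per_hour:,.0f} kg/h (~{kg_per_hour / 1000:,.0f} m^3 per hour)")
```

That works out to roughly 160 cubic meters of water per hour gone as vapor, every hour, for a single 100 MW site. Real towers recirculate most of their water and only lose a fraction to evaporation and blowdown, but the evaporated fraction is exactly the part that can’t be pumped back through.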
I, for one, applaud anything that helps destroy the current Intellectual Property system.
Not the other things, though.
Except the ones feeling the hurt the most are the little guys
Agreed.
One single potentially good thing in the middle of bad things still adds to something bad.
My point is that this shit is happening either way, no matter how shit it all is. So if we can recognize and extract one good thing out of it, at least on the other side we’ll have one good thing; if we don’t, we’ll have nothing good at all.
You forgot to mention that the product they’re sacrificing everything for is widely failing to meet expectations, and that they will likely expect taxpayers to bail them out when their investments fail and threaten to take the entire economy down with them.
Also in a few years when the AI generated nonsense code reaches a critical point and a ton of important systems grind to a halt, we’ll expect all of you that we fired to come back and un-fuck it for us so we can keep on making money. On temporary contracts and at reduced wages of course because times are tough.
Software engineers predicted this would happen. The percentage of developers who love fixing other people’s code is essentially zero; the percentage who love fixing AI code is even smaller.
If they offer low wages, they’ll only get the worst of the worst. Expect this phase to repeat until they are forced to hire for competence and pay the above-market rates required to convince senior developers to deal with not just the AI mess but all the failed attempts to turn it around.
But the crash is the moment when the people with actual tech knowledge will have the leverage to tell them: “pay me 10 times my standard rate or get bent.”
I hope so! But if the economy gets bad enough I can see people getting desperate enough that they’ll all scramble and under-cut each other into oblivion just to secure the work. Time will tell I guess.
I have a feeling that the tech sector has enough kompromat on a lot of important people to keep the bubble from bursting.
To be clear, it only wildly fails to meet expectations in sectors that you hear about.
It’s most likely meeting middling expectations in the sectors you don’t hear about, because news and social media have a huge negativity bias; that’s what gets views and engagement.
If we want to fight this scourge, we need to be more informed about it.
Like what sectors?
Law enforcement and military
Oh yeah I just read how AI keeps suggesting nuking everyone. That sounds like a great success. (I get what you mean, I just wanted to be a wiseass)
I certainly expect that the visual models are meeting expectations in the racial profiling department.
I think it’s more like expectations have been deliberately lowered in those fields to meet exactly what AI can deliver. Unpredictable, arbitrary, non-negotiable decisions are the point, and the goal. It’s not about enforcing any laws or achieving any actual outcome other than making innocent people fear for their lives. And it’s doing a fine job at that.
https://www.upi.com/Odd_News/2026/01/05/Heber-City-Police-Department-AI-program-officer-frog/9641767634540/
Doesn’t seem to be meeting expectations in that sector either.
Contact centers, software development, automation, images and video analysis, data analysis, semantic search, entity recognition, advertising, misinformation campaigns, social media, security scanning & automation…etc
Many of these are cross-cutting across many sectors, some of these are sectors you don’t think of as they are driven by government entities.
And many of these have boring quiet tools and integrations that you don’t hear about because they “just work”.
You only hear about the shit that doesn’t work. Not the shit that does work.
Edit: inb4 a reply of a narrow use case or shitty implementation that, obviously, doesn’t work, which I already called out as a bias.
Most of these aren’t generative AI, tho?
I am starting to think they’re running on accelerationist theory: push society to its limits and destroy everything to force change.
Some think this could lead to a more egalitarian society; others want a more centralized society controlled by a single power. There are two different schools of thought on this.
But we are clearly headed for big changes.