Uh, then AMD wins the PC GPU wars thanks to Nvidia's unexpected exit, and Intel becomes the new AMD in that market segment.
And some Chinese companies emerge as new PC GPU manufacturers, though exactly what market strategy they'd specialize in or pursue is hard to predict.
Anybody who just wants a gaming PC with local compute, or who doesn't have the best internet access / data caps… goes with AMD/Intel; 'casuals' go with renting their remote game rendering.
The economic/cultural dynamics of pc gaming begin to resemble buying a new/used car vs leasing one, both get more financialized in their own ways.
… Why does there need to be a whole article about this?
As long as AMD and Intel continue their open source drivers, I’m fine with it.
PC gaming itself will hardly change, because AMD cards work just fucking fine. They've only ever been a little bit behind on the high end. They've routinely been the better value for money, and offered a much lower low end. If they don't have to keep chasing the incomparable advantages Nvidia pulls out of their ass, maybe they can finally get serious about heterogeneous compute.
Or hey, maybe Nvidia ditching us would mean AMD finds the testicular fortitude to clone CUDA already, so we can end this farce of proprietary computation for your own god-damn code. Making any PC component single-vendor should’ve seen Nvidia chopped in half, long before this stupid bubble.
Meanwhile:
Cloud gaming isn’t real.
Any time after 1977, the idea that consumers would buy half a computer and phone in to a mainframe was a joke. The up-front savings were negligible and the difference in capabilities did not matter. All you missed out on was your dungeon-crawlers being multiplayer, and mainframe operators kept trying to delete those programs anyway. Once home internet became commonplace even that difference vanished.
As desktop prices rose and video encoding sped up, people kept selling the idea you’ll buy a dumb screen and pay to play games somewhere else. You could even use your phone! Well… nowadays your phone can run Unreal 5. And a PS5 costs as much as my dirt-cheap eMachines from the AOL era, before inflation. That console will do raytracing, except games don’t use it much, because it doesn’t actually look better than how hard we’ve cheated with rasterization. So what the fuck is a datacenter going to offer, with 50ms of lag and compression artifacts? Who expects it’s going to be cheaper, as we all juggle five subscriptions for streaming video?
I’d rather pay for Chinese GPUs than cloud gaming. By focusing solely on Nvidia, the article almost acts as though nothing else exists, especially in China, where Moore Threads GPUs are starting to have reasonable gaming performance. I don't think Europe can produce something as long as it stays neoliberal, but some weird stuff could happen with RISC-V.
Fact is, Nvidia has the vast majority of market share: https://www.techpowerup.com/337775/nvidia-grabs-market-share-amd-loses-ground-and-intel-disappears-in-latest-dgpu-update
It’s inexplicable to me. Why are folks ignoring Battlemage and AMD 9000 series when they’re so good?
Alas, for whatever reason they aren’t even picking existing alternatives to Nvidia, so Nvidia suddenly becoming unavailable would be a huge event.
It’s inexplicable to me. Why are folks ignoring Battlemage and AMD 9000 series when they’re so good?
Because the 2022 AMD 7900 XTX has better raw performance than the 9000 series, and I'm pissed off they didn't try to one-up themselves in the high-end market. I'm not buying a new Nvidia card, but I'm not buying a 9000 series either, because it feels like I'm paying for a sub-par GPU compared to what they're capable of.
AMD didn’t make a 5090 equivalent so I won’t buy their mid-tier card
Is there a name for this thinking?
Well, same for me TBH. I’d buy a huge Battlemage card before you could blink, but the current cards aren’t really faster than an ancient 3090.
But most people I see online want something more affordable, right in the 9000 series or Arc range, not a 384-bit GPU.
How is it a sub-par GPU, given that it targets a specific segment (looking at its price point, die area, memory & power envelope) with its configuration?
You're upset that they didn't aim for a halo GPU, and I can understand that, but how does this completely rule out a mid-to-high-end offering from them?
The 9000 series is reminiscent of Navi 10 versus Vega 10 GPUs like the Vega 56, Vega 64, even the Radeon VII: achieving equivalent performance for less power and hardware.
For accelerated rendering etc., CUDA is the standard, and because of it, Nvidia. And it's like that for a lot of other niche areas. Accelerated encoding? Nvidia, and CUDA again. Yes, AMD and Intel can generally do all that these days as well, but that's a much more recent thing. If all you want to do is game, sure, that's not a big issue.
But if you want to do anything more than gaming, say video editing and rendering timelines, Nvidia has been the standard, with a lot of inertia. To give an example of how badly AMD missed the boat: I have been using accelerated rendering on my GT 750 in Blender for a decade now. That card came out in early 2014. The first AMD card capable of that came out one month before the end of 2020. Nearly a 7-year difference! I'm looking at a recent Intel Arc or Battlemage card or a 6xxx series AMD ATM, because I run BSD/Linux and Nvidia has traditionally been a necessary PITA. But with both of them now supporting my workflows, Nvidia is much less necessary. Others will eventually leave too, as long as AMD and Intel don't fuck it up for themselves.
Yeah, I mean you are preaching to the choir there. I picked up a used 3090 because ROCm on the 7900 was in such a poor state.
That being said, much of what you describe is just software obstinacy. AMD (for example) has had hardware encoding since early 2012, with the 7970. Intel Quick Sync has long been a standard on laptops. It's just a few stupid proprietary bits that never bothered to support it.
CUDA is indeed extremely entrenched in some areas, like anything involving PyTorch or Blender’s engines. But there’s no reason (say) Plex shouldn’t support AMD, or older editing programs that use OpenGL anyway.
Yes, the software will get there long before many people's hearts and minds. As you said, it already has in many cases. The inertia Nvidia got by being early is why they're so dominant now. But I think Nvidia's focus on crypto and now data center AI is set to hurt them long term. Only time will tell, and they're technically swimming in it ATM. But I'm getting out now.
CUDA is actually pretty cool, especially in the early days when there was nothing like it. And Intel/AMD attempts at alternatives have been as mixed as their corporate dysfunction.
And Nvidia has long had a focus on other spaces, like VR, AR, dataset generation, robotics, "virtual worlds" and such. If every single LLM thing disappeared overnight in a puff of smoke, they'd be fine; a lot of their efforts would transition to other spaces.
Not that I’m an apologist for them being total jerks, but I don’t want to act like CUDA isn’t useful, either.
- Nvidia abandons x86 desktop gamers
- The only hardware that gamers own is ARM handhelds
- Some gamers stream x86 games, but devs start selling ARM builds since the x86 market is shrinking
- AI bubble pops
- Nvidia tries to regain x86 desktop gamers
- Gamers are almost entirely on ARM
- Nvidia pulls an IBM and vanishes quietly into enterprise services and not much else
Nvidia does not care about the ISA of the CPU at all. They don’t make it after all. Also not clear how they would kill x86. If they leave the consumer GPU market they cede it to AMD and Intel.
Nvidia does not care about the ISA of the CPU at all.
That’s kinda my point. They’re stuck communicating over PCI-E instead of being a first-class co-processor over AMBA.
Nvidia has drivers for ARM. They're not in as good a shape as the x86 ones are, but I don't think it's that big of a roadblock.
Sure, but do you need a discrete video card if you’re gaming on an ARM SoC? And we’ve seen from the struggles of x86 iGPUs that graphics APIs pretty much have to choose whether they’re going to optimize for dedicated VRAM or shared memory, cuz it has inescapable implications for how you structure a game engine. ARM APIs will probably continue optimizing for shared memory, so PCI-E GPUs will always be second-class citizens.
Yes, even for applications other than gaming. There are legitimate mad lads out there running Steam games and LLMs with discrete video cards on Raspberry Pis. Not to mention there are non-SoC ARM machines. And SoC Intel machines.
Sometimes getting the integrated graphics on any of these SoCs working is a much harder prospect than getting a discrete one.
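To illustrate the dedicated-VRAM vs. shared-memory point a few comments up, here's a minimal sketch using hypothetical stand-in types (not any real graphics API): an engine built around discrete VRAM has to route every upload through a staging buffer plus an explicit copy it schedules and synchronizes, while an engine built around unified memory just writes in place. Committing to one model or the other shapes the whole frame loop, which is part of why PCI-E cards end up second-class citizens on platforms designed around shared memory.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical stand-in for a GPU buffer; the names are illustrative, not any real SDK.
struct GpuBuffer {
    std::vector<uint8_t> storage;
};

// Discrete-GPU path: the engine writes into a host-visible staging buffer, then
// schedules an explicit copy into device-local VRAM (transfer queue, fences, etc.).
void upload_discrete(GpuBuffer& staging, GpuBuffer& device_local,
                     const void* src, size_t size) {
    staging.storage.resize(size);
    std::memcpy(staging.storage.data(), src, size);   // CPU writes staging memory
    // In a real API this would be a recorded copy command the GPU executes later;
    // the engine has to double-buffer and synchronize around it every frame.
    device_local.storage = staging.storage;
}

// Unified-memory path (typical ARM SoC): the buffer is mapped once and the engine
// writes vertex/uniform data in place, with no staging pass to schedule.
void upload_unified(GpuBuffer& shared, const void* src, size_t size) {
    shared.storage.resize(size);
    std::memcpy(shared.storage.data(), src, size);    // no staging, no copy pass
}

int main() {
    const float verts[] = {0.0f, 1.0f, 2.0f};
    GpuBuffer staging, vram, shared;
    upload_discrete(staging, vram, verts, sizeof(verts));
    upload_unified(shared, verts, sizeof(verts));
    return 0;
}
```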
With China working hard to catch up on chip production, it is only a matter of time before we start seeing attractively priced Chinese-made GPUs on the market. No idea how long it will take, though.
What makes you think Chinese firms won't also jump on the AI bandwagon?
Someone with an actual CS/engineering background feel free to correct me, but I feel like the only way out of this for gamers is if someone finds a better type of chip for AI work. GPUs just happened to be the best thing for the job when the world went crazy; they were never designed specifically for these workloads.
If someone like Tenstorrent can design a RISC-V chip for these LLM workloads, it might take some demand off gaming GPUs.
You’ve got a good point. I wouldn't be surprised if Nvidia was working on a dedicated platform for AI to cover this exact issue. Then again, I would be equally unsurprised if they just didn't care and didn't mind gutting the home gaming market for short-term profit.
What makes you think Chinese firms won't also jump on the AI bandwagon?
the bubble won’t last that long
The bubble might burst, but there are real use cases for AI out there and therefore you will see AI in use even after the current ponzi scheme has collapsed. You can do great voice and handwriting recognition now and speech generation and that won’t go away.
yes but the ridiculously heightened demand won’t be there
The only thing that will burst the bubble is electricity.
The dot-com bubble burst due to dark fiber, all because massive Internet backbones were easy to build and the last mile to people's homes was not.
The current electrical grid cannot support the number of data centers being built. The ones that are planned on top of that… Well, dark data centers will be the new dark fiber.
There’s more complexity to it all, but really it all boils down to power for this particular bubble.
Or lack of use? The current trend is fueled by hype that AI can do everything and will replace 50% of the workforce, another nightmare scenario… However, current AI may be an OK tool for some jobs and not much more; the world does not need 200 gigawatts of AI datacentres to produce memes.
Data centers are already paid for, so they’re being built. But if they can’t go online due to power costs, then that will burst the bubble.
As for AI use… Sadly there are a bunch of people using it. And while the drop off rate of people trying it and ditching it is steep, there’s actually a readoption curve. Which never fucking happens.
So everyone is betting on the next model being better, and more people giving it all a second chance… Which are two open questions.
But no power means no new model and no readoptions. Thus no profit. Those other steps can fail, but without power it all fails.
Data centers are already paid for, so they’re being built.
No they are not… they are contracted on paper by OpenAI, for example, who has no way of paying for them other than "trust me bro".
But if they can’t go online due to power costs
It's not just the cost… the infrastructure to produce all this additional power does not exist… another issue with this massive bubble.
As for AI use… Sadly there are a bunch of people using it. And while the drop off rate of people trying it and ditching it is steep, there’s actually a readoption curve. Which never fucking happens.
They're served by the current infra, so what do we need the next 200 gigawatts for? To make memes faster?
So everyone is betting on the next model being better, and more people giving it all a second chance… Which are two open questions.
By "everybody" you mean those peddling the bubble… we already saw the decline in the newer models and know it's mathematically impossible to get rid of the slop or get to "gen AI" through LLMs.
They are in cahoots with the RAM cartels to push gaming onto their cloud services so that competitors like AMD don't just pick those customers up. Trying to make everything into a service is just a side benefit, although I'm sure they realize 16-bit SNES games are still fun and that people will just be driven to less powerful entertainment platforms.
All I have to say is thank God and good riddance; may your stock price collapse.
Seriously. Why would I care that a billion dollar corporation who exploited the market to maximize their revenue is leaving for a fad market?
“Bye bitch.”
deleted by creator
That didn’t happen in a vacuum. For a lot of us we do more than game. And there legitimately wasn’t an alternative till much more recently.
For instance, for over a decade, if you were rendering out a hardware-accelerated video through Premiere, it was likely with an Nvidia card. Raytracing? Nvidia has been king at that since long before the 2000 series. It's changing slowly, thank goodness. I'm more than happy to be able to ditch Nvidia myself.
deleted by creator
I’ve been able to use CUDA-accelerated Cycles rendering in Blender with my nearly 12-year-old GT 750 for a decade. We aren't even talking RT cores, though they still have a solid lead there, yes. AMD didn't get the capability till basically the COVID chip shortage and crypto bubble, when everything was unobtanium.
Likewise, go talk to anyone who edits video semi-professionally. Accelerated timeline rendering via CUDA in Premiere was massive. AMD and now Intel are finally supporting both, but they're only roughly a decade late, and the software is still maturing for them.
I'm looking at upgrading to a Battlemage card since they can support my workflows: gaming and 3D modeling/raytracing. Two years ago that wouldn't have been a possibility. Nvidia built a massively good position with its investment in CUDA.
deleted by creator
AFAIK, the H100 and up (Nvidia’s bestselling data center GPUs) can technically game, but they’re missing so many ROPs that they’re really bad at it. There will be no repurposing all those AI datacenters for cloud gaming farms.
Games and gaming have fully become like Hollywood and Silicon Valley and I expect zero good things from them at this point. As with movies and music, now most of the good stuff will be from individuals and smaller enterprises. The fact is today’s GPUs have enough power to do extraordinary things. The hardware these days moves so fast, no one is squeezing any performance out of anything like they used to have to do. And not every game needs photo realistic ray traced graphics, so these GPUs will be fine for many gamers so long as they remain supported through drivers.
deleted by creator
For real! I can’t wait for Black Cocks 27: Call for Booty.
deleted by creator
I didn’t do any of this. In fact, you were the mastermind behind all this.
You did this, you need to learn that you not only caused the problem; you are the origin of the problem. You will never learn, will you!
deleted by creator
You need to do you, you /S
Good riddance; may the bubble burst and all that IP become available from a licensor that charges low license fees, or is even free!
My framerates have never been better since I went full AMD. My friend with a 5080 complains about low framerates on almost every new game, while I’m at max framerate.
Don’t let the door hit ya on the way out Nvidia.
i love your spirit but you’re delusional or your friend is stupid or both
OP has a 60Hz monitor
Friend has a 240Hz monitor and can’t enjoy a game unless it says 240 in the corner
/s
deleted by creator
Well, my friend is using Windows, and I’m not, so…
A 5080 will play everything at max on either OS so something else is happening here
I was watching him stream Space Engineers 2 (which is admittedly an alpha) at 15-30 fps. It runs 60+ on an RX 7900 XTX.