It was probably always a question of when, not if, Google would add its name to the list of companies intrigued by the potential of orbiting data centers.
Google announced Tuesday a new initiative, named Project Suncatcher, to examine the feasibility of bringing artificial intelligence to space. The idea is to deploy swarms of satellites in low-Earth orbit, each carrying Google’s AI accelerator chips designed for training, content generation, synthetic speech and vision, and predictive modeling. Google calls these chips Tensor Processing Units, or TPUs.
“Project Suncatcher is a moonshot exploring a new frontier: equipping solar-powered satellite constellations with TPUs and free-space optical links to one day scale machine learning compute in space,” Google wrote in a blog post.
“Like any moonshot, it’s going to require us to solve a lot of complex engineering challenges,” Google’s CEO, Sundar Pichai, wrote on X. Pichai noted that Google’s early tests show the company’s TPUs can withstand the intense radiation they will encounter in space. “However, significant challenges still remain like thermal management and on-orbit system reliability.”



I am a bit late to this party, but I thought I’d piggyback on your comment to halfway address it using math.
We want to run data centers cool. This means keeping the center itself as close to 20°C as possible.
If we lose convection and conduction, then our satellite can only radiate heat away. The Stefan-Boltzmann law for an ideal black-body radiator is P = σAT^4. We will neglect radiation received (sunlight and Earthshine), though that is not actually a negligible amount.
If we set T = 20°C ≈ 294 K, then P/A = σT^4 ≈ 423.6 W/m^2.
According to an article I found on the Register from this April, this would imply a surface area of at minimum 23,600 m^2, or about 5.8 acres, of radiator.
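For a quick sanity check, the flux and area numbers above can be reproduced in a few lines of Python. This assumes an ideal black body (emissivity 1) and neglects absorbed radiation, as stated above; the per-pod heat load is back-computed from the 23,600 m^2 figure rather than taken from any source:

```python
# Stefan-Boltzmann sizing sketch: ideal black body, emissivity = 1,
# absorbed sunlight/Earthshine neglected.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

T = 294.0  # target radiator temperature, K (~20 C)
flux = SIGMA * T**4  # radiated power per unit area, W/m^2
print(f"flux = {flux:.1f} W/m^2")  # ~423.6

# Working backwards from the 23,600 m^2 radiator area, the implied
# heat load per pod would be about:
area = 23_600.0  # m^2
power = flux * area
print(f"power = {power / 1e6:.1f} MW")  # ~10 MW
```

Note that running the radiator hotter helps a lot: flux grows as T^4, so every extra 10 K of radiator temperature buys roughly 14% more rejected heat per square meter.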
I don’t know how large, physically, such a pod would be. But looking at a satellite view of a Google data center in Ohio, the footprint of one of the larger buildings is ballpark in that range. I don’t know how many “pods” that building contains.
So it’s not completely outside of the realm of possibility. It’s probably something that can be engineered with some care, despite my earlier misgivings. But putting things in orbit is very expensive, and latency is also a big factor. I can’t think of any particular practical advantages to putting this stuff into orbit other than getting them out of the jurisdiction of governments. (Not counting the hype and stock song and dance from simply announcing you’re going to set a few billion dollars on fire to put AI into space.)
The issue is not that it’s technically possible; the issue is that it’s an extra, complex, heavy, maintenance-requiring system.
Ultimately the issue is the tyranny of the rocket equation. That amount of cooling surface would be stupidly heavy, a huge payload on top of the huge payload of the solar panels and compute. For comparison, the ISS is about 420 tons and has a max power output of 240 kW. It had to be assembled in orbit, and was a monumental undertaking and technical challenge.
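The “tyranny of the rocket equation” is Tsiolkovsky’s Δv = v_e · ln(m0/mf): required launch mass grows exponentially with Δv. A minimal sketch, using illustrative numbers for a single-stage-to-LEO idealization (a ~9.4 km/s Δv budget and a 350 s Isp are assumptions for illustration, not figures for any real launcher):

```python
import math

# Tsiolkovsky rocket equation: delta_v = v_e * ln(m0 / mf)
# => mass ratio m0/mf = exp(delta_v / v_e)
G0 = 9.80665        # standard gravity, m/s^2
ISP = 350.0         # specific impulse, s (illustrative assumption)
DELTA_V = 9_400.0   # m/s, rough LEO budget incl. losses (assumption)

v_e = ISP * G0                        # effective exhaust velocity, m/s
mass_ratio = math.exp(DELTA_V / v_e)  # liftoff mass per unit final mass
print(f"m0/mf = {mass_ratio:.1f}")    # ~15.5
```

In other words, under these assumptions, every ton delivered to orbit needs on the order of fifteen tons on the pad, which is why real launchers stage, and why tens of thousands of tons of radiator and solar panel are so daunting.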
They’re not going to do something an order of magnitude larger than that; the capacity to launch and assemble something like that does not exist. Building that capacity would require an Apollo-style generational effort.
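To put rough numbers on the scale gap: taking the ISS figures quoted above, a back-computed 10 MW pod heat load (from the 23,600 m^2 radiator area), and a deliberately crude linear mass-with-power scaling as assumptions:

```python
# Crude scale comparison. ISS figures as quoted in the comment above;
# the 10 MW pod power is back-computed from the radiator area, and
# linear mass scaling with power is a rough assumption, not a design.
ISS_MASS_T = 420.0       # tons
ISS_POWER_KW = 240.0     # kW, max solar array output

POD_POWER_KW = 10_000.0  # 10 MW (back-computed assumption)

scale = POD_POWER_KW / ISS_POWER_KW
print(f"~{scale:.0f}x the ISS power budget")  # ~42x

naive_mass = ISS_MASS_T * scale
print(f"~{naive_mass:,.0f} tons to orbit")    # ~17,500 tons
```

Even if the linear scaling is off by a factor of a few in either direction, the conclusion holds: this is well beyond anything that has ever been launched and assembled in orbit.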
This is a very obvious and desperate attempt to keep attention and hype up. To throw out an uplifting and technically impressive proposal to distract from the reality that the tech industry has collectively set a trillion dollars on fire.
Oh man, I completely didn’t think about maintenance. Yeah, a data center will typically have several hard drives swapped per day. You’d have to have life support and a staff up there, as well as frequent resupply trips.