Can Orbital Data Centers Solve AI’s Power Crisis?
What’s the difference between a stupid idea and a brilliant one? Sometimes, it just comes down to resources. Practically unlimited funds, like limitless thrust, can get even a mad idea off the ground.
And so it might be for the concept of putting AI data centers in orbit. In a rare moment of unalloyed agreement, some of the richest and most powerful men in technology are staunchly backing the idea. The group includes Elon Musk, Jeff Bezos, Jensen Huang, Sam Altman, and Google CEO Sundar Pichai. In all likelihood, hundreds of people are now working on the concept of space data centers at the firms directly or indirectly controlled by these men—SpaceX, Starlink, Tesla, Amazon, Blue Origin, Nvidia, OpenAI, and Google, among others.
So how much would it cost to start training large language models in space? Probably the best public accounting is one created by aerospace engineer Andrew McCalip. McCalip’s exhaustive, detailed analysis includes interactive sliders that let you compare costs for space-based and terrestrial data centers in the range of 1 to 100 gigawatts. One-gigawatt data centers are being built now on terra firma, and Meta has announced plans for a 5-GW facility, with anticipated completion sometime after 2030.
In an interview, McCalip says his initial rough calculations a few years ago suggested that data centers in space would cost in the range of 7 to 10 times more, per gigawatt of capacity, than their terrestrial counterparts. “It just wasn’t practical,” he says. “Not even close.” But when Elon Musk began publicly backing the idea, McCalip revisited the numbers using publicly available information about Starlink’s and Tesla’s technologies and capabilities.
That changed the picture substantially. The figures in his online analysis assume an orbital network of data-center satellites that borrows heavily from Musk’s tech treasure chest—“essentially…you just start putting some radiation-resistant ASIC chips on the Starlink fleet and you start growing edge capacity organically on the Starlink fleet,” McCalip says. The network would rely on the kind of watt-efficient GPU architecture used in Teslas for self-driving, he adds. “You start dropping those onto the backs of Starlinks. You can slowly grow this out, and this would be approximately the performance that you would get.”
Bottom line: with some solid but not necessarily heroic engineering, the cost of an orbital data center could be as low as three times that of a comparable terrestrial one. That differential, while still high, at least nudges the concept out of the instantly dismissible category. “I have my particular views, but I want the data to speak for itself,” McCalip says.
For this illustration, we picked a configuration with an aggregate 1 GW of capacity. The network would consist of some 4,300 satellites, each of which would be outfitted with a roughly 1,000-square-meter solar array that generates 250 kilowatts. The data center on that satellite, powered by the array, might have at least 175 GPUs; McCalip notes that a popular GPU rack, Nvidia’s NVL72, has 72 GPUs and requires 120 to 140 kW.
The total cost of the satellite network would be around US $51 billion, including launch and five years of operational expenses; a comparable terrestrial system would cost about $16 billion over the same period.
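The figures above are easy to sanity-check with back-of-envelope arithmetic. The sketch below uses only the numbers quoted in the article; the 130-kW midpoint for the NVL72 rack is our assumption, and the gap between the NVL72-derived GPU count and the article's "at least 175 GPUs" is consistent with McCalip's point about using more watt-efficient, Tesla-style chips.

```python
# Back-of-envelope check of the orbital data-center figures quoted above.
# All inputs come from the article; the 130 kW rack figure is an assumed
# midpoint of the quoted 120-140 kW range.

TARGET_POWER_W = 1e9      # 1 GW aggregate network capacity
SAT_ARRAY_W = 250e3       # 250 kW per satellite solar array

# Satellites needed at 100% utilization; the article's ~4,300 figure
# presumably includes spares and margin.
ideal_sats = TARGET_POWER_W / SAT_ARRAY_W
print(f"Satellites at full utilization: {ideal_sats:.0f}")   # 4000

# Nvidia NVL72 density: 72 GPUs per ~130 kW rack.
watts_per_gpu = 130e3 / 72
nvl72_gpus = int(SAT_ARRAY_W / watts_per_gpu)
print(f"GPUs per satellite at NVL72 density: {nvl72_gpus}")  # 138
# The article's "at least 175 GPUs" implies ~1.4 kW per GPU, i.e. a
# more watt-efficient architecture than the NVL72.

# Five-year cost comparison from the article, in billions of dollars.
orbital_b, terrestrial_b = 51, 16
print(f"Orbital/terrestrial cost ratio: {orbital_b / terrestrial_b:.1f}x")
```

The ratio works out to about 3.2, matching the "as low as three times" figure cited earlier.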
Stupid? Not stupid? You decide.