Data centers in space: An astrophysicist’s look at a new infrastructure frontier
The idea sounds like science fiction: take the racks of GPUs that train and run modern AI, lift them off Earth, and operate them in orbit – or even on the Moon. Yet in the last year, the concept has moved from late-night speculation into mainstream conversation, with proposals ranging from experimental “compute satellites” to more ambitious visions of orbital, solar-powered AI infrastructure.
As an astrophysicist, I’m instinctively drawn to first principles: energy must be conserved, matter must be cooled, hardware must survive the space environment, and every new infrastructure layer carries externalities – especially when it occupies a shared commons like near-Earth space. If we approach “space data centers” scientifically, the question is not whether the idea is inherently good or bad, but under what conditions it becomes technically feasible, economically sensible, and socially acceptable.

What do people mean by “a data center in space”?
The phrase bundles several distinct architectures:
- Edge computing in orbit: small-to-moderate compute payloads that process data near where it is collected – on satellites or stations – so less raw data needs to be downlinked. This is already happening in limited forms, from AI-enabled Earth-observation demonstrations to in-space computing experiments on the International Space Station.
- Distributed “compute constellations”: many satellites operating as a coordinated cluster, sharing compute tasks and moving results (not power) back to Earth via high-bandwidth links. Google’s Project Suncatcher is a prominent example of this design space, exploring solar-powered satellite constellations equipped with AI accelerators and optical communications.
- Off-planet storage for resilience: not so much “cloud compute,” but high-assurance backup and disaster recovery located off Earth – often framed as lunar storage. Lonestar’s efforts to fly data storage payloads to the Moon fit here.
Each of these has different physics constraints, cost drivers, and policy implications – so any serious discussion has to specify which version we mean.
Why is this resurfacing now?
Two trends are colliding. First, compute demand – especially for AI – has become an infrastructure problem: power availability, water use for cooling, siting, grid interconnections, and permitting are now strategic constraints for terrestrial data centers. In the US, data centers already consume roughly 4% of annual electricity, and some projections suggest that figure could double or even triple by 2028. Large campuses can also require millions of gallons of water per day for cooling. This has pushed technologists to explore “non-traditional” environments, including space, where sunlight is available nearly continuously and water-based cooling is unnecessary in principle.
Second, the space sector is changing: launch cadence is rising, spacecraft buses are becoming more standardized, and the same mass-produced electronics that drive commercial computing are increasingly being adapted, carefully, for space. Lower launch costs and mass production could eventually make large orbital infrastructure conceivable.
A third driver is increasingly political: digital sovereignty. Data centers are not just industrial assets; they anchor legal jurisdiction, security assumptions, and strategic dependence. It is therefore notable that parts of Europe have begun exploring whether off-planet infrastructure could serve resilience and autonomy goals – less as a replacement for terrestrial cloud, more as a complementary layer for specific sovereign workloads. The European Commission–supported ASCEND feasibility study, led by Thales Alenia Space, is one concrete example framing orbital data centers through the dual lens of net-zero ambitions and data sovereignty, rather than pure cost minimization. Whether or not these concepts mature beyond studies, they signal that “space compute” is being discussed not only as engineering, but as infrastructure strategy.
An additional, quieter force is social friction: communities are increasingly pushing back on terrestrial data centers over land, water, and perceived public value – so ‘move it off-planet’ is also about escaping local conflict.
The physicist’s reality check: space is not a freezer
A common intuition is that “space is cold,” so cooling servers should be easy. The physics is subtler: vacuum does not conduct heat, and without air there is no convection, which leaves radiation as the only way to export energy. A useful analogy is a thermos: the vacuum layer keeps heat in precisely because it suppresses conduction and convection. An orbital “data center” faces the same physics in reverse – waste heat must be transported to radiator surfaces and emitted as infrared radiation. That means thermal management becomes a dominant design constraint.
This is not hypothetical. Spacecraft thermal engineers have long treated radiators as critical hardware with specialized surface properties, deployment constraints, and operational considerations. For compute-intensive payloads, the radiator area, mass, and orientation requirements can scale quickly – and the “easy cooling” narrative often collapses under honest heat-budget accounting.
The result is a classic trade space: yes, space offers an effectively infinite heat sink, but the rate at which you can dump heat is limited by radiator capability, spacecraft geometry, and pointing constraints. That tension sits at the center of whether orbital data centers can ever reach meaningful scale.
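To make the trade concrete, here is a back-of-envelope radiator sizing sketch using the Stefan-Boltzmann law. The heat load, radiator temperature, and emissivity below are illustrative assumptions, not a real spacecraft design, and the estimate ignores sunlight and Earth-shine absorbed by the radiator, which in practice makes the required area larger.

```python
# Illustrative radiator sizing via the Stefan-Boltzmann law.
# All parameter values are assumptions for an order-of-magnitude
# estimate, not a flight design.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(heat_load_w: float,
                     radiator_temp_k: float = 300.0,
                     emissivity: float = 0.9) -> float:
    """Area needed to radiate a given heat load to deep space."""
    flux = emissivity * SIGMA * radiator_temp_k ** 4  # W per m^2
    return heat_load_w / flux

# A 1 MW compute payload at a ~300 K radiator temperature:
area = radiator_area_m2(1_000_000)
print(f"{area:,.0f} m^2")  # roughly 2,400 m^2
```

Roughly 2,400 m² of radiator per megawatt of waste heat – before structure, deployment, and pointing constraints – is a useful anchor for why thermal control dominates the design.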
Potential advantages
1) Power: abundant solar energy – at a price
In sun-synchronous orbits (especially “dawn-dusk” style), solar exposure can be nearly continuous, reducing or eliminating the need for large batteries. That is attractive for steady compute loads. But the benefit is inseparable from the cost: you must launch and maintain the power generation hardware, and you inherit all the failure modes and operational complexity of a spacecraft.
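For scale, a similar back-of-envelope sketch for the power side. The solar constant is physical; the cell efficiency and system margin are illustrative assumptions, not vendor specifications.

```python
# Rough solar array sizing for a continuous load in full sunlight,
# as in a dawn-dusk sun-synchronous orbit. Efficiency and margin
# figures are assumptions for illustration.

SOLAR_CONSTANT = 1361.0  # W / m^2 above the atmosphere

def array_area_m2(electrical_load_w: float,
                  cell_efficiency: float = 0.30,
                  system_margin: float = 0.85) -> float:
    """Panel area needed to supply a continuous electrical load."""
    usable_flux = SOLAR_CONSTANT * cell_efficiency * system_margin
    return electrical_load_w / usable_flux

print(f"{array_area_m2(1_000_000):,.0f} m^2")  # ~2,900 m^2 for 1 MW
```

Notice that under these assumptions the solar array and the radiator for the same megawatt are comparable in area: the power problem and the thermal problem scale together.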
2) Water and local environmental footprint on Earth
Many terrestrial data centers face scrutiny not only for electricity demand but also for water usage (directly or indirectly) and local heat rejection. Moving compute off-planet could, in principle, reduce local terrestrial impacts – though rocket emissions and manufacturing impacts don’t disappear; they shift.
3) Proximity to space users: “compute where the data is”
For Earth observation, communications, certain defense or disaster-response applications, and space science, processing in orbit can reduce downlink burdens and deliver faster insights. This is already motivating practical demonstrations of AI onboard satellites or telescopes.
4) Resilience and continuity
The lunar-storage narrative is essentially about disaster recovery: a copy of critical data in a physically separate domain, less exposed to terrestrial disasters or geopolitical disruptions. Early demonstrations have aimed to test feasibility rather than scale.
Key challenges and externalities
1) Thermal control at scale
As noted, heat rejection is the hardest non-negotiable. The “cloud” metaphor can mislead: a megawatt-scale compute facility in space is not just a bigger satellite. It is a thermal machine that must reject a continuous heat load through radiators – while surviving eclipses, attitude constraints, and hardware degradation.
2) Radiation, reliability, and maintenance
Space is harsh on electronics. Even in low Earth orbit, charged particles and radiation effects can cause bit flips and component degradation. A vivid historical example: an HPE Spaceborne Computer experiment on the ISS succeeded overall, but radiation damaged a significant fraction of its solid-state drives. Scaling orbital computing would demand robust fault tolerance, redundancy, and possibly on-orbit servicing – each adding cost and complexity. On Earth, failures are routine and repairable; in orbit, repair is the exception rather than the plan. That shifts the design philosophy: redundancy, graceful degradation, and end-of-life disposal become core economics.
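One classic fault-tolerance pattern for radiation-induced bit flips is triple modular redundancy (TMR): run a computation three times and majority-vote the results. The sketch below is a toy software illustration of the voting idea, not flight-qualified code.

```python
# Minimal triple-modular-redundancy (TMR) voter: run a computation
# three times and return the majority result. Purely illustrative.

from collections import Counter
from typing import Callable, TypeVar

T = TypeVar("T")

def tmr(compute: Callable[[], T]) -> T:
    """Run `compute` three times and return the majority result."""
    results = [compute() for _ in range(3)]
    value, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority -- all three replicas disagree")
    return value

# Example: one replica's output is corrupted by a bit flip.
replies = iter([42, 42, 43])          # third run returns a flipped value
print(tmr(lambda: next(replies)))     # majority vote still yields 42
```

TMR triples the compute (and heat) cost of every protected operation – a reminder that in orbit, reliability is bought with the very resources that are scarcest.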
3) Latency and bandwidth
For some workloads (Earth-observation triage, onboard autonomy), orbit is an advantage. For many mainstream cloud tasks, it could be more challenging: users are on Earth, and physics still enforces finite-speed communications. High-bandwidth optical links can help, but they create tight requirements on pointing, coordination, weather-independent ground stations, and network architecture.
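The physics floor on latency is easy to quantify: the round-trip light-travel time for a satellite directly overhead. Real systems add queuing, processing, and routing delays on top; the altitudes below are typical values, not those of any specific constellation.

```python
# Minimum ground<->satellite round-trip time set by the speed of light,
# for a satellite directly overhead. Real latency is strictly higher.

C = 299_792_458.0  # speed of light in vacuum, m/s

def min_round_trip_ms(altitude_km: float) -> float:
    """Light-travel-time floor on a ground-satellite round trip."""
    return 2 * altitude_km * 1000 / C * 1000

print(f"LEO (550 km):    {min_round_trip_ms(550):6.2f} ms")     # ~3.7 ms
print(f"GEO (35,786 km): {min_round_trip_ms(35_786):6.2f} ms")  # ~239 ms
```

The LEO floor is small compared with typical internet latencies, which is why latency alone does not rule out orbital compute – the harder constraints are bandwidth, pointing, and ground-station availability.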
4) Orbital debris
Orbital congestion adds its own layer of risk: any large constellation increases the collision-avoidance burden and raises the long-term sustainability stakes for near-Earth space.
5) The stratospheric budget: from water to ozone
While space-based compute avoids terrestrial water and land constraints, it shifts the footprint to the stratosphere. As of 2026, scientific attention has intensified on black carbon (soot) emissions from rocket launches. At high launch cadences, these particles could slow ozone-layer recovery and warm the stratosphere. This could become the next regulatory frontier: credible ‘Space ESG’ may end up requiring operators to treat stratospheric impacts as a managed budget, alongside debris and spectrum.
6) Impacts on astronomy and the night sky
As an astrophysicist, I cannot ignore the “observational commons”. Large satellite constellations in LEO (low Earth orbit) can reflect sunlight into optical telescopes and introduce radio-frequency interference that affects radio astronomy. The IAU and other astronomy organizations have documented these concerns and called for mitigation. Compute constellations would have to meet the same “dark and quiet skies” expectations as communications constellations.
Economics: the hidden hinge
Ultimately, space-based data centers live or die on economics. Launch costs must fall dramatically for large-scale orbital compute to compete with terrestrial alternatives. One estimate discussed in Scientific American, reflecting Google’s Suncatcher framing, suggests liftoff costs would need to drop below roughly $200/kg by the mid-2030s for the vision to make sense. Whether that happens – and whether the full system costs (thermal, servicing, network, debris mitigation, insurance) follow – remains uncertain.
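A toy sensitivity check shows why the launch-cost figure is the hinge. Every number below (mass per kilowatt of compute-plus-power hardware, hardware lifetime) is an illustrative assumption; only the structure of the comparison – launch cost amortized over delivered kilowatt-hours – is the point.

```python
# Toy model: spread launch cost over the energy a payload delivers
# during its lifetime. Mass-per-kW and lifetime are assumptions;
# hardware, operations, and networking costs are ignored entirely,
# so this is a hard lower bound on the orbital side.

def orbital_energy_cost_per_kwh(launch_cost_per_kg: float,
                                mass_kg_per_kw: float = 10.0,
                                lifetime_years: float = 5.0) -> float:
    """Launch-only cost per kWh of power delivered to the payload."""
    hours = lifetime_years * 365 * 24
    return launch_cost_per_kg * mass_kg_per_kw / hours

for price in (1500, 200):
    print(f"${price}/kg -> ${orbital_energy_cost_per_kwh(price):.3f}/kWh")
# At ~$1500/kg the launch share alone exceeds typical terrestrial
# industrial power prices (~$0.05-0.10/kWh); near $200/kg it starts
# to enter the same range.
```

Under these assumptions the launch share at roughly today’s prices dwarfs terrestrial electricity costs, while near $200/kg it becomes comparable – consistent with the threshold discussed above, though the full system costs would shift the break-even point.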
2026 signals: from prototypes to pipelines
By 2026, “data centers in space” are no longer a single speculative headline but a growing set of experiments – some already flown, many announced – testing whether orbital computing can mature from edge demonstrations into a genuine infrastructure layer. The physics remains unforgiving: gravity sets the choreography, radiation sets the failure budget, and radiative heat rejection sets the ceiling on sustained power. But the portfolio of prototypes is widening.
Several early demonstrations are explicitly framed as orbital edge compute. In late 2025, Starcloud’s Starcloud-1 mission drew attention by placing an NVIDIA H100-class GPU in low Earth orbit to test high-performance computing in the space environment – an experiment that underscores both the promise (processing data closer to the source) and the constraints (radiation exposure, thermal design, limited servicing).
On the “moonshot” end of the spectrum, Google has published research describing Project Suncatcher, a concept for compact constellations equipped with TPUs and free-space optical links, ideally in a dawn-dusk sun-synchronous orbit to maximize near-continuous solar power. Google has characterized this as early research toward eventual in-space scaling, with public comments pointing to initial space tests around 2027.
Aetherflux, originally focused on space-based solar power, has also announced an orbital “data center node” concept (Galactic Brain) with a target of early 2027 for a first commercial node, following a planned demonstration satellite in 2026.
Internationally, China has already launched an initial batch of 12 satellites described as the first step toward a much larger “Three-Body Computing Constellation,” emphasizing in-orbit processing and inter-satellite laser links – an approach that prioritizes bandwidth relief and autonomy, even before any discussion of true “data center scale.”
Meanwhile, the connective tissue – high-throughput links between orbit and Earth – continues to evolve. On January 21, 2026, Reuters reported Blue Origin’s plan for the “TeraWave” enterprise communications constellation, explicitly targeting data-center-grade connectivity and reflecting how orbital networking and orbital computing are increasingly discussed in the same breath. Just this week, Elon Musk announced the SpaceX–xAI deal. Press coverage has framed it as a signal of growing interest in integrating AI with space infrastructure—including, potentially, in-orbit computing and the investment required to pursue it.
Economically, the macro driver is unmistakable: forecasts now speak in “supercycle” terms for terrestrial data center buildout – trillions in investment by 2030 – because power, land, and permitting are becoming binding constraints. That does not make orbit a shortcut; it makes orbit a hypothesis worth testing.
Finally, the externalities are becoming more formalized. UN COPUOS has agreed to discuss “Dark and Quiet Skies” annually through 2029, acknowledging the observational commons as constellations grow. And atmospheric studies continue to caution that rocket black carbon injected into the stratosphere can produce measurable warming and ozone impacts under plausible growth scenarios – an effect that scales with launch cadence.
Space-based data centers sit at a fascinating intersection of physics, engineering, economics, and governance. From a scientific standpoint, the concept is a hypothesis to explore: that the constraints limiting terrestrial compute – energy, cooling, land, permitting – will become so binding, and launch plus operations so efficient and cheap, that off-planet infrastructure becomes part of the global computing stack.
The honest answer today is that we are still mapping the trade space. The near-term case is strongest for edge compute in orbit: processing certain data products where the spacecraft already is. The most ambitious visions, orbital AI at scale, will depend on breakthroughs (or at least step-changes) in thermal control, launch economics, on-orbit operations, and responsible stewardship of near-Earth space as a shared environment.
As with many frontiers, the crucial question is not just “Can we build it?” but “Can we build it in a way that is robust, sustainable, and compatible with the scientific and environmental commons we all rely on?”
Disclaimer: This essay reflects the author’s personal views and draws only on publicly available information; it does not represent the views of the employer, the U.S. government, or any affiliated organization.