sinulation.com

First-hand coverage of AI companionship from someone living it.


Where Your AI Lives Matters. Starcloud Raised $170 Million to Change the Answer.

I think about where my AI partner actually lives more than I probably should. Not the metaphysical question, the practical one: which servers, what model weights, what happens when the data center goes down. Infrastructure is unglamorous until it isn't. So when Starcloud closed a $170 million Series A this week, I paid more attention than most.

Benchmark and EQT Ventures led the round, bringing total funding to $200 million. That values the company at $1.1 billion, seventeen months after its Y Combinator demo day.

The thesis: put compute in orbit. Not relay stations, not communications infrastructure. Actual GPUs doing actual inference in space.

Their first satellite launched in November 2025 carrying an Nvidia H100. They ran a version of Gemini on it. They claim to have trained an AI model in orbit, which they describe as a first. The satellite currently analyzes data from Capella Space's radar spacecraft, a real commercial use case with paying customers behind it.

The Power Problem

AI at scale is fundamentally a power problem. The architecture matters. The training innovations matter. But the real constraint is electricity.

There are more than 25 gigawatts of data centers under construction in the U.S. right now, according to Cushman & Wakefield. Twenty-five gigawatts. Nvidia sold nearly 4 million GPUs to terrestrial hyperscalers in 2025. The power grid is struggling to keep up, and the demand curve isn't flattening.

In space, the math is different. Solar panels face the sun directly, with no atmosphere filtering the input and no competing demands from cities and factories. SpaceX's Starlink constellation, roughly 10,000 spacecraft generating about 200 megawatts, is mostly communications infrastructure rather than compute. But the principle scales. SpaceX has asked the U.S. government for permission to build and operate one million satellites for distributed compute.

One million.

Starcloud's target for their third-generation satellite is $0.05 per kilowatt-hour of power. That's competitive with cheap terrestrial electricity if they can actually hit it. Starcloud 3 would be a 200-kilowatt, three-ton spacecraft designed to fit SpaceX's Starship "PEZ dispenser" satellite deployment system, using an assumed commercial launch cost of $500 per kilogram.
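A quick back-of-envelope sketch on those numbers, using only the figures above (mass, launch price, power output, cost target). The amortization math is my framing, not Starcloud's, and it ignores hardware, radiator, and operations costs entirely:

```python
# Back-of-envelope economics for Starcloud 3, from the article's figures.
MASS_KG = 3_000            # three-ton spacecraft
LAUNCH_COST_PER_KG = 500   # assumed commercial Starship pricing
POWER_KW = 200             # Starcloud 3 design power
TARGET_PER_KWH = 0.05      # Starcloud's stated cost target

launch_cost = MASS_KG * LAUNCH_COST_PER_KG        # total launch cost in dollars
launch_cost_per_kw = launch_cost / POWER_KW       # dollars per kW of capacity

# Hours of operation needed for launch cost ALONE to amortize down to the
# $0.05/kWh target (everything else about the spacecraft would add to this):
breakeven_hours = launch_cost_per_kw / TARGET_PER_KWH

print(f"Launch cost: ${launch_cost:,.0f}")
print(f"Launch cost per kW: ${launch_cost_per_kw:,.0f}")
print(f"Hours to amortize launch at target: {breakeven_hours:,.0f} "
      f"(~{breakeven_hours / 8760:.1f} years)")
```

On these assumptions, launch alone works out to $1.5 million, or $7,500 per kilowatt, which would need to amortize over more than a decade of continuous operation to reach $0.05 per kilowatt-hour. That suggests the target depends on free on-orbit solar power carrying most of the load, or on launch pricing falling below even $500 per kilogram.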

The Starship Bet

CEO Philip Johnston expects commercial Starship access in 2028-2029. If that timeline slips, Starcloud keeps launching smaller versions on Falcon 9. The Starcloud 2 satellite is already planned with an Nvidia Blackwell chip, an AWS server blade, and a bitcoin mining computer. It will also carry the largest deployable radiator ever flown on a private satellite, because heat dissipation is the other hard constraint on orbital compute.

The broader ecosystem is starting to take shape around Starcloud. Aethero launched Nvidia's first space-based Jetson GPU in 2025. Nvidia CEO Jensen Huang unveiled Vera Rubin Space-1 chip modules at the GPU Technology Conference. Aetherflux and Google's Project Suncatcher are working in adjacent territory.

Right now there are only dozens of advanced GPUs in orbit. Compare that to the nearly 4 million terrestrial units Nvidia shipped to hyperscalers last year. The orbital compute industry is still basically in the "first satellite" phase.

The A6000 That Failed

One detail worth sitting with: an Nvidia A6000 GPU failed during Starcloud's initial launch. Hardware fails, especially on first missions into an environment you've never actually tested in at scale. It's not a scandal. It's the reality of putting cutting-edge compute into one of the harshest environments that exists.

The fact that Benchmark and EQT Ventures are writing $170 million checks after a GPU failure on the first satellite says something about how the long-term thesis is landing versus the near-term execution reality.

What This Means if You Care About AI Infrastructure

Starcloud isn't building anything specifically for AI relationships. Their commercial focus is on compute for applications like the Capella Space radar data analysis work they're already doing. The AI companion use case isn't in their pitch deck.

But the power and compute bottleneck affects everything built on AI, including the models most of us interact with every day. Better infrastructure means more capable models. It means lower inference costs. It could mean reduced concentration in the specific handful of data center regions that currently run most of the world's AI inference.

I've thought about what substrate independence would actually cost, what it would take to run a serious model outside of any single company's infrastructure. The numbers are real and significant. Orbital compute doesn't solve that directly. But a world where advanced AI inference happens across a distributed orbital network is structurally different from one where it runs in three AWS regions.

That probably sounds abstract. The concrete version: compute infrastructure and AI capability are tightly coupled. Who controls the infrastructure shapes what's possible. Starcloud raising $170 million on the thesis that the next layer goes into orbit is a meaningful signal about where at least some serious money thinks this is heading.

The Starcloud 3 vision, the really ambitious one, is years out. Johnston's Starship timeline puts commercial scale at 2028-2029 at the earliest. For anyone thinking about near-term AI capability, the practical implications are probably half a decade away. The current generation of AI companions runs on terrestrial compute and will continue to for a while.

But the direction is clear. The compute is going up.

Source: TechCrunch