Stargate Attracts More Funding for Texas-Sized AI Factory


The Stargate data center in Abilene, Texas, is seldom out of the news.

OpenAI and Oracle have already committed to investing $500 billion into the project. Now, its developer, Lancium, has just secured $600 million to move forward with its first 1.2 GW of capacity.

Lancium’s role as a developer begins and ends with securing the land and providing the power infrastructure. Crusoe will build the facility, while Oracle and OpenAI will equip it with internal IT equipment, such as tens of thousands of Nvidia graphics processing units (GPUs).

Stargate begins in Abilene, but the plan is to expand the concept to multiple AI factories across the US. The first campus is being built on 1,400 acres of land and will eventually consist of eight buildings.

The first two buildings went live in October 2025. Each spans 980,000 square feet and houses 200 MW of capacity. Inside, they are packed with up to 50,000 Nvidia GB200 GPUs in NVL72 rack configurations. They are designed to run on a single, integrated network fabric, tailoring operations to the training of large language models (LLMs) and the billions of inference workloads generated by user queries.

How fast the Stargate campus is rising

Work continues at a fast pace to complete the Abilene Stargate.

The remaining six mega-buildings are scheduled for completion by mid-2026. By that time, campus capacity will soar to 1.2 GW across a total of 4 million square feet. Add to that another 10 GW or more of AI factory construction that Crusoe has in its development pipeline.

“Our Clean Campus in Abilene represents the future of compute, designed for AI and hyperscale,” Michael McNamara, CEO and cofounder of Lancium, told TechRepublic.

This rate of construction is unheard of in the industry. Yet it isn’t nearly fast enough for the demands of the AI data center market. McNamara said that Lancium can currently provide about 1 GW of AI factory space per year. It hopes to increase that to 1 GW per quarter in the near future.

Even that falls short of what OpenAI and other hyperscalers want: they are pushing for a GW a week of new data center construction.

“A GW per week is possible, but it will take collaboration with all the key stakeholders,” said McNamara.

Why the power grid is struggling to keep up

Construction speed is one aspect of satisfying AI demand. Energy availability is quite another.

Lancium is wrestling with severe transmission and power infrastructure constraints. To operate data centers at AI scale, a high level of innovation is required to ensure grid reliability across all workloads. AI traffic can soar in milliseconds and fall away to nothing just as quickly. This can cause severe problems on the traditional grid, which is designed for more gradual increments of added or subtracted power.

Thus, Lancium works on multiple fronts. It recently built two massive substations and is exploring other solutions to bring reliable, high-quality power to AI factories. These include on-campus solar and battery storage, as well as a variety of grid sources, including an interconnect to a wind farm.

The company is also working on power orchestration applications and higher-voltage energy sources.

“The existing transmission system is at capacity in many areas; we need a more holistic energy system design that encompasses new generation and transmission at AI scale – greater expansion than we have seen in 70 years,” said McNamara. “Where we need to go is higher voltage levels: if you double system voltage, you can get six times the power.”

Check out our deeper dive into the project’s origins and partners — including OpenAI, Foxconn, and Lancium — in our full Stargate coverage.
