TL;DR
- Policy Blueprint: OpenAI released a 13-page document proposing robot taxes, a national wealth fund, and subsidized four-day workweeks to prepare for AI-driven automation.
- Economic Proposals: The company wants to shift the tax burden from labor to capital and distribute AI-generated prosperity through a fund modeled on Alaska’s Permanent Fund.
- Safety Measures: OpenAI calls for containment playbooks for self-replicating AI, automatic safety net triggers tied to unemployment metrics, and expanded grid infrastructure.
- Skepticism: Critics question the timing of the proposals given OpenAI’s pending IPO at an $852 billion valuation and its recent conversion to a for-profit entity.
OpenAI released a 13-page policy document on April 6 urging the U.S. government to tax AI-driven profits, create a national wealth fund, and subsidize a four-day workweek, laying out its vision for how the country should prepare for widespread automation.
Titled Industrial Policy for the Intelligence Age, the document proposes shifting the tax burden from labor to capital, creating a national wealth fund modeled on Alaska’s Permanent Fund, and incentivizing 32-hour workweeks funded by taxes on AI-driven profits. OpenAI prepared the blueprint as it approaches an IPO at an estimated $852 billion valuation, raising questions about whether the company building superintelligence should also be designing the safety net for its disruption.
Economic Redistribution: Taxes, Wealth Funds, and Shorter Workweeks
At its core, OpenAI’s blueprint calls for shifting the tax burden from labor to capital. As AI automates more work, the company argues, corporate profits and capital gains will expand while payroll tax revenue shrinks, threatening funding for Social Security, Medicaid, SNAP, and housing assistance.
“As AI reshapes work and production, the composition of economic activity may shift—expanding corporate profits and capital gains while potentially reducing reliance on labor income and payroll taxes. This could erode the tax base that funds core programs like Social Security, Medicaid, SNAP, and housing assistance—putting them at risk.”
OpenAI, Industrial Policy for the Intelligence Age
To address that gap, OpenAI floats a robot tax, a concept Microsoft co-founder Bill Gates first proposed in 2017, under which automated systems would be taxed at rates comparable to the human workers they replace. OpenAI does not specify a target corporate tax rate in the document; instead, its proposals suggest higher taxes on corporate income, AI-driven returns, or capital gains while leaving the specifics to policymakers.
However, that vagueness is notable given that President Donald Trump cut the corporate tax rate to 21% from 35% during his first term and has shown little appetite for raising it.
Beyond taxation, the blueprint proposes a public wealth fund that would give every citizen a stake in AI-driven economic growth, with returns distributed directly to the public. Modeled on Alaska’s Permanent Fund, the mechanism is designed to ensure that AI-generated prosperity does not concentrate among shareholders and executives alone.
Furthermore, OpenAI envisions the fund investing in long-term assets tied to the AI economy, creating what amounts to a national dividend from automation.
Alongside the wealth fund, OpenAI recommends incentivizing employers to pilot 32-hour workweeks without reducing pay, tied to productivity gains from AI adoption. Workers would also benefit from portable benefit accounts that follow them across jobs, though those would still depend on employer or platform contributions. Additional proposals include boosting retirement matches, covering a larger share of healthcare costs, and subsidizing child or eldercare.
For an industry that has largely resisted calls for wealth redistribution, the proposals represent an unusual step. OpenAI is effectively arguing that companies profiting heavily from automation should fund the transition costs for displaced workers, a position that puts it at odds with much of Silicon Valley’s libertarian-leaning policy establishment.
Safety Measures and Infrastructure
On the safety front, the document calls for new safety systems, new oversight bodies, and targeted safeguards against high-risk uses including cyberattacks and biological threats. OpenAI proposes developing containment playbooks specifically for self-replicating AI systems, an acknowledgment that frontier models may eventually pose risks that existing regulatory frameworks cannot address.
In addition, the blueprint proposes automatic safety net triggers that would expand social programs based on displacement metrics like rising unemployment, rather than waiting for Congress to act. Under this model, benefits would scale up temporarily when economic indicators cross predefined thresholds, then wind down as conditions stabilize.
“Some will be good. Some will be bad. But we do feel a sense of urgency. And we want to see the debate of these issues really start to happen with seriousness.”
Sam Altman, CEO of OpenAI (via Axios)
Meanwhile, infrastructure features prominently in the blueprint as well. OpenAI proposes strengthening the electric grid to meet AI’s growing power demands, with subsidies, tax credits, or equity stakes to accelerate buildouts.
Its Project Stargate initiative, a large-scale investment in U.S. AI infrastructure announced in January 2025, underscores how central energy capacity has become to the company’s business model. By emphasizing both grid expansion and safety containment, OpenAI positions itself as simultaneously advocating for faster AI deployment and stronger guardrails, a balancing act that will face scrutiny as proposals move through the policy process.
To continue the policy conversation, OpenAI plans to host discussions at its Workshop in Washington, D.C., opening in May 2026. Separately, the company announced fellowships and research grants of up to $100,000, plus up to $1 million in API credits, for projects that build on its policy ideas.
Notably, President Trump signed an executive order in December 2025 limiting state-level AI regulations in the name of national and economic security, creating a federal policy vacuum that OpenAI’s blueprint now seeks to fill.
Critics Question Timing and Sincerity
Skepticism about the proposals centers on OpenAI’s own corporate trajectory. In 2025, the company became a for-profit entity, completing a formal restructuring in which the for-profit branch took on a public-benefit mandate while the original nonprofit retained an equity stake. Critics question whether its stated mission is compatible with shareholder obligations, particularly as OpenAI president Greg Brockman has donated millions to President Donald Trump.
Moreover, in its paper OpenAI acknowledged risks including job displacement, misuse by bad actors, AI systems evading human control, and further concentration of wealth and power. Yet the blueprint offers few implementation details, with no specific tax rates, timelines for the wealth fund, or enforcement mechanisms.
In contrast, Sen. Bernie Sanders and Rep. Alexandria Ocasio-Cortez have been working on bills that would impose a national moratorium on new AI data centers until Congress passes safety legislation, suggesting that some lawmakers favor a harder line than what OpenAI proposes.
Altman’s evolving stance on regulation adds further context. In May 2023, he testified before the U.S. Senate and openly embraced calls for AI regulation, deferring to lawmakers on how to govern the technology.
Three years later, however, OpenAI is authoring its own policy prescriptions rather than waiting for Congress to act. That shift from regulatory deference to proactive policy authorship mirrors the company’s broader transformation from research nonprofit to commercial powerhouse.
As a result, the question becomes whether OpenAI is genuinely trying to shape a fair transition or simply trying to set the terms before regulators do. Anthropic released its own policy blueprint six months earlier, making OpenAI a relative latecomer to the policy conversation among frontier AI companies.
OpenAI acknowledged uncertainty about how the transition will unfold but argued for democratic processes that give citizens power to shape AI’s future. Its framework centers on three stated goals: distributing AI-driven prosperity more broadly, building safeguards to reduce systemic risks, and ensuring widespread access so that economic power does not concentrate further.
Japan, South Korea, and Estonia are already integrating AI into public services and education systems, examples that OpenAI’s policy lead Chris Lehane cited as models for proactive workforce planning. Whether Congress takes up any of these proposals, or whether they remain a pre-IPO exercise in public positioning, will depend on whether the political appetite for taxing AI profits can overcome the lobbying power of the industry proposing the taxes.