
OpenAI calls for robot taxes, a public wealth fund, and a four-day week


Sam Altman’s 13-page policy blueprint, ‘Industrial Policy for the Intelligence Age,’ proposes auto-triggering safety nets, containment playbooks for rogue AI, and direct citizen dividends from AI-driven growth. He told Axios it is a starting point, not a prescription.


OpenAI has published a 13-page policy document calling for sweeping economic reforms to prepare for what it describes as approaching superintelligence, including taxes on automated labour, a national public wealth fund seeded partly by AI companies, and pilots of a 32-hour working week.

The document, titled ‘Industrial Policy for the Intelligence Age: Ideas to keep people first,’ was released as Congress prepares to debate AI legislation. CEO Sam Altman told Axios in an exclusive interview that the scale of change coming from AI is comparable to the Progressive Era and the New Deal, and that the two most immediate dangers are cyberattacks and biological weapons enabled by advanced AI.

The most radical proposal in the document is the public wealth fund. OpenAI suggests the government create a nationally managed fund, seeded in part by contributions from AI companies themselves, that would invest in AI firms and other businesses adopting the technology and distribute returns directly to American citizens.


The model is comparable to Alaska’s Permanent Fund, which pays annual dividends to state residents from oil revenues.

On labour, the document floats taxes on automated labour and a shift in the tax base from payroll towards capital gains and corporate income, an acknowledgement that AI could hollow out the wage-and-payroll revenue that currently funds Social Security.


The 32-hour workweek proposal is framed as an ‘efficiency dividend’ from AI-driven productivity gains.

The document includes a section on what it calls ‘containment playbooks’ for scenarios in which dangerous AI systems become autonomous and capable of replicating themselves. OpenAI acknowledges scenarios where such systems ‘cannot be easily recalled,’ and proposes government co-ordination as the response.

The blueprint also envisions automatic safety net triggers: when AI-driven displacement metrics hit preset thresholds, benefits including unemployment payments and wage insurance would increase automatically, then phase out when conditions stabilise.

Altman told Axios that a major cyberattack enabled by near-future AI models is ‘totally possible’ within the next year, and that AI models being used to create novel pathogens is ‘no longer theoretical.’


Altman was candid with Axios about the dual nature of the document. OpenAI is the company racing to build the very technology it is warning about, and positioning itself as the responsible actor proposing solutions is plainly also a strategy to shape regulation before regulation shapes it. Anthropic has occupied a similar lane.

The policy paper arrives at a moment when OpenAI is preparing for an IPO, has closed a $110 billion private funding round, and is simultaneously under scrutiny over its conversion from a non-profit structure.

Whether the altruism is genuine or strategic, Altman told Axios: ‘Some will be good. Some will be bad. But we do feel a sense of urgency. And we want to see the debate of these issues really start to happen with seriousness.’
