AI CEOs Worry the Government Will Nationalize AI
Palantir’s CEO was blunt. “If Silicon Valley believes we are going to take away everyone’s white-collar job… and you’re going to screw the military — if you don’t think that’s going to lead to the nationalization of our technology, you’re retarded…”
And OpenAI’s Sam Altman is thinking about the same thing, writes long-time Slashdot reader destinyland:
“It has seemed to me for a long time it might be better if building AGI were a government project,” Sam Altman publicly mused last week… Altman speculated on the possibility of the government “nationalizing” private AI companies into a public project, admitting more than once he’s wondered what would happen next. “I obviously don’t know,” Altman said, adding that “I have thought about it, of course.” Altman hedged his speculation: “It doesn’t seem super likely on the current trajectory. That said, I do think a close partnership between governments and the companies building this technology is super important.”
Could powerful AI tools one day slip from the hands of private companies to be controlled by the U.S. government? Fortune magazine’s AI editor points out that “many other breakthroughs with big strategic implications — from the Manhattan Project to the space race to early efforts to develop AI — were government-funded and largely government-directed.” And Fortune added that last week the Defense Department threatened Anthropic with the Defense Production Act, which allows the president to designate “critical and strategic” goods for which businesses must accept the government’s contracts. Fortune speculates this would have been “a sort of soft nationalization of Anthropic’s production pipeline.” Altman acknowledged Saturday that he’d sensed the threat of attempted nationalization “behind a lot of the questions” he’d fielded on X.com.
How exactly will this AI build-out be handled — and how should AI companies be working with the government? In a sprawling ask-me-anything session on X that included other members of OpenAI leadership, one Missouri-based developer broached an AGI-government scenario directly with OpenAI’s Head of National Security Partnerships, Katherine Mulligan. If OpenAI built an AGI — something that even passed its own Turing test for AGI — would its government contracts compel it to grant the Defense Department access?
“No,” Mulligan answered. At our current moment in time, “We control which models we deploy.”
The article notes that 100 OpenAI employees joined 856 Google employees in signing an online letter titled “We Will Not Be Divided,” urging their bosses to refuse the use of their models for domestic mass surveillance and for autonomous killing without human oversight.
But Adafruit’s managing director Phillip Torrone (also long-time Slashdot reader ptorrone) sees analogies to America’s atomic bomb-building Manhattan Project, and “what happened when the scientists who built the thing tried to set conditions on how the thing would be used.” (The government pressured them to back down, which he compares to the Pentagon’s designating Anthropic a “supply chain risk” before offering OpenAI a contract “with the same red lines, just worded differently.”)
Ironically, Anthropic CEO Dario Amodei frequently recommends the Pulitzer Prize-winning 1986 book The Making of the Atomic Bomb…