The contradiction at the heart of OpenAI’s restructuring

Big changes are happening at OpenAI. On Wednesday, the company announced that it would be shutting down its AI video creation app Sora only a couple of months after its launch. And in October, OpenAI completed a massive restructuring of its organization that shakes the very foundations it was built on.
OpenAI, which powers ChatGPT, among other AI products, was originally founded purely as a nonprofit. Now it has a for-profit arm. According to OpenAI CEO Sam Altman, the nonprofit will still guide the work of the for-profit side to ensure that artificial intelligence works for the “benefit of all humanity.” On top of that, the OpenAI Foundation would be in charge of a (theoretical) $180 billion, making it one of the largest charitable organizations in the world.
Catherine Bracy, founder of the nonprofit TechEquity, thinks this restructuring is a blatant attempt to free up the for-profit wing to act like any other AI company. She argues that OpenAI’s for-profit wing will only ever act for the benefit of its investors, and that the OpenAI Foundation is merely a glorified and toothless corporate social responsibility arm. We reached out to OpenAI for comment and did not receive a response.
Bracy spoke with Today, Explained host Sean Rameswaram about the legality of OpenAI’s new structure and her concerns about how this all might shake out. An excerpt of their conversation, edited for length and clarity, is below.
There’s much more in the full podcast, so listen to Today, Explained wherever you get your podcasts, including Apple Podcasts, Pandora, and Spotify.
(Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI. Our reporting remains editorially independent.)
You used to chat with Sam Altman?
We worked together back in the day and then kind of went out of touch with each other for a few years. Then, when I was writing a book about venture capital, I was really interested in OpenAI’s nonprofit model. Sam had been very explicit that the reason they founded OpenAI as a nonprofit was to put the technology at arm’s length from investors, because they knew investors would exploit it in a way that would make this technology — which they thought was very dangerous — actually live up to that potential danger.
So I wanted to talk to him about the decision-making process behind that. And he was very forthcoming about that being the explicit reason why OpenAI was founded as a nonprofit. They put a lot of thought and capacity and energy into creating this [nonprofit] governance structure that would protect the technology from the whims of investors, the [profit-generating] imperatives that investors put on technology companies.
And a few months later, I saw that all come crashing down.
And when you found out that OpenAI was restructuring and going to try to have it both ways — mission-driven nonprofit, but also money-driven for-profit — what was your reaction?
Disappointment. I would say that was my initial reaction. And then the secondary response was, Well, what can we do about this? And many of us came together into this coalition that really started asking questions about the responsibility of the nonprofit and the responsibility of the attorney general of California to enforce nonprofit law. And things kind of went from there.
Tell me more about that. What’s nonprofit law look like as it pertains to, say, OpenAI?
I run a nonprofit. In the tax code, that means that my organization does not need to pay taxes, but in return for that tax exemption, we are required to operate in service of a public service mission. Our mission is to ensure that the tech industry is creating opportunity for everybody. OpenAI’s nonprofit mission is to ensure that AI develops for the benefit of all of humanity. And legally, Sam Altman is required to prioritize OpenAI’s mission above all else.
So when they decided they were going to split the nonprofit from the for-profit, they found that legally they could not do that without divesting the intellectual property that the nonprofit owned, including all of the intellectual property underlying the ChatGPT model, as well as the equity stake that the nonprofit held in the for-profit company.
I think they looked at that price tag and they said, That’s not a price we’re willing to pay. And so instead of splitting the nonprofit from the for-profit, they decided to continue down this path of nonprofit ownership, which in my mind is completely untenable, unsustainable, and irreconcilable.
Basically, every day that OpenAI exists, they are violating the law.
And actually what they’re doing is just daring the attorney general to hold them accountable for it. I think they think they’re too big to be held accountable, and they need the AG [of California] to assume that he will not win a case. And that’s what they’ve done: they’ve loaded up on lawyers, and they’re making a bet that the AG will not pursue this in any way that’s actually meaningful.
Okay. So if I’m following you, despite the fact that OpenAI has split itself into a for-profit arm and a not-for-profit arm, their not-for-profit mission still overrides everything they do. And because of that, they are violating California law — because there’s no way that the nonprofit interests are ever going to be primary in their business.
Right. I think, as the kids would say, they’re playing in our faces. They expect us to take their word that the nonprofit mission is being prioritized over the profit motivation of the company, even as they make deals with the Defense Department to develop autonomous weapons and surveillance systems on American citizens, and as they battle parents in court whose children died by suicide after conversations with the company’s chatbots.
We all know that OpenAI’s overriding priority is to “win” the AI race. It’s to beat out the competition in the marketplace, and it’s to establish the biggest AI company they can create. To the extent that the nonprofit mission ever comes into tension with that, the company will always prioritize profits over the mission.
A law is only as good as its enforcement. And I think if there’s one rule of Silicon Valley, it is to ask forgiveness and not permission. I think they said, You know, this is worth it. There’s enough money on the line for us to just break the law and do the PR work and the lobbying work and the other work that we need to do to ensure that these laws will never be enforced against us.
And when you talk about PR work, lobbying work, are you talking about, like, saying we’re going to give away this $180 billion eventually?
Well, here’s the thing. They announced this week a list of priorities that the foundation would be investing in, and they listed Alzheimer’s research as one of those priorities. My mother is currently dying of Alzheimer’s. I have one copy of the gene that puts me at extreme risk of developing Alzheimer’s when I’m older. So I pray every day that AI helps us find a solution to Alzheimer’s fast enough that I can benefit from it, that my family can benefit from it.
But let me ask you a question. What happens, do you think, if the research that’s funded by OpenAI’s Foundation finds that actually Anthropic’s models are better at drug discovery or scientific breakthroughs than ChatGPT or any of OpenAI’s other models? What does it mean for the independence of scientific research, if all of this research is funded by an entity that has an irreconcilable conflict of interest?
We would not accept the science around nicotine that tobacco companies were funding. We do not accept the science around alcohol addiction that the alcohol companies fund. We do not accept the science around sugared beverages from the soda industry. And we should not accept that this scientific research is funded by an entity that has a vested financial interest in the outcome.
And that is why it is so critically important that the OpenAI Foundation actually be independent, that it have an independent board, that it can deploy its resources independently, that the research that it is funding is independent.
Do you still think that we’re maybe better off with OpenAI saying it wants to give billions away to better society than with, say, Anthropic or Google, which may have some pledges to give money away, but not nearly as much?
Well, Google has a corporate foundation. It’s called Google.org. And I expect in this structure with the tension and the conflict of interest that the OpenAI Foundation has, that it will operate much more like Google.org, which is essentially an arm of the marketing department, a corporate social responsibility program that gives money to innocuous groups — but will never do anything that undercuts Google’s priorities.
I think if you read between the lines of OpenAI’s press release, the work they say they want to continue doing with community funding is all about convincing people of the importance and value and benefit of using AI. I mean, that’s a market-building opportunity for them. That’s not actually anything that’s going to ensure that AI is developed for the benefit of humanity. And so, no, I don’t think they’re going to operate any differently than any of the other companies’ corporate social responsibility arms. That’s essentially what they have built here.
This is the fight of our time. AI is not inevitable. The way it develops is not inevitable. And we do not have to take these companies at their word that they know best how to govern this technology. We should have bigger imaginations about what’s possible. And if anything, this should give us more energy and motivation to fix what’s broken about our democracy than to just sit back and let billionaires control our future.
Do you ever talk to Sam Altman anymore?
He doesn’t return my calls.
Well, thanks for talking to us.