from the not-the-flex-dc-thinks-it-is dept
Earlier this week, OpenAI became the latest tech company to publicly endorse KOSA, the Kids Online Safety Act. The company, conveniently, tries to frame this as being about its commitment to child safety. It's not. It's about political horse trading, desperation for good publicity, and building a regulatory moat. Here's how OpenAI framed its endorsement:
KOSA would help create stronger online protections for young social media users through safer default settings, expanded parental controls, and greater accountability for online harms.
The path forward on kids safety, however, also requires AI-specific rules. And we believe KOSA is complementary to the work we’re doing at the federal and state level. Young people should be able to benefit from AI in ways that are safe, age-appropriate, and grounded in real-world support, including referrals to crisis resources and parental notifications in serious safety situations. That means building safeguards from the start, giving families better tools, and taking responsibility for reducing risks before they become harms.
The broader point is an important one: AI companies still have the opportunity to build protections early, before these technologies become fully embedded in everyday life. As OpenAI Chief Global Affairs Officer Chris Lehane has put it, “We can’t repeat the mistakes made during the rise of social media, when stronger safeguards for teens weren’t put in place until the platforms were already deeply embedded in young people’s lives.”
All of this is, of course, nonsense. As we’ve explained repeatedly, the underlying mechanisms of KOSA are deeply problematic and will do real damage. It will, inherently, make the internet worse for everyone. At its heart, KOSA is a surveillance and censorship bill, and it’s the last thing that we need for the internet today.
While it's positioned as being about something no one can be against ("kid safety!"), that framing is all too often the facade behind which terrible rights-killing laws get passed. KOSA is no exception.
But a bunch of tech companies have endorsed it anyway. Why? Because they know it makes life far more difficult for smaller upstart competitors. The compliance costs it adds will be ruinous for less well-resourced companies, while big companies with big bank accounts can absorb them easily. For them, it's a leg up.
OpenAI, perhaps more than most others in the space, needs that kind of government-backed protection against growing competition.
Almost exactly three years ago, I wrote a piece about Sam Altman going to Congress and asking the federal government to regulate the AI space, calling it Sam Altman Wants The Government To Build Him A Moat. As I pointed out at the time, AI researchers were coming to the conclusion that no frontier AI model could hold a real competitive advantage for any extended period of time. That situation has only gotten worse since then. The constant jockeying among the leading AI models means they're all effectively comparable, and more and more builders are realizing that since you can separate out the context, the compute, and the agentic tools from the underlying LLM, the model itself is quickly turning into a commodity where any one will do (and the situation only gets more tenuous as open-weight/local models keep getting better).
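To make that swappability concrete, here's a minimal sketch in Python. The provider names and the complete() interface are entirely hypothetical (not any real vendor SDK); the point is only that the context assembly and orchestration live outside the model, so the model underneath can be swapped without touching anything else.

```python
from typing import Protocol


class LLM(Protocol):
    """Anything that can turn a prompt into text; the only contract the
    rest of the stack needs from the model."""

    def complete(self, prompt: str) -> str: ...


# Hypothetical, stubbed-out "providers"; real ones would wrap a vendor SDK.
class ProviderA:
    def complete(self, prompt: str) -> str:
        return f"[provider A answer to: {prompt!r}]"


class ProviderB:
    def complete(self, prompt: str) -> str:
        return f"[provider B answer to: {prompt!r}]"


def answer_with_context(llm: LLM, question: str, documents: list[str]) -> str:
    """Context assembly (and, in a real app, tool wiring) lives out here,
    outside the model. Swapping `llm` touches none of it."""
    context = "\n".join(documents)
    prompt = f"Context:\n{context}\n\nQuestion: {question}"
    return llm.complete(prompt)


if __name__ == "__main__":
    docs = ["KOSA is a proposed US law.", "OpenAI endorsed it this week."]
    # Same context, same orchestration, different commodity model underneath.
    print(answer_with_context(ProviderA(), "What did OpenAI endorse?", docs))
    print(answer_with_context(ProviderB(), "What did OpenAI endorse?", docs))
```

The toy code itself isn't the point; the point is that nothing in the surrounding stack cares which model answers the prompt, which is exactly why any moat has to come from somewhere other than the model itself.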
While OpenAI has a huge number of users (one of the fastest growing tech companies in history), it’s unclear if those users are particularly loyal. Indeed, there are a few indications that when OpenAI does something stupid, a large segment of users will quickly leave.
Given that, all of the large AI companies keep looking for ways to create some sort of lock-in for users. Most of them haven't gone down the fully siloed path (knowing that, at this stage, it would probably drive away their most valuable users). For the most part, the focus among the likes of OpenAI, Anthropic, Google, and others is on building features that make staying more convenient than swapping out the underlying LLM, combined with continued leapfrogging on capabilities and various experiments with how heavily to subsidize usage through their subscription plans.
But having the government wipe out competitors, or create “mandatory” tools that create lock-in, might be another path towards such a result. And that’s exactly what KOSA would lead to. It certainly wouldn’t protect kids. Indeed, all evidence suggests it would put plenty of marginalized kids at much greater risk.
However, it would create something of a regulatory moat for those larger companies.
On top of that, is there any company more desperate for a headline about how it's "helping" protect children than OpenAI? The company has been accused of being "responsible" for suicide and other harmful behavior. And even if those claims and lawsuits are misleading (they are!), culturally the message has stuck. I've heard multiple people refer to ChatGPT as a suicide machine.
So, if you can grab a good headline about "protecting children" by backing a law that will have little direct impact on your own business, but will damage some of your competitors in the space (not to mention the wider open internet), why not? It's hard not to be cynical about OpenAI's reasoning here.
Separately, it’s likely that the AI companies see this as a bit of political horse trading. While KOSA would have some impact on AI tools, it’s much more directed at social media platforms than AI. And it’s likely that the bet being made by OpenAI here is “hey, we’ll back KOSA for you, and you get rid of the AI-specific bills.” OpenAI’s Chris Lehane, who announced the endorsement and is featured in every press release about it, is infamous as a political trickster. He’s a political operator, not a tech or policy expert. You roll him out to cut a deal, not to advance a principled position on child safety. And that’s exactly what’s happening here.
You can see the KOSA authors gleefully using the OpenAI endorsement to falsely claim that only Mark Zuckerberg now opposes the law:
Yeah, that’s Senator Richard Blumenthal choosing to spend time on X, a site run by a guy who has made it clear he thinks Blumenthal’s political party is evil and needs to be wiped out, using that platform to lie and claim that the only people opposed to KOSA are “Mark Zuckerberg & his lobbyists.” That ignores the long list of civil society and public interest groups who have made it clear just how dangerous the law would be.
Marsha Blackburn (who has been vocal about how she wants KOSA to silence LGBTQ voices) put out a silly press release about this endorsement, saying:
“Lip service won’t save lives – Congress must take action to establish guardrails in the virtual space. I look forward to chairing a hearing on why the verdicts in California and New Mexico should spur Congress to hold Big Tech accountable for exploiting children to turn a profit.”
What? As bad as the rulings in California and New Mexico are, they suggest the courts already think they have the authority to order companies to do the impossible and magically stop anything bad from ever happening to kids who also (incidentally) use the internet. If anything, that undercuts the argument that Congress needs to step in with KOSA, rather than supporting it.
All of this is for show. No one is being honest. Blackburn wants to censor LGBTQ speech she considers "dangerous to kids" because it terrifies her. Blumenthal wants to end encryption and the ability of tech companies to keep information private, because he's always been a cop and wants the ability to spy on your kids. And OpenAI wants Congress to direct these bad policies at social media companies rather than AI companies.
And all of us internet users are simply collateral damage for the mad power dreams of those in charge.
Filed Under: censorship, child safety, chris lehane, kosa, marsha blackburn, regulatory capture, richard blumenthal, surveillance
Companies: openai