OpenAI is beginning to turn its attention to ‘superintelligence’

In a post on his personal blog, OpenAI CEO Sam Altman said that he believes OpenAI “know[s] how to build [artificial general intelligence]” as the term has traditionally been understood — and is beginning to turn its aim to “superintelligence.”

“We love our current products, but we are here for the glorious future,” Altman wrote in the post, which was published late Sunday evening. “Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity.”

Altman previously said that superintelligence could be “a few thousand days” away, and that its arrival will be “more intense than people think.”

AGI, or artificial general intelligence, is a nebulous term. But OpenAI has its own definition: “highly autonomous systems that outperform humans at most economically valuable work.” OpenAI and Microsoft, the startup’s close collaborator and investor, also share a financial definition: AI systems that can generate at least $100 billion in profits. (When OpenAI achieves this, Microsoft will lose access to its technology, per an agreement between the two companies.)

So which definition might Altman be referring to? He doesn’t say explicitly. But the former seems likeliest. In the post, Altman wrote that he thinks that AI agents — AI systems that can perform certain tasks autonomously — may “join the workforce,” in a manner of speaking, and “materially change the output of companies” this year.

“We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes,” Altman wrote.

That’s possible. But it’s also true that today’s AI technology has significant technical limitations. It hallucinates. It makes mistakes that would be obvious to any human. And it can be very expensive.

Altman seems confident all this can be overcome — and quickly. But if there’s anything we’ve learned about AI from the past few years, it’s that timelines can shift.

“We’re pretty confident that in the next few years, everyone will see what we see, and that the need to act with great care, while still maximizing broad benefit and empowerment, is so important,” Altman wrote. “Given the possibilities of our work, OpenAI cannot be a normal company. How lucky and humbling it is to be able to play a role in this work.”

One would hope that, as OpenAI telegraphs its shift in focus to what it considers to be superintelligence, the company devotes sufficient resources to ensuring superintelligent systems behave safely.

OpenAI has written several times about how successfully transitioning to a world with superintelligence is “far from guaranteed” — and that it doesn’t have all the answers. “[W]e don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue,” the company wrote in a blog post dated July 2023. “[H]umans won’t be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence.”

Since the publication of that post, OpenAI has disbanded teams focused on AI safety, including one devoted to the safety of superintelligent systems, and seen several influential safety-focused researchers depart. Several of these staffers cited OpenAI’s increasingly commercial ambitions as the reason for their departure; the company is currently undergoing a corporate restructuring intended to make it more attractive to outside investors.

Asked in a recent interview about critics who say OpenAI isn’t focused enough on safety, Altman responded, “I’d point to our track record.”
