In a post on his personal blog, OpenAI CEO Sam Altman expressed confidence that the company “knows how to build [artificial general intelligence]” as the term has traditionally been understood. He revealed that OpenAI is now shifting its focus toward the development of “superintelligence.”
“We’re proud of our current products, but our true mission lies in shaping a remarkable future,” Altman wrote. “Superintelligent tools have the potential to greatly accelerate scientific discovery and innovation, far beyond human capabilities, and usher in an era of unprecedented abundance and prosperity.”
Altman has previously suggested that superintelligence might be achieved within “a few thousand days,” emphasizing that its impact could be far more profound than many anticipate.
Artificial General Intelligence (AGI) is often vaguely defined, but OpenAI’s interpretation refers to “highly autonomous systems that outperform humans in most economically valuable tasks.” Additionally, OpenAI and its key collaborator and investor, Microsoft, define AGI as AI systems capable of generating at least $100 billion in profits. Under their agreement, Microsoft would lose access to OpenAI’s technology once this milestone is reached.
Which definition Altman was referencing remains unclear, but the first seems more likely. In his post, he predicted that AI agents — systems capable of autonomously performing certain tasks — might “enter the workforce” and “significantly enhance company output” within the year.
“We remain committed to the belief that providing people with advanced tools, step by step, leads to widespread positive outcomes,” Altman added.
While this vision is compelling, it’s worth noting the current limitations of AI technology. These systems are prone to “hallucinations” (producing false or nonsensical outputs), make errors that are obvious to human observers, and can be prohibitively expensive to operate.
Altman appears optimistic that these challenges can be resolved, and quickly. However, as the past few years have shown, AI development timelines are often unpredictable.
“We’re confident that, in the coming years, everyone will understand what we see: the critical need to act with great care while maximizing broad benefits and empowerment,” Altman wrote. “Given the potential of our work, OpenAI cannot operate as a typical company. It’s both a privilege and a humbling responsibility to be part of this mission.”
As OpenAI signals its shift toward what it defines as superintelligence, one hopes the company will allocate sufficient resources to ensuring these advanced systems are developed safely.
OpenAI has acknowledged that successfully transitioning to a world with superintelligence is “far from guaranteed.” In a July 2023 blog post, the company admitted, “[W]e don’t have a solution for steering or controlling a potentially superintelligent AI, or preventing it from going rogue. Humans won’t be able to reliably supervise AI systems much smarter than us, and our current alignment techniques won’t scale to superintelligence.”
Despite these concerns, OpenAI has since disbanded teams dedicated to AI safety, including those focused on superintelligence, and has seen the departure of several influential safety researchers. Some of these former staff members cited the company’s increasing focus on commercial objectives as their reason for leaving. OpenAI is currently undergoing corporate restructuring to attract more external investment.
When asked recently about criticism that OpenAI isn’t prioritizing safety enough, Altman defended the company by saying, “I’d point to our track record.”