OpenAI’s Unreleased AGI Paper

A specific clause in OpenAI’s agreement with Microsoft, once viewed as a distant possibility, has now become a critical issue in one of the most significant technology partnerships.

This particular clause indicates that if OpenAI’s board declares it has developed artificial general intelligence (AGI), Microsoft’s access to OpenAI’s future technologies would be restricted.

Microsoft, which has invested more than $13 billion in OpenAI, is seeking to eliminate this clause and may even consider withdrawing from the partnership, according to Reuters.

In late 2024, the importance of AGI in the Microsoft deal led to discussions within OpenAI regarding an internal research document, according to various insiders. The document, titled “Five Levels of General AI Capabilities,” proposes a system for categorizing different stages of AI development.

By making specific claims about future AI capabilities, this paper could complicate OpenAI’s ability to assert that it has achieved AGI, which could be crucial in negotiations.

“We are dedicated to creating reliable methods for measuring AGI progress, approaches that can be replicated, quantified, and beneficial to the broader scientific community,” OpenAI spokesperson Lindsay McCallum said in a statement.

“The ‘Five Levels’ was an initial effort to categorize stages and terms related to general AI capabilities. It was not designed as a scientific research document.” Microsoft declined to comment.

In a blog post detailing its corporate structure, OpenAI states that AGI is excluded from intellectual property licenses and other commercial agreements with Microsoft. OpenAI characterizes AGI as “a highly autonomous system that surpasses human performance in most economically valuable tasks.”

The two companies are currently renegotiating their contract as OpenAI prepares for a corporate restructuring. While Microsoft desires ongoing access to OpenAI’s models even if the startup declares AGI prior to the partnership’s conclusion in 2030, one insider suggests that Microsoft doubts OpenAI will reach AGI by that time.

However, another source close to the situation views the clause as OpenAI’s key leverage point. Both individuals were granted anonymity to discuss sensitive matters.

Reports from the Wall Street Journal indicate that OpenAI has even considered invoking the clause based on an AI coding agent.

The negotiations have become so tense that OpenAI has reportedly discussed the possibility of publicly accusing Microsoft of anticompetitive practices, according to the Journal.

An insider, who requested anonymity to speak freely, mentioned that OpenAI is nearing the achievement of AGI; Altman has expressed optimism that it could occur during Donald Trump’s current presidential term.

This insider also pointed out two significant contractual definitions. First, OpenAI’s board can independently determine that the company has reached AGI as defined in its charter, which would immediately cut off Microsoft’s access to AGI technology and any revenue it generates; Microsoft would still retain rights to everything developed before that milestone.

Second, in 2023 the contract introduced the concept of “sufficient AGI,” defined as a system capable of generating a specified level of profit. If OpenAI claims it has met this standard, Microsoft must agree with that assessment.

Additionally, the contract prohibits Microsoft from pursuing AGI independently or through third parties using OpenAI’s intellectual property.

Bloomberg had previously reported on the “Five Levels” framework, noting that OpenAI intended to share this with its external investors, although it was viewed as a draft at that time. OpenAI CEO Sam Altman and chief research officer Mark Chen have discussed the five levels of AI capabilities in various interviews since then.

A version of the paper dated September 2024, reviewed by the reporter, outlines a five-step scale for assessing the advancement of AI systems. Citing other studies, it suggests that many of OpenAI’s models were at Level 1, characterized as “An AI that can understand and use language fluently and can perform a variety of tasks for users, at least as effectively as a beginner and sometimes better.”

It highlights that some models were nearing Level 2, which is described as “An AI that can handle more complex tasks at a user’s request, including tasks that might take an hour for a trained expert to complete.”

The paper intentionally avoids providing a single definition of AGI, arguing that the term is too ambiguous and binary, opting instead to use a range of capabilities to depict increasingly general and advanced AI systems.

While the paper does not specify when OpenAI’s systems will achieve each of the five levels, it discusses how advancements in capabilities could impact various aspects of society, including education, employment, science, and politics, cautioning about new risks as AI tools gain more power and autonomy.

In a podcast with Y Combinator president and CEO Garry Tan last November, Altman mentioned that the company’s o1 model could be classified as Level 2, and he anticipates reaching Level 3 “sooner than people expect.”

In July of the previous year, one of the coauthors presented the research at an internal gathering where teams showcased their most significant projects for broader awareness, according to multiple sources. The research received positive feedback from other employees, as one source noted.

Sources said the paper appeared to be in its final stages: the company hired a copy editor to polish the document late last year and created visuals for a blog post announcing it.

Internally, OpenAI’s partnership with Microsoft was cited as a reason to delay the paper’s publication, according to several sources who spoke under the condition of anonymity. Another source mentioned that discussions with Microsoft were often referenced as a barrier to releasing the paper.

McCallum said in a statement that “it’s not accurate to suggest we held off from sharing these ideas to protect the Microsoft partnership.” Another individual familiar with the situation indicated that the paper was not published because it did not meet technical standards.

“I think the question of what AGI is doesn’t really matter,” Altman remarked at a conference in early June. “It’s a term that people interpret differently; often the same person will define it in various ways.”
