During a conversation about potential risks associated with AI systems, Sam Altman, co-founder and CEO of OpenAI, confirmed that the organization is not training GPT-5, the anticipated successor to its GPT-4 AI language model, which was released in March.
Altman made this statement while speaking at an event held at MIT, where he was asked about an open letter circulated among the tech community. The letter requested that labs such as OpenAI pause the development of AI systems “more powerful than GPT-4” due to concerns about the safety of future systems.
However, the letter has been criticized by numerous industry professionals, including some of its signatories. Experts disagree on the level of threat posed by AI, as well as on how the industry can “pause” development in the first place.
Altman stated that the letter “lacks most of the technical nuances about where we need the pause” and highlighted that an earlier version of the letter had claimed that OpenAI was currently training GPT-5. Altman refuted this claim by saying, “We are not and won’t for some time,” adding that the argument was “somewhat foolish” in that sense.
OpenAI’s decision not to train GPT-5 does not mean the company has stopped expanding GPT-4’s capabilities, as Altman emphasized. He explained that OpenAI is also pursuing other projects built on top of GPT-4, and that these raise safety concerns the open letter does not address.
Altman’s remarks are significant, not for what they reveal about OpenAI’s future plans, but for the challenge they pose to the AI safety debate: the difficulty of measuring and monitoring progress. Altman’s statement that OpenAI is not training GPT-5 at present is true as far as it goes, but it means less than it might first appear.
The fallacy of version numbers contributes to some confusion in this area. This fallacy is the idea that numbered technological upgrades indicate clear and incremental improvements in performance. This misconception has been fostered in the consumer tech industry, where version numbers assigned to new products aim to mimic version control’s precision but are primarily marketing tools. This reasoning suggests that a higher version number equates to a superior product. For instance, “iPhone 35 is better than iPhone 34 because the number is higher, and, therefore, the phone is superior.”
Due to the overlap between these two fields, the consumer tech industry’s logic of version numbers has also been applied to artificial intelligence systems, such as OpenAI’s language models. This approach is utilized not only by individuals who make outrageous predictions on Twitter about the emergence of superintelligent AI in the near future but also by more knowledgeable commentators.
As many claims about AI superintelligence cannot be disproven, these commentators rely on similar language to make their point. They create imprecise graphs with labels such as “progress” and “time,” draw an upward and rightward line, and present this as evidence without scrutiny.
This is not to dismiss concerns about AI safety or to ignore that these systems are continually improving and not fully under our control. Instead, it emphasizes that there are good and bad arguments, and that simply assigning a number to something, whether a new phone or the concept of intelligence, does not necessarily provide a complete understanding of it.
In these discussions, I believe the emphasis should be on capabilities: demonstrating what these systems can and cannot do and predicting how they may evolve over time.
Therefore, Altman’s announcement that OpenAI is not currently working on GPT-5 will not relieve concerns about AI safety. The company is still expanding GPT-4’s potential (such as by linking it to the internet), and other industry players are building similarly ambitious tools that enable AI systems to act on behalf of users. There is also likely ongoing work to optimize GPT-4, and OpenAI may introduce GPT-4.5 (as it did with GPT-3.5) first, further highlighting the potential for version numbers to be misleading.
Even if governments worldwide were somehow able to impose a ban on new AI developments, it is evident that society already faces challenges from the AI systems that exist today.
Although GPT-5 is not currently in development, does that even matter when GPT-4’s workings are not yet fully understood?