OpenAI CEO says no plans for GPT-5 anytime soon; focus is on GPT-4 safety and capabilities

OpenAI’s Sam Altman says the company is not yet working on GPT-5, but his words may not reassure those concerned about the risks of AI.


At a recent event at MIT, Sam Altman, the CEO and co-founder of OpenAI, confirmed that the company is not currently training GPT-5, the presumed successor to its AI language model GPT-4, which was released earlier this year. This statement comes in response to an open letter that had been circulating among tech industry professionals, calling for a pause in the development of AI systems that are “more powerful than GPT-4”. The letter raises concerns about the safety of future AI systems, but it has faced criticism from many within the industry, including some of its signatories.

While Altman acknowledged the validity of the concerns raised in the open letter, he noted that it lacked technical nuance and misrepresented OpenAI’s current development efforts. In particular, an earlier version of the letter claimed that OpenAI was training GPT-5, which Altman vehemently denied. “We are not and won’t for some time,” he said. “So in that sense it was sort of silly.”

However, Altman stressed that OpenAI is continuing to expand the capabilities of GPT-4 and is taking seriously the safety implications of such work. “We are doing other things on top of GPT-4 that I think have all sorts of safety issues that are important to address and were totally left out of the letter,” he said.


The open letter has sparked a debate within the industry about the nature of the threat posed by AI systems and how best to address it. Some argue that the development of highly advanced AI systems could pose an existential threat to humanity, while others view the risks as more mundane. There is also disagreement about the feasibility of pausing the development of AI systems, given the highly competitive nature of the industry and the potential economic benefits of AI innovation.

Despite the differences in opinion, most within the industry agree that it is essential to carefully consider the potential risks and benefits of AI development and to take steps to mitigate any potential negative consequences. As OpenAI continues to push the boundaries of AI innovation, it will be important to balance the potential benefits of such work with the need to ensure that these systems are developed and used safely and responsibly.


GPT hype and the fallacy of version numbers

Altman’s comments are interesting — though not necessarily because of what they reveal about OpenAI’s future plans. Instead, they highlight a significant challenge in the debate about AI safety: the difficulty of measuring and tracking progress. Altman may say that OpenAI is not currently training GPT-5, but that’s not a particularly meaningful statement.

Some of the confusion can be attributed to what I call the fallacy of version numbers: the idea that numbered tech updates reflect definite and linear improvements in capability. It’s a misconception that’s been nurtured in the world of consumer tech for years, where numbers assigned to new phones or operating systems aspire to the rigor of version control but are really just marketing tools. “Well of course the iPhone 35 is better than the iPhone 34,” goes the logic of this system. “The number is bigger; ipso facto, the phone is better.”
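To make the point concrete, here is a minimal Python sketch (my own illustration, not anything drawn from OpenAI or this story): a version string parses into a tuple that sorts in a strict order, and that ordering is the only thing the number actually encodes. Nothing in the comparison says anything about capability.

```python
# Minimal sketch: a version number encodes ordering, nothing more.
# (Illustrative only; no relation to how OpenAI actually names or
# builds its models.)

def parse_version(s: str) -> tuple[int, ...]:
    """Parse a dotted version string like '4.0' into a comparable tuple."""
    return tuple(int(part) for part in s.split("."))

# True by construction: 5 sorts after 4.
assert parse_version("5.0") > parse_version("4.0")

# But the comparison carries no information about what changed between
# the two releases: capability, safety, or anything else. "The number is
# bigger, therefore the product is better" is a claim the numbering
# scheme itself cannot support.
```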

Because of the overlap between the worlds of consumer tech and artificial intelligence, this same logic is now often applied to systems like OpenAI’s language models, and not only by the sort of hucksters who post hyperbolic claims about each new release.

