OpenAI Has Launched GPT-4o ("GPT-4 Omni") And Brought It To ChatGPT

OpenAI has introduced its latest advanced generative AI model, GPT-4o. The "o" stands for "omni," highlighting the model's ability to process text, speech, and video.

GPT-4o will be gradually integrated into the company's developer and consumer-facing products over the coming weeks.

The new model update adds features to ChatGPT's voice mode, making the app function more like the conversational AI assistant in the film Her.

It will be able to respond in real time and observe its surroundings. The current voice mode is more limited, responding to one prompt at a time and relying solely on audio input.

GPT-4 Omni and ChatGPT

In a blog post following the livestream event, OpenAI CEO Sam Altman reflected on the company's journey. Initially, OpenAI aimed to create broad benefits for the world itself, but Altman acknowledged that this vision has evolved over time.

OpenAI has faced scrutiny for not openly releasing its advanced AI models. Altman described a shift in focus toward providing these models to developers through paid APIs.

The new approach emphasizes enabling third parties to utilize the AI models to innovate and produce a wide array of beneficial applications.

"Our current direction involves developing AI and enabling others to leverage it for creating remarkable innovations that benefit us all," Altman wrote.

Before today's GPT-4o launch, speculation about what OpenAI would unveil was all over the map. Some anticipated an AI search engine to compete with Google and Perplexity, others expected a voice assistant built into GPT-4, and still others thought it might be an entirely new, more capable model, GPT-5.

OpenAI strategically timed the announcement just ahead of Google I/O, the tech giant's flagship conference, where the Gemini team is expected to introduce a range of AI products.
