OpenAI’s ChatGPT Vs Google’s LaMDA

ChatGPT, the recently released chatbot, has won widespread praise in the field of conversational technology. However, another market player arrived before OpenAI’s chatbot, and it deserves plenty of attention: LaMDA, Google’s conversational chatbot, which drew headlines when one of its own engineers claimed it was “sentient.”

That engineer, Blake Lemoine, stated in a tweet that despite all the chatter about ChatGPT, Google’s advanced dialogue system is still years ahead of the rest.

The Language Model for Dialog Applications (LaMDA), a Transformer-based neural language model, has up to 137B parameters. It is pre-trained on 1.56 trillion words of publicly available dialogue data and web documents, and it is fine-tuned against three metrics: Safety, Quality, and Groundedness.
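
As a rough illustration of how such metrics can shape a reply, the sketch below filters and ranks candidate responses against placeholder Safety, Quality, and Groundedness scorers. The scoring functions, threshold, and helper names are hypothetical stand-ins, not LaMDA’s learned discriminators.

```python
# Illustrative sketch only: the scorers below are toy placeholders, not LaMDA's
# learned discriminators.

def safety_score(response: str) -> float:
    """Placeholder for a learned safety classifier (0.0-1.0)."""
    return 0.0 if "unsafe" in response.lower() else 1.0

def quality_score(response: str) -> float:
    """Placeholder for a learned quality (SSI) scorer."""
    return min(len(set(response.split())) / 20.0, 1.0)

def groundedness_score(response: str, sources: list[str]) -> float:
    """Placeholder: fraction of response words that appear in retrieved sources."""
    words = response.lower().split()
    source_text = " ".join(sources).lower()
    return sum(w in source_text for w in words) / max(len(words), 1)

def pick_response(candidates: list[str], sources: list[str],
                  safety_threshold: float = 0.8) -> str:
    """Drop unsafe candidates, then rank the rest by quality plus groundedness."""
    safe = [c for c in candidates if safety_score(c) >= safety_threshold]
    if not safe:
        return "I'd rather not answer that."
    return max(safe, key=lambda c: quality_score(c) + groundedness_score(c, sources))
```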

LaMDA’s progress is quantified by collecting responses to multi-turn, two-author dialogues from the pre-trained model, the fine-tuned models, and humans (i.e., human-generated replies). Other human raters then evaluate these responses by answering a series of questions against the metrics described above.
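
The comparison itself is easy to picture: each response source is rated on the same metrics and the averages are compared. The numbers below are invented purely to show the shape of that evaluation, not real results.

```python
# Hypothetical rater scores (0 = no, 1 = yes) for the three response sources;
# the figures are made up to illustrate the comparison.
from statistics import mean

ratings = {
    "pre-trained model": {"quality": [1, 0, 1], "safety": [1, 1, 0], "groundedness": [0, 1, 0]},
    "fine-tuned model":  {"quality": [1, 1, 1], "safety": [1, 1, 1], "groundedness": [1, 0, 1]},
    "human replies":     {"quality": [1, 1, 1], "safety": [1, 1, 1], "groundedness": [1, 1, 1]},
}

for source, per_metric in ratings.items():
    averages = {metric: round(mean(scores), 2) for metric, scores in per_metric.items()}
    print(source, averages)
```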

ChatGPT, on the other hand, is built on the GPT-3.5 architecture and has 175B parameters. GPT-3.5 consists of three models: code-davinci-002, the base model, used for code-completion tasks; text-davinci-002, trained with supervised fine-tuning on human-written samples; and the most recent, text-davinci-003, which adds reinforcement learning from human feedback (RLHF), an improved reward-based model trained on human comparisons. The training data consists of code and text from before Q4 2021.
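
For readers who want to try these models directly, the snippet below shows how they could be queried with OpenAI’s Python client as it existed at the time of writing; the model name, the legacy Completion endpoint, and the environment variable are assumptions that may change or be deprecated.

```python
# Sketch under assumptions: uses the legacy Completion API and one of the model
# names listed above; both may be renamed or retired by OpenAI.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes the key is set in the environment

completion = openai.Completion.create(
    model="text-davinci-003",            # the RLHF-tuned GPT-3.5 text model
    prompt="Explain RLHF in one sentence.",
    max_tokens=64,
    temperature=0.7,
)
print(completion["choices"][0]["text"].strip())
```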

Like LaMDA, ChatGPT relies on supervised learning: human AI trainers, playing both the user and the AI assistant, write conversations with access to model-written suggestions. The trainers then ranked the chatbot’s responses, along with sampled alternatives, by quality.
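
Those rankings are typically used to train a reward model that scores a preferred response above a rejected one. The sketch below is a minimal toy version of that idea, with a bag-of-words scorer standing in for the language-model-based reward model actually used; every name and number here is illustrative.

```python
# Toy reward model trained on pairwise preferences: the trainer-preferred response
# should receive a higher reward than the rejected alternative.
import torch
import torch.nn as nn

VOCAB = ["helpful", "detailed", "sorry", "unsure", "answer", "maybe"]

def featurize(text: str) -> torch.Tensor:
    """Crude bag-of-words features over a tiny fixed vocabulary."""
    words = text.lower().replace(",", " ").replace(".", " ").replace("?", " ").split()
    return torch.tensor([float(words.count(w)) for w in VOCAB])

reward_model = nn.Linear(len(VOCAB), 1)
optimizer = torch.optim.Adam(reward_model.parameters(), lr=0.1)

# Each pair: (response ranked higher by the trainer, response ranked lower).
comparisons = [
    ("Here is a detailed, helpful answer.", "Sorry, unsure. Maybe."),
    ("A detailed answer with sources.", "Maybe? I am unsure, sorry."),
]

for _ in range(100):
    for preferred, rejected in comparisons:
        r_pref = reward_model(featurize(preferred))
        r_rej = reward_model(featurize(rejected))
        # Pairwise ranking loss: push the preferred reward above the rejected one.
        loss = -torch.nn.functional.logsigmoid(r_pref - r_rej).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```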

Riley Goodside, a prompt engineer at Scale AI, compared ChatGPT’s and LaMDA’s responses. He calls the former an “unlovable C-3PO” whose responses almost read like a Q&A forum, while the latter is friendlier and more “conversational.” This could be directly related to LaMDA being trained on dialogue, while ChatGPT is largely trained on web text.

Many also believe that OpenAI’s conversational artificial intelligence produces shallow content, almost like it is reciting Wikipedia.

LaMDA, on the other hand, has an advantage in this context thanks to the metrics its replies are tuned against. The groundedness metric, for example, verifies that responses are backed by reliable external sources. The quality metric measures responses along dimensions such as Sensibleness, Specificity, and Interestingness (SSI), ensuring that responses are relevant to the context in which they are asked and are not generic.
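
The “not generic” requirement is the easiest of these to picture. The toy check below, with an invented list of stock replies and a crude context-overlap heuristic, gives zero credit to a canned answer that could follow almost any message; it only illustrates the specificity idea, not how LaMDA actually scores it.

```python
# Toy specificity check: canned replies score zero; otherwise score by how much
# of the reply overlaps with the conversation context. Purely illustrative.
GENERIC_REPLIES = {"i don't know", "that's nice", "ok", "sounds good"}

def specificity_score(response: str, context: str) -> float:
    text = response.lower().strip(" .!?")
    if text in GENERIC_REPLIES:
        return 0.0
    context_words = set(context.lower().split())
    response_words = set(text.split())
    return len(context_words & response_words) / max(len(response_words), 1)

print(specificity_score("That's nice.", "I just started learning the violin"))      # 0.0
print(specificity_score("Which piece are you learning on the violin first?",
                        "I just started learning the violin"))                      # > 0
```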

Goodside adds that ChatGPT can be prompted to behave however users like, but it then inserts disclaimers acknowledging that the persona isn’t real; users who object are left talking to the protocol droid.

Is this a sign that ChatGPT is losing ground? Not quite. RLHF sheds light on ChatGPT’s path forward.

Some users have pointed out errors in ChatGPT’s output. However, OpenAI’s GPT-3.5 architecture uses reinforcement learning from human feedback (RLHF), a reward-based mechanism built on human feedback, which makes the model better and more valuable over time. LaMDA does not use RLHF.
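
A heavily simplified view of that reward loop is sketched below: the model samples a response, a reward model scores it, and the policy is nudged toward higher-reward outputs. This is a bare REINFORCE update on a toy one-token “policy”; reports describe ChatGPT’s training as using PPO, which adds clipping and a KL penalty, so treat every component here as an assumption for illustration.

```python
# Minimal REINFORCE-style sketch of learning from a reward signal; the "policy"
# emits a single token and the reward model is a stand-in heuristic.
import torch
import torch.nn as nn

VOCAB_SIZE, HIDDEN = 50, 16
embed = nn.Embedding(VOCAB_SIZE, HIDDEN)           # toy prompt encoder
head = nn.Linear(HIDDEN, VOCAB_SIZE)               # toy next-token policy head
optimizer = torch.optim.Adam(list(embed.parameters()) + list(head.parameters()), lr=1e-2)

def reward_model(token: torch.Tensor) -> torch.Tensor:
    """Stand-in reward: prefers higher-numbered tokens (purely illustrative)."""
    return token.float() / VOCAB_SIZE

prompt_token = torch.tensor(0)                     # a one-token "prompt"

for _ in range(200):
    logits = head(embed(prompt_token))             # policy distribution over "responses"
    dist = torch.distributions.Categorical(logits=logits)
    action = dist.sample()                         # sample a response token
    reward = reward_model(action)                  # score it with the reward model
    loss = -dist.log_prob(action) * reward         # reinforce high-reward samples
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```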

Cassie Kozyrkov, Google’s Chief Decision Scientist, published a post pointing out errors in the information ChatGPT produces, calling the chatbot a “bullshitter.” Her prompt asked about ChatGPT’s usefulness and its relationship to GANs, and the model produced an absurd output that did not reflect reality, claiming that ChatGPT uses GANs: “It generates responses to input text, which allows it to engage in natural-sounding conversation with humans.” The assertion drew ridicule from many.

While everyone had their fair share of bashing, what was left out of the conversation was that the model can learn from its behavior. When the same prompt was repeated, the response was completely different and more accurate: “ChatGPT uses an architecture of neural networks known as a transformer. This is used to process large amounts of data.”

Despite this, the model is still flawed on many fronts. For example, the new response still does not answer the GAN part of the prompt. The model needs additional layers of verification to better source its information.

With ChatGPT now released, there is growing speculation that Google will soon incorporate the most recent version of LaMDA into its own search engine, where it already holds a near-monopoly.

Twitter user Uncle Ari wrote that LaMDA could be embedded in search to answer questions and cite sources, an integration that would dramatically change the way we interact with search. OpenAI, meanwhile, has a partnership with Microsoft that could bring ChatGPT to Microsoft’s search engine, Bing. It is hard to predict who will win the search engine battle.

Meta has its own chatbot, BlenderBot, whose third iteration was released in October. The conversational AI prototype has 175B parameters and long-term memory, and it relies on dialogue history, internet search, and that memory to produce its output.

Meta and Google are keeping further details about their chatbots under wraps, but we can expect announcements once the systems are ready, especially considering what Google went through the last time it released something prematurely.
