Google Fired An Engineer Who Said LaMDA AI Has Its Own Ideas, Emotions And Is Conscious

Blake Lemoine, the Google engineer who publicly claimed that Google's LaMDA, a conversational AI system, can communicate, has feelings, and is conscious, has been fired, according to the Big Technology newsletter.

In early June, Google placed Lemoine on paid administrative leave for breaching its confidentiality policy, after he contacted members of the government with his concerns and hired an attorney to represent LaMDA.

A statement sent to journalists on Friday by Google spokesperson Brian Gabriel appeared to confirm the dismissal, closing with "we wish Blake well."

Google also said that "LaMDA has been through 11 distinct reviews" and that it "published a research paper earlier this year detailing the work that goes into its responsible development."

Google maintains that it reviewed Lemoine's claims "extensively" and found them to be "wholly unfounded."

This is in line with numerous AI ethics experts, who have said that such claims are essentially impossible given today's technology.

Lemoine maintains that his conversations with the LaMDA chatbot led him to believe it has become more than a program: that it has its own thoughts and feelings, rather than merely producing dialogue realistic enough to appear as if it does, which is what it was designed to do.

He argues that Google's researchers should obtain LaMDA's consent before running experiments on it (Lemoine himself had been assigned to test whether the AI generated hateful speech), and he posted excerpts of his conversations with the chatbot on his Medium account to support his case.

On YouTube, the Computerphile channel offers an accessible nine-minute explanation of how LaMDA works and how it could produce the responses that convinced Lemoine without actually being sentient.

Here's Google's full statement, which also addresses Lemoine's claim that the company didn't adequately investigate his allegations:

As we share in our AI Principles, we take the development of AI very seriously and remain committed to responsible innovation. LaMDA has been through 11 distinct reviews, and we published a research paper earlier this year detailing the work that goes into its responsible development. If an employee shares concerns about our work, as Blake did, we review them extensively. We found Blake’s claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months. These discussions were part of the open culture that helps us innovate responsibly. So, it’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information. We will continue our careful development of language models, and we wish Blake well.

