In a surprising twist, Google, a frontrunner in the race for AI dominance, has raised a red flag over using chatbots, including its very own creation, Bard. What makes this cautionary message even more intriguing is that Google has been extensively promoting the chatbot worldwide.
According to reports confirmed by Alphabet, Google’s parent company, employees have been directed to refrain from entering sensitive company documents into AI chatbots. The directive aligns with Google’s longstanding policy of safeguarding confidential information.
Furthermore, engineers have been advised against directly using code generated by AI tools such as ChatGPT and Bard. Google says it aims to be transparent by educating its workforce about the technology’s limitations.
The underlying reason for this precaution is that AI chatbots learn to understand language by analyzing the messages users type into them.
A recent study found that human reviewers could potentially access internal information through these chats, or that the AI itself could inadvertently reproduce and leak it.
Google’s sudden cautionary stance toward its own chatbot raises significant concerns about the privacy and security implications of these powerful language models.
Ironically, Google had earlier been actively encouraging its employees to use Bard. The updated privacy notice now explicitly advises users not to disclose confidential or sensitive information in their Bard conversations.
This development follows a series of similar warnings issued by various companies, cautioning employees against using chatbots in their work environments.
Samsung has recently banned its employees from using ChatGPT, echoing Google’s apprehensions. Similarly, Apple has barred its workforce from using ChatGPT or sharing confidential information with it. Amazon has issued similar guidelines to its employees.
However, a leaked document from Microsoft stated that the company allows its employees to use ChatGPT as long as they don’t share confidential or sensitive information with the model.