AI chatbots have been known to produce misleading information, a phenomenon often referred to as hallucination.
Reddit recently introduced its own AI chatbot, called Answers, which suggested heroin to a user seeking relief from pain.
The alarming recommendation was flagged by a healthcare professional on a subreddit for moderators, who reported that the chatbot had surfaced a post in which a user claimed heroin had helped with their pain management.

In another instance, the bot suggested kratom, a plant extract that is banned in several states. The FDA has issued warnings against kratom due to serious health risks, including liver damage and the potential for addiction.
Reddit Answers works much like other AI assistants such as Gemini and ChatGPT, but it draws its answers from content created by Reddit's own users.
Initially, Answers was accessible through a separate tab on the homepage, but Reddit has recently started integrating it into ongoing conversations.
The healthcare professional who raised the alarm found that Reddit Answers was dispensing harmful medical advice in health-related threads, and that moderators had no way to disable it.
After the issue was brought to the attention of Reddit and 404 Media, the platform moved to keep the bot out of sensitive discussions.
Other AI systems have also given questionable health advice: Google's AI Overviews, for instance, once bizarrely suggested adding non-toxic glue to pizza to keep the cheese from sliding off.