OpenAI, an artificial intelligence company, is accused of paying Kenyan workers less than $2 an hour to make its ChatGPT chatbot less toxic.
The report is based on an investigation by Time magazine. It claims that workers in Kenya were tasked with building a filtering system to make ChatGPT safer and more user-friendly for everyday use.
The job required workers to read disturbing material on horrendous subjects such as child sexual abuse, brutality, murder, suicide, torture, self-harm, and incest.
Part of the report reads, “The premise of this project was simple: Feed an AI with examples of violence, hate speech, and sexual abuse, and that tool could learn how to detect those forms of toxicity in the wild.” The detector would be built into ChatGPT to catch toxicity in its training data and filter it out. It could also help scrub toxic text from the training datasets of future AI models.
To obtain these labels, OpenAI began sending thousands of text snippets to a Kenyan outsourcing company in November 2021. Much of that text appeared to have been pulled from the darkest corners of the web, and some of it graphically described situations such as child sexual abuse, brutality, murder, suicide, torture, or self-harm.
The outsourcing partner was Sama, a training-data firm that specializes in annotating data for AI algorithms.
According to Time magazine, workers earned anywhere from $1.32 to $2 an hour, depending on their seniority and performance. Such low pay makes the work seem exploitative, even though it is a major contributor to billion-dollar industries.
Time’s report drew on graphic details provided by Sama employees, who shared some of the most horrific things they had seen while working on the OpenAI project.
A portion of the report states, “The work’s traumatizing nature ultimately led Sama to cancel all its OpenAI work in February 2022, eight months earlier than originally planned.”