The FTC announced that DoNotPay, a company that marketed itself as offering the “world’s first robot lawyer,” has agreed to pay $193,000 to settle charges of deceptive advertising.
The action is part of Operation AI Comply, a new FTC initiative targeting companies that use AI to mislead or scam consumers.
The FTC complaint states that DoNotPay claimed its AI would “replace the $200-billion-dollar legal industry” and that its “robot lawyers” could replicate the knowledge and work of human lawyers in creating legal documents.
However, according to the FTC, these statements were made without any testing to support them. The complaint further highlights that:
None of the Service’s technologies has been trained on a comprehensive and current corpus of federal and state laws, regulations, and judicial decisions or on the application of those laws to fact patterns. DoNotPay employees have not tested the quality and accuracy of the legal documents and advice generated by most of the Service’s law-related features. DoNotPay has not employed attorneys and has not retained attorneys, let alone attorneys with the relevant legal expertise, to test the quality and accuracy of the Service’s law-related features.
The complaint further accuses DoNotPay of telling consumers that its AI service could be used to file assault lawsuits without the need for a human lawyer.
DoNotPay also claimed its AI could scan small-business websites for legal issues using just a consumer’s email address, a service it said would save businesses $125,000 in legal fees. The FTC stated the service did not work as claimed.
To resolve the allegations, DoNotPay agreed to pay $193,000. The company must also notify customers who subscribed between 2021 and 2023 about the limitations of its law-related features, and it is barred from claiming that its services can substitute for any professional service without evidence to support the claim.
The FTC has taken action against other companies accused of using AI services to deceive customers. Among them is Rytr, an AI “writing assistant” that, according to the FTC, allegedly helped users generate fake reviews.
This enforcement comes just over a month after the FTC issued a final rule prohibiting companies from creating or selling fake reviews, including those generated with AI. Once the rule takes effect, companies could face civil penalties of up to $51,744 per violation.
The FTC has also taken legal action against Ascend Ecom, accusing the company of deceiving consumers out of at least $25 million.
Ascend Ecom allegedly assured customers that its AI-driven tools could help them launch online stores on platforms such as Amazon, promising a monthly income in the five-figure range.
“It’s illegal to use AI tools to deceive, mislead, or defraud individuals,” stated FTC Chair Lina M. Khan. “The FTC’s enforcement efforts send a clear message that AI does not provide an exemption from existing laws. By targeting unfair or deceptive practices in these sectors, the FTC is making sure that consumers are protected and that legitimate businesses and innovators have a fair opportunity to compete.”