Microsoft has filed a lawsuit against a group it claims created tools to bypass the safety features of its AI cloud services.

The complaint, filed in December in the U.S. District Court for the Eastern District of Virginia, accuses ten unnamed individuals of using stolen customer credentials and custom-built software to access Microsoft’s Azure OpenAI Service, which is powered by OpenAI’s technology.
In the lawsuit, Microsoft refers to the defendants as “Does,” a standard placeholder for unidentified parties, and accuses them of violating several laws, including the Computer Fraud and Abuse Act.
The defendants allegedly accessed Microsoft’s software and servers without authorization to generate harmful and illegal content, though the complaint does not specify what kind of abusive content was created.
The company is seeking injunctive relief and damages. Microsoft says it discovered in July 2024 that some customers’ Azure OpenAI Service credentials, specifically API keys, were being misused to generate content that violated the service’s rules. An investigation traced the API keys back to legitimate customers from whom they had been stolen.
Microsoft stated that it is unclear exactly how the defendants obtained the API keys, but alleges a pattern of systematic key theft from multiple customers.
The lawsuit claims that the defendants used these stolen keys to establish a “hacking-as-a-service” model. They created a tool called de3u, along with software to manage communications between de3u and Microsoft’s systems.
The de3u tool allegedly allowed users to use stolen API keys to generate images with DALL-E, an OpenAI model available to Azure OpenAI Service users, without needing to write any code.
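This is possible because Azure OpenAI requests are authenticated by an API key sent in a request header: whoever holds the key is treated as the legitimate customer. The sketch below illustrates the idea by constructing (but not sending) such a request with Python’s standard library; the resource name, deployment name, and API version are hypothetical placeholders, not values from the complaint.

```python
import json
import urllib.request

# Hypothetical names for illustration only -- not from the lawsuit.
RESOURCE = "example-resource"
DEPLOYMENT = "dalle-deployment"
API_VERSION = "2024-02-01"

def build_image_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Construct (without sending) an Azure OpenAI image-generation request.

    Note that the only credential involved is the `api-key` header, which
    is why a stolen key alone is enough to act as the paying customer.
    """
    url = (
        f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
        f"{DEPLOYMENT}/images/generations?api-version={API_VERSION}"
    )
    body = json.dumps({"prompt": prompt, "n": 1, "size": "1024x1024"}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json", "api-key": api_key},
        method="POST",
    )

req = build_image_request("stolen-or-legitimate-key", "a watercolor fox")
print(req.full_url)
```

Because the key is the entire identity, abusive traffic generated this way is billed to, and attributed to, the victim whose key was stolen; content filtering happens server-side only after the request is already authenticated.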
The tool was also designed to circumvent Microsoft’s content filtering, which would normally revise or reject prompts likely to trigger content restrictions.
A GitHub repository containing the de3u project code is no longer accessible. The complaint states that the combination of these features and unauthorized access to the Azure OpenAI Service enabled the defendants to find ways to bypass Microsoft’s content protections.
Microsoft claims that the defendants knowingly accessed protected computers without permission, causing damage and losses.
In a blog post, Microsoft said the court has authorized it to seize a website that was central to the defendants’ operation, which will help it gather evidence and determine how the alleged services were monetized.
Additionally, Microsoft has implemented unspecified countermeasures and has made further safety improvements to the Azure OpenAI Service to combat the activities it detected.