Microsoft has taken legal action against a group the company claims intentionally developed and used tools to bypass the safety guardrails of its cloud AI products.
According to a complaint filed by the company in December in the U.S. District Court for the Eastern District of Virginia, a group of 10 unnamed defendants allegedly used stolen customer credentials and custom-designed software to break into the Azure OpenAI Service, Microsoft’s fully managed service powered by ChatGPT maker OpenAI’s technologies.
In the complaint, Microsoft accuses the defendants—whom it refers to only as “Does,” a legal pseudonym—of violating the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and a federal racketeering law by illicitly accessing and using Microsoft’s software and servers for the purpose of “creating offensive” and “harmful and illicit content.” Microsoft did not provide specific details about the abusive content that was generated.
The company is seeking injunctive and “other equitable” relief and damages.
In the complaint, Microsoft says it discovered in July 2024 that Azure OpenAI Service customer credentials—specifically API keys, the unique strings of characters used to authenticate an app or user—were being used to generate content that violates the service’s acceptable use policy. A subsequent investigation revealed that the API keys had been stolen from paying customers, according to the complaint.
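For context, an Azure OpenAI API key is a bearer credential: any request carrying a valid key is treated as coming from, and billed to, the customer the key belongs to. The sketch below shows roughly what such an authenticated call looks like; the resource name, deployment name, and key are placeholders, not details from the case.

```python
# Illustrative sketch only: how an Azure OpenAI Service API key is typically
# presented when calling the service. Resource, deployment, and key values
# are placeholders, not taken from the complaint.
import requests

ENDPOINT = "https://example-resource.openai.azure.com"  # hypothetical customer resource
API_KEY = "<api-key>"  # the key alone identifies (and bills) the paying customer

resp = requests.post(
    f"{ENDPOINT}/openai/deployments/example-deployment/chat/completions",
    params={"api-version": "2024-02-01"},
    headers={"api-key": API_KEY},
    json={"messages": [{"role": "user", "content": "Hello"}]},
)
print(resp.status_code)
```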
“The precise manner in which Defendants obtained all of the API Keys used to carry out the misconduct described in this Complaint is unknown,” Microsoft’s complaint reads, “but it appears that Defendants have engaged in a pattern of systematic API Key theft that enabled them to steal Microsoft API Keys from multiple Microsoft customers.”
Microsoft alleges that the defendants used stolen Azure OpenAI Service API keys belonging to U.S.-based customers to create a “hacking-as-a-service” scheme. Per the complaint, to pull off this scheme, the defendants created a client-side tool called de3u, as well as software for processing and routing communications from de3u to Microsoft’s systems.
De3u allowed users to leverage stolen API keys to generate images using DALL-E, one of the OpenAI models available to Azure OpenAI Service customers, without having to write their own code, Microsoft alleges. According to the complaint, de3u also attempted to prevent the Azure OpenAI Service from revising the prompts used to generate images, which the service can do, for instance, when a text prompt contains words that trigger Microsoft’s content filtering.
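To make concrete what programmatic access to the image model looks like, here is a minimal sketch of the kind of DALL-E request a front end such as de3u would presumably wrap for its users, assuming it talks to the standard Azure OpenAI image-generation endpoint; all names and values are hypothetical and nothing here is drawn from the complaint. The standard response includes a revised_prompt field reflecting any server-side rewriting of the prompt, which appears to correspond to the prompt revision the complaint describes.

```python
# Minimal, hypothetical sketch of a programmatic DALL-E call against the
# Azure OpenAI image-generation endpoint. Resource, deployment, and key
# names are placeholders.
import requests

ENDPOINT = "https://example-resource.openai.azure.com"
DEPLOYMENT = "dall-e-3"  # hypothetical deployment name
API_KEY = "<api-key>"    # whoever holds this key is treated as the customer

def generate_image(prompt: str) -> str:
    """Send a text prompt to the image-generation endpoint and return the
    URL of the resulting image. The JSON response also carries a
    'revised_prompt' field showing any rewriting applied by the service."""
    resp = requests.post(
        f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}/images/generations",
        params={"api-version": "2024-02-01"},
        headers={"api-key": API_KEY},
        json={"prompt": prompt, "n": 1, "size": "1024x1024"},
    )
    resp.raise_for_status()
    return resp.json()["data"][0]["url"]
```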
A repo containing de3u project code, hosted on GitHub—a company that Microsoft owns—was no longer accessible as of press time.
“These features, combined with Defendants’ unlawful programmatic API access to the Azure OpenAI service, enabled Defendants to reverse engineer means of circumventing Microsoft’s content and abuse measures,” the complaint reads. “Defendants knowingly and intentionally accessed the Azure OpenAI Service protected computers without authorization, and as a result of such conduct caused damage and loss.”
In a blog post published Friday, Microsoft says that the court has authorized it to seize a website “instrumental” to the defendants’ operation that will allow the company to gather evidence, decipher how the defendants’ alleged services are monetized, and disrupt any additional technical infrastructure it finds.
Microsoft’s blog post, titled “Taking legal action to protect the public from abusive AI-generated content,” reads in part:
Microsoft’s Digital Crimes Unit (DCU) is taking legal action to ensure the safety and integrity of our AI services. In a complaint unsealed in the Eastern District of Virginia, we are pursuing an action to disrupt cybercriminals who intentionally develop tools specifically designed to bypass the safety guardrails of generative AI services, including Microsoft’s, to create offensive and harmful content. Microsoft continues to go to great lengths to enhance the resilience of our products and services against abuse; however, cybercriminals remain persistent and relentlessly innovate their tools and techniques to bypass even the most robust security measures. With this action, we are sending a clear message: the weaponization of our AI technology by online actors will not be tolerated.
Microsoft’s AI services deploy strong safety measures, including built-in safety mitigations at the AI model, platform, and application levels. As alleged in our court filings unsealed today, Microsoft has observed a foreign-based threat-actor group develop sophisticated software that exploited exposed customer credentials scraped from public websites. In doing so, they sought to identify and unlawfully access accounts with certain generative AI services and purposely alter the capabilities of those services. Cybercriminals then used these services and resold access to other malicious actors with detailed instructions on how to use these custom tools to generate harmful and illicit content. Upon discovery, Microsoft revoked cybercriminal access, put in place countermeasures, and enhanced its safeguards to further block such malicious activity in the future.
This activity directly violates U.S. law and the Acceptable Use Policy and Code of Conduct for our services. Today’s unsealed court filings are part of an ongoing investigation into the creators of these illicit tools and services. Specifically, the court order has enabled us to seize a website instrumental to the criminal operation that will allow us to gather crucial evidence about the individuals behind these operations, to decipher how these services are monetized, and to disrupt additional technical infrastructure we find. At the same time, we have added additional safety mitigations targeting the activity we have observed and will continue to strengthen our guardrails based on the findings of our investigation.
Every day, individuals leverage generative AI tools to enhance their creative expression and productivity. Unfortunately, and as we have seen with the emergence of other technologies, the benefits of these tools attract bad actors who seek to exploit and abuse technology and innovation for malicious purposes. Microsoft recognizes the role we play in protecting against the abuse and misuse of our tools as we and others across the sector introduce new capabilities. Last year, we committed to continuing to innovate on new ways to keep users safe and outlined a comprehensive approach to combat abusive AI-generated content and protect people and communities. This most recent legal action builds on that promise.
Beyond legal actions and the perpetual strengthening of our safety guardrails, Microsoft continues to pursue additional proactive measures and partnerships with others to tackle online harms while advocating for new laws that provide government authorities with necessary tools to effectively combat the abuse of AI, particularly to harm others. Microsoft recently released an extensive report, “Protecting the Public from Abusive AI-Generated Content,” which sets forth recommendations for industry and government to better protect the public, and specifically women and children, from actors with malign motives.
For nearly two decades, Microsoft’s DCU has worked to disrupt and deter cybercriminals who seek to weaponize the everyday tools consumers and businesses have come to rely on. Today, the DCU builds on this approach and is applying key learnings from past cybersecurity actions to prevent the abuse of generative AI. Microsoft will continue to do its part by looking for creative ways to protect people online, transparently reporting on our findings, taking legal action against those who attempt to weaponize AI technology, and working with others across public and private sectors globally to help all AI platforms remain secure against harmful abuse.
Microsoft also says that it has “put in place countermeasures,” which the company didn’t specify, and “added additional safety mitigations” to the Azure OpenAI Service targeting the activity it observed.