Microsoft on Thursday unmasked four individuals it said were behind an Azure Abuse Enterprise scheme that involves leveraging unauthorized access to generative artificial intelligence (GenAI) services in order to produce offensive and harmful content.
The campaign, referred to as LLMjacking, has targeted various AI offerings, including Microsoft's Azure OpenAI Service. The tech giant is tracking the cybercrime network as Storm-2139. The individuals named are –
- Arian Yadegarnia aka “Fiz” of Iran,
- Alan Krysiak aka “Drago” of United Kingdom,
- Ricky Yuen aka “cg-dot” of Hong Kong, China, and
- Phát Phùng Tấn aka “Asakuri” of Vietnam
"Members of Storm-2139 exploited exposed customer credentials scraped from public sources to unlawfully access accounts with certain generative AI services," Steven Masada, assistant general counsel for Microsoft's Digital Crimes Unit (DCU), said.
"They then altered the capabilities of these services and resold access to other malicious actors, providing detailed instructions on how to generate harmful and illicit content, including non-consensual intimate images of celebrities and other sexually explicit content."
The malicious activity is expressly carried out with the intent to bypass the safety guardrails of generative AI systems, Redmond added.
The amended complaint comes a little over a month after Microsoft said it was pursuing legal action against the threat actors for engaging in systematic API key theft from several customers, including multiple U.S. companies, and then monetizing that access to other actors.

It also obtained a court order to seize a website ("aitism[.]net") that is believed to have been a crucial part of the group's criminal operation.
Storm-2139 consists of three broad categories of individuals: Creators, who developed the illicit tools that enable the abuse of AI services; Providers, who modify and supply these tools to customers at various price points; and end users who use them to generate synthetic content that violates Microsoft's Acceptable Use Policy and Code of Conduct.
Microsoft said it also identified two additional actors located in the United States, who are based in the states of Illinois and Florida. Their identities have been withheld to avoid interfering with potential criminal investigations.
The other unnamed co-conspirators, providers, and end users are listed below –
- A John Doe (DOE 2) who likely resides in the United States
- A John Doe (DOE 3) who likely resides in Austria and uses the alias "Sekrit"
- A person who likely resides in the United States and uses the alias "Pepsi"
- A person who likely resides in the United States and uses the alias "Pebble"
- A person who likely resides in the United Kingdom and uses the alias "dazz"
- A person who likely resides in the United States and uses the alias "Jorge"
- A person who likely resides in Turkey and uses the alias "jawajawaable"
- A person who likely resides in Russia and uses the alias "1phlgm"
- A John Doe (DOE 8) who likely resides in Argentina
- A John Doe (DOE 9) who likely resides in Paraguay
- A John Doe (DOE 10) who likely resides in Denmark
"Going after malicious actors requires persistence and ongoing vigilance," Masada said. "By unmasking these individuals and shining a light on their malicious activities, Microsoft aims to set a precedent in the fight against AI technology misuse."