Microsoft Cracks Down on Malicious Copilot AI Use

Microsoft said it has observed a threat group seeking out vulnerable customer accounts using generative AI, then creating tools to abuse these services

Kristina Beek, Associate Editor

January 16, 2025


Microsoft's Digital Crimes Unit is pursuing legal action to disrupt cybercriminals who create malicious tools that evade the security guardrails and guidelines of generative AI services to create harmful content.

According to a complaint unsealed in the US District Court for the Eastern District of Virginia, the company says that although it goes to great lengths to build and harden secure AI products and services, cybercriminals continue to innovate their tactics and bypass its security measures.

"With this action, we are sending a clear message: the weaponization of our AI technology by online actors will not be tolerated," said Microsoft in a blog post about the lawsuit.

Read the full story from our sister publication Dark Reading >>>

