Shadow AI, the use of unapproved or unsanctioned AI tools by employees, is typically the most difficult type of artificial intelligence for an organization to monitor. Microsoft has released guidance on how to protect data, govern AI use, and address evolving regulatory requirements using its security and governance tools.
A lack of oversight by agencies, brands, and publishers can cause data to leak. Microsoft has introduced a series of steps to help advertising, marketing, and security teams enforce policies that prevent sensitive data from being typed into generative AI apps such as ChatGPT, Copilot Chat, DeepSeek, and Google Gemini.
AI tools that are easily accessible through cloud-based services can leak data, whether from a simple chatbot on a website or from a media-buying model designed for data analysis. This often happens when a person or team in an innovation-driven department uses an AI app to improve productivity and meet objectives without waiting for advertising, marketing, or IT to review and approve the platform.
AI has made it easier to identify high-value audiences through signals that go beyond traditional targeting in products such as Microsoft Performance Max, but when those signals flow into tools the brand has not approved, the integration can risk data leaks.
Shadow AI can result in significant risks such as data breaches and noncompliance with data-protection regulations.
Adopting AI may seem daunting to many media companies, but implementing ways to protect data, govern its use, and address evolving regulatory requirements has become vital.
“With the rapid user adoption of generative AI, many organizations are uncovering widespread use of AI apps that have not yet been approved by IT or security teams,” Vasu Jakkal, corporate vice president of Microsoft Security, wrote in a blog post. “This unsanctioned, unprotected use of AI has created a ‘shadow AI’ phenomenon, which has drastically increased the risk of sensitive data leakage.”
In the post, Jakkal announced the general availability of an AI web category filter in Microsoft Entra, a collection of identity and access management (IAM) products that help organizations secure access to their resources.
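To illustrate the general idea behind a category-based web filter, the sketch below shows a minimal allow/block check. This is not Microsoft Entra's implementation; the domains, category names, and approved-app list are all hypothetical examples.

```python
# Illustrative sketch of category-based web filtering; not Microsoft
# Entra's actual filter. All domains and categories are made up.

BLOCKED_CATEGORIES = {"generative-ai"}

# Hypothetical mapping of domains to web categories.
DOMAIN_CATEGORIES = {
    "chat.example-ai.com": "generative-ai",
    "docs.example.com": "productivity",
}

def is_allowed(domain: str, approved: set[str]) -> bool:
    """Allow a domain unless it falls in a blocked category
    and has not been explicitly approved by IT."""
    category = DOMAIN_CATEGORIES.get(domain, "uncategorized")
    if category in BLOCKED_CATEGORIES and domain not in approved:
        return False
    return True

approved_ai_apps = {"copilot.example.com"}
print(is_allowed("chat.example-ai.com", approved_ai_apps))  # False
print(is_allowed("docs.example.com", approved_ai_apps))     # True
```

The point of such a policy is that AI apps are blocked by default and only sanctioned ones pass, which is how a filter can turn shadow-AI usage into governed usage.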
The filter should help organizations control access to AI apps and reduce the risks of shadow AI by enforcing the policies that govern its use. The next step is to prevent users from leaking sensitive data into AI apps through data-loss prevention (DLP) controls built into the Microsoft Edge for Business browser.
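To show what pattern-based data-loss prevention looks like in principle, here is a minimal sketch. It is not the Edge for Business DLP engine, and the patterns are simplified examples; production DLP typically relies on classifiers, checksums such as the Luhn test, and policy engines rather than bare regexes.

```python
import re

# Illustrative DLP sketch; not Microsoft's implementation.
# Simplified patterns for a few common sensitive-data types.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def find_sensitive(text: str) -> list[str]:
    """Return the names of sensitive-data types detected in text.
    A browser DLP control could warn the user or block submission
    before a prompt like this reaches a generative AI app."""
    return [name for name, pat in SENSITIVE_PATTERNS.items()
            if pat.search(text)]

prompt = "Summarize account 4111-1111-1111-1111 for jane@corp.com"
print(find_sensitive(prompt))  # ['credit_card', 'email']
```

Checking the text as it is typed, before it leaves the browser, is what distinguishes this kind of control from network-level filtering, which may only see encrypted traffic.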