It is 4:30 PM on a Wednesday, two weeks before the holidays. Your lead developer is under pressure to ship a feature before the office closes for break. They copy 200 lines of proprietary backend code, paste it into ChatGPT, and ask for help refactoring a database query. Five minutes later, the code works perfectly. Nobody in IT knows it happened.

Meanwhile, your marketing director is drafting next quarter's product launch strategy in Claude. Your HR manager is using Gemini to summarize candidate evaluations. A sales rep is feeding customer contact lists into an AI tool they found on Product Hunt last week. None of these tools have been vetted by your security team. None of them are covered by your data handling policies. And none of the people using them think they are doing anything wrong.

This is shadow AI, and it is already happening inside your organization whether you realize it or not.

What Is Shadow AI and Why Should You Care?

Shadow AI follows the same pattern as shadow IT, where employees adopt technology outside the visibility and control of the IT department. The difference is that shadow AI moves faster, touches more sensitive data, and is harder to detect. When someone installs an unauthorized SaaS app, it usually shows up in network logs or expense reports eventually. When someone pastes confidential data into a browser-based AI chat, there is often no trace at all.

The risk is not that employees are using AI. AI tools genuinely make people more productive, and blocking them entirely is neither realistic nor smart. The risk is that employees are using AI tools that your organization has not evaluated, has not approved, and has no visibility into. That means you have no idea what data is leaving your network, where it is being stored, or who might have access to it.

What Are Employees Actually Pasting Into AI Tools?

If you have never sat down and thought about what your team might be sharing with AI chatbots, the list will probably surprise you. Based on what we see across client organizations, the most common categories include:

- Proprietary source code and internal technical documentation
- Customer personally identifiable information (PII) and contact lists
- Financial data, revenue figures, and projections
- Employee records, candidate evaluations, and HR documentation
- API keys, passwords, and other access credentials
- Documents marked Confidential or covered by NDA

Every one of these categories represents data that, under most regulatory frameworks and contractual obligations, your organization is required to protect. And every one of them is being pasted into third-party AI platforms that your security team has never reviewed.

How to Discover Unauthorized AI Usage

You cannot address shadow AI if you do not know it exists. Here are the practical methods for finding out what is actually happening in your organization.

Browser Extension Audits

Many employees install AI-powered browser extensions (writing assistants, code helpers, summarization tools) that interact with AI platforms in the background. These extensions often have broad permissions that allow them to read page content, intercept network requests, and send data to external servers. A thorough browser extension audit across company-managed devices will reveal which AI tools employees have installed and what permissions those tools have been granted. For a deeper look at how extensions create security blind spots, see our post on why your AI conversations may not be as private as you think.
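As a starting point for that audit, a script can flag extensions whose manifests request broad permissions. This is a minimal sketch: the permission watchlist below is an illustrative assumption (not an official Chrome risk list), and in practice you would load each `manifest.json` from the browser profile's extensions directory rather than an inline sample.

```python
# Permissions that can let an extension read or exfiltrate page content.
# This watchlist is an illustrative assumption, not an official Chrome list.
RISKY_PERMISSIONS = {
    "webRequest", "webRequestBlocking", "tabs", "clipboardRead",
    "history", "cookies", "<all_urls>", "scripting", "debugger",
}

def audit_manifest(manifest: dict) -> dict:
    """Return the extension name and any broad permissions it requests."""
    requested = set(manifest.get("permissions", []))
    requested |= set(manifest.get("host_permissions", []))
    flagged = sorted(requested & RISKY_PERMISSIONS)
    # Wildcard host patterns like https://*/* also grant page access.
    flagged += sorted(p for p in requested if p.endswith("/*"))
    return {"name": manifest.get("name", "unknown"), "flagged": flagged}

# Example: a hypothetical AI writing-assistant manifest.
sample = {
    "name": "Acme AI Writer",
    "permissions": ["tabs", "storage", "clipboardRead"],
    "host_permissions": ["https://*/*"],
}
print(audit_manifest(sample))
```

Run across your managed fleet, the output gives you a ranked list of extensions to review by hand.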

Network Traffic Analysis

Your network logs already contain the evidence. AI platforms use well-known domains and API endpoints. Look for outbound traffic to domains like api.openai.com, claude.ai, gemini.google.com, api.anthropic.com, chat.deepseek.com, and similar endpoints. If you are using a web proxy or next-gen firewall with SSL inspection, you can see not just that traffic is going to these domains, but how much data is being sent and how frequently.
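To make that concrete, here is a minimal sketch of summarizing outbound volume to AI domains from proxy logs. The simplified line format (`timestamp client_ip bytes_sent host`) is an assumption for illustration; adapt the parsing to whatever your proxy or firewall actually emits.

```python
from collections import defaultdict

# Domains named in the article's watchlist.
AI_DOMAINS = {
    "api.openai.com", "claude.ai", "gemini.google.com",
    "api.anthropic.com", "chat.deepseek.com",
}

def summarize(log_lines):
    """Sum outbound bytes per (client, AI domain).
    Assumed line format: '<timestamp> <client_ip> <bytes_sent> <host>'."""
    totals = defaultdict(int)
    for line in log_lines:
        _ts, client, nbytes, host = line.split()
        if host in AI_DOMAINS:
            totals[(client, host)] += int(nbytes)
    return dict(totals)

logs = [
    "1702300000 10.0.4.12 48211 api.openai.com",
    "1702300060 10.0.4.12 1024 example.com",
    "1702300120 10.0.7.33 90312 claude.ai",
    "1702300180 10.0.4.12 7800 api.openai.com",
]
print(summarize(logs))
# {('10.0.4.12', 'api.openai.com'): 56011, ('10.0.7.33', 'claude.ai'): 90312}
```

Even this crude tally answers the two questions that matter most: which devices talk to AI platforms, and how much data they send.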

DNS-Level Monitoring

Even without full traffic inspection, DNS query logs give you a clear picture. Every time an employee's browser navigates to an AI platform, a DNS lookup happens first. By monitoring DNS queries from your corporate network, you can build a complete list of which AI services are being accessed, how often, and from which devices. Tools like Pi-hole, Cisco Umbrella, or even your existing DNS server logs can surface this data. Set up alerts for DNS queries that resolve to known AI platform domains so you can track usage trends over time.
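A per-device query count is easy to derive from raw DNS logs. The sketch below assumes a simplified log line of `timestamp client_ip queried_name`; suffix matching catches subdomains (e.g. `chat.openai.com` under `openai.com`).

```python
from collections import Counter

# Suffixes covering the AI platforms named above.
AI_DOMAIN_SUFFIXES = (
    "openai.com", "claude.ai", "anthropic.com",
    "gemini.google.com", "deepseek.com", "perplexity.ai",
)

def count_ai_queries(dns_log_lines):
    """Count DNS lookups of AI platforms per (client, name).
    Assumed line format: '<timestamp> <client_ip> <queried_name>'."""
    counts = Counter()
    for line in dns_log_lines:
        _ts, client, name = line.split()
        if name.endswith(AI_DOMAIN_SUFFIXES):
            counts[(client, name)] += 1
    return counts

queries = [
    "1702300000 10.0.4.12 chat.openai.com",
    "1702300005 10.0.4.12 chat.openai.com",
    "1702300010 10.0.7.33 claude.ai",
    "1702300015 10.0.7.33 www.example.com",
]
print(count_ai_queries(queries))
```

The same counts, bucketed by day, give you the usage trend data mentioned above.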

DLP Policies

If you are running a Data Loss Prevention solution, update your policies to flag data being sent to known AI platform URLs. Modern DLP tools can inspect outbound web traffic and flag instances where sensitive patterns (social security numbers, credit card numbers, API keys, or specific keywords from internal documents) are being transmitted to AI services. This will not catch everything, but it will catch the most obvious and highest-risk cases.
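The kinds of patterns a DLP rule matches can be sketched in a few lines. These regexes are illustrative only; production rules need validation logic (for example, Luhn checks on card numbers) to keep false positives manageable.

```python
import re

# Illustrative detection patterns; tune before production use.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_outbound(payload: str) -> list[str]:
    """Return the names of sensitive patterns found in an outbound payload."""
    return [name for name, rx in PATTERNS.items() if rx.search(payload)]

print(scan_outbound("Please refactor: key=AKIAABCDEFGHIJKLMNOP, SSN 123-45-6789"))
```

Hooked into your proxy's outbound inspection for AI platform URLs, even this small pattern set catches the highest-risk leaks.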

Just Ask

This one sounds too simple, but it works. Send an anonymous survey asking employees which AI tools they use for work, how often they use them, and what kinds of tasks they use them for. You will be surprised at the honesty. Most employees do not think they are doing anything wrong, because nobody ever told them otherwise. The survey also serves a second purpose: it signals to the organization that leadership is paying attention to AI usage, which naturally reduces risky behavior.

Building an Approved AI Tool List

The goal is not to ban AI. That approach fails every time. People will just use their personal phones or personal laptops to access the same tools, and then you have zero visibility instead of partial visibility. The goal is to channel AI usage into approved tools where you have control over data handling, retention policies, and access management.

Here is how to build an approved tool list that actually works:

  1. Start with what people are already using. Your discovery phase will tell you which tools are most popular. Evaluate those first, because those are the ones people actually want and will keep using regardless of what you decide.
  2. Evaluate data handling and privacy policies. For each AI tool, review where data is stored, whether it is used for model training, how long it is retained, and what controls exist for data deletion. Many enterprise AI plans explicitly exclude customer data from training, but the free tiers often do not.
  3. Negotiate enterprise agreements. Most major AI providers offer enterprise plans with better data handling guarantees, SSO integration, audit logging, and admin controls. The cost is usually reasonable, especially compared to the cost of a data breach.
  4. Create clear categories. Not all AI use carries the same risk. Consider creating tiers: tools approved for general use (drafting emails, brainstorming ideas), tools approved for business use with restrictions (no customer data, no source code), and tools approved for unrestricted use (fully vetted enterprise deployments with data processing agreements in place).
  5. Make the approved tools easy to access. If employees have to jump through five hoops to use the approved AI tool but can access an unapproved one in two clicks, they will choose the path of least resistance every time. Put links to approved tools on the intranet, pre-install approved extensions, and make the onboarding process painless.

Writing an AI Acceptable Use Policy That People Will Actually Follow

Most acceptable use policies fail because they are written by lawyers for lawyers. They are 15 pages of dense legalese that nobody reads, buried in a SharePoint folder that nobody visits. If you want your AI acceptable use policy to actually influence behavior, it needs to be short, clear, and specific.

What to Include

At minimum, the policy should name the approved tools, spell out acceptable uses, list the data categories that must never be shared, give employees a contact for edge cases, and state the consequences for violations. The template below covers each of these elements.

AI Acceptable Use Policy Template

Here is a starter template you can adapt for your organization:

AI ACCEPTABLE USE POLICY
[Company Name] - Effective [Date]

PURPOSE
We encourage the responsible use of AI tools to enhance
productivity. This policy defines acceptable use to protect
company data, customer information, and intellectual property.

APPROVED AI TOOLS
- [Tool 1] - Approved for general use
- [Tool 2] - Approved for business use (restrictions below)
- [Tool 3] - Approved for development use (enterprise plan only)
All other AI tools require written approval from IT Security.

ACCEPTABLE USE
You MAY use approved AI tools to:
- Draft and edit written communications
- Brainstorm ideas and outlines
- Get help with publicly available code or open-source projects
- Summarize publicly available information
- Assist with formatting and presentation

PROHIBITED USE
You must NEVER share the following with any AI tool:
- Customer personally identifiable information (PII)
- Proprietary source code or trade secrets
- Financial data, revenue figures, or projections
- Employee records or HR documentation
- API keys, passwords, or access credentials
- Documents marked Confidential or Internal Only
- Data subject to NDA, HIPAA, PCI-DSS, or SOC 2 obligations

WHEN IN DOUBT
Contact security@[company].com or your direct manager
before sharing any data you are unsure about.

COMPLIANCE
Violations of this policy will be addressed through the
standard disciplinary process. Intentional exfiltration of
sensitive data through AI tools will be treated as a data
breach incident.

REVIEW
This policy is reviewed quarterly. Last updated: [Date]

How to Roll It Out

Do not just email the policy and hope for the best. Schedule a 30-minute all-hands or department meeting to walk through the key points. Use real examples that are relevant to each team. Show developers what "proprietary source code" means in practice. Show the sales team what "customer PII" includes. Make it concrete and relatable.

Then follow up. Revisit the policy in team meetings quarterly. Share anonymized examples of near-misses or policy questions that came up. Keep the conversation alive so it does not become another forgotten document.

The Pre-Holiday Crunch Factor

Shadow AI usage spikes during high-pressure periods, and few periods carry more pressure than the weeks before a major holiday break. Teams are racing to close out projects, ship features, finalize budgets, and clear their inboxes before the office empties out. When people are rushed, they take shortcuts. And the most tempting shortcut right now is pasting whatever they are working on into the nearest AI tool to get it done faster.

This is exactly the wrong time to have no AI governance in place. The combination of time pressure, reduced oversight (managers are also trying to wrap up before break), and the "I will deal with this properly in January" mindset creates the perfect conditions for sensitive data to leak through unapproved channels.

If you are reading this and you do not yet have an AI acceptable use policy, do not wait until after the holidays to create one. Even a simple one-page document distributed before the break is better than nothing. Set the expectation now, and formalize it with a more thorough policy in January.

DNS Monitoring as Your Early Warning System

Of all the detection methods available, DNS monitoring deserves special attention because it is both easy to implement and hard to circumvent. Every AI platform interaction starts with a DNS query, and DNS logs are something most organizations already collect but rarely analyze for this purpose.

Here is a practical approach:

  1. Build a watchlist of AI platform domains. Start with the obvious ones: chatgpt.com, chat.openai.com, api.openai.com, claude.ai, api.anthropic.com, gemini.google.com, chat.deepseek.com, copilot.microsoft.com, perplexity.ai. Update this list monthly as new tools emerge.
  2. Set up automated alerts. Configure your DNS monitoring or SIEM to alert when queries to these domains exceed a baseline threshold, or when they appear from devices or network segments where AI usage has not been approved.
  3. Track trends over time. A sudden spike in DNS queries to AI platforms from your engineering subnet might indicate that a team has adopted a new tool without going through the approval process. Catch it early and redirect them to an approved alternative.
  4. Use DNS filtering to enforce policy. Once you have an approved tool list, you can use DNS-level filtering to block access to unapproved AI platforms on corporate networks. This is not foolproof (employees can still use personal devices on personal networks), but it covers the majority of workplace usage and sends a clear signal about expectations.
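The automated alerting in step 2 can be sketched with a simple trailing-average baseline. The window size and the 2x threshold below are arbitrary starting points, not recommendations; tune them against your own traffic.

```python
from statistics import mean

def spike_alerts(daily_counts, window=7, factor=2.0):
    """Flag days where AI-platform DNS query counts exceed `factor` times
    the trailing `window`-day average. Thresholds are illustrative."""
    alerts = []
    for i in range(window, len(daily_counts)):
        baseline = mean(daily_counts[i - window:i])
        if baseline > 0 and daily_counts[i] > factor * baseline:
            alerts.append((i, daily_counts[i], baseline))
    return alerts

# A week of steady usage, then a sudden spike on the last day.
counts = [40, 38, 45, 41, 39, 44, 42, 170]
print(spike_alerts(counts))
```

Feed it the daily query totals from your DNS logs and route any alerts to the same queue as your other security telemetry.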

The Bottom Line

Shadow AI is not a future problem. It is a right-now problem. Your employees are already using AI tools you have not approved, sharing data you have not authorized, and creating risks you have not accounted for. The solution is not to ban AI or pretend it will go away. The solution is to get ahead of it with clear policies, approved alternatives, and monitoring that gives you visibility into what is actually happening.

Start with discovery. Find out which tools people are using and what data they are sharing. Build an approved tool list that gives employees access to the productivity gains they are looking for without the uncontrolled risk. Write a policy that is short enough to read and specific enough to follow. And put DNS monitoring in place so you have an early warning system when new shadow AI tools start showing up on your network.

The organizations that handle this well will be the ones that treat AI governance not as a technology problem, but as a people problem. Give employees clear guardrails, easy access to approved tools, and a culture where asking "is this okay to share?" is encouraged rather than dismissed. That is how you get the benefits of AI without the blind spots.