Organizations are adopting AI faster than they are securing it. That is not speculation. It is the central finding of Pentera's AI and Adversarial Testing Benchmark Report for 2026, based on a survey of 300 US CISOs. The gap between AI deployment and AI-specific security controls is wide, growing, and largely unaddressed by current tooling.

The uncomfortable truth: three out of four security leaders are protecting AI systems with controls that were never designed for them. And the problem is not budget. It is visibility, expertise, and tooling that has not caught up to the threat.

The Numbers Tell the Story

Before diving into the "why," look at the data. These figures come directly from the Pentera benchmark report and paint a clear picture of where the industry stands.

Using legacy controls for AI
75%
Ownership conflict over AI security
73%
Limited AI visibility
67%
Lack internal AI expertise
50%
Cite insufficient AI tools
36%
Cite budget as barrier
17%
Have AI-specific tools deployed
11%

That last number is worth sitting with. Only 11% of organizations have deployed security tools built specifically for AI. The other 89% are adapting what they already have, hoping that traditional endpoint protection, network monitoring, and access controls are enough.

They are not.

Why Legacy Controls Fail for AI

Traditional security controls were built for a world of deterministic software. Applications with predictable inputs, defined outputs, and well-understood attack surfaces. AI systems break all three assumptions.

Inputs are unbounded. An LLM accepts natural language. A vision model processes images. The input space is effectively infinite, and traditional input validation does not apply.

Outputs are non-deterministic. The same prompt can produce different responses. This makes anomaly detection harder. What counts as "normal" behavior when the system is designed to generate novel outputs?

The attack surface is different. Prompt injection, model poisoning, training data extraction, and jailbreaking are AI-native threats. A WAF will not catch them. An EDR agent will not flag them. They operate at a layer that legacy tools were never designed to inspect.

And then there is the supply chain problem. The report found that 35% of AI breaches are linked to malware hidden in public model repositories. That is the single most cited source of AI breaches. Yet 93% of organizations rely on these open repositories. This is a supply chain risk that traditional software composition analysis tools are not equipped to handle, because they were built to scan code dependencies, not model weights and training pipelines.

One more data point that should concern every security leader: 1 in 8 companies have already experienced an AI breach linked to agentic systems. These are autonomous AI agents that can take actions, call APIs, and interact with production systems without direct human oversight. Securing them requires controls that understand intent, context, and chained decision-making.

The Ownership Problem

Even if the right tools existed and were widely deployed, there is a structural issue that would still slow organizations down. According to the report, 73% of organizations report internal conflict over who owns AI security controls.

Is it the CISO's team? The data science group? The platform engineering org? The answer varies by company, and in most cases, nobody has clear ownership. AI systems sit at the intersection of data, infrastructure, and application logic. They do not fit neatly into any existing security domain.

This ambiguity creates real gaps. Model risk assessments get deferred because nobody is sure who should conduct them. Prompt injection testing does not happen because the AppSec team does not understand the models, and the ML team does not think in terms of adversarial testing. Incident response plans do not cover AI-specific scenarios because nobody wrote them.

The 50% of CISOs who cite lack of internal expertise are describing the same problem from a different angle. Even when ownership is assigned, the team that owns it may not have the skills to execute. AI security requires a blend of ML knowledge and adversarial thinking that is genuinely rare in the market today.

What Actually Needs to Happen

The data from the Pentera report makes the diagnosis clear. Here is the treatment plan.

1. Build a Complete AI Inventory

You cannot secure what you cannot see, and 67% of CISOs admit they have limited visibility into AI across their organization. This includes sanctioned tools, shadow AI adopted by business units, third-party AI embedded in SaaS products, and any agentic systems operating autonomously. Discovery comes first. Everything else depends on it.

2. Establish Dedicated Ownership

Pick a team. Give them authority. Fund them. It does not matter whether AI security lives under the CISO, a dedicated AI risk function, or a cross-functional working group. What matters is that someone is accountable. The 73% ownership conflict stat will not resolve itself through committee discussions. It requires a decision from leadership.

3. Deploy Purpose-Built Tooling

Legacy controls are a floor, not a ceiling. Organizations need tools designed specifically for AI threats: prompt injection detection, model behavior monitoring, AI supply chain scanning, and real-time guardrails for agentic systems. The 11% of organizations that have deployed these tools are not early adopters. They are the ones who recognized the gap first.

4. Adopt Adversarial Testing for AI

Red-teaming AI systems is fundamentally different from red-teaming traditional applications. It requires testing for prompt injection, jailbreaking, data exfiltration through model outputs, and manipulation of agentic decision chains. This should be continuous, not annual. AI systems change behavior as they are updated, fine-tuned, or exposed to new data.

5. Secure the AI Supply Chain

If 93% of organizations use public model repositories and 35% of breaches originate there, the math is simple. Implement model provenance verification. Scan model files before deployment. Monitor for known malicious models. Treat model downloads with the same scrutiny you apply to open-source code libraries.


Tools That Address the Gap

This is the space we work in at Cyber Craft Solutions. CraftedTrust provides trust scoring for AI model servers and MCP integrations, giving security teams the visibility the Pentera report shows is missing. AI Chat Shield monitors AI chat interactions in real time, detecting prompt injection and data exfiltration attempts before they succeed. These are not legacy tools repurposed for AI. They were built from the ground up for this problem.

But regardless of which tools you choose, the priority is clear: stop trying to solve a 2026 problem with 2020 controls.


"The question is no longer whether your organization uses AI. It is whether your security program has caught up to that reality. For 75% of organizations, the honest answer is: not yet."

Related Reading