For the last year, a lot of AI governance conversations have stayed stuck at the policy layer. Teams wrote usage rules, added approval checklists, and updated vendor questionnaires. That work still matters. But it is no longer enough for buyers trying to understand how connected agents actually behave in production.
The bigger question now is operational: which agent interfaces are approved, how are they authenticated, what actions are logged, and how is third-party access monitored? AIUC-1's Apr. 15, 2026 research update for Q2 2026 is a strong signal that the market is moving in exactly that direction.
Agent governance is becoming protocol-aware assurance: approved tools, verified identities, logged actions, and monitored third parties.
What AIUC-1 changed in Q2 2026
AIUC-1 said this quarter's refresh focused on MCP security, third-party risk, and agent identity and permissions. In plain English, that means the standard is moving closer to the questions enterprise buyers already ask when an AI system touches live tools, APIs, or external services.
According to the Apr. 15 update, the new emphasis includes:
- approved MCP servers plus runtime containment,
- stronger authentication and encrypted transport across model APIs, MCP, and A2A channels,
- tool-call governance and logging for MCP,
- mandatory third-party access monitoring, and
- cryptographically verifiable agent identities with permission-ready access design.
That is a meaningful shift. It is not just "have a policy." It is "show me the control surface around the agent interfaces themselves."
Why MCP and A2A matter to buyers, not just builders
If you have never heard of MCP or A2A, the simple version is this: MCP helps models and agents connect to tools and context, while A2A is an open protocol for agents talking to other agents. Google described A2A as an open protocol that complements MCP for multi-agent interoperability.
That sounds technical, but the buyer implication is straightforward. Every new protocol or connection path becomes a governance path too. If agents can call tools, connect to MCP servers, or delegate work to other agents, then buyers need confidence that those interfaces are approved, authenticated, and observable.
This is also why the MCP authorization spec matters. The current MCP authorization guidance says HTTP-based transports must use OAuth 2.1 with appropriate security measures, require PKCE for clients, and attach authorization to each HTTP request. That is a useful technical foundation, but governance teams still need evidence that the implementation is real, scoped correctly, and monitored over time.
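To make the PKCE requirement concrete, here is a minimal sketch of what a compliant client generates before the authorization flow starts, using only the Python standard library. This is an illustration of the RFC 7636 S256 method that OAuth 2.1 mandates, not an excerpt from any particular MCP SDK.

```python
import base64
import hashlib
import secrets


def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and S256 code_challenge (RFC 7636)."""
    # code_verifier: base64url of 32 random bytes yields 43 unreserved
    # characters once the '=' padding is stripped (spec allows 43-128).
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    # code_challenge: base64url(SHA-256(verifier)), again without padding.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge


verifier, challenge = make_pkce_pair()
# The client sends `challenge` in the authorization request and reveals
# `verifier` only in the token exchange; the resulting access token is then
# attached to every HTTP request to the MCP server, e.g.
# headers={"Authorization": f"Bearer {token}"}.
```

The governance point is that each step here leaves verifiable evidence: the challenge binds the token to the client that started the flow, and the per-request Authorization header is what makes tool-call logging attributable.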
Why third-party access monitoring becomes unavoidable
The most commercially important part of the AIUC-1 update may be the third-party angle. Traditional vendor risk programs assume outside parties are relatively static. Agent ecosystems do not work that way. MCP servers, plugin registries, hosted models, and agent-to-agent connections can appear dynamically at runtime.
That changes the buyer conversation. A security leader does not just want to know who your main AI vendor is. They want to know which outside systems an agent can reach today, which of those were approved, what permissions were granted, and whether the access was logged.
That is why AIUC-1's move to mandatory third-party access monitoring matters. It turns a nice-to-have diligence concept into an expected operating control. If your product depends on connected agents, some form of continuous third-party visibility is going to become table stakes.
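The core of continuous third-party visibility can be stated in a few lines: compare what an agent actually reached at runtime against what was approved, and flag the difference. The hostnames and function name below are hypothetical, assumed purely for illustration.

```python
# Illustrative allowlist of third-party endpoints an agent may reach.
# In practice this would come from an approval workflow, not a constant.
APPROVED_THIRD_PARTIES = {"mcp.payments.example", "mcp.search.example"}


def detect_unapproved(observed_hosts: set[str]) -> set[str]:
    """Return the hosts the agent actually reached that were never approved."""
    return observed_hosts - APPROVED_THIRD_PARTIES


# Suppose runtime telemetry shows the agent connected to these hosts today:
observed = {"mcp.payments.example", "mcp.scraper.example"}
drift = detect_unapproved(observed)
# → {"mcp.scraper.example"}, exactly the runtime drift a buyer will ask about.
```

The interesting design question is not the set difference; it is where `observed` comes from. A one-time vendor review cannot produce it, which is why this control pushes teams toward runtime telemetry.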
Commercially, this matters because buyers will start treating connected agents more like connected vendors. They will ask for proof that the runtime relationships are understood and controlled, not just that a policy exists in a binder.
ISO 42001 and AIUC-1 are solving different problems
This is where a lot of teams get confused. ISO 42001 is still valuable, but it is not the same thing as AIUC-1, and AIUC-1's own technical deep dive comparing the two standards makes the difference clear.
- ISO 42001 is an AI governance management system. It helps you establish policies, responsibilities, documentation, and due-diligence structure.
- AIUC-1 goes further into technical agent security, safety, and reliability assurance, especially where interfaces, permissions, runtime behavior, and testing are involved.
Put differently: ISO 42001 helps answer whether your organization has an AI governance system. AIUC-1 is moving toward answering whether the agent interfaces inside that system are actually defensible.
What enterprises will start asking vendors for
If this direction holds, expect enterprise buyers to get much more concrete over the next two buying cycles. Instead of broad questions about responsible AI, they will start asking for protocol-aware assurance.
- Which MCP servers, agent connectors, and tool interfaces are approved?
- How are those interfaces authenticated and encrypted?
- What actions are logged at the tool-call level?
- How do you monitor third-party access and detect changes over time?
- How are non-human identities defined, verified, and limited?
- Can you show buyer-ready evidence without forcing the buyer through a full custom audit?
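One way to see what "buyer-ready evidence" looks like at the smallest grain is a single structured tool-call record that answers the questions above directly. The schema and values here are hypothetical, sketched only to show the shape of an answer; no standard mandates these exact fields.

```python
import json
from datetime import datetime, timezone

# One hypothetical tool-call log entry, shaped around the buyer questions:
# which interface, which identity, which permissions, and was it approved.
entry = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "agent_id": "agent://billing-assistant/v3",        # unique non-human identity
    "interface": "mcp://mcp.payments.example/refund",  # the tool interface called
    "approved": True,                                   # on the allowlist at call time
    "permissions": ["refund:create"],                  # scopes granted, not inherited
    "transport": "https+oauth2.1",                     # authenticated, encrypted path
}

record = json.dumps(entry, indent=2)
```

A vendor that can export records like this, filtered by buyer, answers most of the checklist without a custom audit; a vendor that can only point at a policy document cannot.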
That last point matters more than most teams realize. Buyers do not just want controls. They want evidence they can use: more than a policy binder, but less heavyweight than a traditional GRC rollout.
What teams should implement now
You do not need to wait for every framework to settle before making progress. A practical first move is to build around four ideas: approved tools, verified identities, logged actions, and monitored third parties.
- Create an approval list for the MCP servers, APIs, and agent interfaces your environment actually allows.
- Require strong authentication and encrypted transport across those interfaces.
- Use runtime containment for higher-risk MCP or agent execution paths.
- Log tool calls in a way that is useful for security review and buyer diligence, not just debugging.
- Track third-party access continuously instead of relying only on one-time vendor review.
- Move toward unique agent identities and permission-ready access models instead of broad shared credentials.
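The first, fourth, and last items on that list compose naturally into one runtime gate: check the interface against the allowlist, check the requested permissions against what the agent identity was granted, and log the decision either way. This is a minimal sketch under assumed names (`APPROVED_INTERFACES`, `gated_tool_call`), not any vendor's implementation.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("tool-calls")

# Illustrative allowlist: approved interface -> permissions granted to it.
APPROVED_INTERFACES = {
    "mcp://mcp.search.example/query": {"search:read"},
}


def gated_tool_call(agent_id: str, interface: str, needed: set[str]) -> bool:
    """Allow a tool call only if the interface is approved and the requested
    permissions are a subset of what was granted; log the decision either way."""
    granted = APPROVED_INTERFACES.get(interface)
    allowed = granted is not None and needed <= granted
    log.info("agent=%s interface=%s needed=%s allowed=%s",
             agent_id, interface, sorted(needed), allowed)
    return allowed


# An approved interface with in-scope permissions passes; anything else is
# denied and still leaves a log line for security review and buyer diligence.
gated_tool_call("agent://research/v1", "mcp://mcp.search.example/query", {"search:read"})
gated_tool_call("agent://research/v1", "mcp://unknown.example/run", {"admin"})
```

Note that the deny path logs too: for diligence purposes, the record of what was refused is often as valuable as the record of what ran.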
None of that requires turning your organization into a giant compliance machine. But it does require treating agent connections like real security boundaries.
How CraftedTrust fits
This is where CraftedTrust AI Governance, MCP Trust, the Runtime Gateway direction, and Touchstone research and advisories fit together.
The point is not to claim that a single product magically solves governance. The point is to turn governance expectations into buyer-ready evidence for connected agents.
- AI Governance helps teams inventory AI systems, connect evidence to decisions, and avoid governance by spreadsheet.
- MCP Trust gives buyers a public trust layer for MCP servers, including registry context, scanning, and certification workflow.
- Runtime Gateway points toward runtime telemetry, policy checks, and live proof for teams that need more than pre-approval review.
- Touchstone research adds advisories, checks, and research context so trust decisions are grounded in current findings, not marketing claims.
That combination maps well to where AIUC-1 is heading: approved interfaces, verified identities, logged actions, and monitored third parties.