Last week, we submitted a formal comment to the NIST National Cybersecurity Center of Excellence (NCCoE) in response to their solicitation on AI agent identity and authorization. NIST is gathering input from industry on how to build identity frameworks for autonomous AI systems, and we felt strongly that this was the right moment to weigh in.

This post covers what we said, why we said it, and why we believe small companies like ours have both the right and the obligation to participate in shaping these standards.

Why NIST Is Asking These Questions Now

AI agents are no longer theoretical. They are running in production environments, making API calls, accessing databases, orchestrating workflows, and connecting to external tools through protocols like the Model Context Protocol (MCP). The autonomy is real. The scale is growing. And the identity infrastructure was never designed for any of it.

NIST's NCCoE recognizes that the existing identity and access management (IAM) stack was built for humans and, to a lesser extent, for traditional service accounts. AI agents break the model in fundamental ways. They act with delegated authority. They chain tool calls across trust boundaries. They make decisions that affect real systems, often without a human in the loop.

The NCCoE solicitation asks the right questions: How should AI agents be identified? How should their authorization be scoped? How do you maintain accountability when the actor is autonomous software?

"If you wait for the standards to be written before you engage, you are accepting rules that someone else designed for their use cases, not yours."

The Core Arguments in Our Comment

Our submission focused on five key areas that we believe any workable AI agent identity framework must address. These are not abstract recommendations. They are drawn directly from the problems we encounter building and deploying CraftedTrust, our agentic security platform.

1. Agents Need Verifiable Identity Independent of Their Operators

Today, most AI agents authenticate using their operator's credentials: a shared API key, a service account, an OAuth token belonging to the organization that deployed them. This conflates the identity of the agent with the identity of whoever is running it.

That conflation is a problem. If three different agents all authenticate with the same service account, you have lost the ability to distinguish between them in your audit logs. You cannot apply different permission sets. You cannot revoke one without disrupting the others. And if one of those agents is compromised through prompt injection or a supply chain attack on its toolchain, the blast radius extends to everything that shared credential touches.

We argued that every AI agent should have its own cryptographically verifiable identity, separate from the operator identity, the user identity, and the platform identity. This is not a new concept. X.509 certificates, DID (Decentralized Identifier) documents, and verifiable credentials already provide the building blocks. What is missing is a standard that specifies how to apply them to AI agents specifically.
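To make this concrete, here is a minimal sketch of what an operator-independent agent identity could look like. It derives a DID-style identifier from the agent's own public key, so two agents run by the same operator still get distinct, verifiable identities. The `did:example` method, the key bytes, and the document fields are all illustrative assumptions, not a real DID method or CraftedTrust's implementation.

```python
import hashlib

def make_agent_did(public_key_bytes: bytes) -> dict:
    """Build a minimal DID-style identity document for an agent.

    The identifier is derived from the agent's own public key, so it is
    independent of whatever operator or service account runs the agent.
    "did:example" is a placeholder method, not a registered one.
    """
    key_digest = hashlib.sha256(public_key_bytes).hexdigest()
    did = f"did:example:{key_digest[:32]}"
    return {
        "id": did,
        "verificationMethod": [{
            "id": f"{did}#key-1",
            "type": "Ed25519VerificationKey2020",   # illustrative key type
            "publicKeyHex": public_key_bytes.hex(),
        }],
    }

# Three agents sharing one operator still get three distinct identities,
# so audit logs and revocation can target each one individually.
doc_a = make_agent_did(b"agent-a-public-key")
doc_b = make_agent_did(b"agent-b-public-key")
assert doc_a["id"] != doc_b["id"]
```

Because the identifier is a function of the key material rather than of an operator account, revoking one agent's key invalidates exactly one identity and leaves the others untouched.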

2. Authorization Must Be Granular and Auditable

OAuth scopes were a good starting point for human-facing applications. They are not sufficient for agentic AI. An OAuth scope like read:documents tells you that an agent can read documents, but it does not tell you which documents, under what conditions, for how long, or whether the agent can chain that read into a write operation on another service.

We recommended that NIST promote authorization models that support resource-level scoping, time-bound grants, conditional access, and explicit constraints on how data obtained under one permission can flow into subsequent tool calls.

The granularity matters because AI agents do not behave like human users. A human reads a document and makes a judgment call. An agent reads a document, extracts data, feeds it into another tool, generates output, and pushes it to a third system. The authorization model needs to account for that entire chain, not just the first hop.
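A sketch of what chain-aware, time-bound authorization could look like follows. The `Grant` structure, field names, and the `max_chain_depth` idea are assumptions for illustration, not an existing standard: the point is that a single grant encodes which resources, until when, and how far downstream the data may travel.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from fnmatch import fnmatch

@dataclass
class Grant:
    """One scoped, time-bound permission for a specific agent (illustrative)."""
    agent_id: str
    action: str               # e.g. "read", "write"
    resource: str             # glob pattern, e.g. "docs/finance/*"
    expires_at: datetime
    max_chain_depth: int = 1  # how many downstream hops the data may take

def is_allowed(grant: Grant, agent_id: str, action: str,
               resource: str, chain_depth: int) -> bool:
    """Check identity, action, resource pattern, expiry, and chain depth."""
    now = datetime.now(timezone.utc)
    return (grant.agent_id == agent_id
            and grant.action == action
            and fnmatch(resource, grant.resource)
            and now < grant.expires_at
            and chain_depth <= grant.max_chain_depth)

g = Grant("agent-42", "read", "docs/finance/*",
          datetime.now(timezone.utc) + timedelta(hours=1))
assert is_allowed(g, "agent-42", "read", "docs/finance/q3.pdf", chain_depth=1)
# The same read, pushed one hop further downstream, is denied:
assert not is_allowed(g, "agent-42", "read", "docs/finance/q3.pdf", chain_depth=2)
```

Contrast this with a bare OAuth scope like read:documents, which would permit both calls above and say nothing about expiry or downstream flow.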

3. Trust Scoring Should Be Standardized and Machine-Readable

When a human user connects to a service, there is an implicit trust relationship backed by organizational context: they are an employee, they passed a background check, they completed security training. AI agents have none of that context. The trust decision needs to be explicit, quantifiable, and consumable by other machines.

We proposed that NIST encourage the development of standardized trust scoring for AI agents and the tools they connect to. A trust score should draw on multiple verifiable factors rather than any single signal, and it should be computed the same way no matter who is asking.

Critically, these scores need to be machine-readable. An AI agent connecting to an MCP server should be able to programmatically evaluate the server's trust score before establishing a connection, not rely on a human reviewing a dashboard. The trust decision needs to happen at machine speed because that is the speed at which agents operate.
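A machine-speed gate could look like the sketch below: a server publishes its trust factors in a manifest, the agent collapses them into one weighted score, and the connection proceeds only above a threshold. The factor names, weights, and threshold here are invented for the example; they are not NIST's or CraftedTrust's actual scoring model.

```python
# Hypothetical weights and factor names -- not a standardized schema.
WEIGHTS = {"provenance": 0.4, "patch_cadence": 0.3, "incident_history": 0.3}
MIN_TRUST = 0.7

def trust_score(factors: dict) -> float:
    """Collapse per-factor scores (each in 0.0-1.0) into one weighted score."""
    return sum(WEIGHTS[name] * factors[name] for name in WEIGHTS)

def may_connect(server_manifest: dict) -> bool:
    """Gate an MCP connection on the server's published trust factors,
    with no human reviewing a dashboard in the loop."""
    return trust_score(server_manifest["trust_factors"]) >= MIN_TRUST

manifest = {"trust_factors": {"provenance": 0.9,
                              "patch_cadence": 0.8,
                              "incident_history": 0.6}}
assert may_connect(manifest)
```

The exact formula matters less than the contract: the score is published in a structured form, computed deterministically, and consumable by the agent itself before any connection is made.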

4. On-Chain Attestations Provide a Verification Layer Without Single-Authority Trust

One of the more forward-looking arguments in our comment addressed the verification problem. If trust scores and identity claims are issued by a single authority, that authority becomes both a single point of failure and a single point of compromise. A centralized identity provider that gets breached can issue fraudulent attestations for every agent in the ecosystem.

We argued that on-chain attestations, specifically using frameworks like the Ethereum Attestation Service (EAS), provide a verification layer that distributes trust across multiple independent attestors. When an agent's trust score is attested on-chain, it is publicly verifiable, tamper-evident, and not dependent on any single organization's infrastructure remaining available or uncompromised.

This is not about putting everything on a blockchain. It is about using on-chain attestations as an anchoring mechanism for claims that need to be independently verifiable. The agent's identity document, its trust score, its compliance status: these are claims that benefit from a verification layer that no single party controls.
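The anchoring pattern can be sketched without any blockchain dependency at all: the full claim document lives off-chain, only its digest is anchored (for example, inside an EAS attestation), and anyone holding the document can verify it against the public anchor. This is an illustration of the hash-anchoring idea, not the EAS SDK or wire format.

```python
import hashlib
import json

def claim_digest(claim: dict) -> str:
    """Canonicalize a claim document and hash it. Only this digest would
    be anchored on-chain; the full document stays off-chain."""
    canonical = json.dumps(claim, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def verify_against_anchor(claim: dict, anchored_digest: str) -> bool:
    """Anyone holding the claim can check it against the public anchor,
    without depending on the issuer being online or uncompromised."""
    return claim_digest(claim) == anchored_digest

claim = {"agent": "did:example:abc123",   # hypothetical identifiers
         "trust_score": 0.78,
         "issued": "2025-01-15"}
anchor = claim_digest(claim)              # published at attestation time
assert verify_against_anchor(claim, anchor)

# Any tampering with the claim breaks verification:
tampered = {**claim, "trust_score": 0.99}
assert not verify_against_anchor(tampered, anchor)
```

Because the anchor is tamper-evident and publicly readable, verification does not require trusting the party that originally issued the claim, which is exactly the single-authority failure mode the comment warned about.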

5. The MCP Ecosystem Specifically Needs Identity Standards

The Model Context Protocol has rapidly become the de facto standard for connecting AI agents to external tools and data sources. It is powerful, flexible, and growing fast. It is also, at the moment, a trust-on-first-use environment. When an agent connects to an MCP server, there is no standardized way to verify the server's identity, evaluate its security posture, or constrain the agent's behavior within that connection.

We highlighted MCP explicitly in our comment because it represents the exact scenario NIST needs to plan for: autonomous software connecting to arbitrary external servers, executing tool calls, and acting on the results. Without identity standards built into the protocol layer, every MCP connection is a potential trust boundary violation.

The MCP specification needs to define how agents identify themselves to servers, how servers present verifiable credentials to agents, and how both parties establish and enforce the scope of their interaction. This cannot be left to individual implementations. It needs to be part of the standard.
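A handshake of that shape could be sketched as follows: the server presents a signed credential listing its identity and scopes, and the agent verifies it before connecting. Everything here is hypothetical, and a real protocol would use asymmetric signatures verified against a published public key; HMAC with a shared issuer key stands in purely to keep the sketch dependency-free.

```python
import hashlib
import hmac
import json

# Stand-in for an asymmetric issuer key; illustrative only.
ISSUER_KEY = b"demo-issuer-key"

def issue_credential(server_id: str, scopes: list[str]) -> dict:
    """Issue a signed credential a server would present to agents."""
    body = {"server_id": server_id, "scopes": scopes}
    payload = json.dumps(body, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {**body, "signature": sig}

def agent_accepts(credential: dict, required_scope: str) -> bool:
    """The agent verifies the signature and the scope before connecting,
    instead of trusting the server on first use."""
    body = {k: v for k, v in credential.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, credential["signature"])
            and required_scope in credential["scopes"])

cred = issue_credential("mcp.example.net", ["tools:read"])
assert agent_accepts(cred, "tools:read")
assert not agent_accepts(cred, "tools:write")   # scope not granted
```

The point of the sketch is the shape of the exchange: identity and scope are verified cryptographically at connection time, at the protocol layer, rather than assumed per implementation.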

How CraftedTrust Already Implements These Principles

We did not write this comment from a theoretical perspective. Every argument in our submission reflects something we have already built or are actively building in CraftedTrust.

When we told NIST that authorization should be granular and auditable, we were describing what AgentGov already does. When we argued for standardized trust scoring, we were pointing to a system that already scores thousands of MCP servers. Our comment was not a wish list. It was a field report.

Why Small Companies Should Engage with Standards Bodies

There is a common assumption that standards are shaped by large enterprises and government contractors, and that small companies should wait until the rules are finalized and then comply. That assumption is wrong, and it is costly.

Standards bodies like NIST actively solicit input from the broader community precisely because they want diverse perspectives. The NCCoE in particular operates on a collaborative model that brings in organizations of all sizes. If you are building in the AI agent space, your operational experience is exactly the kind of input they need.

Engaging early matters because the frameworks being drafted today become tomorrow's compliance requirements, and the organizations that help write them are the ones whose use cases the rules will actually fit.

We are a small team. Filing this comment took time that could have gone to product development. It was worth it. The alternative, sitting on the sidelines while the identity framework for the entire AI agent ecosystem gets defined without our input, was not acceptable.

What Happens Next

NIST will review the submitted comments, synthesize the input, and use it to inform their guidance on AI agent identity and authorization. This is the beginning of a longer process, not a one-time event. We expect working groups, draft publications, and iterative refinements over the coming months and years.

We plan to stay involved at every stage. As the MCP ecosystem matures and AI agents become more autonomous, the identity question is only going to get more urgent. The organizations that engage with it now, that build systems reflecting sound identity principles, will be the ones best positioned when compliance requirements catch up to reality.


The Bottom Line

AI agents need identity frameworks that match their capabilities. NIST is building those frameworks, and they are asking for input. We gave ours: verifiable agent identity, granular authorization, standardized trust scoring, on-chain attestation, and MCP-specific identity standards. These are not future requirements. They are current necessities, and CraftedTrust already implements them.

If you are building AI agents, deploying MCP servers, or managing agentic workflows in production, this is your opportunity to shape the rules. Do not wait for someone else to define the standards your systems will have to live under. To learn more about how CraftedTrust approaches agent identity and trust, visit craftedtrust.com/platform.