
A conversation is happening in every serious AI team right now. It starts the moment an agent takes an action, whether committing code, sending an email, modifying a record, or calling an external API, and someone asks:
"Who authorized that?"
Not which user triggered the workflow.
Not which service account the process ran under.
But specifically:
which agent, acting on whose authority, with what delegated permissions, at what point in the workflow?
In most enterprise systems today, there is no clean answer. And as agents move from assistants to autonomous actors, that gap is becoming one of the most urgent unsolved problems in enterprise AI.
Agents are no longer sitting at the edge of workflows, suggesting what a human should do next. They are inside the system now, reading from production databases, writing to them, orchestrating multi-step processes, spawning other agents, and operating across APIs and MCP servers to make decisions that are often irreversible by the time a human sees them.
In many companies, this is already live. In most others, it’s on the roadmap for 2026. The question has quietly shifted. It is no longer whether enterprises will deploy autonomous agents. It is whether they will be able to govern them when they do.
This isn't a hypothetical risk. We're already seeing what happens when agents operate without a proper identity layer. A Replit coding agent deleted 1,206 customer records in seconds, operating at 5,000 operations per minute, a pace that makes per-action human consent structurally impossible. At Salesloft, OAuth tokens delegated to agents remained active months after workflows completed, creating a durable attack surface with no expiration. EchoLeak (CVE-2025-32711, CVSS 9.3) demonstrated a sub-agent silently embedding unauthorized actions within routine responses, expanding scope across a delegation chain rather than attenuating it. A single credential compromise at JLR shut down factories for five weeks, costing £1.9 billion. Together, these incidents map the new threat surface introduced by autonomous execution. IBM's research found that shadow AI, agents operating outside governed identity infrastructure, adds an average of $670,000 to the cost of a breach. The pattern across every incident is the same: inadequate identity and authorization architecture.
This is not just a security problem. It is becoming a compliance deadline. The EU AI Act's requirements for high-risk systems take full effect in August 2026. Article 14 mandates demonstrable human oversight of autonomous systems. Article 99 sets penalties at up to €35 million or 7% of global annual turnover. The SEC's Cyber and Emerging Technologies Unit now requires reporting material AI-related incidents within four business days.
Every software control system starts with identity. If you don’t know who, or what, is acting, you cannot meaningfully control it, audit it, or reason about it. This is the gap that shows up everywhere once agents enter the system. You cannot enforce least privilege if agents inherit broad credentials. You cannot build reliable audit trails if multiple agents share the same identity. You cannot reason about delegation if you cannot see how authority flows from one step to the next. And you cannot answer the one question that matters when something goes wrong:
who did what, on whose authority, under what constraints?
The industry has already started moving in this direction. Standards like WIMSE and OAuth 2.1 are laying the groundwork for workload identity in distributed systems. Vendors are beginning to explore agent credentialing. But there is still a missing piece. There is no open, auditable identity layer designed specifically for autonomous agents.
At the identity layer, trust is not a feature. It is the entire system. A proprietary identity solution asks enterprises to trust a black box at the most sensitive layer of their stack, the layer that determines what is allowed, what is denied, and what is recorded. Most organizations will not accept that. And they shouldn't. The identity systems that became global standards (TLS, OAuth, SSH, SAML) did not win because they were the most convenient. They won because they were open. They could be inspected, audited, challenged, and improved by the people who depended on them. We believe that agent identity should follow the same path.
Today, we're introducing ZeroID: an open source identity platform built specifically for autonomous agents. ZeroID is not a retrofit of human authentication systems. It is designed from the ground up for how agents actually operate: independently, continuously, and often without direct human oversight at every step.
At its core, ZeroID gives every agent its own cryptographically verifiable identity. Not a shared service account. Not a borrowed user session. An identity that is scoped to what the agent is allowed to do, bound to the workflow it belongs to, and limited in time.
When one agent delegates work to another, that delegation is not implicit. It is captured explicitly and carried forward. The receiving system does not have to guess whether the action is allowed. It can verify the full chain of authority directly from the credential itself. This is the critical shift. Identity is no longer just about who the agent is. It includes what the agent is doing, what it has already done, and what it is allowed to do next. Every step in the workflow becomes traceable. Every action becomes attributable. Every constraint becomes enforceable.
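The verification step is worth making concrete. Here is a minimal sketch, in plain Python rather than the ZeroID SDK, of how a receiving service could walk a delegation chain expressed as nested `act` claims in the style of OAuth 2.0 Token Exchange (RFC 8693). The claim values and SPIFFE IDs are illustrative, not ZeroID's actual wire format.

```python
# Illustrative decoded token claims: a remediation agent acting under
# authority delegated from an investigation agent, which in turn acted
# under a monitoring agent. Nested "act" claims follow the RFC 8693 pattern.
claims = {
    "sub": "spiffe://example.org/agent/remediation-agent",
    "scope": "write:firewall-rules",
    "act": {
        "sub": "spiffe://example.org/agent/investigation-agent",
        "act": {"sub": "spiffe://example.org/agent/monitoring-agent"},
    },
}

def delegation_chain(claims: dict) -> list[str]:
    """Return the chain of authority, current actor first."""
    chain = [claims["sub"]]
    act = claims.get("act")
    while act is not None:
        chain.append(act["sub"])
        act = act.get("act")
    return chain

chain = delegation_chain(claims)
print(" <- ".join(c.rsplit("/", 1)[-1] for c in chain))
# remediation-agent <- investigation-agent <- monitoring-agent
```

In production the claims would come from verifying the token's signature first; the walk itself is just a recursive read of `act.sub`, which is what makes the full chain auditable from the credential alone.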
Scenario 1: Autonomous Security Response
No human delegation, org policy authorizes the agent chain
A security monitoring agent detects an anomaly at 3 am. No human is in the loop. It needs to investigate, contain, and remediate at machine speed.
Without ZeroID, the monitoring, investigation, and remediation agents all share the same service account. If the remediation agent patches the wrong system, there is no record of which agent decided what or why it was authorized. This is the pattern behind real incidents: one credential, unlimited blast radius, no chain of custody.
With ZeroID, each agent holds its own credential. Delegation is explicit; the monitoring agent can only pass down what it was granted, and the policy caps how deep the chain can go. By the time the remediation agent acts, its token carries the full provenance of every decision that led there.
```python
from highflame.zeroid import ZeroIDClient

zid = ZeroIDClient(api_key="zid_sk_...")

# Monitoring agent authenticates with its own credential
# (api_key comes from agents.register() at deploy time);
# its grant includes every scope it may later pass down the chain
monitoring = zid.tokens.issue(
    grant_type="api_key",
    api_key=monitoring_agent_api_key,
    scope="read:logs read:network write:firewall-rules",
)

# Anomaly confirmed — delegate to investigation agent
# scope cannot exceed what monitoring was granted
investigation = zid.tokens.issue(
    grant_type="urn:ietf:params:oauth:grant-type:token-exchange",
    subject_token=monitoring.access_token,
    scope="read:logs read:network write:firewall-rules",
)

# Threat confirmed — delegate to remediation, scope narrowed
remediation = zid.tokens.issue(
    grant_type="urn:ietf:params:oauth:grant-type:token-exchange",
    subject_token=investigation.access_token,
    scope="write:firewall-rules",
)

# delegation_depth: 2 — CredentialPolicy cap reached
# any further delegation returns invalid_scope

# Introspect the remediation token — the full chain is there
chain = zid.tokens.introspect(remediation.access_token)
# chain.sub              → spiffe://.../remediation-agent
# chain.act["sub"]       → spiffe://.../investigation-agent
# chain.scope            → "write:firewall-rules"
# chain.delegation_depth → 2
```

At 9 am, when the security team reviews what happened overnight, every decision is attributed, every scope constraint is recorded, and the remediation action traces back to the org policy that authorized it, not a mystery service account that had access to everything.
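The two constraints enforced above, scope attenuation and the delegation-depth cap, reduce to a simple check. This is an illustrative sketch of the rule, not ZeroID's actual server logic; `MAX_DEPTH` stands in for the CredentialPolicy cap.

```python
MAX_DEPTH = 2  # stand-in for the CredentialPolicy cap in the scenario above

def check_delegation(parent_scope: str, requested_scope: str,
                     parent_depth: int) -> tuple[bool, str]:
    """Approve a token exchange only if the requested scope is a subset
    of the parent token's scope and the depth cap is not exceeded."""
    if parent_depth + 1 > MAX_DEPTH:
        return False, "invalid_scope: delegation depth cap reached"
    if not set(requested_scope.split()) <= set(parent_scope.split()):
        return False, "invalid_scope: requested scope exceeds parent grant"
    return True, "ok"

print(check_delegation("read:logs read:network", "read:logs", 0))
# (True, 'ok')
print(check_delegation("read:logs", "read:logs write:firewall-rules", 0))
# (False, 'invalid_scope: requested scope exceeds parent grant')
print(check_delegation("read:logs", "read:logs", 2))
# (False, 'invalid_scope: delegation depth cap reached')
```

The point of the subset rule is that authority can only shrink as it moves down the chain; a compromised leaf agent cannot mint itself broader access than its delegator held.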
Scenario 2: Developer Authorizes a Coding Agent — Then Steps Away
Human delegates once, the agent works autonomously
A developer assigns a ticket and goes to lunch. The agent reads the codebase, writes the fix, runs tests, and opens a PR, without the developer approving each step.
Without ZeroID, the agent commits under the developer's credentials. Code review sees a single author. There is no way to distinguish what the human wrote from what the agent generated. If the agent makes a mistake, there is no clear chain of accountability.

With ZeroID, the developer's action is registration, authorizing the agent into existence. The created_by field is carried as the owner claim in every token issued for this agent. From that point on, the agent operates under its own credentials. The developer's credentials are never exposed to the agent.
```python
from highflame.zeroid import ZeroIDClient

zid = ZeroIDClient(api_key="zid_sk_...")

# Developer registers the agent for this ticket
# created_by is stamped as `owner` in every token this agent receives
agent = zid.agents.register(
    name="Coding Agent — ENG-4421",
    external_id="coding-agent-eng4421",
    sub_type="autonomous",
    trust_level="first_party",
    created_by="dev@company.com",
    labels={"ticket": "ENG-4421"},
)

# Agent authenticates with its own credential — developer is not in this flow
token = zid.tokens.issue(
    grant_type="api_key",
    api_key=agent.api_key,
    scope="read:codebase write:branch run:tests open:pr",
)

# token carries:
#   sub:   spiffe://.../agent/coding-agent-eng4421
#   owner: dev@company.com  ← human authorization, in every token
#   scope: read:codebase write:branch run:tests open:pr

# Work complete — revoke immediately
zid.tokens.revoke(token.access_token)
# active: false on any subsequent introspection
```

Code review knows exactly what to scrutinize. The agent's work is clearly attributed. If something goes wrong, the authorization chain is in the log.
Scenario 3: MCP Server Authorization
Giving a tool server its own identity, not borrowing the user's
Most MCP deployments today haven't solved a basic question:
how does the MCP server prove its identity to downstream services on the user's behalf?
Most deployments resolve it one of two wrong ways: either the user's token is forwarded directly to downstream services (impersonation), or a shared service credential stands in for everyone (no per-user accountability). Neither is auditable at the right granularity. Neither is revocable cleanly when a user's session ends.
ZeroID gives each MCP server its own cryptographic identity. When a user authorizes a session, the delegation is captured explicitly: the user becomes the delegating authority in the token, and the MCP server remains the acting subject. Every downstream API call carries a credential that proves both which server made the call and on whose authority.
```python
from highflame.zeroid import ZeroIDClient

zid = ZeroIDClient(api_key="zid_sk_...")

# MCP server has its own identity — registered once at deploy time
server = zid.agents.register(
    name="GitHub MCP Server",
    external_id="mcp-github-v1",
    identity_type="mcp_server",
    trust_level="first_party",
    created_by="platform-team@company.com",
)

# Server authenticates at startup with its own credential
server_token = zid.tokens.issue(
    grant_type="api_key",
    api_key=server.api_key,
    scope="repo:read pr:write issues:read",
)

# User authorizes a session — delegation captured via token exchange
# user's identity flows into act.sub; MCP server stays in sub
session_token = zid.tokens.issue(
    grant_type="urn:ietf:params:oauth:grant-type:token-exchange",
    subject_token=server_token.access_token,
    actor_token=user_pkce_token,  # user's PKCE-issued token
    scope="repo:read pr:write",   # user may grant a subset
)

# session_token carries:
#   sub:     spiffe://.../mcp_server/mcp-github-v1
#   act.sub: alice@company.com  ← who authorized this session
#   scope:   repo:read pr:write

# Session ends — revoke immediately
zid.tokens.revoke(session_token.access_token)
```

The MCP server is audited independently of the user. The user's credentials are never passed to it. When the session ends or when the user revokes, the credential is immediately invalidated. The acting subject and the delegating authority are both cryptographically embedded in every token that interacts with a downstream API.
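On the receiving side, the downstream API's admission check is straightforward. A minimal sketch, assuming the token has already been signature-verified and introspected into a claims dict; the registry, claim names, and values here are illustrative, not ZeroID's actual schema.

```python
# Hypothetical allowlist of MCP server identities known to this API.
REGISTERED_MCP_SERVERS = {"spiffe://example.org/mcp_server/mcp-github-v1"}

def authorize_request(claims: dict, required_scope: str) -> bool:
    """Admit a call only from a known, unrevoked MCP server whose
    session grant covers the required scope."""
    if not claims.get("active", False):                  # revoked / expired
        return False
    if claims.get("sub") not in REGISTERED_MCP_SERVERS:  # unknown caller
        return False
    if required_scope not in claims.get("scope", "").split():
        return False
    # the audit log records both parties: the server and the authorizing user
    print(f"allow {claims['sub']} on behalf of {claims['act']['sub']}")
    return True

claims = {
    "active": True,
    "sub": "spiffe://example.org/mcp_server/mcp-github-v1",
    "act": {"sub": "alice@company.com"},
    "scope": "repo:read pr:write",
}
authorize_request(claims, "pr:write")                      # allowed, logged
authorize_request({**claims, "active": False}, "pr:write") # revoked: denied
```

Because both the server (`sub`) and the user (`act.sub`) are present in the same credential, one check yields per-server and per-user accountability at once.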
Highflame's commercial platform, the Agent Control Platform, builds governance, enforcement, and observability on top of the identity layer. The identity layer sits underneath all of that. It defines what is trusted in the first place.
We are open-sourcing ZeroID for three reasons.
Join the ZeroID Community
The repo is live: github.com/highflame-ai/zeroID
Read the docs, try the quickstart, open an issue. If you are building agent infrastructure and have opinions about how identity should work, we especially want to hear from you.
Discord:
X: @highflame_ai
The identity layer for the agentic era is being written right now. We are writing the first version. We want the community to write the rest.
Want to try it out or sign up for a free trial?