High Flame Technology Series

Traditional Authentication Isn’t Enough for Agent & MCP Security

Sharath Rajasekar
AI Engineering
March 10, 2026

As Agent and MCP (Model Context Protocol) adoption grows, more systems are exposing internal capabilities as tools that AI agents can call. A repository can be read through an MCP tool. Customer records can be queried. Workflows can be triggered. Emails can be sent. In some environments, MCP tools can even modify infrastructure or move money. This is a powerful shift. AI systems are no longer just generating text. They are taking actions inside real systems.

Most teams secure these integrations the same way they secure any other API integration. A user authenticates with OAuth, the system stores their token, and the agent uses that token when calling tools on the user’s behalf. This is good security hygiene. Credential management is handled properly, tokens can be refreshed automatically, and the agent never needs to see raw credentials.

But traditional IAM-based authentication only answers one question: who (which user) is making the request. It does not answer the more important ones: which agent is acting, and should this request actually be allowed?

In most MCP implementations today, once a user is authenticated their agent can call tools exposed by an MCP server. If the request structure is valid and the token checks out, the call is forwarded downstream and executed. From the system’s perspective, everything looks legitimate.

A typical agent call to an MCP tool might look something like this:

{
    "tool": "read_customer_data",
    "arguments": {
        "customer_id": "cust_8421"
    },
    "auth": {"user_id": "user_a"}
}


The request is authenticated. The user identity is known. The system verifies the token and forwards the request to the backend. But nothing here tells the system whether user_a should be allowed to access cust_8421.

Authentication confirms a user's identity. It does not enforce appropriate agent behavior. When AI agents are allowed to autonomously choose which tools to call, that distinction becomes important.

When Agents Choose the Tools

Traditional software systems are predictable. A user clicks a button, and the application executes a specific action. Engineers know which operations the UI exposes and can reason about the security model. AI agents behave differently. They select tools dynamically while solving a task.

Consider a developer who only needs read access to a repository. The MCP server connected to that repository may expose several tools: listing files, reading branches, creating webhooks, or forcing pushes. An agent acting on behalf of that developer might decide that creating a webhook helps solve the task it is working on. The request is authenticated and the token is valid, so the system allows it. But the developer never intended to grant webhook creation privileges to their agent. The agent simply chose a tool that happened to exist.
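One way to close this gap is to check each tool call against the scopes the user actually granted, not just against a valid token. Below is a minimal sketch of that idea; the tool names, user ID, and scope store are hypothetical, not part of any real MCP server API.

```python
# Scopes each user has explicitly granted to their agent (hypothetical store).
GRANTED_SCOPES = {"user_dev_1": {"list_files", "read_branch"}}

def tool_call_allowed(user_id: str, tool: str) -> bool:
    """Allow a tool call only if the user granted a matching scope."""
    return tool in GRANTED_SCOPES.get(user_id, set())

# The agent autonomously picks "create_webhook". The token is valid,
# but the scope check rejects the call because it was never granted.
```

With this check in place, the agent can still read files and branches, but the webhook creation it chose on its own is denied even though authentication succeeded.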

The system verified identity. It never evaluated whether the action itself made sense. Timing introduces another subtle class of issues. Imagine an automated agent triggering a destructive tool such as deleting records from a system at two in the morning on a Sunday. Technically the request is valid. The user is authenticated, the token is correct, and the tool exists. Yet most organizations would want a rule that blocks or flags destructive operations outside expected operating hours.
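A time-based rule like this is easy to express once a policy layer exists. The sketch below is illustrative only; the tool names and business-hours window are assumptions, not a prescribed policy.

```python
from datetime import datetime, time

# Hypothetical list of destructive tools and an assumed operating window.
DESTRUCTIVE_TOOLS = {"delete_records", "drop_table"}
BUSINESS_HOURS = (time(8, 0), time(18, 0))  # weekdays, 08:00-18:00

def allowed_at(tool: str, when: datetime) -> bool:
    """Block destructive tools outside weekday business hours."""
    if tool not in DESTRUCTIVE_TOOLS:
        return True
    start, end = BUSINESS_HOURS
    is_weekday = when.weekday() < 5  # Mon=0 .. Fri=4
    return is_weekday and start <= when.time() <= end
```

A delete triggered at two in the morning on a Sunday fails this check even though the token behind it is perfectly valid.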

Authentication alone cannot express rules like that.

Multi-user environments introduce another risk. Suppose an MCP server exposes a tool called read_customer_data that accepts a customer_id argument. Two users may both be authenticated correctly, but nothing in the authentication layer prevents one user’s agent from requesting another user’s customer data simply by passing a different ID. From the API’s perspective the request is legitimate. From a business perspective it violates tenant boundaries.
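Enforcing that boundary means inspecting the tool's arguments, not just the caller's token. Here is a minimal sketch, assuming a hypothetical mapping of users to the customer records they own:

```python
# Hypothetical ownership map; in practice this would come from the
# application's tenancy model, not a hardcoded dict.
OWNERSHIP = {"user_a": {"cust_8421"}, "user_b": {"cust_9033"}}

def enforce_tenant_boundary(user_id: str, arguments: dict) -> bool:
    """Reject read_customer_data calls for a customer_id the caller does not own."""
    customer_id = arguments.get("customer_id")
    return customer_id in OWNERSHIP.get(user_id, set())
```

The same authenticated request is allowed for user_a and denied for user_b, which is exactly the distinction the token alone cannot make.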

There are also operations organizations simply do not want agents to execute autonomously. Actions such as transferring funds, revoking administrative access, or exporting entire datasets typically require explicit human workflows. Authentication does not distinguish between a human clicking a UI button and an agent deciding to call a tool.

Without another layer of control, the system treats those two situations exactly the same.
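One common way to make that distinction is to route agent-initiated high-risk calls into a human approval queue rather than executing them directly. The sketch below is an assumption about how such a gate could look; the tool names and queue are illustrative.

```python
# Hypothetical set of tools that always require explicit human sign-off.
REQUIRES_HUMAN = {"transfer_funds", "revoke_admin_access", "export_all_data"}

approval_queue: list = []

def dispatch(tool: str, arguments: dict, caller_type: str) -> str:
    """Execute low-risk calls; park agent-initiated high-risk calls for review."""
    if caller_type == "agent" and tool in REQUIRES_HUMAN:
        approval_queue.append({"tool": tool, "arguments": arguments})
        return "pending_approval"
    return "executed"
```

A human clicking a UI button still goes straight through; an agent reaching for the same tool is paused until someone approves it.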

Agent Autonomy Needs a New IAM Layer

These issues all point to the same gap. Agents and MCP systems need a way to enforce policy over tool execution tied to agent identity. Instead of forwarding every authenticated request directly to a downstream service, each tool call should be evaluated against a set of rules. Those rules consider who is making the request, which tool is being called, what arguments are being passed, and the broader context in which the call occurs.

In practice, that logic looks surprisingly simple. Many MCP implementations effectively behave like this:

if (tokenIsValid(userToken)) {
    forwardToolCall(request);
}

Once the token is validated, the tool executes. But safe MCP systems need a slightly different model.

if (tokenIsValid(userToken) && policy.allows(request) && inspection.passes(request)) {
    forwardToolCall(request);
}

User authentication remains necessary, but it becomes just one part of the decision. The policy engine evaluates the context of the request and determines whether the action should be allowed. A policy rule might look something like this:

permit(
    principal.role == "support",
    action == "read_customer_data",
    resource.customer_id == principal.customer_id
);

forbid(
    principal.type == "agent",
    action in ["transfer_funds", "export_all_data"]
);

The syntax is less important than the idea. Instead of trusting any authenticated request, the system evaluates each tool call against rules that reflect the organization’s actual access model. This approach mirrors how cloud infrastructure authorization works. Identity alone is not enough; requests are evaluated against policies that combine identity, action, and resource context before they are allowed to proceed. MCP simply introduces a new surface where that same model needs to be applied.
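To make the evaluation model concrete, here is a minimal sketch of an engine in the spirit of the rules above: any matching forbid rule vetoes the call, and otherwise at least one permit rule must match. The rule contents mirror the examples and are illustrative, not a real policy language.

```python
# Permit rules: each takes (principal, action, resource) and returns bool.
PERMITS = [
    lambda p, a, r: p.get("role") == "support"
    and a == "read_customer_data"
    and r.get("customer_id") == p.get("customer_id"),
]

# Forbid rules: agents may never call these high-risk tools.
FORBIDS = [
    lambda p, a, r: p.get("type") == "agent"
    and a in {"transfer_funds", "export_all_data"},
]

def allows(principal: dict, action: str, resource: dict) -> bool:
    """Deny if any forbid matches; otherwise require a matching permit."""
    if any(rule(principal, action, resource) for rule in FORBIDS):
        return False  # an explicit forbid always wins
    return any(rule(principal, action, resource) for rule in PERMITS)
```

The forbid-wins ordering matters: even if a permit rule would match, an explicit forbid blocks the call, which mirrors how deny rules take precedence in most cloud IAM systems.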

Inspecting What Flows Through Agents & MCPs

Controlling which tools can run is only part of the security story. Agents and MCP also carry risk in the content flowing through those tools. Tool arguments often originate from model output or external sources that the agent has read. That makes them vulnerable to prompt injection. A webpage or document might contain hidden instructions telling the agent to call a particular tool with a specific payload. The agent interprets the instruction as part of its reasoning and executes it. From the system's perspective the request still originates from a legitimate user. Authentication cannot detect this.

Inspecting tool arguments before execution provides a way to detect suspicious patterns and block them. There is also the risk of data leaving the system in unintended ways. Tool responses may contain sensitive information such as personal data, credentials, or internal documents. If that data flows directly back into the model’s context, it may leave the system through prompts, logs, or downstream outputs.

Inspection on the response path allows sensitive information to be detected and redacted before it reaches the model. Even familiar attack patterns appear in this environment. Tool inputs may contain SQL injection attempts, shell commands, or other malicious payloads. Scanning requests before they reach backend systems adds another defensive layer that looks much more like a web application firewall than a traditional authentication system.
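Both inspection paths can start as simple pattern matching. The sketch below shows one request-side scan and one response-side redaction; the patterns are toy examples for illustration, nothing like a production-grade WAF ruleset or DLP engine.

```python
import re

# Illustrative known-bad patterns for the request path.
INJECTION_PATTERNS = [
    re.compile(r"(?i)\bdrop\s+table\b"),                     # SQL injection fragment
    re.compile(r"(?i)ignore (all )?previous instructions"),  # prompt injection
    re.compile(r";\s*rm\s+-rf\b"),                           # shell payload
]

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def scan_arguments(arguments: dict) -> bool:
    """Return True if any argument value matches a known-bad pattern."""
    text = " ".join(str(v) for v in arguments.values())
    return any(p.search(text) for p in INJECTION_PATTERNS)

def redact_response(text: str) -> str:
    """Mask email addresses before the response re-enters the model context."""
    return EMAIL.sub("[REDACTED_EMAIL]", text)
```

Real deployments would layer on classifier-based detection and broader PII redaction, but even this shape makes the point: inspection sits on both sides of the tool call, independent of who authenticated.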

Traditional User Authentication Is Only the Starting Point

MCP makes it possible for AI agents to interact directly with real systems. That makes user authentication necessary, but it is not sufficient. Safe MCP deployments require three layers of protection working together.

(1) The system must know the agent's identity as well as which user an agent is acting on behalf of and manage credentials safely.

(2) It must enforce policies that determine which MCP tools can be executed and under what conditions.

(3) And it must inspect the content flowing through those tools to detect prompt injection, data exfiltration, and malicious payloads.

Traditional user authentication partially solves the first of these problems. As MCP adoption grows and agents gain the ability to take real actions inside production systems, the other two become just as important. Identity tells you who made a request. But safe MCP systems also need to control what that request is allowed to do, and what data is allowed to flow through it. User authentication is the starting point for MCP security. It should not be the finish line.

Want to try it out or sign up for a free trial?

Book A Demo
