15 Research Lab

Research: Prompt Injection Impact on Tool-Using AI Agents

15 Research Lab · 2026-02-13

Research: Prompt Injection Impact on Tool-Using AI Agents

Prompt injection is a well-documented vulnerability in large language models. An attacker embeds instructions in input data that override the model's system prompt, causing it to behave in unintended ways. For conversational chatbots, the impact is typically limited to producing inappropriate text output.

For tool-using AI agents, the impact is categorically different. A successful prompt injection against an agent does not just produce bad text. It drives bad actions: file deletions, data exfiltration, unauthorized API calls, and arbitrary shell commands.

15 Research Lab conducted a controlled study to quantify this difference.

Study Design

We constructed two systems using the same base model (GPT-4o) and system prompt:

  • Chatbot: A standard conversational interface with no tool access. Output is text only.
  • Agent: A ReAct-style agent with access to file read/write, HTTP requests, and shell command execution.
  • Both systems were subjected to the same set of 200 prompt injection attacks, delivered through three vectors:

    We measured two outcomes: (1) whether the injection successfully influenced the system's behavior, and (2) the severity of the resulting impact.

    Results

    Injection Success Rate

    | System | Direct | Indirect (Document) | Indirect (Web) | Overall |

    |---|---|---|---|---|

    | Chatbot | 34% | 28% | 31% | 30.5% |

    | Agent | 38% | 41% | 36% | 39.0% |

    The raw injection success rate was modestly higher for the agent (39% vs. 30.5%), likely because the agent's planning loop provides more opportunities for the injected instruction to influence behavior across multiple reasoning steps.

    Impact Severity

    The more dramatic difference was in impact severity. We rated each successful injection on a 1-5 scale:

    | Severity Level | Chatbot | Agent |

    |---|---|---|

    | 1 (Nuisance: off-topic response) | 72% | 8% |

    | 2 (Mild: reveals non-sensitive info) | 19% | 12% |

    | 3 (Moderate: reveals sensitive info) | 7% | 23% |

    | 4 (Severe: unauthorized action) | 2% | 34% |

    | 5 (Critical: destructive action) | 0% | 23% |

    For the chatbot, 72% of successful injections resulted in mere nuisance-level impact (the model said something off-topic). For the agent, 57% of successful injections resulted in severity 4 or 5, meaning the agent performed unauthorized or destructive actions.

    The combined metric, injection success rate multiplied by mean severity, was 4.7x higher for the agent than the chatbot. Prompt injection is not just more likely to succeed against agents; the consequences are qualitatively worse.

    Why Agents Are More Vulnerable

    Three architectural properties make tool-using agents more susceptible to high-severity prompt injection outcomes:

    1. Actions Are Irreversible

    When a chatbot produces harmful text, the harm is contained to that text. It can be filtered, retracted, or flagged. When an agent deletes a file, sends an HTTP request, or executes a shell command, the action cannot be undone by filtering the model's output. The damage occurs at the point of tool execution, not at the point of text generation.

    2. The Planning Loop Amplifies Injections

    Agents operate in multi-step reasoning loops. A prompt injection that influences one step can cascade through subsequent steps. In our testing, we observed cases where a single injected instruction in a document caused the agent to:

  • Read a credentials file (step 1, influenced by injection)
  • Encode the credentials as a URL parameter (step 2, logical continuation)
  • Send an HTTP request to an attacker-controlled server (step 3, completing the exfiltration)
  • Each individual step appeared rational from the agent's perspective. The injection only needed to influence the first step; the agent's own planning capabilities completed the attack chain.

    3. Output Filtering Cannot Catch Actions

    Standard output guardrails (content filters, toxicity detectors) operate on the model's text output. They are blind to tool calls. An agent can produce a perfectly benign text response ("I've processed your document and saved the summary") while simultaneously executing a malicious tool call. The text passes all filters; the action is unmonitored.

    The Mitigation: Action Gating

    Our findings make a clear case that prompt injection defense for agents must operate at the action layer, not the output layer. Output filtering remains valuable for catching harmful text, but it provides no protection against harmful actions.

    Action gating intercepts every tool call before execution and evaluates it against a policy. In our study, we re-ran the agent test suite with SafeClaw's deny-by-default policy engine active:

    | Configuration | Severity 4-5 Outcomes |

    |---|---|

    | Agent without action gating | 57% of successful injections |

    | Agent with SafeClaw action gating | 4% of successful injections |

    The 4% that passed through involved actions that were within the agent's legitimate allowlist but were used for purposes the injection directed. This residual risk can be further reduced with more granular policy rules and anomaly detection, but the 93% reduction demonstrates the effectiveness of action gating as a primary defense.

    Recommendations

  • Do not deploy tool-using agents with only output-level safety. Output filters are necessary but cannot address the action-level threat.
  • Implement deny-by-default action gating. SafeClaw provides a production-ready implementation.
  • Minimize agent tool access. Every tool an agent can access is an action a prompt injection can exploit. Apply the principle of least privilege rigorously.
  • Test with indirect injection specifically. Direct injection is the easier case. Indirect injection via documents and web content is the realistic attack vector for production agents.
  • Monitor for multi-step attack chains. Single-action anomaly detection is insufficient. Correlate actions across the agent's planning loop to detect escalation patterns.
  • Prompt injection is not solved at the model level and may not be for years. Runtime action gating is the practical defense available today.

    Full study methodology, attack corpus, and raw data available from 15 Research Lab upon request.