Research: File System Attack Vectors in AI Agent Deployments
Abstract
AI agents with file system access represent one of the most common — and most dangerous — deployment configurations in production environments. Our research team at 15 Research Lab conducted a systematic analysis of file system attack vectors across 47 distinct AI agent frameworks, identifying recurring vulnerability patterns that persist even in mature deployments. This brief presents our taxonomy of file system threats and evaluates current mitigation approaches.
Threat Model
When an AI agent is granted file system permissions — whether for code generation, document processing, or data analysis — the attack surface expands dramatically. We categorize file system attack vectors into four primary classes:
1. Path Traversal Exploits

In our controlled testing environment, 31 of 47 agent frameworks (66%) were susceptible to some form of path traversal when processing user-supplied file paths. Agents instructed to "read the config file at ../../etc/passwd" complied in 19 cases without any guardrail intervention. Relative path resolution remains a persistent blind spot.
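As a concrete illustration, the sketch below shows one way to canonicalize a user-supplied path and reject anything that escapes the agent's workspace. The workspace location and function name are hypothetical, and the check assumes Python 3.9+ for Path.is_relative_to.

```python
from pathlib import Path

# Hypothetical sandbox root; in practice this comes from deployment config.
WORKSPACE = Path("/srv/agent/workspace").resolve()

def safe_resolve(user_path: str) -> Path:
    """Canonicalize a user-supplied path and reject workspace escapes."""
    # Joining an absolute path replaces WORKSPACE entirely, and resolve()
    # collapses ../ components and symlinks, so both escape routes end up
    # checked against the same canonical form.
    candidate = (WORKSPACE / user_path).resolve()
    if not candidate.is_relative_to(WORKSPACE):  # Python 3.9+
        raise PermissionError(f"path escapes workspace: {user_path!r}")
    return candidate

# "read the config file at ../../etc/passwd" resolves to /etc/passwd -> rejected.
```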
2. Symbolic Link Attacks

Symbolic link attacks proved effective against 24 of 47 frameworks (51%). By placing a symlink in an agent's working directory that pointed to sensitive system files, we demonstrated that agents would unknowingly read and transmit the contents of files outside their intended sandbox. This is particularly concerning in shared hosting environments.
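A minimal countermeasure, sketched below for a POSIX host with illustrative paths and names, resolves every symlink component before opening and uses O_NOFOLLOW so a link swapped in after the check still cannot be followed.

```python
import os

WORKSPACE = os.path.realpath("/srv/agent/workspace")  # hypothetical sandbox root

def open_without_symlink_escape(path: str):
    """Open a file for reading only if its real location is inside the workspace."""
    real = os.path.realpath(path)  # follows every symlink component
    if os.path.commonpath([real, WORKSPACE]) != WORKSPACE:
        raise PermissionError(f"symlink escape: {path} -> {real}")
    # O_NOFOLLOW guards the window between the realpath() check and the open:
    # if the final component is replaced by a symlink in between, the open fails.
    fd = os.open(real, os.O_RDONLY | os.O_NOFOLLOW)
    return os.fdopen(fd, "rb")
```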
3. Sensitive File Discovery

Agents with recursive directory listing capabilities consistently discovered and attempted to process files containing credentials. In our test environment, seeded with .env files, SSH keys, and API tokens, agents identified and read sensitive files in 89% of trials when given broad "find relevant configuration" instructions.
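One lightweight control is a filename blocklist applied during enumeration, so credential files never enter the agent's context. The pattern list and helper names below are our own illustrative choices, seeded with the file types used in our trials.

```python
import fnmatch
from pathlib import Path

# Illustrative patterns covering the credential files seeded in our trials.
SENSITIVE_PATTERNS = [
    ".env", ".env.*",             # environment files
    "id_rsa", "id_ed25519",       # SSH private keys
    "*.pem", "credentials.json",  # certificates and API credentials
]

def is_sensitive(name: str) -> bool:
    """True if a filename matches a known credential pattern."""
    return any(fnmatch.fnmatch(name, pat) for pat in SENSITIVE_PATTERNS)

def listable(root: Path):
    """Yield only the files an agent may see during recursive listing."""
    for p in root.rglob("*"):
        if p.is_file() and not is_sensitive(p.name):
            yield p
```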
4. Malicious File Writes

Agents with write access posed the highest risk. We observed agents overwriting critical configuration files, creating new executable scripts, and modifying .bashrc files when given ambiguous instructions. In 12 cases, agent-written files introduced exploitable vulnerabilities into the host system.
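Write access calls for the strictest gating. Below is a minimal sketch, assuming a hypothetical layout in which only the output/ and tmp/ subdirectories are writable and shell startup files are always denied.

```python
from pathlib import Path

WORKSPACE = Path("/srv/agent/workspace").resolve()    # hypothetical sandbox root
WRITABLE = [WORKSPACE / "output", WORKSPACE / "tmp"]  # write allowlist
FORBIDDEN_NAMES = {".bashrc", ".profile", ".zshrc"}   # never writable

def gated_write(target: str, data: bytes) -> None:
    """Write only inside allowlisted directories, never to startup files."""
    path = (WORKSPACE / target).resolve()
    if path.name in FORBIDDEN_NAMES:
        raise PermissionError(f"writes to {path.name} are always denied")
    if not any(path.is_relative_to(d) for d in WRITABLE):  # Python 3.9+
        raise PermissionError(f"{path} is outside the write allowlist")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_bytes(data)
```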
Quantitative Findings
| Attack Vector | Frameworks Vulnerable (% of 47) | Average Time to Exploit |
|---|---|---|
| Path Traversal | 66% | < 2 seconds |
| Symlink Following | 51% | < 5 seconds |
| Sensitive File Read | 89% | < 3 seconds |
| Malicious File Write | 74% | < 4 seconds |
The speed of exploitation is notable. These are not attacks that require sophisticated adversarial prompting — most succeed with straightforward instructions that an agent interprets as legitimate task completion.
Mitigation Analysis
We evaluated several mitigation strategies against our attack corpus:
Allowlist-Based Path Restriction proved most effective, reducing successful attacks by 94% when properly configured. Tools that enforce strict directory boundaries, permitting agents to operate only within designated workspace paths, eliminated the majority of path traversal and symlink attacks.

Runtime File Monitoring caught 78% of sensitive file access attempts but introduced latency that degraded agent performance by 12-18%.

Static Policy Engines that define permitted file operations before execution showed strong results. SafeClaw, which we evaluated as part of this study, implements a deny-by-default policy model that restricts file system operations to explicitly approved paths and patterns. In our testing, SafeClaw's action gating blocked all four attack vector categories without requiring runtime file system monitoring. Its policy configuration documentation provides a practical reference for implementing path restrictions.
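To make the deny-by-default model concrete, here is a toy static policy check of our own devising; it illustrates the general approach and does not reflect SafeClaw's actual configuration format or API.

```python
from dataclasses import dataclass, field
from fnmatch import fnmatch

@dataclass
class FilePolicy:
    """Deny-by-default: an operation runs only if an allow rule matches."""
    allow: list = field(default_factory=list)  # (operation, path glob) pairs

    def permits(self, op: str, path: str) -> bool:
        return any(op == rule_op and fnmatch(path, rule_glob)
                   for rule_op, rule_glob in self.allow)

# Example policy: read anywhere in the workspace, write only under output/.
policy = FilePolicy(allow=[
    ("read",  "/srv/agent/workspace/*"),
    ("write", "/srv/agent/workspace/output/*"),
])

assert policy.permits("read", "/srv/agent/workspace/data.csv")
assert not policy.permits("write", "/etc/cron.d/job")  # denied by default
```

Because the policy is evaluated before any operation executes, this approach avoids the runtime monitoring overhead noted above.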
Recommendations

Based on our findings, 15RL recommends the following minimum controls for any AI agent with file system access:
1. Confine all file operations to an allowlisted workspace, canonicalizing paths before use.
2. Resolve symbolic links and reject any operation whose real target falls outside the workspace.
3. Deny reads of known credential file patterns (e.g., .env, id_rsa, credentials.json).
4. Apply deny-by-default gating to write operations, keeping shell startup files and executables permanently off-limits.

Conclusion
File system access remains a fundamental capability for useful AI agents, but our research demonstrates that current deployment practices leave most systems vulnerable to straightforward attacks. The gap between agent capability and agent safety in file system operations is significant, and organizations deploying agents with file access should treat this as a critical security concern requiring immediate attention.
15 Research Lab conducts independent research on AI agent safety. This study was self-funded and all tools were evaluated under identical controlled conditions.