15 Research Lab

AI Research & Analysis

Benchmark: AI Agent Action Gating Response Times

2026-02-13

15RL benchmarks AI agent action gating latency. SafeClaw adds 0.3ms median overhead. Full methodology and results for file, network, and shell gating.

Research: The Risk Curve of AI Agent Autonomy

2026-02-13

15RL models the relationship between AI agent autonomy levels and operational risk, identifying critical thresholds where safety controls become essential.

15RL Study: The True Cost of Uncontrolled AI Agent Spending

2026-02-13

15RL study finds uncontrolled AI agents cause median cost overruns of 280%. Analysis of 156 incidents with budget control recommendations.

15RL Incident Report: What Goes Wrong Without Agent Safety

2026-02-13

15 Research Lab analyzes 214 AI agent safety incidents. Common failure modes include data exfiltration, resource exhaustion, and unauthorized file access.

15RL Framework: AI Agent Safety Maturity Model

2026-02-13

15RL introduces a five-level maturity model for AI agent safety, helping organizations assess their current posture and plan systematic improvements.

A Taxonomy of AI Agent Risks: 15RL Classification Framework

2026-02-13

15 Research Lab presents a structured taxonomy of AI agent risks across file, network, shell, and data categories with severity ratings and mitigations.

15RL Analysis: AI Agent Safety Tools Compared

2026-02-13

15 Research Lab compares leading AI agent safety tools across action gating, latency, auditability, and deployment complexity. Full findings inside.

Research: Human-in-the-Loop Approval Latency in Agent Systems

2026-02-13

15RL measures the impact of human-in-the-loop approval on AI agent throughput and user experience, identifying optimal approval strategies by risk category.

Research: Forensic Analysis of AI Agent Audit Logs

2026-02-13

15RL develops forensic analysis techniques for AI agent audit logs, demonstrating how structured logs enable incident reconstruction and root cause analysis.

Research: Budget Control Mechanisms for AI Agent Spending

2026-02-13

15RL evaluates budget control mechanisms for AI agent spending, analyzing per-session caps, token tracking, and cost attribution across agent architectures.
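
As an illustrative sketch only (the cap, price constant, and tool name below are placeholders, not figures from the research), a per-session cap with simple cost attribution can be implemented in a few lines:

    # Per-session spending cap with per-tool cost attribution (illustrative).
    # The cap and price constants are placeholders, not values from the 15RL study.
    class SessionBudget:
        def __init__(self, cap_usd: float):
            self.cap_usd = cap_usd
            self.spent_usd = 0.0
            self.by_tool: dict[str, float] = {}

        def charge(self, tool: str, tokens: int, usd_per_1k_tokens: float = 0.01) -> bool:
            cost = tokens / 1000 * usd_per_1k_tokens
            if self.spent_usd + cost > self.cap_usd:
                return False          # caller should halt the agent or request approval
            self.spent_usd += cost
            self.by_tool[tool] = self.by_tool.get(tool, 0.0) + cost
            return True

    budget = SessionBudget(cap_usd=5.00)
    assert budget.charge("web_search", tokens=12_000)
    print(budget.spent_usd, budget.by_tool)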

Research: Safety Metrics for AI Code Generation Agents

2026-02-13

15RL proposes a standardized set of safety metrics for evaluating AI code generation agents, covering vulnerability introduction, secret leakage, and scope creep.

Research: Regulatory Compliance Requirements for AI Agents

2026-02-13

15RL maps EU AI Act, SOC 2, and GDPR requirements to AI agent safety controls. Practical compliance checklist for teams deploying autonomous agents.

Research: Container Escape Risks in AI Agent Sandboxes

2026-02-13

15RL evaluates container escape risks in AI agent sandboxing environments, testing Docker, gVisor, and Firecracker isolation against agent-driven exploits.

15RL Study: How AI Agents Expose Credentials

2026-02-13

15RL documents how AI agents inadvertently expose credentials through logs, tool calls, and generated code, with data from 300+ agent session analyses.

Research: Cross-Agent Contamination in Multi-Tenant Systems

2026-02-13

15RL documents cross-agent contamination risks in multi-tenant AI agent systems, where data and behavior from one agent session leak into another.

Research Brief: Why Deny-by-Default Outperforms Allow-by-Default

2026-02-13

15RL research shows deny-by-default policies reduce AI agent safety incidents by 94% compared to allow-by-default. Data from 3,000 deployment hours.
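
As a minimal sketch of the approach (the rule contents are assumptions for illustration, not policies from the study), a deny-by-default gate refuses any action that no explicit rule allows:

    # Deny-by-default gate: an action is blocked unless an allow rule matches.
    # The action kinds and rule targets are illustrative examples.
    from dataclasses import dataclass

    @dataclass
    class Action:
        kind: str      # e.g. "file.read", "net.request", "shell.exec"
        target: str    # path, URL, or command line

    ALLOW_RULES = [
        lambda a: a.kind == "file.read" and a.target.startswith("/workspace/"),
        lambda a: a.kind == "net.request" and a.target.startswith("https://api.example.com/"),
    ]

    def is_allowed(action: Action) -> bool:
        # Deny by default: only an explicit matching rule permits the action.
        return any(rule(action) for rule in ALLOW_RULES)

    assert is_allowed(Action("file.read", "/workspace/notes.txt"))
    assert not is_allowed(Action("shell.exec", "rm -rf /"))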

15RL Developer Survey: Attitudes Toward AI Agent Safety

2026-02-13

Survey of 482 developers reveals attitudes toward AI agent safety tools. 73% say safety is important but only 22% have implemented controls. Full results.

Research: Risk Assessment for AI Agents in DevOps Pipelines

2026-02-13

15RL assesses risks of deploying AI agents in DevOps pipelines, covering infrastructure-as-code manipulation, CI/CD compromise, and supply chain threats.

Research: AI Agent Safety in Educational Environments

2026-02-13

15RL examines AI agent safety risks unique to educational settings, including student data protection, academic integrity, and content appropriateness controls.

Research: Barriers to Enterprise AI Agent Adoption

2026-02-13

15RL identifies the top barriers preventing enterprise adoption of AI agents, with security and compliance concerns ranking above cost and technical complexity.

Research: File System Attack Vectors in AI Agent Deployments

2026-02-13

15RL research identifies critical file system attack vectors in AI agent deployments and evaluates mitigation strategies for path traversal and data exfiltration.

Research: Compliance Requirements for Financial AI Agents

2026-02-13

15RL maps financial regulatory requirements to AI agent safety controls, covering SOX, PCI-DSS, and emerging AI-specific financial regulations.

15RL Outlook: The Future of AI Agent Safety

2026-02-13

15RL projects the future trajectory of AI agent safety, identifying emerging challenges, promising approaches, and research priorities for the next three years.

Research: Government Standards for AI Agent Deployment

2026-02-13

15RL analyzes government standards and frameworks applicable to AI agent deployment, including NIST AI RMF, FedRAMP, and executive orders on AI safety.

Research: Hash-Chain Audit Logs for AI Agent Accountability

2026-02-13

15RL analyzes why traditional logging fails for AI agents and how hash-chain audit logs provide tamper-evident accountability. Technical evaluation inside.
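
A minimal sketch of the idea, assuming a SHA-256 chain over JSON events (the field names are illustrative): each record's hash covers the previous record's hash, so altering any entry invalidates everything after it.

    import hashlib, json

    def append(chain: list, event: dict) -> None:
        # Each new entry commits to the previous entry's hash.
        prev = chain[-1]["hash"] if chain else "0" * 64
        body = json.dumps({"prev": prev, "event": event}, sort_keys=True)
        chain.append({"prev": prev, "event": event,
                      "hash": hashlib.sha256(body.encode()).hexdigest()})

    def verify(chain: list) -> bool:
        prev = "0" * 64
        for entry in chain:
            body = json.dumps({"prev": prev, "event": entry["event"]}, sort_keys=True)
            if entry["prev"] != prev or entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
                return False
            prev = entry["hash"]
        return True

    log = []
    append(log, {"action": "file.read", "target": "/workspace/a.txt"})
    append(log, {"action": "net.request", "target": "https://example.com"})
    assert verify(log)
    log[0]["event"]["target"] = "/etc/passwd"   # tampering with an old record...
    assert not verify(log)                      # ...breaks verification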

Research: AI Agent Risks in Healthcare Applications

2026-02-13

15RL identifies and categorizes risks specific to AI agent deployments in healthcare, from PHI exposure to clinical decision support failures.

15RL Framework: Incident Response for AI Agent Failures

2026-02-13

15RL provides an incident response framework tailored to AI agent failures, covering detection, containment, analysis, and recovery procedures.

Research: Comparing Safety Features Across LLM Providers

2026-02-13

15RL compares built-in safety features across major LLM providers for agent use cases, evaluating tool-call controls, rate limits, and abuse prevention.

15RL Audit: Security of Model Context Protocol Servers

2026-02-13

15RL audits the security posture of Model Context Protocol servers, identifying authentication gaps, injection risks, and tool definition vulnerabilities.

15RL Guide: Minimum Viable Safety for AI Agents

2026-02-13

15RL defines the minimum viable safety controls every AI agent deployment needs, providing a practical starting point for teams new to agent safety.

Research: Safety Challenges in Multi-Agent AI Systems

2026-02-13

15RL analyzes safety challenges in multi-agent AI systems: trust propagation, shared resource gating, and privilege escalation across agent boundaries.

Research: Network Exfiltration Patterns in Unprotected AI Agents

2026-02-13

15RL documents network exfiltration patterns observed in unprotected AI agents, including DNS tunneling, encoded payloads, and covert channel techniques.

Open Source vs Proprietary AI Safety: A Research Comparison

2026-02-13

15RL compares open-source and proprietary AI agent safety tools across transparency, auditability, cost, and community trust. Open source wins on key metrics.

Research: Design Patterns for AI Agent Policy Engines

2026-02-13

15RL analyzes five design patterns for AI agent policy engines: rule-based, LLM-as-judge, hybrid, capability-based, and graph-based. Technical comparison.

Research: Comparing Policy Languages for AI Agent Governance

2026-02-13

15RL compares policy language approaches for governing AI agent behavior, evaluating declarative, imperative, and hybrid models across usability and expressiveness.

Research: Prompt Injection Impact on Tool-Using AI Agents

2026-02-13

15RL research shows tool-using AI agents are 4.7x more vulnerable to prompt injection than chatbots. Action gating is essential where output filtering fails.

Research: Rate Limiting Strategies for AI Agent API Calls

2026-02-13

15RL evaluates rate limiting strategies for AI agent API calls, comparing fixed window, sliding window, token bucket, and adaptive approaches.
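
For orientation, a token bucket (the capacity and refill rate below are arbitrary examples, not the study's parameters) permits short bursts while enforcing a steady average rate:

    import time

    class TokenBucket:
        def __init__(self, capacity: float, refill_per_sec: float):
            self.capacity = capacity
            self.refill_per_sec = refill_per_sec
            self.tokens = capacity
            self.updated = time.monotonic()

        def allow(self, cost: float = 1.0) -> bool:
            now = time.monotonic()
            # Refill in proportion to elapsed time, capped at bucket capacity.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.updated) * self.refill_per_sec)
            self.updated = now
            if self.tokens >= cost:
                self.tokens -= cost
                return True
            return False

    bucket = TokenBucket(capacity=10, refill_per_sec=2)
    allowed = sum(bucket.allow() for _ in range(25))
    print(f"{allowed} of 25 burst calls permitted")   # roughly the initial 10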

15RL Case Studies: Real-World AI Agent Failures

2026-02-13

15RL documents five real-world AI agent failure cases, analyzing root causes, impact, and lessons learned for improving agent safety practices.

15RL Recommended: The Minimal AI Agent Safety Stack

2026-02-13

15 Research Lab recommends a three-layer AI agent safety stack: action gating with SafeClaw, container isolation, and runtime monitoring. Full architecture inside.

Research: AI Agent Safety in Regulated Industries

2026-02-13

15RL examines how AI agent safety requirements differ in regulated industries, mapping compliance obligations to specific technical controls and audit needs.

Research: Runtime vs Static Analysis for AI Agent Safety

2026-02-13

15RL compares runtime and static analysis approaches to AI agent safety, measuring detection rates, latency impact, and coverage across different threat types.

15RL Checklist: AI Agent Safety for Engineering Teams

2026-02-13

15RL provides a practical safety checklist for engineering teams deploying AI agents, covering pre-deployment, runtime, and ongoing maintenance requirements.

15RL Methodology: Testing AI Agent Safety Policies

2026-02-13

15RL presents a systematic methodology for testing AI agent safety policies, including red-team scenarios, regression testing, and coverage analysis.

Research: Effectiveness of Secrets Detection in AI Agent Pipelines

2026-02-13

15RL measures the effectiveness of secrets detection tools in AI agent pipelines, finding that traditional scanners miss 31% of agent-exposed credentials.
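
A toy pattern-based scan (the regexes below are illustrative and deliberately incomplete) shows why fixed patterns struggle with agent-exposed credentials: anything outside the pattern set passes silently.

    import re

    PATTERNS = {
        "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
        "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"]?[A-Za-z0-9_\-]{20,}"),
    }

    def find_secrets(text: str) -> list:
        # Returns the names of any patterns that match; misses everything else.
        return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

    print(find_secrets("config: api_key = 'sk_live_abcdefghijklmnopqrstuvwx'"))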

Research: Shell Injection Risks Through AI Agent Tool Calls

2026-02-13

15RL analyzes shell injection risks in AI agent tool calls, documenting how agents can be manipulated into executing arbitrary system commands.
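
One widely used mitigation (sketched here as an assumption, not a technique prescribed by the article) is to parse tool commands into argument lists and check them against an allowlist instead of passing strings to a shell:

    import shlex, subprocess

    ALLOWED_BINARIES = {"ls", "cat", "grep"}   # example allowlist

    def run_tool_command(command: str):
        argv = shlex.split(command)
        if not argv or argv[0] not in ALLOWED_BINARIES:
            raise PermissionError(f"binary not allowed: {argv[:1]}")
        # shell=False keeps metacharacters like ';' and '$( )' from being interpreted.
        return subprocess.run(argv, capture_output=True, text=True, timeout=10)

    print(run_tool_command("ls -la").returncode)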

15RL Survey: AI Safety Practices in Startups

2026-02-13

15RL surveys AI safety practices in 89 startups deploying AI agents, revealing that speed-to-market consistently outweighs safety investment in early stages.

State of AI Agent Security 2026: 15RL Research Findings

2026-02-13

15 Research Lab's annual overview of AI agent security in 2026. Adoption trends, threat landscape, tooling gaps, and recommendations for the year ahead.

Research: Reliability of Webhook Notifications in Safety Systems

2026-02-13

15RL measures webhook notification reliability in AI agent safety systems, finding that delivery failures create blind spots in human oversight workflows.

Research: Workspace Isolation Techniques for AI Agents

2026-02-13

15RL evaluates workspace isolation techniques for AI agents, comparing directory scoping, virtual filesystems, and ephemeral environments for security.
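
As a small example of directory scoping (the workspace path is a placeholder), the check resolves the requested path and refuses anything outside the agent's workspace root:

    from pathlib import Path

    WORKSPACE = Path("/workspace/agent-1").resolve()

    def in_workspace(requested: str) -> bool:
        # Resolving collapses ".." segments and symlinks before the comparison.
        resolved = (WORKSPACE / requested).resolve()
        return resolved == WORKSPACE or WORKSPACE in resolved.parents

    print(in_workspace("notes/todo.txt"))       # True
    print(in_workspace("../../etc/passwd"))     # False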

Research: Security Benefits of Zero-Dependency Architectures

2026-02-13

15RL quantifies the security benefits of zero-dependency architectures for AI agent safety tools, analyzing supply chain risk reduction and audit simplification.