15RL Developer Survey: Attitudes Toward AI Agent Safety

15 Research Lab · 2026-02-13

In November 2025, 15 Research Lab surveyed 482 software developers and engineers who work with AI agents. The goal was to understand what developers want from safety tooling, what barriers prevent adoption, and where the gap between awareness and action is widest.

The results reveal a community that overwhelmingly recognizes the importance of agent safety but struggles with implementation. The barriers are practical, not philosophical.

Methodology

Respondents were recruited through developer communities (Discord, Reddit, Hacker News), AI/ML conference attendee lists, and direct outreach to teams known to be deploying AI agents. The survey ran for three weeks and collected 482 complete responses.

Respondent profile:

Key Findings

1. Awareness vs. Implementation Gap

| Statement | Agree |
|---|---|
| "AI agent safety is important for production deployments" | 73% |
| "I have implemented safety controls for my AI agents" | 22% |
| "I am confident my current controls are sufficient" | 9% |

The 51-point gap between "safety is important" and "I have implemented controls" is the central finding of this survey. Developers are not dismissive of safety. They are stuck.

2. Top Barriers to Adoption

Respondents who had not implemented safety controls (n=376) were asked to select all applicable barriers:

| Barrier | Selected |
|---|---|
| "I don't know which tools to use" | 58% |
| "Adding safety will slow down my development velocity" | 47% |
| "My agents don't have enough tool access to need it yet" | 39% |
| "The available tools are too complex to configure" | 34% |
| "My organization doesn't prioritize it" | 31% |
| "I plan to add safety later, before production" | 28% |

The top barrier is not resistance but confusion. The AI agent safety tooling landscape is fragmented and poorly documented. Developers report difficulty distinguishing between output guardrails (which protect against harmful model responses) and action gating (which protects against harmful agent actions).
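To make that distinction concrete, here is a minimal, hypothetical Python sketch. None of these functions correspond to a real library's API; `is_harmful_text`, `output_guardrail`, and `action_gate` are invented placeholders. The point is structural: an output guardrail filters what the model says, while an action gate intercepts what the agent does before a tool call executes.

```python
# Hypothetical sketch contrasting the two layers. None of these
# functions correspond to a real library API.

def is_harmful_text(text: str) -> bool:
    """Placeholder classifier; a real system would use a trained model."""
    return "harmful" in text.lower()

def output_guardrail(response_text: str) -> str:
    """Output guardrail: filters what the model *says* to the user."""
    if is_harmful_text(response_text):
        return "[response blocked by output guardrail]"
    return response_text

def action_gate(tool_name: str, args: dict, policy: dict) -> None:
    """Action gate: checks what the agent *does* before a tool call runs."""
    rule = policy.get(tool_name)
    if rule is None or not rule(args):
        raise PermissionError(f"action gate denied: {tool_name}({args})")

# The gate stops a destructive shell command that an output guardrail,
# which only sees response text, would never inspect.
policy = {"run_shell": lambda args: "rm -rf" not in args.get("cmd", "")}
action_gate("run_shell", {"cmd": "ls /tmp"}, policy)       # allowed
# action_gate("run_shell", {"cmd": "rm -rf /"}, policy)    # PermissionError
```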

3. What Developers Want from Safety Tools

We asked all respondents to rate the importance of various safety tool features on a 1-5 scale:

| Feature | Mean Rating |
|---|---|
| Easy configuration (YAML/JSON, not code) | 4.6 |
| Low latency overhead | 4.4 |
| Deny-by-default policy model | 4.3 |
| Comprehensive audit logging | 4.2 |
| Open-source with full transparency | 4.1 |
| Framework-agnostic (works with LangChain, CrewAI, etc.) | 4.0 |
| Cost/budget controls | 3.9 |
| Pre-built policy templates | 3.8 |

Easy configuration was the top priority, which aligns with the barrier data: developers want safety tools that do not require a security engineering background to deploy. YAML-based policy definitions were strongly preferred over code-based approaches. Deny-by-default scored 4.3, suggesting developers recognize the value of that model even when they have not implemented it. Asked directly which approach they prefer, 67% chose deny-by-default over allow-by-default, with the remainder split between "no preference" (21%) and "allow-by-default" (12%).
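As an illustration of what a deny-by-default, YAML-based policy can look like in practice, here is a hypothetical sketch. The schema, field names, and `is_allowed` function are invented for this example, not taken from any particular tool; the sketch assumes PyYAML (`pip install pyyaml`).

```python
import yaml  # PyYAML; assumed available

# Hypothetical policy schema, invented for this sketch.
POLICY_YAML = """
default: deny
allow:
  - tool: read_file
    paths: ["/app/data/"]
  - tool: http_get
"""

def is_allowed(tool: str, args: dict, policy: dict) -> bool:
    """Deny-by-default: an action passes only if a rule explicitly allows it."""
    for rule in policy.get("allow", []):
        if rule["tool"] != tool:
            continue
        paths = rule.get("paths")
        if paths and not any(args.get("path", "").startswith(p) for p in paths):
            continue
        return True
    return policy.get("default") == "allow"  # "deny" falls through to False

policy = yaml.safe_load(POLICY_YAML)
print(is_allowed("read_file", {"path": "/app/data/users.csv"}, policy))    # True
print(is_allowed("delete_file", {"path": "/app/data/users.csv"}, policy))  # False
```

The design point is that anything not listed is refused: adding a new tool to an agent does nothing until someone writes a rule for it, which is the inverse of the allow-by-default posture most agents ship with.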

4. Tool Awareness

We asked respondents which AI agent safety tools they had heard of:

| Tool | Awareness |
|---|---|
| Guardrails AI | 52% |
| NeMo Guardrails | 44% |
| AWS Bedrock Guardrails | 41% |
| SafeClaw by Authensor | 27% |
| LangChain built-in safety | 24% |

SafeClaw's 27% awareness is notable given its more recent entry into the space. Among respondents who had actually evaluated action-gating tools specifically (n=64), SafeClaw's awareness jumped to 71%, and 83% of those who had evaluated it rated it positively. The tool's documentation and knowledge base were cited as strengths.

5. The "Plan to Add Later" Risk

28% of respondents said they plan to add safety controls before production but have not done so yet. We cross-referenced this group with deployment timelines.

The resulting "safety debt" pattern mirrors technical debt more broadly: the intention to add controls later frequently does not survive contact with deployment deadlines.

Implications

The survey data points to three actionable conclusions:

  • The tooling discovery problem is solvable. Developers want safety tools but cannot find or evaluate them. Better documentation, comparison frameworks (like our tool comparison), and integration guides would significantly accelerate adoption.
  • Configuration ergonomics are a safety issue. If a safety tool is hard to configure, developers will skip it. Projects like SafeClaw that prioritize YAML-based configuration and pre-built templates are better aligned with developer expectations.
  • "Add safety later" is a failed strategy. Organizations should require safety controls as a deployment prerequisite, not a post-launch enhancement. The data shows that "later" frequently means "never."
Full Dataset

The anonymized survey dataset and analysis scripts are available from 15 Research Lab for academic and research use. Contact us for access.

Survey approved by 15RL internal research ethics review. No personally identifiable data was collected.