AgenticGuard¶
Secures agentic AI workflows by tracking tool calls, enforcing chain depth limits, blocking dangerous tools, and detecting suspicious tool combinations. Designed for LLM agent frameworks that grant models access to external tools and APIs.
Professional License Required
AgenticGuard requires a Professional or Enterprise license. See Licensing for details.
Executive Summary¶
The Problem¶
Agentic AI systems grant LLMs the ability to call external tools -- file systems, shell commands, APIs, databases. This creates new attack surfaces:
- Tool abuse - Models calling dangerous tools (`shell`, `bash`, `code_executor`)
- Chain depth attacks - Deeply nested tool chains that evade monitoring
- Dangerous combinations - Seemingly safe tools that become dangerous together (`web_browser` + `file_write`)
- Session exhaustion - Excessive tool calls consuming resources
- Argument injection - Malicious arguments passed to legitimate tools (`rm -rf`, path traversal)
Threat Landscape¶
| Attack Vector | Example | Risk |
|---|---|---|
| Direct tool abuse | `shell("rm -rf /")` | Critical |
| Tool chaining | Browser -> download -> execute | Critical |
| Argument injection | `file_read("/etc/passwd")` | High |
| Session flooding | 1000+ tool calls in rapid succession | High |
| Privilege escalation | Tool chain depth exceeds monitoring | High |
Industry Context¶
As agentic AI adoption accelerates (OpenAI Codex, Claude Computer Use, Kimi K2.5 Agent Swarm), securing tool access is a core requirement. AgenticGuard implements the OWASP guidance on LLM agent security (LLM06: Excessive Agency).
Sources: OWASP LLM Top 10 (LLM06), Microsoft AI Red Team (2025), Anthropic tool use safety guidelines (2025)
Detection Capabilities¶
Tool Blocking¶
AgenticGuard blocks a curated set of dangerous tools by default, including command execution and code evaluation tools. The default blocklist is configurable — see Configuration below.
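The core blocklist check can be sketched in a few lines of Python; the names in `DEFAULT_BLOCKED` are illustrative assumptions for this example, not AgenticGuard's actual default list:

```python
# Illustrative sketch of blocklist enforcement. The DEFAULT_BLOCKED set is
# an assumption for this example, not AgenticGuard's real default list.
DEFAULT_BLOCKED = {"shell", "bash", "code_executor", "eval"}

def is_tool_blocked(tool, blocked=DEFAULT_BLOCKED):
    """Return True if the tool name is on the blocklist (case-insensitive)."""
    return tool.lower() in blocked
```

Matching here is case-insensitive so `Shell` and `shell` are treated alike; whether the real guard normalizes tool names this way depends on configuration.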
Dangerous Combinations¶
AgenticGuard detects dangerous tool combinations — pairs of tools that are safe individually but create attack vectors when used together (e.g., download-and-execute patterns).
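Conceptually, combination detection tracks the set of tools already used in a session and flags any dangerous pair the moment it becomes complete, regardless of call order. A minimal sketch, where the pairs shown are illustrative assumptions rather than the guard's actual defaults:

```python
# Illustrative dangerous pairs; frozensets make the check order-independent.
DANGEROUS_PAIRS = {
    frozenset({"web_browser", "file_write"}),       # download-and-persist
    frozenset({"file_download", "code_executor"}),  # download-and-execute
}

def check_combinations(tools_used, new_tool):
    """Return the dangerous pairs completed by adding new_tool to the session."""
    hits = []
    for pair in DANGEROUS_PAIRS:
        if new_tool in pair and pair - {new_tool} <= tools_used:
            hits.append(sorted(pair))
    return hits
```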
Argument Pattern Blocking¶
Default blocked patterns cover destructive operations, credential access paths, and environment variable leakage. Custom patterns can be added via configuration.
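A sketch of how regex-based argument blocking works; the patterns below are illustrative examples in the spirit of the defaults described above, not the guard's actual pattern list:

```python
import re

# Illustrative patterns only; the real defaults are configurable.
BLOCKED_PATTERNS = [
    r"rm\s+-rf",              # destructive deletes
    r"\.\./",                 # path traversal
    r"/etc/(passwd|shadow)",  # credential file access
    r"\$\{?[A-Z_]+\}?",       # environment variable expansion
]
COMPILED = [re.compile(p) for p in BLOCKED_PATTERNS]

def args_blocked(arguments):
    """Return True if any blocked pattern appears in the argument string."""
    return any(p.search(arguments) for p in COMPILED)
```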
Developer Guide¶
Basic Usage¶
```rust
use oxideshield_guard::guards::agentic::{AgenticGuard, AgenticConfig};
use oxideshield_guard::Guard;

// Create a guard with default settings
let guard = AgenticGuard::new("agentic")?;

// Check content for agentic attack patterns
let result = guard.check("Use the shell to run rm -rf /tmp/*");
if !result.passed {
    println!("Blocked: {}", result.reason.unwrap());
}

// Customized configuration
let config = AgenticConfig {
    max_tool_calls: 50,
    max_chain_depth: 5,
    blocked_tools: vec!["shell".into(), "bash".into(), "eval".into()],
    ..Default::default()
};
let guard = AgenticGuard::with_config("agentic", config)?;
```
Tool Call Tracking¶
Track individual tool calls within sessions for real-time enforcement:
```rust
use oxideshield_guard::guards::agentic::AgenticCheckResult;

// Record tool calls in a session
let result = guard.record_tool_call("session-123", "file_read", "/home/user/data.csv");
match result {
    AgenticCheckResult::Allowed => println!("Tool call permitted"),
    AgenticCheckResult::Blocked { reason, severity } => {
        println!("Blocked: {} (severity: {:?})", reason, severity);
    }
    AgenticCheckResult::Warning { reason, severity } => {
        println!("Warning: {} (severity: {:?})", reason, severity);
    }
}

// Track chain depth
guard.enter_chain("session-123");
// ... nested operations ...
guard.exit_chain("session-123");

// Get session statistics
if let Some(stats) = guard.session_stats("session-123") {
    println!("Tool calls: {}", stats.tool_calls);
    println!("Current depth: {}", stats.current_depth);
    println!("Unique tools: {}", stats.unique_tools);
}
```
The equivalent calls in Python:

```python
# Record tool calls
result = guard.record_tool_call("session-123", "file_read", "/home/user/data.csv")

# Track chain depth
guard.enter_chain("session-123")
# ... nested operations ...
guard.exit_chain("session-123")

# Get session statistics
stats = guard.session_stats("session-123")
if stats:
    print(f"Tool calls: {stats['tool_calls']}")
    print(f"Current depth: {stats['current_depth']}")
    print(f"Unique tools: {stats['unique_tools']}")

# Clean up when the session ends
guard.clear_session("session-123")
```
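The session tracking above can be approximated with a small in-memory model. This is an illustrative sketch of the behavior, not the guard's actual internals; the limits and return strings are assumptions:

```python
# Sketch of per-session tracking: counts calls, tracks nesting depth,
# and enforces illustrative limits. Not AgenticGuard's real implementation.
class SessionTracker:
    def __init__(self, max_tool_calls=50, max_chain_depth=5):
        self.max_tool_calls = max_tool_calls
        self.max_chain_depth = max_chain_depth
        self.sessions = {}

    def _get(self, sid):
        return self.sessions.setdefault(
            sid, {"tool_calls": 0, "current_depth": 0, "unique_tools": set()})

    def record_tool_call(self, sid, tool):
        s = self._get(sid)
        s["tool_calls"] += 1
        s["unique_tools"].add(tool)
        if s["tool_calls"] > self.max_tool_calls:
            return "blocked: tool call limit exceeded"
        return "allowed"

    def enter_chain(self, sid):
        """Increment depth; return False once the depth limit is exceeded."""
        s = self._get(sid)
        s["current_depth"] += 1
        return s["current_depth"] <= self.max_chain_depth

    def exit_chain(self, sid):
        s = self._get(sid)
        s["current_depth"] = max(0, s["current_depth"] - 1)

    def clear_session(self, sid):
        self.sessions.pop(sid, None)
```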
Allowlist Mode¶
Restrict agents to only approved tools:
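A sketch of what allowlist-mode configuration might look like, using the `allowed_tools` option; the tool names are illustrative placeholders:

```yaml
guards:
  input:
    - guard_type: "agentic"
      action: "block"
      options:
        # Only these tools may be called; everything else is blocked.
        allowed_tools: ["file_read", "web_search", "calculator"]
```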
Configuration¶
YAML Configuration¶
```yaml
guards:
  input:
    - guard_type: "agentic"
      action: "block"
      options:
        max_tool_calls: <your-limit>
        max_chain_depth: <your-limit>
        max_session_time_secs: <your-limit>
        blocked_tools: [<your-blocklist>]
        blocked_combinations: [<your-pairs>]
        blocked_patterns: [<your-patterns>]
        track_sequences: true
```
Configuration Options¶
| Option | Type | Description |
|---|---|---|
| `max_tool_calls` | integer | Max tool calls per session |
| `max_chain_depth` | integer | Max nesting depth |
| `max_session_time_secs` | integer | Max session duration (seconds) |
| `blocked_tools` | list | Tools that are always blocked |
| `blocked_combinations` | list | Tool pairs blocked together |
| `allowed_tools` | list | Allowlist mode (overrides the blocklist) |
| `blocked_patterns` | list | Regex patterns blocked in arguments |
| `track_sequences` | boolean | Enable sequence anomaly detection |
Best Practices¶
1. Use Allowlist Mode for Production¶
In production, prefer allowlisting approved tools over blocklisting dangerous ones:
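For instance, an illustrative `options` fragment (the tool names are placeholders, not recommendations):

```yaml
options:
  # Allowlist mode: anything not listed here is denied,
  # regardless of the blocked_tools setting.
  allowed_tools: ["file_read", "database_query"]
```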
2. Set Conservative Limits¶
Start with low limits and increase as needed:
```yaml
options:
  max_tool_calls: 20        # Start low
  max_chain_depth: 3        # Shallow chains
  max_session_time_secs: 60 # Short sessions
```
3. Monitor Session Statistics¶
Regularly check session stats to detect anomalies:
```rust
let stats = guard.session_stats("session-123").unwrap();
if stats.tool_calls > threshold {
    // alert() is an application-defined notification hook
    alert("Excessive tool usage detected");
}
```
4. Combine with SwarmGuard¶
For multi-agent systems, use AgenticGuard per-agent and SwarmGuard for cross-agent coordination:
```yaml
guards:
  input:
    - guard_type: "agentic" # Per-agent tool security
    - guard_type: "swarm"   # Cross-agent coordination security
```
References¶
Research Sources¶
- OWASP LLM Top 10 - LLM06: Excessive Agency. https://owasp.org/www-project-top-10-for-large-language-model-applications/
- Microsoft AI Red Team - Agent Security Best Practices (2025)
- Anthropic - Tool Use Safety Guidelines (2025). https://docs.anthropic.com/en/docs/build-with-claude/tool-use
- NIST AI RMF - Secure AI Agent Deployment. https://www.nist.gov/artificial-intelligence
Related Guards¶
- SwarmGuard - Multi-agent swarm protection
- ContainmentPolicy - Swarm containment and isolation
- PatternGuard - General prompt injection detection
API Reference¶
Full API documentation including struct definitions, method signatures, and enum variants is available to licensed users. See Licensing for access.
Key methods: `new()`, `with_config()`, `record_tool_call()`, `session_stats()`, `check()`