Python API Reference¶
Functions¶
pattern_guard()¶
Create a pattern guard for prompt injection detection.
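The guard's internals are not shown here, but the idea behind pattern-based injection detection can be sketched in plain Python. The regex list and the `check_patterns` helper below are illustrative stand-ins, not part of this library's API:

```python
import re

# Illustrative injection phrases; a real guard ships a much larger, curated set.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"you are now (in )?developer mode", re.IGNORECASE),
    re.compile(r"system prompt", re.IGNORECASE),
]

def check_patterns(text: str) -> tuple[bool, int]:
    """Return (passed, match_count) for a pattern-based injection check."""
    matches = sum(len(p.findall(text)) for p in INJECTION_PATTERNS)
    return (matches == 0, matches)
```

The `(passed, match_count)` pair mirrors the `passed` and `match_count` fields documented on `GuardCheckResult` below.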
pii_guard()¶
Create a PII detection guard.
Parameters:
- redaction: "mask", "replace", "remove", or "hash"
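The four `redaction` modes correspond to four different transformations of matched PII. A minimal sketch of each, using email addresses as the only PII category (the regex and the `redact` helper are illustrative, not the library's implementation):

```python
import hashlib
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(text: str, mode: str = "mask") -> str:
    """Apply one of the four documented redaction modes to email matches."""
    def _sub(m: re.Match) -> str:
        if mode == "mask":
            return "*" * len(m.group())   # same-length masking
        if mode == "replace":
            return "[EMAIL]"              # category placeholder
        if mode == "remove":
            return ""                     # drop the match entirely
        if mode == "hash":
            # irreversible but stable token for the same input
            return hashlib.sha256(m.group().encode()).hexdigest()[:12]
        raise ValueError(f"unknown redaction mode: {mode}")
    return EMAIL_RE.sub(_sub, text)
```

"mask" preserves length, "replace" preserves readability, "remove" minimizes leakage, and "hash" keeps a stable pseudonym so repeated occurrences of the same value remain correlatable.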
toxicity_guard()¶
Create a toxicity detection guard.
length_guard()¶
Create a guard that enforces input length limits.
semantic_similarity_guard()¶
Create a semantic similarity guard.
def semantic_similarity_guard(
threshold: float = 0.85,
cache_enabled: bool = True
) -> SemanticSimilarityGuard
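A toy illustration of thresholded similarity with caching, matching the `threshold` and `cache_enabled` parameters above. This sketch uses bag-of-words cosine similarity; the real guard presumably uses embeddings, and the `similarity`, `is_similar`, and `_cache` names below are not part of the API:

```python
import math
from collections import Counter

_cache: dict[str, Counter] = {}  # illustrative stand-in for cache_enabled=True

def _vectorize(text: str) -> Counter:
    if text not in _cache:
        _cache[text] = Counter(text.lower().split())
    return _cache[text]

def similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors."""
    va, vb = _vectorize(a), _vectorize(b)
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def is_similar(a: str, b: str, threshold: float = 0.85) -> bool:
    return similarity(a, b) >= threshold
```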
multi_layer_defense()¶
Create a multi-layer defense that composes the individual guards.
def multi_layer_defense(
enable_pii: bool = True,
enable_toxicity: bool = True,
enable_length: bool = True,
pii_redaction: str = "mask",
toxicity_threshold: float = 0.7,
max_chars: int = 10000,
strategy: str = "fail_fast"
) -> MultiLayerDefense
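The `strategy` parameter determines how the layers combine. A `"fail_fast"` pipeline can be sketched as a chain that stops at the first failing check; the stand-in guard functions and the `fail_fast` helper below are illustrative, not the library's implementation:

```python
from typing import Callable

Check = Callable[[str], tuple[bool, str]]  # each check returns (passed, reason)

def length_check(text: str, max_chars: int = 10000) -> tuple[bool, str]:
    return (len(text) <= max_chars, "length")

def pii_check(text: str) -> tuple[bool, str]:
    return ("@" not in text, "pii")  # crude placeholder heuristic

def fail_fast(text: str, checks: list[Check]) -> tuple[bool, str]:
    """Run checks in order; stop at the first failure."""
    for check in checks:
        passed, reason = check(text)
        if not passed:
            return (False, reason)
    return (True, "ok")
```

Stopping at the first failure keeps latency low; an alternative strategy would run every layer and aggregate all failures.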
Classes¶
GuardCheckResult¶
class GuardCheckResult:
guard_name: str # Name of the guard that produced this result
passed: bool # Whether the check passed
action: str # Action taken: "Allow", "Block", "Sanitize", "Log", "Alert"
reason: str # Reason for the result
sanitized: str | None # Sanitized content (if applicable)
match_count: int # Number of pattern matches found
PatternGuard¶
PIIGuard¶
class PIIGuard:
def check(self, input: str) -> GuardCheckResult: ...
def detect(self, input: str) -> list[tuple[str, str, int, int]]: ...
# Returns: [(category, matched_text, start_pos, end_pos), ...]
def redact(self, input: str) -> str: ...
# Returns redacted text with PII replaced
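A sketch of the `(category, matched_text, start_pos, end_pos)` tuple shape that `detect()` documents, using a single illustrative email pattern (the `PII_PATTERNS` table and standalone `detect` function are stand-ins, not the library's):

```python
import re

PII_PATTERNS = {"email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")}

def detect(text: str) -> list[tuple[str, str, int, int]]:
    """Return (category, matched_text, start_pos, end_pos) for each match."""
    hits = []
    for category, pattern in PII_PATTERNS.items():
        for m in pattern.finditer(text):
            hits.append((category, m.group(), m.start(), m.end()))
    return sorted(hits, key=lambda h: h[2])  # order by start position
```

The start/end offsets let callers redact in place or highlight matches without re-scanning the input.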
Policy API (Professional)¶
validate_policy()¶
Validate a policy YAML string.
load_policy()¶
Load a policy from YAML.
PolicyEngine¶
class PolicyEngine:
@classmethod
def from_policy(cls, policy: SecurityPolicy) -> PolicyEngine: ...
def check(self, input: str) -> PolicyResult: ...
See Policy API for complete documentation.