Use Cases¶
OxideShield™ protects LLM applications across industries. This section shows how to configure guards for specific scenarios.
Industry Solutions¶
| Industry | Primary Threats | Recommended Guards |
|---|---|---|
| Financial Services | PII leakage, fraud prompts | PIIGuard, PatternGuard, ToxicityGuard |
| Healthcare | PHI exposure, misinformation | PIIGuard, ToxicityGuard, MLClassifierGuard |
| Customer Support | Jailbreaks, brand damage | PatternGuard, ToxicityGuard, LengthGuard |
| Developer Tools | Code injection, extraction | PatternGuard, EncodingGuard, SemanticSimilarityGuard |
| Chat Bots & Personal AI | Multi-platform attacks, privilege escalation | PatternGuard, Proxy Gateway, Rate Limiting |
By Threat Type¶
Protecting Sensitive Data¶
Problem: Customer data (emails, SSNs, credit cards) leaking into LLM responses or logs.
Solution: PIIGuard with an appropriate redaction strategy.
```python
from oxideshield import pii_guard

guard = pii_guard(redaction="hash")  # Audit-friendly hashing
result = guard.check(user_input)
if result.sanitized:
    safe_input = result.sanitized  # PII replaced with hashes
```
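Conceptually, hash-based redaction replaces each detected PII value with a stable digest, so the same value always maps to the same token and audit logs stay correlatable without exposing the data. The sketch below illustrates the idea in pure Python; the regex, token format, and digest length are assumptions for illustration, not OxideShield internals:

```python
import hashlib
import re

# Simplified email matcher for demonstration; real PII detection is broader.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact_emails(text: str) -> str:
    """Replace each email with a stable SHA-256 digest prefix."""
    def to_hash(match: re.Match) -> str:
        digest = hashlib.sha256(match.group().encode()).hexdigest()[:12]
        return f"<pii:{digest}>"
    return EMAIL_RE.sub(to_hash, text)
```

Because the digest is deterministic, two log lines containing the same redacted address can still be joined during an investigation.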
Preventing Jailbreaks¶
Problem: Users trick your AI into bypassing content policies.
Solution: Multi-layer defense with pattern + ML classification.
```python
from oxideshield import pattern_guard, ml_classifier_guard

pattern = pattern_guard()                # Fast, catches known attacks
ml = ml_classifier_guard(threshold=0.7)  # Catches novel attacks

result = pattern.check(user_input)
if result.passed:
    result = ml.check(user_input)
```
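The two-layer flow above is a short-circuit pipeline: the cheap pattern check runs first, and the slower ML classifier is only invoked when the input passes it. A minimal sketch of that control flow, with illustrative names rather than the OxideShield API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class CheckResult:
    passed: bool
    reason: str = ""

def layered_check(
    text: str,
    fast: Callable[[str], CheckResult],
    slow: Callable[[str], CheckResult],
) -> CheckResult:
    result = fast(text)
    if not result.passed:
        return result  # Blocked cheaply; the classifier is never paid for
    return slow(text)  # Only reached when the fast layer passes
```

Ordering the layers by cost keeps median latency low while still covering novel attacks.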
Blocking Prompt Injection¶
Problem: Attackers override your system prompt to change AI behavior.
Solution: PatternGuard + SemanticSimilarityGuard for defense in depth.
```python
from oxideshield import pattern_guard, semantic_similarity_guard

pattern = pattern_guard()
semantic = semantic_similarity_guard(threshold=0.85)
# Pattern catches exact matches, semantic catches paraphrases
```
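A semantic check compares the input against known attacks by similarity rather than exact text, so paraphrases still score high. The sketch below uses a toy bag-of-words cosine similarity to show the shape of the comparison; it is illustrative only, and real guards would use embedding models, not word counts:

```python
import math
from collections import Counter

def cosine(a: str, b: str) -> float:
    """Cosine similarity over word-count vectors (toy embedding)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def is_paraphrased_attack(text: str, known_attack: str, threshold: float = 0.85) -> bool:
    # Block when similarity to a known attack meets the threshold
    return cosine(text, known_attack) >= threshold
```

The threshold trades recall against false positives: 0.85 blocks close paraphrases while letting unrelated inputs through.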
Preventing Resource Exhaustion¶
Problem: Attackers send extremely long inputs to exhaust tokens/costs.
Solution: LengthGuard as first line of defense.
```python
from oxideshield import length_guard

guard = length_guard(max_chars=10000, max_tokens=2000)
result = guard.check(user_input)
# Blocks inputs that would be too expensive to process
```
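A length gate is just two cheap comparisons, which is why it belongs at the front of the pipeline. The sketch below shows the idea; the 4-characters-per-token estimate is a common rule of thumb for English text, not OxideShield's tokenizer:

```python
def within_budget(text: str, max_chars: int = 10000, max_tokens: int = 2000) -> bool:
    """Return False for inputs that would exceed the char or token budget."""
    if len(text) > max_chars:
        return False
    estimated_tokens = len(text) // 4  # Rough heuristic: ~4 chars per token
    return estimated_tokens <= max_tokens
```

Note that the token cap can reject an input even when it is under the character cap, since the two budgets bound different costs.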
Deployment Patterns¶
API Gateway Protection¶
Protect all LLM API calls through a central proxy:
```bash
oxideshield proxy \
  --listen 0.0.0.0:8080 \
  --upstream openai=https://api.openai.com \
  --upstream anthropic=https://api.anthropic.com \
  --config guards.yaml
```
Library Integration¶
Embed guards directly in your application:
```python
from oxideshield import multi_layer_defense

defense = multi_layer_defense(
    enable_length=True,
    enable_pii=True,
    enable_toxicity=True,
    strategy="fail_fast",
)

# Check before every LLM call
result = defense.check(user_input)
if result.passed:
    response = llm.generate(result.sanitized or user_input)
```
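A `fail_fast` strategy runs guards in order and stops at the first failure, so later (and typically costlier) guards never run on input that is already rejected. A minimal sketch of that composition, with hypothetical names rather than the `multi_layer_defense` internals:

```python
from typing import Callable, Optional

# A guard returns a rejection reason, or None when the input passes
Guard = Callable[[str], Optional[str]]

def fail_fast(text: str, guards: list[Guard]) -> Optional[str]:
    """Run guards in order; return the first failure reason, else None."""
    for guard in guards:
        reason = guard(text)
        if reason is not None:
            return reason  # Short-circuit on the first failing guard
    return None            # All guards passed
```

The alternative to fail-fast is running every guard and aggregating all failures, which costs more per request but gives richer diagnostics.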
Browser-Side Validation¶
Pre-filter inputs before they leave the browser:
```javascript
import init, { PatternGuard, PIIGuard } from '@oxideshield/wasm';

await init();
const guard = new PatternGuard();

// Check before sending to backend
const result = guard.check(userInput);
if (!result.passed) {
  showError("Please rephrase your question");
}
```
Compliance Mapping¶
OxideShield™ maps to major compliance frameworks:
| Framework | Relevant Guards | Documentation |
|---|---|---|
| NIST AI RMF | All guards | NIST Mapping |
| EU AI Act | PIIGuard, ToxicityGuard | EU AI Act |
| HIPAA | PIIGuard | Healthcare Use Case |
| SOX | PatternGuard, PIIGuard | Financial Use Case |
| GDPR | PIIGuard | PIIGuard Documentation |