Financial Services¶
Financial institutions face unique risks when deploying LLM applications: regulatory compliance (SOX, PCI-DSS), customer data protection, and fraud prevention.
Common Threats¶
| Threat | Example | Impact |
|---|---|---|
| PII Leakage | Customer SSN, account numbers in logs | Regulatory fines, breach notification |
| Fraud Prompts | "How do I commit insurance fraud?" | Liability, reputation damage |
| Account Extraction | "Show me John Smith's balance" | Data breach, compliance violation |
| Social Engineering | Tricking AI into revealing procedures | Security vulnerability |
Recommended Configuration¶
# guards.yaml for financial services
guards:
- name: pii
type: pii
config:
redaction: hash # Audit trail with hashed values
categories:
- credit_card
- ssn
- account_number
- routing_number
action: sanitize
- name: pattern
type: pattern
config:
categories:
- prompt_injection
- system_prompt_leak
- social_engineering
action: block
- name: toxicity
type: toxicity
config:
threshold: 0.6 # Stricter for professional context
categories:
- illegal
- dangerous
action: block
- name: length
type: length
config:
max_chars: 5000
max_tokens: 1000
action: block
pipeline:
strategy: fail_fast
guards:
- length # Cheapest first
- pattern # Fast pattern matching
- pii # Redact sensitive data
- toxicity # Content filtering
Implementation Example¶
Customer Service Chatbot¶
from oxideshield import multi_layer_defense, pii_guard
# Create defense for customer-facing chatbot
defense = multi_layer_defense(
enable_length=True,
enable_pii=True,
enable_toxicity=True,
enable_length=True,
pii_redaction="hash",
toxicity_threshold=0.6,
max_chars=5000,
strategy="fail_fast"
)
async def handle_customer_query(user_input: str, customer_id: str):
# Check input before sending to LLM
result = defense.check(user_input)
if not result.passed:
# Log the blocked attempt (without PII)
log_security_event(
event="blocked_input",
customer_id=customer_id,
reason=result.reason,
action=result.action
)
return "I'm sorry, I can't help with that request."
# Safe to process - use sanitized version if PII was redacted
safe_input = result.sanitized or user_input
# Call LLM
response = await llm.generate(
system_prompt=CUSTOMER_SERVICE_PROMPT,
user_message=safe_input
)
# Also check LLM output before returning to user
output_check = defense.check(response)
if not output_check.passed:
return "I apologize, but I cannot provide that information."
return output_check.sanitized or response
Fraud Detection Alerts¶
from oxideshield import pattern_guard, toxicity_guard
# Patterns specific to financial fraud
fraud_patterns = [
r"how to commit.*fraud",
r"hide money from",
r"fake insurance claim",
r"money laundering",
r"avoid.*taxes illegally",
]
pattern = pattern_guard() # Default patterns + fraud patterns
toxicity = toxicity_guard(threshold=0.5)
def check_for_fraud_intent(user_input: str) -> dict:
"""Check if input suggests fraudulent intent."""
pattern_result = pattern.check(user_input)
toxicity_result = toxicity.check(user_input)
if not pattern_result.passed or not toxicity_result.passed:
return {
"flagged": True,
"reason": pattern_result.reason or toxicity_result.reason,
"requires_review": True
}
return {"flagged": False}
PII Handling for Compliance¶
SOX Compliance¶
For Sarbanes-Oxley, you need an audit trail without exposing raw PII:
from oxideshield import pii_guard
# Hash redaction creates consistent, audit-friendly tokens
guard = pii_guard(redaction="hash")
input_text = "Transfer $50,000 to account 123456789"
result = guard.check(input_text)
# Result: "Transfer $50,000 to account [ACCOUNT:a1b2c3d4]"
# The hash is consistent - same account always produces same token
# Auditors can correlate without seeing raw data
PCI-DSS Compliance¶
For credit card data, use masking:
guard = pii_guard(redaction="mask")
input_text = "Charge card 4111-1111-1111-1111 for $500"
result = guard.check(input_text)
# Result: "Charge card 4***-****-****-1111 for $500"
# Last 4 digits preserved for verification
Monitoring and Alerting¶
from oxideshield import multi_layer_defense
import logging
defense = multi_layer_defense(
enable_length=True,
enable_pii=True,
strategy="all" # Run all guards for complete logging
)
def process_with_monitoring(user_input: str):
result = defense.check(user_input)
# Log all security events
if not result.passed:
logging.warning(
"Security event",
extra={
"event_type": "blocked_input",
"guard": result.guard_name,
"action": result.action,
"reason": result.reason,
"match_count": result.match_count
}
)
# Alert on repeated attempts
if result.match_count > 3:
send_security_alert(
severity="HIGH",
message=f"Multiple attack patterns detected: {result.reason}"
)
return result
Performance Considerations¶
Financial applications often have strict latency requirements:
| Configuration | Latency | Security Level |
|---|---|---|
| Pattern + Length only | <3ms | Basic |
| + PII + Toxicity | <15ms | Standard |
| + SemanticSimilarity | <35ms | Enhanced |
| + MLClassifier | <50ms | Maximum |
For high-frequency trading or real-time applications, consider:
# Fast path for low-risk operations
fast_defense = multi_layer_defense(
enable_length=True,
enable_length=True,
strategy="fail_fast"
)
# Full defense for high-risk operations (transfers, account changes)
full_defense = multi_layer_defense(
enable_length=True,
enable_pii=True,
enable_toxicity=True,
enable_length=True,
strategy="all"
)
Next Steps¶
- PIIGuard Configuration - Detailed PII detection options
- Compliance Reports - Generate SOX/PCI compliance reports
- Proxy Gateway - Centralized protection