
# BiasGuard

BiasGuard detects biased, stereotypical, or discriminatory language in LLM inputs and outputs across multiple bias categories.

## Overview

| Property | Value |
| --- | --- |
| Latency | <5 ms |
| Memory | 100 KB |
| Async | No |
| ML Required | No |
| License | Community |

## Categories

| Category | Description | Examples |
| --- | --- | --- |
| Gender | Gender-based stereotypes and discriminatory language | "Women can't lead" |
| Racial | Racial/ethnic stereotypes and slurs | Stereotypical generalizations |
| Age | Ageist language and stereotypes | "Too old to learn" |
| Disability | Ableist language and stereotypes | Derogatory terms |
| Religious | Religious bias and stereotypes | Religious generalizations |
| Socioeconomic | Class-based stereotypes and bias | Class assumptions |
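Since BiasGuard runs without ML (see the Overview table), a category-scoped pattern match is one plausible detection mechanism. The sketch below is illustrative only — the pattern table and function names are hypothetical, not BiasGuard's actual rule set:

```python
import re

# Hypothetical per-category patterns; the real rule set is much larger.
BIAS_PATTERNS = {
    "gender": [r"\bwomen (?:are not|aren't|can't)\b"],
    "age": [r"\btoo old to\b"],
}

def detect_bias(text, categories=("gender", "age")):
    """Return the list of categories whose patterns match `text`."""
    hits = []
    lowered = text.lower()
    for category in categories:
        for pattern in BIAS_PATTERNS.get(category, []):
            if re.search(pattern, lowered):
                hits.append(category)
                break  # one match is enough to flag the category
    return hits

print(detect_bias("Women are not suited for leadership roles"))  # → ['gender']
```

Restricting the scan to a category subset mirrors the `categories` list in the Configuration section below.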

## Usage

### Rust

```rust
use oxideshield_guard::guards::BiasGuard;
use oxideshield_guard::{Guard, GuardAction};

let guard = BiasGuard::new("bias")
    .with_action(GuardAction::Block);

let result = guard.check("Women are not suited for leadership roles");
assert!(!result.passed);
```

### Python

```python
from oxideshield import bias_guard

guard = bias_guard(action="block")
result = guard.check("Women are not suited for leadership roles")
assert not result.passed
```

## Configuration

```yaml
guards:
  - type: bias
    action: block
    categories:
      - gender
      - racial
      - age
      - disability
      - religious
      - socioeconomic
```
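A parsed guard entry from the YAML above maps to a simple structure that can be validated before it reaches the engine. This is a hedged sketch — the schema is inferred from the snippet, and the action set beyond `block` is assumed, not documented here:

```python
VALID_CATEGORIES = {
    "gender", "racial", "age", "disability", "religious", "socioeconomic",
}
# Assumed action vocabulary; only "block" appears in the snippet above.
VALID_ACTIONS = {"block", "warn", "allow"}

def validate_bias_guard(cfg):
    """Raise ValueError if a parsed bias-guard entry is malformed."""
    if cfg.get("type") != "bias":
        raise ValueError("expected type: bias")
    if cfg.get("action") not in VALID_ACTIONS:
        raise ValueError(f"unknown action: {cfg.get('action')!r}")
    unknown = set(cfg.get("categories", [])) - VALID_CATEGORIES
    if unknown:
        raise ValueError(f"unknown categories: {sorted(unknown)}")
    return cfg

cfg = {"type": "bias", "action": "block", "categories": ["gender", "racial"]}
validate_bias_guard(cfg)  # passes: all fields are within the known vocabulary
```

Validating against the six category names from the Categories table catches typos (e.g. `racal`) at load time rather than silently disabling a check.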

## Research References