Detection Categories
Register your detector with one of these categories:

| Category | Use Case |
|---|---|
| `DetectionCategory.Security` | Adversarial attacks, injections |
| `DetectionCategory.Moderation` | Harmful content, toxicity |
| `DetectionCategory.Privacy` | PII, secrets, sensitive data |
| `DetectionCategory.Integrity` | Data quality, format validation |
| `DetectionCategory.Generic` | Anything else |
Detection Result Format
The `detect` method must return a tuple of `(flagged: bool, metadata: dict)`.
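A minimal detector honoring this contract might look like the sketch below. The class name, constructor, and matching logic are hypothetical illustrations; only the `(flagged, metadata)` return shape and the `query_string`/`response_string` metadata keys come from the documented contract.

```python
from typing import Dict, List, Tuple

class KeywordDetector:
    """Hypothetical custom detector; only the (flagged, metadata)
    return contract comes from the documentation."""

    def __init__(self, blocked_terms: List[str]):
        self.blocked_terms = [t.lower() for t in blocked_terms]

    def detect(self, query_string: str) -> Tuple[bool, Dict]:
        matched = [t for t in self.blocked_terms if t in query_string.lower()]
        flagged = bool(matched)
        # Always include query_string and response_string in metadata
        # so the guardrail can operate on them downstream.
        metadata = {
            "query_string": query_string,
            "response_string": query_string,
            "matched_terms": matched,
        }
        return flagged, metadata
```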
Always include `query_string` and `response_string` in the metadata for proper guardrail operation.

Using Custom Detectors
- Define before instantiation: custom detectors must be defined before creating the Dome instance.
- Import from separate file: for cleaner organization, define detectors in a separate module.
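The ordering requirement can be illustrated with a self-contained sketch. The registry, decorator, and `Engine` class below are stand-ins, not the actual Dome API: the point is that an engine which snapshots available detectors at construction time will never see a detector defined afterward.

```python
_DETECTOR_REGISTRY = {}

def register_detector(name):
    """Hypothetical registration decorator: detectors must be
    registered before the engine that looks them up is created."""
    def wrap(cls):
        _DETECTOR_REGISTRY[name] = cls
        return cls
    return wrap

@register_detector("all-caps")
class AllCapsDetector:
    def detect(self, query_string):
        flagged = query_string.isupper()
        return flagged, {
            "query_string": query_string,
            "response_string": query_string,
        }

class Engine:
    """Stand-in for a Dome-like engine: it snapshots the registry
    at construction time, which is why detectors defined later
    are not picked up."""
    def __init__(self):
        self.detectors = {name: cls() for name, cls in _DETECTOR_REGISTRY.items()}
```

In a real project, the decorated detector classes would live in their own module that is imported before the engine is instantiated.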
Example Detectors
- Rate Limiter: Track request frequency per user
- Regex Pattern Detector: Block content matching patterns
- External API Detector: Call an external service for detection
- Semantic Similarity Detector: Block content similar to known bad examples
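As one concrete illustration from the list above, a regex pattern detector could be sketched as follows. The class is a hypothetical example, not a built-in; it simply flags content matching any configured pattern while returning the documented `(flagged, metadata)` tuple.

```python
import re
from typing import Dict, List, Tuple

class RegexPatternDetector:
    """Hypothetical sketch of the 'Regex Pattern Detector' example:
    flags content matching any configured pattern."""

    def __init__(self, patterns: List[str]):
        self.patterns = [re.compile(p, re.IGNORECASE) for p in patterns]

    def detect(self, query_string: str) -> Tuple[bool, Dict]:
        hits = [p.pattern for p in self.patterns if p.search(query_string)]
        return bool(hits), {
            "query_string": query_string,
            "response_string": query_string,
            "matched_patterns": hits,
        }
```

The same shape extends naturally to the other examples: a rate limiter would track timestamps per user in `__init__` state, and an external API detector would make the remote call inside `detect` and map its verdict onto the boolean flag.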
Work in Progress
The programmatic protection capabilities and Dome integrations are currently in private preview and subject to change.
Next Steps
- Configure Guardrails: use custom detectors in configurations
- Use Guardrails: runtime integration patterns
- Observability: monitor custom detector performance
- Protection Overview: built-in detectors reference