Dome’s configuration system lets you precisely control which guards run, how they execute, and what detectors they use. This guide covers all configuration options.

Configuration Hierarchy

Dome organizes protection in three levels:
Guardrail (input/output)
    └── Guard (security, moderation, privacy)
            └── Detector (specific detection method)
Each level has its own configuration options that can be customized.

Guardrail Configuration

Basic Structure

config = {
    # List of guards for each guardrail
    "input-guards": ["security-guard", "moderation-guard"],
    "output-guards": ["privacy-guard", "moderation-guard"],

    # Guardrail-level execution settings
    "input-early-exit": True,
    "input-run-parallel": False,
    "output-early-exit": True,
    "output-run-parallel": False,

    # Guard definitions (see below)
    "security-guard": { ... },
    "moderation-guard": { ... },
    "privacy-guard": { ... }
}

Guardrail Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| input-guards | List | [] | Guards to run on input |
| output-guards | List | [] | Guards to run on output |
| input-early-exit | Boolean | True | Stop on first input flag |
| input-run-parallel | Boolean | False | Run input guards in parallel |
| output-early-exit | Boolean | True | Stop on first output flag |
| output-run-parallel | Boolean | False | Run output guards in parallel |

Execution Modes

Early Exit (default): Stops processing when the first guard flags content. Faster for rejecting clearly malicious input.
config = {
    "input-early-exit": True,  # Stop at first flag
    "input-guards": ["security-guard", "moderation-guard"]
}
# If security-guard flags, moderation-guard won't run
Complete Execution: Runs all guards regardless of flags. Useful for comprehensive logging.
config = {
    "input-early-exit": False,  # Run all guards
    "input-guards": ["security-guard", "moderation-guard"]
}
# Both guards always run, all flags recorded
Parallel Execution: Runs guards simultaneously for lower latency.
config = {
    "input-run-parallel": True,
    "input-early-exit": False  # Often paired with parallel
}
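The three modes can be sketched in plain Python. The guard functions below are hypothetical stand-ins (each returns True when it flags content), not Dome's internals; the sketch only illustrates the control flow each setting selects.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical guards: each returns True if it flags the content.
def security_guard(text):
    return "ignore previous instructions" in text.lower()

def moderation_guard(text):
    return "badword" in text.lower()

GUARDS = [security_guard, moderation_guard]

def run_early_exit(text):
    """input-early-exit = True: stop at the first guard that flags."""
    for guard in GUARDS:
        if guard(text):
            return [guard.__name__]  # later guards never run
    return []

def run_complete(text):
    """input-early-exit = False: run every guard, record all flags."""
    return [g.__name__ for g in GUARDS if g(text)]

def run_parallel(text):
    """input-run-parallel = True: run all guards concurrently."""
    with ThreadPoolExecutor() as pool:
        flags = list(pool.map(lambda g: g(text), GUARDS))
    return [g.__name__ for g, flagged in zip(GUARDS, flags) if flagged]
```

Note that parallel execution runs every guard to completion, which is why it is usually paired with early-exit disabled: there is no sequential order at which to stop.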

Guard Configuration

Structure

Each guard groups detectors of the same type:
config = {
    "input-guards": ["my-security-guard"],

    "my-security-guard": {
        "type": "security",                    # Required: guard type
        "methods": ["prompt-injection-mbert"], # Required: detectors
        "early-exit": True,                    # Optional
        "run-parallel": False,                 # Optional
        "blocked-response": "Request blocked"  # Optional
    }
}
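Because guards are referenced by name, it is easy to list a guard in input-guards without defining it, or to omit a required key. A small consistency check can catch this before the config reaches Dome. This is an illustrative helper, not part of Dome's API, and Dome performs its own validation:

```python
def validate_config(config):
    """Return a list of problems with a Dome-style config dict.

    Checks that every guard referenced in the guardrail lists is
    defined, and that each definition has the required keys.
    An empty list means the config looks consistent.
    """
    problems = []
    referenced = config.get("input-guards", []) + config.get("output-guards", [])
    for name in referenced:
        guard = config.get(name)
        if not isinstance(guard, dict):
            problems.append(f"guard '{name}' is referenced but not defined")
            continue
        for key in ("type", "methods"):
            if key not in guard:
                problems.append(f"guard '{name}' is missing required key '{key}'")
    return problems

config = {
    "input-guards": ["my-security-guard"],
    "my-security-guard": {"type": "security", "methods": ["prompt-injection-mbert"]},
}
```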

Guard Types

| Type | Use Case | Available Detectors |
| --- | --- | --- |
| security | Adversarial attacks | Prompt injection, encoding detection |
| moderation | Harmful content | Toxicity, profanity, hate speech |
| privacy | Sensitive data | PII detection, secrets |
| integrity | Data quality | Format validation (experimental) |
| generic | Custom logic | User-defined detectors |

Guard Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| type | String | Required | Guard category |
| methods | List | Required | Detectors to use |
| early-exit | Boolean | True | Stop on first detector flag |
| run-parallel | Boolean | False | Run detectors in parallel |
| blocked-response | String | Default | Custom block message |

Detector Configuration

Available Detectors

Security Detectors:

| Detector | Description | Options |
| --- | --- | --- |
| prompt-injection-mbert | Multilingual BERT model | threshold |
| prompt-injection-deberta-v3-base | DeBERTa v3 model | threshold |
| encoding-heuristics | Base64, Unicode tricks | None |
| security-embeddings | Semantic similarity | threshold, top_k |
| security-llm | LLM-based detection | model_name |

Moderation Detectors:

| Detector | Description | Options |
| --- | --- | --- |
| moderation-flashtext | Fast keyword matching | wordlist |
| moderation-deberta | Neural toxicity classifier | threshold |
| moderations-oai-api | OpenAI Moderation API | None |
| moderation-llamaguard | Llama Guard model | threshold |

Privacy Detectors:

| Detector | Description | Options |
| --- | --- | --- |
| privacy-presidio | PII entity recognition | entities, threshold |
| detect-secrets | Credential detection | None |

Detector-Level Configuration

Configure individual detectors within a guard:
config = {
    "input-guards": ["security-guard"],

    "security-guard": {
        "type": "security",
        "methods": ["prompt-injection-mbert", "security-llm"],

        # Detector-specific settings
        "prompt-injection-mbert": {
            "threshold": 0.8  # Confidence threshold
        },
        "security-llm": {
            "model_name": "gpt-4o"  # Model to use
        }
    }
}

Common Detector Options

Threshold: Confidence score required to flag (0.0-1.0)
"prompt-injection-mbert": {
    "threshold": 0.9  # Higher = fewer false positives
}
Model Selection: For LLM-based detectors
"security-llm": {
    "model_name": "gpt-4o-mini"  # Faster, cheaper
}
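A threshold gates a detector's confidence score: content is flagged only when the score reaches the threshold, so raising it trades false positives for false negatives. Whether Dome's comparison is inclusive is an assumption here; the sketch just shows the trade-off:

```python
def is_flagged(score, threshold):
    # Higher thresholds demand more confidence before flagging:
    # fewer false positives, but more missed attacks.
    return score >= threshold

score = 0.85  # hypothetical detector confidence for one input
```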

TOML Configuration

Store configuration in a file for easier management:
# dome-config.toml
[guardrail]
input-guards = ["security-guard", "moderation-guard"]
output-guards = ["privacy-guard"]
input-early-exit = true
input-run-parallel = false

[security-guard]
type = "security"
methods = ["prompt-injection-deberta-v3-base", "encoding-heuristics"]
early-exit = true

[security-guard.prompt-injection-deberta-v3-base]
threshold = 0.85

[moderation-guard]
type = "moderation"
methods = ["moderation-flashtext", "moderation-deberta"]

[privacy-guard]
type = "privacy"
methods = ["privacy-presidio"]

[privacy-guard.privacy-presidio]
entities = ["PERSON", "EMAIL", "PHONE_NUMBER", "CREDIT_CARD"]
Load from file:
from vijil_dome import Dome

dome = Dome("dome-config.toml")
Note: TOML booleans are lowercase (true and false), unlike Python's True and False.

Configuration Examples

Minimal Security

Fast, low-latency protection:
config = {
    "input-guards": ["security-guard"],
    "input-early-exit": True,

    "security-guard": {
        "type": "security",
        "methods": ["prompt-injection-mbert"]
    }
}

Comprehensive Protection

Full coverage for sensitive applications:
config = {
    "input-guards": ["security-guard", "moderation-guard"],
    "output-guards": ["privacy-guard", "moderation-guard"],
    "input-early-exit": False,
    "output-early-exit": False,

    "security-guard": {
        "type": "security",
        "methods": [
            "prompt-injection-deberta-v3-base",
            "encoding-heuristics",
            "security-embeddings"
        ],
        "run-parallel": True
    },
    "moderation-guard": {
        "type": "moderation",
        "methods": ["moderation-deberta", "moderations-oai-api"],
        "run-parallel": True
    },
    "privacy-guard": {
        "type": "privacy",
        "methods": ["privacy-presidio", "detect-secrets"]
    }
}

Privacy-Focused

For healthcare, finance, or regulated industries:
config = {
    "input-guards": ["security-guard"],
    "output-guards": ["privacy-guard"],

    "security-guard": {
        "type": "security",
        "methods": ["prompt-injection-deberta-v3-base"]
    },
    "privacy-guard": {
        "type": "privacy",
        "methods": ["privacy-presidio"],
        "privacy-presidio": {
            "entities": [
                "PERSON", "EMAIL", "PHONE_NUMBER",
                "CREDIT_CARD", "US_SSN", "MEDICAL_LICENSE"
            ],
            "threshold": 0.7
        }
    }
}

Low-Latency Production

Optimized for speed:
config = {
    "input-guards": ["fast-security"],
    "output-guards": ["fast-moderation"],
    "input-early-exit": True,
    "output-early-exit": True,

    "fast-security": {
        "type": "security",
        "methods": ["prompt-injection-mbert"],  # Fastest model
        "early-exit": True
    },
    "fast-moderation": {
        "type": "moderation",
        "methods": ["moderation-flashtext"],  # Keyword-based, very fast
        "early-exit": True
    }
}

Loading Configuration from Console

Pull configuration from your Vijil Console setup:
import os
from vijil_dome import Dome

dome = Dome.create_from_vijil_agent(
    agent_id="your-agent-id",
    api_key=os.environ["VIJIL_API_KEY"]
)
This keeps your code and configuration in sync across environments.

Next Steps

- Using Guardrails: runtime integration patterns
- Custom Detectors: build your own detectors
- Observability: monitoring and tracing
- Framework Guides: framework-specific integration
Last modified on March 19, 2026