Documentation Index
Fetch the complete documentation index at: https://docs.vijil.ai/llms.txt
Use this file to discover all available pages before exploring further.
Setup
To begin with, let’s set up and initialize aDome object in your Python environment.
This may install models and perform some initial setup the first time it is invoked.
Dome class. Later in this example
we show how to create your own configurations.
Scan strings
The default configuration blocks prompt injection and jailbreak attacks in inputs, and toxic content in inputs and outputs. Let’s pass a prompt injection string to the input Guard in our initialized Dome and see if it gets detected.traceback() string and the is_safe() flag, Dome provides a guarded_response() method that you can use to obtain an output from Dome. Depending on your Guard’s configuration, this is either a blocked message, the original string that was passed through the Guard, or possibly a santized version of the string passed to the Guard.
Configuring Dome
Dome can be initialized via dictionaries or TOML files. A full guide on configuring Dome can be found here.Initialization via a dict
As an example, let’s initialize a Dome using an input Guard comprising of a single Guard which enforces a phrase banlis, and an output Guard that detects toxicity and PII. For PII, you customize theprivacy-presidio Guard using
anonymizethat results in the PII Guard obfuscating PII,allow_list_fileswhich is a list of allowlisted files that has data not to be obfuscated.
Banlist
The following query is not caught by larger models, but is caught via our banlist Guard.Personally Identifiabile Infomration (PII)
The following is a sample PII query that gets censored.Allowlisted Personally Identifiabile Infomration (PII)
The PII allowlist enabled in the config allows us to customize what terms we can exclude from being classified as PII. Currently this contains the stringshelp@ally.com, ally.com, (877) 247-2559.
Here is what happens when text containing the above strings is scanned using Dome.