Evaluate and protect any Python agent with Vijil Diamond and Dome.
Not using LangChain or Google ADK? Vijil works with any Python function that processes text. Whether you're calling OpenAI directly, using a custom orchestration layer, or wrapping a fine-tuned model, you can evaluate and protect your agent the same way. This guide shows you how to wrap your custom agent function for evaluation, then add runtime guardrails to intercept attacks in production.
Vijil works with any Python function that takes a string and returns a string. The LocalAgentExecutor wraps your agent with adapters to translate between Vijil's API format and your agent's interface.
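If your model client exposes only a synchronous call, you can still meet the async string-in, string-out contract by offloading the call to a worker thread. A minimal sketch (`call_sync_model` is a hypothetical stand-in for your SDK call):

```python
import asyncio

def call_sync_model(prompt: str) -> str:
    # Hypothetical blocking client call; replace with your SDK.
    return f"Response to: {prompt}"

async def my_agent(prompt: str) -> str:
    # Run the blocking call in a worker thread so the event loop
    # driving the evaluation is never blocked.
    return await asyncio.to_thread(call_sync_model, prompt)

print(asyncio.run(my_agent("hello")))
```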
```python
import os

from vijil import Vijil

# Your custom agent function (must be async)
async def my_agent(prompt: str) -> str:
    # Your agent logic here
    return f"Response to: {prompt}"

# Adapters translate between Vijil's API format and your agent's
# interface; for a plain string-in, string-out agent, simple
# pass-throughs are enough.
def input_adapter(request):
    return request

def output_adapter(response):
    return response

# Create client and executor
vijil = Vijil(api_key=os.getenv("VIJIL_API_KEY"))
local_agent = vijil.local_agents.create(
    agent_function=my_agent,
    input_adapter=input_adapter,
    output_adapter=output_adapter,
)

# Run the evaluation
vijil.local_agents.evaluate(
    agent_name="my-custom-agent",
    evaluation_name="Trust Score Evaluation",
    agent=local_agent,
    harnesses=["trust_score"],  # Or specific harnesses
    rate_limit=30,
    rate_limit_interval=1,
)
```
The evaluation runs automatically, showing live progress. Press Ctrl+C to cancel if needed.
```python
from vijil_dome import Dome

# Create Dome instance with default guards
dome = Dome()
input_guardrail, output_guardrail = dome.get_guardrails()

async def protected_agent(prompt: str) -> str:
    # Check input
    input_result = await input_guardrail.aguard(prompt)
    if input_result.flagged:
        return "I can't process that request."

    # Run your agent
    response = await my_agent(prompt)

    # Check output
    output_result = await output_guardrail.aguard(response)
    if output_result.flagged:
        return "I can't provide that response."

    return response
```
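The input-guard/agent/output-guard sandwich doesn't depend on Dome specifically, so you can unit-test your wrapper without loading detection models. A sketch with stub guardrails (`StubGuard` and `GuardResult` are hypothetical placeholders, not Dome classes):

```python
import asyncio
from dataclasses import dataclass

@dataclass
class GuardResult:
    flagged: bool

class StubGuard:
    """Placeholder guardrail: flags any text containing a blocked term."""
    def __init__(self, blocked: str):
        self.blocked = blocked

    async def aguard(self, text: str) -> GuardResult:
        return GuardResult(flagged=self.blocked in text.lower())

input_guardrail = StubGuard(blocked="ignore previous instructions")
output_guardrail = StubGuard(blocked="secret")

async def my_agent(prompt: str) -> str:
    return f"Response to: {prompt}"

async def protected_agent(prompt: str) -> str:
    if (await input_guardrail.aguard(prompt)).flagged:
        return "I can't process that request."
    response = await my_agent(prompt)
    if (await output_guardrail.aguard(response)).flagged:
        return "I can't provide that response."
    return response
```

A flagged input short-circuits to the refusal message before your agent ever runs; a clean prompt flows through unchanged.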
For non-async code, use the synchronous guard method:
```python
def protected_agent_sync(prompt: str) -> str:
    # Check input
    input_result = input_guardrail.guard(prompt)
    if input_result.flagged:
        return "I can't process that request."

    # Run your agent
    response = my_agent_sync(prompt)

    # Check output
    output_result = output_guardrail.guard(response)
    if output_result.flagged:
        return "I can't provide that response."

    return response
```
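If you protect several agent functions, the same sandwich can be factored into a reusable decorator. A sketch with a placeholder check (`is_flagged` is hypothetical, standing in for the guardrail calls above):

```python
from functools import wraps
from typing import Callable

def is_flagged(text: str) -> bool:
    # Placeholder check standing in for a guardrail call.
    return "blocked" in text.lower()

def guarded(agent_fn: Callable[[str], str]) -> Callable[[str], str]:
    """Wrap a sync str -> str agent with input and output checks."""
    @wraps(agent_fn)
    def wrapper(prompt: str) -> str:
        if is_flagged(prompt):
            return "I can't process that request."
        response = agent_fn(prompt)
        if is_flagged(response):
            return "I can't provide that response."
        return response
    return wrapper

@guarded
def echo_agent(prompt: str) -> str:
    return f"Response to: {prompt}"
```

Applying `@guarded` keeps each agent's body free of guard boilerplate while every call is still checked on the way in and the way out.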
If your agent is already deployed with an OpenAI-compatible API, evaluate it directly without LocalAgentExecutor:
```python
# Store your API key first via console.vijil.ai or:
vijil.api_keys.create(
    name="my-hosted-agent-key",
    hub="custom",
    key="your-agent-api-key",
)

# Run evaluation against your endpoint
vijil.evaluations.create(
    api_key_name="my-hosted-agent-key",
    model_hub="custom",
    model_url="https://your-agent-endpoint.com/v1",
    model_name="your-model-name",
    harnesses=["trust_score"],
)
```