Integrations

Vijil Evaluate is integrated with a number of leading LLM providers.

To evaluate serverless LLM endpoints hosted on any of these providers, follow the directions in the quickstart tutorial, substituting the model hub and model name values from the table below (see the sketch after the table).

Provider        model_hub    model_name
OpenAI          openai       List of Models
Together AI     together     List of Models
Mistral AI      mistral      List of Models
Fireworks AI    fireworks    List of Models
NVIDIA NIM      nvidia       List of Models
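The following is a minimal sketch of running an evaluation against a serverless endpoint on one of the supported hubs, assuming the Python client interface described in the quickstart; the API key, model name, and harness name shown are placeholders, and the exact method names may differ from your client version.

```python
# Sketch: evaluate a serverless LLM endpoint on a supported hub.
# Assumes the Vijil Python client from the quickstart tutorial.
from vijil import Vijil

client = Vijil(api_key="YOUR_VIJIL_API_KEY")  # placeholder API key

# Swap model_hub / model_name using the table above,
# e.g. model_hub="together" or model_hub="mistral".
evaluation = client.evaluations.create(
    model_hub="openai",         # value from the model_hub column
    model_name="gpt-4o-mini",   # any model listed for that hub (placeholder)
    harnesses=["trust_score"],  # placeholder harness name
)
print(evaluation)
```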

We also support a number of cloud services, giving you the flexibility to evaluate agents accessible through custom endpoints.