Workflows are a way to group and analyze your LLM data. Not all calls to a given model should be evaluated the same way, so workflows let you define different evaluation criteria for different groups of calls.

In addition to high-level metrics, workflows let you define custom evaluation criteria for your LLM data.

Set Up a Workflow

Workflows are created the first time data is pushed and tagged with a given workflow name. There are three ways to create a workflow:

Use an API Key

Use an API key to automatically push data to a workflow. Learn more here.

Use Optimodel to push data to a workflow (see code below)

import asyncio

from optimodel import queryModel, ModelTypes, ModelMessage, ModelMessageContentEntry


async def main():
    # queryModel is a coroutine, so it must be awaited inside an async function
    response = await queryModel(
        # Just set this field to tag the call with a workflow
        workflowName="test-workflow",
        model=ModelTypes.llama_3_70b_instruct,
        messages=[
            ModelMessage(
                role="system",
                content="Always respond with a JSON object",
            ),
            ModelMessage(role="user", content=[
                ModelMessageContentEntry(type="text", text="Hello how are you again?"),
            ]),
        ],
    )
    return response


asyncio.run(main())
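
With Optimodel, workflowName is a first-class parameter on queryModel. If you are not using Optimodel's query function, you can pass the same value as a request header instead, as the next example shows.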

Use the native OpenAI client to push data to a workflow

from openai import OpenAI
from optimodel import ModelTypes

# Assumption: `client` must be pointed at your logging proxy/gateway for the
# workflowName header to be picked up; swap in the base_url and api_key from your setup.
client = OpenAI(
    base_url="https://your-proxy-url/v1",  # placeholder
    api_key="YOUR_API_KEY",                # placeholder
)

response = client.chat.completions.create(
    model=ModelTypes.mistral_large_latest.name,
    temperature=0,
    max_tokens=1000,
    messages=[
        {"role": "system", "content": "You are a helpful assistant. Always JSON"},
        {"role": "user", "content": "How are you?"},
    ],
    extra_headers={
        # Set this field to tag the call with a workflow
        "workflowName": "Example workflow"
    },
)
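
Whichever method you use, the workflow name is the only field you need to set: the first request tagged with a new name creates the workflow, and later requests carrying the same name are grouped under it for analysis.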