With a couple of lines of code, you can start using Lytix to manage evaluations and track usage via the Vercel AI SDK.

Prerequisite: First, create a Lytix account here

Anthropic

Step 1: Install the SDK

npm i @ai-sdk/anthropic ai

Step 2: Update Your Client

import { createAnthropic } from "@ai-sdk/anthropic";

const anthropic = createAnthropic({
    baseURL: "https://api.lytix.co/proxy/v1/anthropic",
    apiKey: "$LYTIX_API_KEY",
    headers: {
      anthropicApiKey: "$ANTHROPIC_API_KEY",
    },
});

🇪🇺 Note: You will need to use https://eu.api.lytix.co/proxy/v1/anthropic if you are in the EU region.

Step 3: Start Querying

import { generateText } from "ai";

const response = await generateText({
    // Choose your model here
    model: anthropic("claude-3-5-sonnet-20240620"),

    /**
     * Either
     */
    prompt: "What is the meaning of life?",
    system: "Always respond with a JSON object",

    /**
     * Or
     */
    messages: [
        { role: "system", content: "Always respond with a JSON object" },
        { role: "user", content: "What is the meaning of life?" },
    ],
    temperature: 0.5,
    maxTokens: 1000,
    headers: {
        // ...see extra headers below
    },
});

OpenAI

Step 1: Install the SDK

npm i @ai-sdk/openai ai

Step 2: Update Your Client

import { createOpenAI } from "@ai-sdk/openai";

const openai = createOpenAI({
    baseURL: "https://api.lytix.co/proxy/v1/openai",
    apiKey: "$LYTIX_API_KEY",
    headers: {
      openaiKey: "$OPENAI_API_KEY",
    },
});

🇪🇺 Note: You will need to use https://eu.api.lytix.co/proxy/v1/openai if you are in the EU region.

Step 3: Start Querying

import { generateText } from "ai";

const response = await generateText({
    // Choose your model here
    model: openai("gpt-4o-mini"),

    /**
     * Either
     */
    prompt: "What is the meaning of life?",
    system: "Always respond with a JSON object",

    /**
     * Or
     */
    messages: [
        { role: "system", content: "Always respond with a JSON object" },
        { role: "user", content: "What is the meaning of life?" },
    ],
    temperature: 0.5,
    maxTokens: 1000,
    headers: {
        // ...see extra headers below
    },
});

Groq

Step 1: Install the SDK

npm i @ai-sdk/groq ai

Step 2: Update Your Client

import { createGroq } from "@ai-sdk/groq";

const groq = createGroq({
    baseURL: "https://api.lytix.co/proxy/v2/groq",
    apiKey: "$LYTIX_API_KEY",
    headers: {
      groqApiKey: "$GROQ_API_KEY",
    },
});

🇪🇺 Note: You will need to use https://eu.api.lytix.co/proxy/v2/groq if you are in the EU region.
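The base URLs above follow a consistent pattern: an optional `eu.` host prefix for the EU region, and a provider-specific path (`v1` for Anthropic and OpenAI, `v2` for Groq). As an illustrative sketch only (this helper and its `eu` flag are not part of the Lytix SDK), the URL selection can be written as:

```typescript
// Hypothetical helper: build the Lytix proxy base URL for a provider/region.
// URLs are taken from the snippets in this guide; the function itself is ours.
function lytixBaseURL(
  provider: "anthropic" | "openai" | "groq",
  eu = false
): string {
  const host = eu ? "https://eu.api.lytix.co" : "https://api.lytix.co";
  // Groq is proxied under v2; Anthropic and OpenAI under v1.
  const version = provider === "groq" ? "v2" : "v1";
  return `${host}/proxy/${version}/${provider}`;
}
```

The returned string can be passed as the `baseURL` when creating the client, as shown in each provider section.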

Step 3: Start Querying

import { generateText } from "ai";

const response = await generateText({
    // Choose your model here
    model: groq("gemma2-9b-it"),

    /**
     * Either
     */
    prompt: "What is the meaning of life?",
    system: "Always respond with a JSON object",

    /**
     * Or
     */
    messages: [
        { role: "system", content: "Always respond with a JSON object" },
        { role: "user", content: "What is the meaning of life?" },
    ],
    temperature: 0.5,
    maxTokens: 1000,
    headers: {
        // ...see extra headers below
    },
});

Extra Headers

When querying, you can pass the following headers to the client:

  • workflowName - The name of the workflow to create (see workflow setup for more information)
  • userId - The ID of the user making the request
  • sessionId - The ID of the session, used to group related calls together
  • cacheTTL - The number of seconds to cache the response for; see caching to set up your cache
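As a sketch of how these might be assembled, here is a small helper that collects the optional values into the `headers` object passed to generateText. The helper and its option names are illustrative (not part of the Lytix SDK); the header names themselves come from the list above. Note that header values must be strings, so the numeric `cacheTTL` is converted.

```typescript
// Options mirroring the optional Lytix headers listed above.
interface LytixCallOptions {
  workflowName?: string;
  userId?: string;
  sessionId?: string;
  cacheTTL?: number; // seconds
}

// Hypothetical helper: build the extra-headers object, skipping unset options
// and stringifying cacheTTL so every header value is a string.
function buildLytixHeaders(opts: LytixCallOptions): Record<string, string> {
  const headers: Record<string, string> = {};
  if (opts.workflowName !== undefined) headers.workflowName = opts.workflowName;
  if (opts.userId !== undefined) headers.userId = opts.userId;
  if (opts.sessionId !== undefined) headers.sessionId = opts.sessionId;
  if (opts.cacheTTL !== undefined) headers.cacheTTL = String(opts.cacheTTL);
  return headers;
}
```

The result can be spread into the `headers` field shown in the generateText snippets above, e.g. `headers: buildLytixHeaders({ userId: "user-123", cacheTTL: 60 })`.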