Use Lytix to manage your evaluation and usage directly with the OpenAI SDK. Gain access to models across providers and manage your usage and billing.
Quickstart
Prerequisite: First, create a Lytix account here.
Create a Lytix API Key
Start by creating and noting down a Lytix API key. See instructions here.
Update your OpenAI SDK
With a couple of lines of changes you can start using Lytix to manage your evaluation and usage.
from openai import OpenAI

client = OpenAI(
    # Update your base URL to the Lytix proxy
    base_url="https://api.lytix.co/proxy/v2/openai",
    # Update your API key to your Lytix API key
    api_key="$LYTIX_API_KEY",
    default_headers={
        # Move your OpenAI key to the default headers
        "openaiKey": "$OPENAI_API_KEY"
    },
)
🇪🇺 Note: You will need to use https://eu.api.lytix.co/proxy/v2/openai if you are in the EU region.
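Once the client is pointed at the proxy, requests work exactly as they do against OpenAI directly. A minimal sketch (the model and prompt here are just placeholders):

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "Say this is a test"}
    ],
)
print(response.choices[0].message.content)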
Optional Fields
Optimodel supports a variety of optional parameters to help you get the best results.
response = client.chat.completions.create(
    ...,
    extra_body={
        "lytix-fallbackModels": ...,
        ...
    },
    extra_headers={
        "sessionId": "1234567890",
        "userId": "sid@lytix.co",
        "workflowName": "test-workflow",
    },
)
You will need the optimodel-py (Python) or @lytix/client (Node.js) package to use these parameters.
pip3 install optimodel-py
The following optional parameters are supported:
Guards
lytix-guards: Pass in a list of guards to apply to the request.
from optimodel_server_types import LLamaPromptGuardConfig

extra_body={
    "lytix-guards": [LLamaPromptGuardConfig(
        guardName="LLamaPromptGuard",
        jailbreakThreshold=0.9999,
        guardType="preQuery",  # You'll likely only want to guard the input here
    ).dict()]
}
See here for a list of all supported guards.
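Putting it together, the guard is passed through extra_body on a normal completion call. A minimal sketch, reusing the config above:

from optimodel_server_types import LLamaPromptGuardConfig

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Say this is a test"}],
    extra_body={
        "lytix-guards": [LLamaPromptGuardConfig(
            guardName="LLamaPromptGuard",
            jailbreakThreshold=0.9999,
            guardType="preQuery",
        ).dict()]
    },
)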
Fallback Models
lytix-fallbackModels: Pass in a list of extra models to try if the primary model fails. This can be helpful in mitigating provider outages, as shown in the sketch below.
from optimodel_server_types import ModelTypes
...
extra_body={
    "lytix-fallbackModels": [ModelTypes.claude_3_5_sonnet.name, ...]
}
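For example, to fall back from gpt-4 to claude-3-5-sonnet if OpenAI is unavailable (a sketch, assuming fallback models are attempted when the primary model fails):

from optimodel_server_types import ModelTypes

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Say this is a test"}],
    extra_body={
        # Tried if the primary model fails
        "lytix-fallbackModels": [ModelTypes.claude_3_5_sonnet.name]
    },
)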
Speed Priority
lytix-speedPriority: Pass in a speed priority to use.
extra_body={
    "lytix-speedPriority": "low"
}
If set to low, optimodel will choose the cheapest possible model across all providers (for example, if you have two providers, bedrock and anthropic, that both offer claude-3-opus, optimodel will choose the claude-3-opus model with the lowest price regardless of which provider is faster). If set to high, optimodel will choose the fastest possible model across all providers.
Provider
lytix-provider: Pass in a provider to use.
from optimodel_server_types import Providers
...
extra_body={
    "lytix-provider": Providers.bedrock.name
}
Explicitly specify a provider to use in case you have multiple providers available for a specific model and want to force a specific one.
You can also track workflows, users, and sessions to get a better understanding of your users and how they interact with your models.
SessionId
sessionId: A unique identifier for the session.
extra_headers={
    "sessionId": "1234567890"
}
UserId
userId: A unique identifier for the user.
extra_headers={
    "userId": "sid@lytix.co"
}
WorkflowName
workflowName: A unique identifier for the workflow. If this workflow does not exist, it will be created and can be viewed here.
extra_headers={
    "workflowName": "test-workflow"
}
Passing in Images
Passing images to any model uses OpenAI's syntax. Under the hood, we'll convert the syntax for whichever model you're using.
import base64

# Encode the image to base64
with open("some.png", "rb") as image_file:
    encoded_string = base64.b64encode(image_file.read())
    encoded_string = encoded_string.decode('utf-8')

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        ...
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's this image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/png;base64,{encoded_string}"
                    }
                }
            ]
        },
    ],
)
Then you can switch to a model such as claude-3-5-sonnet and pass the image in with no code changes.
from optimodel_server_types import ModelTypes

response = client.chat.completions.create(
    model=ModelTypes.claude_3_5_sonnet.name,
    messages=[
        # Same as above
        ...
    ],
)
Using Models From Other Providers
Beyond the models available on the OpenAI API, Lytix also supports a range of other models from different providers. Just add the credentials for the model/provider and you can start using them immediately.
pip3 install optimodel-py
Then just update the model field to the model you want to use.
from optimodel import ModelTypes
from openai import OpenAI

client = OpenAI(
    base_url="https://api.lytix.co/proxy/v2/openai",
    api_key="$LYTIX_API_KEY",
    default_headers={
        # Your OpenAI API key
        "openaiKey": "$OPENAI_API_KEY",
        # Add any extra credentials for the providers you want to use
        "mistralApiKey": "$MISTRAL_API_KEY"
    },
)

response = client.chat.completions.create(
    # Specify the model you want to use from the ModelTypes enum
    # (remember to use the .name attribute)
    model=ModelTypes.codestral_latest.name,
    messages=[
        {"role": "user", "content": "Say this is a test"}
    ]
)
Passing in Credentials
To pass in credentials for a provider, add them to the headers. The following is a list of credentials you can pass in:
mistralApiKey: The API key for the Mistral API.
mistralCodeStralApiKey: The API key for the Mistral Codestral API.
openaiKey: The API key for the OpenAI API.
anthropicApiKey: The API key for the Anthropic API.
groqApiKey: The API key for the Groq API.
togetherApiKey: The API key for the Together API.
geminiApiKey: The API key for the Gemini API.
Bedrock: To run models via Bedrock, three headers are required, as shown in the sketch after this list:
awsAccessKeyId: The access key ID for the AWS account.
awsSecretKey: The secret access key for the AWS account.
awsRegion: The region for the AWS account.
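For example, a client configured for Bedrock might look like the following (a sketch; the placeholder values are assumptions):

from openai import OpenAI

client = OpenAI(
    base_url="https://api.lytix.co/proxy/v2/openai",
    api_key="$LYTIX_API_KEY",
    default_headers={
        # Bedrock credentials, passed as headers
        "awsAccessKeyId": "$AWS_ACCESS_KEY_ID",
        "awsSecretKey": "$AWS_SECRET_ACCESS_KEY",
        "awsRegion": "us-east-1",
    },
)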
Supported Models & Providers
You can see the up-to-date list of models and providers here by clicking "Available Models".