🚨 Note: We have fully transitioned to using the OpenAI SDK as our main interaction point. Please see here for more information.

Using lytix to manage your OptiModel server is as simple as creating a lytix API key.

Prerequisite: First, create a lytix account here.

Create a Lytix API Key

Start by creating and noting down a lytix API key. See instructions here.

(Recommended) Set your API key via the environment variable

export LX_API_KEY=<your-api-key-here>

Set your API key in the code
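Alternatively, you can set the key from inside your program before the SDK is used. This is a minimal sketch; only the LX_API_KEY variable name comes from this guide.

```python
import os

# Set the lytix API key programmatically instead of exporting it in the shell.
# Replace the placeholder with the key you created above.
os.environ["LX_API_KEY"] = "<your-api-key-here>"
```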

(Optional) Add a new Provider

Note: You can optionally skip this step and send credentials with every SDK call instead. See instructions here.

Before you can start making LLM calls, you'll first need to set up a new provider here.

Note: Access to models is limited to the providers you have set up. For example, if you have only set up OpenAI, you will not be able to call llama3 models.

Install the SDK

Call The SDK

Now you are ready to make your first call using the LX_API_KEY environment variable.
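Below is a minimal sketch of a first call built around the queryModel function referenced later in this guide. The import paths, the ModelTypes enum, the ModelMessage class, and the async call pattern are assumptions; check the SDK reference for the exact symbols.

```python
import asyncio

# Import paths, type names, and model enum values below are assumptions --
# adjust them to match the SDK reference.
from optimodel import queryModel
from optimodel_server_types import ModelMessage, ModelTypes


async def main():
    # queryModel reads LX_API_KEY from the environment (set in the step above).
    response = await queryModel(
        model=ModelTypes.llama_3_8b_instruct,
        messages=[
            ModelMessage(role="system", content="You are a helpful assistant."),
            ModelMessage(role="user", content="Hello, how are you?"),
        ],
    )
    print("Got response:", response)


if __name__ == "__main__":
    asyncio.run(main())
```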

Just remember to set LX_API_KEY as an environment variable when starting your program.

Using Local Credentials

If you want to use local credentials, you can do so by passing an array of credentials to the queryModel function; see the sketch after the list below.

Each item in credentials can be any of the following:

AWS Credentials

OpenAI

Anthropic

TogetherAI

Groq

MistralAI
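As referenced above, here is a sketch of sending credentials directly with a call instead of configuring the provider server-side. The credentials parameter comes from this guide; the credential class name (OpenAICredentials) and its fields, along with the import paths and enum values, are assumptions, so match them to the types the SDK actually exports.

```python
import asyncio
import os

# Import paths, credential class names, and enum values are assumptions --
# check the SDK reference for the exact names.
from optimodel import queryModel
from optimodel_server_types import ModelMessage, ModelTypes, OpenAICredentials


async def main():
    response = await queryModel(
        model=ModelTypes.gpt_4o_mini,
        messages=[ModelMessage(role="user", content="Hello!")],
        # Each entry can be any of the provider credential types listed above.
        credentials=[OpenAICredentials(openAiKey=os.environ["OPENAI_API_KEY"])],
    )
    print(response)


if __name__ == "__main__":
    asyncio.run(main())
```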

Extra Parameters

The following extra parameters can be passed to the queryModel function (a combined sketch follows the list):

speedPriority: This can be used to control how OptiModel should prioritize the request. If set to high, it will not optimize for cost.

validator: This is a function that will be used to validate the response. If the validator returns False, the request will be retried. Note: You must pass fallbackModels if you use a validator.

fallbackModels: This is a list of other models to fall back to if the first model fails the validator.

maxGenLen: This is the maximum length of the response; if the model response is longer than this, it will be truncated. The value is checked against the provider configuration, so if you pass, for example, 1 million and no provider can generate a response of that length, the request will fail.

jsonMode: This will enable JSON mode for the request. This is useful if you want to pass in a JSON object as a prompt.

provider: You can optionally force a specific provider to be used. This is useful if you have multiple providers set up and want to use a particular one.

userId: You can optionally pass in a userId to track user requests across the Lytix platform. This is often a unique user identifier.

sessionId: You can optionally pass in a sessionId to track sessions or workflows. Events with the same sessionId will be grouped together.

workflowName: You can optionally pass in a workflowName to track sessions or workflows. Events with the same workflowName will be grouped together.
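To tie the options above together, here is a sketch of a call that combines a validator, fallback models, and several of the other parameters. The parameter names come from the list above; the import paths, model enum values, and validator signature (a single string argument) are assumptions.

```python
import asyncio
import json

# Import paths, enum values, and the validator signature are assumptions --
# check the SDK reference for the exact names.
from optimodel import queryModel
from optimodel_server_types import ModelMessage, ModelTypes


def validator(response: str) -> bool:
    """Return False to have the request retried against a fallback model."""
    try:
        json.loads(response)
        return True
    except json.JSONDecodeError:
        return False


async def main():
    response = await queryModel(
        model=ModelTypes.llama_3_8b_instruct,
        messages=[
            ModelMessage(role="user", content="Return a JSON object describing today's weather."),
        ],
        speedPriority="high",  # prioritize speed; do not optimize for cost
        validator=validator,  # retried when the validator returns False
        fallbackModels=[ModelTypes.llama_3_70b_instruct],  # required when a validator is passed
        maxGenLen=512,
        jsonMode=True,
        userId="user-123",
        sessionId="session-abc",
        workflowName="weather-demo",
    )
    print(response)


if __name__ == "__main__":
    asyncio.run(main())
```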