POST /optimodel/api/v1/query
curl --request POST \
  --url https://api.lytix.co/optimodel/api/v1/query \
  --header 'Content-Type: application/json' \
  --header 'lx-api-key: <api-key>' \
  --data '{
  "messages": [
    {
      "role": "<string>",
      "content": {
        "role": "<string>",
        "content": "<string>"
      }
    }
  ],
  "modelToUse": "llama_3_8b_instruct",
  "speedPriority": "low",
  "temperature": 123,
  "maxGenLen": 123,
  "jsonMode": true,
  "provider": "openai"
}'
This response has no body data.

Authorizations

lx-api-key
string · header · required

Body

application/json
Optimodel request
messages
object[] · required

Messages to use for the query
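For illustration, the sketch below sends a single user message. It mirrors the generated example above, where content is shown as a nested role/content object; treat that nested shape (and the sample prompt) as an assumption rather than a guaranteed contract.

curl --request POST \
  --url https://api.lytix.co/optimodel/api/v1/query \
  --header 'Content-Type: application/json' \
  --header 'lx-api-key: <api-key>' \
  --data '{
  "messages": [
    {
      "role": "user",
      "content": {
        "role": "user",
        "content": "What is the capital of France?"
      }
    }
  ],
  "modelToUse": "llama_3_8b_instruct"
}'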

modelToUse
enum<string> · required

Model to use for the query

Available options: llama_3_8b_instruct, llama_3_70b_instruct, llama_3_1_405b, llama_3_1_70b, llama_3_1_8b, claude_3_5_sonnet, claude_3_haiku, mistral_7b_instruct, mixtral_8x7b_instruct, gpt_4, gpt_3_5_turbo, gpt_4o, gpt_4_turbo, gpt_3_5_turbo_0125, gpt_4o_mini

speedPriority
enum<string>

Speed priority of the query (low or high)

Available options: low, high

temperature
number

Sampling temperature for the query

maxGenLen
number

Maximum generation length for the query

jsonMode
boolean

Whether to use JSON mode
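As a rough sketch, jsonMode can be paired with a prompt that describes the desired structure; whether the selected model strictly enforces valid JSON output is provider-dependent and assumed here.

curl --request POST \
  --url https://api.lytix.co/optimodel/api/v1/query \
  --header 'Content-Type: application/json' \
  --header 'lx-api-key: <api-key>' \
  --data '{
  "messages": [
    {
      "role": "user",
      "content": {
        "role": "user",
        "content": "Return a JSON object with keys city and country for the capital of France."
      }
    }
  ],
  "modelToUse": "gpt_4o_mini",
  "jsonMode": true
}'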

provider
enum<string>

Provider to use for the query

Available options: openai, groq, bedrock, anthropic, together
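
Putting the optional fields together, a fuller request might pin a provider and tune generation settings. This is a sketch only: the chosen values (temperature 0.2, maxGenLen 256) and the pairing of claude_3_5_sonnet with the anthropic provider are assumptions for illustration, not documented defaults or limits.

curl --request POST \
  --url https://api.lytix.co/optimodel/api/v1/query \
  --header 'Content-Type: application/json' \
  --header 'lx-api-key: <api-key>' \
  --data '{
  "messages": [
    {
      "role": "user",
      "content": {
        "role": "user",
        "content": "Summarize the benefits of model routing in two sentences."
      }
    }
  ],
  "modelToUse": "claude_3_5_sonnet",
  "provider": "anthropic",
  "speedPriority": "high",
  "temperature": 0.2,
  "maxGenLen": 256
}'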