Lytix helps developers manage, test, and automatically improve their prompts. Learn more below.

Prompt Management

Lytix supports optimizing and versioning prompts to help with prompt engineering and experimentation. Any prompts created or optimized with Lytix are stored and versioned on the “Prompts” page (each version listed under “Prompt Versions”).

Making a New Prompt

Lytix supports saving and versioning prompts. You can create a new prompt by visiting the Prompts page and clicking the “New Prompt” button.

First, use the Lytix playground to experiment with different prompts and foundation models. Keep playing with different configurations until you have a base prompt you’re happy to start with (remember, you can continue to optimize this prompt against emerging edge cases). Test your prompt against any LLM available in the playground.

At any point, you can view existing versions of your prompt and the diff between them by clicking ‘Prompt Versions’.
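
Under the hood, a diff between two prompt versions is just a text diff. The snippet below is not Lytix’s implementation, only a minimal Python sketch (using the standard difflib module) of the kind of comparison the ‘Prompt Versions’ view renders:

```python
# Illustration only: what a prompt-version diff contains. Lytix renders
# this view in the UI; difflib here just shows the underlying idea.
import difflib

prompt_v1 = "You are a helpful assistant. Answer briefly."
prompt_v2 = "You are a helpful assistant. Answer briefly and cite sources."

diff = difflib.unified_diff(
    prompt_v1.splitlines(),
    prompt_v2.splitlines(),
    fromfile="prompt_v1",
    tofile="prompt_v2",
    lineterm="",
)
print("\n".join(diff))
```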

When you’re happy with the prompt, select “Save Prompt” to save it as one of your Prompts. You’ll see it on the Prompts page.

Prompt Testing

You can track and test against edge cases for your prompts. Start by adding a new test case.

Adding A New Test Case

Pressure test your prompts against known failure points and edge cases before you deploy.
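
To make the steps below concrete, here is an illustrative sketch of the information a test case carries. The TestCase class and its field names are assumptions for this sketch, not Lytix’s schema; in Lytix you enter the same fields through the UI.

```python
# Hypothetical data shape for a test case; the field names are
# assumptions for illustration, not Lytix's actual schema.
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str             # the type of event this case represents
    system_prompt: str    # system prompt the model will run with
    user_prompt: str      # input the model will receive
    expected_output: str  # what a passing response should look like

edge_case = TestCase(
    name="ambiguous refund request",
    system_prompt="You are a support agent for an online store.",
    user_prompt="I want my money back but I lost the receipt.",
    expected_output="Ask for the order number before promising a refund.",
)
```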

  1. Add a new test case, representing a type of event you need to evaluate your prompt against. Add one case for each event type you’d like to evaluate.
    1. These can be examples of known failures, such as edge cases you’ve seen in production, or ‘successful’ examples you’d like the model to produce more reliably.

Once you have all the cases you want to test against, you’re ready to Run a Test Suite.

  2. Start a New Test Run and select the model and parameters you’d use in production (e.g., GPT-4o, temperature of 0.9).

    1. Once you start your test suite, Lytix will run your prompt (with the provided model configuration) against the user/system prompt provided by each test case. Wait for the run to complete for each of your test cases (see the first sketch after this list).

    2. For each test case, compare the “expected” output with the actual output, and decide whether the test case “Passed” or “Failed”.

      1. Lytix helps by providing the semantic “distance” and the character-level diff between the expected and produced output (see the second sketch after this list).
      2. Lytix will automatically tag test cases as ‘passes’ or ‘failures’ based on this data, but remember you can always manually override these results!
    3. After identifying which test cases passed and failed, you’re ready to further optimize your prompt to protect against the failed test cases!
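
To make the test run concrete: conceptually, starting a test suite calls your chosen model, with your production parameters, once per test case. Lytix does this for you; the sketch below only illustrates the idea, reusing the hypothetical TestCase from earlier and the OpenAI Python SDK as an example backend.

```python
# Conceptual sketch of a test run: one model call per test case, using
# the production configuration (here GPT-4o at temperature 0.9).
# Lytix performs this step for you; this is an illustration only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def run_case(case: TestCase) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",       # the model you use in production
        temperature=0.9,      # the parameters you use in production
        messages=[
            {"role": "system", "content": case.system_prompt},
            {"role": "user", "content": case.user_prompt},
        ],
    )
    return response.choices[0].message.content

actual_outputs = {case.name: run_case(case) for case in [edge_case]}
```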
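
And for the comparison step: below is a self-contained sketch of how a character-level diff ratio and a semantic distance could feed a provisional pass/fail tag. The embed() helper is a toy bag-of-words stand-in so the example runs on its own; it is not what Lytix uses internally.

```python
# Two signals for comparing expected vs. actual output, plus a
# provisional auto-tag. All thresholds and helpers here are
# illustrative assumptions, not Lytix internals.
import difflib
import math
from collections import Counter

def char_similarity(expected: str, actual: str) -> float:
    # 1.0 means identical text, 0.0 means nothing in common.
    return difflib.SequenceMatcher(None, expected, actual).ratio()

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding" so the sketch is self-contained;
    # a real setup would use a sentence-embedding model instead.
    return Counter(text.lower().split())

def semantic_distance(expected: str, actual: str) -> float:
    # Cosine distance between the toy embeddings (0.0 = same wording).
    a, b = embed(expected), embed(actual)
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return 1.0 - dot / norm if norm else 1.0

def auto_tag(expected: str, actual: str, threshold: float = 0.25) -> str:
    # Provisional tag; in Lytix you can always override it manually.
    return "pass" if semantic_distance(expected, actual) < threshold else "fail"
```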

Prompt Optimization

Now you have a sense of where your original prompt is failing. Next, let Lytix optimize your prompt to eliminate these failure cases.

  1. Lytix will iterate on your original prompt, using the failed test cases and state-of-the-art prompt engineering techniques (a naive sketch of the shape of this loop follows below).
  2. You can confirm that the new prompt passes the test cases that your previous prompt failed.
  3. If you’re happy, save it as a new version of your prompt! You’ll see it under the latest version of the ‘base prompt’ you started with on the Prompts page.
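
For intuition, here is a deliberately naive sketch of the shape such an optimize-and-retest loop can take: show an LLM the failing cases, ask for a rewritten prompt, then re-run the failures. It reuses client and TestCase from the sketches above, and it is not Lytix’s actual optimization technique, which applies more sophisticated prompt engineering methods.

```python
# Naive optimize-and-retest loop, for intuition only. Lytix's real
# optimization is internal; every name here is illustrative.
def optimize(base_prompt: str, failed_cases: list[TestCase]) -> str:
    failures = "\n\n".join(
        f"Input: {c.user_prompt}\nExpected: {c.expected_output}"
        for c in failed_cases
    )
    rewrite = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": (
                "Rewrite the prompt below so it handles the failing "
                "cases, without breaking its original intent.\n\n"
                f"Prompt:\n{base_prompt}\n\nFailing cases:\n{failures}"
            ),
        }],
    )
    return rewrite.choices[0].message.content

# After rewriting, re-run the previously failed cases against the new
# prompt and save it as a new version in Lytix if they now pass.
```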