Lytix also supports capturing errors in FastAPI. To get started, first hook up the Lytix middleware:
from lytix_py.FastAPIMiddleware.LytixMiddleware import LytixMiddleware
from fastapi import FastAPI

...
app = FastAPI()
app.add_middleware(LytixMiddleware)
...
Now you can raise errors in your FastAPI routes or subcalls as follows:
# main.py
from lytix_py import LLogger, LError

@app.get("/")
async def read_root():
    logger = LLogger("read-root")
    logger.info('In the main view here...')
    await backgroundFastAPIProcess()
    return {"Hello": "World"}

# backgroundProcess.py
async def backgroundFastAPIProcess():
    logger = LLogger("background-fast-api-process")
    logger.info("In the background here")
    """
    All the logs associated with this request will get sent to lytix
    """
    raise LError("Some error")
Lytix supports automatically collecting the input and output of LLM models. To do this, use the following:
from lytix_py import MetricCollector

# Note the metricMetadata is optional
MetricCollector.captureModelIO(
    modelName="testModelName",
    modelInput="Whats the capital of France?",
    modelOutput="Paris is the capital of France",
    metricMetadata={"env": "dev"}
)
Lytix will automatically process the input and output of the LLM model and push the metrics to the Lytix platform. You can see your model metrics in the Lytix platform here.
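A common pattern is to route every model call through one helper so the input and output are always captured in the same place. Below is a hedged sketch of that call pattern: `record_io` and `ask_model` are hypothetical stand-ins (a real integration would call `MetricCollector.captureModelIO` and your actual LLM client instead).

```python
captured = []

def record_io(model_name, model_input, model_output, metric_metadata=None):
    # Hypothetical stand-in for MetricCollector.captureModelIO;
    # a real integration would call the Lytix client here instead.
    captured.append({
        "modelName": model_name,
        "modelInput": model_input,
        "modelOutput": model_output,
        "metricMetadata": metric_metadata or {},
    })

def ask_model(prompt):
    # Stand-in for a real LLM call.
    output = "Paris is the capital of France"
    record_io("testModelName", prompt, output, {"env": "dev"})
    return output

answer = ask_model("Whats the capital of France?")
```

Centralizing the capture call like this means new call sites can't forget to report their I/O.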
Lytix also supports capturing trace information (e.g. duration) for the LLM model. To do this, use the following:
from lytix_py import MetricCollector

userInput = "Whats the capital of France?"

"""
This callback is expecting the model output to be returned
"""
async def callback(*args):
    ...
    return "Paris is the capital of france"

response = await MetricCollector.captureModelTrace(
    modelName="testModelName",
    modelInput=userInput,
    callback=callback
)
You’ll now see the model duration trace in the Lytix platform here.
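Conceptually, a trace wrapper like this times the callback and records the duration alongside the model input and output. Here is a minimal pure-Python sketch of that idea — `capture_trace` and `fake_llm_call` are illustrative names, not the actual Lytix implementation:

```python
import asyncio
import time

async def capture_trace(model_name, model_input, callback):
    # Illustrative only: time the callback and return its output
    # together with the measured duration in milliseconds.
    start = time.perf_counter()
    model_output = await callback()
    duration_ms = (time.perf_counter() - start) * 1000
    return model_output, duration_ms

async def fake_llm_call():
    await asyncio.sleep(0.01)  # stand-in for a real model call
    return "Paris is the capital of France"

output, duration_ms = asyncio.run(
    capture_trace("testModelName", "Whats the capital of France?", fake_llm_call)
)
```

Wrapping the call this way is what lets the duration be attributed to a specific model name and input, rather than measured in isolation.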
Similar to traces, we can also capture logs when a model is called. These logs are automatically pushed to the Lytix platform and associated with the trace:
from lytix_py import MetricCollector

userInput = "Whats the capital of France?"

"""
This callback is expecting the model output to be returned
"""
async def callback(logger):
    ...
    logger.info("Inside the callback")
    return "Paris is the capital of france"

response = await MetricCollector.captureModelTrace(
    modelName="testModelName",
    modelInput=userInput,
    callback=callback
)
You always have the option to manually push metrics to the Lytix platform with custom names and metadata. The following example pushes a metric named testMetic with the value 1, along with the env: prod metadata.
from lytix_py import MetricCollector

MetricCollector.increment("testMetic", 1, {"env": "prod"})
You then have access to that metric on the Lytix platform and can filter on any of the metadata passed.
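To illustrate how metadata tags make a counter filterable, here is a hypothetical in-memory version of `increment` (this is not the Lytix client — just a sketch of the filtering idea, keying each series by its metric name plus its metadata items):

```python
from collections import defaultdict

class InMemoryMetrics:
    # Hypothetical stand-in for MetricCollector.increment, showing how
    # metadata tags split one metric name into filterable series.
    def __init__(self):
        self._counts = defaultdict(int)

    def increment(self, name, value=1, metadata=None):
        key = (name, frozenset((metadata or {}).items()))
        self._counts[key] += value

    def total(self, name, **filters):
        # Sum every series of `name` whose metadata contains all filters.
        return sum(
            v for (n, meta), v in self._counts.items()
            if n == name and all(item in meta for item in filters.items())
        )

metrics = InMemoryMetrics()
metrics.increment("testMetic", 1, {"env": "prod"})
metrics.increment("testMetic", 2, {"env": "dev"})

prod_total = metrics.total("testMetic", env="prod")  # only the env=prod series
all_total = metrics.total("testMetic")               # every series of this metric
```

The same principle applies on the platform side: each distinct metadata combination is its own series, so filtering on env narrows the sum to the matching series.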