We really like Langfuse, both the team and the product.
Compared to it:
* We send and ingest OTel traces following the GenAI semantic conventions
* We provide semantic-event-based analytics, so you can actually understand what's happening in your LLM app instead of staring at raw logs all day
* Laminar is built to be high-performance and reliable from day one, easily ingesting and processing spikes of 500k+ tokens per second
* Evals are much more flexible, because you execute everything locally and simply store the results on Laminar
* We go beyond simple prompt management and support Prompt Chain / LLM pipeline management. This is extremely useful when you want to host something like Mixture of Agents as a scalable, trackable microservice.
* Searchable trace / span data (not released yet)
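As a rough illustration of the first point, the OTel GenAI semantic conventions define standard attribute names for LLM spans, so traces can be ingested by any OTel-compatible backend rather than a proprietary schema. A minimal sketch, where the provider name, model, and token counts are illustrative placeholders (the attribute keys themselves come from the GenAI semconv):

```python
# Minimal sketch of the attributes an LLM span carries under the
# OpenTelemetry GenAI semantic conventions. The values below
# (provider, model, token counts) are illustrative placeholders.
def genai_span_attributes(model: str, input_tokens: int, output_tokens: int) -> dict:
    """Build a GenAI-semconv attribute set for a chat-completion span."""
    return {
        "gen_ai.system": "openai",                 # which provider served the call
        "gen_ai.request.model": model,             # model the caller requested
        "gen_ai.usage.input_tokens": input_tokens,
        "gen_ai.usage.output_tokens": output_tokens,
    }

attrs = genai_span_attributes("gpt-4o", 412, 88)
print(attrs["gen_ai.usage.input_tokens"] + attrs["gen_ai.usage.output_tokens"])  # → 500
```

Because the span attributes follow the shared convention instead of a vendor-specific format, the same trace data works across any tooling that understands OTel GenAI semconv.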