Replies: 17 comments 57 replies
-
Thanks for sharing! I think generally supporting the emerging OpenTelemetry conventions (GenAI/LLM) is super interesting as they mature. As far as I understand the notes of the working group(s), there are currently a number of instrumentation implementations with slightly different conventions. I'd love to understand your perspective on this! Adding an OTel collector to the Langfuse platform seems like the most promising way to support OTel-based instrumentation, as long as the formats/semantics across instrumentation libraries become standardized/stable enough.
-
@marcklingen Yes, an OTel collector would be the best option for this. I was wondering whether, even before the formats/semantics across instrumentation libraries mature, you could still build some OTel collectors to integrate with Langfuse, since those formats/semantics don't have much impact on the implementation. Embracing OTel would also help enlarge the ecosystem for Langfuse.
Yes, but the OTel community is trying to unify this via the LLM working group; there is already an initial draft spec for the LLM semantic conventions at https://github.com/open-telemetry/semantic-conventions/tree/main/docs/gen-ai
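To make the draft concrete, here is a minimal sketch of the span attributes the linked GenAI semantic conventions define. The attribute names follow the draft spec; the values are illustrative placeholders, not output from any real model call.

```python
# Span attributes per the draft OTel GenAI semantic conventions.
# Values below are placeholders for illustration only.
gen_ai_attributes = {
    "gen_ai.operation.name": "chat",
    "gen_ai.system": "openai",
    "gen_ai.request.model": "gpt-4o",
    "gen_ai.request.temperature": 0.2,
    "gen_ai.response.model": "gpt-4o-2024-05-13",
    "gen_ai.usage.input_tokens": 120,
    "gen_ai.usage.output_tokens": 45,
}

# An OTel-instrumented client would attach these to an LLM span, roughly:
#   with tracer.start_as_current_span("chat gpt-4o",
#                                     attributes=gen_ai_attributes): ...
print(sorted(gen_ai_attributes))
```

A backend like Langfuse can map these well-known keys onto its own trace model (model, token usage, etc.) regardless of which instrumentation library emitted them.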
-
Thanks @marcklingen, I totally understand your concern about backward compatibility.
Let me share some of my understanding: usually, before a product reaches the GA milestone it is in a Beta or Alpha state, and customers can give it a quick try and share feedback. Because the product is Beta or Alpha, it is not supported for production use, only for PoCs. When it reaches GA, customers may need to accept some breaking changes. Anyway, glad to see you are interested in the OTel AI Semantic Conventions, and I look forward to working with you in this area 👍
-
Just adding a note here that OTel support would also enable Langfuse to support Magentic (cc @jackmpcollins) and any library that is itself instrumented with Pydantic Logfire. For Magentic in particular, see details here: https://magentic.dev/logging-and-tracing/
-
OTel support would also allow integration with Firebase Genkit; see the instrumentation docs and thread: https://github.com/orgs/langfuse/discussions/3351 cc @debkanchan
-
Merging thread #2043 by @baggiponte
-
Merging #199
-
I am wondering whether OTel's semantic standards and compatibility have reached a level of maturity that meets practical needs. Given the potential differences between implementations, can this standardization effort provide enough stability and consistency to support a wide range of use cases?
-
I would like to provide an example of why supporting OTel is necessary. Pydantic AI is a library for building LLM agents, developed by Pydantic (which should be familiar to anyone experienced with Python development). By default it uses Logfire as its observability SDK. In a recent patch, the Logfire SDK was updated to send traces to any OTel server (pydantic/logfire#78 (comment)). Once Langfuse supports OTel, Pydantic AI will be able to send traces to Langfuse without any hacks. Thanks for your work!
-
+1 on adding an OTel collector: Pydantic AI supports OTel instrumentation, requested here: https://github.com/orgs/langfuse/discussions/5036 cc @Steffen911
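For anyone trying this: once an OTLP endpoint exists, any OTel-instrumented library (Pydantic AI, Logfire, etc.) can be pointed at it via the standard OTLP exporter environment variables. A sketch of what that configuration could look like, assuming the `/api/public/otel` endpoint path and Basic-auth scheme from the Langfuse OTel docs; the `pk-lf-...`/`sk-lf-...` keys are placeholders for your own project keys:

```shell
# Point any OTLP exporter at Langfuse (endpoint path and auth scheme
# assumed from the Langfuse OTel docs; keys below are placeholders).
export OTEL_EXPORTER_OTLP_ENDPOINT="https://cloud.langfuse.com/api/public/otel"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Basic $(echo -n 'pk-lf-...:sk-lf-...' | base64)"
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
```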
-
Additional use case: microsoft/autogen cc @Steffen911
-
Additional use case: instrumentation for Java applications
-
Hey everyone, please check it out and let us know whether additional properties need to be mapped, or if you'd like to see other enhancements.
-
Thank you very much for providing this feature! When trying the OpenLIT example with the self-hosted version, I get this error: `RangeError: Invalid time value`
-
Thank you for bringing us this feature! I'd like to know whether Langfuse currently plans to support (or already supports, please forgive my ignorance) the OTel GenAI standard. Additionally, if I want to use Langfuse's features (with Logfire) for model cost, latency, etc., the only way for now might still be manual adaptation, as in:

```python
from pydantic_ai import Agent
import logfire

logfire.configure(
    # Setting a service name is good practice in general, but especially
    # important for Jaeger, otherwise spans will be labeled as 'unknown_service'.
    service_name="trail",
    # Sending to Logfire is on by default regardless of the OTEL env vars.
    # Keep this line if you don't want to send to both Jaeger and Logfire.
    send_to_logfire=False,
)

# Define a very simple agent, including the model to use; you can also set
# the model when running the agent.
agent = Agent(
    "google-gla:gemini-1.5-flash",
    # Register a static system prompt using a keyword argument to the agent.
    # For more complex, dynamically generated system prompts, see the docs.
    system_prompt="Be concise, reply with one sentence.",
)

# Run the agent synchronously. The exchange should be very short: PydanticAI
# sends the system prompt and the user query to the LLM, and the model
# returns a text response.
result = agent.run_sync('Where does "hello world" come from?')
print(result.data)
"""
The first known use of "hello, world" was in a 1974 textbook about the C programming language.
"""
```
-
Thank you! This is a really essential feature! I am using the self-hosted version v3.24.0.
-
https://opentelemetry.io/blog/2024/otel-generative-ai/
-
Describe the feature or potential improvement
openllmetry provides instrumentation code for vector DBs, LLMs, LLM orchestration platforms, etc., and the Traceloop SDK acts as a kind of OTel collector that can export metrics, logs (not yet implemented), and traces to a third-party observability platform. Langfuse is a good candidate to act as such a third-party LLM observability platform.
Note
Documentation on OTel support in Langfuse is available here: https://langfuse.com/integrations/native/opentelemetry
Additional information
No response
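As a small companion to the docs linked above, here is a sketch of how the Basic-auth header value for an OTLP exporter could be built from a Langfuse public/secret key pair. This assumes the `Basic base64(public_key:secret_key)` scheme described in the Langfuse OTel docs; the keys used below are placeholders.

```python
import base64

def langfuse_otlp_auth_header(public_key: str, secret_key: str) -> str:
    """Build a Basic-auth header value of the form expected by an OTLP
    endpoint using key-pair auth (assumed per the Langfuse OTel docs)."""
    token = base64.b64encode(f"{public_key}:{secret_key}".encode()).decode()
    return f"Basic {token}"

# Placeholder keys, for illustration only.
print(langfuse_otlp_auth_header("pk-lf-1234", "sk-lf-5678"))
```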