10 changes: 10 additions & 0 deletions packages/opentelemetry-instrumentation-openai/README.md
@@ -4,8 +4,18 @@
<img src="https://badge.fury.io/py/opentelemetry-instrumentation-openai.svg">
</a>


This library allows tracing OpenAI prompts and completions sent with the official [OpenAI library](https://github.com/openai/openai-python).

## Meter Attributes

As of vNEXT, all meter data points include both the request and response model names:

* `gen_ai.response.model` — The model name returned in the response (e.g., "gpt-3.5-turbo-0125").
* `gen_ai.request.model` — The model name specified in the request payload (e.g., "gpt-3.5-turbo").

This provides richer context for metrics and observability.

## Installation

```bash
pip install opentelemetry-instrumentation-openai
```
@@ -888,6 +888,7 @@ def _build_from_streaming_response(

shared_attributes = {
SpanAttributes.LLM_RESPONSE_MODEL: complete_response.get("model") or None,
"gen_ai.request.model": request_kwargs.get("model") if request_kwargs else None,
Contributor:
New attribute 'gen_ai.request.model' is added for streaming responses. However, non-streaming completions (handled via _handle_response/_set_chat_metrics) do not include this attribute. Please update the synchronous path so that all meter data points include 'gen_ai.request.model' to ensure consistency with tests.

"server.address": _get_openai_base_url(instance),
"stream": True,
}
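The reviewer's point about consistency could be addressed by building these attributes in one place and reusing them on both the streaming and non-streaming paths. The sketch below is a hypothetical helper, not the actual traceloop internals; the function and parameter names are assumptions:

```python
def _shared_metric_attributes(response, request_kwargs, base_url, stream):
    """Build the attribute dict attached to every meter data point.

    Mirrors the streaming-path shared_attributes so that the synchronous
    path reports the same keys, including gen_ai.request.model.
    """
    return {
        "gen_ai.response.model": response.get("model") if response else None,
        "gen_ai.request.model": (
            request_kwargs.get("model") if request_kwargs else None
        ),
        "server.address": base_url,
        "stream": stream,
    }


# Example: the non-streaming path would pass stream=False but otherwise
# produce the same attribute set as the streaming path.
attrs = _shared_metric_attributes(
    response={"model": "gpt-3.5-turbo-0125"},
    request_kwargs={"model": "gpt-3.5-turbo"},
    base_url="https://api.openai.com/v1",
    stream=False,
)
```

Centralizing the dict also means a future attribute only has to be added once instead of in every call site.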
@@ -959,6 +960,7 @@ async def _abuild_from_streaming_response(

shared_attributes = {
SpanAttributes.LLM_RESPONSE_MODEL: complete_response.get("model") or None,
"gen_ai.request.model": request_kwargs.get("model") if request_kwargs else None,
"server.address": _get_openai_base_url(instance),
"stream": True,
}
@@ -47,12 +47,15 @@ def test_chat_completion_metrics(instrument_legacy, reader, openai_client):
]
assert len(data_point.attributes["server.address"]) > 0
assert data_point.sum > 0
# Check request model attribute
assert data_point.attributes["gen_ai.request.model"] == "gpt-3.5-turbo"

if metric.name == Meters.LLM_GENERATION_CHOICES:
found_choice_metric = True
for data_point in metric.data.data_points:
assert data_point.value >= 1
assert len(data_point.attributes["server.address"]) > 0
assert data_point.attributes["gen_ai.request.model"] == "gpt-3.5-turbo"

if metric.name == Meters.LLM_OPERATION_DURATION:
found_duration_metric = True
@@ -66,6 +69,10 @@ def test_chat_completion_metrics(instrument_legacy, reader, openai_client):
len(data_point.attributes["server.address"]) > 0
for data_point in metric.data.data_points
)
assert all(
data_point.attributes["gen_ai.request.model"] == "gpt-3.5-turbo"
for data_point in metric.data.data_points
)

assert found_token_metric is True
assert found_choice_metric is True