
feat(semantic-conventions-ai): move custom histogram to semantic-conventions-ai #3181

Open

wants to merge 9 commits into main

Conversation

minimAluminiumalism
Contributor

@minimAluminiumalism minimAluminiumalism commented Jul 28, 2025

  • I have added tests that cover my changes.
  • If adding a new instrumentation or changing an existing one, I've added screenshots from some observability platform showing the change.
  • PR name follows conventional commits format: feat(instrumentation): ... or fix(instrumentation): ....
  • (If applicable) I have updated the documentation accordingly.

Feat #3146

Note: Since I've changed opentelemetry-semantic-conventions-ai, which is a dependency of the other instrumentation packages, more unit tests of metrics for opentelemetry-instrumentation-xxx will be added in a follow-up PR after a new version of opentelemetry-semantic-conventions-ai is released (maybe v0.4.13), to avoid local path dependencies in pyproject.toml. https://github.com/traceloop/openllmetry/blob/cfe309d0658b9a9fd9ffd6045a0e24a3b2be3f7c/packages/opentelemetry-instrumentation-openai/pyproject.toml#L30C1-L30C38

For now I've only changed the opentelemetry-semantic-conventions-ai dependency under traceloop-sdk to a local dev path so the tests pass. We'll need to revert it after the new semconv is released.

https://github.com/traceloop/openllmetry/pull/3181/files#diff-f576c311dca29ec53bc94ff0163289ed5bea0734e5614784440df1972cf8ee24R38

I've also added an example under sample_app to verify locally that this change is effective.


Important

Centralizes custom histogram configurations in opentelemetry-semantic-conventions-ai and applies them across various instrumentations, with added tests for verification.

  • Behavior:
    • Moves custom histogram configurations to opentelemetry-semantic-conventions-ai.
    • Applies apply_genai_bucket_configuration() in _instrument() methods across multiple instrumentors (e.g., AnthropicInstrumentor, BedrockInstrumentor, CrewAIInstrumentor).
    • Adds tests for bucket configurations in test_bucket_configuration.py and test_bucket_integration.py.
  • Metric Configuration:
    • Defines MetricBuckets and MetricViewsBuilder in semconv_ai/__init__.py.
    • Adds predefined bucket boundaries for metrics like LLM_TOKEN_USAGE and PINECONE_DB_QUERY_DURATION.
    • Provides methods to get metric views with these buckets (a usage sketch follows below).
  • Testing:
    • Adds integration tests in test_integration.py and test_metric_buckets.py to verify bucket configurations and view creation.
    • Tests cover bucket immutability and error handling in view creation.
  • Misc:
    • Updates pyproject.toml to include new dependencies for testing.
    • Adds example usage in openai_standalone_instrument.py to demonstrate metric setup.

This description was created by Ellipsis for 7be622d. You can customize this summary. It will automatically update as commits are pushed.
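
As a quick illustration of the pieces described above, here is a minimal usage sketch; the exporter and reader choices are illustrative only and not part of this PR:

from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import (
    ConsoleMetricExporter,
    PeriodicExportingMetricReader,
)
from opentelemetry.semconv_ai import (
    MetricViewsBuilder,
    apply_genai_bucket_configuration,
)

# Option 1: build your own provider with the predefined GenAI views.
reader = PeriodicExportingMetricReader(ConsoleMetricExporter())
provider = MeterProvider(
    metric_readers=[reader],
    views=MetricViewsBuilder.get_all_genai_views(),
)
metrics.set_meter_provider(provider)

# Option 2: let the helper configure the global provider (this is what the
# instrumentors call when no meter_provider is passed in).
apply_genai_bucket_configuration()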

Summary by CodeRabbit

  • New Features

    • Introduced GenAI-specific histogram bucket boundaries and metric views for improved OpenTelemetry metrics.
    • Added a function to apply GenAI metric bucket configuration globally.
    • Enhanced instrumentation for various AI and database libraries to automatically apply GenAI metric bucket settings when appropriate.
  • Bug Fixes

    • Improved error resilience in metric configuration by gracefully handling missing dependencies.
  • Tests

    • Added comprehensive unit and integration tests for GenAI metric buckets and views.
  • Chores

    • Updated dependencies to use local development versions for easier testing and development.
    • Added sample code demonstrating GenAI metric instrumentation in an OpenAI usage scenario.


coderabbitai bot commented Jul 28, 2025

Walkthrough

This change extends the opentelemetry-semantic-conventions-ai package so that it centralizes GenAI-specific histogram bucket boundaries and metric views for OpenTelemetry. Instrumentation packages are updated to conditionally apply these metric configurations when no meter provider is supplied. Related dependencies, tests, and sample usage are added or updated to support and verify the new approach.

Changes

Cohort / File(s) Change Summary
GenAI Metric Buckets & Views Implementation
packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py
Adds MetricBuckets, MetricViewsBuilder, and apply_genai_bucket_configuration() for standardized GenAI histogram buckets and metric views; includes logic to globally apply these configurations.
GenAI Metric Buckets & Views Tests
packages/opentelemetry-semantic-conventions-ai/tests/test_metric_buckets.py, packages/opentelemetry-semantic-conventions-ai/tests/test_integration.py
Adds comprehensive unit and integration tests for metric buckets, views, and their application to MeterProvider, including error handling and immutability checks.
GenAI Metric Buckets & Views Test Dependencies
packages/opentelemetry-semantic-conventions-ai/pyproject.toml
Adds OpenTelemetry API and SDK as test dependencies.
Instrumentation Packages: Conditional GenAI Bucket Application
packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/__init__.py, packages/opentelemetry-instrumentation-bedrock/opentelemetry/instrumentation/bedrock/__init__.py, packages/opentelemetry-instrumentation-crewai/opentelemetry/instrumentation/crewai/instrumentation.py, packages/opentelemetry-instrumentation-groq/opentelemetry/instrumentation/groq/__init__.py, packages/opentelemetry-instrumentation-langchain/opentelemetry/instrumentation/langchain/__init__.py, packages/opentelemetry-instrumentation-milvus/opentelemetry/instrumentation/milvus/__init__.py, packages/opentelemetry-instrumentation-ollama/opentelemetry/instrumentation/ollama/__init__.py, packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/__init__.py, packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v0/__init__.py, packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/__init__.py, packages/opentelemetry-instrumentation-pinecone/opentelemetry/instrumentation/pinecone/__init__.py, packages/opentelemetry-instrumentation-watsonx/opentelemetry/instrumentation/watsonx/__init__.py
Updates _instrument methods to conditionally import and invoke apply_genai_bucket_configuration if no meter provider is supplied, ensuring GenAI-specific bucket configuration is applied only as needed.
SDK Metric Views Refactor
packages/traceloop-sdk/traceloop/sdk/metrics/metrics.py
Refactors metric_views() to delegate GenAI metric view construction to MetricViewsBuilder.get_all_genai_views(), removing manual bucket specification.
SDK Dependency Update
packages/traceloop-sdk/pyproject.toml
Switches opentelemetry-semantic-conventions-ai dependency to a local editable path.
Sample App: GenAI Buckets Example
packages/sample-app/sample_app/openai_standalone_instrument.py
Adds a sample script demonstrating OpenAI instrumentation with GenAI bucket configuration and telemetry export.
Sample App Dependency Update
packages/sample-app/pyproject.toml
Adds local development dependency on opentelemetry-semantic-conventions-ai.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant InstrumentationPkg as Instrumentation Package (_instrument)
    participant SemconvAI as opentelemetry-semconv_ai
    participant MeterProvider

    User->>InstrumentationPkg: Call _instrument(**kwargs)
    alt meter_provider not supplied
        InstrumentationPkg->>SemconvAI: apply_genai_bucket_configuration()
        SemconvAI->>MeterProvider: Set global MeterProvider with GenAI views
    else meter_provider supplied
        InstrumentationPkg->>MeterProvider: Use provided MeterProvider
    end
    InstrumentationPkg->>MeterProvider: get_meter()
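
For reference, a minimal sketch of the conditional pattern the diagram describes; the actual instrumentor code in this PR may differ in naming and error handling:

from opentelemetry.metrics import get_meter


class ExampleInstrumentor:  # hypothetical instrumentor, for illustration only
    def _instrument(self, **kwargs):
        meter_provider = kwargs.get("meter_provider")
        if meter_provider is None:
            # No provider supplied: fall back to the shared GenAI bucket views.
            try:
                from opentelemetry.semconv_ai import apply_genai_bucket_configuration

                apply_genai_bucket_configuration()
            except ImportError:
                # Older semconv-ai without the helper; keep default buckets.
                pass
        meter = get_meter(__name__, meter_provider=meter_provider)
        # ...create histograms and counters from `meter` as before...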

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes


Suggested reviewers

  • doronkopit5
  • nirga

Poem

In burrows deep where metrics dwell,
A rabbit hops with tales to tell—
Of buckets, views, and GenAI flair,
Now shared across the code with care.
Instrumentors check, then buckets apply,
Telemetry flows as numbers fly!
🐇✨


Contributor

@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed everything up to 7be622d in 1 minute and 3 seconds.
  • Reviewed 1247 lines of code in 21 files
  • Skipped 4 files when reviewing.
  • Skipped posting 6 draft comments. View those below.
1. packages/traceloop-sdk/traceloop/sdk/metrics/metrics.py:30
  • Draft comment:
    Consider adding thread synchronization (e.g. a lock) in __new__ to ensure thread safety for the singleton initialization (see the sketch after this list).
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
2. packages/traceloop-sdk/traceloop/sdk/metrics/metrics.py:32
  • Draft comment:
    If MetricsWrapper.endpoint is not set, the instance is returned without initializing __metrics_exporter and __metrics_provider. Consider clarifying this behavior or raising an exception.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
3. packages/traceloop-sdk/traceloop/sdk/metrics/metrics.py:87
  • Draft comment:
    MetricViewsBuilder.get_all_genai_views() is used to set custom views for the MeterProvider. Ensure these views meet your requirements and that the global meter provider is updated as intended.
  • Reason this comment was not posted:
    Confidence changes required: 30% <= threshold 50% None
4. packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/__init__.py:601
  • Draft comment:
    Minor typographical note: consider changing "inited" to "initialized" in the comment for clarity.
  • Reason this comment was not posted:
    Comment was on unchanged code.
5. packages/opentelemetry-instrumentation-bedrock/opentelemetry/instrumentation/bedrock/__init__.py:589
  • Draft comment:
    Typo: The comment uses "inited" which is not standard. Consider changing it to "initialized" for clarity.
  • Reason this comment was not posted:
    Comment was on unchanged code.
6. packages/opentelemetry-instrumentation-groq/opentelemetry/instrumentation/groq/__init__.py:417
  • Draft comment:
    Typographical suggestion: In the comment "meter and counters are inited here", it may be clearer to use "initialized" instead of "inited".
  • Reason this comment was not posted:
    Comment was on unchanged code.
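
For draft comment 1 above, a minimal sketch of the suggested lock-guarded singleton initialization; MetricsWrapper internals are assumed here, not copied from the PR:

import threading


class MetricsWrapper:
    _instance = None
    _lock = threading.Lock()

    def __new__(cls, *args, **kwargs):
        if cls._instance is None:
            with cls._lock:
                # Double-checked locking: re-test inside the lock so only one
                # thread ever creates the singleton instance.
                if cls._instance is None:
                    cls._instance = super().__new__(cls)
        return cls._instance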

Workflow ID: wflow_w2sPuDcSnsq1inom

You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (2)
packages/opentelemetry-semantic-conventions-ai/tests/test_integration.py (2)

85-117: Improve the final assertion.

Good integration test for Pinecone metrics, but the final assert True is meaningless. Consider either removing it or adding a more meaningful assertion that verifies the histogram operations completed successfully.

-        assert True
+        # Test passes if no exceptions are raised during metric recording

Or remove the assertion entirely if the test's purpose is just to verify no exceptions are thrown.


230-239: Fix Yoda conditions for better style consistency.

The test logic is correct, but the equality comparisons use "Yoda conditions" (constant == variable) which goes against Python style conventions.

-        assert MetricBuckets.LLM_TOKEN_USAGE == original_llm_buckets
-        assert MetricBuckets.PINECONE_DB_QUERY_DURATION == original_duration_buckets
-        assert MetricBuckets.PINECONE_DB_QUERY_SCORES == original_score_buckets
+        assert original_llm_buckets == MetricBuckets.LLM_TOKEN_USAGE
+        assert original_duration_buckets == MetricBuckets.PINECONE_DB_QUERY_DURATION
+        assert original_score_buckets == MetricBuckets.PINECONE_DB_QUERY_SCORES
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7be622d and 1d1561f.

⛔ Files ignored due to path filters (3)
  • packages/opentelemetry-instrumentation-pinecone/poetry.lock is excluded by !**/*.lock
  • packages/opentelemetry-semantic-conventions-ai/poetry.lock is excluded by !**/*.lock
  • packages/sample-app/poetry.lock is excluded by !**/*.lock
📒 Files selected for processing (21)
  • packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/__init__.py (1 hunks)
  • packages/opentelemetry-instrumentation-bedrock/opentelemetry/instrumentation/bedrock/__init__.py (1 hunks)
  • packages/opentelemetry-instrumentation-crewai/opentelemetry/instrumentation/crewai/instrumentation.py (1 hunks)
  • packages/opentelemetry-instrumentation-groq/opentelemetry/instrumentation/groq/__init__.py (1 hunks)
  • packages/opentelemetry-instrumentation-langchain/opentelemetry/instrumentation/langchain/__init__.py (1 hunks)
  • packages/opentelemetry-instrumentation-milvus/opentelemetry/instrumentation/milvus/__init__.py (1 hunks)
  • packages/opentelemetry-instrumentation-ollama/opentelemetry/instrumentation/ollama/__init__.py (1 hunks)
  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/__init__.py (1 hunks)
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v0/__init__.py (1 hunks)
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/__init__.py (1 hunks)
  • packages/opentelemetry-instrumentation-openai/tests/metrics/test_bucket_configuration.py (1 hunks)
  • packages/opentelemetry-instrumentation-openai/tests/metrics/test_bucket_integration.py (1 hunks)
  • packages/opentelemetry-instrumentation-pinecone/opentelemetry/instrumentation/pinecone/__init__.py (1 hunks)
  • packages/opentelemetry-instrumentation-watsonx/opentelemetry/instrumentation/watsonx/__init__.py (1 hunks)
  • packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py (1 hunks)
  • packages/opentelemetry-semantic-conventions-ai/pyproject.toml (1 hunks)
  • packages/opentelemetry-semantic-conventions-ai/tests/test_integration.py (1 hunks)
  • packages/opentelemetry-semantic-conventions-ai/tests/test_metric_buckets.py (1 hunks)
  • packages/sample-app/pyproject.toml (1 hunks)
  • packages/sample-app/sample_app/openai_standalone_instrument.py (1 hunks)
  • packages/traceloop-sdk/traceloop/sdk/metrics/metrics.py (2 hunks)
✅ Files skipped from review due to trivial changes (3)
  • packages/opentelemetry-semantic-conventions-ai/pyproject.toml
  • packages/sample-app/sample_app/openai_standalone_instrument.py
  • packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/__init__.py
🚧 Files skipped from review as they are similar to previous changes (17)
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v0/__init__.py
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/__init__.py
  • packages/opentelemetry-instrumentation-ollama/opentelemetry/instrumentation/ollama/__init__.py
  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/__init__.py
  • packages/sample-app/pyproject.toml
  • packages/opentelemetry-instrumentation-milvus/opentelemetry/instrumentation/milvus/__init__.py
  • packages/opentelemetry-instrumentation-watsonx/opentelemetry/instrumentation/watsonx/__init__.py
  • packages/opentelemetry-instrumentation-bedrock/opentelemetry/instrumentation/bedrock/__init__.py
  • packages/opentelemetry-instrumentation-langchain/opentelemetry/instrumentation/langchain/__init__.py
  • packages/opentelemetry-instrumentation-crewai/opentelemetry/instrumentation/crewai/instrumentation.py
  • packages/opentelemetry-instrumentation-groq/opentelemetry/instrumentation/groq/__init__.py
  • packages/opentelemetry-instrumentation-pinecone/opentelemetry/instrumentation/pinecone/__init__.py
  • packages/traceloop-sdk/traceloop/sdk/metrics/metrics.py
  • packages/opentelemetry-instrumentation-openai/tests/metrics/test_bucket_configuration.py
  • packages/opentelemetry-instrumentation-openai/tests/metrics/test_bucket_integration.py
  • packages/opentelemetry-semantic-conventions-ai/tests/test_metric_buckets.py
  • packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py
🧰 Additional context used
🧬 Code Graph Analysis (1)
packages/opentelemetry-semantic-conventions-ai/tests/test_integration.py (2)
packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py (7)
  • MetricBuckets (277-332)
  • MetricViewsBuilder (335-410)
  • Meters (6-33)
  • get_all_genai_views (402-410)
  • get_llm_token_usage_view (339-347)
  • get_pinecone_query_duration_view (350-358)
  • get_pinecone_query_scores_view (361-369)
packages/opentelemetry-instrumentation-milvus/tests/conftest.py (1)
  • meter_provider (45-50)
🪛 Ruff (0.12.2)
packages/opentelemetry-semantic-conventions-ai/tests/test_integration.py

165-165: Loop control variable i not used within loop body

(B007)


236-236: Yoda condition detected

(SIM300)


237-237: Yoda condition detected

(SIM300)


238-238: Yoda condition detected

(SIM300)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Test Packages (3.10)
  • GitHub Check: Test Packages (3.11)
  • GitHub Check: Test Packages (3.9)
  • GitHub Check: Test Packages (3.12)
🔇 Additional comments (7)
packages/opentelemetry-semantic-conventions-ai/tests/test_integration.py (7)

1-8: LGTM!

Clean imports and descriptive module docstring. The imports are well-organized and follow Python conventions.


13-52: LGTM!

Good integration test that verifies custom metric views work with MeterProvider. The early return pattern for handling missing OpenTelemetry SDK dependencies is appropriate, and the test thoroughly validates view creation, histogram creation, and metric recording.


53-71: Good test coverage with a minor consideration.

The test properly validates individual view creation and instrument name assignment. Note that accessing private attributes like _instrument_name may be brittle if the OpenTelemetry SDK changes its internal API, but this appears necessary for thorough testing of the integration.


72-84: LGTM!

Effective test that verifies bucket configurations are properly applied to views. The use of private attributes is necessary for this level of integration testing.


118-144: LGTM!

Excellent compatibility test that verifies views work across different OpenTelemetry versions. The distinction between default aggregation for operation duration and explicit bucket aggregation for other metrics is correctly handled.


192-216: LGTM!

Excellent test coverage for similarity score buckets. The method thoroughly validates bucket boundaries, ordering, and coverage for typical score values including edge cases.


221-229: LGTM!

Good defensive test that ensures view creation works gracefully even when OpenTelemetry SDK dependencies might be missing. This helps maintain robustness in different deployment scenarios.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
packages/opentelemetry-semantic-conventions-ai/tests/test_integration.py (1)

85-117: Replace weak assertion with meaningful verification or remove it.

The test properly creates and records values to Pinecone histograms, but ends with assert True which doesn't verify anything meaningful.

Consider one of these alternatives:

-        assert True
+        # Test passes if no exceptions are raised during recording

Or add meaningful assertions:

-        assert True
+        # Verify histograms were created successfully
+        assert duration_histogram is not None
+        assert score_histogram is not None
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1d1561f and 4d1e090.

⛔ Files ignored due to path filters (4)
  • packages/opentelemetry-instrumentation-pinecone/poetry.lock is excluded by !**/*.lock
  • packages/opentelemetry-semantic-conventions-ai/poetry.lock is excluded by !**/*.lock
  • packages/sample-app/poetry.lock is excluded by !**/*.lock
  • packages/traceloop-sdk/poetry.lock is excluded by !**/*.lock
📒 Files selected for processing (20)
  • packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/__init__.py (1 hunks)
  • packages/opentelemetry-instrumentation-bedrock/opentelemetry/instrumentation/bedrock/__init__.py (1 hunks)
  • packages/opentelemetry-instrumentation-crewai/opentelemetry/instrumentation/crewai/instrumentation.py (1 hunks)
  • packages/opentelemetry-instrumentation-groq/opentelemetry/instrumentation/groq/__init__.py (1 hunks)
  • packages/opentelemetry-instrumentation-langchain/opentelemetry/instrumentation/langchain/__init__.py (1 hunks)
  • packages/opentelemetry-instrumentation-milvus/opentelemetry/instrumentation/milvus/__init__.py (1 hunks)
  • packages/opentelemetry-instrumentation-ollama/opentelemetry/instrumentation/ollama/__init__.py (1 hunks)
  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/__init__.py (1 hunks)
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v0/__init__.py (1 hunks)
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/__init__.py (1 hunks)
  • packages/opentelemetry-instrumentation-pinecone/opentelemetry/instrumentation/pinecone/__init__.py (1 hunks)
  • packages/opentelemetry-instrumentation-watsonx/opentelemetry/instrumentation/watsonx/__init__.py (1 hunks)
  • packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py (1 hunks)
  • packages/opentelemetry-semantic-conventions-ai/pyproject.toml (1 hunks)
  • packages/opentelemetry-semantic-conventions-ai/tests/test_integration.py (1 hunks)
  • packages/opentelemetry-semantic-conventions-ai/tests/test_metric_buckets.py (1 hunks)
  • packages/sample-app/pyproject.toml (1 hunks)
  • packages/sample-app/sample_app/openai_standalone_instrument.py (1 hunks)
  • packages/traceloop-sdk/pyproject.toml (1 hunks)
  • packages/traceloop-sdk/traceloop/sdk/metrics/metrics.py (2 hunks)
✅ Files skipped from review due to trivial changes (2)
  • packages/opentelemetry-semantic-conventions-ai/pyproject.toml
  • packages/traceloop-sdk/pyproject.toml
🚧 Files skipped from review as they are similar to previous changes (17)
  • packages/opentelemetry-instrumentation-langchain/opentelemetry/instrumentation/langchain/__init__.py
  • packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/__init__.py
  • packages/opentelemetry-instrumentation-milvus/opentelemetry/instrumentation/milvus/__init__.py
  • packages/opentelemetry-instrumentation-pinecone/opentelemetry/instrumentation/pinecone/__init__.py
  • packages/opentelemetry-instrumentation-crewai/opentelemetry/instrumentation/crewai/instrumentation.py
  • packages/opentelemetry-instrumentation-bedrock/opentelemetry/instrumentation/bedrock/__init__.py
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/__init__.py
  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/__init__.py
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v0/__init__.py
  • packages/traceloop-sdk/traceloop/sdk/metrics/metrics.py
  • packages/sample-app/pyproject.toml
  • packages/opentelemetry-instrumentation-ollama/opentelemetry/instrumentation/ollama/__init__.py
  • packages/sample-app/sample_app/openai_standalone_instrument.py
  • packages/opentelemetry-instrumentation-groq/opentelemetry/instrumentation/groq/__init__.py
  • packages/opentelemetry-instrumentation-watsonx/opentelemetry/instrumentation/watsonx/__init__.py
  • packages/opentelemetry-semantic-conventions-ai/tests/test_metric_buckets.py
  • packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py
🧰 Additional context used
🧬 Code Graph Analysis (1)
packages/opentelemetry-semantic-conventions-ai/tests/test_integration.py (1)
packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py (7)
  • MetricBuckets (277-332)
  • MetricViewsBuilder (335-410)
  • Meters (6-33)
  • get_all_genai_views (402-410)
  • get_llm_token_usage_view (339-347)
  • get_pinecone_query_duration_view (350-358)
  • get_pinecone_query_scores_view (361-369)
🪛 Ruff (0.12.2)
packages/opentelemetry-semantic-conventions-ai/tests/test_integration.py

236-236: Yoda condition detected

(SIM300)


237-237: Yoda condition detected

(SIM300)


238-238: Yoda condition detected

(SIM300)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: Test Packages (3.11)
  • GitHub Check: Test Packages (3.9)
  • GitHub Check: Test Packages (3.12)
  • GitHub Check: Test Packages (3.10)
  • GitHub Check: Lint
  • GitHub Check: Build Packages (3.11)
🔇 Additional comments (11)
packages/opentelemetry-semantic-conventions-ai/tests/test_integration.py (11)

1-8: LGTM! Clean and appropriate imports.

The imports are well-organized and import exactly what's needed for the integration tests.


10-11: LGTM! Clear class purpose and documentation.

The class name and docstring clearly indicate this is for integration testing between MetricViewsBuilder and OpenTelemetry SDK.


13-52: LGTM! Comprehensive integration test with proper error handling.

The test properly:

  • Handles optional OpenTelemetry SDK dependencies with try/except
  • Verifies view count and types
  • Creates and tests histograms with actual data recording
  • Uses meaningful assertions

53-71: LGTM! Individual view creation test is well-structured.

The test properly verifies:

  • Individual view creation methods
  • View type assertions
  • Instrument name verification using private attributes (acceptable in tests)

72-84: LGTM! Proper bucket configuration verification.

The test correctly verifies:

  • Aggregation type (ExplicitBucketHistogramAggregation)
  • Bucket boundaries match the expected values
  • Appropriate use of private attributes for testing

118-144: LGTM! Comprehensive compatibility testing.

The test properly:

  • Verifies view attributes across different versions
  • Handles different aggregation types appropriately
  • Tests both default and explicit bucket aggregations
  • Includes actual histogram creation and recording

149-170: LGTM! Past review comments have been addressed.

The method name now correctly reflects that it tests LLM_TOKEN_USAGE bucket distribution, and the unused loop variable issue has been resolved. The test logic is sound and comprehensive.


171-191: LGTM! Method name correctly reflects the test purpose.

The test properly verifies Pinecone query duration bucket distribution with appropriate assertions for bucket sorting, boundaries, and coverage.


192-216: LGTM! Comprehensive similarity score bucket testing.

The test thoroughly verifies:

  • Bucket count and key boundary values
  • Proper sorting order
  • Complete range coverage (-1 to 1)
  • Representative score value coverage

221-229: LGTM! Good error handling verification.

The test ensures that MetricViewsBuilder can create views even when OpenTelemetry SDK dependencies are missing, which demonstrates good robustness in the implementation.


230-239: LGTM! Proper immutability testing with false positive static analysis warnings.

The test correctly verifies that bucket definitions remain unchanged during test execution. The static analysis warnings about "Yoda conditions" are false positives - these are normal equality comparisons between class attributes and copied values.

@minimAluminiumalism minimAluminiumalism changed the title feat(semantic-conentions-ai): move custom histogram to semantic-conentions-ai feat(semantic-conventions-ai): move custom histogram to semantic-conventions-ai Jul 28, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

🧹 Nitpick comments (2)
packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py (2)

366-375: Import placement could be optimized.

The import statements are placed inside the method, which is repeated across multiple methods. Consider moving these imports to the module level or using a shared import pattern for better performance.

Move imports to module level:

+from opentelemetry.sdk.metrics.view import View, ExplicitBucketHistogramAggregation
+
 class MetricViewsBuilder:
     """Builder for OpenTelemetry metric views with predefined buckets."""
 
     @staticmethod
     def get_llm_token_usage_view():
-        from opentelemetry.sdk.metrics.view import View, ExplicitBucketHistogramAggregation
-
         return View(

441-491: Consider adding configuration options and validation.

The function hardcodes the GenAI views without allowing customization. Consider adding parameters to enable/disable specific metrics or allow custom bucket configurations for different use cases.

Add configuration options:

-def apply_genai_bucket_configuration():
+def apply_genai_bucket_configuration(*, 
+                                   enabled_metrics=None, 
+                                   custom_buckets=None,
+                                   force_reapply=False):
     """Apply GenAI bucket configuration to the global MeterProvider.
     
     Args:
+        enabled_metrics: List of metric names to configure (default: all)
+        custom_buckets: Dict of metric_name -> bucket_boundaries overrides
+        force_reapply: Whether to reapply even if already configured
     """
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4d1e090 and adaa4e2.

⛔ Files ignored due to path filters (2)
  • packages/sample-app/poetry.lock is excluded by !**/*.lock
  • packages/traceloop-sdk/poetry.lock is excluded by !**/*.lock
📒 Files selected for processing (7)
  • packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/__init__.py (1 hunks)
  • packages/opentelemetry-instrumentation-groq/opentelemetry/instrumentation/groq/__init__.py (1 hunks)
  • packages/opentelemetry-instrumentation-langchain/opentelemetry/instrumentation/langchain/__init__.py (1 hunks)
  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/__init__.py (1 hunks)
  • packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py (1 hunks)
  • packages/opentelemetry-semantic-conventions-ai/pyproject.toml (1 hunks)
  • packages/traceloop-sdk/pyproject.toml (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • packages/opentelemetry-semantic-conventions-ai/pyproject.toml
🚧 Files skipped from review as they are similar to previous changes (5)
  • packages/traceloop-sdk/pyproject.toml
  • packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/__init__.py
  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/__init__.py
  • packages/opentelemetry-instrumentation-groq/opentelemetry/instrumentation/groq/__init__.py
  • packages/opentelemetry-instrumentation-langchain/opentelemetry/instrumentation/langchain/__init__.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: Test Packages (3.12)
  • GitHub Check: Test Packages (3.11)
  • GitHub Check: Build Packages (3.11)
  • GitHub Check: Test Packages (3.10)
  • GitHub Check: Lint
🔇 Additional comments (3)
packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py (3)

305-323: Well-designed exponential bucket boundaries for token usage.

The LLM_TOKEN_USAGE buckets use a power-of-4 progression (1, 4, 16, 64...) which provides good coverage for typical LLM token usage patterns, from small prompts to very large contexts up to 67M tokens.
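
As a rough sketch of that progression, the boundary list below is reconstructed from this description (1, 4, 16, ... up to 4**13 = 67,108,864); the exact values and the Meters constant name are assumptions, not copied from the diff:

from opentelemetry.sdk.metrics.view import (
    ExplicitBucketHistogramAggregation,
    View,
)
from opentelemetry.semconv_ai import Meters

# Power-of-4 boundaries covering roughly 1 token up to ~67M tokens.
LLM_TOKEN_USAGE_BUCKETS = [4 ** i for i in range(14)]

token_usage_view = View(
    instrument_name=Meters.LLM_TOKEN_USAGE,
    aggregation=ExplicitBucketHistogramAggregation(LLM_TOKEN_USAGE_BUCKETS),
)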


342-360: Score bucket boundaries cover full similarity range correctly.

The score buckets appropriately cover the typical similarity score range [-1, 1] with reasonable granularity (0.125 increments), suitable for cosine similarity and other normalized distance metrics.


325-340: Double-check Pinecone query duration buckets

I ran a search for any existing Pinecone timeout or latency settings (rg -A 3 -B 3 "timeout.*pinecone|pinecone.*timeout" --type py) and found no references in the codebase. Please verify that the upper duration buckets (20.48 s, 40.96 s, 81.92 s) actually reflect expected Pinecone query latencies rather than timeouts or underlying system issues.

Comment on lines +400 to +405
def get_llm_operation_duration_view():
    from opentelemetry.sdk.metrics.view import View

    return View(
        instrument_name=Meters.LLM_OPERATION_DURATION,
    )

🛠️ Refactor suggestion

LLM operation duration view lacks explicit aggregation configuration.

Unlike other views, this method doesn't specify an ExplicitBucketHistogramAggregation, which means it will use the default histogram buckets. This inconsistency might lead to suboptimal bucket boundaries for duration metrics.

Add explicit bucket configuration for consistency:

 @staticmethod
 def get_llm_operation_duration_view():
-    from opentelemetry.sdk.metrics.view import View
+    from opentelemetry.sdk.metrics.view import View, ExplicitBucketHistogramAggregation
 
     return View(
         instrument_name=Meters.LLM_OPERATION_DURATION,
+        aggregation=ExplicitBucketHistogramAggregation(
+            MetricBuckets.PINECONE_DB_QUERY_DURATION  # or define LLM_OPERATION_DURATION buckets
+        ),
     )
🤖 Prompt for AI Agents
In
packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py
around lines 400 to 405, the get_llm_operation_duration_view function returns a
View without specifying an ExplicitBucketHistogramAggregation, causing it to use
default histogram buckets. To fix this, import
ExplicitBucketHistogramAggregation and add an explicit aggregation parameter
with appropriate bucket boundaries to the View constructor, ensuring consistent
and optimal histogram bucket configuration for duration metrics.

Comment on lines +413 to +416
        aggregation=ExplicitBucketHistogramAggregation(
            MetricBuckets.PINECONE_DB_QUERY_DURATION
        ),
    )

🛠️ Refactor suggestion

Reusing Pinecone duration buckets for generic DB queries may not be appropriate.

The DB_QUERY_DURATION view uses PINECONE_DB_QUERY_DURATION buckets, but different database systems may have vastly different performance characteristics. Consider defining separate buckets or using more generic duration buckets.

Consider adding generic database duration buckets:

 class MetricBuckets:
     # ... existing buckets ...
+    
+    DB_QUERY_DURATION = [
+        0.001, 0.005, 0.01, 0.025, 0.05, 0.075, 0.1, 0.25, 0.5, 
+        0.75, 1.0, 2.5, 5.0, 7.5, 10.0, 30.0, 60.0
+    ]

Then update the view:

 aggregation=ExplicitBucketHistogramAggregation(
-    MetricBuckets.PINECONE_DB_QUERY_DURATION
+    MetricBuckets.DB_QUERY_DURATION
 ),
🤖 Prompt for AI Agents
In
packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py
around lines 413 to 416, the DB_QUERY_DURATION view is using
PINECONE_DB_QUERY_DURATION buckets which are specific to Pinecone and may not
suit other databases. Define a new set of generic database duration buckets that
better represent typical DB query durations across various systems, then update
the aggregation in the view to use these new generic buckets instead of the
Pinecone-specific ones.

Comment on lines +424 to +427
        aggregation=ExplicitBucketHistogramAggregation(
            MetricBuckets.PINECONE_DB_QUERY_SCORES
        ),
    )

🛠️ Refactor suggestion

Reusing score buckets for distance metrics may be semantically incorrect.

Distance metrics often have different ranges and semantics than similarity scores. For example, Euclidean distance is typically non-negative and unbounded, while cosine similarity is bounded [-1, 1].

Consider defining separate buckets for distance metrics:

 class MetricBuckets:
     # ... existing buckets ...
+    
+    DB_SEARCH_DISTANCE = [
+        0, 0.1, 0.25, 0.5, 0.75, 1.0, 1.5, 2.0, 3.0, 5.0, 10.0, 25.0, 50.0, 100.0
+    ]
🤖 Prompt for AI Agents
In
packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py
around lines 424 to 427, the code reuses score buckets designed for similarity
scores in distance metric aggregations, which is semantically incorrect due to
differing value ranges. Define and use a separate set of metric buckets
specifically tailored for distance metrics, ensuring the bucket ranges align
with the expected value domain of the distance metric being measured.

Comment on lines +449 to +450
        if hasattr(current_provider, '_genai_buckets_applied'):
            return

⚠️ Potential issue

Thread safety concern with provider modification.

The flag _genai_buckets_applied is checked and set without synchronization, which could lead to race conditions in multi-threaded applications where multiple threads might simultaneously call this function.

Add thread synchronization:

+import threading
+
+_config_lock = threading.Lock()
+
 def apply_genai_bucket_configuration():
     """Apply GenAI bucket configuration to the global MeterProvider."""
+    with _config_lock:
         try:
             # ... rest of the function
🤖 Prompt for AI Agents
In
packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py
around lines 449 to 450, the check and setting of the _genai_buckets_applied
flag on current_provider is not thread-safe, risking race conditions. To fix
this, introduce a threading lock to synchronize access to this flag, ensuring
that only one thread can check and set it at a time. Use a global or
module-level lock object and acquire it before checking or setting the flag,
then release it afterward.

Comment on lines +455 to +460
        if ('NoOp' in provider_type or 'Proxy' in provider_type or
                not hasattr(current_provider, '__module__') or
                current_provider.__class__.__name__ == 'ProxyMeterProvider'):
            new_provider = MeterProvider(views=genai_views)
            new_provider._genai_buckets_applied = True
            metrics.set_meter_provider(new_provider)

🛠️ Refactor suggestion

Provider detection logic may be fragile.

The string-based detection using 'NoOp' in provider_type and 'Proxy' in provider_type is brittle and could break with OpenTelemetry SDK updates or custom provider implementations.

Use more robust type checking:

-        if ('NoOp' in provider_type or 'Proxy' in provider_type or
-                not hasattr(current_provider, '__module__') or
-                current_provider.__class__.__name__ == 'ProxyMeterProvider'):
+        from opentelemetry.metrics import NoOpMeterProvider
+        from opentelemetry.metrics._internal import ProxyMeterProvider
+        
+        if (isinstance(current_provider, (NoOpMeterProvider, ProxyMeterProvider)) or
+                not hasattr(current_provider, '_all_metric_readers')):
🤖 Prompt for AI Agents
In
packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py
around lines 455 to 460, the current provider detection uses fragile string
checks like 'NoOp' in provider_type and 'Proxy' in provider_type. Replace these
with robust type checking by importing and using the actual NoOpMeterProvider
and ProxyMeterProvider classes from the OpenTelemetry SDK and checking if
current_provider is an instance of these classes using isinstance(). This will
make the detection more reliable against SDK changes or custom implementations.

Comment on lines +477 to +478
                    except Exception:
                        pass

🛠️ Refactor suggestion

Silent exception handling may hide important errors.

The bare except Exception: pass blocks suppress all exceptions, which could hide configuration errors, import issues, or other problems that should be logged or handled appropriately.

Add proper error handling and logging:

+import logging
+
+logger = logging.getLogger(__name__)
+
                     except Exception as e:
-                        pass
+                        logger.debug(f"Failed to recreate metric reader: {e}")
+                        continue

Apply similar changes to other exception blocks.

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In
packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py
around lines 477 to 478, the bare except Exception: pass block silently
suppresses all exceptions, potentially hiding important errors. Modify this
block to catch the exception, log the error details using an appropriate logger
or print statement, and handle the error properly instead of passing silently.
Apply similar error handling and logging improvements to other bare except
blocks in the file.

Comment on lines +486 to +487
                import opentelemetry.metrics._internal as metrics_internal
                metrics_internal._METER_PROVIDER = new_provider

⚠️ Potential issue

Direct manipulation of internal OpenTelemetry state is risky.

Directly setting metrics_internal._METER_PROVIDER bypasses the official API and could break with future OpenTelemetry versions or cause unexpected behavior.

Use the official API instead:

-                import opentelemetry.metrics._internal as metrics_internal
-                metrics_internal._METER_PROVIDER = new_provider
+                metrics.set_meter_provider(new_provider)
🤖 Prompt for AI Agents
In
packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py
at lines 486-487, avoid directly setting the internal variable
metrics_internal._METER_PROVIDER as it bypasses the official API and risks
future compatibility. Instead, replace this direct assignment with calls to the
official OpenTelemetry API for setting or updating the meter provider, ensuring
you use the supported methods to configure the meter provider safely.
