Conversation

@XG-xin XG-xin commented Dec 5, 2025

Motivation

Move reasoning tokens from metadata to metrics in the Node.js OpenAI tests.
We also need to configure the tests to skip some old dd-trace versions.

Changes

Workflow

  1. ⚠️ Create your PR as draft ⚠️
  2. Work on your PR until the CI passes
  3. Mark it as ready for review
    • Test logic is modified? -> Get a review from the RFC owner.
    • Framework is modified, or used in a non-obvious way? -> Get a review from the R&P team.

🚀 Once your PR is reviewed and the CI is green, you can merge it!

🛟 #apm-shared-testing 🛟

Reviewer checklist

  • If the PR title starts with [<language>], double-check that only <language> is impacted by the change
  • No system-tests internals are modified. Otherwise, I have approval from the R&P team
  • A docker base image is modified?
    • the relevant build-XXX-image label is present
  • A scenario is added (or removed)?


github-actions bot commented Dec 5, 2025

CODEOWNERS have been resolved as:

tests/integration_frameworks/llm/openai/test_openai_llmobs.py           @DataDog/ml-observability

"output_tokens": mock.ANY,
"total_tokens": mock.ANY,
"cache_read_input_tokens": mock.ANY,
"reasoning_output_tokens": mock.ANY,

Maybe because this is the one case where we actually expect a non-zero number of reasoning tokens, we can use the real count? I think it was 64, but if it fails locally you can just update it with whatever the output was (should we ever regress). It also might be different for stream=True vs stream=False.
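The suggestion above could be sketched like this: pin `reasoning_output_tokens` to an exact count while leaving the other fields as `mock.ANY`. This is a hypothetical illustration, not the actual test code; the field names mirror the snippet above, and the value 64 is only the count mentioned in the comment (it may differ between stream modes).

```python
from unittest import mock

# Hypothetical expected-metrics dict for the case with non-zero
# reasoning tokens. 64 is the count cited in the review comment;
# update it if the model output changes (or differs per stream mode).
expected_metrics = {
    "output_tokens": mock.ANY,
    "total_tokens": mock.ANY,
    "cache_read_input_tokens": mock.ANY,
    "reasoning_output_tokens": 64,  # exact count instead of mock.ANY
}

# mock.ANY compares equal to any value, so only the pinned field
# actually constrains the comparison.
actual = {
    "output_tokens": 80,
    "total_tokens": 90,
    "cache_read_input_tokens": 0,
    "reasoning_output_tokens": 64,
}
print(expected_metrics == actual)  # True
```

Because `mock.ANY` matches anything, this assertion only regresses if the reasoning token count itself changes, which is exactly the signal the reviewer wants.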
