feat: Add top_logprobs parameter to Responses API #4384
base: main
Conversation
Implements issue llamastack#3552: Add support for the top_logprobs parameter in the Responses Create API.

Changes:
- Add the top_logprobs parameter to create_openai_response in agents.py
- Pass top_logprobs through _create_streaming_response
- Add top_logprobs to StreamingResponseOrchestrator
- Pass top_logprobs to the inference API (OpenAIChatCompletionRequestWithExtraBody)

The parameter allows users to request top log probabilities for each token when logprobs are enabled via the include parameter. The response will contain top_logprobs from the provider or None if not supported.

Fixes: llamastack#3552
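To make the flow concrete, here is a minimal sketch of the plumbing described above. The class and method signatures are illustrative assumptions, not verbatim code from agents.py or the orchestrator:

```python
# Sketch only: signatures are assumptions, not the actual llama-stack code.

class StreamingResponseOrchestrator:
    def __init__(self, inference_api, top_logprobs: int | None = None):
        self.inference_api = inference_api
        # Carried through so each chat completion call can request
        # per-token top log probabilities from the provider.
        self.top_logprobs = top_logprobs

    async def _run_inference(self, model: str, messages: list, logprobs: bool):
        # The request is built as extra-body-tolerant parameters
        # (OpenAIChatCompletionRequestWithExtraBody in the real code);
        # providers that do not support top_logprobs return None for it.
        params = {"model": model, "messages": messages, "logprobs": logprobs}
        if self.top_logprobs is not None:
            params["top_logprobs"] = self.top_logprobs
        return await self.inference_api.openai_chat_completion(**params)
```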
Hi @Vasuk12! Thank you for your pull request and welcome to our community.

Action Required: In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process: In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA. Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!
Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!
Hi, thanks for running the CI checks. I will commit some additions to the YAML files to fix the failing integration tests.
Signed-off-by: Vasu <[email protected]>
✱ Stainless preview builds

This PR will update the generated client SDKs:

- ⚡ llama-stack-client-python: studio · conflict
- ✅ llama-stack-client-node: studio · code · diff
- ✅ llama-stack-client-go: studio · code · diff

This comment is auto-generated by GitHub Actions and is automatically kept up to date as you push.
Signed-off-by: Vasu <[email protected]>
This pull request has merge conflicts that must be resolved before it can be merged. @Vasuk12 please rebase it. https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork
Hey @iamemilio @leseb, I have implemented some changes and all tests have passed. Let me know if there are any other changes that might be required.
Fixes: #3552
What does this PR do?
The parameter allows users to request top log probabilities for each token when logprobs are enabled via the include parameter. The response will contain top_logprobs from the provider or None if not supported.
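For illustration, a hedged usage sketch with the Python client. The include value follows the OpenAI Responses API convention; the responses.create method, base URL, and model name here are assumptions for the example:

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

response = client.responses.create(
    model="meta-llama/Llama-3.2-3B-Instruct",
    input="Say hello.",
    # Enables per-token logprobs in the output text.
    include=["message.output_text.logprobs"],
    # Requests up to 3 alternative tokens with their log probabilities;
    # providers without support leave top_logprobs as None.
    top_logprobs=3,
)
```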
Test Plan