
Hi @nxa06464,

Currently, Ollama models such as Qwen3 and DeepSeek-R1 emit their reasoning as `<think>` tags inside the assistant message, and that content is rendered in the Vaadin UI during streaming.
For gpt-oss, the thinking arrives through assistant metadata fields instead. I haven't integrated that yet, since Spring AI is still working toward unified support for thinking across providers.
Once Spring AI exposes a consistent interface, I'll add framework-level parsing and persistence alongside the existing UI handling. For now, see `ChatContentView` for the current streaming logic.
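As a rough sketch of the tag-based case: the snippet below splits an assistant message that embeds reasoning in `<think>` tags into the thinking portion and the visible answer. The `ThinkTagParser` name and `Parsed` record are illustrative assumptions, not the actual `ChatContentView` implementation.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public final class ThinkTagParser {

    // Matches a single <think>...</think> block; DOTALL lets the
    // reasoning span multiple lines, as Qwen3/DeepSeek-R1 output does.
    private static final Pattern THINK =
            Pattern.compile("<think>(.*?)</think>", Pattern.DOTALL);

    /** Hypothetical holder for the split message parts. */
    public record Parsed(String thinking, String answer) {}

    /**
     * Splits an assistant message into its reasoning and answer parts.
     * Messages without a <think> block yield empty thinking content.
     */
    public static Parsed parse(String assistantMessage) {
        Matcher m = THINK.matcher(assistantMessage);
        if (m.find()) {
            String thinking = m.group(1).strip();
            String answer = m.replaceFirst("").strip();
            return new Parsed(thinking, answer);
        }
        return new Parsed("", assistantMessage.strip());
    }
}
```

A streaming UI would of course buffer chunks until the closing tag arrives rather than regex-matching a complete message, but the split logic is the same.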

Thanks for the question!

Answer selected by JM-Lab