Hi @nxa06464,
Currently, Ollama models like Qwen3 and DeepSeek-R1 send thinking content as `<think>` tags in the assistant message, which is rendered in the Vaadin UI during streaming. For gpt-oss, thinking comes through assistant metadata fields instead, but I haven't integrated this yet since Spring AI is still developing unified support for thinking across providers.
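For illustration, here is a rough sketch of how that `<think>`-tag splitting can be handled during streaming. This is not the actual `ChatContentView` code, and `ThinkTagSplitter` is just an example name: streamed chunks can cut a tag in half, so the sketch buffers a small tail between calls.

```java
/**
 * Rough sketch only, not the project's ChatContentView logic: splits a streamed
 * assistant message into visible answer text and <think>...</think> reasoning
 * text. Chunks may cut a tag in half, so a small tail is buffered between calls.
 */
public class ThinkTagSplitter {

    private static final String OPEN = "<think>";
    private static final String CLOSE = "</think>";

    private final StringBuilder pending = new StringBuilder(); // unprocessed tail (may hold a partial tag)
    private boolean insideThink = false;

    /** Text produced by one chunk, split into answer and thinking parts. */
    public record Slice(String answer, String thinking) { }

    public Slice feed(String chunk) {
        pending.append(chunk);
        StringBuilder answer = new StringBuilder();
        StringBuilder thinking = new StringBuilder();

        while (true) {
            String marker = insideThink ? CLOSE : OPEN;
            int idx = pending.indexOf(marker);
            if (idx >= 0) {
                // Everything before the marker belongs to the current mode, then the mode flips.
                (insideThink ? thinking : answer).append(pending, 0, idx);
                pending.delete(0, idx + marker.length());
                insideThink = !insideThink;
            } else {
                // No complete marker yet: emit all but a tail long enough to hide a partial tag.
                int keep = Math.min(pending.length(), marker.length() - 1);
                (insideThink ? thinking : answer).append(pending, 0, pending.length() - keep);
                pending.delete(0, pending.length() - keep);
                return new Slice(answer.toString(), thinking.toString());
            }
        }
    }

    /** Flush whatever is still buffered once the stream completes. */
    public Slice finish() {
        String rest = pending.toString();
        pending.setLength(0);
        return insideThink ? new Slice("", rest) : new Slice(rest, "");
    }
}
```

A minimal way to drive it, simulating the token stream with Reactor; in the app the chunks would come from the streaming chat call, and the output would be pushed to the Vaadin components via something like `UI.access(...)` rather than `System.out`:

```java
import reactor.core.publisher.Flux;

public class ThinkTagSplitterDemo {
    public static void main(String[] args) {
        ThinkTagSplitter splitter = new ThinkTagSplitter();

        // Simulated token stream with a <think> tag split across chunk boundaries.
        Flux<String> chunks = Flux.just("<thi", "nk>weighing the optio", "ns</think>", "Here is ", "the answer.");

        chunks.subscribe(chunk -> {
            ThinkTagSplitter.Slice slice = splitter.feed(chunk);
            if (!slice.thinking().isEmpty()) System.out.print("[thinking] " + slice.thinking());
            System.out.print(slice.answer());
        });

        System.out.print(splitter.finish().answer()); // flush the buffered tail
    }
}
```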
Once Spring AI exposes a consistent interface, I'll add framework-level parsing/persistence support alongside the existing UI handling. For now, check `ChatContentView` for the current streaming logic. Thanks for the question!