
Conversation


@hexqi hexqi commented Sep 28, 2025

English | 简体中文

PR

PR Checklist

Please check if your PR fulfills the following requirements:

  • The commit message follows our Commit Message Guidelines
  • Tests for the changes have been added (for bug fixes / features)
  • Docs have been added / updated (for bug fixes / features)
  • Built with its own designer and fully self-validated

PR Type

What kind of change does this PR introduce?

  • Bugfix
  • Feature
  • Code style update (formatting, local variables)
  • Refactoring (no functional changes, no api changes)
  • Build related changes
  • CI related changes
  • Documentation content changes
  • Other... Please describe:

Background and solution

What is the current behavior?

Issue Number: N/A

What is the new behavior?

Does this PR introduce a breaking change?

  • Yes
  • No

Other information

Summary by CodeRabbit

  • New Features

    • Introduced a revamped Robot AI chat experience with a new home panel, chat UI, quick prompts, file attachments, history management, and fullscreen support.
    • Added support for OpenAI-compatible chat (including streaming) with dynamic model/base URL configuration.
  • Improvements

    • Renamed modes to Agent/Chat with updated selectors, tooltips, and defaults.
    • Enhanced settings popover with helpful tooltips.
    • Refined right panel layout and container styling for better positioning and visuals.
  • Bug Fixes

    • Stabilized authorization header handling to avoid undefined values.
  • Chores

    • Updated robot-related dependencies to 0.3.0-rc.5.


coderabbitai bot commented Sep 28, 2025

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

Walkthrough

This change set overhauls the robot plugin: introduces a new Home.vue entry/component, adds an OpenAI-compatible client/provider with dynamic LLM config, implements a streaming chat + MCP tools composable, updates modes from Builder→Agent, refactors components and icons, adjusts MCP types/usage, bumps robot deps, and tweaks minor layout/CSS and auth header handling.

Changes

Cohort / File(s) Summary
Auth header safety
packages/common/js/completion.js
Ensure the Authorization header interpolates `Bearer ${apiKey}` safely so it never contains undefined values.
Layout positioning tweaks
packages/layout/src/DesignSettings.vue, packages/layout/src/Main.vue
Remove top offset on right panel; set .tiny-engine-right-wrap to position: relative.
Robot entry switch
packages/plugins/robot/index.ts
Default export now uses Home.vue instead of Main.vue.
Robot dependency bumps
packages/plugins/robot/package.json
Update @opentiny/tiny-robot* deps from 0.3.0-rc.0 to 0.3.0-rc.5.
New robot Home component
packages/plugins/robot/src/Home.vue
Add new robot AI chat container with toolbar, Teleport panel, settings, AI mode switching, file upload hook, and layout logic.
Robot main UI/logic refactor
packages/plugins/robot/src/Main.vue
Shift default mode to Agent, update placeholders, adjust allowFiles logic, refactor imports, replace Builder checks with Agent, update token defaults, add border style, remove exposed avatarUrl/VISUAL_MODEL.
OpenAI-compatible client provider
packages/plugins/robot/src/client/OpenAICompatibleProvider.ts, packages/plugins/robot/src/client/index.ts
Add provider supporting non-stream/stream chat; introduce AIClient wrapper with beforeRequest hook; dynamic updates to baseURL/apiKey/model; export client and updateLLMConfig.
Chat and MCP composables
packages/plugins/robot/src/composables/useChat.ts, packages/plugins/robot/src/composables/useMcp.ts
Add streaming chat with tool-calls execution/merging; add abort handling; refactor MCP API, cache LLM tools, change imports, remove mcpServers exposure.
Robot components updates
packages/plugins/robot/src/components/RobotChat.vue, packages/plugins/robot/src/components/RobotSettingPopover.vue, packages/plugins/robot/src/components/RobotTypeSelect.vue
Add full chat UI with history, prompts, attachments; replace labels with tooltip slot in settings; switch mode tabs to Agent/Chat, adjust props/defaults.
Robot modes/utilities
packages/plugins/robot/src/js/useRobot.ts, packages/plugins/robot/src/utils/common-utils.ts
Change AI_MODES to { Agent, Chat }, remove VISUAL_MODEL; export formatMessages, include optional tool_calls/tool_call_id, use toRaw.
MCP UI and types
packages/plugins/robot/src/mcp/McpServer.vue, packages/plugins/robot/src/mcp/types.ts, packages/plugins/robot/src/types/mcp-types.ts
Update McpServer props and imports; add baseUrl? to RequestOptions; add new MCP-related public interfaces.
Icon SFC cleanup
packages/plugins/robot/src/icons/mcp-icon.vue, packages/plugins/robot/src/icons/page-icon.vue, packages/plugins/robot/src/icons/study-icon.vue
Remove SVG metadata and script exports; templates only.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor User
  participant UI as Home.vue / RobotChat.vue
  participant Chat as useChat (composable)
  participant Client as AIClient
  participant Prov as OpenAICompatibleProvider
  participant MCP as MCP Server

  User->>UI: Type prompt / select prompt
  UI->>Chat: send(message, attachments?)
  Chat->>Client: chatStream(request with messages/tools)
  Client->>Prov: chatStream(payload, beforeRequest)
  Note over Client,Prov: beforeRequest injects MCP tools, model, apiKey, baseURL
  Prov-->>Client: SSE stream (delta, tool_calls, reasoning)
  Client-->>Chat: onReceiveData(delta)
  Chat->>UI: Update bubbles (reasoning/markdown)
  alt Tool calls present
    Chat->>MCP: Execute tool(name,args)
    MCP-->>Chat: Tool result
    Chat->>Client: Continue stream with tool result
  end
  Client-->>Chat: onFinish
  Chat->>UI: Finalize message/state
sequenceDiagram
  autonumber
  actor User
  participant UI as Home.vue / SettingsPopover
  participant Cfg as updateLLMConfig
  participant Client as AIClient
  participant Prov as OpenAICompatibleProvider

  User->>UI: Change model/apiKey/baseURL
  UI->>Cfg: updateLLMConfig(newConfig)
  Cfg->>Client: update in-memory config
  Cfg->>Prov: updateConfig(newConfig)
  Note over Client,Prov: Subsequent requests use updated provider settings
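The configuration-update flow in the diagram above can be sketched as follows. This is a simplified illustration only; the class and function signatures in the PR may differ:

```typescript
interface LLMConfig {
  baseURL: string
  apiKey: string
  model: string
}

// The provider holds the live config; partial updates are merged in,
// so subsequent requests pick up the new settings (the "updateConfig" step).
class ProviderSketch {
  constructor(public config: LLMConfig) {}

  updateConfig(partial: Partial<LLMConfig>): void {
    this.config = { ...this.config, ...partial }
  }
}

// updateLLMConfig-style helper: forwards the new settings to the provider
// and returns the effective config, mirroring the sequence diagram.
function makeUpdateLLMConfig(provider: ProviderSketch) {
  return (partial: Partial<LLMConfig>): LLMConfig => {
    provider.updateConfig(partial)
    return provider.config
  }
}

const provider = new ProviderSketch({
  baseURL: 'https://api.example.com/v1', // placeholder endpoint
  apiKey: 'sk-xxx',
  model: 'gpt-4o-mini'
})
const updateLLMConfig = makeUpdateLLMConfig(provider)
updateLLMConfig({ model: 'deepseek-chat' })
```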

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Poem

ahem, I twitch my ears—new gears engage!
Agent wakes, Chat waves from the page.
Streams now whisper tool-call lore,
MCP doors creak—then open more.
Headers tidy, panels align,
I thump approval: ship time! 🐇✨

Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
  • Docstring Coverage — ⚠️ Warning: docstring coverage is 0.00%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
  • Title Check — ❓ Inconclusive: the title “[WIP] feat: ai plugin refactor” does reference a refactoring of the AI plugin, but it carries work-in-progress noise and is too generic to convey the primary focus of this extensive changeset. Remove the “[WIP]” marker and use a concise, descriptive title, such as “Refactor Robot AI plugin core interfaces and introduce new Home.vue chat component”, so reviewers immediately understand the primary change.
✅ Passed checks (1 passed)
  • Description Check — ✅ Passed: check skipped because CodeRabbit’s high-level summary is enabled.

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@hexqi hexqi marked this pull request as draft September 28, 2025 03:22
@hexqi hexqi force-pushed the feat/ai-plugin-refactor branch from 0d7f889 to 57ac995 Compare September 28, 2025 03:24
@hexqi hexqi force-pushed the feat/ai-plugin-refactor branch from 57ac995 to 5eb53e9 Compare September 29, 2025 07:06
@chilingling (Member)


The model configuration should support extra request parameters. For example, when configuring a qwen-plus-thinking model, it should be possible to attach an extra { enable_thinking: true } parameter.
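A minimal sketch of what merging such per-model extra parameters into the request body could look like. The extraBody field and buildRequestBody helper are hypothetical names for illustration, not the plugin's actual API:

```typescript
// Hypothetical per-model config: extraBody carries provider-specific params.
interface ModelConfig {
  model: string
  extraBody?: Record<string, unknown>
}

interface ChatMessage {
  role: 'system' | 'user' | 'assistant'
  content: string
}

// Spread the model's extra params into the outgoing request body.
function buildRequestBody(config: ModelConfig, messages: ChatMessage[]): Record<string, unknown> {
  return {
    model: config.model,
    messages,
    ...(config.extraBody ?? {})
  }
}

// Example: qwen-plus-thinking with an extra enable_thinking flag.
const body = buildRequestBody(
  { model: 'qwen-plus-thinking', extraBody: { enable_thinking: true } },
  [{ role: 'user', content: 'hello' }]
)
```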

@chilingling (Member)

The code structure could be strengthened to improve the plugin's extensibility and maintainability. For example:

  1. Enhance AIModelOptions in useRobot.ts: provide default configurations and allow models to be added or hidden at the AI plugin configuration level.

Benefit: centralizing the static configuration of model providers and their models keeps it readable and maintainable, and makes it easy to add or hide models through configuration later.

Example of an enhanced structure:

export default {
  name: 'DeepSeek',
  apiBase: 'https://api.deepseek.com/v1',
  models: [
    {
      id: 'deepseek-chat',
      name: 'deepseek-chat',
      contextWindow: 65536, // context window size
      maxTokens: 8192,
      defaultMaxTokens: 8000,
      inputPrice: 0.0006, // input token price
      outputPrice: 0.002, // output token price
      isDefault: true,
      description: `60 tokens/second, enhanced capabilities, API compatibility intact`,
      capabilities: {
        // model capabilities
        tools: {
          enabled: true
        }
      }
    }
  ]
}
  2. Enhance the modelProvider capabilities (clients/index.ts, clients/OpenAICompatibleProvider.ts).
    We currently provide a basic OpenAI-compatible modelProvider.
    Suggestions:
  • Integrate tool_call handling into the provider (tool_call handling currently lives in useChat.ts)
  • Extract the Agent-mode / Chat-mode handling out of the provider.

Benefit: different model providers, and even different models, can differ subtly in tool_call format and request parameters. If all the common handling is cohesive inside a single provider, later customization only needs a configuration hook: downstream users can pass in their own provider to absorb those differences in tool_call format and parameters.

Summary: better extensibility and higher cohesion.
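As an illustration of moving tool_call handling into the provider, here is a sketch of merging the incremental tool_call fragments that OpenAI-compatible streaming APIs emit across deltas. All names here are illustrative, not the plugin's actual API:

```typescript
// OpenAI-compatible streams send tool calls as indexed fragments whose
// `arguments` string arrives in pieces across deltas; merge them by index.
interface ToolCallFragment {
  index: number
  id?: string
  name?: string
  arguments?: string
}

function mergeToolCallFragments(fragments: ToolCallFragment[]): ToolCallFragment[] {
  const merged = new Map<number, ToolCallFragment>()
  for (const f of fragments) {
    const prev = merged.get(f.index) ?? { index: f.index, arguments: '' }
    merged.set(f.index, {
      index: f.index,
      id: f.id ?? prev.id,
      name: f.name ?? prev.name,
      // Argument chunks are concatenated in arrival order.
      arguments: (prev.arguments ?? '') + (f.arguments ?? '')
    })
  }
  return [...merged.values()]
}

// Two deltas for the same tool call merge into one complete call.
const calls = mergeToolCallFragments([
  { index: 0, id: 'call_1', name: 'getWeather', arguments: '{"city"' },
  { index: 0, arguments: ':"Paris"}' }
])
```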

  3. Add mode handlers (Agent mode, Chat mode, Plan mode, etc.).
    Different modes may use different system prompts, callable tools, and models. Some of this handling is currently scattered across useChat and modelProvider; consider extracting a chatMode-style handler that cohesively encapsulates per-mode differences and then hands the result to the modelProvider, keeping the lower layers unaware of mode specifics.
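A rough sketch of such a mode handler, with all names hypothetical:

```typescript
// Each mode bundles its own system prompt, allowed tools, and target model,
// so the provider below it receives a fully resolved request context.
interface ChatModeHandler {
  name: 'agent' | 'chat' | 'plan'
  systemPrompt: string
  allowedTools: string[]
  model: string
}

function buildModeContext(mode: ChatModeHandler, userMessage: string) {
  return {
    model: mode.model,
    tools: mode.allowedTools,
    messages: [
      { role: 'system', content: mode.systemPrompt },
      { role: 'user', content: userMessage }
    ]
  }
}

const agentMode: ChatModeHandler = {
  name: 'agent',
  systemPrompt: 'You can modify the page schema via tools.',
  allowedTools: ['mcp.getSchema', 'mcp.setSchema'], // hypothetical tool names
  model: 'deepseek-chat'
}
const ctx = buildModeContext(agentMode, 'Add a button to the page')
```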


@hexqi hexqi force-pushed the feat/ai-plugin-refactor branch from cd3d8c0 to 8005437 Compare October 20, 2025 11:51
@hexqi hexqi force-pushed the feat/ai-plugin-refactor branch from f8a401a to 0d7ca22 Compare October 28, 2025 10:09