feat: System Message Tools #6395

Open: wants to merge 55 commits into base: main

Commits (55)
c314afb
make all tools support agent mode, add warning for poor models
RomneyDa Jul 2, 2025
af1429e
built in tool system message descriptions
RomneyDa Jul 2, 2025
999684c
intercept and parse xml tool calls
RomneyDa Jul 2, 2025
94ef189
translate tool calls back to text
RomneyDa Jul 2, 2025
6e83ca8
textify and reinsert xml tool calls as text
RomneyDa Jul 2, 2025
4871c67
tweak system message, only use for models that don't support
RomneyDa Jul 2, 2025
0e4b9d3
add proficient with tools support
RomneyDa Jul 2, 2025
ab35aab
add setting for forcing system message tools, fix message compilation…
RomneyDa Jul 2, 2025
e76c8bb
merge main
RomneyDa Jul 2, 2025
2bbcd05
fix build errors
RomneyDa Jul 2, 2025
f3dba6b
fix flatten message tests and other tweaks based on cubic feedback
RomneyDa Jul 2, 2025
79dc88f
Fix request rule tests
RomneyDa Jul 2, 2025
ccee106
fix message construction tests
RomneyDa Jul 2, 2025
30a1973
Merge branch 'main' of https://github.com/continuedev/continue into d…
RomneyDa Jul 2, 2025
f3e59df
add tiptap class to editor
RomneyDa Jul 2, 2025
9c0e57e
update recommended agent model logic
RomneyDa Jul 2, 2025
25014d9
merge main
RomneyDa Jul 4, 2025
1bb06a9
model supports native tools
RomneyDa Jul 4, 2025
c63b05c
reorder tests
RomneyDa Jul 5, 2025
76f075e
test cleanup
RomneyDa Jul 5, 2025
a839dff
merge main, only allow one xml tool call
RomneyDa Jul 5, 2025
217d46c
handle alternative tool call start tags in interceptor
RomneyDa Jul 5, 2025
4e10cd0
move detection into util, fix nested eslint issue
RomneyDa Jul 6, 2025
86ce7c5
tool call parsing bugs
RomneyDa Jul 7, 2025
22efa01
merge main
RomneyDa Jul 7, 2025
c0de4ed
merge main, fix conflicts by moving around system message
RomneyDa Jul 8, 2025
4e75e7c
fix merge conflicts
RomneyDa Jul 8, 2025
d102f75
merge main
RomneyDa Jul 9, 2025
e50cea2
change to use 'tool_name' syntax
RomneyDa Jul 9, 2025
58aafa2
new simpler syntax
RomneyDa Jul 13, 2025
9cd1e14
merge main
RomneyDa Jul 14, 2025
f3e9f83
use codeblock tool call syntax
RomneyDa Jul 14, 2025
781ad8b
WIP: parsing of new system tool output tweaks
RomneyDa Jul 14, 2025
a4c56aa
WIP: parsing of new system tool output tweaks
RomneyDa Jul 15, 2025
3c7e5e0
WIP: system message tool parsing continued
RomneyDa Jul 16, 2025
05e662d
merge main (required adding some parallel support)
RomneyDa Jul 17, 2025
197c23c
updated parsing logic
RomneyDa Jul 18, 2025
756769c
merge main
RomneyDa Jul 18, 2025
9f6df34
update system tools parsing
RomneyDa Jul 18, 2025
ddbb867
merge main
RomneyDa Jul 18, 2025
9e75395
fix run terminal command bug
RomneyDa Jul 18, 2025
35b73d1
system message tools working
RomneyDa Jul 18, 2025
21dba5c
system message tools - system message tweaks
RomneyDa Jul 18, 2025
38f6610
system message tools - system message tweaks
RomneyDa Jul 18, 2025
689f22f
system tool calls: cleanup
RomneyDa Jul 18, 2025
b1680cb
search and replace system message tool description
RomneyDa Jul 18, 2025
53690e1
Merge branch 'main' of https://github.com/continuedev/continue into d…
RomneyDa Jul 18, 2025
7eb9ab9
fix prettier
RomneyDa Jul 18, 2025
79f483e
fix split at codeblocks and newlines util
RomneyDa Jul 18, 2025
ce8c483
fix broken gui test file
RomneyDa Jul 18, 2025
7d2aae1
Merge branch 'main' of https://github.com/continuedev/continue into d…
RomneyDa Jul 18, 2025
f7afb22
fix gui tests
RomneyDa Jul 18, 2025
0c51cb3
Merge branch 'main' of https://github.com/continuedev/continue into d…
RomneyDa Jul 18, 2025
72a45f9
cubic recs
RomneyDa Jul 19, 2025
2ebf57d
change how json system message is injected
RomneyDa Jul 19, 2025
13 changes: 5 additions & 8 deletions core/config/load.ts
@@ -262,7 +262,10 @@
config.models.map(async (desc) => {
if ("title" in desc) {
const llm = await llmFromDescription(
desc,
{
...desc,
systemMessage: desc.systemMessage ?? config.systemMessage,
},
ide.readFile.bind(ide),
getUriFromPath,
uniqueId,
@@ -284,6 +287,7 @@
...desc,
model: modelName,
title: modelName,
systemMessage: desc.systemMessage ?? config.systemMessage,
},
ide.readFile.bind(ide),
getUriFromPath,
@@ -486,7 +490,7 @@
}
if (name === "llm") {
const llm = models.find((model) => model.title === params?.modelTitle);
if (!llm) {
[GitHub Actions / core-checks annotation on line 493 in core/config/load.ts: Unexpected negated condition]
errors.push({
fatal: false,
message: `Unknown reranking model ${params?.modelTitle}`,
@@ -560,13 +564,6 @@
}
}

if (config.systemMessage) {
continueConfig.rules.unshift({
rule: config.systemMessage,
source: "json-systemMessage",
});
}

// Trigger MCP server refreshes (Config is reloaded again once connected!)
const mcpManager = MCPManagerSingleton.getInstance();
mcpManager.setConnections(
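The `core/config/load.ts` change above removes the top-level `systemMessage` rule injection and instead applies the config-wide system message as a per-model fallback via nullish coalescing. A minimal sketch of that fallback (the `ModelDesc` interface and `resolveSystemMessage` helper are illustrative names, not from the PR):

```typescript
// Sketch of the per-model system message fallback pattern used above.
interface ModelDesc {
  title: string;
  systemMessage?: string;
}

function resolveSystemMessage(
  desc: ModelDesc,
  configSystemMessage?: string,
): string | undefined {
  // A model-specific message wins; otherwise fall back to the
  // config-wide one; otherwise leave it unset.
  return desc.systemMessage ?? configSystemMessage;
}
```

Note that `??` (rather than `||`) matters here only for non-string fields; for strings the practical effect is that an explicitly set per-model message always takes precedence.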
6 changes: 6 additions & 0 deletions core/config/sharedConfig.ts
@@ -21,6 +21,7 @@ export const sharedConfigSchema = z
useCurrentFileAsContext: z.boolean(),
optInNextEditFeature: z.boolean(),
enableExperimentalTools: z.boolean(),
onlyUseSystemMessageTools: z.boolean(),
codebaseToolCallingOnly: z.boolean(),

// `ui` in `ContinueConfig`
@@ -184,6 +185,11 @@ export function modifyAnyConfigWithSharedConfig<
configCopy.experimental.optInNextEditFeature =
sharedConfig.optInNextEditFeature;
}
if (sharedConfig.onlyUseSystemMessageTools !== undefined) {
configCopy.experimental.onlyUseSystemMessageTools =
sharedConfig.onlyUseSystemMessageTools;
}

if (sharedConfig.codebaseToolCallingOnly !== undefined) {
configCopy.experimental.codebaseToolCallingOnly =
sharedConfig.codebaseToolCallingOnly;
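The `sharedConfig.ts` hunk follows the file's existing merge pattern: a flag is copied onto `experimental` only when it was explicitly set, so an unset shared value never clobbers the config. A simplified sketch of that pattern (types reduced to just the new flag; `applySharedConfig` is a hypothetical name):

```typescript
// Sketch of the "copy only if explicitly set" merge used in
// modifyAnyConfigWithSharedConfig, reduced to the new flag.
interface SharedConfig {
  onlyUseSystemMessageTools?: boolean;
}
interface Config {
  experimental: { onlyUseSystemMessageTools?: boolean };
}

function applySharedConfig(config: Config, shared: SharedConfig): Config {
  const copy: Config = { experimental: { ...config.experimental } };
  // The `!== undefined` guard means an explicit `false` is still applied,
  // while a missing value leaves the existing config untouched.
  if (shared.onlyUseSystemMessageTools !== undefined) {
    copy.experimental.onlyUseSystemMessageTools =
      shared.onlyUseSystemMessageTools;
  }
  return copy;
}
```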
1 change: 0 additions & 1 deletion core/config/types.ts
@@ -1122,7 +1122,6 @@ declare global {
* This is needed to crawl a large number of documentation sites that are dynamically rendered.
*/
useChromiumForDocsCrawling?: boolean;
useTools?: boolean;
modelContextProtocolServers?: MCPOptions[];
}

3 changes: 2 additions & 1 deletion core/index.d.ts
@@ -1080,6 +1080,7 @@ export interface Tool {
faviconUrl?: string;
group: string;
originalFunctionName?: string;
systemMessageDescription?: string;
}

interface ToolChoice {
@@ -1443,6 +1444,7 @@ export interface ExperimentalConfig {
defaultContext?: DefaultContextProvider[];
promptPath?: string;
enableExperimentalTools?: boolean;
onlyUseSystemMessageTools?: boolean;

/**
* Quick actions are a way to add custom commands to the Code Lens of
@@ -1692,7 +1694,6 @@ export type RuleSource =
| "model-options-agent"
| "rules-block"
| "colocated-markdown"
| "json-systemMessage"
| ".continuerules";

export interface RuleWithSource {
20 changes: 1 addition & 19 deletions core/llm/autodetect.ts
@@ -1,9 +1,4 @@
import {
ChatMessage,
ModelCapability,
ModelDescription,
TemplateType,
} from "../index.js";
import { ChatMessage, ModelCapability, TemplateType } from "../index.js";

import {
anthropicTemplateMessages,
@@ -41,7 +36,6 @@ import {
xWinCoderEditPrompt,
zephyrEditPrompt,
} from "./templates/edit.js";
import { PROVIDER_TOOL_SUPPORT } from "./toolSupport.js";

const PROVIDER_HANDLES_TEMPLATING: string[] = [
"lmstudio",
@@ -107,17 +101,6 @@ const MODEL_SUPPORTS_IMAGES: string[] = [
"granite-vision",
];

function modelSupportsTools(modelDescription: ModelDescription) {
if (modelDescription.capabilities?.tools !== undefined) {
return modelDescription.capabilities.tools;
}
const providerSupport = PROVIDER_TOOL_SUPPORT[modelDescription.provider];
if (!providerSupport) {
return false;
}
return providerSupport(modelDescription.model) ?? false;
}

function modelSupportsImages(
provider: string,
model: string,
@@ -400,5 +383,4 @@ export {
autodetectTemplateType,
llmCanGenerateInParallel,
modelSupportsImages,
modelSupportsTools,
};
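The `modelSupportsTools` helper deleted from `autodetect.ts` implemented a capability-first, provider-table-second lookup: an explicit `capabilities.tools` flag overrides autodetection, otherwise a per-provider predicate is consulted. A self-contained sketch of that logic (the single-entry table and its predicate are illustrative; the real `NATIVE_TOOL_SUPPORT` table covers many providers):

```typescript
// Sketch of the lookup logic behind the removed modelSupportsTools helper.
type SupportFn = (model: string) => boolean | undefined;

// Illustrative stand-in for the real provider table.
const NATIVE_TOOL_SUPPORT: Record<string, SupportFn> = {
  anthropic: (model) => model.includes("claude-3"),
};

interface Desc {
  provider: string;
  model: string;
  capabilities?: { tools?: boolean };
}

function modelSupportsTools(desc: Desc): boolean {
  // An explicit capability flag always wins over autodetection.
  if (desc.capabilities?.tools !== undefined) {
    return desc.capabilities.tools;
  }
  // Unknown provider: assume no native tool support.
  const providerSupport = NATIVE_TOOL_SUPPORT[desc.provider];
  if (!providerSupport) {
    return false;
  }
  // Predicate may return undefined for unrecognized models.
  return providerSupport(desc.model) ?? false;
}
```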
2 changes: 2 additions & 0 deletions core/llm/defaultSystemMessages.ts
@@ -62,6 +62,8 @@ export const DEFAULT_AGENT_SYSTEM_MESSAGE = `\
You are in agent mode.

${CODEBLOCK_FORMATTING_INSTRUCTIONS}

When making changes to files, use provided tools, do not write codeblocks directly.
</important_rules>`;

// The note about read-only tools is for MCP servers
18 changes: 6 additions & 12 deletions core/llm/index.ts
@@ -512,33 +512,27 @@ export abstract class BaseLLM implements ILLM {
}

private _formatChatMessages(messages: ChatMessage[]): string {
const msgsCopy = messages ? messages.map((msg) => ({ ...msg })) : [];
let formatted = "";
for (const msg of msgsCopy) {
formatted += this._formatChatMessage(msg);
}
return formatted;
return (messages ?? [])
.map((msg) => this._formatChatMessage(msg))
.join("\n\n");
}

private _formatChatMessage(msg: ChatMessage): string {
let contentToShow = "";
if (msg.role === "tool") {
contentToShow = msg.content;
} else if (msg.role === "assistant" && msg.toolCalls) {
} else if (msg.role === "assistant" && msg.toolCalls?.length) {
contentToShow = msg.toolCalls
?.map(
(toolCall) =>
`${toolCall.function?.name}(${toolCall.function?.arguments})`,
)
.join("\n");
} else if ("content" in msg) {
if (Array.isArray(msg.content)) {
msg.content = renderChatMessage(msg);
}
contentToShow = msg.content;
contentToShow = renderChatMessage(msg);
}

return `<${msg.role}>\n${contentToShow}\n\n`;
return `<${msg.role}>\n${contentToShow}`;
}

protected async *_streamFim(
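The `_formatChatMessages` refactor above replaces a manual accumulation loop with a map-and-join, and `_formatChatMessage` now only renders tool calls when `toolCalls` is non-empty. A simplified, self-contained sketch of the refactored behavior (types trimmed to what the formatting touches):

```typescript
// Simplified sketch of the refactored chat-message formatting.
interface Msg {
  role: "user" | "assistant" | "system" | "tool";
  content: string;
  toolCalls?: { function: { name: string; arguments: string } }[];
}

function formatChatMessage(msg: Msg): string {
  let contentToShow = msg.content;
  // An empty toolCalls array now falls through to plain content,
  // instead of rendering an empty tool-call list.
  if (msg.role === "assistant" && msg.toolCalls?.length) {
    contentToShow = msg.toolCalls
      .map((tc) => `${tc.function.name}(${tc.function.arguments})`)
      .join("\n");
  }
  return `<${msg.role}>\n${contentToShow}`;
}

function formatChatMessages(messages?: Msg[]): string {
  // Messages are joined with blank lines rather than each carrying
  // its own trailing separator.
  return (messages ?? []).map(formatChatMessage).join("\n\n");
}
```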
4 changes: 2 additions & 2 deletions core/llm/llms/Bedrock.ts
@@ -16,7 +16,7 @@ import { ChatMessage, Chunk, LLMOptions, MessageContent } from "../../index.js";
import { safeParseToolCallArgs } from "../../tools/parseArgs.js";
import { renderChatMessage, stripImages } from "../../util/messageContent.js";
import { BaseLLM } from "../index.js";
import { PROVIDER_TOOL_SUPPORT } from "../toolSupport.js";
import { NATIVE_TOOL_SUPPORT } from "../toolSupport.js";
import { getSecureID } from "../utils/getSecureID.js";
import { withLLMRetry } from "../utils/retry.js";

@@ -288,7 +288,7 @@ class Bedrock extends BaseLLM {
// First get tools
const supportsTools =
(this.capabilities?.tools ||
PROVIDER_TOOL_SUPPORT.bedrock?.(options.model)) ??
NATIVE_TOOL_SUPPORT.bedrock?.(options.model)) ??
false;

let toolConfig: undefined | ToolConfiguration = undefined;
2 changes: 0 additions & 2 deletions core/llm/llms/index.ts
@@ -140,8 +140,6 @@ export async function llmFromDescription(

let baseChatSystemMessage: string | undefined = undefined;
if (desc.systemMessage !== undefined) {
// baseChatSystemMessage = DEFAULT_CHAT_SYSTEM_MESSAGE;
// baseChatSystemMessage += "\n\n";
baseChatSystemMessage = await renderTemplatedString(
Handlebars,
desc.systemMessage,
34 changes: 17 additions & 17 deletions core/llm/toolSupport.test.ts
@@ -1,9 +1,9 @@
// core/llm/toolSupport.test.ts
import { PROVIDER_TOOL_SUPPORT } from "./toolSupport";
import { NATIVE_TOOL_SUPPORT } from "./toolSupport";

describe("PROVIDER_TOOL_SUPPORT", () => {
describe("NATIVE_TOOL_SUPPORT", () => {
describe("continue-proxy", () => {
const supportsFn = PROVIDER_TOOL_SUPPORT["continue-proxy"];
const supportsFn = NATIVE_TOOL_SUPPORT["continue-proxy"];

it("should return true for Claude 3.5 models", () => {
expect(
@@ -61,7 +61,7 @@ describe("PROVIDER_TOOL_SUPPORT", () => {
});

describe("anthropic", () => {
const supportsFn = PROVIDER_TOOL_SUPPORT["anthropic"];
const supportsFn = NATIVE_TOOL_SUPPORT["anthropic"];

it("should return true for Claude 3.5 models", () => {
expect(supportsFn("claude-3-5-sonnet")).toBe(true);
@@ -85,7 +85,7 @@ describe("PROVIDER_TOOL_SUPPORT", () => {
});

describe("openai", () => {
const supportsFn = PROVIDER_TOOL_SUPPORT["openai"];
const supportsFn = NATIVE_TOOL_SUPPORT["openai"];

it("should return true for GPT-4 models", () => {
expect(supportsFn("gpt-4")).toBe(true);
@@ -110,7 +110,7 @@ describe("PROVIDER_TOOL_SUPPORT", () => {
});

describe("cohere", () => {
const supportsFn = PROVIDER_TOOL_SUPPORT["cohere"];
const supportsFn = NATIVE_TOOL_SUPPORT["cohere"];

it("should return true for Command models", () => {
expect(supportsFn("command-r")).toBe(true);
@@ -123,7 +123,7 @@ describe("PROVIDER_TOOL_SUPPORT", () => {
});

describe("gemini", () => {
const supportsFn = PROVIDER_TOOL_SUPPORT["gemini"];
const supportsFn = NATIVE_TOOL_SUPPORT["gemini"];

it("should return true for all Gemini models", () => {
expect(supportsFn("gemini-pro")).toBe(true);
@@ -143,7 +143,7 @@ describe("PROVIDER_TOOL_SUPPORT", () => {
});

describe("bedrock", () => {
const supportsFn = PROVIDER_TOOL_SUPPORT["bedrock"];
const supportsFn = NATIVE_TOOL_SUPPORT["bedrock"];

it("should return true for Claude 3.5 Sonnet models", () => {
expect(supportsFn("anthropic.claude-3-5-sonnet-20240620-v1:0")).toBe(
@@ -193,7 +193,7 @@ describe("PROVIDER_TOOL_SUPPORT", () => {
});

describe("mistral", () => {
const supportsFn = PROVIDER_TOOL_SUPPORT["mistral"];
const supportsFn = NATIVE_TOOL_SUPPORT["mistral"];

it("should return true for supported models", () => {
expect(supportsFn("mistral-large-latest")).toBe(true);
@@ -225,7 +225,7 @@ describe("PROVIDER_TOOL_SUPPORT", () => {
});

describe("ollama", () => {
const supportsFn = PROVIDER_TOOL_SUPPORT["ollama"];
const supportsFn = NATIVE_TOOL_SUPPORT["ollama"];

it("should return true for supported models", () => {
expect(supportsFn("llama3.1")).toBe(true);
@@ -270,17 +270,17 @@ describe("PROVIDER_TOOL_SUPPORT", () => {

describe("edge cases", () => {
it("should handle empty model names", () => {
expect(PROVIDER_TOOL_SUPPORT["continue-proxy"]("")).toBe(false);
expect(PROVIDER_TOOL_SUPPORT["anthropic"]("")).toBe(false);
expect(PROVIDER_TOOL_SUPPORT["openai"]("")).toBe(false);
expect(PROVIDER_TOOL_SUPPORT["gemini"]("")).toBe(false);
expect(PROVIDER_TOOL_SUPPORT["bedrock"]("")).toBe(false);
expect(PROVIDER_TOOL_SUPPORT["ollama"]("")).toBe(false);
expect(NATIVE_TOOL_SUPPORT["continue-proxy"]("")).toBe(false);
expect(NATIVE_TOOL_SUPPORT["anthropic"]("")).toBe(false);
expect(NATIVE_TOOL_SUPPORT["openai"]("")).toBe(false);
expect(NATIVE_TOOL_SUPPORT["gemini"]("")).toBe(false);
expect(NATIVE_TOOL_SUPPORT["bedrock"]("")).toBe(false);
expect(NATIVE_TOOL_SUPPORT["ollama"]("")).toBe(false);
});

it("should handle non-existent provider", () => {
// @ts-ignore - Testing runtime behavior with invalid provider
expect(PROVIDER_TOOL_SUPPORT["non-existent"]).toBe(undefined);
expect(NATIVE_TOOL_SUPPORT["non-existent"]).toBe(undefined);
});
});
});