
Support AI SDK 5 #93


Status: Open. Wants to merge 85 commits into base: main.

Commits (85):
6e8c3db
start v5
ianmacartney Jul 28, 2025
c9645be
update example
ianmacartney Jul 29, 2025
4be2655
codegen
ianmacartney Jul 29, 2025
1671c4c
separate out test
ianmacartney Jul 29, 2025
f7ca03d
Revert "separate out test"
ianmacartney Jul 29, 2025
777f762
separate out test
ianmacartney Jul 29, 2025
d62f917
drop build
ianmacartney Jul 29, 2025
eedca83
bundle definePlaygroundAPI into package
ianmacartney Jul 29, 2025
72bcf77
drop setupjs openrouter
ianmacartney Jul 30, 2025
3834b3b
update to v 0.5
ianmacartney Aug 1, 2025
6df9b2f
Merge branch 'main' into ai-sdk-v5
ianmacartney Aug 2, 2025
146f619
codegen
ianmacartney Aug 2, 2025
9b5dc50
update rag version to one that supports v5
ianmacartney Aug 2, 2025
c0b2f58
drop playground as dep
ianmacartney Aug 2, 2025
89b7da3
use existing fns
ianmacartney Aug 2, 2025
5590fd7
v5 for real
ianmacartney Aug 2, 2025
a9987e4
fix moving definePlaygroundAPI
ianmacartney Aug 2, 2025
fff7c21
fix-usage
ianmacartney Aug 2, 2025
9c0fd6e
fix some examples
ianmacartney Aug 2, 2025
9205528
index-test
ianmacartney Aug 2, 2025
13634a8
warnings
ianmacartney Aug 2, 2025
aaf7f68
human
ianmacartney Aug 2, 2025
afc9cb7
allow passing in message or modelmessage to most places
ianmacartney Aug 2, 2025
c5fb5b9
ease types for stopWhen
ianmacartney Aug 2, 2025
3a87b14
fix deltas
ianmacartney Aug 2, 2025
9a07690
fix uimessages
ianmacartney Aug 2, 2025
8851d6b
drop type tests we've given up on
ianmacartney Aug 2, 2025
65bf554
build playground for pkg-pr-new
ianmacartney Aug 3, 2025
21de30e
only save text as a last resort if there's already real content
ianmacartney Aug 4, 2025
789927d
Use `tool-[toolName]` as the part type for tool calls.
defrex Aug 4, 2025
9edad9d
Return the output value directly when the type is JSON
defrex Aug 4, 2025
8899143
Add a test for the output type unpacking
defrex Aug 4, 2025
805c425
change tool-call to tool-{name}
ianmacartney Aug 4, 2025
94b18dc
Merge branch 'ai-sdk-v5' into ai-sdk-v5-tool-type-fix
ianmacartney Aug 4, 2025
49da4c1
Merge pull request #104 from defrex/ai-sdk-v5-tool-type-fix
ianmacartney Aug 4, 2025
bdd9a26
Add generic type parameters to UIMessage and toUIMessages function
ethan-huo Aug 6, 2025
51f46b6
run tests on ai-sdk-v5
ianmacartney Aug 7, 2025
2232820
run tests on ai-sdk-v5
ianmacartney Aug 7, 2025
a65dd38
format & fix type error
ianmacartney Aug 7, 2025
926a949
Merge branch 'ai-sdk-v5-toolset-type' into ai-sdk-v5
ianmacartney Aug 7, 2025
ef5209f
Add @ai-sdk/provider-utils peer dependency
ethan-huo Aug 11, 2025
9b247ec
check for crypto.randomUUID
ianmacartney Aug 11, 2025
f71f09d
Merge branch 'main' into ai-sdk-v5
ianmacartney Aug 11, 2025
74de6e7
0.5.0-alpha.1
ianmacartney Aug 11, 2025
0ae3644
import from zod3
ianmacartney Aug 12, 2025
b5deaf1
fix up npm i
ianmacartney Aug 11, 2025
307aa65
package versioning
ianmacartney Aug 11, 2025
7d3575a
support language or chat provider strings
ianmacartney Aug 12, 2025
2745754
Merge branch 'ai-sdk-v5' into to-ui-message-web-api
ianmacartney Aug 12, 2025
d044656
Merge pull request #111 from ethan-huo/to-ui-message-web-api
ianmacartney Aug 12, 2025
820f160
track package lock & re-install convex for example
ianmacartney Aug 12, 2025
997ca66
package revival
ianmacartney Aug 12, 2025
d6c6e86
zod3 f
ianmacartney Aug 13, 2025
407585e
document .text
ianmacartney Aug 14, 2025
5a6e6bb
ordered message handle order 0
ianmacartney Aug 14, 2025
52acb3b
update ordering to bump on users
ianmacartney Aug 14, 2025
8ced943
run all but skip tests
ianmacartney Aug 14, 2025
4e0dc97
move message saving into client/messages.ts file
ianmacartney Aug 14, 2025
cea7b78
format
ianmacartney Aug 14, 2025
45fda66
move thread things to client/threads.ts
ianmacartney Aug 14, 2025
e259796
generating embeddings shouldn't require deserializing
ianmacartney Aug 14, 2025
38afd32
document saving messages without the Agent
ianmacartney Aug 14, 2025
2ed3a4c
snip lastMessageId
ianmacartney Aug 14, 2025
fb72bd7
allow message embeddings to save message action
ianmacartney Aug 14, 2025
1fc49b9
return all messages when generating/streaming
ianmacartney Aug 14, 2025
e10482a
check off contributing tasks
ianmacartney Aug 14, 2025
ec9aa22
fix playground import
ianmacartney Aug 14, 2025
feedb59
fix: migrate to AI SDK v5 and resolve Zod v4 compatibility
ethan-huo Aug 14, 2025
b3fab01
make storeFile args object
ianmacartney Aug 14, 2025
541d143
move more functions to threads
ianmacartney Aug 14, 2025
9dc26d2
use base functions instead of on agent
ianmacartney Aug 14, 2025
9ff151a
fix UIs for new UIMessage types
ianmacartney Aug 14, 2025
d579719
eslint
ianmacartney Aug 14, 2025
53b0479
drop id from messages
ianmacartney Aug 14, 2025
3fe0570
change maxRetries to a callSettings
ianmacartney Aug 14, 2025
ca5b64f
change pending to message state passed in
ianmacartney Aug 14, 2025
6b1ab0a
fix getMessagesByIds
ianmacartney Aug 14, 2025
e54d849
spread in usage handler
ianmacartney Aug 14, 2025
0d91779
npm i
ianmacartney Aug 14, 2025
484f3e3
fix pending tests
ianmacartney Aug 14, 2025
d133ced
Merge branch 'main' into ai-sdk-v5
ianmacartney Aug 14, 2025
816163c
0.2.0-alpha.2
ianmacartney Aug 14, 2025
a35117c
Merge branch 'ai-sdk-v5' into ai-sdk-v5
ianmacartney Aug 14, 2025
dbcd22a
Merge commit 'a35117c6' into ai-sdk-v5
ianmacartney Aug 14, 2025
a1282ba
use LanguageModel{v2,}
ianmacartney Aug 14, 2025
7 changes: 4 additions & 3 deletions .github/workflows/node.js.yml
@@ -1,11 +1,11 @@
name: Run tests
on:
push:
branches: ["main"]
branches: ["main", "ai-sdk-v5"]
pull_request:
branches: ["main"]
branches: ["main", "ai-sdk-v5"]
jobs:
build:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4
@@ -18,6 +18,7 @@ jobs:
node-version: "20.x"
cache: "npm"
- run: node setup.cjs
- run: cd playground && npm run build && cd ..
- run: npx pkg-pr-new publish ./ ./playground
- run: npm test
- run: npm run typecheck
21 changes: 21 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,26 @@
# Changelog

## 0.2.0 AI SDK v5 support (alpha)

- Supports `LanguageModel`: either a string for the gateway or a
  `LanguageModelV2` instance.
- Adds a `.text` field to the `toUIMessages` return value (this is the
  `UIMessage` exported by the Agent library).
  Note: this is somewhat different, so your UIs will have to adjust.
- No longer returns `lastMessageId` when saving a message; instead it
  returns all the messages it saved, with all metadata.
- All messages are returned when generating / streaming text / objects,
  not just the prompt message ID / order.
- More functions are supported without using an Agent class.
- The `UIMessage` is now based on the v5 `UIMessage`, and the `MessageDoc` is
  based on the `ModelMessage`. The data at rest has not changed, so it is
  backwards compatible.
- The `id` is no longer used / configurable when saving messages.
  Reach out if you were using it; the normal message ID should suffice.
- The `maxRetries` option has been moved into a general `callSettings` config
  on the Agent.
- The `maxSteps` option has been replaced by the v5 `stopWhen` alternative.
- Saving pending messages now requires passing `status: "pending"` instead of
  a top-level `pending: true`. Most people don't use this feature currently.
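
If you previously relied on `lastMessageId` from the save result, you can derive it from the returned messages. A minimal sketch, assuming the returned docs carry an `_id` and an `order` field as described above:

```typescript
// Hedged sketch: the new save APIs return all saved messages; if you relied
// on the old lastMessageId, take the id of the last returned message.
type MessageDocLike = { _id: string; order: number };

function lastMessageId(messages: MessageDocLike[]): string | undefined {
  return messages.at(-1)?._id;
}

const saved: MessageDocLike[] = [
  { _id: "msg_1", order: 0 },
  { _id: "msg_2", order: 1 },
];
console.log(lastMessageId(saved)); // "msg_2"
```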

## 0.1.18

- definePlaygroundAPI uses the new interface functions
13 changes: 0 additions & 13 deletions CONTRIBUTING.md
@@ -49,14 +49,6 @@ npm run alpha
- Convenience function to create a thread by copying an existing thread (fork)
- Add a "failed" message when an error is thrown in generate/stream call.
- Add a "failed" message when a stream is aborted.
- Enable saving a message as part of the same `order` as a given message.
- Validate that you can save a tool response, and use that as promptMessageId
and have the response assistant message be on the same order & after the
tool call message stepOrder.
- Return the order from `saveMessage` so it can be used for idempotency &
appending, if not already returned
- Return more message metadata from `generateText` & `streamText` - all
message info, not just prompt id
- Support new AI SDK version (and LanguageModelProviderV2)
- Add a `contextHandler` option to the Agent component, that can be used to see
and modify the context passed to the LLM before it's called.
@@ -71,16 +63,11 @@ npm run alpha
- Improve the demo to show more of the features & have nicer UI
- Add an example of aborting a stream.
- Add an example of using tracing / telemetry.
- When adding messages, increment order for each user message
- Refactor agent code to more helper functions, and break up `client/index.ts`
into more files.
- Add a `deleteMessageOrder` function that takes a message id, and deletes all
messages at that message's order.
- Add an example of using MCP with the Agent.
- Automatically turn big text content into a file when saving a message and keep
as a fileId. Re-hydrate it when reading out for generation.
- Finish deprecating save{All,Any}InputMessages in favor of saveInputMessages &
other changes
- When a generateText finishes with a tool call, return a `continue` fn that can
be used to save the tool call response(s) and continue the generation at the
same order.
6 changes: 4 additions & 2 deletions docs/files.mdx
@@ -64,8 +64,10 @@ const { file } = await storeFile(
ctx,
components.agent,
new Blob([bytes], { type: mimeType }),
filename,
sha256,
{
filename,
sha256,
},
);
const { fileId, url, storageId } = file;
```
6 changes: 3 additions & 3 deletions docs/getting-started.mdx
@@ -80,9 +80,9 @@ attribute each message to a specific agent. Other options are defaults that can
be overridden at each LLM call site.

```ts
import { tool } from "ai";
import { tool, stepCountIs } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";
import { z } from "zod/v3";
import { Agent, createTool } from "@convex-dev/agent";
import { components } from "./_generated/api";

@@ -114,7 +114,7 @@ const supportAgent = new Agent(components.agent, {
// Used for limiting the number of steps when tool calls are involved.
// NOTE: if you want tool calls to happen automatically with a single call,
// you need to set this to something greater than 1 (the default).
maxSteps: 1,
stopWhen: stepCountIs(5),
// Used for limiting the number of retries when a tool call fails. Default: 3.
maxRetries: 3,
// Used for tracking token usage. See https://docs.convex.dev/agents/usage-tracking
2 changes: 1 addition & 1 deletion docs/human-agents.mdx
@@ -90,7 +90,7 @@ needs human intervention to do so, such as confirmation of a fact.

```ts
import { tool } from "ai";
import { z } from "zod";
import { z } from "zod/v3";

const askHuman = tool({
description: "Ask a human a question",
52 changes: 43 additions & 9 deletions docs/messages.mdx
@@ -172,7 +172,7 @@ apply, except you don't have to provide a model. It will use the agent's default
chat model.

```ts
import { z } from "zod";
import { z } from "zod/v3";

const result = await thread.generateObject({
prompt: "Generate a plan based on the conversation so far",
@@ -280,7 +280,7 @@ function MyComponent({ threadId }: { threadId: string }) {
return (
<div>
{toUIMessages(messages.results ?? []).map((message) => (
<div key={message.key}>{message.content}</div>
<div key={message.key}>{message.text}</div>
))}
</div>
);
@@ -321,7 +321,7 @@ It can work with any text, but is especially handy for streaming text.
import { useSmoothText } from "@convex-dev/agent/react";

// in the component
const [visibleText] = useSmoothText(message.content);
const [visibleText] = useSmoothText(message.text);
```

You can configure the initial characters per second. It will adapt over time to
@@ -335,7 +335,7 @@ streaming and non-streaming messages, do:
import { useSmoothText, type UIMessage } from "@convex-dev/agent/react";

function Message({ message }: { message: UIMessage }) {
const [visibleText] = useSmoothText(message.content, {
const [visibleText] = useSmoothText(message.text, {
startStreaming: message.status === "streaming",
});
return <div>{visibleText}</div>;
@@ -385,6 +385,44 @@ provide them as a prompt, as well as all generated messages.
You can save messages to the database manually using `saveMessage` or
`saveMessages`.

- You can pass a `prompt` or a full `message` (`CoreMessage` type)
- The `metadata` argument is optional and allows you to provide more details,
such as `sources`, `reasoningDetails`, `usage`, `warnings`, `error`, etc.

### Without the Agent class:

Note: if you aren't using an Agent class with a text embedding model
configured, you need to pass an `embedding` explicitly if you want to save one
at the same time.

```ts
import { saveMessage } from "@convex-dev/agent";

const { messageId } = await saveMessage(ctx, components.agent, {
threadId,
userId,
message: { role: "assistant", content: result },
metadata: [{ reasoning, usage, ... }], // See MessageWithMetadata type
agentName: "my-agent",
embedding: { vector: [0.1, 0.2, ...], model: "text-embedding-3-small" },
});
```

```ts
import { saveMessages } from "@convex-dev/agent";

const { messages } = await saveMessages(ctx, components.agent, {
threadId,
userId,
messages: [{ role, content }, ...],
metadata: [{ reasoning, usage, ... }, ...], // See MessageWithMetadata type
agentName: "my-agent",
embeddings: { model: "text-embedding-3-small", vectors: [[0.1...], ...] },
});
```


### Using the Agent class:

```ts
const { messageId } = await agent.saveMessage(ctx, {
threadId,
@@ -394,10 +432,9 @@ const { messageId } = await agent.saveMessage(ctx, {
});
```

You can pass a `prompt` or a full `message` (`CoreMessage` type)

```ts
const { lastMessageId, messageIds} = await agent.saveMessages(ctx, {
const { messages } = await agent.saveMessages(ctx, {
threadId, userId,
messages: [{ role, content }],
metadata: [{ reasoning, usage, ... }], // See MessageWithMetadata type
@@ -410,9 +447,6 @@ generated lazily if the message is used as a prompt. Or you can provide an
embedding upfront if it's available, or later explicitly generate them using
`agent.generateEmbeddings`.

The `metadata` argument is optional and allows you to provide more details, such
as `sources`, `reasoningDetails`, `usage`, `warnings`, `error`, etc.

## Configuring the storage of messages

Generally the defaults are fine, but if you want to pass in multiple messages
9 changes: 5 additions & 4 deletions docs/tools.mdx
@@ -30,8 +30,9 @@ tools in a context that is convenient.

## Using tools

The Agent component will automatically handle tool calls if you pass `maxSteps`
to the `generateText` or `streamText` functions.
The Agent component will automatically handle passing tool call results back in
and re-generating if you pass `stopWhen: stepCountIs(num)` where `num > 1` to
`generateText` or `streamText`.
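
To illustrate the semantics (this is a sketch, not the `ai` package's actual implementation), a `stepCountIs`-style condition simply stops the tool-calling loop once the number of completed steps reaches `n`:

```typescript
// Illustrative reimplementation of stepCountIs semantics: the condition
// returns true (stop) once `n` steps have completed.
type StepLike = { steps: unknown[] };

const stepCountIs =
  (n: number) =>
  ({ steps }: StepLike): boolean =>
    steps.length >= n;

const stop = stepCountIs(5);
console.log(stop({ steps: Array(2).fill({}) })); // false: keep looping on tool calls
console.log(stop({ steps: Array(5).fill({}) })); // true: stop generating
```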

The tool call and result will be stored as messages in the thread associated
with the source message. See [Messages](./messages.mdx) for more details.
@@ -90,8 +91,8 @@ By default, the context passed to a tool is a `ToolCtx` with:
- `userId` - the user ID associated with the call, if any
- `threadId` - the thread ID, if any
- `messageId` - the message ID of the prompt message passed to generate/stream.
- Everything in `ActionCtx`, such as `auth`, `storage`, `runQuery`, etc.
Note: in scheduled functions, workflows, etc, the auth user will be `null`.
- Everything in `ActionCtx`, such as `auth`, `storage`, `runQuery`, etc. Note:
in scheduled functions, workflows, etc, the auth user will be `null`.

To add more fields to the context, you can pass a custom context to the call,
such as `agent.generateText({ ...ctx, orgId: "123" })`.
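
A self-contained sketch of how that spread pattern delivers extra fields to a tool handler (the types and handler shape here are illustrative, not the real `@convex-dev/agent` API):

```typescript
// Hedged sketch: extra fields spread into the call-site ctx flow through
// to the tool handler, which can type and read them.
type ToolCtx = { userId?: string; threadId?: string };
type MyCtx = ToolCtx & { orgId: string };

const myTool = {
  // The handler sees the base ctx plus any custom fields the caller added.
  handler: (ctx: MyCtx, args: { q: string }) => `${ctx.orgId}:${args.q}`,
};

const baseCtx: ToolCtx = { userId: "u1", threadId: "t1" };
// Mirrors agent.generateText({ ...ctx, orgId: "123" }, ...)
const result = myTool.handler({ ...baseCtx, orgId: "123" }, { q: "hello" });
console.log(result); // "123:hello"
```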
2 changes: 1 addition & 1 deletion docs/workflows.mdx
@@ -129,7 +129,7 @@ For an action that generates text in a thread, similar to `thread.generateText`:

```ts
export const getSupport = supportAgent.asTextAction({
maxSteps: 10,
stopWhen: stepCountIs(10),
});
```
