Add an interface for surfacing tool calls #69
Conversation
I like the idea of a […]. For completeness and generality, I think […]
…ponse. I don't think it was necessary in the first place, and it leads to inefficient use of memory
…opt-out, and better naming/docs
I'm kind of late to the party for this one, but it would be very handy to be able to just send stuff to the chat UI directly, instead of having two different places where it happens. Some issues I see with the current design:
For example, imagine a tool where you give it the name of a city, and the tool (1) looks up the coordinates for the city, and (2) looks up the weather for those coordinates. Suppose the conversation goes like this (the parts in square brackets wouldn't be shown -- I'm just using them here to annotate what's going on under the hood).
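A plausible version of that conversation (city and values made up for illustration) might be:

```
User: What's the weather in Paris?
[LLM requests tool call: get_current_temperature(city_name="Paris")]
Looking up coordinates for Paris...
Paris is at 48.86, 2.35
Looking up weather...
Current conditions in Paris: 18 degrees and sunny.
[Tool returns {"temperature": 18, "sun": "sunny"} to the LLM]
LLM: It's currently 18 degrees and sunny in Paris.
```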
With the current code in this PR, you can't display the middle two lines of the tool call phase. But here's how the rest would look:

```python
async def get_current_temperature(city_name: str):
    lat, lon = await find_coordinates(city_name)
    temp, sun = await find_weather(lat, lon)
    return ToolResult(
        {"temperature": temp, "sun": sun},
        user=f"Current conditions in {city_name}: {temp} degrees and {sun}\n\n",
    )

chat_model.register_tool(
    get_current_temperature,
    on_request=lambda request: f"Looking up coordinates for {request['city_name']}...\n\n",
)
```

It would be nice if the tool itself had access to the stream and could `yield` messages as it goes:

```python
async def get_current_temperature(city_name: str):
    yield f"Looking up coordinates for {city_name}...\n\n"
    lat, lon = await find_coordinates(city_name)
    yield f"{city_name} is at {lat}, {lon}\n\n"
    yield "Looking up weather...\n\n"
    temp, sun = await find_weather(lat, lon)
    yield f"Current conditions in {city_name}: {temp} degrees and {sun}.\n\n"
    # An async generator can't `return` a value, so the result is yielded last
    yield ToolResult({"temperature": temp, "sun": sun})

chat_model.register_tool(get_current_temperature)
```

Some other options instead of using `yield`: […]
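As for how the `yield` version could be consumed, here's a sketch, assuming yielded strings are forwarded to the chat stream and a yielded `ToolResult` is captured as the result for the LLM (`emit` is a stand-in for whatever writes to the stream):

```python
async def invoke_streaming_tool(tool, emit, **kwargs):
    result = None
    async for chunk in tool(**kwargs):
        if isinstance(chunk, ToolResult):
            result = chunk      # captured for the LLM, not displayed directly
        else:
            await emit(chunk)   # progress text surfaced to the user
    return result
```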
One final idea, which I kind of like: it could also be useful to let the user define other data to pass in. You could imagine a tool call where you want to provide the tool with some information, but you don't want to send that information to the LLM. For example, in something like Sidebot, suppose you have a data set and you want the LLM to use tool calls to run SQL queries on the data. You want to send the user request and schema to the LLM, but you don't want to send it the data.

```python
# Define the tool in a scope outside of the app's server code
def run_query(query: str, data: pd.DataFrame, chat: Chat):
    ...

def server(input, output):
    # Suppose the value of df can change over time
    df = pd.DataFrame(...)
    chat_model = ChatAnthropic(...)
    chat_model.register_tool(run_query, extra_args = {
        "data": lambda: df,
        "chat": lambda: chat_model,
    })
```

In this case, the LLM only sees and uses the `query` parameter. In the example above, I used a lambda for each extra value; a `dynamic()` wrapper could instead distinguish values that should be re-evaluated on each tool call from ones that are passed in as-is:

```python
chat_model.register_tool(run_query, extra_args = {
    "data": dynamic(lambda: df),
    "chat": chat_model,
})
```
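A `dynamic()` helper like that doesn't exist yet; a minimal sketch of how it might work (the `resolve_extra_args` function is hypothetical):

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class dynamic:
    """Marks an extra_args value to be re-evaluated on every tool call."""
    fn: Callable[[], Any]

def resolve_extra_args(extra_args: dict[str, Any]) -> dict[str, Any]:
    # At call time: dynamic values are recomputed, everything else passes through
    return {
        k: v.fn() if isinstance(v, dynamic) else v
        for k, v in extra_args.items()
    }
```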
Oh, and one more thing about being able to pass in arbitrary objects to the tool. Suppose you want to use the same tool in a Shiny app, and in a console app. Let's say the tool looks like this, where it takes an `emit` function for sending messages to the user:

```python
# Define the tool in a scope outside of the app's server code
async def run_query(query: str, data: pd.DataFrame, emit: Callable[[str], Awaitable[None]]):
    await emit(f"Starting query: {query}...")
    ...
    await emit("Running query...")
    ...
    await emit("Finished query...")
    return ...
```

The `emit` function is whatever the caller chooses to pass in. In a console app, you might just pass in the chat model's own `emit` method:

```python
df = pd.DataFrame(...)
chat_model = ChatAnthropic(...)
chat_model.register_tool(run_query, extra_args = {
    "data": dynamic(lambda: df),
    "emit": chat_model.emit,
})
```
But in Shiny, you might do something fancier with those messages. In this case, it might wrap them in `ui.tags.code()`:

```python
def server(input, output):
    # Suppose the value of df can change over time
    df = pd.DataFrame(...)

    # This is the Shiny chat object
    chat = ui.Chat(...)

    async def append_code_to_chat(txt: str):
        await chat.append_message_stream(ui.tags.code(txt))

    # The chatlas chat object
    chat_model = ChatAnthropic(...)
    chat_model.register_tool(run_query, extra_args = {
        "data": dynamic(lambda: df),
        "emit": append_code_to_chat,
    })
```

This would also allow the tool caller to define their own functions for many purposes, like displaying progress, or getting user input. If you have a long computation that needs to display progress, then that progress could be implemented one way at the console, another way in Shiny, and yet another in Streamlit:

```python
async def long_computation(x: int, progress: Callable[[int], Awaitable[None]]):
    await progress(0)
    ...
    await progress(33)
    ...
    await progress(66)
    ...
    await progress(100)
    return ...
```

Or say the tool needs to get user input to confirm something:

```python
async def ask_user_yes_no(
    msg: str,
    confirm: Callable[[str], Awaitable[bool]],
):
    user_response = await confirm(msg)
    return user_response
```

In a discussion with @JCheng about this, he pointed out that we can already do some of these things with currying:
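A sketch of what that curried setup might look like, assuming a `make_run_query` factory that closes over `data` and an `emit` callback (the `print_emit` helper here is made up for illustration):

```python
def make_run_query(data: pd.DataFrame, emit: Callable[[str], Awaitable[None]]):
    # `data` and `emit` are captured in the closure, so the LLM only sees `query`
    async def run_query(query: str):
        """Runs a SQL query on data"""
        await emit(f"Starting query: {query}...")
        ...
        await emit("Finished query...")
        return ...

    return run_query

# In the terminal, emit can simply print:
async def print_emit(txt: str):
    print(txt)

chat_model.register_tool(make_run_query(df, print_emit))

# In Shiny, emit can append to the chat UI:
chat_model.register_tool(make_run_query(df, append_code_to_chat))
```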
Both of the uses above, in Shiny and the terminal, output directly to their respective UIs. But if we want to emit to the chatlas object's own output stream, it might be something like this:

```python
## Emit to chat_model's output stream, at the chatlas level
chat_model.register_tool(make_run_query(df, chat_model.emit))
```

And finally, one other possibility that we discussed: if the tool function takes a parameter named `_chat`, it automatically receives the chatlas `Chat` object:
```python
# This version of run_query will emit to the chatlas object's output stream
async def run_query(query: str, _chat: Chat):
    """Runs a SQL query on data"""
    await _chat.emit(f"Starting query: {query}...")
    ...
    await _chat.emit("Running query...")
    ...
    await _chat.emit("Finished query...")
    return ...

chat_model.register_tool(run_query)
```
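One sketch of how `register_tool()` could detect that parameter, assuming it inspects the tool's signature at registration time (the detection logic here is hypothetical):

```python
import inspect

# Inside the Chat class (hypothetical):
def register_tool(self, func, **kwargs):
    params = inspect.signature(func).parameters
    if "_chat" in params:
        # Omit `_chat` from the schema sent to the LLM, and
        # inject this Chat instance when the tool is invoked
        ...
```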
Addresses #33
Related posit-dev/shinychat#31
This PR adds a few things to help with surfacing tool requests and results in response content.
- A `ToolResult()` class, which allows for specifying content (for `.stream()` or `.chat()`) to display when the tool is called.
- An `on_request` parameter for `.register_tool()`. When the tool is requested, this callback executes, and the result is yielded to the user.
- A `Chat.on_tool_request()` method for registering a default tool request handler.

Here is a basic Shiny example:
tool-call.mp4
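For reference, a rough sketch of what such an app might look like (Shiny Express on the UI side; the `ToolResult`/`on_request` pieces are the API proposed in this PR, and `find_weather` is a stand-in for a real lookup):

```python
from chatlas import ChatAnthropic, ToolResult
from shiny.express import ui

chat_model = ChatAnthropic()

async def get_current_temperature(city_name: str):
    temp, sun = await find_weather(city_name)  # stand-in for a real weather lookup
    return ToolResult(
        {"temperature": temp, "sun": sun},
        user=f"Current conditions in {city_name}: {temp} degrees and {sun}\n\n",
    )

chat_model.register_tool(
    get_current_temperature,
    on_request=lambda request: f"Looking up weather for {request['city_name']}...\n\n",
)

chat = ui.Chat(id="chat")
chat.ui()

@chat.on_user_submit
async def _():
    response = chat_model.stream(chat.user_input())
    await chat.append_message_stream(response)
```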
TODO