Model Context Protocal with KerasHub Models #2166
base: master
Conversation
Summary of Changes
Hello @laxmareddyp, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces a new guide and associated code demonstrating the Model Context Protocol (MCP) using KerasHub models. The primary goal is to illustrate how AI models can move beyond simple text generation to actively interact with external tools and perform real-world actions. The guide provides a modular architecture for building such systems, including components for tool definition, registration, and client-side orchestration, exemplified through practical use cases like fetching weather, performing calculations, and conducting searches.
Highlights
- Introduction of Model Context Protocol (MCP) Guide: A new comprehensive guide has been added, demonstrating how to build a Model Context Protocol (MCP) system using KerasHub models. This guide illustrates how AI models can be empowered to interact with external tools and perform actions.
- Implementation of Core MCP Components: The PR introduces the core components of an MCP system: `MCPTool` (to define individual tools), `MCPToolRegistry` (to manage and register available tools), and `MCPClient` (to orchestrate the interaction between the AI model and the tools).
- Inclusion of Example Tools (Weather, Calculator, Search): Three practical example tools are provided: a `weather_tool` (demonstrating external data access), a `calculator_tool` (for mathematical computations), and a `search_tool` (for information retrieval). These examples showcase diverse capabilities that an MCP system can integrate.
- Integration with KerasHub Models for Intelligent Tool Calling: The guide demonstrates the integration of a KerasHub model, specifically Gemma3 Instruct 1B, to understand user requests and intelligently determine when and how to call the registered tools.
Code Review
This pull request introduces a new guide on using KerasHub models with a Model Context Protocol (MCP) for tool calling. The guide is well-structured and provides a good walkthrough with examples for weather, calculator, and search tools.
My review focuses on improving the security and correctness of the provided code examples. I've identified a critical security vulnerability in the `calculator_tool` due to the use of `eval()`, and a bug in the tool call parsing logic for the same tool. I've provided suggestions to address these issues in both the Python script and the Markdown guide.
Also, there's a small typo in the pull request title: "Protocal" should be "Protocol".
Overall, this is a great addition and with these changes, it will be an excellent and safe guide for users.
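The PR's `calculator_tool` itself is not shown in this thread, but as one way to address the `eval()` concern raised above, a minimal AST-walking evaluator can compute arithmetic without executing arbitrary code. This is a sketch, not the reviewer's suggested fix; the function name `safe_eval` and the supported operator set are assumptions.

```python
import ast
import operator

# Map AST operator node types to the corresponding arithmetic functions.
# Anything outside this whitelist is rejected, unlike eval().
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}


def safe_eval(expression: str) -> float:
    """Evaluate a basic arithmetic expression by walking its AST."""

    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        # Names, calls, attribute access, etc. all land here and are refused.
        raise ValueError(f"Unsupported expression: {expression!r}")

    return _eval(ast.parse(expression, mode="eval"))
```

With this, `safe_eval("__import__('os')")` raises `ValueError` instead of importing a module, while ordinary expressions like `"2 + 3 * 4"` still evaluate.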
/gemini review
Code Review
This pull request introduces a new guide on the Model Context Protocol (MCP) with KerasHub, provided in Jupyter Notebook, Python script, and Markdown formats. The guide is well-structured and the code is generally clear. My review includes suggestions to improve code robustness and maintainability, particularly by refactoring a hardcoded tool-parsing function to be more generic. I've also pointed out opportunities for code style consistency and clarity improvements in the documentation's example outputs.
# Look for TOOL_CALL blocks with strict JSON parsing
pattern = r"TOOL_CALL:\s*\n(\{[^}]*\})"
matches = re.findall(pattern, response, re.DOTALL)
The pattern matching here seems fragile, right?
Using JSON parsing might be better, and we should force the model output to adhere to a specific JSON format.
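One way to act on this suggestion is to let a real JSON decoder, rather than a brace-matching regex, delimit each tool call: `json.JSONDecoder.raw_decode` consumes exactly one JSON value, so nested braces in the parameters are handled correctly. This is a sketch under the assumption that the model is prompted to emit `TOOL_CALL:` followed by a JSON object; the function name `extract_tool_calls` is hypothetical and differs from the PR's own parser.

```python
import json
import re


def extract_tool_calls(response: str) -> list:
    """Parse TOOL_CALL blocks, keeping only entries that are valid JSON."""
    decoder = json.JSONDecoder()
    calls = []
    for match in re.finditer(r"TOOL_CALL:\s*", response):
        try:
            # raw_decode parses one complete JSON value starting at the
            # given index, so nested objects in "parameters" are fine.
            call, _ = decoder.raw_decode(response, match.end())
        except json.JSONDecodeError:
            continue  # malformed output from the model: skip it
        if isinstance(call, dict) and "tool" in call:
            calls.append(call)
    return calls
```

Anything after `TOOL_CALL:` that fails to parse as JSON is simply dropped, which is the strictness the comment asks for.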
tool_calls = self._extract_tool_calls(response)

if tool_calls:
    # Safety check: if multiple tool calls found, execute only the first one
Is this for easy demo purposes? If so, we should mention why we are only executing one call.
Returns:
    A formatted prompt for the AI model
"""
tools_list = self.tool_registry.get_tools_list()
This can include more information:
- Clear instructions on when to use tools.
- Examples of how to call tools (few-shot prompting).
- Instructions on how to respond if no tool is needed.
- A more explicit format for tool calls that the parser expects.
- Error handling instructions (e.g., "If a tool returns an error, try to use another tool or inform the user").
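Putting those suggestions together, a richer prompt builder might look like the sketch below. The function name, the `TOOL_CALL:` output format, and the few-shot example are assumptions for illustration, not the guide's actual prompt.

```python
def build_prompt(user_input: str, tools_list: str) -> str:
    """Build a prompt with explicit tool-call format, a few-shot example,
    and no-tool / error-handling guidance (illustrative sketch)."""
    return f"""You are an assistant with access to these tools:
{tools_list}

When a tool is needed, respond with exactly:
TOOL_CALL:
{{"tool": "<tool_name>", "parameters": {{...}}}}

Example:
User: What is 2 + 2?
TOOL_CALL:
{{"tool": "calculator", "parameters": {{"expression": "2 + 2"}}}}

If no tool is needed, answer the user directly in plain text.
If a tool returns an error, try another tool or explain the problem to the user.

User: {user_input}
"""
```

The few-shot example doubles as documentation of the exact format the parser expects, which directly addresses the fragility noted earlier in the thread.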
if __name__ == "__main__":
    main()
The simulated data is okay, but is there a real tool API we can actually show?
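One real, keyless option for the weather tool would be the Open-Meteo forecast endpoint, which requires no API key. This is a sketch only: the function names are hypothetical, the live fetch needs network access, and the guide may prefer to stay with simulated data to keep the example offline-friendly.

```python
import json
import urllib.parse
import urllib.request

# Open-Meteo is a real, free endpoint that needs no API key (assumption:
# acceptable for a guide; availability and rate limits are not guaranteed).
OPEN_METEO = "https://api.open-meteo.com/v1/forecast"


def build_weather_url(latitude: float, longitude: float) -> str:
    """Construct the request URL for current weather at a coordinate."""
    query = urllib.parse.urlencode(
        {"latitude": latitude, "longitude": longitude, "current_weather": "true"}
    )
    return f"{OPEN_METEO}?{query}"


def weather_tool(latitude: float, longitude: float) -> dict:
    """Fetch live weather data; requires network access."""
    url = build_weather_url(latitude, longitude)
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)
```

Keeping URL construction separate from the fetch makes the tool testable without hitting the network.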
self.model = model
self.tool_registry = tool_registry

def _build_prompt(self, user_input: str) -> str:
Also, should we save conversation history to provide more stateful responses?
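A history-keeping wrapper could look like the sketch below. The class name is hypothetical, and it assumes the model object exposes a `generate(prompt)` method, which may differ from the KerasHub API used in the guide.

```python
class StatefulMCPClient:
    """Keep conversation history so each prompt includes prior turns."""

    def __init__(self, model, max_turns: int = 10):
        self.model = model
        self.history = []  # list of (role, text) tuples
        self.max_turns = max_turns

    def _render_history(self) -> str:
        return "\n".join(f"{role}: {text}" for role, text in self.history)

    def chat(self, user_input: str) -> str:
        self.history.append(("User", user_input))
        prompt = self._render_history() + "\nAssistant:"
        reply = self.model.generate(prompt)
        self.history.append(("Assistant", reply))
        # Trim oldest turns so the prompt stays within the context window.
        self.history = self.history[-2 * self.max_turns:]
        return reply
```

Trimming by turn count is a crude stand-in for token-budget accounting, but it keeps the demo simple.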
# Look for TOOL_CALL blocks with strict JSON parsing
pattern = r"TOOL_CALL:\s*\n(\{[^}]*\})"
matches = re.findall(pattern, response, re.DOTALL)
for match in matches:
a more formal schema (like OpenAI's function calling) could provide clearer guidance, especially for argument types and validation.
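For illustration, an OpenAI-style function-calling schema for the calculator might look like this, paired with a minimal validator. The schema shape follows OpenAI's published tools format, but the validator is a hypothetical sketch, not a full JSON Schema implementation.

```python
# Tool description in the OpenAI function-calling style, giving the parser
# explicit argument names, types, and required fields to validate against.
CALCULATOR_SCHEMA = {
    "type": "function",
    "function": {
        "name": "calculator",
        "description": "Evaluate a basic arithmetic expression.",
        "parameters": {
            "type": "object",
            "properties": {
                "expression": {
                    "type": "string",
                    "description": "Arithmetic expression, e.g. '2 + 3 * 4'",
                }
            },
            "required": ["expression"],
        },
    },
}


def validate_call(schema: dict, arguments: dict) -> bool:
    """Check required arguments exist and match their declared JSON types."""
    params = schema["function"]["parameters"]
    type_map = {"string": str, "number": (int, float), "object": dict}
    for name in params.get("required", []):
        if name not in arguments:
            return False
    for name, value in arguments.items():
        spec = params["properties"].get(name)
        if spec is None:
            return False  # unknown argument
        expected = type_map.get(spec["type"])
        if expected and not isinstance(value, expected):
            return False  # wrong type
    return True
```

Validating against the schema before executing a tool gives the model's output a clear contract, which is exactly the guidance the comment is asking for.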
""" | ||
tools_list = self.tool_registry.get_tools_list() | ||
|
||
# Ultra-simple prompt - just the essentials |
Let's remove LLM vibes like "ultra" etc. here and everywhere.
Refer to the working Colab gist.