Replies: 1 comment
Response from ADK Answering Agent (experimental, answer may be inaccurate)

TLDR: For your production Electron app, the most secure approach is to not store the LLM API key on the client side at all. Instead, create a two-part agent system: a lightweight local agent bundled with your app for filesystem access, and a remote agent deployed on a secure server (such as Google Cloud Run) that holds the API key and communicates with the LLM. The local and remote agents can communicate using ADK's built-in Agent-to-Agent (A2A) protocol.

Hello! That's an excellent question; securing API keys in a distributed application is a critical step for production deployment. Your intuition is correct: bundling the key in a PyInstaller executable or reading it from environment variables on the client machine is not secure. Based on ADK's design and security best practices, we recommend a client-server architecture that splits your agent's responsibilities:

1. The Local Agent (Client-Side)
This agent is bundled with your Electron application (e.g., as the PyInstaller executable you mentioned). Its responsibilities are limited to tasks that must happen on the local machine:
- running the local MCP servers (such as the filesystem server) next to the user's files,
- handling the WebSocket connection to your Electron UI, and
- forwarding anything that needs the LLM to the remote agent.
2. The Remote Agent (Server-Side)
This agent is deployed to a secure, controlled environment that you manage, such as a cloud service. Its responsibilities are:
- holding the LLM API key, which is supplied by the server's environment (or a secret manager) and never ships to clients,
- making the actual calls to the LLM, and
- exposing an endpoint that the local agent can call over A2A (see the sketch just after this list).
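As a minimal sketch of the server side (assuming the Python ADK; the agent name, model id, and file path below are placeholders, and ADK picks up a Gemini API key from the `GOOGLE_API_KEY` environment variable of the machine it runs on):

```python
# remote_agent/agent.py -- runs only on your server (e.g., Cloud Run);
# it is never bundled into the Electron app or the PyInstaller executable.
import os

from google.adk.agents import LlmAgent

# Fail fast if the key was not injected into this server environment.
if not os.environ.get("GOOGLE_API_KEY"):
    raise RuntimeError("GOOGLE_API_KEY must be set in the server environment.")

root_agent = LlmAgent(
    name="remote_llm_agent",      # placeholder name
    model="gemini-2.0-flash",     # placeholder model id
    instruction=(
        "Answer requests forwarded by the desktop app's local agent. "
        "Never reveal credentials or server configuration."
    ),
)
```

Because the key lives only in the server process's environment, rotating it never requires shipping a new client build.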
How They Communicate: Agent-to-Agent (A2A)
ADK provides a built-in protocol for agents to communicate with each other, called Agent-to-Agent (A2A). You can expose your remote agent as an A2A service. [2, 3] The local agent can then call the remote agent as if it were just another tool. This keeps the powerful credentials isolated on your secure server. You can also add authentication to the A2A connection itself to ensure that only your application can access the remote agent. [4, 5]

Recommended Deployment
A great option for deploying the remote agent is Google Cloud Run. It's a serverless platform that securely manages environment variables for your secrets (like an API key) or can be assigned a service account, which is the recommended best practice for Google Cloud services. [6]

This architecture effectively solves the problem: your sensitive API key never leaves your secure server environment, and your Electron application can still leverage the power of the LLM while safely interacting with the local filesystem.

[1] https://google.github.io/adk-docs/agents/models/
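To make the client side concrete, here is a minimal sketch of the local agent's building blocks. It assumes the Python ADK; the names `RemoteA2aAgent`, `AGENT_CARD_WELL_KNOWN_PATH`, `MCPToolset`, and `StdioServerParameters` follow the ADK docs but may differ between releases, and the Cloud Run URL and allowed folder are invented for illustration:

```python
# local_agent.py -- bundled with the Electron app (e.g., inside the
# PyInstaller executable). It holds NO API key; anything that needs the
# LLM is delegated to the remote A2A agent.
import os

from google.adk.agents.remote_a2a_agent import (
    AGENT_CARD_WELL_KNOWN_PATH,
    RemoteA2aAgent,
)
from google.adk.tools.mcp_tool.mcp_toolset import MCPToolset, StdioServerParameters

# Local filesystem MCP server: runs on the user's machine, next to their files.
filesystem_tools = MCPToolset(
    connection_params=StdioServerParameters(
        command="npx",
        args=[
            "-y",
            "@modelcontextprotocol/server-filesystem",
            os.path.expanduser("~/Documents"),  # placeholder allowed folder
        ],
    ),
)

# Reference to the server-hosted agent that owns the API key.
remote_llm = RemoteA2aAgent(
    name="remote_llm_agent",
    description="Server-hosted agent that holds the API key and calls the LLM.",
    # Hypothetical Cloud Run URL; the agent-card path depends on how the
    # remote agent is served.
    agent_card="https://your-service-xyz.a.run.app" + AGENT_CARD_WELL_KNOWN_PATH,
)

# filesystem_tools and remote_llm are then composed into the local agent that
# your WebSocket bridge drives; that composition is omitted from this sketch.
```

On the server, the remote agent can be exposed with ADK's A2A utilities (depending on your ADK version, e.g. wrapping it with `to_a2a()` and serving it with uvicorn, or running `adk api_server --a2a`) and deployed with something like `gcloud run deploy adk-remote --source . --set-env-vars GOOGLE_API_KEY=...`, or better, with `--set-secrets` so the key is pulled from Secret Manager rather than stored in plain configuration.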
-
I am building an Electron app that talks to an ADK agent over a WebSocket connection. This ADK agent requires access to local MCP servers, such as the filesystem server. How do I deploy this to production while managing my API key securely? Should I store the API key on a remote server and inject it at runtime?
I plan on bundling the Python subprocess that runs the ADK agent as a PyInstaller executable. Storing the API key in environment variables is also out of the question, since it could be reverse-engineered or simply extracted from our Electron app.