@@ -38,38 +38,33 @@ docker run -p 8080:8080 documentation-agent
 - `GET /health` - Health check endpoint
 - `POST /a2a` - A2A protocol endpoint
 
-## Available Tools
-- **resolve_library_id** - Resolves library by its id
-- **get_library_docs** - Get the docs for the specific library
+## Available Skills
+
+| Skill | Description | Parameters |
+|-------|-------------|------------|
+| `resolve_library_id` | Resolves a library by its ID | id |
+| `get_library_docs` | Gets the documentation for the specified library | library |
 
 ## Configuration
 
 Configure the agent via environment variables:
 
-### Core Application Settings
-
-- `ENVIRONMENT` - Deployment environment
-
-### A2A Agent Configuration
-
-#### Server Configuration
-
-- `A2A_SERVER_PORT` - Server port (default: `8080`)
-- `A2A_SERVER_READ_TIMEOUT` - Maximum duration for reading requests (default: `120s`)
-- `A2A_SERVER_WRITE_TIMEOUT` - Maximum duration for writing responses (default: `120s`)
-- `A2A_SERVER_IDLE_TIMEOUT` - Maximum time to wait for the next request (default: `120s`)
-- `A2A_SERVER_DISABLE_HEALTHCHECK_LOG` - Disable logging for health check requests (default: `true`)
-
-#### LLM Client Configuration
-
-- `A2A_AGENT_CLIENT_PROVIDER` - LLM provider: `openai`, `anthropic`, `groq`, `ollama`, `deepseek`, `cohere`, `cloudflare`
-- `A2A_AGENT_CLIENT_MODEL` - Model to use
-- `A2A_AGENT_CLIENT_API_KEY` - API key for LLM provider
-- `A2A_AGENT_CLIENT_BASE_URL` - Custom LLM API endpoint
-- `A2A_AGENT_CLIENT_TIMEOUT` - Timeout for LLM requests (default: `30s`)
-- `A2A_AGENT_CLIENT_MAX_RETRIES` - Maximum retries for LLM requests (default: `3`)
-- `A2A_AGENT_CLIENT_MAX_TOKENS` - Maximum tokens for LLM responses (default: `4096`)
-- `A2A_AGENT_CLIENT_TEMPERATURE` - Controls randomness of LLM output (default: `0.7`)
+| Category | Variable | Description | Default |
+|----------|----------|-------------|---------|
+| **Core Application** | `ENVIRONMENT` | Deployment environment | - |
+| **Server** | `A2A_SERVER_PORT` | Server port | `8080` |
+| **Server** | `A2A_SERVER_READ_TIMEOUT` | Maximum duration for reading requests | `120s` |
+| **Server** | `A2A_SERVER_WRITE_TIMEOUT` | Maximum duration for writing responses | `120s` |
+| **Server** | `A2A_SERVER_IDLE_TIMEOUT` | Maximum time to wait for the next request | `120s` |
+| **Server** | `A2A_SERVER_DISABLE_HEALTHCHECK_LOG` | Disable logging for health check requests | `true` |
+| **LLM Client** | `A2A_AGENT_CLIENT_PROVIDER` | LLM provider (`openai`, `anthropic`, `groq`, `ollama`, `deepseek`, `cohere`, `cloudflare`) | - |
+| **LLM Client** | `A2A_AGENT_CLIENT_MODEL` | Model to use | - |
+| **LLM Client** | `A2A_AGENT_CLIENT_API_KEY` | API key for LLM provider | - |
+| **LLM Client** | `A2A_AGENT_CLIENT_BASE_URL` | Custom LLM API endpoint | - |
+| **LLM Client** | `A2A_AGENT_CLIENT_TIMEOUT` | Timeout for LLM requests | `30s` |
+| **LLM Client** | `A2A_AGENT_CLIENT_MAX_RETRIES` | Maximum retries for LLM requests | `3` |
+| **LLM Client** | `A2A_AGENT_CLIENT_MAX_TOKENS` | Maximum tokens for LLM responses | `4096` |
+| **LLM Client** | `A2A_AGENT_CLIENT_TEMPERATURE` | Controls randomness of LLM output | `0.7` |
 
 ## Development
 
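As a sketch of how the configuration table and endpoints above fit together (the `documentation-agent` image name comes from the `docker run` line in the hunk header; the provider, model, and temperature values are illustrative, and the `/a2a` request assumes the agent speaks standard A2A JSON-RPC framing):

```shell
# Start the agent, supplying a few of the documented environment variables.
docker run -p 8080:8080 \
  -e ENVIRONMENT=production \
  -e A2A_AGENT_CLIENT_PROVIDER=openai \
  -e A2A_AGENT_CLIENT_MODEL=gpt-4o \
  -e A2A_AGENT_CLIENT_API_KEY="$OPENAI_API_KEY" \
  -e A2A_AGENT_CLIENT_TEMPERATURE=0.2 \
  documentation-agent

# Confirm the server is up on the configured port.
curl http://localhost:8080/health

# Send a message to the A2A protocol endpoint (payload shape assumed
# to follow the A2A spec's message/send method).
curl -s http://localhost:8080/a2a \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":"1","method":"message/send","params":{"message":{"role":"user","parts":[{"kind":"text","text":"Get the docs for react"}],"messageId":"m1"}}}'
```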