
Commit 3705818

chore: resolve conflict
2 parents 58d82a7 + 6e9fd19 commit 3705818

File tree

8 files changed, +469 -268 lines changed

CLAUDE.md

Lines changed: 161 additions & 0 deletions
# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Common Development Commands

### Build and Setup
```bash
# Initial setup - installs all dependencies and configures pre-commit hooks
make build

# Configure OpenHands (LLM settings, workspace directory)
make setup-config

# Start PostgreSQL database (required before running)
[ ! -f .env ] && cp .env.example .env
docker-compose up -d postgres
```

### Running the Application
```bash
# Run both frontend and backend (recommended)
make run

# Run backend only
make start-backend

# Run frontend only
make start-frontend

# Run in Docker
make docker-run

# Development in Docker container
make docker-dev
```

### Testing
```bash
# Run all frontend tests
make test

# Run Python unit tests
poetry run pytest ./tests/unit/test_*.py

# Run specific test file
poetry run pytest tests/unit/test_file_name.py -xvs

# Run integration tests
poetry run pytest ./tests/integration/

# Run evaluation benchmarks
cd evaluation && poetry run python -m pytest benchmarks/
```

### Linting and Code Quality
```bash
# Run all linters
make lint

# Backend linting (Python)
make lint-backend
poetry run ruff check --fix openhands/ evaluation/
poetry run mypy openhands/ --config-file dev_config/python/mypy.ini

# Frontend linting (TypeScript/React)
make lint-frontend
cd frontend && npm run lint

# Pre-commit hooks (runs automatically on commit)
poetry run pre-commit run --all-files
```

### Development Utilities
```bash
# Clean caches
make clean

# Format Python code
poetry run ruff format openhands/ evaluation/

# Add Python dependency
poetry add <package-name>

# Add dev dependency
poetry add --group dev <package-name>

# Update dependencies
poetry update

# Frontend dependency management
cd frontend && npm install <package-name>
```

## High-Level Architecture

### Core Components

**OpenHands** is an AI-powered software development agent platform with these major components:

1. **Backend Server** (`/openhands/server/`)
   - FastAPI-based REST API and WebSocket server
   - Handles agent lifecycle, conversation management, and file operations
   - Uses PostgreSQL for persistence and Redis for caching
   - Real-time communication via Socket.IO

2. **Frontend** (`/frontend/`)
   - React + TypeScript SPA with Redux state management
   - Provides IDE-like interface with terminal, file browser, and code editor
   - WebSocket client for real-time agent interactions

3. **Agent System** (`/openhands/agenthub/`)
   - Multiple agent implementations (CodeActAgent, BrowsingAgent, etc.)
   - Each agent has different capabilities and approaches to problem-solving
   - Agents interact with runtime environments to execute actions

4. **Runtime Environment** (`/openhands/runtime/`)
   - Sandboxed execution environments (Docker, E2B, Modal, Local)
   - Provides secure isolation for code execution
   - Supports browser automation, file operations, and command execution

5. **Controller** (`/openhands/controller/`)
   - Orchestrates agent-runtime interactions (see the sketch below)
   - Manages conversation state and action execution
   - Handles agent delegation and error recovery

6. **Evaluation Framework** (`/evaluation/`)
   - Comprehensive benchmarking system
   - Supports SWE-bench, HumanEvalFix, and other coding benchmarks
   - Used for measuring agent performance

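
The controller's role can be pictured as a loop that alternates agent decisions and sandboxed executions. This is a minimal sketch; the `Agent`, `Runtime`, and `run_controller` names and signatures are illustrative assumptions, not the repository's actual classes.

```python
# Simplified sketch of how the controller mediates between an agent and a
# runtime. Class and method names are hypothetical, not OpenHands' actual API.


class Agent:
    def step(self, events: list[dict]) -> dict:
        """Pick the next action based on the events seen so far."""
        return {"action": "run", "command": "echo hello"}


class Runtime:
    def execute(self, action: dict) -> dict:
        """Execute the action in a sandbox and return an observation."""
        return {"observation": "run_output", "content": "hello\n"}


def run_controller(agent: Agent, runtime: Runtime, max_steps: int = 3) -> list[dict]:
    """Alternate agent decisions and runtime executions, recording every event."""
    events: list[dict] = []
    for _ in range(max_steps):
        action = agent.step(events)
        events.append(action)
        observation = runtime.execute(action)
        events.append(observation)
    return events


print(len(run_controller(Agent(), Runtime())))  # 6 events: 3 actions + 3 observations
```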
### Key Concepts

- **Actions**: Commands agents can execute (RunIPythonAction, FileWriteAction, BrowseInteractiveAction, etc.)
- **Observations**: Results from action execution returned to agents
- **Events**: All actions and observations are events in the event stream
- **Microagents**: Specialized prompt templates for specific tasks (in `/microagents/`)
- **MCP (Model Context Protocol)**: Tool integration system for extending agent capabilities

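
A rough sketch of this data model, using an action name from the list above; the field names, the `EventStream` class, and its `add` method are illustrative assumptions rather than the actual OpenHands event classes.

```python
from dataclasses import dataclass, field

# Hedged sketch of the action/observation/event model; the field names and
# the EventStream helper are illustrative, not the real OpenHands definitions.


@dataclass
class FileWriteAction:
    """An Action: a command the agent asks the runtime to execute."""
    path: str
    content: str


@dataclass
class FileWriteObservation:
    """An Observation: the result of executing an action, returned to the agent."""
    path: str
    success: bool


@dataclass
class EventStream:
    """Events: every action and observation, appended in order."""
    events: list = field(default_factory=list)

    def add(self, event) -> None:
        self.events.append(event)


stream = EventStream()
stream.add(FileWriteAction(path="hello.py", content="print('hi')\n"))
stream.add(FileWriteObservation(path="hello.py", success=True))
print(len(stream.events))  # 2
```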
### Configuration

- Main config: `config.toml` (created from `config.template.toml`)
- Environment variables: `.env` (created from `.env.example`)
- LLM configuration is supported via litellm (OpenAI, Anthropic, Google, local models, etc.)

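
Since `config.toml` is plain TOML, a script can read it with the standard-library `tomllib`. The `llm` table and `model` key below are placeholders; the actual schema is defined by `config.template.toml`.

```python
import tomllib  # standard library on Python 3.11+

# Load the main configuration file. The "llm" section and "model" key are
# placeholder names; check config.template.toml for the real schema.
with open("config.toml", "rb") as f:
    config = tomllib.load(f)

llm_settings = config.get("llm", {})
print(llm_settings.get("model", "<not set>"))
```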
### Important Development Notes

- Python 3.12 required (use Poetry for dependency management)
- Node.js 22+ required for frontend
- Docker required for runtime sandboxing
- Development mode: Set `RUN_MODE=DEV` to bypass auth checks
- Pre-commit hooks enforce code quality standards
- WebSocket connection handles real-time agent-user communication
- File operations are restricted to the configured workspace directory

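
The workspace restriction boils down to a path-containment check along these lines. This is a simplified illustration, not the actual enforcement code, and `is_inside_workspace` is a made-up helper.

```python
from pathlib import Path


def is_inside_workspace(workspace: str, target: str) -> bool:
    """Return True if target resolves to a path inside the workspace root.

    Simplified illustration of the workspace restriction; the real checks
    live in the runtime and file-operation code.
    """
    workspace_root = Path(workspace).resolve()
    candidate = Path(target).resolve()
    return candidate.is_relative_to(workspace_root)


print(is_inside_workspace("/workspace", "/workspace/src/app.py"))  # True
print(is_inside_workspace("/workspace", "/etc/passwd"))            # False
```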
### Testing Strategy

- Unit tests: Test individual components in isolation
- Integration tests: Test agent capabilities end-to-end
- Evaluation benchmarks: Measure performance on standard coding tasks
- All new features should include appropriate test coverage

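
Unit tests follow the standard pytest layout under `tests/unit/test_*.py`. A minimal example of that style, where `normalize_path` is a made-up function standing in for a real component:

```python
# tests/unit/test_example.py -- minimal pytest-style unit test.
# `normalize_path` is a hypothetical function used purely for illustration.


def normalize_path(path: str) -> str:
    """Collapse duplicate slashes and drop any trailing slash."""
    collapsed = "/".join(part for part in path.split("/") if part)
    return "/" + collapsed if path.startswith("/") else collapsed


def test_normalize_path_drops_trailing_slash():
    assert normalize_path("/workspace/src/") == "/workspace/src"


def test_normalize_path_collapses_duplicate_slashes():
    assert normalize_path("workspace//src") == "workspace/src"
```

A single file of this kind runs with `poetry run pytest tests/unit/test_example.py -xvs`, matching the commands listed earlier.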
