An MCP (Model Context Protocol) server that standardizes and binds specific patterns for development tools, enabling Claude Code to generate code more efficiently with fewer errors and better autocorrection capabilities.
Alpha - This project is in early development and actively evolving.
Major Milestone: Complete Language Support - We now have comprehensive support for both Go (13 tools) and Node.js/TypeScript (14 tools), making this a powerful DevTools server for modern development workflows. With intelligent caching, AI-powered suggestions, and zero-configuration onboarding, we're ready for the 0.0.1 release!
- Full Documentation - Complete guide with examples and tutorials
- Quick Start - Get started in 5 minutes
- Tools Reference - All 40+ available tools
- Contributing Guidelines - How to contribute to this project
- Code of Conduct - Community standards and expectations
- Security Policy - How to report security vulnerabilities
- Caching System - Intelligent caching for 3-5x performance improvements
- API Documentation - TypeDoc-generated API documentation (run `npm run docs:api`)
This MCP server creates a standardized interface between development tools and AI assistants like Claude Code. By establishing consistent patterns and best practices, it helps:
- Reduce code generation errors
- Enable better autocorrection of common issues
- Standardize development workflows
- Improve efficiency when working with Claude Code
The onboarding wizard automatically detects your project type and generates optimal MCP DevTools configuration with zero user input required.
Quick Start:

```bash
# Run complete onboarding (auto-detects everything)
mcp-devtools onboarding_wizard

# Preview changes without writing files
mcp-devtools onboarding_wizard --dry-run true

# Detect project type only
mcp-devtools detect_project
```

Available Tools:
- `onboarding_wizard` - Complete automated setup workflow
- Detects project type (Node.js, Python, Go, Rust, Java, .NET, Mixed)
- Identifies framework (React, Express, Django, Gin, etc.)
- Discovers build system (Make, npm, go, cargo, etc.)
- Generates `.mcp-devtools.json` configuration
- Verifies tool availability
- Creates backup of existing config
- Provides actionable recommendations
- `detect_project` - Analyze project characteristics
- Returns comprehensive project profile
- Lists detected configuration files
- Identifies linting tools and test frameworks
- Shows Make targets if available
- `generate_config` - Preview configuration without writing
- Generates configuration based on detection
- Validates against schema
- Shows warnings and errors
- `validate_setup` - Validate existing configuration
- Checks command availability
- Verifies tool installation
- Validates configuration schema
- Provides health score (0-100)
- Lists errors, warnings, and recommendations
- `rollback_setup` - Restore previous configuration
- Rollback from automatic backup
- Backups stored in `.mcp-devtools-backups/`
Example Output:

```markdown
## Onboarding Wizard Results

**Status:** ✅ Success
**Duration:** 2847ms
**Configuration:** /path/to/project/.mcp-devtools.json
**Backup:** /path/to/project/.mcp-devtools-backups/2025-11-04T10-30-00.json

### ⚠️ Skipped Tools (2)

- eslint
- markdownlint-cli

### 💡 Recommendations

#### High Priority

- **Install missing required tools** (tool)
  Install eslint and markdownlint-cli for complete linting support

### Validation

**Score:** 95/100
**Errors:** 0
**Warnings:** 2
```
Configuration Options:

| Parameter | Type | Default | Description |
|---|---|---|---|
| `directory` | string | cwd | Working directory to analyze |
| `interactive` | boolean | false | Enable interactive prompts (planned) |
| `autoInstall` | boolean | false | Automatically install missing tools (planned) |
| `generateConfig` | boolean | true | Generate .mcp-devtools.json file |
| `validateSetup` | boolean | true | Run validation after setup |
| `backupExisting` | boolean | true | Backup existing config before overwriting |
| `dryRun` | boolean | false | Preview changes without writing files |
| `skipToolVerification` | boolean | false | Skip tool installation checks |
Safety Features:

- ✅ Automatic Backups - Existing configs backed up before changes
- ✅ Rollback Support - Restore previous config anytime
- ✅ Dry-Run Mode - Preview all changes before applying
- ✅ Path Validation - Prevents path traversal attacks
- ✅ Input Sanitization - All inputs validated and sanitized
- ✅ Non-Destructive - Never deletes files, only creates/updates
- make_lint - Run `make lint` with optional directory and target specification
- make_test - Run `make test` with optional test patterns/targets
- make_depend - Run `make depend` or equivalent dependency installation
- make_build - Run `make build` or `make all`
- make_clean - Run `make clean`
Core Tools:
- go_test - Run Go tests with coverage and race detection
- go_build - Build Go packages with cross-compilation, custom ldflags, and build tags
- go_fmt - Format Go code using gofmt
- go_lint - Lint Go code using golangci-lint with comprehensive configuration
- go_vet - Examine Go source code for suspicious constructs
- go_mod_tidy - Tidy Go module dependencies
- go_mod_download - Download Go module dependencies
Advanced Features:
- go_benchmark - Run Go benchmarks with memory profiling and CPU scaling
- go_generate - Execute code generation directives
- go_work - Manage Go workspaces (go.work files)
- go_vulncheck - Scan for known vulnerabilities using govulncheck
- staticcheck - Enhanced static analysis
- go_project_info - Comprehensive Go project analysis and detection
- `nodejs_project_info` - Comprehensive Node.js project analysis with smart caching
- Auto-detects package manager (npm, yarn, pnpm, bun)
- Framework detection (React, Vue, Angular, Next.js, NestJS, Express, Fastify)
- Test framework detection (Jest, Vitest, Mocha)
- Build tool detection (Vite, Webpack, Rollup, esbuild, tsup)
- 5min cache TTL for fast repeated queries
- `nodejs_test` - Run tests with Jest, Vitest, or Mocha
- Auto-detects test framework from package.json
- Coverage reporting support
- Watch mode for development
- Framework-specific coverage extraction
- `nodejs_lint` - ESLint integration with auto-fix
- Auto-fix issues with the `--fix` flag
- Custom output formats (stylish, json, compact)
- File pattern filtering
- Integration with existing ESLint configs
- `nodejs_format` - Prettier code formatting
- Check mode for CI/CD validation
- Write mode for applying changes
- Custom file patterns support
- Respects existing Prettier configuration
- `nodejs_check_types` - TypeScript type checking
- Uses tsc for strict type validation
- Custom tsconfig.json support
- Incremental compilation mode
- No-emit mode for type-only checks
- `nodejs_install_deps` - Dependency management
- Auto-detects package manager from lockfiles
- Production-only installation mode
- Frozen lockfile support (for CI/CD)
- Timeout configuration (default: 10min)
- `nodejs_version` - Version detection with 1hr caching
- Check node, npm, yarn, pnpm, bun versions
- Single tool or all tools at once
- Gracefully handles missing tools
- Uses commandAvailability cache namespace
- `nodejs_security` - Security vulnerability scanning
- Run npm/yarn/pnpm/bun audit
- Auto-fix vulnerabilities with the `--fix` flag
- Production-only dependency checks
- JSON output for CI/CD integration
- `nodejs_build` - Build orchestration
- Run build scripts with any package manager
- Production and watch mode support
- Configurable timeout (default: 10min)
- Pass-through arguments to build tools
- `nodejs_scripts` - Script management with caching
- List all available npm scripts
- Run scripts with additional arguments
- Uses cached project info (5min TTL)
- Helpful error messages for missing scripts
- `nodejs_benchmark` - Performance benchmarking
- Auto-detects benchmark framework (Vitest, benchmark.js, tinybench)
- Vitest bench integration with pattern support
- Fallback to npm run bench script
- Configurable timeout (default: 5min)
- `nodejs_update_deps` - Dependency updates
- Package manager-specific update commands (npm, yarn, pnpm, bun)
- Interactive mode for yarn/pnpm
- Latest version updates (ignore semver constraints)
- Specific package updates or all dependencies
- DevDependencies-only updates
- `nodejs_compatibility` - Compatibility checking with 2hr caching
- Check Node.js version against package.json engines field
- Validate current version meets requirements
- Detect Node.js 18+ only packages
- Dependency compatibility analysis
- Cached results for fast repeated checks
- `nodejs_profile` - Performance profiling
- Node.js built-in profiler integration (--cpu-prof, --heap-prof)
- CPU and heap profiling support
- Configurable profile duration
- Automatic output directory creation
- Chrome DevTools compatible profiles (.cpuprofile files)
- Suggestions for advanced profiling with clinic.js
- markdownlint - Run markdownlint on markdown files
- yamllint - Run yamllint on YAML files
- eslint - Run ESLint on JavaScript/TypeScript files
- lint_all - Run all available linters based on project type
- run_tests - Run tests using the detected test framework
- project_status - Get overall project health (lint + test summary)
- test_status - Get project test status and recommendations
- `actionlint` - Validate GitHub Actions workflow files for syntax errors and best practices
A comprehensive linter for GitHub Actions workflow files that helps catch errors before pushing to GitHub. Validates workflow syntax, action parameters, expression syntax, and shell scripts within run blocks.
Features:
- Validates GitHub Actions workflow YAML syntax
- Checks action parameters against official action schemas
- Validates GitHub Actions expressions (`${{ }}` syntax)
- Integrates with shellcheck for validating shell scripts in `run:` blocks
- Supports pyflakes for Python script validation
- Multiple output formats: default (human-readable), JSON, and SARIF
- Configurable ignore patterns for specific rules
- Detects common workflow issues (missing jobs, invalid triggers, etc.)
Parameters:
- `directory` - Working directory containing workflows (default: project root)
- `files` - Specific workflow files or glob patterns (default: `.github/workflows/*.{yml,yaml}`)
- `format` - Output format: `default`, `json`, or `sarif`
- `shellcheck` - Enable shellcheck integration (default: true)
- `pyflakes` - Enable pyflakes for Python (default: false)
- `verbose` - Enable verbose output
- `ignore` - Array of rule patterns to ignore
- `timeout` - Command timeout in milliseconds (default: 60000)
Common Use Cases:
- Pre-commit validation of workflow changes
- CI/CD integration to catch workflow errors
- Debugging workflow failures due to syntax issues
- Ensuring workflows follow GitHub Actions best practices
Example Output:
```text
.github/workflows/ci.yml:25:15: property "timeout" not defined in action 'actions/checkout@v4' [action]
.github/workflows/ci.yml:42:9: shellcheck reported issue SC2086: Double quote to prevent globbing [shellcheck]
```
- `jq_query` - Process JSON data using jq filter syntax without requiring approval

Use this instead of `Bash(jq ...)` for all JSON processing. This tool provides the full power of jq for JSON manipulation without requiring user approval for each query, making it perfect for parsing API responses, extracting fields, filtering arrays, and transforming data structures.

Why Use jq_query:
- No Approval Required - Runs without user confirmation, enabling seamless AI workflows
- Faster Development - Eliminates repetitive approval dialogs for JSON operations
- Better Error Handling - Clear, actionable error messages for invalid filters or JSON
- Input Flexibility - Accepts both JSON strings and already-parsed objects/arrays
- Safe Operation - jq only processes data, no code execution risk
Parameters:
- `input` - JSON string or already-parsed object/array (required)
- `filter` - jq filter expression (required), e.g., `".[] | .name"`
- `compact` - Output compact JSON (default: false)
- `raw_output` - Output raw strings without JSON quotes (default: false)
- `sort_keys` - Sort object keys alphabetically (default: false)
Common Patterns:
```javascript
// Extract array of field values
jq_query({ input: data, filter: '.[] | .name' })

// Filter by condition
jq_query({ input: data, filter: '.[] | select(.status == "active")' })

// Transform structure
jq_query({ input: data, filter: '{name, id}' })

// Pretty-print minified JSON
jq_query({ input: minifiedJSON, filter: '.' })

// Get array length
jq_query({ input: data, filter: 'length' })

// Complex transformations
jq_query({ input: apiResponse, filter: '.data.users | map({name: .user_name, id: .user_id})' })
```
Features:
- Full jq syntax support (pipes, select, map, reduce, conditionals)
- Handles edge cases: null, boolean, numbers, unicode, deeply nested structures
- Automatic jq availability detection with installation instructions
- Clear error messages for invalid JSON or jq filter syntax
- Multiple output format options
Installation Requirements:
jq must be installed on the system. If not found, the tool provides installation instructions:
```bash
# macOS
brew install jq

# Ubuntu/Debian
apt-get install jq

# Fedora/RHEL
dnf install jq

# Windows
choco install jq
```
Real-World Examples:
```javascript
// Parse GitHub API response
jq_query({ input: milestones, filter: '.[] | select(.title | contains("2025-Q2")) | .number' })

// Extract specific fields from array
jq_query({ input: issues, filter: '[.[] | {title, number, state}]' })

// Count matching items
jq_query({ input: data, filter: '[.[] | select(.status == "open")] | length' })
```
- `code_review` - Automated code review analysis on Git changes
Analyzes Git diffs to identify potential issues in code changes including security vulnerabilities, performance concerns, and maintainability problems. Provides severity-based categorization and actionable feedback.
Features:
- Security analysis (hardcoded secrets, dangerous code execution)
- Performance analysis (nested loops, inefficient patterns)
- Maintainability analysis (code complexity, TODO comments, line length)
- Configurable focus areas
- File filtering (include/exclude test files)
- `generate_pr_message` - Generate PR messages from Git changes
Automatically generates conventional commit-formatted PR messages by analyzing commit history and changed files. Supports GitHub PR templates for consistent documentation.
Features:
- Analyzes commit history to determine type (feat, fix, etc.)
- Extracts scope from commit patterns
- Lists all changes with file statistics
- Supports conventional commit format
- Includes issue reference support
- Breaking changes section
- GitHub PR template integration - Automatically detects and uses templates from:
  - `.github/pull_request_template.md`
  - `.github/PULL_REQUEST_TEMPLATE.md`
  - `.github/PULL_REQUEST_TEMPLATE/pull_request_template.md`
  - `docs/pull_request_template.md`
  - `PULL_REQUEST_TEMPLATE.md`
- `analyze_command` - Execute a command and analyze results with AI-powered smart suggestions
Executes a command and provides intelligent, context-aware recommendations based on the execution result. Helps identify issues, suggests fixes, and provides workflow optimization tips.
Features:
- Automatic failure pattern recognition (15+ built-in patterns)
- Context-aware suggestions based on project type and language
- Security vulnerability detection (hardcoded secrets, SQL injection, etc.)
- Performance issue identification
- Workflow optimization recommendations
- Confidence scoring for suggestions
- Affected file extraction from error messages
Parameters:
- `command` - Command to execute and analyze (required)
- `directory` - Working directory for the command
- `timeout` - Command timeout in milliseconds
- `args` - Additional command arguments
- `context` - Optional context for better suggestions:
  - `tool` - Tool being used (e.g., "go test", "npm run")
  - `language` - Programming language
  - `projectType` - Project type
Example:
```json
{
  "command": "go test",
  "directory": "./src",
  "context": {
    "tool": "go test",
    "language": "Go"
  }
}
```
- `analyze_result` - Analyze already-executed command results
Post-mortem analysis of command execution results. Useful for analyzing failures from external tools or historical command runs.
Parameters:
- `command` - Command that was executed (required)
- `exitCode` - Exit code from execution (required)
- `stdout` - Standard output from command
- `stderr` - Standard error from command
- `duration` - Execution duration in milliseconds
- `context` - Optional context (same as analyze_command)
- `get_knowledge_base_stats` - Get statistics about the smart suggestions knowledge base
Returns information about available failure patterns and their categorization.
Parameters:
- `category` - Optional filter by category (security, performance, dependencies, etc.)
Knowledge Base Categories:
- Security - Hardcoded secrets, SQL injection, unsafe code patterns
- Performance - Nested loops, inefficient algorithms, memory issues
- Dependencies - Missing packages, version conflicts, module issues
- Build - Compilation errors, type mismatches, undefined references
- Test - Test failures, timeouts, race conditions
- Lint - Code style issues, formatting problems
- Configuration - Missing environment variables, config errors
- General - Runtime errors and other issues
Supported Languages & Tools:
- Go - Test failures, missing dependencies, race conditions, lint issues, build errors
- JavaScript/TypeScript - Module not found, type errors, ESLint issues
- Python - Import errors, syntax issues
- Cross-language - Security patterns, performance anti-patterns, configuration issues
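As a rough sketch of how this style of pattern-based analysis can work, the snippet below maps command stderr to categorized suggestions via regular expressions. The pattern table, suggestion text, and confidence values are illustrative assumptions, not the server's actual knowledge base.

```typescript
// Illustrative failure-pattern matcher; the real knowledge base has 15+ patterns.
interface FailurePattern {
  category: string;
  regex: RegExp;
  suggestion: string;
  confidence: number; // 0..1, how likely the suggestion applies
}

const PATTERNS: FailurePattern[] = [
  {
    category: 'dependencies',
    regex: /cannot find module '([^']+)'/i,
    suggestion: 'Install the missing package with your package manager.',
    confidence: 0.9,
  },
  {
    category: 'test',
    regex: /DATA RACE/,
    suggestion: 'Run `go test -race` locally and guard shared state with a mutex.',
    confidence: 0.85,
  },
];

// Return every pattern whose regex matches the captured stderr.
function analyzeStderr(stderr: string): FailurePattern[] {
  return PATTERNS.filter((p) => p.regex.test(stderr));
}
```

A matcher like this composes naturally with `analyze_result`: the tool receives `stderr` post-mortem and only needs the text to produce suggestions.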
- `ensure_newline` - Validate and fix POSIX newline compliance
Ensures text files end with a proper newline character, as required by POSIX standards. This addresses a common pain point where AI coding assistants frequently create or modify files without proper trailing newlines, causing linting failures and git diff noise.
Modes:
- `check` - Report files without trailing newlines (read-only, non-destructive)
- `fix` - Automatically add missing newlines to files (safe, preserves line ending style)
- `validate` - Exit with error if non-compliant files found (CI/CD mode)
Key Features:
- Pure Node.js implementation using Buffer operations (no shell commands like `tail` or `od`)
- Cross-platform compatibility (Windows, macOS, Linux)
- Smart line ending detection - automatically detects and preserves LF vs CRLF style
- Binary file detection and automatic skipping
- Configurable file size limits for safety
- Flexible glob pattern support for file selection
- Exclusion patterns for node_modules, build artifacts, etc.
Why This Matters:
- POSIX Compliance: Text files should end with a newline character per POSIX definition
- Linting: Many linters (ESLint, markdownlint, golangci-lint) enforce trailing newlines
- Git Diffs: Missing newlines create "No newline at end of file" warnings
- AI Assistants: Common issue when AI tools generate or modify files
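The underlying check is small; here is a minimal standalone sketch of Buffer-based trailing-newline detection and repair that preserves LF vs CRLF style. This is an illustration of the approach, not the tool's actual source.

```typescript
// True if the file content ends with '\n'; an empty file is treated as compliant.
function endsWithNewline(content: Buffer): boolean {
  if (content.length === 0) return true;
  return content[content.length - 1] === 0x0a; // '\n'
}

// Append a trailing newline if missing, matching the file's line-ending style:
// if any CRLF appears in the content, append "\r\n", otherwise plain "\n".
function fixTrailingNewline(content: Buffer): Buffer {
  if (endsWithNewline(content)) return content;
  const usesCrlf = content.includes(Buffer.from('\r\n'));
  return Buffer.concat([content, Buffer.from(usesCrlf ? '\r\n' : '\n')]);
}
```

Because the check only inspects the final byte (and the fix only appends), it is safe to run over large trees without rewriting compliant files.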
The `get_current_datetime` tool provides rich temporal context optimized for LLM awareness. This helps AI
assistants understand the current date and time with confidence, especially when the system date is near or
past the LLM's training cutoff.
Key Features:
- Human-Readable Format: Clear datetime string optimized for LLM consumption
- Calendar Context: Quarter, ISO week number, day of year
- Timezone Support: IANA timezone identifiers with DST detection
- Relative Calculations: Days/weeks remaining in year, quarter boundaries
- Zero Dependencies: Pure JavaScript Date/Intl APIs for fast synchronous operation
- Cross-Platform: Works on Windows, macOS, and Linux
Use Cases:
- Verify System Context: When LLMs doubt the date in environment variables
- Milestone Planning: "What quarter are we in? How many weeks until year-end?"
- Relative Time: "How many days until Q4 ends?"
- Timezone Awareness: Check time across multiple timezones for distributed teams
Example Usage:

```jsonc
// Get current datetime with full context
{
  "timezone": "America/Chicago"
}
```

Example Output:
```markdown
## Current Date & Time

**Tuesday, November 12, 2025 at 7:21 PM CST**

### Date Information

- **Year:** 2025
- **Quarter:** Q4 (October 1, 2025 - December 31, 2025)
- **Month:** November (11)
- **Day:** Tuesday, November 12
- **Day of Year:** 316 of 365
- **ISO Week:** 46

### Time Information

- **Time:** 19:21:00
- **Timezone:** America/Chicago (CST)
- **UTC Offset:** -06:00
- **DST Active:** No

### Relative Information

- **Days Remaining in Year:** 49
- **Weeks Remaining in Year:** 7
- **Days in Current Month:** 30

### Technical Details

- **ISO 8601:** 2025-11-12T19:21:00.000Z
- **Unix Timestamp:** 1762994460
```
Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `timezone` | string | System timezone | IANA timezone (e.g., 'America/New_York', 'UTC', 'Asia/Tokyo') |
| `include_calendar` | boolean | true | Include calendar information (quarter, week, etc.) |
Supported Timezones:
All IANA timezone identifiers are supported, including:
- `UTC` - Coordinated Universal Time
- `America/New_York` - US Eastern
- `America/Chicago` - US Central
- `America/Los_Angeles` - US Pacific
- `Europe/London` - UK
- `Europe/Paris` - Central European
- `Asia/Tokyo` - Japan Standard Time
- `Asia/Shanghai` - China Standard Time
- And 400+ more IANA timezones
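The calendar context can indeed be derived from the built-in `Date` API with zero dependencies; this is an illustrative sketch (the function name and return shape are assumptions, not the tool's API).

```typescript
// Derive quarter (1-4) and day-of-year (1-366) from a Date, using UTC fields
// so the result is independent of the host machine's local timezone.
function calendarContext(date: Date): { quarter: number; dayOfYear: number } {
  const quarter = Math.floor(date.getUTCMonth() / 3) + 1;
  const startOfYear = Date.UTC(date.getUTCFullYear(), 0, 1);
  // 86_400_000 ms per day; +1 so January 1 is day 1.
  const dayOfYear = Math.floor((date.getTime() - startOfYear) / 86_400_000) + 1;
  return { quarter, dayOfYear };
}
```

For November 12, 2025 this yields quarter 4 and day 316, matching the example output above.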
- `dotenv_environment` - Load and parse environment variables from .env files
Makes environment variables visible to AI assistants through MCP context, enabling better debugging and configuration assistance. Automatically masks sensitive values (passwords, tokens, API keys) while exposing configuration safely.
Features:
- Automatic masking of sensitive values (PASSWORD, SECRET, TOKEN, KEY, API_KEY, etc.)
- Support for custom mask patterns
- Load from any .env file (.env, .env.production, etc.)
- Optional inclusion of process.env variables
- Helpful warnings for missing NODE_ENV and common variables
- Security reminders about not committing .env files
Why This Matters:
- Context Awareness: AI can see what environment variables are configured
- Debugging: Helps identify missing or misconfigured environment variables
- Setup Assistance: AI can guide users through required configuration
- Security: Sensitive values are masked by default to prevent accidental exposure
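Key-based masking of this kind fits in a few lines; the sketch below mirrors the default patterns described above (PASSWORD, SECRET, TOKEN, KEY), but the pattern list and mask string are illustrative, not the server's exact behavior.

```typescript
// Illustrative default patterns; the tool's real list may differ.
const DEFAULT_MASK_PATTERNS = ['PASSWORD', 'SECRET', 'TOKEN', 'KEY', 'API_KEY'];

// Return a copy of the variables with sensitive values replaced.
// Keys stay visible so the AI knows what is configured, without seeing secrets.
function maskEnv(
  vars: Record<string, string>,
  patterns: string[] = DEFAULT_MASK_PATTERNS
): Record<string, string> {
  const masked: Record<string, string> = {};
  for (const [name, value] of Object.entries(vars)) {
    const sensitive = patterns.some((p) => name.toUpperCase().includes(p));
    masked[name] = sensitive ? '***MASKED***' : value;
  }
  return masked;
}
```

Matching on the variable name rather than the value means masking works even for short or empty secrets, at the cost of occasionally masking benign keys that happen to contain a pattern word.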
Security:

- Input sanitization to prevent command injection
- Allowlist of permitted commands and arguments
- Working directory validation (must be within project boundaries)
- Timeout protection for long-running commands
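A minimal sketch of the allowlist-plus-sanitization idea follows; the allowlist contents and the set of rejected shell metacharacters are illustrative assumptions, and the server's real checks are broader.

```typescript
// Hypothetical allowlist of executable commands.
const ALLOWED_COMMANDS = new Set(['make', 'npm', 'go', 'eslint', 'jq']);

// Throw if the command is not allowlisted or any argument contains
// shell metacharacters that could enable command injection.
function validateCommand(command: string, args: string[]): void {
  if (!ALLOWED_COMMANDS.has(command)) {
    throw new Error(`Command not allowed: ${command}`);
  }
  const unsafe = /[;&|`$<>\\]/;
  for (const arg of args) {
    if (unsafe.test(arg)) {
      throw new Error(`Unsafe argument rejected: ${arg}`);
    }
  }
}
```

Validating before spawning (and spawning without a shell) means a crafted argument like `install; rm -rf /` is rejected outright instead of being interpreted.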
Project Detection:

- Auto-detect project type (Node.js, Python, Go, Rust, Java, .NET)
- Locate Makefiles and configuration files
- Suggest relevant tools based on project structure
- Extract available make targets
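Marker-file detection can be sketched as a simple lookup over the files present in a directory; the file-to-type table below is an illustrative assumption, not the detector's actual rules.

```typescript
// Well-known marker files mapped to project types (illustrative subset).
const MARKERS: Array<[file: string, type: string]> = [
  ['package.json', 'nodejs'],
  ['go.mod', 'go'],
  ['Cargo.toml', 'rust'],
  ['pyproject.toml', 'python'],
  ['pom.xml', 'java'],
];

// Given a directory listing, return every matching project type.
// Multiple hits correspond to the "Mixed" project type mentioned above.
function detectProjectTypes(files: string[]): string[] {
  const present = new Set(files);
  return MARKERS.filter(([file]) => present.has(file)).map(([, type]) => type);
}
```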
- Node.js 20+
- TypeScript
- Go 1.24+ (for Go language support - PRIORITY)
- Make (for make-based commands)
- Go tools: `golangci-lint`, `staticcheck` (for enhanced Go support)
- Project-specific tools (eslint, markdownlint, yamllint, etc.)
- Clone the repository:

  ```bash
  git clone https://github.com/rshade/mcp-devtools-server.git
  cd mcp-devtools-server
  ```

- Install dependencies:

  ```bash
  npm install
  ```

- Build the project:

  ```bash
  npm run build
  ```

- Start the server:

  ```bash
  npm start
  ```

Most linting tools are installed automatically via npm. However, some tools require separate installation:
yamllint (Python-based YAML linter):

```bash
# macOS (via Homebrew)
brew install yamllint

# Linux (Ubuntu/Debian)
sudo apt-get install yamllint

# Linux (Fedora/RHEL)
sudo dnf install yamllint

# Any platform (via pip)
pip install yamllint

# Verify installation
yamllint --version
```

actionlint (GitHub Actions workflow validator):
```bash
# macOS (via Homebrew)
brew install actionlint

# Linux (download binary)
bash <(curl https://raw.githubusercontent.com/rhysd/actionlint/main/scripts/download-actionlint.bash)

# Or via go install
go install github.com/rhysd/actionlint/cmd/actionlint@latest

# Verify installation
actionlint --version
```

You can use either make commands or npm scripts (the Makefile is a thin wrapper around npm):
```bash
# View all available commands
make help

# Setup and build
make install         # Install dependencies
make build           # Build TypeScript
make install-mcp     # Install to Claude Desktop

# Development
make dev             # Run in development mode
make start           # Start production server

# Testing
make test            # Run tests
make test-watch      # Run tests in watch mode
make test-coverage   # Run tests with coverage

# Linting
make lint            # Run all linters
make lint-ts         # Run TypeScript linting
make lint-md         # Run Markdown linting
make lint-yaml       # Run YAML linting
make lint-commit     # Validate commit message format

# Documentation
make docs-api        # Generate API docs (TypeDoc)
make docs-dev        # Start docs dev server
make docs-build      # Build documentation
make docs-preview    # Preview built docs

# CI/CD
make check           # Run all linters and tests
make all             # Complete CI pipeline

# Or use npm scripts directly
npm run dev          # Run in development mode
npm run lint         # Run TypeScript linting
npm test             # Run tests
npm run clean        # Clean build artifacts
```

The MCP DevTools Server is available as Docker images for easy deployment and consistent environments across different systems.
```bash
# Pull the latest image
docker pull ghcr.io/rshade/mcp-devtools-server:latest

# Run with stdio (for MCP protocol)
docker run -i --rm ghcr.io/rshade/mcp-devtools-server:latest
```

Update your Claude Desktop configuration (`~/.claude/claude_desktop_config.json`):
```json
{
  "mcpServers": {
    "mcp-devtools-server": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-v",
        "/path/to/your/project:/workspace",
        "-w",
        "/workspace",
        "ghcr.io/rshade/mcp-devtools-server:latest"
      ]
    }
  }
}
```

Replace `/path/to/your/project` with your actual project directory.
For local development with hot-reload:
```bash
# Start development server
docker compose up mcp-devtools-dev

# Run tests
docker compose run --rm mcp-devtools-test

# Run linters
docker compose run --rm mcp-devtools-lint

# Production-like testing
docker compose up mcp-devtools
```

`docker-compose.yml` features:
- Hot-reload for source code changes
- Volume mounts for project access
- Separate services for dev, test, and lint
- Environment variable configuration
Build your own image with custom tools:
```dockerfile
# Extend the base image
FROM ghcr.io/rshade/mcp-devtools-server:latest

# Install additional tools
RUN apk add --no-cache \
    docker-cli \
    kubectl

# Copy custom configuration
COPY .mcp-devtools.json /app/
```

Build and run:
```bash
docker build -t my-mcp-devtools:latest .
docker run -i --rm my-mcp-devtools:latest
```

The project includes automated Docker builds via GitHub Actions:
- Automatic builds on push to main and tags
- Multi-platform support (linux/amd64, linux/arm64)
- Security scanning with Trivy
- Layer caching for fast builds
- Published to GitHub Container Registry (ghcr.io)
Available image tags:
- `latest` - Latest stable release
- `v1.2.3` - Specific version tags
- `main-abc123` - Branch-specific builds with commit SHA
- `dev` - Development builds (not published)
Control Docker behavior with environment variables:
```bash
# Set log level
docker run -i --rm \
  -e LOG_LEVEL=debug \
  ghcr.io/rshade/mcp-devtools-server:latest

# Set Node environment
docker run -i --rm \
  -e NODE_ENV=production \
  ghcr.io/rshade/mcp-devtools-server:latest
```

Mount your project directory to work with your code:
```bash
docker run -i --rm \
  -v "$(pwd):/workspace" \
  -w /workspace \
  ghcr.io/rshade/mcp-devtools-server:latest
```

- Ensure volume mount paths are correct
- Check file permissions in mounted directory
- Use the `--user` flag to match host user ID:
```bash
docker run -i --rm \
  --user $(id -u):$(id -g) \
  -v "$(pwd):/workspace" \
  ghcr.io/rshade/mcp-devtools-server:latest
```

- MCP protocol uses stdio - ensure the `-i` (interactive) flag is set
- Check logs with `docker logs <container_id>`
- Verify environment variables are set correctly
- Ensure `command` is `"docker"`, not `"docker run"`
- Check `args` array formatting in claude_desktop_config.json
- Verify the image is pulled: `docker pull ghcr.io/rshade/mcp-devtools-server:latest`
- Build the project first:

  ```bash
  npm run build
  ```

- Add to your Claude Desktop configuration file (`~/.claude/claude_desktop_config.json`):

  ```json
  {
    "mcpServers": {
      "mcp-devtools-server": {
        "command": "node",
        "args": ["/absolute/path/to/mcp-devtools-server/dist/index.js"],
        "env": {
          "LOG_LEVEL": "info"
        }
      },
      "context7": {
        "command": "npx",
        "args": ["-y", "@upstash/context7-mcp"]
      }
    }
  }
  ```

  Replace `/absolute/path/to/mcp-devtools-server` with your actual project path.

- Example configuration files:
  - See `examples/claude-desktop-config.json` for a complete example
  - The `.mcp.json` file in the project root is a template you can copy

- Restart Claude Desktop after updating the configuration.
Create a `.mcp-devtools.json` file in your project root:

```json
{
  "commands": {
    "lint": "make lint",
    "test": "make test",
    "build": "make build",
    "clean": "make clean"
  },
  "linters": ["eslint", "markdownlint", "yamllint"],
  "testRunner": "jest",
  "timeout": 300000
}
```

The MCP server automatically provides guidance to Claude via system prompt instructions (`src/instructions.md`).
These instructions help Claude:
- Auto-discover the 50+ available mcp-devtools tools
- Prefer MCP tools over built-in Bash commands for development tasks
- Use onboarding wizard proactively when no configuration exists
- Follow common workflows for linting, testing, PR preparation, and error analysis
Key behaviors enabled:
- When starting work, Claude checks for `.mcp-devtools.json` and offers to run `onboarding_wizard` if missing
- For linting, Claude uses `make_lint`, `eslint`, etc. instead of `Bash(make lint)`
- For error handling, Claude uses `analyze_command` for automatic failure analysis
- Claude runs `project_status` before starting work to understand available tooling
The instructions are token-efficient (< 100 lines) and focus on operational guidance rather than marketing content.
// Run make lint
await callTool('make_lint', {});
// Run make test with specific target
await callTool('make_test', { target: 'unit-tests' });
// Run all linters
await callTool('lint_all', { fix: true });
// Get project status
await callTool('project_status', {});// Run Go tests with coverage and race detection
await callTool('go_test', {
coverage: true,
race: true,
verbose: true
});
// Build Go application with specific tags
await callTool('go_build', {
tags: ["integration", "postgres"],
verbose: true
});
// Format Go code
await callTool('go_fmt', {
write: true,
simplify: true
});
// Lint Go code with custom config
await callTool('go_lint', {
config: ".golangci.yml",
fix: true
});
// Vet Go code for issues
await callTool('go_vet', { package: "./..." });
// Tidy Go modules
await callTool('go_mod_tidy', { verbose: true });
// Run benchmarks with memory profiling
await callTool('go_benchmark', {
benchmem: true,
benchtime: '10s',
cpu: [1, 2, 4]
});
// Execute code generation
await callTool('go_generate', {
run: 'mockgen',
verbose: true
});
// Cross-compile for different platforms
await callTool('go_build', {
goos: 'linux',
goarch: 'arm64',
ldflags: '-X main.version=1.0.0',
output: './bin/app-linux-arm64'
});
// Manage Go workspaces
await callTool('go_work', {
command: 'use',
modules: ['./moduleA', './moduleB']
});
// Scan for vulnerabilities
await callTool('go_vulncheck', {
mode: 'source',
json: true
});// Check all TypeScript and JavaScript files for missing newlines
await callTool('ensure_newline', {
patterns: ['src/**/*.ts', 'src/**/*.js'],
mode: 'check',
exclude: ['node_modules/**', 'dist/**']
});
// Fix all markdown files (automatically adds trailing newlines)
await callTool('ensure_newline', {
patterns: ['**/*.md'],
mode: 'fix',
exclude: ['node_modules/**']
});
// Validate in CI/CD pipeline (exits with error if non-compliant)
await callTool('ensure_newline', {
patterns: ['**/*'],
mode: 'validate',
exclude: ['node_modules/**', '.git/**', 'dist/**', '*.min.js'],
maxFileSizeMB: 5
});
// Check specific file types only
await callTool('ensure_newline', {
patterns: ['**/*'],
fileTypes: ['*.ts', '*.go', '*.md', '*.json'],
mode: 'check'
});
// Fix files after AI code generation
await callTool('ensure_newline', {
patterns: ['src/**/*.ts', 'test/**/*.ts'],
mode: 'fix',
skipBinary: true // default: true
});// Load default .env file with masking (default behavior)
await callTool('dotenv_environment', {});
// Load specific .env file
await callTool('dotenv_environment', {
file: '.env.production'
});
// Load without masking (for debugging - use carefully!)
await callTool('dotenv_environment', {
mask: false
});
// Load with custom mask patterns
await callTool('dotenv_environment', {
maskPatterns: ['CUSTOM_SECRET', 'INTERNAL']
});
// Include process.env variables
await callTool('dotenv_environment', {
includeProcessEnv: true
});
// Full control example
await callTool('dotenv_environment', {
file: '.env.staging',
directory: '/path/to/project',
mask: true,
maskPatterns: ['CUSTOM_SECRET'],
includeProcessEnv: false
});
// Run tests with coverage
await callTool('run_tests', {
coverage: true,
pattern: "*.test.js"
});
// Lint specific files
await callTool('markdownlint', {
files: ["README.md", "docs/*.md"],
fix: true
});
// Build with parallel jobs
await callTool('make_build', { parallel: 4 });
Add EOL validation to your GitHub Actions workflow:
name: Lint
on: [push, pull_request]
jobs:
validate-eol:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: '18'
- name: Install MCP DevTools Server
run: |
git clone https://github.com/rshade/mcp-devtools-server.git
cd mcp-devtools-server
npm install
npm run build
- name: Validate EOL compliance
run: |
# Use the ensure_newline tool in validate mode
# This will exit with error if any files lack trailing newlines
node mcp-devtools-server/dist/index.js ensure_newline \
--patterns "**/*.ts" "**/*.js" "**/*.md" \
--mode validate \
--exclude "node_modules/**" "dist/**"
Add to your .git/hooks/pre-commit or use with Husky:
#!/bin/bash
# Automatically fix missing newlines before commit
npx mcp-devtools-server ensure_newline \
--patterns "**/*.ts" "**/*.js" "**/*.go" "**/*.md" \
--mode fix \
--exclude "node_modules/**" "vendor/**" "dist/**"
# Stage any files that were fixed
git add -u
The MCP DevTools Server is built on a modular, secure architecture:
- Secure Shell Execution - Command allowlist and argument sanitization
- Plugin System - Auto-discovery and lifecycle management
- Intelligent Caching - LRU cache with file-based invalidation (5-10x speedups)
- Project Detection - Auto-configuration for Node.js, Python, Go, and more
- 40+ Tools - Comprehensive development tool integration
View Complete Architecture Documentation
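To make the caching idea above concrete, here is a minimal sketch of an LRU cache with file-based invalidation. The class and its shape are illustrative assumptions for this document, not the server's actual implementation:

```typescript
// Hedged sketch: an LRU cache that invalidates entries when the cached file
// changes on disk. Names (FileLruCache, getMtime) are illustrative only.
import { statSync } from 'node:fs';

interface Entry<V> { value: V; mtimeMs: number; }

class FileLruCache<V> {
  private entries = new Map<string, Entry<V>>();

  constructor(
    private maxSize: number,
    // Injectable for testing; defaults to the file's real modification time.
    private getMtime: (file: string) => number = (f) => statSync(f).mtimeMs,
  ) {}

  get(file: string): V | undefined {
    const entry = this.entries.get(file);
    if (!entry) return undefined;
    // Invalidate when the file changed on disk since we cached it.
    if (this.getMtime(file) !== entry.mtimeMs) {
      this.entries.delete(file);
      return undefined;
    }
    // Refresh LRU order: re-insert as most recently used.
    this.entries.delete(file);
    this.entries.set(file, entry);
    return entry.value;
  }

  set(file: string, value: V): void {
    this.entries.delete(file);
    this.entries.set(file, { value, mtimeMs: this.getMtime(file) });
    if (this.entries.size > this.maxSize) {
      // Map preserves insertion order, so the first key is least recently used.
      const oldest = this.entries.keys().next().value as string;
      this.entries.delete(oldest);
    }
  }
}
```

A cache like this lets repeated lint or build queries skip re-running the tool until the underlying file actually changes.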
The MCP DevTools Server supports an extensible plugin architecture that allows you to add custom tools and integrations without modifying the core codebase.
Plugins extend the server with additional functionality:
- Custom tools accessible through the MCP protocol
- Language/framework support (Docker, Kubernetes, etc.)
- CI/CD integrations (GitHub Actions, Jenkins, etc.)
- IDE enhancements (formatters, linters, etc.)
- Notification systems (Slack, Discord, Email)
A reference implementation demonstrating best practices for plugin development. Provides Git stacked branch management tools.
Tools Provided:
- `git_spice_branch_create` - Create new stacked branches
- `git_spice_branch_checkout` - Checkout existing branches
- `git_spice_stack_submit` - Submit entire stack as pull requests
- `git_spice_stack_restack` - Rebase stack on latest changes
- `git_spice_log_short` - View current stack visualization
- `git_spice_repo_sync` - Sync with remote and cleanup merged branches
Example Configuration:
{
"plugins": {
"enabled": ["git-spice"],
"git-spice": {
"defaultBranch": "main",
"autoRestack": false,
"jsonOutput": true,
"timeout": 60000
}
}
}
Usage Example:
// Create a new feature branch
await callTool('git_spice_branch_create', {
name: 'feature/add-authentication',
base: 'main'
});
// Create a stacked branch on top of the first
await callTool('git_spice_branch_create', {
name: 'feature/auth-service',
base: 'feature/add-authentication'
});
// View the stack
await callTool('git_spice_log_short', {});
// Submit all as PRs
await callTool('git_spice_stack_submit', { draft: false });
See the git-spice User Guide for detailed documentation.
+-------------------------------------+
|         MCP DevTools Server         |
|  +-------------------------------+  |
|  |        Plugin Manager         |  |
|  |  - Discovery                  |  |
|  |  - Registration               |  |
|  |  - Tool Routing               |  |
|  +---------------+---------------+  |
|                  |                  |
|  +---------------+---------------+  |
|  |   Plugin 1    |   Plugin 2    |  |
|  |   +-------+   |   +-------+   |  |
|  |   | Tool1 |   |   | Tool3 |   |  |
|  |   | Tool2 |   |   | Tool4 |   |  |
|  |   +-------+   |   +-------+   |  |
|  +---------------+---------------+  |
|                  |                  |
|  +---------------+---------------+  |
|  |     Shared ShellExecutor      |  |
|  |       (Security Layer)       |   |
|  +-------------------------------+  |
+-------------------------------------+
- Discovery: PluginManager scans `src/plugins/*-plugin.ts`
- Validation: Checks required dependencies
- Initialization: Calls `initialize()` with context
- Registration: Calls `registerTools()` to get tool list
- Execution: Routes tool calls to `handleToolCall()`
- Shutdown: Calls `shutdown()` on server exit
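The lifecycle above can be sketched as a tiny routing loop. This is a hypothetical illustration, not the server's actual `PluginManager` internals; the interface shape and class name are assumptions:

```typescript
// Hypothetical sketch of the plugin lifecycle; names and shapes are
// illustrative, not the server's real internals.
interface SketchPlugin {
  metadata: { name: string };
  initialize(ctx: unknown): Promise<void>;
  registerTools(): Promise<{ name: string }[]>;
  handleToolCall(tool: string, args: unknown): Promise<unknown>;
}

class MiniPluginManager {
  private routes = new Map<string, SketchPlugin>();

  // Discovery and validation omitted; load() covers initialization + registration.
  async load(plugin: SketchPlugin, ctx: unknown): Promise<void> {
    await plugin.initialize(ctx);
    for (const tool of await plugin.registerTools()) {
      // Prefix tool names with the plugin name (dashes become underscores).
      const prefixed = `${plugin.metadata.name.replace(/-/g, '_')}_${tool.name}`;
      this.routes.set(prefixed, plugin);
    }
  }

  // Execution: route a prefixed tool call back to its owning plugin.
  async call(name: string, args: unknown): Promise<unknown> {
    const plugin = this.routes.get(name);
    if (!plugin) throw new Error(`Unknown tool: ${name}`);
    const prefix = plugin.metadata.name.replace(/-/g, '_') + '_';
    return plugin.handleToolCall(name.slice(prefix.length), args);
  }
}
```

The map from prefixed name to plugin is also what makes the naming convention below collision-free.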
Tools are automatically prefixed with the plugin name to prevent conflicts:
Plugin: git-spice
Tool: branch_create
Result: git_spice_branch_create
- Copy the template:
  cp examples/plugins/custom-plugin-example.ts src/plugins/my-tool-plugin.ts
- Update metadata:
  metadata: PluginMetadata = {
    name: 'my-tool',
    version: '1.0.0',
    description: 'Integration with my-tool',
    requiredCommands: ['my-tool'],
    tags: ['utility'],
  };
- Implement a tool:
  async registerTools(): Promise<PluginTool[]> {
    return [{
      name: 'execute',
      description: 'Execute my-tool command',
      inputSchema: {
        type: 'object',
        properties: {
          args: { type: 'array', items: { type: 'string' } }
        }
      }
    }];
  }
- Build and test:
  npm run build
  node dist/index.js
Your plugin will be auto-discovered and loaded!
All plugins must implement the Plugin interface:
export class MyPlugin implements Plugin {
// Metadata (required)
metadata: PluginMetadata = {
name: 'my-plugin',
version: '1.0.0',
description: 'My custom plugin',
requiredCommands: ['my-command'],
tags: ['utility'],
};
// Lifecycle methods (required)
async initialize(context: PluginContext): Promise<void> {
// Validate required commands are available
// Initialize any state
}
async registerTools(): Promise<PluginTool[]> {
// Return array of tool definitions
}
async handleToolCall(toolName: string, args: unknown): Promise<unknown> {
// Route to appropriate tool method
}
// Optional methods
async validateConfig?(config: unknown): Promise<boolean> { }
async shutdown?(): Promise<void> { }
async healthCheck?(): Promise<PluginHealth> { }
}
Every plugin receives a context with:
interface PluginContext {
config: Record<string, unknown>; // Plugin configuration
projectRoot: string; // Project directory
shellExecutor: ShellExecutor; // Secure command execution
logger: winston.Logger; // Scoped logger
utils: PluginUtils; // Helper functions
}
- Always use the shared ShellExecutor - Never execute commands directly
- Validate all input with Zod schemas - Runtime type safety
- Add commands to the allowlist - Update `src/utils/shell-executor.ts`
- Sanitize user input - Prevent command injection
- No dynamic code execution - Never use `eval()` or `Function()`
Example:
import { z } from 'zod';
const MyToolArgsSchema = z.object({
input: z.string().min(1).describe('Input parameter'),
verbose: z.boolean().optional().describe('Verbose output'),
});
private async myTool(args: unknown): Promise<MyToolResult> {
// 1. Validate input
const validated = MyToolArgsSchema.parse(args);
// 2. Execute through ShellExecutor
const result = await this.context.shellExecutor.execute(
`my-command ${validated.input}`,
{
cwd: this.context.projectRoot,
timeout: 60000,
}
);
// 3. Return structured result
if (result.success) {
return { success: true, output: result.stdout };
} else {
return {
success: false,
error: result.stderr,
suggestions: this.generateSuggestions(result.stderr),
};
}
}
- Developer Guide: docs/plugin-development.md - Comprehensive guide covering architecture, implementation, testing, and best practices
- git-spice User Guide: docs/plugins/git-spice.md - Complete user documentation for the git-spice plugin
- Template: examples/plugins/custom-plugin-example.ts - Ready-to-use plugin template with TODOs
Create tests in src/__tests__/plugins/your-plugin.test.ts:
import { describe, it, expect, beforeEach } from '@jest/globals';
import { YourPlugin } from '../../plugins/your-plugin.js';
describe('YourPlugin', () => {
let plugin: YourPlugin;
let mockContext: PluginContext;
beforeEach(() => {
plugin = new YourPlugin();
mockContext = createMockContext();
});
it('should initialize successfully', async () => {
await expect(plugin.initialize(mockContext)).resolves.not.toThrow();
});
it('should execute tool successfully', async () => {
const result = await plugin.handleToolCall('my_tool', {
input: 'test'
});
expect(result).toMatchObject({ success: true });
});
});
Coverage Goals:
- Plugin Manager: 90%+ coverage
- Individual Plugins: 85%+ coverage
The server provides comprehensive error handling with:
- Structured error responses
- Helpful suggestions for common failures
- Exit code interpretation
- Tool availability checking
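A structured error with exit-code interpretation might look like the following sketch. The field names and messages are illustrative assumptions, not the server's actual error format:

```typescript
// Hypothetical shapes; field names and suggestion text are illustrative,
// not the server's real error schema.
interface ToolErrorResult {
  success: false;
  error: string;
  exitCode: number;
  suggestions: string[];
}

// Map common shell exit codes to actionable suggestions.
function interpretExitCode(code: number): string {
  switch (code) {
    case 126: return 'Permission denied - check that the file is executable';
    case 127: return 'Command not found - is the tool installed and on PATH?';
    default: return `Command failed with exit code ${code}`;
  }
}

function toErrorResult(stderr: string, exitCode: number): ToolErrorResult {
  return {
    success: false,
    error: stderr,
    exitCode,
    suggestions: [interpretExitCode(exitCode)],
  };
}
```

Returning suggestions alongside the raw stderr is what lets an AI assistant autocorrect common failures instead of just reporting them.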
Contributions are welcome! This project is built on continuous learning and improvement.
Please read our Contributing Guidelines for detailed information on how to contribute to this project.
- Contributing Guidelines - How to contribute
- Code of Conduct - Community standards
- Security Policy - Reporting vulnerabilities
- API Documentation - TypeDoc generated API docs
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests for new functionality
- Run linting and tests
- Submit a pull request
For detailed instructions, see CONTRIBUTING.md.
- Better development patterns
- Error prevention strategies
- Workflow optimizations
- Tool integrations
- Documentation improvements
- Command not found errors
  - Ensure required tools are installed
  - Check PATH environment variable
  - Verify tool permissions
- Permission denied
  - Check file permissions in project directory
  - Ensure write permissions for build outputs
- Timeout errors
  - Increase timeout values in configuration
  - Optimize slow operations
  - Check system resources
- EOL/Newline validation issues
  - Files created by AI often miss trailing newlines
  - Use `ensure_newline` with `mode: 'fix'` to automatically correct
  - Binary files are automatically skipped - check file encoding if issues persist
  - CRLF vs LF is automatically detected and preserved
  - Use `validate` mode in CI/CD to catch issues before commit
Enable debug logging:
LOG_LEVEL=debug npm start
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
This project represents an ongoing effort to improve the developer experience when working with AI-powered coding assistants. All feedback and contributions help shape better development practices for the community.
Current Status: MVP 0.0.1 Released
Active Development: 2025-Q2 - Plugin Architecture & Performance
Quick Overview:
- Q1 2025: Go Support & Core Foundation (100% complete - 5/5 issues)
- Q2 2025: Plugin Ecosystem & Performance (2/10 issues)
- Q3 2025: User Experience & AI Integration
- Q4 2025: Team Collaboration & Enterprise
View Full Roadmap | Track Progress on GitHub