A lightweight chat agent that converts high-level feature requirements into concise, actionable specifications.
- Business-voice user stories with acceptance criteria
- Ambiguity detection and clarifying questions
- Assumptions and dependencies identification
- Edge cases enumeration
- Functional requirements extraction
- Implementation task breakdown (FE/BE/Infra/QA/Docs)
- T-shirt sizing and complexity estimation
- Project context management with versioning
- Local LLM integration (LM Studio compatible)
- Next.js 15 with TypeScript
- TailwindCSS v4 for styling
- IndexedDB for client-side storage
- Zod for schema validation
- Local LM Studio integration (OpenAI-compatible API)
- Node.js 18+ and npm
- LM Studio running locally on port 1234 (optional, for AI features)
- Clone and install dependencies:
```bash
git clone <repository-url>
cd specgen
npm install
```
- Start the development server:
```bash
npm run dev
```
- Open http://localhost:3000 in your browser
For AI-powered specification generation:
- Download and install LM Studio
- Load a compatible model (e.g., a GPT-style model)
- Start the local server on port 1234
- The application will automatically detect and use the local LLM
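To confirm the server is reachable before relying on AI features, you can probe it manually. A minimal sketch, assuming LM Studio's standard OpenAI-compatible /v1/models route on the default port:

```typescript
// Probe the local LM Studio server; returns false if it is not running.
// The /v1/models path is the standard OpenAI-compatible model listing endpoint.
async function isLlmAvailable(baseUrl = 'http://localhost:1234'): Promise<boolean> {
  try {
    const res = await fetch(`${baseUrl}/v1/models`);
    return res.ok;
  } catch {
    return false; // connection refused: server not started or wrong port
  }
}
```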
- Click "Create First Project" or "Create New Project"
- Enter a project name and description
- The system creates a default project context
- Select your project
- Enter feature title and description
- Optionally add stakeholders, constraints, and non-functional requirements
- Click "Generate Draft Spec"
If the input has high ambiguity (score above 0.6), the system will:
- Present 3-5 targeted clarifying questions
- Allow you to answer questions to reduce ambiguity
- Generate a refined specification based on your answers
- Offer the option to skip the questions and generate with the current information
The output includes multiple tabs:
- Summary: Overview and estimation
- User Story: As-a/I-want/So-that format with acceptance criteria
- Requirements: Functional requirements and clarifications needed
- Tasks: Implementation breakdown by area (FE/BE/Infra/QA/Docs)
- Risks & Edge Cases: Risk mitigations and edge case considerations
- Context: Resolved project context used for generation
- JSON: Raw structured output for export
The application uses IndexedDB with the following entities:
- projects - Project definitions
- project_contexts - Versioned context configurations
- spec_inputs - Feature input records
- spec_outputs - Generated specifications
- spec_evaluations - Quality assessments
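A minimal sketch of how these object stores could be created with the raw IndexedDB API (the database name, version, and keyPath below are illustrative assumptions, not values from the codebase):

```typescript
// Open the client-side database, creating one store per entity on upgrade.
// 'specgen', version 1, and keyPath 'id' are assumptions for illustration.
function openDatabase(): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const request = indexedDB.open('specgen', 1);
    request.onupgradeneeded = () => {
      const db = request.result;
      const stores = [
        'projects',
        'project_contexts',
        'spec_inputs',
        'spec_outputs',
        'spec_evaluations',
      ];
      for (const name of stores) {
        if (!db.objectStoreNames.contains(name)) {
          db.createObjectStore(name, { keyPath: 'id' });
        }
      }
    };
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}
```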
- POST /api/projects - Create project
- GET /api/projects - List projects
- GET /api/projects/[id] - Get project details
- GET /api/projects/[id]/context - Get project context
- POST /api/projects/[id]/context - Update project context
- POST /api/specs/generate - Generate specification
- POST /api/specs/refine - Refine with clarifying answers
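For example, a client-side call to the generation endpoint might look like this (the request body fields are an assumption based on the input form, not a documented contract):

```typescript
// Hypothetical request to the generation endpoint; field names are assumed.
const res = await fetch('/api/specs/generate', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    projectId: 'proj_123',                       // assumed identifier shape
    title: 'Bulk CSV import',
    description: 'Allow admins to import users from a CSV file.',
  }),
});
const spec = await res.json(); // structured spec output (see schema below)
```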
Projects maintain versioned contexts including:
- Glossary - Domain terms and definitions
- Stakeholders - Names, roles, and interests
- Constraints - Technical and business limitations
- Non-functional Requirements - Performance, security, etc.
- API Catalog - Available services and endpoints
- Data Models - Entity definitions
- Environment Profiles - Local/dev/test/prod configs
- Labels - Jira components, service mappings
At generation time, the context is resolved from two layers:
- Project defaults (from the active context version)
- Feature-level overrides (provided with the input)
Feature overrides always take precedence over project defaults.
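Conceptually this is an override-wins merge, as in the sketch below (the real ProjectContextService may merge individual sections more granularly; the field names are illustrative):

```typescript
// Sketch of override-wins context resolution.
function resolveContext<T extends object>(defaults: T, overrides: Partial<T>): T {
  // Spread order ensures feature-level overrides take precedence.
  return { ...defaults, ...overrides };
}

const resolved = resolveContext(
  { glossary: { SLA: 'Service Level Agreement' }, constraints: ['EU data residency'] },
  { constraints: ['EU data residency', 'No third-party CDNs'] }, // override wins
);
```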
The system generates structured JSON output following a comprehensive schema:
```typescript
interface SpecOutput {
  input: SpecInput
  resolved_context: ResolvedContext
  story: UserStory
  needs_clarification: Clarification[]
  assumptions: string[]
  dependencies: string[]
  edge_cases: string[]
  functional_requirements: FunctionalRequirement[]
  tasks: Task[]
  estimation: Estimation
  risks: Risk[]
}
```
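Because the raw model response is untrusted text, it is validated before reaching the UI. A minimal sketch, assuming the Zod schemas in src/types/schemas.ts export a SpecOutputSchema counterpart to this interface:

```typescript
import { SpecOutputSchema } from '@/types/schemas'; // export name and path alias assumed

declare const rawLlmResponse: string; // JSON string returned by the model

// safeParse never throws; a failed parse can drive a retry or an error state.
const result = SpecOutputSchema.safeParse(JSON.parse(rawLlmResponse));
if (result.success) {
  console.log(result.data.story); // fully typed SpecOutput
} else {
  console.error(result.error.issues); // field-level validation errors
}
```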
```
src/
├── app/                   # Next.js app router
│   ├── api/               # API routes
│   └── page.tsx           # Main application
├── components/            # React components
│   ├── ui/                # Base UI components
│   └── *.tsx              # Feature components
├── lib/                   # Core services
│   ├── database.ts        # IndexedDB operations
│   ├── llm.ts             # LLM integration
│   └── projectContext.ts  # Context management
└── types/                 # TypeScript definitions
    ├── schemas.ts         # Zod schemas
    └── database.ts        # Database types
```
- DatabaseService - IndexedDB operations with versioning
- LLMService - Local LLM integration with structured output
- ProjectContextService - Context merging and versioning
```bash
npm run lint        # ESLint checking
npm run type-check  # TypeScript validation
```
- ✅ Core specification generation
- ✅ Project and context management
- ✅ Clarifying questions flow
- ✅ Two-pane UI with tabbed output
- ✅ Ambiguity detection
- ✅ T-shirt sizing estimation
- 🔲 Project context editor UI
- 🔲 Evaluation and scoring system
- 🔲 Jira export functionality
- 🔲 Specification refinement UI
- 🔲 Context versioning UI
- 🔲 Offline evaluation harness
- 🔲 Multi-turn conversation improvements
Default configuration for LM Studio:
```typescript
{
  id: 'gpt-oss-20b',
  provider: 'local',
  name: 'GPT 20B LM Studio',
  base_url: 'http://localhost:1234',
  is_openai_compatible: true,
  model: 'openai/gpt-oss-20b'
}
```
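With is_openai_compatible set, a generation call reduces to a standard OpenAI-style chat completion against the local base URL. A sketch (the actual prompt assembly in llm.ts is more involved):

```typescript
// Chat completion against the local server using the config above.
// Request/response shapes follow the OpenAI-compatible API LM Studio exposes.
const res = await fetch('http://localhost:1234/v1/chat/completions', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'openai/gpt-oss-20b',
    messages: [
      { role: 'system', content: 'You are a spec-writing assistant. Reply with JSON only.' },
      { role: 'user', content: 'Feature: bulk CSV import for admins...' },
    ],
    temperature: 0.2, // low temperature favors deterministic, schema-shaped output
  }),
});
const data = await res.json();
console.log(data.choices[0].message.content);
```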
The system calculates ambiguity scores based on:
- Input length (shorter = more ambiguous)
- Presence of specific numbers/dates
- Vague language patterns
- Pronoun density
- Missing context elements
Scores > 0.6 trigger clarifying questions in draft mode.
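A simplified sketch of such a heuristic appears below; the weights, thresholds, and word lists are illustrative assumptions, not the application's actual values:

```typescript
// Illustrative ambiguity heuristic combining the signals listed above.
function ambiguityScore(input: string): number {
  const words = input.trim().split(/\s+/);
  let score = 0;
  if (words.length < 30) score += 0.3;                         // short input
  if (!/\d/.test(input)) score += 0.2;                         // no numbers or dates
  const vague = ['some', 'etc', 'various', 'fast', 'easy', 'better'];
  if (words.some((w) => vague.includes(w.toLowerCase()))) score += 0.2;
  const pronouns = words.filter((w) => /^(it|they|this|that)$/i.test(w));
  score += Math.min(0.2, pronouns.length / words.length);      // pronoun density
  if (!/\b(user|admin|customer)\b/i.test(input)) score += 0.1; // missing context/actors
  return Math.min(1, score); // > 0.6 triggers clarifying questions
}
```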
- Fork the repository
- Create a feature branch
- Make your changes
- Test thoroughly
- Submit a pull request
MIT License - see LICENSE file for details