Release v1.7.14
Overview
This release improves Acceptance Criteria (AC) scoping guidelines to optimize for autonomous LLM implementation in CI/CD environments. The changes focus on clarifying what should and shouldn't be included in ACs to maximize automation ROI.
What's Changed
🎯 Improved AC Scoping Guidelines
acceptance-test-generator.md (English & Japanese)
- Simplified "Out of Scope" section into 3 clear categories:
  - External Dependencies: Test contracts/interfaces instead of real API calls
  - Non-Deterministic in CI: Avoid performance metrics and load testing
  - Implementation Details: Focus on user-observable behavior, not internal structure
- Removed redundant "Security and Performance Requirements Processing" section
- Added Action guideline: Clear instructions on handling excluded items in ACs
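The "contracts/interfaces instead of real API calls" guidance can be sketched in a few lines (Python purely for illustration; `PaymentGateway`, `FakeGateway`, and `checkout` are hypothetical names, not part of this repository). The acceptance test exercises the contract through an in-memory substitute, so it stays deterministic in an isolated CI environment:

```python
from typing import Protocol


class PaymentGateway(Protocol):
    """Hypothetical contract the real service adapter must satisfy."""
    def charge(self, amount_cents: int) -> str: ...


class FakeGateway:
    """In-memory stand-in used in CI instead of a live API call."""
    def __init__(self) -> None:
        self.charges: list[int] = []

    def charge(self, amount_cents: int) -> str:
        self.charges.append(amount_cents)
        return f"txn-{len(self.charges)}"


def checkout(gateway: PaymentGateway, amount_cents: int) -> str:
    # The business logic depends only on the contract, so the AC
    # is verifiable without any network access.
    if amount_cents <= 0:
        raise ValueError("amount must be positive")
    return gateway.charge(amount_cents)


# Deterministic acceptance check: no real service involved.
fake = FakeGateway()
assert checkout(fake, 1299) == "txn-1"
assert fake.charges == [1299]
```

The same `FakeGateway` can also drive the error-handling AC (what happens on an invalid amount) without ever touching the external dependency.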
technical-designer.md (English & Japanese)
- Added "AC Scoping for Autonomous Implementation" section:
- Include (High automation ROI):
- Business logic correctness (calculations, state transitions, data transformations)
- Data integrity and persistence behavior
- User-visible functionality completeness
- Error handling behavior (what user sees/experiences)
- Exclude (Low ROI in LLM/CI/CD environment):
- External service real connections → Use contract/interface verification
- Performance metrics → Non-deterministic in CI, defer to load testing
- Implementation details → Focus on observable behavior
- UI layout specifics → Focus on information availability, not presentation
- Principle: AC = User-observable behavior verifiable in isolated CI environment
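The Include/Exclude split can be illustrated with a minimal sketch (Python for illustration only; the cart example and `cart_total` are hypothetical, not from these agent files). The checks pin down calculation correctness and the error message the user actually sees, while deliberately asserting nothing about timing, internal call order, or layout:

```python
def cart_total(prices: list[int], discount_pct: int) -> int:
    """Business logic: cart total in cents after a percentage discount."""
    if not 0 <= discount_pct <= 100:
        raise ValueError("discount must be between 0 and 100")
    subtotal = sum(prices)
    return subtotal - subtotal * discount_pct // 100


# Include: calculation correctness (deterministic, user-observable).
assert cart_total([1000, 2500], 10) == 3150

# Include: error-handling behavior (what the user experiences).
try:
    cart_total([1000], 150)
except ValueError as e:
    assert "between 0 and 100" in str(e)

# Exclude: no assertions on response time or internal data
# structures -- those are non-deterministic or low-ROI in CI.
```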
Benefits
For AC Authors
- Clearer guidelines on what to include/exclude in acceptance criteria
- Reduced ambiguity when writing requirements for autonomous implementation
- Better alignment with CI/CD testing capabilities
For Test Generators
- Improved accuracy when transforming ACs into test cases
- Reduced confusion about scope boundaries
- Better test quality by focusing on high-value verification points
For Autonomous Implementation
- Higher success rate for LLM-based implementations
- More reliable CI/CD pipelines with deterministic tests
- Better ROI on automated testing efforts
Rationale
Why This Change?
- Observable Behavior Focus: ACs should specify what users see/experience, not implementation details
- CI/CD Optimization: Exclude non-deterministic tests that fail in automated environments
- Automation ROI: Concentrate testing effort on business logic and data integrity
- LLM Capability Alignment: Match AC scope to what autonomous agents can effectively verify
What Problem Does This Solve?
Previous AC guidelines produced "ideal quality assurance" criteria that were:
- Difficult for LLMs to implement autonomously
- Non-deterministic in CI/CD environments
- Low-ROI targets for automated testing
- Entangled with implementation details rather than user-observable behavior
Technical Details
Files Modified
- .claude/agents-en/acceptance-test-generator.md
- .claude/agents-ja/acceptance-test-generator.md
- .claude/agents-en/technical-designer.md
- .claude/agents-ja/technical-designer.md
- package.json (version bump)
- package-lock.json (version bump)
Full Changelog: v1.7.13...v1.7.14