llbbl commented Oct 18, 2025

About UnitSeeker

Hi! This PR is part of the UnitSeeker project, a human-guided initiative to help Python repositories establish testing infrastructure.

Key points:

  • Human-approved: Every PR is manually approved before work begins
  • Semi-automated with oversight: Created and controlled via a homegrown wrapper around Claude Code with human quality control
  • Infrastructure only: This PR intentionally contains only the testing setup without actual unit tests
  • Your repository, your rules: Feel free to modify, reject, or request changes - all constructive feedback is welcome
  • Follow-up support: All responses and discussions are personally written, not automated

Learn more about the project and see the stats on our progress at https://unitseeker.llbbl.com/


Summary

This PR establishes a complete testing infrastructure for the LLMCompiler project, providing a solid foundation for writing and maintaining tests. The setup includes Poetry for dependency management, pytest for testing, coverage reporting, and a well-organized test directory structure.

Changes Made

Package Management

  • Created pyproject.toml with Poetry configuration
  • Migrated dependencies from requirements.txt to Poetry
  • Added production dependencies: bs4, langchain, langchain-community, numexpr, tiktoken, openai
  • Added development dependencies: pytest, pytest-cov, pytest-mock
  • Configured convenient test runner scripts (poetry run test and poetry run tests); a sketch of such an entry point follows this list
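
Poetry script entries like these map a command name to a Python callable under [tool.poetry.scripts]. As a rough sketch, assuming a hypothetical scripts.py module (the actual entry point in this PR may be organized differently):

# scripts.py -- hypothetical module referenced from [tool.poetry.scripts]
import sys
import pytest

def run_tests() -> None:
    # Run pytest programmatically, forwarding any extra CLI arguments and
    # propagating pytest's exit code so CI can detect failures.
    sys.exit(pytest.main(sys.argv[1:]))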

Testing Configuration

  • pytest configuration with:

    • Test discovery patterns for automatic test detection
    • Coverage threshold set to 80% with HTML, XML, and terminal reporting
    • Strict markers mode so unregistered or misspelled markers raise an error instead of being silently ignored
    • Custom markers: @pytest.mark.unit, @pytest.mark.integration, @pytest.mark.slow
    • Warning filters to catch deprecation issues
  • Coverage configuration with:

    • Source tracking for the src/ directory
    • Exclusions for test files, cache, and virtual environments
    • Branch coverage enabled for thorough testing (illustrated in the sketch after this list)
    • Multiple report formats (HTML in htmlcov/, XML in coverage.xml, terminal output)
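
To illustrate why branch coverage is enabled (sketch only, not code from this PR): with plain line coverage, the test below reports every line as covered, yet the path where the condition is false is never exercised; branch coverage flags that untaken branch.

# illustrative example of branch vs. line coverage
def describe(n: int) -> str:
    label = "number"
    if n < 0:
        label = "negative " + label
    return label

def test_describe_negative():
    # Executes every line, but the "n >= 0" branch is never taken;
    # only branch coverage reports that gap.
    assert describe(-1) == "negative number"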

Directory Structure

tests/
├── __init__.py
├── conftest.py           # Shared pytest fixtures
├── test_infrastructure.py # Validation tests
├── unit/
│   └── __init__.py
└── integration/
    └── __init__.py

Shared Fixtures (tests/conftest.py)

A comprehensive set of reusable fixtures (see the abridged sketch after this list), including:

  • File system fixtures: temp_dir, temp_file, mock_dataset_path
  • Environment fixtures: mock_env_vars for API keys and configuration
  • Mock fixtures: mock_llm_client, mock_openai_response, mock_function_registry
  • Configuration fixtures: sample_config, sample_tools_config, sample_task_list
  • Testing utilities: capture_logs, reset_environment
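
To give a feel for the shape of these fixtures, here is an abridged sketch; the names match the list above, but the bodies are illustrative and the real tests/conftest.py may differ:

# tests/conftest.py (abridged, illustrative)
import pytest

@pytest.fixture
def temp_dir(tmp_path):
    # Clean per-test directory built on pytest's built-in tmp_path fixture.
    return tmp_path

@pytest.fixture
def mock_env_vars(monkeypatch):
    # Fake API keys and configuration so tests never read real credentials.
    env = {"OPENAI_API_KEY": "test-key"}
    for key, value in env.items():
        monkeypatch.setenv(key, value)
    return env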

Validation Tests

Created tests/test_infrastructure.py with 18 tests validating the following (an illustrative excerpt appears after the list):

  • ✅ pytest installation and functionality
  • ✅ Python version requirements (3.10+)
  • ✅ Project structure verification
  • ✅ All shared fixtures working correctly
  • ✅ Custom pytest markers functioning
  • ✅ pytest-mock integration
  • ✅ Coverage tracking capability
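
As an illustrative excerpt, the Python version check is roughly the following (the actual assertions in tests/test_infrastructure.py may differ):

# in the spirit of tests/test_infrastructure.py (illustrative)
import sys

class TestInfrastructure:
    def test_python_version(self):
        # The project targets Python 3.10+, so fail fast on older interpreters.
        assert sys.version_info >= (3, 10)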

Other Updates

  • Updated .gitignore to exclude:
    • .pytest_cache/, .coverage, htmlcov/, coverage.xml
    • .claude/* for Claude Code configuration
    • (Lock files are intentionally tracked for reproducible builds)

Running Tests

Install dependencies:

poetry install

Run all tests:

poetry run test
# or
poetry run tests
# or
poetry run pytest

Run specific test files:

poetry run pytest tests/test_infrastructure.py

Run tests with specific markers:

poetry run pytest -m unit           # Run only unit tests
poetry run pytest -m integration    # Run only integration tests
poetry run pytest -m "not slow"     # Skip slow tests
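
These filters only select tests that are explicitly decorated with the corresponding marker, for example (illustrative):

import pytest

@pytest.mark.unit
def test_fast_component():
    # Selected by `-m unit`, excluded by `-m "not unit"`.
    assert True

@pytest.mark.slow
def test_expensive_path():
    # Skipped when running with `-m "not slow"`.
    assert True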

Run tests without coverage:

poetry run pytest --no-cov

Generate coverage report:

poetry run pytest
# HTML report will be in htmlcov/index.html
# XML report will be in coverage.xml

Validation Results

All 18 infrastructure validation tests pass:

============================= test session starts ==============================
platform linux -- Python 3.11.2, pytest-8.4.2, pluggy-1.6.0
collected 18 items

tests/test_infrastructure.py::TestInfrastructure::test_pytest_working PASSED
tests/test_infrastructure.py::TestInfrastructure::test_python_version PASSED
tests/test_infrastructure.py::TestInfrastructure::test_project_structure PASSED
tests/test_infrastructure.py::TestInfrastructure::test_source_directory_accessible PASSED
tests/test_infrastructure.py::TestFixtures::test_temp_dir_fixture PASSED
tests/test_infrastructure.py::TestFixtures::test_temp_file_fixture PASSED
tests/test_infrastructure.py::TestFixtures::test_mock_env_vars_fixture PASSED
tests/test_infrastructure.py::TestFixtures::test_sample_config_fixture PASSED
tests/test_infrastructure.py::TestFixtures::test_sample_tools_config_fixture PASSED
tests/test_infrastructure.py::TestFixtures::test_sample_task_list_fixture PASSED
tests/test_infrastructure.py::TestFixtures::test_mock_function_registry_fixture PASSED
tests/test_infrastructure.py::TestPytestMarkers::test_unit_marker PASSED
tests/test_infrastructure.py::TestPytestMarkers::test_integration_marker PASSED
tests/test_infrastructure.py::TestPytestMarkers::test_slow_marker PASSED
tests/test_infrastructure.py::TestMocking::test_mocker_fixture PASSED
tests/test_infrastructure.py::TestMocking::test_mock_patch PASSED
tests/test_infrastructure.py::TestCoverage::test_coverage_tracking PASSED
tests/test_infrastructure.py::TestCoverage::test_multiple_branches PASSED

============================== 18 passed in 0.02s ==============================

Next Steps

The testing infrastructure is now ready to use! Developers can:

  1. Write unit tests in tests/unit/ for individual components (a starter sketch follows this list)
  2. Write integration tests in tests/integration/ for component interactions
  3. Use the shared fixtures from tests/conftest.py to simplify test setup
  4. Mark tests appropriately using @pytest.mark.unit, @pytest.mark.integration, or @pytest.mark.slow
  5. Run tests locally before committing using poetry run test
  6. Check coverage to ensure adequate test coverage (target: 80%)
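
As a starting point, a first unit test might look like the sketch below; the file name, fixture usage, and assertions are placeholders rather than actual LLMCompiler code:

# tests/unit/test_example.py (placeholder, illustrative only)
import pytest

@pytest.mark.unit
def test_env_and_config_fixtures(mock_env_vars, sample_config):
    # Uses shared fixtures from tests/conftest.py; replace these assertions
    # with checks against real LLMCompiler components as tests are written.
    assert "OPENAI_API_KEY" in mock_env_vars
    assert sample_config is not None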

Notes

  • Poetry vs pip: This setup uses Poetry for better dependency management and reproducible builds. The original requirements.txt remains for reference but is superseded by pyproject.toml.
  • No actual unit tests: This PR intentionally only sets up the infrastructure. Writing actual tests for the codebase components is left to the repository maintainers who have domain expertise.
  • Coverage threshold: Set to 80% but can be adjusted in pyproject.toml under [tool.pytest.ini_options] by modifying --cov-fail-under=80.
  • Lock file tracked: poetry.lock is tracked in version control to ensure all developers use the same dependency versions.

Questions or Concerns?

Feel free to request changes, ask questions, or provide feedback. I'm happy to adjust any part of this setup to better fit your project's needs!

Set up complete testing infrastructure with Poetry package management,
pytest framework, coverage reporting, and organized test directory structure.
Includes shared fixtures, custom markers, and validation tests to ensure
developers can immediately start writing unit and integration tests.