Conversation

@alarinel commented Sep 24, 2025

User description

A new agent that creates a local Podman pipeline that can be run in any environment to analyze a code base. It also creates the folders, files, and directory structure for the new pipeline, with the goal of a containerized solution that is portable with the project and runs across multiple environments.


PR Type

Other


Description

  • Initial implementation of Podman-based code quality pipeline tool

  • Comprehensive multi-language static analysis and security scanning

  • Containerized quality checks with parallel execution support

  • Automated report generation and CI/CD integration templates


Diagram Walkthrough

flowchart LR
  A["Agent Config"] --> B["Pipeline Script"]
  B --> C["Quality Tools"]
  C --> D["Reports Generation"]
  E["Docker Compose"] --> F["Infrastructure"]
  G["Quality Gates"] --> H["Thresholds"]

File Walkthrough

Relevant files
Enhancement
run-pipeline.sh
Pipeline execution script with multi-language support       

agents/code-quality-podman/run-pipeline.sh

  • Implements executable pipeline script with language auto-detection
  • Configures Podman containers for Python, JavaScript, and Go quality
    tools
  • Generates consolidated JSON reports with execution metadata
  • Supports parallel execution of quality checks with error handling
+136/-0 
Documentation
README.md
Complete agent documentation and usage guide                         

agents/code-quality-podman/README.md

  • Comprehensive documentation for 12+ programming languages support
  • Details containerized execution with 30+ quality tools integration
  • Provides CI/CD integration examples for major platforms
  • Includes troubleshooting guide and best practices
+362/-0 
example-usage.md
Usage examples and execution guidance                                       

agents/code-quality-podman/example-usage.md

  • Demonstrates actual file creation vs planning approach
  • Shows execution flow from user request to report generation
  • Explains key changes from planning to execution mode
  • Provides testing guidelines for agent functionality
+92/-0   
Configuration changes
agent.toml
Agent configuration with execution parameters                       

agents/code-quality-podman/agent.toml

  • Defines agent configuration with execution strategy and arguments
  • Configures MCP servers for sequential thinking and memory
  • Specifies comprehensive output schema for pipeline results
  • Sets up quality gates and monitoring configuration structure
+160/-0 
agent.yaml
Agent metadata and tool specifications                                     

agents/code-quality-podman/agent.yaml

  • Comprehensive agent metadata with supported languages and tools
  • Defines quality tools categorization and container specifications
  • Configures pipeline stages and quality gates with thresholds
  • Lists integration platforms and monitoring capabilities
+269/-0 
config.yaml
Comprehensive configuration with tool specifications         

agents/code-quality-podman/config.yaml

  • Global settings for parallel execution and resource limits
  • Quality tools configuration with container specifications
  • Pipeline stages definition with timeouts and parallelization
  • Notification and integration settings for various platforms
+295/-0 
docker-compose.yml
Infrastructure services orchestration                                       

agents/code-quality-podman/docker-compose.yml

  • Infrastructure setup with SonarQube, Grafana, and Prometheus
  • Supporting services including Redis, MinIO, and PostgreSQL
  • Quality dashboard and webhook receiver configuration
  • Network and volume configuration for service communication
+158/-0 
quality-gates.yaml
Quality gates and thresholds configuration                             

agents/code-quality-podman/quality-gates.yaml

  • Defines security, code quality, and performance gates
  • Configures tool-specific thresholds and blocking conditions
  • Sets up notification templates for failures and successes
  • Environment-specific overrides for different deployment stages
+163/-0 


PR Reviewer Guide 🔍

Here are some key observations to aid the review process:

⏱️ Estimated effort to review: 3 🔵🔵🔵⚪⚪
🧪 No relevant tests
🔒 Security concerns

Sensitive information exposure:
The docker-compose file includes default credentials and exposed ports for SonarQube, Grafana, Redis, and MinIO. Examples use admin/admin values and expose services on host interfaces, which is risky if run outside isolated environments. Recommend a .env file with overrides, binding to localhost by default, and documenting production hardening (non-default passwords, network restrictions). Additionally, the pipeline pulls latest tags for security tools, which can lead to supply-chain drift; pin image digests or versions and verify signatures where possible.
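
A minimal sketch of digest pinning with Podman (the workflow is illustrative, not part of the PR):

# Pull once, record the digest, then run by digest so later pulls cannot drift.
podman pull docker.io/library/python:3.11-alpine
DIGEST=$(podman image inspect --format '{{index .RepoDigests 0}}' python:3.11-alpine)
echo "Pinned image: $DIGEST"
podman run --rm "$DIGEST" python --version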

⚡ Recommended focus areas for review

Portability Issue

The script runs quality tools in Alpine-based containers and installs them at runtime without network or cache checks; it also relies on the SELinux volume label flag (:Z), which may fail on systems without SELinux or Podman. Consider configurable registries/mirrors, retry logic, and compatibility notes or fallbacks (a sketch follows the snippet below).

    # Create a simple podman command to run the quality tool
    podman run --rm \
        -v "$PROJECT_DIR:/workspace:Z" \
        -w /workspace \
        "$container" \
        bash -c "$command" || echo "⚠️  $tool completed with warnings"

    echo "$tool completed"
}
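
A possible fallback, sketched against the snippet above (the selinuxenabled probe and retry loop are assumptions, not part of the PR):

# Apply the SELinux volume label only where SELinux is actually available.
VOL_SUFFIX=""
if command -v selinuxenabled >/dev/null 2>&1 && selinuxenabled; then
    VOL_SUFFIX=":Z"
fi

# Retry transient failures (image pulls, runtime tool installs) before giving up.
for attempt in 1 2 3; do
    if podman run --rm \
        -v "$PROJECT_DIR:/workspace$VOL_SUFFIX" \
        -w /workspace \
        "$container" \
        bash -c "$command"; then
        break
    fi
    echo "Attempt $attempt for $tool failed; retrying in 5s..." >&2
    sleep 5
done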
Language Detection

Auto-detection is simplistic and may misclassify polyglot repos or monorepos; suggest scanning globs and running tools for every detected language, or supporting a comma-separated language list (see the sketch after the snippet below).

if [ "$LANGUAGE" = "auto" ]; then
    echo "🔍 Auto-detecting language..."

    if [ -f "$PROJECT_DIR/package.json" ]; then
        LANGUAGE="javascript"
        echo "📦 Detected: JavaScript/Node.js"
    elif [ -f "$PROJECT_DIR/requirements.txt" ] || [ -f "$PROJECT_DIR/setup.py" ]; then
        LANGUAGE="python"
        echo "🐍 Detected: Python"
    elif [ -f "$PROJECT_DIR/go.mod" ]; then
        LANGUAGE="go"
        echo "🐹 Detected: Go"
    elif [ -f "$PROJECT_DIR/Cargo.toml" ]; then
        LANGUAGE="rust"
        echo "🦀 Detected: Rust"
    else
        LANGUAGE="generic"
        echo "📄 Using generic analysis"
    fi
fi
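
A sketch of the comma-separated variant (run_checks_for is a hypothetical helper wrapping the script's existing case block):

if [ "$LANGUAGE" = "auto" ]; then
    DETECTED=""
    [ -f "$PROJECT_DIR/package.json" ] && DETECTED="$DETECTED,javascript"
    { [ -f "$PROJECT_DIR/requirements.txt" ] || [ -f "$PROJECT_DIR/setup.py" ]; } && DETECTED="$DETECTED,python"
    [ -f "$PROJECT_DIR/go.mod" ] && DETECTED="$DETECTED,go"
    [ -f "$PROJECT_DIR/Cargo.toml" ] && DETECTED="$DETECTED,rust"
    LANGUAGE="${DETECTED#,}"
    [ -n "$LANGUAGE" ] || LANGUAGE="generic"
fi

# Run checks for every detected (or explicitly requested) language.
IFS=',' read -ra LANGS <<< "$LANGUAGE"
for lang in "${LANGS[@]}"; do
    run_checks_for "$lang"   # hypothetical wrapper around the existing case block
done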
Default Credentials

Services are configured with weak/default credentials (e.g., Grafana admin, MinIO admin). Ensure secrets are sourced from the environment or a .env file rather than shipped as committed defaults in production samples (a sketch follows the excerpt below).

grafana:
  image: grafana/grafana:latest
  container_name: code-quality-grafana
  ports:
    - "3000:3000"
  environment:
    - GF_SECURITY_ADMIN_PASSWORD=admin
  volumes:
    - grafana_data:/var/lib/grafana
    - ./grafana/dashboards:/etc/grafana/provisioning/dashboards
    - ./grafana/datasources:/etc/grafana/provisioning/datasources
  networks:
    - quality-network

# Prometheus for metrics collection
prometheus:
  image: prom/prometheus:latest
  container_name: code-quality-prometheus
  ports:
    - "9090:9090"
  volumes:
    - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
    - prometheus_data:/prometheus
  command:
    - '--config.file=/etc/prometheus/prometheus.yml'
    - '--storage.tsdb.path=/prometheus'
    - '--web.console.libraries=/etc/prometheus/console_libraries'
    - '--web.console.templates=/etc/prometheus/consoles'
    - '--web.enable-lifecycle'
  networks:
    - quality-network

# Redis for caching quality results
redis:
  image: redis:alpine
  container_name: code-quality-redis
  ports:
    - "6379:6379"
  volumes:
    - redis_data:/data
  networks:
    - quality-network

# MinIO for storing quality reports
minio:
  image: minio/minio:latest
  container_name: code-quality-minio
  ports:
    - "9001:9001"
    - "9002:9002"
  environment:
    - MINIO_ROOT_USER=minioadmin
    - MINIO_ROOT_PASSWORD=minioadmin
  volumes:
    - minio_data:/data
  command: server /data --console-address ":9001" --address ":9002"
  networks:
    - quality-network

# Webhook receiver for notifications
webhook-receiver:
  image: adnanh/webhook:latest
  container_name: code-quality-webhook
  ports:
    - "9003:9000"
  volumes:
    - ./webhooks/hooks.json:/etc/webhook/hooks.json
    - ./webhooks/scripts:/var/scripts
  command: ["-verbose", "-hooks=/etc/webhook/hooks.json", "-hotreload"]
  networks:
    - quality-network

# Quality dashboard (custom web interface)
quality-dashboard:
  build:
    context: ./dashboard
    dockerfile: Dockerfile
  container_name: code-quality-dashboard
  ports:
    - "8080:80"
  environment:
    - SONARQUBE_URL=http://sonarqube:9000
    - GRAFANA_URL=http://grafana:3000
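
One way to apply this, sketched as a shell workflow (the --env-file flag follows docker-compose conventions and podman-compose support is an assumption; the compose file would reference variables as ${GF_SECURITY_ADMIN_PASSWORD:?} so startup fails when they are unset):

# Generate non-default secrets locally instead of committing them.
cat > .env <<EOF
GF_SECURITY_ADMIN_PASSWORD=$(openssl rand -base64 24)
MINIO_ROOT_USER=quality-admin
MINIO_ROOT_PASSWORD=$(openssl rand -base64 24)
EOF
chmod 600 .env

# Launch the stack with the overrides applied.
podman-compose --env-file .env up -d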


PR Code Suggestions ✨

Explore these optional code suggestions:

Category: High-level
Re-evaluate the project's scope

The PR's implementation, a simple script, does not match the complex system
described in its documentation and configuration files. The suggestion is to
either reduce the project's scope to an MVP to align them or use an existing
tool like MegaLinter.

Examples:

agents/code-quality-podman/config.yaml [111-152]
pipeline:
  stages:
    setup:
      description: "Initialize environment and pull container images"
      parallel: false
      timeout: 600

    lint:
      description: "Code style and syntax checking"
      parallel: true

 ... (clipped 32 lines)
agents/code-quality-podman/run-pipeline.sh [71-106]
case $LANGUAGE in
    "python")
        if [[ "$QUALITY_TOOLS" == *"pylint"* ]] || [ "$QUALITY_TOOLS" = "all" ]; then
            run_quality_check "Pylint" "python:3.11-alpine" "pip install pylint && pylint --output-format=json --reports=no . > quality-reports/pylint-report.json 2>/dev/null || true"
        fi

        if [[ "$QUALITY_TOOLS" == *"bandit"* ]] || [ "$QUALITY_TOOLS" = "all" ]; then
            run_quality_check "Bandit" "python:3.11-alpine" "pip install bandit && bandit -r . -f json -o quality-reports/bandit-report.json 2>/dev/null || true"
        fi


 ... (clipped 26 lines)

Solution Walkthrough:

Before:

# In config.yaml, a complex parallel pipeline is defined:
pipeline:
  stages:
    lint:
      parallel: true
      tools: ["eslint", "pylint", ...]
    security:
      parallel: true
      tools: ["semgrep", "bandit", "trivy", ...]
    ...

# But in run-pipeline.sh, the implementation is a simple sequential script:
case $LANGUAGE in
    "python")
        run_quality_check "Pylint" ...
        run_quality_check "Bandit" ...
        ;;
    "javascript")
        run_quality_check "ESLint" ...
        ;;
esac

After:

# Option 1: scope is reduced to an MVP.
# README.md is simplified:
# Code Quality Podman Pipeline Agent (MVP)
This agent runs linters for Python and JavaScript projects.
- Supports: Pylint, Bandit, ESLint
- Generates separate JSON reports for each tool.

# run-pipeline.sh remains the same, but now matches the documentation.
case $LANGUAGE in
    "python")
        run_quality_check "Pylint" ...
        run_quality_check "Bandit" ...
        ;;
    "javascript")
        run_quality_check "ESLint" ...
        ;;
esac
# Complex, unused config files are removed.
Suggestion importance[1-10]: 10


Why: This suggestion correctly identifies a critical, fundamental flaw in the PR: a massive discrepancy between the documented features and the actual implementation, making the current state misleading and non-functional as described.

Impact: High

Category: Possible issue
Fix ineffective code formatting check

Remove the --write flag from the Prettier arguments in config.yaml. The presence
of --write alongside --check causes files to be reformatted in place, making the
check ineffective.

agents/code-quality-podman/config.yaml [101]

-args: ["prettier", "--check", ".", "--write"]
+args: ["prettier", "--check", "."]
Suggestion importance[1-10]: 9


Why: This suggestion correctly identifies a critical bug where the Prettier formatting check is ineffective because the --write flag modifies files in-place, causing the check to always pass. This fix is essential for the tool to function as intended.

Impact: High
Properly handle and exit on errors

Modify the run_quality_check function to correctly handle podman execution
errors. The current implementation suppresses non-zero exit codes, preventing
the script from failing on critical errors.

agents/code-quality-podman/run-pipeline.sh [35-39]

 podman run --rm \
     -v "$PROJECT_DIR:/workspace:Z" \
     -w /workspace \
     "$container" \
-    bash -c "$command" || echo "⚠️  $tool completed with warnings"
+    bash -c "$command"
 
+if [ $? -ne 0 ]; then
+    echo "❌ ERROR: $tool failed to execute." >&2
+    exit 1
+fi
+
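
A complementary hardening (an assumption, not part of the suggestion): enabling bash strict mode at the top of run-pipeline.sh catches the same class of silent failures globally.

#!/usr/bin/env bash
# Strict mode: -e exits on any error, -u flags unset variables,
# -o pipefail propagates failures through pipelines.
set -euo pipefail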
Suggestion importance[1-10]: 8


Why: The suggestion correctly identifies a bug where critical podman execution errors are silenced, which could lead to silent pipeline failures. Fixing this significantly improves the script's robustness and reliability.

Impact: Medium
Do not suppress tool error messages

In run-pipeline.sh, remove 2>/dev/null from the tool execution commands.
Suppressing stderr hides important error messages from tools like Pylint, making
it difficult to debug configuration or execution issues.

agents/code-quality-podman/run-pipeline.sh [74]

-run_quality_check "Pylint" "python:3.11-alpine" "pip install pylint && pylint --output-format=json --reports=no . > quality-reports/pylint-report.json 2>/dev/null || true"
+run_quality_check "Pylint" "python:3.11-alpine" "pip install pylint && pylint --output-format=json --reports=no . > quality-reports/pylint-report.json || true"
Suggestion importance[1-10]: 8


Why: This suggestion correctly points out that suppressing stderr hides crucial error messages, making debugging difficult. Removing the redirection is a significant improvement for the script's maintainability and usability.

Impact: Medium
Prevent invalid JSON report generation

Improve the robustness of the summary report generation in run-pipeline.sh. The
current find command can produce invalid JSON if no report files are found; use
a loop to build the JSON array safely.

agents/code-quality-podman/run-pipeline.sh [124]

-$(find "$PROJECT_DIR/quality-reports" -name "*.json" -o -name "*.txt" | sed 's/.*/"&"/' | paste -sd, -)
+$(
+    REPORTS=()
+    while IFS= read -r -d $'\0' file; do
+        REPORTS+=("\"$file\"")
+    done < <(find "$PROJECT_DIR/quality-reports" \( -name "*.json" -o -name "*.txt" \) -print0)
+    
+    (IFS=,; echo "${REPORTS[*]}")
+)
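
If jq is available in the environment (an assumption; the pipeline does not otherwise use it), the same guarantee is simpler to express:

# Emits a valid JSON array of report paths, including [] when none are found.
find "$PROJECT_DIR/quality-reports" \( -name "*.json" -o -name "*.txt" \) \
    | jq -R . | jq -s .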
Suggestion importance[1-10]: 7


Why: The suggestion correctly identifies a potential bug where the summary report could become invalid JSON if no report files are generated. The proposed fix makes the JSON generation more robust, preventing potential downstream processing errors.

Impact: Medium

@alarinel changed the title from "chore: initial version of podman code quality pipeline tool" to "Podman Code Quality Pipelines: #QodoAgentChallenge Competition Submission" on Sep 24, 2025
@SagiMedina (Contributor)

This is really cool, @alarinel. I would love to see a video of it running

@SagiMedina (Contributor)

Also, links or an mcpServers JSON for the required MCP servers would be really helpful (quality_scanners, for example)
