Merged
43 changes: 43 additions & 0 deletions README.md
@@ -158,6 +158,47 @@ print(f"Start Time: {result['metrics']['start_time']}")
print(f"Exit Code: {result['exit_code']}")
print(f"Success: {result['metrics']['success']}")
```

## ⏱️ Task Scheduling & Automation

Python Script Runner ships with a lightweight scheduler to automate recurring or event-driven jobs without standing up extra infrastructure. The scheduler includes:

- **Automation & recurrence**: Define hourly, daily, weekly, or custom interval schedules (`every_5min`, `every_30min`) or supply a cron expression for more complex windows.
- **Dependency-aware execution**: Chain tasks together so downstream jobs only start after upstream tasks complete successfully.
- **Error handling & visibility**: Execution results are captured in memory with status, error messages, and next-run timestamps for quick troubleshooting.
- **Event triggers**: Bind tasks to custom events (for example, `on_script_failure`) and trigger them manually via the CLI.

Basic usage:

```python
from runner import TaskScheduler

scheduler = TaskScheduler()

# Schedule a daily report and a dependent distribution step
scheduler.add_scheduled_task("generate_report", "reports/daily.py", schedule="daily")
scheduler.add_scheduled_task(
"distribute_report",
"reports/distribute.py",
dependencies=["generate_report"],
)

# Run any tasks that are due (e.g., inside a cron shell)
for result in scheduler.run_due_tasks():
print(result)
```
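
The keyword schedules above reduce to simple interval arithmetic: a task is due once the configured interval has elapsed since its last run. As an illustration only (the names `INTERVALS`, `next_run`, and `is_due` are hypothetical helpers, not part of the `runner` API), a minimal sketch:

```python
from datetime import datetime, timedelta

# Hypothetical mapping from the schedule keywords above to intervals;
# the real TaskScheduler may implement this differently.
INTERVALS = {
    "every_5min": timedelta(minutes=5),
    "every_30min": timedelta(minutes=30),
    "hourly": timedelta(hours=1),
    "daily": timedelta(days=1),
    "weekly": timedelta(weeks=1),
}

def next_run(schedule: str, last_run: datetime) -> datetime:
    """Next eligible run time for a simple interval schedule."""
    return last_run + INTERVALS[schedule]

def is_due(schedule: str, last_run: datetime, now: datetime) -> bool:
    """A task is due once its interval has elapsed since the last run."""
    return now >= next_run(schedule, last_run)
```

Cron-expression schedules need a real cron parser rather than a fixed interval, which is why the library accepts both forms.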

You can also interact via the CLI:

```bash
python -m runner \
--add-scheduled-task nightly_cleanup \
--script scripts/cleanup.py \
--schedule daily \
--list-scheduled-tasks
```

The scheduler respects dependency ordering automatically; if a prerequisite task fails, dependent tasks are skipped until the next eligible run.
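
That skip-on-failure behavior can be sketched in plain Python. This is an illustration of the mechanism only, not the library's internals; `resolve_runnable` is a hypothetical helper:

```python
def resolve_runnable(tasks, results):
    """Return tasks whose prerequisites all succeeded.

    tasks:   {task_name: [dependency names]}
    results: {task_name: True/False} for tasks that already ran this cycle
    """
    runnable = []
    for name, deps in tasks.items():
        if name in results:
            continue  # already ran this cycle
        # Every dependency must have run AND succeeded; a failed or
        # not-yet-run prerequisite defers the task to a later cycle.
        if all(results.get(dep) is True for dep in deps):
            runnable.append(name)
    return runnable

tasks = {
    "generate_report": [],
    "distribute_report": ["generate_report"],
}
# If generate_report failed, distribute_report is skipped this cycle.
print(resolve_runnable(tasks, {"generate_report": False}))  # → []
```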
Copilot AI commented on Nov 20, 2025:
The "Benefit" line here appears to be misplaced. It describes SQLite audit trail benefits which relate to the "Compliance & Audit Logging" section above (section 10), not the Task Scheduling section. This line should either be moved up to follow line 160 (after the Compliance section code example) or removed if it's redundant.

**Benefit**: SQLite database provides immutable audit trail for SOC2/HIPAA compliance. Every execution logged with full context.

---
@@ -186,6 +227,8 @@ python -m runner script.py --slack-webhook "YOUR_WEBHOOK_URL"
python-script-runner myscript.py
```

> Need a quick smoke test? Run the bundled sample script with `python -m runner examples/sample_script.py` to see the default metrics output without creating your own file first.

### 📊 Default Output - Comprehensive Metrics Report

Every run automatically displays a detailed metrics report with:
2 changes: 1 addition & 1 deletion dashboard/backend/test_app.py
@@ -57,7 +57,7 @@ def test_database():
cursor.execute("""INSERT INTO executions
(script_path, script_args, start_time, end_time, execution_time_seconds, exit_code, success, stdout_lines, stderr_lines, created_at)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)""",
-                   ('test_script.py', '', now, now, 1.5, 0, True, 10, 0, now))
+                   ('sample_script.py', '', now, now, 1.5, 0, True, 10, 0, now))
exec_id = cursor.lastrowid

cursor.execute("""INSERT INTO metrics (execution_id, metric_name, metric_value)
2 changes: 1 addition & 1 deletion docs/installation.md
@@ -57,7 +57,7 @@ docker run --rm psr myscript.py
```bash
python runner.py --version
python runner.py --help
-python runner.py test_script.py
+python runner.py examples/sample_script.py
```

## Troubleshooting
5 changes: 5 additions & 0 deletions examples/sample_script.py
@@ -0,0 +1,5 @@
#!/usr/bin/env python3
"""Simple sample script for Python Script Runner demonstration."""

print("Python Script Runner - Sample Script")
print("✅ Sample completed successfully")
4 changes: 2 additions & 2 deletions pyproject.toml
@@ -47,13 +47,13 @@ dependencies = [
]

[project.optional-dependencies]
-dashboard = ["fastapi>=0.104.0", "uvicorn[standard]>=0.24.0", "websockets>=12.0"]
+dashboard = ["fastapi>=0.104.0", "uvicorn[standard]>=0.24.0", "websockets>=12.0", "httpx>=0.27.0"]
export = ["pyarrow>=13.0.0", "scikit-learn>=1.3.0"]
otel = ["opentelemetry-api>=1.20.0", "opentelemetry-sdk>=1.20.0", "opentelemetry-exporter-jaeger>=1.20.0", "opentelemetry-instrumentation>=0.41b0"]
security = ["bandit>=1.7.5", "semgrep>=1.45.0", "safety>=2.3.0", "detect-secrets>=1.4.0", "cyclonedx-python>=4.0.0"]
cloud = ["boto3>=1.28.0", "azure-identity>=1.13.0", "google-cloud-compute>=1.13.0", "google-cloud-monitoring>=2.15.0"]
vault = ["hvac>=1.2.0"]
-all = ["fastapi>=0.104.0", "uvicorn[standard]>=0.24.0", "websockets>=12.0", "pyarrow>=13.0.0", "scikit-learn>=1.3.0", "opentelemetry-api>=1.20.0", "opentelemetry-sdk>=1.20.0", "opentelemetry-exporter-jaeger>=1.20.0", "opentelemetry-instrumentation>=0.41b0", "bandit>=1.7.5", "semgrep>=1.45.0", "safety>=2.3.0", "detect-secrets>=1.4.0", "cyclonedx-python>=4.0.0", "boto3>=1.28.0", "azure-identity>=1.13.0", "google-cloud-compute>=1.13.0", "google-cloud-monitoring>=2.15.0", "hvac>=1.2.0"]
+all = ["fastapi>=0.104.0", "uvicorn[standard]>=0.24.0", "websockets>=12.0", "httpx>=0.27.0", "pyarrow>=13.0.0", "scikit-learn>=1.3.0", "opentelemetry-api>=1.20.0", "opentelemetry-sdk>=1.20.0", "opentelemetry-exporter-jaeger>=1.20.0", "opentelemetry-instrumentation>=0.41b0", "bandit>=1.7.5", "semgrep>=1.45.0", "safety>=2.3.0", "detect-secrets>=1.4.0", "cyclonedx-python>=4.0.0", "boto3>=1.28.0", "azure-identity>=1.13.0", "google-cloud-compute>=1.13.0", "google-cloud-monitoring>=2.15.0", "hvac>=1.2.0"]
dev = ["pytest>=7.0.0", "pytest-cov>=4.0.0", "black>=22.0.0", "flake8>=4.0.0", "mypy>=0.900"]
docs = ["mkdocs>=1.4.0", "mkdocs-material>=9.0.0"]

2 changes: 1 addition & 1 deletion release.sh
@@ -728,7 +728,7 @@ cmd_validate() {
# Check Python compilation
print_step "Checking code quality..."
local compile_output
-if compile_output=$(python3 -m py_compile runner.py test_script.py 2>&1); then
+if compile_output=$(python3 -m py_compile runner.py examples/sample_script.py 2>&1); then
print_success "Compilation successful"
else
print_error "Python compilation failed:"
2 changes: 2 additions & 0 deletions requirements-dev.txt
@@ -8,6 +8,8 @@
# Testing Framework
pytest==7.4.3
pytest-cov==4.1.0
httpx==0.27.2
pytest-benchmark==4.0.0

# Code Quality & Formatting
black==23.12.0