A lightweight memory profiling toolkit for embedded systems engineers running Python on resource-constrained devices. It provides byte-level allocation tracking, pre-deployment static analysis, fragmentation visualization, and cross-platform memory prediction.
- Track every memory allocation down to individual bytes
- Microsecond-precision timestamps
- Full stack traces for allocation sources
- Size class categorization (tiny, small, medium, large)
- Real-time allocation event streaming
- Pre-deployment memory usage estimation
- Code path analysis
- Memory hotspot detection
- Device constraint validation
- Bytecode analysis for memory prediction
- Text-based memory visualization
- Fragmentation percentage calculation
- Free block distribution tracking
- Defragmentation strategy suggestions
- Memory layout visualization
- Data structure optimization suggestions
- Object pooling candidates identification
- Algorithm efficiency recommendations
- Redundancy detection
- Implementation effort estimation
- Support for ARM32/64, x86/64, ESP32, STM32, AVR
- Platform-specific memory modeling
- Alignment requirement calculations
- Memory overhead predictions
- Cross-platform comparison reports
Install the dependencies:

```bash
pip install -r requirements.txt
```

Track allocations at runtime with `MicroTracker`:

```python
from src.micro_tracker import MicroTracker

# Track allocations in your code
with MicroTracker() as tracker:
    # Your code here
    data = [i for i in range(1000)]
    cache = {'key': 'value'}

# Get allocation statistics
stats = tracker.get_stats()
print(f"Total allocations: {stats.total_count}")
print(f"Current memory: {stats.current_memory} bytes")
print(f"Peak memory: {stats.peak_memory} bytes")
# Stream real-time allocations
tracker.stream_allocations()
```
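The peak figure from `get_stats()` can also serve as a simple budget guard in host-side test runs. A minimal sketch, where the 192 KB budget and `run_sensor_pipeline` workload are placeholders rather than anything defined by the tool:

```python
from src.micro_tracker import MicroTracker

DEVICE_RAM_BUDGET = 192 * 1024  # hypothetical target budget in bytes


def run_sensor_pipeline():
    # Placeholder for the real workload under test
    return [i * i for i in range(256)]


with MicroTracker() as tracker:
    run_sensor_pipeline()

peak = tracker.get_stats().peak_memory
if peak > DEVICE_RAM_BUDGET:
    raise RuntimeError(f"Peak usage {peak} bytes exceeds budget {DEVICE_RAM_BUDGET} bytes")
```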
Estimate memory usage before deployment with `StaticAnalyzer`:

```python
from src.static_analyzer import StaticAnalyzer
from pathlib import Path
# Analyze before deployment
analyzer = StaticAnalyzer(device_memory_limit=256*1024) # 256KB limit
report = analyzer.analyze_file(Path("your_code.py"))
print(f"Estimated memory: {report.total_estimate.average_bytes} bytes")
print(f"Memory hotspots: {report.memory_hotspots}")
print(f"Recommendations: {report.recommendations}")from src.fragmentation import FragmentationAnalyzer
Model fragmentation of a fixed-size memory region with `FragmentationAnalyzer`:

```python
from src.fragmentation import FragmentationAnalyzer

# Analyze memory fragmentation
analyzer = FragmentationAnalyzer(memory_size=64*1024) # 64KB
# Simulate allocations
allocations = [
(1024, "buffer"),
(512, "cache"),
(2048, "data"),
]
report = analyzer.analyze_allocation_pattern(allocations)
print(report.visualization)
print(f"Fragmentation: {report.metrics.fragmentation_percentage}%")from src.optimizer import MemoryOptimizer
Get data-structure and algorithm suggestions with `MemoryOptimizer`:

```python
from src.optimizer import MemoryOptimizer
from pathlib import Path

# Get optimization suggestions
optimizer = MemoryOptimizer()
report = optimizer.analyze_file(Path("your_code.py"))
for suggestion in report.suggestions:
print(f"Priority {suggestion.priority}: {suggestion.suggested_approach}")
print(f"Estimated savings: {suggestion.estimated_savings} bytes")
print(f"Effort: {suggestion.implementation_effort}")from src.cross_platform import CrossPlatformPredictor, Architecture
Compare the same allocation plan across target architectures with `CrossPlatformPredictor`:

```python
from src.cross_platform import CrossPlatformPredictor, Architecture

# Predict memory usage across platforms
predictor = CrossPlatformPredictor()
allocations = [
("sensor_data", 4096),
("message_buffer", 1024),
("state_machine", 512),
]
report = predictor.predict_memory(
    allocations,
    [Architecture.ESP32, Architecture.STM32, Architecture.AVR]
)
for arch, prediction in report.predictions.items():
print(f"{arch.value}: {prediction.peak_memory} bytes")
if prediction.warnings:
print(f" Warnings: {prediction.warnings}")To run all tests and generate the JSON report:
To run all tests and generate the JSON report:

```bash
pytest --json-report --json-report-file=pytest_results.json
```

To run specific test modules:

```bash
pytest tests/test_micro_tracker.py -v
pytest tests/test_static_analyzer.py -v
pytest tests/test_fragmentation.py -v
pytest tests/test_optimizer.py -v
pytest tests/test_cross_platform.py -v
```

Project layout:

```
memory_profiler_tool_embedded_engineer/
├── src/
│   ├── __init__.py
│   ├── micro_tracker.py       # Byte-level allocation tracking
│   ├── static_analyzer.py     # Pre-deployment analysis
│   ├── fragmentation.py       # Fragmentation metrics
│   ├── optimizer.py           # Optimization recommendations
│   └── cross_platform.py      # Cross-compilation prediction
├── tests/
│   ├── __init__.py
│   ├── test_micro_tracker.py
│   ├── test_static_analyzer.py
│   ├── test_fragmentation.py
│   ├── test_optimizer.py
│   └── test_cross_platform.py
├── requirements.txt
└── README.md
```
This tool is designed with embedded systems constraints in mind:
- Minimal overhead: The profiler itself keeps its own memory footprint small
- No external dependencies: Pure Python implementation
- Text-based output: No GUI components, suitable for embedded environments
- Byte-level precision: Critical for memory-constrained devices
- Platform awareness: Understands different embedded architectures
Recommended practices:

- Pre-deployment Analysis: Always run static analysis before deploying to embedded devices
- Regular Profiling: Profile during development to catch memory issues early (see the test sketch after this list)
- Platform Testing: Test memory usage predictions against actual device measurements
- Fragmentation Monitoring: Check fragmentation patterns in long-running applications
- Optimization Priority: Focus on high-priority, low-effort optimizations first
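One way to make regular profiling stick is a memory regression test in the existing pytest suite. A sketch where the 64 KB ceiling and `build_telemetry_packet` function are placeholders for your own budget and code path:

```python
# tests/test_memory_budget.py (hypothetical additional test module)
from src.micro_tracker import MicroTracker

PEAK_BUDGET = 64 * 1024  # illustrative per-feature budget in bytes


def build_telemetry_packet():
    # Placeholder for the real code path under test
    return {"id": 1, "payload": bytes(256)}


def test_telemetry_packet_stays_within_budget():
    with MicroTracker() as tracker:
        build_telemetry_packet()
    assert tracker.get_stats().peak_memory <= PEAK_BUDGET
```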
Known limitations:

- Dynamic memory tracking requires the Python garbage collector
- Static analysis provides estimates, not guarantees
- Platform predictions are based on heuristics and may vary from actual usage
- Some Python internals may not be fully tracked
When contributing to this project:
- Ensure all tests pass
- Add tests for new features
- Keep embedded constraints in mind
- Document memory overhead of new features
- Test on multiple platforms when possible