# Filesize.js Benchmarks

This directory contains comprehensive performance benchmarks for the filesize.js library. The benchmarks are designed to measure performance across different usage patterns, option combinations, and edge cases.

## 📁 Benchmark Files

### 🏃 `basic-performance.js`
Tests fundamental performance characteristics of the filesize function:
- Basic conversion performance with various input sizes
- Different option combinations
- Memory usage analysis
- Baseline performance metrics

### ⚙️ `options-benchmark.js`
Analyzes the performance impact of different configuration options:
- Individual option performance costs
- Complex option combinations
- Relative performance comparisons
- Optimization insights

### 🔥 `stress-test.js`
Evaluates performance under challenging conditions:
- Edge cases and extreme values
- Error handling performance
- Memory pressure scenarios
- Performance consistency analysis
- BigInt support testing

### 🔧 `partial-benchmark.js`
Focuses on the partial function and functional programming patterns:
- Partial function vs direct calls
- Function creation overhead
- Functional programming patterns
- Currying performance analysis

### 🎯 `index.js`
Main benchmark runner that executes all test suites:
- Orchestrates all benchmark execution
- Provides a comprehensive summary
- System information reporting
- Error handling and reporting

## 🚀 Running Benchmarks

### Run All Benchmarks
```bash
cd benchmarks
node index.js
```

### Run Individual Benchmarks
```bash
# Basic performance tests
node basic-performance.js

# Options impact analysis
node options-benchmark.js

# Stress testing
node stress-test.js

# Partial function analysis
node partial-benchmark.js
```

### Enhanced Performance Mode
For more accurate memory-related benchmarks, run with garbage collection exposed:
```bash
node --expose-gc index.js
```
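
When the flag is set, Node exposes a global `gc()` hook, and benchmark code can trigger a collection between tests when the hook is present and skip it otherwise. A minimal sketch of that guard (the helper name is illustrative, not part of the benchmark suite):

```javascript
// Force a collection between tests when Node was started with --expose-gc;
// without the flag this is a no-op and the benchmark still runs normally.
function maybeCollectGarbage() {
  if (typeof globalThis.gc === 'function') {
    globalThis.gc();
  }
}
```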

## 📊 Understanding Results

### Performance Metrics
- **Ops/sec**: Operations per second (higher is better)
- **Avg (ms)**: Average execution time per operation (lower is better)
- **Total (ms)**: Total execution time for all iterations
- **Relative**: Performance relative to baseline (lower multiplier is better)
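
For example, if a test completes 100,000 iterations in 200 ms, the average is 0.002 ms per operation, which corresponds to roughly 500K ops/sec (1000 ms ÷ 0.002 ms).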

### Benchmark Categories

#### 🎯 **Basic Performance**
- Measures core function performance
- Tests with various input sizes (0 bytes to MAX_SAFE_INTEGER)
- Establishes baseline performance characteristics

#### ⚙️ **Options Impact**
- Quantifies performance cost of each option
- Identifies expensive operations (locale formatting, complex outputs)
- Helps optimize option usage

#### 🔥 **Stress Testing**
- Validates performance under extreme conditions
- Tests error handling efficiency
- Measures performance consistency
- Evaluates memory usage patterns

#### 🔧 **Functional Programming**
- Compares partial functions vs direct calls
- Analyzes currying overhead
- Tests functional composition patterns

## 📈 Performance Insights

### General Findings
- **Baseline Performance**: ~500K-1M+ ops/sec for basic conversions
- **Locale Formatting**: Significant overhead (~2-5x slower)
- **Object Output**: Minimal overhead (~10-20% slower)
- **Complex Options**: Compound performance impact
- **Partial Functions**: ~10-30% overhead, amortized over multiple uses

### Optimization Tips
1. **Cache Partial Functions**: Reuse partial functions for repeated operations (see the sketch after this list)
2. **Avoid Locale When Possible**: Use locale formatting sparingly
3. **Prefer String Output**: Fastest output format for most use cases
4. **Batch Operations**: Group similar operations together
5. **Profile Your Usage**: Run benchmarks with your specific patterns
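
A minimal sketch of the first tip, assuming `partial` is exported alongside `filesize` from the same build (the option values here are arbitrary examples):

```javascript
import { filesize, partial } from '../dist/filesize.js';

// Build the configured formatter once and reuse it for every value...
const toIEC = partial({ standard: 'iec', round: 1 });

const sizes = [1024, 1048576, 1073741824];
const labels = sizes.map(toIEC);

// ...instead of repeating the options object on every call:
const labelsDirect = sizes.map(n => filesize(n, { standard: 'iec', round: 1 }));
```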

## 🔧 Benchmark Configuration

### Iteration Counts
- **Basic Performance**: 100,000 iterations
- **Options Testing**: 50,000 iterations
- **Stress Testing**: 10,000 iterations
- **Partial Functions**: 100,000 iterations

### Warmup Periods
All benchmarks include warmup periods to ensure JIT optimization and stable measurements.

### Memory Management
- Garbage collection calls between tests (when available)
- Memory pressure testing
- Memory usage monitoring (see the sketch after this list)
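
As a rough illustration of the monitoring approach (not the suite's exact code; heap numbers are noisy unless the process runs with `--expose-gc`), heap usage can be sampled around a batch of calls:

```javascript
import { filesize } from '../dist/filesize.js';

// Sample heap usage before and after a batch of conversions to spot
// allocation-heavy option combinations.
const before = process.memoryUsage().heapUsed;
for (let i = 0; i < 10000; i++) {
  filesize(Math.random() * 1e12, { output: 'object' });
}
const after = process.memoryUsage().heapUsed;
console.log(`Heap delta: ${((after - before) / 1024 / 1024).toFixed(2)} MB`);
```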

## 🛠️ Customizing Benchmarks

### Adding New Tests
1. Create a new benchmark file in the `benchmarks` directory
2. Follow the existing pattern for benchmark functions
3. Add the file to `BENCHMARK_FILES` in `index.js`

### Modifying Parameters
- Adjust `ITERATIONS` constants for different test durations
- Modify test data sets for specific scenarios
- Add new option combinations for testing

### Example Custom Benchmark
```javascript
import { filesize } from '../dist/filesize.js';

const ITERATIONS = 10000;

function benchmark(testName, testFunction, iterations = ITERATIONS) {
  // Warmup so the JIT has optimized the hot path before timing starts
  for (let i = 0; i < 1000; i++) {
    testFunction();
  }

  const startTime = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) {
    testFunction();
  }
  const endTime = process.hrtime.bigint();

  const totalTime = Number(endTime - startTime) / 1000000; // nanoseconds -> milliseconds
  const avgTime = totalTime / iterations;                   // ms per operation
  const opsPerSecond = Math.round(1000 / avgTime);

  return { testName, opsPerSecond, avgTime };
}

// Your custom test
const result = benchmark('Custom test', () => {
  return filesize(1024 * 1024, { /* your options */ });
});

console.log(result);
```
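
Save the file in the `benchmarks` directory and run it directly with `node` (the filename is up to you); if it should run as part of the full suite, also add it to `BENCHMARK_FILES` in `index.js` as described above.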

## 🔍 Interpreting Results

### Performance Baselines
- **Excellent**: >1M ops/sec
- **Good**: 500K-1M ops/sec
- **Acceptable**: 100K-500K ops/sec
- **Slow**: <100K ops/sec

### When to Optimize
- If your use case requires >100K operations/sec
- When performance regression is detected
- Before production deployment with high load
- When adding new features or options

### Profiling Your Application
1. Run benchmarks with your specific usage patterns
2. Identify bottlenecks in your option combinations
3. Test with your actual data sizes
4. Measure end-to-end performance in your application

## 🤝 Contributing

When contributing performance improvements:
1. Run all benchmarks before and after changes
2. Document performance impacts in commit messages
3. Add new benchmarks for new features
4. Ensure no significant regressions in existing tests

## 📚 Additional Resources

- [MDN Performance Best Practices](https://developer.mozilla.org/en-US/docs/Web/Performance)
- [Node.js Performance Hooks](https://nodejs.org/api/perf_hooks.html)
- [V8 Performance Tips](https://v8.dev/blog/optimizing-cpp-and-js)