Commit c21578d

Add Scrape Time Rule Evaluation

Signed-off-by: Julien Pivotto <[email protected]>
## Scrape-time Rule Evaluation
* **Owners:**
  * [@roidelapluie](https://github.com/roidelapluie)

* **Implementation Status:** Not implemented

* **Related Issues and PRs:**
  * [Original feature request](https://github.com/prometheus/prometheus/issues/394)

> This proposal introduces the ability to evaluate PromQL expressions at scrape time against raw metrics from a single scrape, before any relabeling occurs. This enables the creation of derived metrics that combine values from the same scrape without the time skew issues inherent in recording rules or query-time calculations. Additionally, by evaluating before relabeling, this enables powerful cardinality reduction strategies where aggregated metrics can be computed and stored while dropping the original high-cardinality metrics.
## Why
Prometheus users frequently need to calculate derived metrics by combining values from multiple related metrics. A common example is calculating "Memory Used" from /proc/meminfo statistics, which requires subtracting available memory from total memory. Currently, users must either:

1. Calculate these at query time, which can become complex and repetitive
2. Use recording rules, which run at their own interval, separate from scraping

Beyond that, scrape-time rules enable powerful cardinality management strategies. For example, an application might expose 100 detailed per-component metrics, but for long-term storage, you only need the aggregate total. With scrape-time rules, you can:

1. Create a `sum()` rule that aggregates the 100 metrics into a single metric
2. Use `metric_relabel_configs` to drop the original 100 detailed metrics
3. Store only the aggregate, reducing cardinality by 99%

This is only possible because rules evaluate before relabeling. If you tried to do this with recording rules, you'd need to scrape and store all 100 metrics first, defeating the purpose of cardinality reduction at ingestion time.

### Pitfalls of the current solution

The recording rule approach is problematic because it introduces time skew that could otherwise be avoided. It also means that staleness markers are inserted when the rule executes rather than when the target goes down, so a derived metric could still be calculated up to `scrape_interval` after a target is down. It also means that if multiple targets have different scrape intervals, they need different rule evaluation intervals.

For the cardinality reduction use case, extra work is also needed if you do not want to send the non-aggregated metrics to remote storage, and they would still take up space on local disk.
## Goals
* Enable evaluation of PromQL expressions at scrape time against raw scraped metrics
* Guarantee that all input metrics come from the same scrape, eliminating time skew
* Evaluate rules after parsing but before relabeling, ensuring rules work with original metric names
* Enable cardinality reduction by aggregating metrics before storage and dropping originals via relabeling
* Support instant vector PromQL operations (arithmetic, aggregations, functions operating on current values)
* Pre-process and validate rules at configuration load time to fail fast on invalid rules
* Maintain scrape performance by running rule evaluation in the existing scrape pipeline only when configured
## Non-Goals
* Support for range vector operations (e.g., `rate()`, `increase()`, selectors with `[5m]`)
  * These require historical data, which is not available at scrape time
* Support for time-based modifiers (`offset`, `@` timestamp)
  * Only the current scrape's data is available
* Support for rules that reference target labels
  * Rules evaluate before target labels are added
* Replacement of recording rules for all use cases
  * Recording rules remain useful for expensive aggregations over time ranges
* Support for rules that span multiple scrapes or targets
  * Each scrape is evaluated independently with only its own metrics
* Support for alerting
  * Out of scope for now
## How
### Configuration

Scrape-time rules will be configured in the scrape configuration under a new `scrape_rules` field:

```yaml
scrape_configs:
  - job_name: 'node'
    static_configs:
      - targets: ['localhost:9100']
    scrape_rules:
      - record: node_memory_used_bytes
        expr: node_memory_MemTotal_bytes - node_memory_MemAvailable_bytes
      - record: node_filesystem_avail_percent
        expr: 100 * node_filesystem_avail_bytes / node_filesystem_size_bytes
      - record: node_cpu_busy_percent
        expr: 100 - (avg by (instance) (node_cpu_seconds_total{mode="idle"}) * 100)
```
Each rule consists of:

- `record`: The name of the metric to create (must be a valid metric name)
- `expr`: A PromQL expression to evaluate (must be an instant vector expression)
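
In Go terms, each entry could map to a small struct in the configuration package. The following is a hypothetical sketch; neither the type name nor the field name is existing Prometheus code.

```go
package config

// ScrapeRuleConfig is a hypothetical struct corresponding to one entry of the
// proposed scrape_rules list.
type ScrapeRuleConfig struct {
	Record string `yaml:"record"` // name of the synthetic metric to create
	Expr   string `yaml:"expr"`   // instant-vector PromQL expression to evaluate
}

// ScrapeConfig would gain a matching field alongside metric_relabel_configs, e.g.:
//
//	ScrapeRules []ScrapeRuleConfig `yaml:"scrape_rules,omitempty"`
```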
#### Example: Cardinality Reduction
```yaml
scrape_configs:
  - job_name: 'application'
    static_configs:
      - targets: ['localhost:8080']
    scrape_rules:
      # Aggregate 100 per-component metrics into a total
      - record: http_requests_total
        expr: sum(http_requests_by_component_total)
    metric_relabel_configs:
      # Drop the detailed per-component metrics
      - source_labels: [__name__]
        regex: 'http_requests_by_component_total'
        action: drop
```
This pattern:

- Creates `http_requests_total` as the sum of all components at scrape time
- Drops the original 100 `http_requests_by_component_total` metrics via relabeling
- Reduces cardinality by 99% while preserving the aggregate view
- Only works because scrape rules evaluate before relabeling
### Scraping Pipeline Integration
The scrape-time rule evaluation will be inserted as a new stage in the scraping pipeline, between parsing and relabeling:

```
Current Flow:
1. FETCH (HTTP GET)
2. PARSE (Text Format Parser)
3. RELABEL (Apply target labels + metric_relabel_configs)
4. VALIDATE
5. APPEND TO STORAGE

New Flow:
1. FETCH (HTTP GET)
2. PARSE (Text Format Parser)
3. SCRAPE-TIME RULES ← NEW STAGE
4. RELABEL (Apply target labels + metric_relabel_configs)
5. VALIDATE
6. APPEND TO STORAGE
```
This positioning ensures:

- Rules have access to all scraped metrics with their original names
- Rules don't have access to target labels (job, instance), which aren't available yet
- Synthetic metrics flow through the same relabeling and validation as scraped metrics
- Cache and staleness tracking work correctly for both scraped and synthetic metrics
- Cardinality reduction is possible: aggregated metrics can be created and original high-cardinality metrics dropped via `metric_relabel_configs` before they reach storage
### Implementation Details
#### Rule Pre-processing (at ApplyConfig time)

**In `scrape.go: NewManager()`:**

The scrape manager is initialized with a PromQL engine instance configured with the essential options from the query engine. This engine will be reused for all scrape-time rule evaluations across all scrape pools.

The scrape-time PromQL engine will have the same configuration as the query engine. It could be instrumented with its own Prometheus metrics collector using a distinct prefix (such as `scrape_rules_engine_`) to allow separate monitoring of scrape-time vs. query-time PromQL performance.
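
A rough sketch of that wiring follows. The function name is hypothetical, `promql.EngineOpts` has more fields than shown, and the real option values would be copied from whatever is used to build the query engine.

```go
package scrape

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/prometheus/promql"
)

// newScrapeRulesEngine creates the shared engine used for all scrape-time rule
// evaluations. Wrapping the registerer gives the engine's own metrics a distinct
// prefix so they can be monitored separately from the query engine.
func newScrapeRulesEngine(reg prometheus.Registerer) *promql.Engine {
	return promql.NewEngine(promql.EngineOpts{
		// Placeholder limits; the real values would mirror the query engine's options.
		MaxSamples: 50000000,
		Timeout:    2 * time.Minute,
		Reg:        prometheus.WrapRegistererWithPrefix("scrape_rules_engine_", reg),
	})
}
```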
**In `scrape.go: newScrapePool()`:**

1. Parse each `scrape_rules` expression using the standard PromQL parser
2. Validate that expressions don't use disallowed features (range vector selectors, `@` modifiers, `offset`, subqueries, etc.)
3. Extract metric selectors/matchers from each rule for optimization
4. Store the parsed matchers and selectors in the scrapePool config (see the sketch below)
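
A minimal sketch of this pre-processing, assuming hypothetical names such as `scrapeRule` and `preprocessScrapeRule` (only `parser.ParseExpr` and `parser.ExtractSelectors` are existing Prometheus APIs); the validation helper is sketched under "PromQL Expression Restrictions" below.

```go
package scrape

import (
	"fmt"

	"github.com/prometheus/prometheus/model/labels"
	"github.com/prometheus/prometheus/promql/parser"
)

// scrapeRule holds one pre-processed scrape-time rule (hypothetical type).
type scrapeRule struct {
	Record    string              // name of the synthetic metric to create
	Expr      parser.Expr         // pre-parsed expression, reused for every scrape
	Selectors [][]*labels.Matcher // matchers used to pre-filter scraped samples
}

// preprocessScrapeRule parses and validates a configured rule at ApplyConfig
// time, so invalid rules are rejected before any scrape happens.
func preprocessScrapeRule(record, expr string) (*scrapeRule, error) {
	parsed, err := parser.ParseExpr(expr)
	if err != nil {
		return nil, fmt.Errorf("invalid scrape rule %q: %w", record, err)
	}
	// validateScrapeRuleExpr rejects range vectors, @, offset and subqueries;
	// see the sketch under "PromQL Expression Restrictions" below.
	if err := validateScrapeRuleExpr(parsed); err != nil {
		return nil, fmt.Errorf("invalid scrape rule %q: %w", record, err)
	}
	return &scrapeRule{
		Record:    record,
		Expr:      parsed,
		Selectors: parser.ExtractSelectors(parsed),
	}, nil
}
```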
#### Rule Evaluation (at scrape time)
In `scrape.go: scrapeLoop.append()`, after parsing is complete but before relabeling:

1. Collect all scraped samples that match the selectors in the rules into an in-memory storage implementation
2. For each scrape rule:
   a. Use the scrape manager's PromQL engine (configured with the same options as the query engine)
   b. Create a query context that points to the in-memory storage containing only the current scrape's samples
   c. Evaluate the pre-parsed expression against the in-memory sample set via the context
   d. Add result samples to the in-memory storage for subsequent rules
3. After all rules are evaluated, merge results with the scraped samples (or do this directly at `2.d`?)
4. Continue with the normal relabeling pipeline (see the sketch below)
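
A minimal sketch of that loop, under the assumption of hypothetical types and helpers: `scrapeRule` from the pre-processing sketch, a simplified `sample` type, and a `sampleStore` whose `EvalInstant` would be backed by the shared PromQL engine and an in-memory `storage.Queryable` (details such as the `labels.Builder` API vary between Prometheus versions).

```go
package scrape

import (
	"context"
	"time"

	"github.com/prometheus/prometheus/model/labels"
	"github.com/prometheus/prometheus/promql/parser"
)

// sample is a simplified stand-in for the scrape loop's internal sample type.
type sample struct {
	metric labels.Labels
	value  float64
}

// sampleStore is an in-memory view over the current scrape only. EvalInstant
// would be backed by the manager's shared promql.Engine querying a
// storage.Queryable that contains nothing but the added samples.
type sampleStore interface {
	Add(s sample)
	EvalInstant(ctx context.Context, expr parser.Expr, ts time.Time) ([]sample, error)
}

// evalScrapeRules mirrors steps 1-4 above: filter the scraped samples, evaluate
// each rule in order, feed results back so later rules can use them, and return
// the derived samples to be merged before relabeling.
func evalScrapeRules(ctx context.Context, rules []scrapeRule, scraped []sample, store sampleStore, ts time.Time) ([]sample, error) {
	// Step 1: only samples matched by some rule selector need to be queryable.
	for _, s := range scraped {
		if matchesAnySelector(rules, s.metric) {
			store.Add(s)
		}
	}

	var derived []sample
	for _, r := range rules {
		// Steps 2a-2c: evaluate the pre-parsed expression against the current
		// scrape only; historical data is simply not reachable here.
		out, err := store.EvalInstant(ctx, r.Expr, ts)
		if err != nil {
			return nil, err
		}
		for _, s := range out {
			// Result series take the rule's record name as __name__.
			s.metric = labels.NewBuilder(s.metric).Set(labels.MetricName, r.Record).Labels()
			store.Add(s) // step 2d: visible to subsequent rules
			derived = append(derived, s)
		}
	}
	// Steps 3-4: the derived samples join the scraped ones and continue through
	// target labels, metric_relabel_configs, validation and the normal append path.
	return derived, nil
}

func matchesAnySelector(rules []scrapeRule, m labels.Labels) bool {
	for _, r := range rules {
		for _, sel := range r.Selectors {
			matched := true
			for _, matcher := range sel {
				if !matcher.Matches(m.Get(matcher.Name)) {
					matched = false
					break
				}
			}
			if matched {
				return true
			}
		}
	}
	return false
}
```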
This design ensures:

- No modifications to the PromQL engine itself
- Consistent behavior between scrape-time and query-time evaluation
- The in-memory storage naturally prevents access to historical data
#### PromQL Expression Restrictions
**Disallowed (will fail config validation with a descriptive error; see the validation sketch below):**

- Range vector selectors: `metric_name[5m]`
- Time-based modifiers: `offset 5m`, `@ 1234567890`
- Range-dependent functions (e.g., `rate()`, `increase()`): these all take range vector arguments, so they are rejected implicitly once range vector selectors are disallowed
- Subqueries: `rate(metric[5m])[10m:1m]`
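
These restrictions could be enforced with a single AST walk over the pre-parsed expression. The sketch below is illustrative: the function name is hypothetical, and field names such as `OriginalOffset`, `Timestamp`, and `StartOrEnd` follow the current `promql/parser` AST and may differ between versions.

```go
package scrape

import (
	"errors"

	"github.com/prometheus/prometheus/promql/parser"
)

// validateScrapeRuleExpr enforces the restrictions above by walking the parsed
// expression; returning a non-nil error from the inspector stops the traversal.
func validateScrapeRuleExpr(expr parser.Expr) error {
	var validationErr error
	parser.Inspect(expr, func(node parser.Node, _ []parser.Node) error {
		switch n := node.(type) {
		case *parser.MatrixSelector:
			validationErr = errors.New("range vector selectors are not allowed in scrape rules")
		case *parser.SubqueryExpr:
			validationErr = errors.New("subqueries are not allowed in scrape rules")
		case *parser.VectorSelector:
			if n.OriginalOffset != 0 {
				validationErr = errors.New("offset modifiers are not allowed in scrape rules")
			}
			if n.Timestamp != nil || n.StartOrEnd != 0 {
				validationErr = errors.New("@ modifiers are not allowed in scrape rules")
			}
		}
		return validationErr
	})
	return validationErr
}
```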
### Action plan
This should be straightforward to implement.

There should be performance tests verifying that the performance of the default scrape path (without rules) is not impacted.

This should live behind a feature flag, as it is experimental.

0 commit comments

Comments
 (0)