Conversation

Contributor

@lukel97 lukel97 commented Dec 1, 2025

We can remove a lot of the noise from execution time results by simply ignoring any changes when the binary hashes are the same.
This is implemented by adding a column to the TestSuiteSampleFields table, similar to bigger_is_better, and setting it to true for execution_time.
We still want to flag regressions on identical binaries for fields such as compile time.

This was my first time poking around the migrations infrastructure. Migrations get automatically applied whenever the server is launched, so there shouldn't be any need to manually run any scripts.
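The actual migration lives in upgrade_17_to_18.py (not shown in this thread). As a rough illustration of the schema change it performs, here is a minimal sqlite3 sketch; the table and column names come from the PR, but the surrounding schema and contents are invented for the example, and LNT's real migration helpers are not used:

```python
# Illustrative sketch only, not LNT's actual migration code.
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical minimal version of the TestSuiteSampleFields table.
conn.execute("""
    CREATE TABLE "TestSuiteSampleFields" (
        "ID" INTEGER PRIMARY KEY,
        "Name" TEXT,
        "bigger_is_better" INTEGER DEFAULT 0
    )
""")
conn.execute('INSERT INTO "TestSuiteSampleFields" ("Name") VALUES (?)',
             ("execution_time",))
conn.execute('INSERT INTO "TestSuiteSampleFields" ("Name") VALUES (?)',
             ("compile_time",))

# The migration: add the new column, defaulting to 0 (false)...
conn.execute('ALTER TABLE "TestSuiteSampleFields" '
             'ADD COLUMN "ignore_same_hash" INTEGER DEFAULT 0')
# ...and enable it only for execution_time.
conn.execute('UPDATE "TestSuiteSampleFields" SET "ignore_same_hash" = 1 '
             'WHERE "Name" = ?', ("execution_time",))
conn.commit()

rows = dict(conn.execute(
    'SELECT "Name", "ignore_same_hash" FROM "TestSuiteSampleFields"'))
print(rows)  # execution_time -> 1, compile_time -> 0
```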

Copilot finished reviewing on behalf of lukel97 December 1, 2025 09:24

Copilot AI left a comment


Pull request overview

This PR reduces noise in execution time regression reports by ignoring changes when binary hashes are identical. The implementation adds an ignore_same_hash field to the database schema, similar to the existing bigger_is_better field, and sets it to true for execution_time metrics while keeping compile time and other metrics unaffected.

Key changes:

  • Added ignore_same_hash database column and field to SampleField class
  • Implemented logic in ComparisonResult to skip regression detection when hashes match
  • Created database migration script to add the new column and initialize it appropriately

Reviewed changes

Copilot reviewed 6 out of 7 changed files in this pull request and generated no comments.

Summary per file:

  • lnt/server/db/migrations/upgrade_17_to_18.py: Migration script that adds the ignore_same_hash column to the TestSuiteSampleFields table and sets it to 1 for execution_time
  • lnt/server/db/testsuite.py: Updated the SampleField class to include the ignore_same_hash field in the constructor, copy method, and JSON serialization
  • lnt/server/reporting/analysis.py: Added ignore_same_hash logic to ComparisonResult to return UNCHANGED_PASS when hashes match
  • tests/server/reporting/analysis.py: Added comprehensive test coverage for the ignore_same_hash feature with multiple scenarios
  • schemas/nts.yaml: Added the ignore_same_hash: true flag to the execution_time metric
  • tests/server/ui/test_api.py: Updated the schema validation test to include the default ignore_same_hash field
  • .gitignore: Added *~ to ignore editor backup files
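The early-exit in ComparisonResult described above can be sketched as follows. This is an illustrative standalone function, not the real class from lnt/server/reporting/analysis.py; the threshold value and the helper's signature are invented for the example:

```python
# Hypothetical sketch of the hash check, not LNT's actual ComparisonResult.
UNCHANGED_PASS = "UNCHANGED_PASS"
REGRESSED = "REGRESSED"

def classify(cur_value, prev_value, cur_hash, prev_hash,
             ignore_same_hash, min_pct_change=0.01):
    # Identical binaries cannot genuinely change a hash-sensitive metric
    # like execution time, so treat any delta between them as noise.
    if ignore_same_hash and cur_hash and prev_hash and cur_hash == prev_hash:
        return UNCHANGED_PASS
    # Otherwise fall through to the usual percentage-change check.
    delta = (cur_value - prev_value) / prev_value
    return REGRESSED if delta > min_pct_change else UNCHANGED_PASS

print(classify(1.10, 1.00, "abc", "abc", True))   # UNCHANGED_PASS: same hash
print(classify(1.10, 1.00, "abc", "def", True))   # REGRESSED: hashes differ
print(classify(1.10, 1.00, "abc", "abc", False))  # REGRESSED: check disabled
```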


Contributor

@DavidSpickett DavidSpickett left a comment


The idea makes sense to me, but I'm out of my depth with this database stuff.

.gitignore Outdated
test_run_tmp
tests/**/Output
venv
*~
Contributor


This is for what exactly, some kind of temp file?

Contributor Author


Yeah, backup files from Emacs and some other editors. I didn't mean to include this in this PR though, will take it out.

# Ignore changes if the hash of the binary is the same and the field is
# sensitive to the hash, e.g. execution time.
if self.ignore_same_hash:
    if self.cur_hash and self.prev_hash and self.cur_hash == self.prev_hash:
Contributor


I'm not sure about completely ignoring these: if identical binaries show changes, that can be a good indication of the noise level, and changed binaries may be affected by the same noise. I'm not sure whether that's possible, but it might be good to display the results for binaries with the same hash separately.

Contributor Author


FWIW, LNT already detects noisy results based on the stddev and ignores them; the code for it is later in this function.

This also only affects when regressions are flagged, i.e. the "Run-over-run changes detail > Performance regressions - execution time" table at the top. You can still see the differences between the runs in the test results table below when you check "show all values", which will reveal the noisy tests.

Member


I tend to agree with @fhahn, I don't really understand why we'd ignore subsequent results entirely.

I also don't fully understand the impact of this change: for multi-valued runs (e.g. running the same program multiple times and submitting multiple execution times for it), what does this PR change, if anything? I'm not familiar with how ComparisonResult is used, so that might be part of my confusion.

Contributor Author


This is addressing a long standing FIXME, see above in the code.

LNT flags improvements and regressions when there is a significant change detected between runs. It still always saves all the results of each run and you can always still view them. This just determines what is flagged to the user in the regressions list, i.e. this page here: https://cc-perf.igalia.com/db_default/v4/nts/regressions/?state=0

It ignores changes that aren't significant or are likely noise, e.g. smaller than MIN_PERCENTAGE_CHANGE. For runs with multiple samples it also uses the standard deviation and the Mann-Whitney U test to ignore changes that are statistically likely to be noise.
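A rough illustration of that kind of noise filtering (this is not LNT's actual code; the MIN_PERCENTAGE_CHANGE value and the stddev comparison here are simplified stand-ins, and the real code also applies the Mann-Whitney U test):

```python
# Simplified stand-in for LNT's noise filtering, for illustration only.
from statistics import mean, stdev

MIN_PERCENTAGE_CHANGE = 0.01  # illustrative threshold, not LNT's exact value

def is_significant(prev_samples, cur_samples):
    prev_mean, cur_mean = mean(prev_samples), mean(cur_samples)
    # Ignore changes smaller than the minimum percentage change.
    pct_change = abs(cur_mean - prev_mean) / prev_mean
    if pct_change < MIN_PERCENTAGE_CHANGE:
        return False
    # For multi-sample baselines, ignore deltas that sit within the
    # observed run-to-run standard deviation (i.e. likely noise).
    if len(prev_samples) > 1 and abs(cur_mean - prev_mean) < stdev(prev_samples):
        return False
    return True

print(is_significant([1.00, 1.02, 0.98], [1.005]))  # False: within noise
print(is_significant([1.00, 1.01, 0.99], [1.20]))   # True: clear change
```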

LNT has always done this to remove false positives from the list of regressions. This list of regressions is what you read on a daily basis from the LNT reports that are sent out by email etc., so the regressions should be as actionable as possible.

Some tests that are only slightly noisy still slip through the statistical checks, but given that the binary hasn't changed, we shouldn't flag them as regressions. Here's an example from cc-perf.igalia.com, where the colour of each run indicates the binary hash. The Equivalencing-flt binary hasn't changed over the past 7 runs, but there are 3 improvements detected in the green boxes. This PR would stop them from being flagged. It would however ensure that the improvements in miniFE above are still flagged, because the hashes are different.

[image: run history from cc-perf.igalia.com, with runs coloured by binary hash]

display_name: Execution Time
unit: seconds
unit_abbrev: s
ignore_same_hash: true
Contributor


If we go with this, it should also be applied to score for consistency.
