
Conversation

torokati44
Member

Re: #21146

Let's see what happens...

@torokati44 added the A-build (Area: Build scripts & CI) and T-chore (Type: Chore, like updating a dependency, it's gotta be done) labels on Jul 31, 2025
@torokati44
Member Author

Also referencing: #19259

@torokati44
Member Author

Ouch, OpenH264 only added Windows ARM builds in 2.5.0, and upgrading to that is blocked on #18581 (or #19445, but eh). See also: cisco/openh264#3873

@torokati44
Member Author

Does this mean the whole thing is running on x86 emulation? 🤔
https://github.com/ruffle-rs/ruffle/actions/runs/16645092011/job/47103772107?pr=21150#step:7:114

@lexcyn

lexcyn commented Jul 31, 2025

> Does this mean the whole thing is running on x86 emulation? 🤔 https://github.com/ruffle-rs/ruffle/actions/runs/16645092011/job/47103772107?pr=21150#step:7:114

Hmm, strange, as the Rust checks show the correct aarch64 host. Are there any artifacts/output from that build test? I could test it on my ARM64 system.

@torokati44
Member Author

That step is just setting up the nextest test runner.
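For reference, that kind of setup step usually amounts to little more than fetching the cargo-nextest binary before the tests run, roughly like this (a minimal sketch, not our actual workflow; the action and step names are illustrative):

```yaml
# Illustrative sketch of a nextest setup step in a GitHub Actions job;
# not Ruffle's actual workflow.
steps:
  # Fetch a prebuilt cargo-nextest binary for the runner's host platform.
  - name: Install nextest
    uses: taiki-e/install-action@nextest

  # The tests themselves run in a later step, via nextest instead of `cargo test`.
  - name: Run tests
    run: cargo nextest run --workspace
```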

@kjarosh
Member

kjarosh commented Jul 31, 2025

Tests on Windows ARM take even longer than on Windows x86_64 :/ despite the fact that some of them are disabled.

It's 15m 47s on ARM vs 12m 41s on x64 vs 4–5 min on Linux/Mac.

@torokati44
Member Author

That could also be because nothing is cached for it yet.
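If the cache does turn out to be the culprit, the fix would presumably just be giving the ARM job the same kind of cache step as the other jobs, along these lines (a sketch; it assumes the Swatinem/rust-cache action, which may not be what the workflow actually uses):

```yaml
# Sketch only: a per-job Rust cache step, assuming Swatinem/rust-cache.
steps:
  - uses: actions/checkout@v4

  # Restores and saves ~/.cargo and target/, keyed per job so the
  # windows-arm runner gets its own cache entry.
  - name: Cache Rust build artifacts
    uses: Swatinem/rust-cache@v2
    with:
      key: windows-arm # hypothetical extra key component for this job
```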

@adrian17
Collaborator

Can we not run this on each PR, perhaps? Only daily? (btw this question applies to more CI types, not just ARM)
Or maybe at least skip a batch of tests, like from_avmplus, which are <50% (but still growing fast)?
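If we only wanted to drop that batch on the slow runner, nextest's filter expressions would let us do it per job, something like this (a sketch; the test-name pattern is just an assumption about how those tests are named):

```yaml
# Sketch: skip the from_avmplus batch only on the slow runner, using a
# nextest filter expression.
steps:
  - name: Run tests (without from_avmplus)
    run: cargo nextest run --workspace -E 'not test(from_avmplus)'
```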

@kjarosh
Member

kjarosh commented Jul 31, 2025

> btw this question applies to more CI types, not just ARM

To be fair, only the longest job negatively impacts PR check times; as of now that's windows-latest, and possibly windows-arm in the future.

> Or maybe at least skip a batch of tests

I tried marking slow tests one day, but that was a failure. Maybe there's some mechanism other than features that we can use for filtering slow tests.

@torokati44
Member Author

Referencing: #15950 (comment)


@kjarosh
Member

kjarosh commented Oct 5, 2025

  1. Do we want to try filtering slow tests again for this use case?

  2. What will happen if auto-merge is enabled and a non-required check is still running?

@torokati44
Member Author

torokati44 commented Oct 6, 2025

  1. My suggestion is no, because tests that don't run automatically and frequently effectively don't exist. So, for this particular (admittedly not very popular, at the moment) arch/OS combo, we would have a lot fewer tests. Granted, at the moment, according to this philosophy, we have none at all.

  2. The docs say all required checks have to pass before auto-merge takes effect. (And this is what makes logical sense to me as well.)
    https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/incorporating-changes-from-a-pull-request/automatically-merging-a-pull-request#about-auto-merge
    Once it's merged, I think the remaining non-required checks still running for (and in the context of) the PR are cancelled (I think I've seen this happen a few times before), and an entirely new, complete suite is started for the push to the "master" branch. Those all run to completion and block nothing, but they still take up runner slots while running, probably making any PRs created or rebased immediately afterwards wait a bit longer for some of their checks to even be started.
