forked from llvm/llvm-project
[LoopUnroll] Pick changes introducing parallel reduction phis when unrolling. #11900
Open
juliannagele wants to merge 6 commits into swiftlang:stable/21.x from juliannagele:pick-parallel-accumulators
Conversation
Add tests for unrolling loops with reductions. In some cases, multiple parallel reduction phis could be retained to improve performance. (cherry picked from commit 90f733c)
Add additional tests from llvm#149470. (cherry picked from commit d10dc67)
…149470) When partially or runtime unrolling loops with reductions, the reductions are currently performed in-order in the loop, negating most of the benefit of unrolling such loops. This patch extends the unrolling code-gen to keep a parallel reduction phi per unrolled iteration and to combine the final result after the loop. For out-of-order CPUs, this allows executing multiple reduction chains in parallel.

For now, the transformation is restricted to cases where we unroll a small number of iterations (hard-coded to 4, but it should maybe be capped by TTI depending on the execution units), to avoid introducing an excessive number of parallel phis. It also requires single-block loops for now, where the unrolled iterations are known not to exit the loop (either due to runtime unrolling or partial unrolling). This ensures that the unrolled loop will have a single basic block, with a single exit block where we can place the final reduction-value computation.

The initial implementation also only supports parallelizing loops with a single reduction, and only integer reductions. Those restrictions are just to keep the initial implementation simpler and can easily be lifted as follow-ups.

With corresponding TTI changes to the AArch64 unrolling preferences, which I will also share soon, this triggers in ~300 loops across a wide range of workloads, including LLVM itself, ffmpeg, av1aom, sqlite, blender, brotli, zstd and more.

PR: llvm#149470 (cherry picked from commit 2d9e452)
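To illustrate the effect described above (this is not code from the patch, and the function names are hypothetical), here is a minimal C-level sketch of an integer sum reduction unrolled by 4, rewritten to keep one accumulator per unrolled iteration and combine them after the loop, so an out-of-order core can advance the four add chains independently:

```c
#include <stddef.h>

/* Original reduction: a single accumulator forms one serial chain of
 * adds, so unrolling alone does not expose any parallelism. */
int sum_serial(const int *a, size_t n) {
    int s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Source-level sketch of the unrolled form: one accumulator
 * (reduction phi) per unrolled iteration, combined after the loop.
 * Assumes n is a multiple of 4 for simplicity. */
int sum_parallel_accumulators(const int *a, size_t n) {
    int s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    for (size_t i = 0; i < n; i += 4) {
        s0 += a[i + 0];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    /* Final reduction-value computation in the single exit block. */
    return s0 + s1 + s2 + s3;
}
```

The patch performs the analogous rewrite on the reduction phi in IR during unrolling; the C form above is only meant to show the dependence-chain structure.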
…llvm#166353) In combination with llvm#149470, this will introduce parallel accumulators when unrolling reductions with vector instructions. See also llvm#166630, which aims to introduce parallel accumulators for FP reductions. (cherry picked from commit c73de97)
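As a purely illustrative sketch of what parallel accumulators for a reduction that is already expressed with vector instructions can look like at the source level (function and type names are hypothetical, using Clang/GCC vector extensions; this is not code from the patch):

```c
#include <stddef.h>
#include <string.h>

typedef int v4si __attribute__((vector_size(16))); /* 4 x i32 */

static inline v4si load4(const int *p) {
    v4si v;
    memcpy(&v, p, sizeof v); /* unaligned-safe vector load */
    return v;
}

/* Hypothetical sketch: a vectorized sum reduction unrolled by 2 with
 * two parallel vector accumulators, combined after the loop.
 * Assumes n is a multiple of 8. */
int vsum_parallel(const int *a, size_t n) {
    v4si acc0 = {0, 0, 0, 0};
    v4si acc1 = {0, 0, 0, 0};
    for (size_t i = 0; i < n; i += 8) {
        acc0 += load4(&a[i]);
        acc1 += load4(&a[i + 4]);
    }
    v4si acc = acc0 + acc1;                    /* combine accumulators */
    return acc[0] + acc[1] + acc[2] + acc[3];  /* horizontal sum       */
}
```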
…ions. (llvm#166630) This builds on top of llvm#149470, also introducing parallel accumulator PHIs when the reduction is over floating-point values, provided the reassoc fast-math flag is present. See also llvm#166353, which aims to introduce parallel accumulators for reductions with vector instructions. (cherry picked from commit b641509)
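Another hypothetical source-level sketch, this time for the floating-point case: splitting one accumulator into several reorders the additions, which is why the commit requires the reassoc fast-math flag before introducing parallel FP accumulator PHIs.

```c
#include <stddef.h>

/* Hypothetical source-level equivalent of the FP case: four parallel
 * float accumulators. Because FP addition is not associative, this
 * computes the sum in a different order than a single-accumulator
 * loop, so the IR transform is only legal under reassoc
 * (e.g. reductions compiled with fast-math-style flags). */
float fsum_parallel(const float *a, size_t n) {
    float s0 = 0.0f, s1 = 0.0f, s2 = 0.0f, s3 = 0.0f;
    for (size_t i = 0; i < n; i += 4) { /* assumes n % 4 == 0 */
        s0 += a[i + 0];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    return (s0 + s1) + (s2 + s3); /* combined after the loop */
}
```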
juliannagele (Member, Author): @swift-ci please test
juliannagele (Member, Author): @swift-ci please test llvm
juliannagele (Member, Author): @swift-ci please test
juliannagele (Member, Author): @swift-ci please test windows platform
juliannagele (Member, Author): @swift-ci please test windows platform
fhahn approved these changes on Dec 4, 2025
fhahn left a comment: LGTM, thanks