feat: ClassicJoin for PWMJ #17482
Conversation
@2010YOUY01 Would you like to take a look at whether this is how you wanted to split up the work? I just wanted to put this out today, then I'll clean it up better this week. It's only failing one external test currently.
let join: Arc<dyn ExecutionPlan> = if join_on.is_empty() {
    if join_filter.is_none() && matches!(join_type, JoinType::Inner) {
        // cross join if there is no join conditions and no join filter set
        Arc::new(CrossJoinExec::new(physical_left, physical_right))
    } else if num_range_filters == 1
I would like to refactor this in another pull request; it's just a refactor and should be quite simple to do. I wanted to get this version in first.
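For illustration, one possible shape for that refactor is pulling the strategy choice into a small helper (a sketch only; the enum and function names here are hypothetical, and `num_range_filters` mirrors the variable in the snippet above):

```rust
// Hypothetical sketch (not part of this PR): centralize the join-strategy
// decision so the planner body keeps a flat structure.
use datafusion_common::JoinType;

enum JoinStrategy {
    Cross,
    PiecewiseMerge,
    NestedLoop,
}

fn choose_join_strategy(
    join_on_is_empty: bool,
    has_join_filter: bool,
    join_type: JoinType,
    num_range_filters: usize,
) -> JoinStrategy {
    if join_on_is_empty && !has_join_filter && matches!(join_type, JoinType::Inner) {
        // No join keys and no filter: plain cross join
        JoinStrategy::Cross
    } else if join_on_is_empty && num_range_filters == 1 {
        // Exactly one range (inequality) filter: candidate for PWMJ
        JoinStrategy::PiecewiseMerge
    } else {
        JoinStrategy::NestedLoop
    }
}
```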
statement ok
set datafusion.execution.batch_size = 8192;

# TODO: partitioned PWMJ execution
Partitioned execution isn't currently allowed; allowing it would make reviewing the tests a little messy, since many of the partitioned single-range queries would switch to PWMJ. This is another follow-up and will be tracked in #17427.
cc @2010YOUY01 @comphead this PR is now ready!
This is great! I have some suggestions for the planning part, and I'll review the execution part tomorrow.

Refactor the inequality-extracting logic

I suggest moving the inequality-extracting logic from … The reason is that we'd better keep similar code in a single place instead of letting it scatter across multiple places. To do this, I think we need to extend the logical-plan join node with an extra IE-predicate field (maybe we can define a new struct for IE predicates with …). To make it compatible with systems that only use the …

Perhaps we can open a PR only for this IE-predicate extracting task, and during the initial planning we can simply move the IE predicates back into the filter with the above-mentioned utility.

Make it configurable to turn PWMJ on/off

I'll try to finish #17467 soon to make it easier, so let's put this on hold for now.
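For illustration, the new field and conversion utility might look something like this (all names here are hypothetical; the actual shape would be decided in that follow-up PR):

```rust
// Hypothetical sketch of an IE-predicate struct on the logical join node.
use datafusion_expr::{BinaryExpr, Expr, Operator};

/// One inequality (IE) join predicate such as `t1.a < t2.b`, stored on the
/// logical join node separately from equi-join keys and the residual filter.
#[derive(Debug, Clone)]
pub struct IEPredicate {
    pub left: Expr,
    pub op: Operator, // expected to be Lt / LtEq / Gt / GtEq
    pub right: Expr,
}

impl IEPredicate {
    /// Fold the IE predicate back into an ordinary filter expression, so that
    /// systems which only read the existing join-filter field keep working.
    pub fn into_filter_expr(self) -> Expr {
        Expr::BinaryExpr(BinaryExpr {
            left: Box::new(self.left),
            op: self.op,
            right: Box::new(self.right),
        })
    }
}
```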
Thanks @jonathanc-n and @2010YOUY01. #17467 would definitely be nice to have, as PWMJ can start as an optional, experimental join that is documented separately, showing the benefits and limitations for the end user. The same actually happened for SMJ, which was an experimental feature for quite some time. Another great point is to identify performance bottlenecks: we can absorb some knowledge from #17488 and keep the join more stable. As an optional feature it is pretty safe to go; again referring to SMJ, there was a separate ticket with post-launch checks to make sure it was safe to use, like #9846. Let me know your thoughts?
Yes, I think the experimental flag should be added first, and we can do the equality-extraction logic as a follow-up. WDYT @2010YOUY01? Do you want to get #17467 in before this one?
Yes, so let's do other work first. If I can't get #17467 done by the time this PR is ready, let's add …
I have gone over the exec.rs, and will continue with the stream implementation part soon.
    ExecutionPlan, PlanProperties,
};
use crate::{DisplayAs, DisplayFormatType, ExecutionPlanProperties};
This is one of the best module comments I have seen.
} else {
    // Sort the right side in memory, so we do not need to enforce any sorting
    vec![
        Some(OrderingRequirements::from(self.left_sort_exprs.clone())),
A question here for future clean-up: now that we're storing the required input ordering property inside the executor, is it possible to move it into the PlanProperties struct?
If I recall correctly, I don't believe PlanProperties enforces input ordering? PlanProperties only covers output properties.
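For reference, a minimal sketch of that distinction, based on the snippet above (a fragment of the ExecutionPlan impl; `left_sort_exprs` is an assumed field name, not the exact PR code):

```rust
// Input ordering requirements are reported by the ExecutionPlan implementation
// itself, one entry per child, while PlanProperties only describes the node's
// output side (equivalence/output ordering, partitioning, emission type,
// boundedness).
fn required_input_ordering(&self) -> Vec<Option<OrderingRequirements>> {
    // Only the buffered (left) child must arrive sorted; the right side is
    // sorted in memory by the operator, so no requirement is placed on it.
    vec![
        Some(OrderingRequirements::from(self.left_sort_exprs.clone())),
        None,
    ]
}
```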
@2010YOUY01 I have added the requested changes! Should be good for another go.
@comphead Should a flag be added to let this be optional, like …?
I took a quick look through classic_join.rs; the general structure looks great. I left some major issues I'd like to tackle first.
The goal now is to ensure it's significantly faster than NLJ. I ran some micro-benchmarks and found it's slower, so I'd like to better understand its implementation and make it faster.
> set datafusion.execution.target_partitions = 1;
0 row(s) fetched.
Elapsed 0.001 seconds.
> SELECT *
FROM range(30000) AS t1
INNER JOIN range(30000) AS t2
ON (t1.value > t2.value);
...
885262824 row(s) fetched. (First 40 displayed. Use --maxrows to adjust)
Elapsed 0.840 seconds.
> SELECT *
FROM range(30000) AS t1
FULL JOIN range(30000) AS t2
ON (t1.value > t2.value);
...
885262825 row(s) fetched. (First 40 displayed. Use --maxrows to adjust)
Elapsed 1.592 seconds.
They're Q11 and Q12 from https://github.com/apache/datafusion/blob/main/benchmarks/src/nlj.rs
Using NLJ they're both around 0.55s; also, the results don't match.
The remaining parts for me to review:
- minor issues in classic_join.rs
- test coverage
    }
}

// Holds all information for processing incremental output
Could you add more docs on how this struct works? Maybe with a walkthrough of a simple example.
I took a quick glance; it seems possible to cut a big output and emit it piece by piece according to batch_size. However, it does not support combining/coalescing small batches up to batch_size?
Is it possible to do the combining up to batch_size in a follow-up? I think I am going to try to just get the right behaviour in this pull request + simplify the logic (that one is much needed right now) 😆
I think it is fine to leave it to a follow-up. Just a heads up: when working on the performance issue, if there is any downstream operator after PWMJ, the small batches might hurt performance.
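As an illustration of the kind of coalescing being discussed (a minimal sketch; `OutputCoalescer` and its fields are hypothetical, only arrow's `concat_batches` is a real API):

```rust
use arrow::compute::concat_batches;
use arrow::datatypes::SchemaRef;
use arrow::error::ArrowError;
use arrow::record_batch::RecordBatch;

/// Hypothetical helper: buffer small output batches and only emit once the
/// accumulated row count reaches `batch_size`.
struct OutputCoalescer {
    schema: SchemaRef,
    batch_size: usize,
    buffered: Vec<RecordBatch>,
    buffered_rows: usize,
}

impl OutputCoalescer {
    /// Buffer a (possibly small) output batch; return a combined batch once
    /// the accumulated row count reaches `batch_size`, otherwise keep buffering.
    fn push(&mut self, batch: RecordBatch) -> Result<Option<RecordBatch>, ArrowError> {
        self.buffered_rows += batch.num_rows();
        self.buffered.push(batch);
        if self.buffered_rows >= self.batch_size {
            let combined = concat_batches(&self.schema, &self.buffered)?;
            self.buffered.clear();
            self.buffered_rows = 0;
            Ok(Some(combined))
        } else {
            Ok(None)
        }
    }
}
```

DataFusion's CoalesceBatchesExec does something similar downstream of filters and repartitions; whether the optimizer inserts it after PWMJ is a separate question for the follow-up.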
// For Left, Right, Full, and Inner joins, incoming stream batches will already be sorted.
#[allow(clippy::too_many_arguments)]
fn resolve_classic_join(
I found the implementation of this function quite hard to understand; is it possible to structure it this way:
// Materialize the result when possible
if batch_process_state.has_ready_batch() {
return Ok(batch_process_state.finish());
}
// Else, advance the stream/buffer side index, and put the matched indices into `batch_process_state` for it to materialize incrementally later
// ...
I'll try to complete all the refactoring tomorrow. The performance issue may be due to which sides are being used; I will need to take a look into that. The results don't match because it currently doesn't allow execution of more than one record batch.
The performance saw a similar hit in #16660 (the benchmark is in the description). I think I can tune when to use this join based on the incoming size in a follow-up; for now the config will restrict this join to keep it purely experimental.
How did you get the incorrect result? I'm testing query 12 and it doesn't optimize into a PiecewiseMergeJoin.
That bench doesn't include sort time; PWMJ should be faster than NLJ even when it includes the sorting overhead (n·log(n) vs. n²). I think the main motivation for adding this executor is its performance advantage, so we probably shouldn't merge an initial PR without first getting it to a good performance level. (Also, since there aren't many merge conflicts to resolve for this PR, I don't think there's any rush.) I can help diagnose it later.
Moreover, we should get the sqlite extended tests passing later, either through configuring …
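For reference, a rough back-of-the-envelope comparison of the costs mentioned above, taking n = 30000 rows per side as in the benchmark queries:

$$
\underbrace{2\,n\log_2 n \approx 2 \cdot 30000 \cdot 14.9 \approx 8.9 \times 10^{5}}_{\text{PWMJ: sorting both sides}}
\quad\text{vs.}\quad
\underbrace{n^2 = 9 \times 10^{8}}_{\text{NLJ: predicate evaluations}}
$$

For these queries the output itself is on the order of n² rows, so materialization cost is shared by both operators; the per-row predicate evaluations are where PWMJ should gain.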
Which issue does this PR close?
PiecewiseMergeJoin work in DataFusion #17427

Rationale for this change
Adds regular joins (left, right, full, inner) for PWMJ, as they behave differently in the code path.

What changes are included in this PR?
Adds the classic join + physical planner.

Are these changes tested?
Yes, SLT tests + unit tests.

Follow-up work to this pull request
Next would be to implement the existence joins.