@precup precup commented Aug 11, 2025

There are many places where selections have O(n^2) behavior, likely because they're generally small enough for it not to matter. When trying to work with large scenes, however, I naturally found myself selecting 1000 or more elements at a time, and that's around the scale where that sort of behavior starts to blow up.

I realize most people aren't going to be working with selections this large, but I don't see a good reason not to support it. Some of these changes slightly slow down small selections, some speed them up. I didn't bother testing whether small selections were sped up or slowed down on net, because they're still fast and infrequent enough for it not to matter.

I've split this PR into two commits to make it easier to review; I'll squash them once people are happy. About half the changes are just using references instead of copies, which clutters up the actually interesting changes, so I split them out into their own commit.

Most of these changes are direct fixes to issues that showed up on a flamegraph while I was testing, so they do have actual impact. The exception is the swapping of copies for const references. After I ran into two instances that mattered, I just searched the entire codebase and fixed them all, so some of them probably don't matter in practice.

Most of the changes are complexity class improvements that should be self-explanatory. The editor_data change is a fix for what looks like a rather impactful typo. The two signal emission updates prevent doing work until the selection is done changing.

This speeds up many types of selections and deselections.

Selecting 5000 sprites via the scene tree dock: 6s -> ~0.25s
Deselecting 5000 sprites by selecting 1 new sprite in the scene tree dock: 4s -> imperceptible
Selecting 25000 sprites via the scene tree dock: 172s -> ~0.5s

There are similarly massive speedups to selection via drag select, and to more types than just sprites, but the numbers above give you a good idea of what to expect.


All performance tests were run on builds made with the following command: scons optimize=speed_trace debug_symbols=yes platform=linuxbsd dev_build=no production=yes.

The test project editor-optimization-test-scenes.zip consists of a series of scenes that each have 25,000 of a common type of node. Unless you also apply some of #109517, #109514, #109513, and #109512, it might be difficult to test anything but the Sprite2D.tscn scene.

@Geometror Geometror (Member) left a comment
Tested locally and confirmed. Nice work!
Just a few minor nitpicks, otherwise the code looks fine.
I'd suggest squashing the two commits into one.

@precup precup force-pushed the speedy-selections branch from 25c9cba to 20e1cfb Compare August 14, 2025 07:58
if (get_node_count() != p_other->get_node_count()) {
return false;
}
HashSet<NodePath> nodes_in_selection;
@KoBeWi KoBeWi (Member) commented Aug 25, 2025
This could use some benchmarking. Allocating a set, inserting into it, and then iterating it is not necessarily faster, even if the complexity is theoretically lower.

@precup precup (Contributor, Author) commented Aug 31, 2025

Okay, so there's a lot of context to this change. This is a partial fix for the case where you are switching to a scene tab that already has a large selection. To reproduce the issue, you would:

  • Open Sprite25k.tscn. Select all 25k nodes.
  • Open MeshInstance.tscn.
  • Try to switch back to Sprite25k.tscn. This will lag so badly you might as well restart Godot.

There are actually two different slow operations that lag the tab switch, and this is only one of them. The second (and much more severe) issue is that the scene tree dock attempts to scroll to each selected node individually; I haven't put up a fix PR for that yet, it's still on my backlog. This makes it difficult to show on a master build why this change is needed, but if you apply the patch

diff --git a/editor/scene/scene_tree_editor.cpp b/editor/scene/scene_tree_editor.cpp
index c964393f2c..ed185440bd 100644
--- a/editor/scene/scene_tree_editor.cpp
+++ b/editor/scene/scene_tree_editor.cpp
@@ -1039,7 +1039,7 @@ bool SceneTreeEditor::_update_filter(TreeItem *p_parent, bool p_scroll_to_select
        bool keep_for_children = false;
        for (TreeItem *child = p_parent->get_first_child(); child; child = child->get_next()) {
                // Always keep if at least one of the children are kept.
-               keep_for_children = _update_filter(child, p_scroll_to_selected) || keep_for_children;
+               keep_for_children = _update_filter(child, p_scroll_to_selected && !keep_for_children) || keep_for_children;
        }
 
        if (!is_root) {

so that it only scrolls to the first child, then you'll be able to see the impact of this change.

With all of my speed-up PRs and the above patch, it takes about 3 seconds to switch, and as shown in the flamegraph below, about 50% of that time (1.6s) is spent in is_same_selection.
[flamegraph: tab switch before the is_same_selection change]

Running the same test with the is_same_selection change as well results in the flamegraph below and a scene switch time of about 1.5s. The time spent in is_same_selection is now less than 3 ms, a 500x improvement in the function's runtime.
[flamegraph: tab switch after the is_same_selection change]

@precup precup force-pushed the speedy-selections branch from 20e1cfb to 1c8e3f9 Compare August 31, 2025 21:25
@KoBeWi KoBeWi modified the milestones: 4.x, 4.6 Sep 18, 2025
@Repiteo Repiteo merged commit 709226a into godotengine:master Sep 23, 2025
20 checks passed

Repiteo commented Sep 23, 2025

Thanks!
