This issue is a continuation of a conversation on #12974. I apologize ahead of time for the length of this issue, there's quite a lot to cover 😅
I put together a `Proxy` which mirrors every action done on an object cache onto a `Map`, times all of the operations, and stores them in a global statistics store, letting me benchmark and compare performance (semi-accurately, as `performance.now()` precision is limited 😢). I instrumented every object cache I could find in the repository, and the following is an example of my findings from playing around in Sandcastle with different samples, using a 3D tileset with a couple thousand entities, toggling different imagery layers on, and moving the camera around quite a lot. (A simplified sketch of the proxy is included below, after the results table.)
Each row contains:
- the location + name of the cache
- how many instances of that specific object cache exist
- the total count of each operation on that cache
- the total time spent on those operations on that cache
The time was measured inside the proxy (not outside), so there is no overhead from the `Proxy` present in this table. The counts are accurate; the times (in ms) are a lower bound, as most operations complete much faster than the precision of `performance.now()` allows us to measure.
Results
cache | count | gets | sets | deletes | clears | has | enums | obj total get (ms) | map total get (ms) | obj total set (ms) | map total set (ms) | obj total delete (ms) | map total delete (ms) | obj total clear (ms) | map total clear (ms) | Savings If Convert to Map (ms) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
renderStateCache | 1 | 4380 | 124 | 0 | 0 | 0 | 0 | 4.3 | 1.9 | 0.5 | 0 | 0 | 0 | 0 | 0 | -2.9 |
ResourceCache.cacheEntries | 1 | 30170 | 8667 | 5200 | 0 | 0 | 0 | 14.4 | 14.8 | 14.7 | 3.4 | 1.6 | 1.6 | 0 | 0 | -10.9 |
ResourceCacheStatistics._geometrySizes | 1 | 7802 | 5204 | 5200 | 0 | 0 | 0 | 5.2 | 4.8 | 20.6 | 3.4 | 2.8 | 2.3 | 0 | 0 | -18.1 |
ResourceCacheStatistics._textureSizes | 1 | 6063 | 1726 | 5200 | 0 | 0 | 0 | 5.6 | 4.1 | 58 | 1.7 | 4.3 | 1.5 | 0 | 0 | -60.6 |
polylineCollections | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
AssociativeArray._hash | 933 | 1146838 | 22092 | 0 | 2052 | 0 | 0 | 429.2 | 319.1 | 37.7 | 5.8 | 0 | 0 | 0.6 | 0.4 | -142.2 |
ShaderCache._shaders | 1 | 54568 | 171 | 62 | 0 | 0 | 0 | 23.4 | 26 | 0.7 | 0.2 | 0 | 0 | 0 | 0 | -2.1 |
TextureCache._textures | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
TextureCache._texturesToRelease | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
PostProcessStageCollection.stageNames | 1 | 0 | 12 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
ScreenSpaceEventHandler._inputEvents | 2 | 5106 | 46 | 0 | 0 | 0 | 0 | 7.5 | 4.8 | 0.2 | 0 | 0 | 0 | 0 | 0 | -2.9 |
Scene.updateHeight.tilesetRemoveCallbacks | 18328 | 18319 | 18327 | 0 | 18320 | 0 | 18320 | 13.4 | 8.9 | 54.8 | 5 | 0 | 0 | 8.4 | 8.8 | -29.7 |
ImageryLayer._imageryCache | 1 | 24 | 12 | 12 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
EntityCluster._collectionIndicesByEntity | 1 | 3 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
primitive._external._composites | 16 | 6 | 16 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
ModelVisualizer._modelHash | 1 | 36502 | 3 | 0 | 0 | 0 | 0 | 20.5 | 15.1 | 0 | 0 | 0 | 0 | 0 | 0 | -5.4 |
Cesium3DTilesetVisualizer._tilesetHash | 1 | 4046 | 0 | 0 | 0 | 0 | 0 | 3.6 | 1.5 | 0 | 0 | 0 | 0 | 0 | 0 | -2.1 |
CesiumTerrainProvider.LayerInformation.availabilityPromiseCache | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
I have the code for this locally if anyone is interested in benchmarking themselves.
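Roughly, the instrumentation is shaped like the sketch below (a simplified version, not the exact script; names like `globalStats` and `instrumentCache` are just illustrative):

```js
// Illustrative sketch only: `globalStats` / `instrumentCache` are made-up names.
const globalStats = new Map(); // cacheName -> per-operation counts and timings

function getStats(name) {
  if (!globalStats.has(name)) {
    globalStats.set(name, {
      counts: { get: 0, set: 0, delete: 0 },
      objTime: { get: 0, set: 0, delete: 0 },
      mapTime: { get: 0, set: 0, delete: 0 },
    });
  }
  return globalStats.get(name);
}

// Wraps a plain-object cache so that every get/set/delete is timed and also
// mirrored onto a Map, letting the two implementations be compared afterwards.
function instrumentCache(name, cache = {}) {
  const stats = getStats(name);
  const mirror = new Map();

  return new Proxy(cache, {
    get(target, key, receiver) {
      stats.counts.get++;
      let start = performance.now();
      const value = Reflect.get(target, key, receiver);
      stats.objTime.get += performance.now() - start;

      start = performance.now();
      mirror.get(key);
      stats.mapTime.get += performance.now() - start;
      return value;
    },
    set(target, key, value, receiver) {
      stats.counts.set++;
      let start = performance.now();
      const ok = Reflect.set(target, key, value, receiver);
      stats.objTime.set += performance.now() - start;

      start = performance.now();
      mirror.set(key, value);
      stats.mapTime.set += performance.now() - start;
      return ok;
    },
    deleteProperty(target, key) {
      stats.counts.delete++;
      let start = performance.now();
      const ok = Reflect.deleteProperty(target, key);
      stats.objTime.delete += performance.now() - start;

      start = performance.now();
      mirror.delete(key);
      stats.mapTime.delete += performance.now() - start;
      return ok;
    },
  });
}

// Usage (hypothetical): this._shaders = instrumentCache("ShaderCache._shaders");
```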
Out of all the caches in this table, the most important one is by far `AssociativeArray._hash`, the internal object used to support `O(1)` get/contains operations on an array of items.
`AssociativeArray` in detail
Replacing the internal cache is one thing, but there's an argument to be made that the entire class is problematic: `Map`s in JS are iterable, and are implemented similarly to Java's `LinkedHashMap`, meaning iteration follows insertion order. In theory this means the entire `AssociativeArray` class could be removed and all instances converted to `Map` instead.
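For example, iteration order over a `Map` is simply insertion order:

```js
const map = new Map();
map.set("b", 2);
map.set("a", 1);
map.set("c", 3);
map.delete("a");
map.set("a", 4); // re-inserting moves "a" to the end

console.log([...map.keys()]); // ["b", "c", "a"]
```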
1. Because the `AssociativeArray` actually stores both an in-memory array and a dictionary, it uses more memory than a `Map` alone.
2. Also, due to 1, the performance of `AssociativeArray.remove`/`set` is `O(n)` rather than `O(1)`.
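To make point 2 concrete, here is a simplified sketch of the array-plus-dictionary pattern (not the actual Cesium source, just the same general shape): removal has to locate the value in the backing array, which is linear in the number of items.

```js
// Simplified sketch of the array + dictionary pattern (illustrative names).
class SketchAssociativeArray {
  constructor() {
    this._array = []; // preserves iteration order, adds to the memory footprint
    this._hash = {};  // provides O(1) key lookup
  }

  get(key) {
    return this._hash[key]; // O(1)
  }

  set(key, value) {
    const oldValue = this._hash[key];
    if (value !== oldValue) {
      this.remove(key); // O(n) when the key already exists
      this._hash[key] = value;
      this._array.push(value);
    }
  }

  remove(key) {
    const value = this._hash[key];
    if (value === undefined) {
      return false;
    }
    // indexOf scans the backing array and splice shifts it: O(n).
    this._array.splice(this._array.indexOf(value), 1);
    delete this._hash[key];
    return true;
  }

  get values() {
    return this._array;
  }
}
```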
There are many cases where internal `AssociativeArray`s can be converted 100% to a `Map`. There are also many cases where they should not be:
- Arrays are much faster to iterate over (provided there are no conditionals in the iteration); a rough sketch of this kind of micro-benchmark is included after this list.
  *(benchmark screenshot: plain `for` loop over an array vs. iterating a `Map`)*
  Note that this is browser-specific; Firefox, for example, is even slower than this.
  With a conditional in the loop:
  *(benchmark screenshot: the same comparison with a conditional inside the loop)*
- Arrays support reverse iteration order. There are some cases of this in the code; again, whether this is required is a case-by-case decision.
- Most importantly, `AssociativeArray` is part of the public API. Iterating over a `Map` requires `for (const ... of ...)` syntax, so replacing public `AssociativeArray` members with `Map` (e.g. `viewer.entities`) is a significant breaking change.
I think it's best to start with replacing the hotspot `{}` dictionaries with `Map`, which is supported by the performance gains seen in #12896, #12916, and #12974. Only after that is it worth considering, on a case-by-case basis, which instances of `AssociativeArray` could be replaced with `Map` (or `Set` in some cases).
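For illustration, the kind of conversion being proposed looks like this (a generic sketch, not taken from the codebase):

```js
const key = "some-cache-key"; // hypothetical key/value, just for illustration
const value = { refCount: 1 };

// Before: a plain object used as a dictionary.
let objCache = {};
objCache[key] = value;                       // insert / update
const objHit = objCache[key];                // lookup
const objHas = objCache[key] !== undefined;  // containment check
delete objCache[key];                        // removal
objCache = {};                               // "clear"

// After: the same operations on a Map.
const mapCache = new Map();
mapCache.set(key, value);
const mapHit = mapCache.get(key);
const mapHas = mapCache.has(key);
mapCache.delete(key);
mapCache.clear();
```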