Fix concurrency issue with global MC cache #203
Conversation
penelopeysm left a comment
Nice! And yeah, this is pretty much impossible to test (much like how I couldn't get an MWE without monkey-patching `hash`). Quite happy to sign off on it and forget about it.
On the original Turing issue I claimed that …, so I decided to put it to the test:

```julia
using Libtask, Chairmarks

f(x) = x
sig = Tuple{typeof(f),Int}
key = Libtask.CacheKey(Base.get_world_counter(), sig)
@be hash(key)
```

On 1.11:

```
julia> @be hash(key)
Benchmark: 3162 samples with 1013 evaluations
 min    27.229 ns (1 allocs: 16 bytes)
 median 27.724 ns (1 allocs: 16 bytes)
 mean   29.536 ns (1 allocs: 16 bytes, 0.03% gc time)
 max    4.827 μs (1 allocs: 16 bytes, 98.61% gc time)
```

On 1.12:

```
julia> @be hash(key)
Benchmark: 3038 samples with 778 evaluations
 min    35.882 ns (1 allocs: 16 bytes)
 median 36.204 ns (1 allocs: 16 bytes)
 mean   39.878 ns (1 allocs: 16 bytes, 0.06% gc time)
 max    5.440 μs (1 allocs: 16 bytes, 98.54% gc time)
```

Idk, I guess maybe that's enough to cause problems. Also, the Base dictionary code hashes multiple keys in a loop, so the difference in time will be scaled multiplicatively too (and I suppose there could be performance regressions in other parts of the Dict code as well, although my naive guess is that hashing is the most time-consuming part of the Dict code(?)).
Nice, good check. I thought about checking the performance effects on Libtask too, but then decided there's little point, because this clearly just needs to be done; if we start optimising Libtask, there's much lower-hanging fruit elsewhere. I can tell you that the full test suite runs noticeably, but not horribly, slower (7.something seconds vs 8.something, I forget now).
Libtask.jl documentation for PR #203 is available at:
I changed the docstring of `GlobalMCCache` to a comment, because Documenter was complaining that something in … The other two failures are Turing.jl/Mooncake issues and not new.
Closes #202
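(For context: the usual way to make a global cache safe under concurrent access in Julia is to serialize access through a lock. Below is a minimal sketch of that pattern; `GLOBAL_CACHE`, `CACHE_LOCK`, and `cache_get!` are hypothetical names, not necessarily the shape of this PR's change.)

```julia
# Minimal sketch of a lock-guarded global cache (hypothetical names).
const GLOBAL_CACHE = Dict{Any,Any}()
const CACHE_LOCK = ReentrantLock()

function cache_get!(make, key)
    # Serialize all reads and writes: unsynchronized concurrent mutation
    # can corrupt a Dict.
    lock(CACHE_LOCK) do
        get!(make, GLOBAL_CACHE, key)
    end
end
```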
I tried to come up with a version of the MWE in that issue that could be put into the tests, but failed. My hope was to write a slow `hash` method for particular `CacheKey`s, not all of them (see the sketch at the end). But `Base.cache` falls back on this:

which doesn't give me much surface to hold onto. Hence no tests added.
That implementation of `hash` makes me extra confused as to how we can reliably hit this in the Turing.jl tests. That `hash` should take no time at all.
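For illustration, the monkey-patching approach mentioned above would have looked roughly like this (a sketch only; the `sleep` duration and the `invoke` fallback are stand-ins, and the piracy slows every `CacheKey`, which is exactly why it was unusable as a test):

```julia
# Sketch only: pirate `hash` so that hashing any CacheKey is artificially
# slow, making cache contention observable in a test. This hits *all*
# CacheKeys, not just the ones a single test creates.
using Libtask

function Base.hash(key::Libtask.CacheKey, h::UInt)
    sleep(0.001)  # artificial slowdown
    # Defer to Base's generic object-identity fallback for the value.
    invoke(hash, Tuple{Any,UInt}, key, h)
end
```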