Refactor Hilbert_basis() and replace a slow test #40387
Conversation
Force-pushed from da05285 to 3542b29.
I don't understand this. On my computer, in a fresh Sage on the current develop branch, I have

Are you saying that this is too slow? Or does this depend on certain packages which may or may not be installed?
The result is cached, so running it in a loop and picking the fastest one makes it look very fast indeed. Runs 2, 3, ... are instantaneous. When I run
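The caching effect described here is easy to reproduce outside of Sage. Below is a minimal Python sketch using `functools.lru_cache` as a stand-in for Sage's `@cached_method`; only the first call pays the real cost, so any benchmark that loops and keeps the minimum ends up measuring the cache lookup rather than the computation:

```python
import time
from functools import lru_cache

@lru_cache(maxsize=None)
def expensive(n):
    # Stand-in for an expensive cached computation like Hilbert_basis().
    time.sleep(0.1)
    return n * n

t0 = time.perf_counter()
result = expensive(10)            # first call: pays the full cost
first = time.perf_counter() - t0

t0 = time.perf_counter()
expensive(10)                     # second call: served from the cache
second = time.perf_counter() - t0

# The cached call is essentially free, so min() over repeated runs
# reports the lookup time, not the computation time.
print(first > second)
```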
Oh, you are absolutely right. However, ...
... I find this hard to believe:
This would mean that my CPU would be 30 times faster.
src/sage/geometry/cone.py (Outdated)

```python
else:
    # Avoid the PointCollection overhead if nothing was
    # added to the irreducible list beyond self.rays().
    return self.rays()
```

Suggested change:

```python
# Avoid the PointCollection overhead if nothing was
# added to the irreducible list beyond self.rays().
return self.rays()
```
Good idea, thanks. I also made a similar change at the beginning of the function. Defining `L = ()` unconditionally does not take any time and lets us eliminate one branch of the if/else.
One tiny change that seems to make a slight difference is

```diff
diff --git a/src/sage/geometry/cone.py b/src/sage/geometry/cone.py
index bcaa578c339..d8aed5e222e 100644
--- a/src/sage/geometry/cone.py
+++ b/src/sage/geometry/cone.py
@@ -1685,7 +1685,7 @@ class ConvexRationalPolyhedralCone(IntegralRayCollection, Container, ConvexSet_c
         need_strict = region.endswith("interior")
         M = self.dual_lattice()
         for c in self._PPL_cone().minimized_constraints():
-            pr = M(c.coefficients()) * point
+            pr = M(*(c.coefficients())) * point
             if c.is_equality():
                 if pr != 0:
                     return False
```

It may be possible to speed up this line further, or perhaps
Force-pushed from 0abdf6a to 9dda8a0.
I tried this, but the results were inconsistent; it gets slower as the lattice gets bigger. I let myself get carried away and spent a few hours trying to optimize this method last night. I was only able to obtain a very small improvement by reorganizing the conditionals inside the loop (see the latest commit). The speedup is consistent, though.
A factor of 30x is not outrageous. Keep in mind that CPU time is affected by the options used to compile Sage and its dependencies, the vector features (SSE, AVX, etc.) of the CPU, hardware mitigations for Spectre and Meltdown, and other internal details of the processor. I have an old Core 2 Duo ThinkPad with all of the vulnerability mitigations turned on, and this test takes about 28s on it. The computer where it takes 40s is actually brand new, but it has 64 processors, with the trade-off being that each of them is individually pretty slow. So long as it leads to useful refactorings and performance improvements, I'm not too worried about accidentally speeding up a test that might not technically be considered slow once we apply a normalizing factor.
Some refactorings to the implementation of Hilbert_basis():

* Construct a cone L from our linear_subspace() so that "y in L" works as intended (currently we try to coerce y into a vector space in a try/except block). This is not any faster, but it makes the code easier to read.
* Remove the irreducibles from "gens" as we construct it.
* Negate a condition in a loop to avoid bailing out with "continue" as part of the normal control flow.
* Use a boolean indicator to check if the list of irreducibles was modified, rather than recomputing its length.
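The boolean-indicator idea from the last bullet can be sketched in plain Python (names are illustrative; this is not the actual `cone.py` code): rather than comparing lengths after the fact, remember whether anything was ever appended:

```python
def extend_unique(base, candidates):
    # Track modification with a flag instead of recomputing and
    # comparing lengths afterwards; if nothing new showed up, hand
    # back the original input (analogous to returning self.rays()
    # to skip the PointCollection overhead).
    result = list(base)
    modified = False
    for y in candidates:
        if y not in result:
            result.append(y)
            modified = True
    return result if modified else base

print(extend_unique([1, 2], [2]))     # nothing new: base returned as-is
print(extend_unique([1, 2], [3]))     # modified copy returned
```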
We have one Hilbert_basis() test that is raising "slow test!" warnings at around 40s. Here we replace it with three tests, each of which runs relatively quickly. The trio completes in about 15s.

Since I know very little about Hilbert bases, I have checked the results using Normaliz. For example,

```
$ cat cone.in
amb_space 4
cone 4
1 0 1 0
-1 0 1 0
0 1 1 0
0 -1 1 0
$ normaliz --HilbertBasis cone
$ cat cone.out
...
```

(the resulting basis is written to cone.out).
Tighten the whitespace around list/generator comprehensions, and simplify the control flow in two instances by eliminating an if/else branch. (Only one of these was suggested by the reviewer, but the other is in a similar spirit.) Thanks to Martin Rubey for the suggestions.
Ah, you are right! Calling a function with many arguments is more expensive.
Interestingly, the new version is about 10% slower on my computer on the example above :-( Edit: Hm, not sure; develop is the same speed, so it's probably noise.
Force-pushed from 9dda8a0 to 0d254be.
If you use
The following seems to make a big difference, but is, apparently, not correct: there are failing tests in
src/sage/geometry/cone.py (Outdated)

```python
        return False
    elif pr > 0:
```

Suggested change:

```python
    if pr > 0:
```
I have dropped this commit as well for now. It turns out that the new version can be slower in some pathological cases (like the empty cone in a big space), where most of the constraints are equalities and the equality constraints are listed first. It might be possible to work around, but I have other things I should be doing instead of trying to shave nanoseconds off of this method :)
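To see why constraint ordering matters, here is a hedged sketch of a containment loop in the spirit of the one being discussed (constraints are made-up `(coefficients, is_equality)` pairs, not PPL objects). Whether it pays to test the equality branch first depends on which kind of constraint dominates: a cone described mostly by equalities, with the equalities listed first, rewards the opposite branch ordering:

```python
def satisfies(point, constraints):
    # Each constraint is (coeffs, is_equality): inequalities require
    # coeffs . point >= 0, equalities require coeffs . point == 0.
    for coeffs, is_equality in constraints:
        pr = sum(c * x for c, x in zip(coeffs, point))
        if is_equality:
            if pr != 0:
                return False
        elif pr < 0:
            return False
    return True

print(satisfies((1, 1), [((1, -1), True)]))   # on the hyperplane x == y
print(satisfies((2, 1), [((1, -1), True)]))   # off the hyperplane
```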
Force-pushed from 0d254be to 90a51fc.
Quotient lattices have a different
When constructing the subcone that represents the given cone's linear_subspace(), we don't need to check that the generators are valid or minimal -- fewer generators might work, but no subset will work.
When the cone K has no rays, the test "x in K" can be done quickly by checking if x is zero. This can be a significant improvement if the lattice is large, and risks wasting only as much time as it takes to compare an integer to zero (i.e. nothing compared to how long the rest of the containment test is going to take).
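That shortcut reads roughly like the sketch below (names are hypothetical; `full_containment_test` is a placeholder for the general machinery): a cone with no rays consists of the origin alone, so membership collapses to an all-zero scan:

```python
def full_containment_test(point, rays):
    # Placeholder for the general, expensive containment test.
    raise NotImplementedError

def cone_contains(point, rays):
    # Fast path: a cone with no rays is just {0}, so "point in cone"
    # only needs a linear scan comparing each coordinate to zero.
    if not rays:
        return all(x == 0 for x in point)
    return full_containment_test(point, rays)

print(cone_contains((0, 0, 0), []))   # True: the origin
print(cone_contains((0, 1, 0), []))   # False: nonzero coordinate
```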
Two more small improvements:
Looks good, the failures seem to be unrelated. Thank you for your patience!
Thank you for the careful review!
sagemathgh-40387: Refactor Hilbert_basis() and replace a slow test Initially my goal was to replace one slow `Hilbert_basis()` test, but I did some refactoring along the way -- none of which really affects the performance of the Hilbert basis calculation. The commit message lists these changes. To replace the test, I made up some examples and fed them to normaliz. If the sage answer agrees with the normaliz answer, they must both be right, right? URL: sagemath#40387 Reported by: Michael Orlitzky Reviewer(s): Martin Rubey, Michael Orlitzky
Initially my goal was to replace one slow `Hilbert_basis()` test, but I did some refactoring along the way -- none of which really affects the performance of the Hilbert basis calculation. The commit message lists these changes.

To replace the test, I made up some examples and fed them to normaliz. If the sage answer agrees with the normaliz answer, they must both be right, right?