BinaryPowerMethod should cache M^**(2^k) steps #3843
In …, after finding out that it was slower than the stupid method:
I'm not sure I understand why exactly, but I'm worried there might be other circumstances where the same is true.
Incidentally, I also don't understand why the last 2 arguments of …
Can you give an example where it's slower? My guess is that your …
As opposed to what? They need some information about the input, so they have to be functions.
In your case this is because the iterative algorithm only computes characters of L, because of the way tensor is implemented, whereas the binary algorithm also computes characters for L^**2, L^**4, and so on, which is the heaviest step. However, if I'm not mistaken the character is multiplicative, so a simple precomputation should close most of the gap:

```diff
@@ -1205,7 +1207,11 @@ LieAlgebraModule ** LieAlgebraModule := (V,W) -> ( -- cf Humpheys' intro to LA &
         );
     if i === null then add(u-rho,a*b*t);
     )));
-    new LieAlgebraModule from (g,ans)
+    M := new LieAlgebraModule from (g,ans);
+    if V.cache#?character and W.cache#?character
+    then M.cache#character = character V * character W;
+    if V === W then V.cache#(symbol ^**, 2) = M;
+    M
```

After this, the next bottleneck is this step: M2/M2/Macaulay2/packages/LieTypes.m2, lines 1198 to 1199 in 00b89ec.
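The precomputation above relies on the character being multiplicative: the character of V ⊗ W is the product of the characters of V and W, so it can be read off cached characters instead of being recomputed from weight diagrams. A minimal sketch of that product in Python rather than Macaulay2 (the dict representation and the function name are illustrative assumptions, not the package's API):

```python
from collections import defaultdict

def multiply_characters(chi_v, chi_w):
    """Multiply two characters, each given as {weight tuple: multiplicity}.

    Characters behave like Laurent polynomials in the weight lattice:
    multiplying them adds weights and multiplies multiplicities, which is
    exactly the character of the tensor product of the two modules.
    """
    product = defaultdict(int)
    for u, a in chi_v.items():
        for w, b in chi_w.items():
            # weight of a product of weight vectors is the sum of weights
            product[tuple(x + y for x, y in zip(u, w))] += a * b
    return dict(product)
```

For instance, with the character of the standard sl2 module (weights 1 and -1, each with multiplicity 1), squaring it yields weights 2, 0, -2 with multiplicities 1, 2, 1, as expected for the second tensor power.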
In the iterative algorithm, since wd is the weight diagram of the smaller module, there are fewer weights to deal with here, but in the binary algorithm there are potentially many. I'm not sure what the algorithm is doing here. Is it not possible to loop only over the highest weights of both V and W?
Beyond this it's a question of what you want to optimize:
Given how many times you use …
No, that'd be too easy... On the other hand, we typically don't want to compute the characters of V and W when doing …
Using BinaryPowerMethod for tensor powers is pretty good, but if one needs every tensor power from 1 to d for instance, then there's a lot of repeated computation of M^**(2^k) for k < log_2 d. I think some opportunistic caching here might go a long way. Not sure exactly how to do it at the generality of BinaryPowerMethod however.