
Optimisation: Maintain a lookup of frequently used big.Int instances #154

Open · wants to merge 1 commit into main from big-decimal-lookup
Conversation

@timwintle commented Mar 11, 2025

Addresses Issue #155
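As a rough illustration of the technique the title describes (a lookup of frequently used big.Int instances), here is a minimal sketch; the names, the guard logic, and the choice of 64 are assumptions for illustration, not the PR's actual code:

```go
package decimal

import "math/big"

// cacheSize is the number of small non-negative integers to pre-allocate.
// The name and the value of 64 here are illustrative only.
const cacheSize = 64

// smallInts holds shared, read-only *big.Int values for 0..cacheSize-1.
var smallInts [cacheSize]*big.Int

func init() {
	for i := range smallInts {
		smallInts[i] = big.NewInt(int64(i))
	}
}

// lookupInt returns a cached *big.Int for small n, avoiding an allocation,
// and falls back to big.NewInt otherwise. Callers must not mutate the result.
func lookupInt(n int64) *big.Int {
	if n >= 0 && n < cacheSize {
		return smallInts[n]
	}
	return big.NewInt(n)
}
```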

@echarrod (Contributor)
Hi @timwintle, many thanks for the contribution!

Would it be possible to share a memory allocation graph (or similar) showing the problem, or what made you notice it?

Ideally with some justification for selecting the value of 64? 🙏

echarrod force-pushed the big-decimal-lookup branch from 52923a0 to 271f728 on March 12, 2025 at 14:45
@timwintle (Author)
@echarrod

Re: the choice of 64: you could probably make it lower. Here's a frequency plot of the values of i (note the logarithmic left axis):

[Screenshot from 2025-03-13 10-36-31: frequency plot of the values of i, logarithmic y-axis]

(I counted the first 2,000,000 samples from startup of my own application)

The theoretical explanation is that a significant number of calls come from ToScale(), which makes one call for 10 and one for the absolute difference between the two exponents.
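As a rough illustration of why that path allocates, here is a simplified sketch; `rescale` and the exact arithmetic are assumptions for illustration, not the library's actual ToScale/scaleToMax code:

```go
package decimal

import "math/big"

// rescale is a simplified stand-in for the work done when aligning two
// exponents. Without a lookup, both big.NewInt calls below allocate on every
// invocation; with one, the constant 10 and the small |delta| values seen in
// practice would be served from the cache instead.
func rescale(unscaled *big.Int, delta int) *big.Int {
	if delta < 0 {
		delta = -delta
	}
	factor := new(big.Int).Exp(big.NewInt(10), big.NewInt(int64(delta)), nil)
	return new(big.Int).Mul(unscaled, factor)
}
```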

Re: malloc/GC pressure: I guess it's fairly application-specific, but here's an extract of a CPU profile from my own application (before the change):

      flat  flat%   sum%        cum   cum%
(top 2 filtered out)
     0.05s  0.75%  1.50%      2.12s 31.88%  github.com/luno/luno-go/decimal.Decimal.ToScale
     0.34s  5.11%  6.62%         2s 30.08%  runtime.mallocgc
     0.04s   0.6%  7.22%      1.62s 24.36%  github.com/luno/luno-go/decimal.scaleToMax
     0.01s  0.15%  7.37%      1.40s 21.05%  github.com/luno/luno-go/decimal.Decimal.Cmp

[Screenshot from 2025-03-13 10-06-19]
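For reference, a profile extract like the one above can be captured with Go's standard runtime/pprof package and then inspected with `go tool pprof -top cpu.prof`; this is a generic sketch, not the commenter's actual setup:

```go
package main

import (
	"log"
	"os"
	"runtime/pprof"
)

func main() {
	f, err := os.Create("cpu.prof")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	if err := pprof.StartCPUProfile(f); err != nil {
		log.Fatal(err)
	}
	defer pprof.StopCPUProfile()

	// ... run the decimal-heavy workload to be profiled here ...
}
```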

@echarrod (Contributor)
Thanks for the detailed reply! Given your graph, could we change it to 32 rather than 64? I think that would meet our needs, and we can always raise it back to 64 if necessary 🙏
