Speed up caching #4383
Conversation
Otherwise, looks good to me.
I have run a big G-ADOPT example on 16 cores and it ran without issue. I think that this is safe to merge.

Should I consider re-running my performance experiments after this PR is merged?

That might potentially be a good idea. Were your experiments done against release or master? (This is going into master.)

Release

I guess this doesn't actually matter. You might just need to reinstall. The changes here only really make an impact if you are running with small problems at the strong scaling limit. Does that apply?
This is a bit of a refactor of `parallel_cache` to try and address the performance issues reported in #3983. Fixes #3983.

TODOs

The main changes are:

- Renamed `comm_getter` to `get_comm` and `cache_factory` to `make_cache`.
- Avoided computing `as_hexdigest` every time we do a cache lookup. I think this was the main cause of the slowdown.
- Removed `type(make_cache()).__class__.__name__`, which was unsafe and slow (?).

Using the example from #3983, the performance improvement takes us (with a hot cache) from 31s to 14s, so I must be doing something right.
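To illustrate the digest-avoidance idea, here is a minimal, hypothetical sketch (not the actual `parallel_cache` implementation): a two-level cache where the fast in-memory level is indexed by the raw hashable key, so the expensive `as_hexdigest`-style hashing is only paid on a memory miss, when a string key is needed for the slower (e.g. on-disk) level. All names below (`TwoLevelCache`, `as_hexdigest`, `get`) are illustrative assumptions, not this PR's API.

```python
import hashlib
import pickle


def as_hexdigest(key):
    # Illustrative stand-in for an expensive key-hashing step:
    # hash a pickled representation of the key.
    return hashlib.sha256(pickle.dumps(key)).hexdigest()


class TwoLevelCache:
    """Memory-first cache sketch.

    The raw (hashable) key indexes the fast dict directly, so the hot
    path does no hashing. The hexdigest is computed lazily, only when we
    fall through to the slow level (a dict here, standing in for disk).
    """

    def __init__(self):
        self.memory = {}  # raw key -> value
        self.disk = {}    # hexdigest -> value

    def get(self, key, compute):
        try:
            return self.memory[key]        # hot path: no hashing at all
        except KeyError:
            digest = as_hexdigest(key)     # only pay for this on a miss
            if digest in self.disk:
                value = self.disk[digest]
            else:
                value = compute()
                self.disk[digest] = value
            self.memory[key] = value
            return value


cache = TwoLevelCache()
result = cache.get(("square", 3), lambda: 3 ** 2)
```

Repeated lookups with the same key then hit `self.memory` directly and never touch the hashing code, which is the behaviour the change list above is aiming for.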