MPI shared memory #14
base: unstable
Conversation
Force-pushed from 92e2e2b to 1738e29.
Can we make this compatible with non-MPI builds?
Force-pushed from 7a4661b to 9b51e65.
This adds a new abstraction for MPI_Group, enabling use of the post-start-complete-wait RMA cycle. It also adds documentation and more tests. Co-authored-by: Mohamed Aziz Bellaaj <[email protected]>
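For readers unfamiliar with the post-start-complete-wait (PSCW) cycle, here is a minimal sketch in plain MPI-3 calls; the wrapper types added in this PR are not shown, so the names below carry no assumptions about the new API. Run with at least 2 ranks.

```cpp
#include <mpi.h>
#include <vector>

// Minimal sketch of the post-start-complete-wait (PSCW) RMA cycle using
// plain MPI-3 calls. Rank 0 exposes a window; rank 1 writes into it.
int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  std::vector<double> buf(4, 0.0);
  MPI_Win win;
  MPI_Win_create(buf.data(), buf.size() * sizeof(double), sizeof(double),
                 MPI_INFO_NULL, MPI_COMM_WORLD, &win); // collective

  MPI_Group world_group;
  MPI_Comm_group(MPI_COMM_WORLD, &world_group);

  if (rank == 0) {
    // Target: expose the window to rank 1, then wait for its epoch to end.
    int ranks[] = {1};
    MPI_Group origin_group;
    MPI_Group_incl(world_group, 1, ranks, &origin_group);
    MPI_Win_post(origin_group, 0, win);
    MPI_Win_wait(win);
    MPI_Group_free(&origin_group);
  } else if (rank == 1) {
    // Origin: start an access epoch on rank 0, put data, complete.
    int ranks[] = {0};
    MPI_Group target_group;
    MPI_Group_incl(world_group, 1, ranks, &target_group);
    MPI_Win_start(target_group, 0, win);
    double value = 42.0;
    MPI_Put(&value, 1, MPI_DOUBLE, /*target_rank=*/0, /*target_disp=*/0, 1,
            MPI_DOUBLE, win);
    MPI_Win_complete(win);
    MPI_Group_free(&target_group);
  }

  MPI_Group_free(&world_group);
  MPI_Win_free(&win);
  MPI_Finalize();
}
```

The point of the group abstraction is visible here: post and start each take an MPI_Group naming only the ranks that participate in the epoch, unlike fence-based synchronization, which is collective over the whole communicator.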
- update docs; wrap MPI calls with check_mpi_call; remove noexcept where not necessary
- update docs; simplify implementation
- update docs; remove get_attr and return the stored attributes instead; remove noexcept
- update docs; remove noexcept where not necessary
For consistency with mpi::window, which uses comm_, rename the communicator member variable from com_ to comm_ throughout the mpi::communicator class.
Change window<T>::size() to return the element count for consistency; update the test expectation accordingly. Co-Authored-By: Claude <[email protected]>
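For context on the two conventions: raw MPI reports a window's size in bytes via the MPI_WIN_SIZE attribute, whereas window<T>::size() now returns the number of elements. A standalone illustration using plain MPI only (the wrapper itself is not shown):

```cpp
#include <mpi.h>
#include <cassert>

// Raw MPI reports window sizes in bytes (MPI_WIN_SIZE attribute);
// window<T>::size() now returns the byte size divided by sizeof(T).
int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);
  double *ptr;
  MPI_Win win;
  MPI_Win_allocate(100 * sizeof(double), sizeof(double), MPI_INFO_NULL,
                   MPI_COMM_WORLD, &ptr, &win);

  MPI_Aint *byte_size;
  int flag;
  MPI_Win_get_attr(win, MPI_WIN_SIZE, &byte_size, &flag);
  assert(flag && *byte_size == 100 * sizeof(double)); // bytes, not elements
  // A window<double> wrapping this allocation would report size() == 100.

  MPI_Win_free(&win);
  MPI_Finalize();
}
```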
@Wentzell LGTM, only a minor nitpick.
Co-authored-by: Henri Menke <[email protected]>
Thank you also @Thoemi09 for your improvements.
LGTM, but could you please not squash on merge? Keeping the individual commits makes it much easier to bisect later if something goes wrong. You can clean up the history and rebase on unstable if you want.
I would have been OK with a full squash, but if you want to group things into a few commits along the way, feel free to suggest a grouping.
All tests were run with multiple nodes and different numbers of slots on each node:
Some ideas:

MPI allocator
Similar to std::allocator, implement a shared_allocator and a distributed_shared_allocator, such that one can use e.g. std::vector<double, mpi::shared_allocator<double>>. A sketch of what this could look like follows below.
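A minimal sketch of such a shared_allocator, built on MPI_Win_allocate_shared. The name, the communicator member, and the overall design are illustrative assumptions, not the final API:

```cpp
#include <mpi.h>
#include <cstddef>

// Sketch of a node-local shared_allocator on top of MPI_Win_allocate_shared.
// Name and design are illustrative assumptions. Allocation is collective:
// all ranks of shm_comm must call allocate() together; rank 0 provides the
// memory and the other ranks attach to it.
template <typename T> struct shared_allocator {
  using value_type = T;

  MPI_Comm shm_comm; // node-local communicator, e.g. from split_shared()

  T *allocate(std::size_t n) {
    int rank;
    MPI_Comm_rank(shm_comm, &rank);
    MPI_Win win;
    T *ptr = nullptr;
    MPI_Aint bytes = (rank == 0) ? static_cast<MPI_Aint>(n * sizeof(T)) : 0;
    MPI_Win_allocate_shared(bytes, sizeof(T), MPI_INFO_NULL, shm_comm,
                            &ptr, &win);
    if (rank != 0) {
      MPI_Aint size;
      int disp_unit;
      // Attach to rank 0's allocation.
      MPI_Win_shared_query(win, 0, &size, &disp_unit, &ptr);
    }
    // A real implementation must remember 'win' (e.g. in a map keyed by
    // ptr) so that deallocate() can call MPI_Win_free on it.
    return ptr;
  }

  void deallocate(T *, std::size_t) noexcept {
    // Look up the window belonging to the pointer and MPI_Win_free it
    // (bookkeeping omitted in this sketch).
  }
};
```

With this, a std::vector<double, shared_allocator<double>> constructed collectively on a node would place its storage in node-shared memory; element access then still needs the synchronization discussed below.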
Questions:

How does the allocator obtain the right communicator? For shared memory one can simply call split_shared() on the default communicator, but for distributed shared memory there needs to be internode communication, and that is not easily guessed from the default communicator. Maybe a global hash table with allocation information is needed?

On top of that, for distributed shared memory, access must be fenced and broadcast between nodes. That's not so easy to abstract away.
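To make the fencing issue concrete, here is a standalone illustration in plain MPI-3: even for a node-local shared window, a write by one rank is only guaranteed visible to the others after explicit synchronization, here a fence epoch. The internode broadcast part, which this sketch does not cover, is exactly what is hard to abstract away.

```cpp
#include <mpi.h>
#include <cstdio>

// Writes to a shared-memory window only become reliably visible to other
// ranks after a synchronization step, here MPI_Win_fence.
int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);
  MPI_Comm shm;
  MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                      MPI_INFO_NULL, &shm);
  int rank;
  MPI_Comm_rank(shm, &rank);

  MPI_Win win;
  double *ptr = nullptr;
  MPI_Aint bytes = (rank == 0) ? sizeof(double) : 0;
  MPI_Win_allocate_shared(bytes, sizeof(double), MPI_INFO_NULL, shm,
                          &ptr, &win);
  if (rank != 0) {
    MPI_Aint size;
    int disp_unit;
    MPI_Win_shared_query(win, 0, &size, &disp_unit, &ptr); // rank 0's base
  }

  MPI_Win_fence(0, win);        // open epoch
  if (rank == 0) *ptr = 42.0;   // one rank writes
  MPI_Win_fence(0, win);        // close epoch: all ranks now see the value
  std::printf("rank %d sees %f\n", rank, *ptr);

  MPI_Win_free(&win);
  MPI_Comm_free(&shm);
  MPI_Finalize();
}
```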