
Update various generic communication #27

Merged 6 commits on Apr 3, 2025
52 changes: 32 additions & 20 deletions c++/mpi/array.hpp
@@ -16,7 +16,7 @@

/**
* @file
* @brief Provides an MPI broadcast, reduce, scatter and gather for std::vector.
* @brief Provides an MPI broadcast and reduce for `std::array`.
*/

#pragma once
@@ -29,6 +29,8 @@

#include <array>
#include <cstddef>
#include <type_traits>
#include <utility>

namespace mpi {

@@ -38,55 +40,65 @@ namespace mpi {
*/

/**
* @brief Implementation of an MPI broadcast for a std::arr.
* @brief Implementation of an MPI broadcast for a `std::array`.
*
* @details It simply calls mpi::broadcast_range with the input array.
* @details It calls mpi::broadcast_range with the given array.
*
* @tparam T Value type of the array.
* @tparam N Size of the array.
* @param arr std::array to broadcast.
* @param arr `std::array` to broadcast (into).
* @param c mpi::communicator.
* @param root Rank of the root process.
*/
template <typename T, std::size_t N> void mpi_broadcast(std::array<T, N> &arr, communicator c = {}, int root = 0) { broadcast_range(arr, c, root); }
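
As an illustrative sketch (assuming the library's umbrella header `mpi/mpi.hpp` and its `mpi::environment` RAII helper), the broadcast above might be used like this:

#include <mpi/mpi.hpp>

#include <array>

int main(int argc, char *argv[]) {
  mpi::environment env(argc, argv); // set up MPI for the lifetime of main
  mpi::communicator world;
  std::array<int, 3> arr{};
  if (world.rank() == 0) arr = {1, 2, 3};
  mpi::mpi_broadcast(arr, world, 0); // afterwards every rank holds {1, 2, 3}
}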

/**
* @brief Implementation of an in-place MPI reduce for a std::array.
* @brief Implementation of an MPI reduce for a `std::array`.
*
* @details It simply calls mpi::reduce_in_place_range with the given input array.
* @details It constructs the output array with its value type equal to the return type of `reduce(std::declval<T>())`
* and calls mpi::reduce_range with the input and constructed output array.
*
* Note that the output array will always have the same size as the input array, no matter if the rank receives the
* reduced data or not.
*
* @tparam T Value type of the array.
* @tparam N Size of the array.
* @param arr std::array to reduce.
* @param arr `std::array` to reduce.
* @param c mpi::communicator.
* @param root Rank of the root process.
* @param all Should all processes receive the result of the reduction.
* @param op `MPI_Op` used in the reduction.
* @return `std::array` containing the result of the reduction.
*/
template <typename T, std::size_t N>
void mpi_reduce_in_place(std::array<T, N> &arr, communicator c = {}, int root = 0, bool all = false, MPI_Op op = MPI_SUM) {
reduce_in_place_range(arr, c, root, all, op);
auto mpi_reduce(std::array<T, N> const &arr, communicator c = {}, int root = 0, bool all = false, MPI_Op op = MPI_SUM) {
using value_t = std::remove_cvref_t<decltype(reduce(std::declval<T>()))>;
std::array<value_t, N> res{};
reduce_range(arr, res, c, root, all, op);
return res;
}
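
A sketch of the value-returning reduce (same assumed setup as the broadcast sketch above); for an arithmetic value type `T` the deduced result type is `T` itself, so `out` is a `std::array<double, 2>` here:

#include <mpi/mpi.hpp>

#include <array>

int main(int argc, char *argv[]) {
  mpi::environment env(argc, argv);
  mpi::communicator world;
  std::array<double, 2> in = {1.0 * world.rank(), 1.0};
  auto out = mpi::mpi_reduce(in, world); // defaults: root = 0, all = false, op = MPI_SUM
  // on rank 0: out[0] == 0 + 1 + ... + (size - 1) and out[1] == world.size()
  // on the other ranks, out keeps its value-initialized (zero) contents
}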

/**
* @brief Implementation of an MPI reduce for a std::array.
* @brief Implementation of an MPI reduce for a `std::array` that reduces directly into an existing output array.
*
* @details It simply calls mpi::reduce_range with the given input array and an empty array of the same size.
* @details It calls mpi::reduce_range with the input and output array. The output array must be the same size as the
* input array on receiving ranks.
*
* @tparam T Value type of the array.
* @tparam N Size of the array.
* @param arr std::array to reduce.
* @tparam T1 Value type of the array to be reduced.
* @tparam N1 Size of the array to be reduced.
* @tparam T2 Value type of the array to be reduced into.
* @tparam N2 Size of the array to be reduced into.
* @param arr_in `std::array` to reduce.
* @param arr_out `std::array` to reduce into.
* @param c mpi::communicator.
* @param root Rank of the root process.
* @param all Should all processes receive the result of the reduction.
* @param op `MPI_Op` used in the reduction.
* @return std::array containing the result of each individual reduction.
*/
template <typename T, std::size_t N>
auto mpi_reduce(std::array<T, N> const &arr, communicator c = {}, int root = 0, bool all = false, MPI_Op op = MPI_SUM) {
std::array<regular_t<T>, N> res{};
reduce_range(arr, res, c, root, all, op);
return res;
template <typename T1, std::size_t N1, typename T2, std::size_t N2>
void mpi_reduce_into(std::array<T1, N1> const &arr_in, std::array<T2, N2> &arr_out, communicator c = {}, int root = 0, bool all = false,
MPI_Op op = MPI_SUM) {
reduce_range(arr_in, arr_out, c, root, all, op);
}
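
A sketch of the new `mpi_reduce_into`, which writes into a caller-provided output array, here used as an all-reduce (same assumed setup as above):

#include <mpi/mpi.hpp>

#include <array>

int main(int argc, char *argv[]) {
  mpi::environment env(argc, argv);
  mpi::communicator world;
  std::array<long, 4> in{};
  in.fill(world.rank());
  std::array<long, 4> out{}; // caller-provided output, same size as the input
  mpi::mpi_reduce_into(in, out, world, 0, /*all=*/true); // every rank receives the result
  // afterwards each rank holds out[i] == 0 + 1 + ... + (size - 1) for every i
}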

/** @} */
6 changes: 3 additions & 3 deletions c++/mpi/chunk.hpp
@@ -22,11 +22,11 @@
#pragma once

#include "./communicator.hpp"
#include "./macros.hpp"

#include <itertools/itertools.hpp>

#include <iterator>
#include <stdexcept>
#include <utility>

namespace mpi {
@@ -39,7 +39,7 @@ namespace mpi {
* @details The optional parameter `min_size` can be used to first divide the range into equal parts of size
* `min_size` before distributing them as evenly as possible across the number of specified subranges.
*
* It throws an exception if `min_size < 1` or if it is not a divisor of `end`.
* It is expected that `min_size > 0` and that `min_size` is a divisor of `end`.
*
* @param end End of the integer range `[0, end)`.
* @param nranges Number of subranges.
@@ -48,7 +48,7 @@
* @return Length of the i<sup>th</sup> subrange.
*/
[[nodiscard]] inline long chunk_length(long end, int nranges, int i, long min_size = 1) {
if (min_size < 1 || end % min_size != 0) throw std::runtime_error("Error in mpi::chunk_length: min_size must be a divisor of end");
EXPECTS_WITH_MESSAGE(min_size > 0 && end % min_size == 0, "Error in mpi::chunk_length: min_size must be a divisor of end");
auto [node_begin, node_end] = itertools::chunk_range(0, end / min_size, nranges, i);
return (node_end - node_begin) * min_size;
}
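
As an illustrative sketch (again assuming the umbrella header `mpi/mpi.hpp`), `chunk_length` gives each rank the length of its share of a range:

#include <mpi/mpi.hpp>

int main(int argc, char *argv[]) {
  mpi::environment env(argc, argv);
  mpi::communicator world;
  // split 12 iterations across the ranks in blocks whose sizes are multiples of min_size = 4
  [[maybe_unused]] long const my_len = mpi::chunk_length(12, world.size(), world.rank(), 4);
  // with 2 ranks: 12 / 4 = 3 blocks are split 2 + 1, so rank 0 gets 8 and rank 1 gets 4
}
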
5 changes: 3 additions & 2 deletions c++/mpi/datatypes.hpp
@@ -91,8 +91,9 @@ namespace mpi {
template <typename T> struct mpi_type<const T> : mpi_type<T> {};

/**
* @brief Type trait to check if a type T has a corresponding MPI datatype, i.e. if mpi::mpi_type has been specialized.
* @tparam T Type to be checked.
* @brief Type trait to check if a type `T` has a corresponding MPI datatype, i.e. if mpi::mpi_type has been
* specialized.
* @tparam `T` Type to be checked.
*/
template <typename T, typename = void> constexpr bool has_mpi_type = false;
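
As an illustrative sketch, the trait can be checked at compile time; the specializations asserted for builtin types below are assumptions about what the library provides and are not shown in this diff:

#include <mpi/mpi.hpp>

#include <string>

// builtin arithmetic types are expected to have an mpi_type specialization
static_assert(mpi::has_mpi_type<int>);
// the const specialization above forwards to mpi_type<double>
static_assert(mpi::has_mpi_type<const double>);
// no specialization exists for std::string, so the trait is false
static_assert(!mpi::has_mpi_type<std::string>);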
