
Commit b584c74: Some basic clean up and doc updates

Thoemi09 authored and Wentzell committed
1 parent 0380007

File tree: 8 files changed, +127 −53 lines


c++/mpi/chunk.hpp (+3 −3)

@@ -22,11 +22,11 @@
 #pragma once
 
 #include "./communicator.hpp"
+#include "./macros.hpp"
 
 #include <itertools/itertools.hpp>
 
 #include <iterator>
-#include <stdexcept>
 #include <utility>
 
 namespace mpi {
@@ -39,7 +39,7 @@ namespace mpi {
    * @details The optional parameter `min_size` can be used to first divide the range into equal parts of size
    * `min_size` before distributing them as evenly as possible across the number of specified subranges.
    *
-   * It throws an exception if `min_size < 1` or if it is not a divisor of `end`.
+   * It is expected that `min_size > 0` and that `min_size` is a divisor of `end`.
    *
    * @param end End of the integer range `[0, end)`.
    * @param nranges Number of subranges.
@@ -48,7 +48,7 @@ namespace mpi {
    * @return Length of the i<sup>th</sup> subrange.
    */
   [[nodiscard]] inline long chunk_length(long end, int nranges, int i, long min_size = 1) {
-    if (min_size < 1 || end % min_size != 0) throw std::runtime_error("Error in mpi::chunk_length: min_size must be a divisor of end");
+    EXPECTS_WITH_MESSAGE(min_size > 0 && end % min_size == 0, "Error in mpi::chunk_length: min_size must be a divisor of end");
     auto [node_begin, node_end] = itertools::chunk_range(0, end / min_size, nranges, i);
     return (node_end - node_begin) * min_size;
   }
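Not part of the commit, but for context: a minimal usage sketch of `mpi::chunk_length` under the precondition stated above (`min_size > 0` and `min_size` a divisor of `end`). The umbrella header `<mpi/mpi.hpp>` is assumed, as in the library's examples.

```cpp
#include <mpi/mpi.hpp>
#include <iostream>

int main() {
  // Split the range [0, 10) into 4 subranges, keeping blocks of min_size = 2
  // together; min_size divides end, so the precondition holds.
  long total = 0;
  for (int i = 0; i < 4; ++i) {
    long const len = mpi::chunk_length(10, 4, i, 2);
    std::cout << "length of subrange " << i << ": " << len << "\n";
    total += len;
  }
  std::cout << "total: " << total << "\n"; // the lengths always sum to end, i.e. 10
}
```

Note that with this change a violated precondition no longer throws `std::runtime_error`; it triggers `EXPECTS_WITH_MESSAGE`, which is only active when `NDEBUG` is not defined (see the `macros.hpp` change below).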

c++/mpi/datatypes.hpp (+3 −2)

@@ -91,8 +91,9 @@ namespace mpi {
   template <typename T> struct mpi_type<const T> : mpi_type<T> {};
 
   /**
-   * @brief Type trait to check if a type T has a corresponding MPI datatype, i.e. if mpi::mpi_type has been specialized.
-   * @tparam T Type to be checked.
+   * @brief Type trait to check if a type `T` has a corresponding MPI datatype, i.e. if mpi::mpi_type has been
+   * specialized.
+   * @tparam `T` Type to be checked.
    */
   template <typename T, typename = void> constexpr bool has_mpi_type = false;
 
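A compile-time sketch (not from the commit) of the `has_mpi_type` trait documented above; it assumes, as the library intends, that arithmetic types such as `int` have an mpi::mpi_type specialization while an unregistered user type does not.

```cpp
#include <mpi/mpi.hpp>

struct bar {}; // hypothetical type without an mpi::mpi_type specialization

static_assert(mpi::has_mpi_type<int>);    // int has a corresponding MPI datatype
static_assert(mpi::has_mpi_type<double>); // so does double
static_assert(!mpi::has_mpi_type<bar>);   // bar does not

int main() {}
```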

c++/mpi/macros.hpp (+6 −6)

@@ -87,12 +87,12 @@
 
 #ifdef NDEBUG
 
-#define EXPECTS(X)
-#define ASSERT(X)
-#define ENSURES(X)
-#define EXPECTS_WITH_MESSAGE(X, ...)
-#define ASSERT_WITH_MESSAGE(X, ...)
-#define ENSURES_WITH_MESSAGE(X, ...)
+#define EXPECTS(X) {}
+#define ASSERT(X) {}
+#define ENSURES(X) {}
+#define EXPECTS_WITH_MESSAGE(X, ...) {}
+#define ASSERT_WITH_MESSAGE(X, ...) {}
+#define ENSURES_WITH_MESSAGE(X, ...) {}
 
 #else
 
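A small sketch (not from the commit) of how these contract macros are used in practice, e.g. the `EXPECTS_WITH_MESSAGE` call introduced in `chunk.hpp` above; in release builds (`NDEBUG` defined) each macro now expands to an empty block and the check disappears entirely.

```cpp
#include <mpi/mpi.hpp>

long half(long n) {
  // Evaluated only when NDEBUG is not defined; otherwise expands to {}.
  EXPECTS_WITH_MESSAGE(n % 2 == 0, "half: n must be even");
  return n / 2;
}

int main() { return half(42) == 21 ? 0 : 1; }
```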

doc/DoxygenLayout.xml (+2 −7)

@@ -22,6 +22,7 @@
       <tab type="user" url="@ref ex1" title="Example 1: Hello world!"/>
       <tab type="user" url="@ref ex2" title="Example 2: Use monitor to communicate errors"/>
       <tab type="user" url="@ref ex3" title="Example 3: Custom type and operator"/>
+      <tab type="user" url="@ref ex4" title="Example 4: Provide custom specializations"/>
     </tab>
     <tab type="usergroup" url="@ref documentation" title="API Documentation">
       <tab type="usergroup" url="@ref mpi_essentials" title="MPI essentials">
@@ -50,17 +51,11 @@
         </tab>
       </tab>
       <tab type="user" url="@ref coll_comm" title="Collective MPI communication"/>
-      <tab type="usergroup" url="@ref mpi_lazy" title="Lazy MPI communication">
-        <tab type="user" url="@ref mpi::lazy" title="lazy"/>
-        <tab type="user" url="@ref mpi::tag::gather" title="gather tag"/>
-        <tab type="user" url="@ref mpi::tag::reduce" title="reduce tag"/>
-        <tab type="user" url="@ref mpi::tag::scatter" title="scatter tag"/>
-      </tab>
       <tab type="usergroup" url="@ref event_handling" title="Event handling">
         <tab type="user" url="@ref mpi::monitor" title="monitor"/>
       </tab>
       <tab type="usergroup" url="@ref utilities" title="Utilities">
-        <tab type="user" url="@ref mpi::contiguous_sized_range" title="contiguous_sized_range"/>
+        <tab type="user" url="@ref mpi::MPICompatibleRange" title="MPICompatibleRange"/>
       </tab>
       <tab type="filelist" visible="yes" title="" intro=""/>
     </tab>

doc/documentation.md (+3 −20)

@@ -35,17 +35,7 @@ Furthermore, it offers tools to simplify the creation of custom MPI operations u
 
 ## Collective MPI communication
 
-The following generic collective communications are defined in @ref coll_comm "Collective MPI communication":
-
-* @ref mpi::all_gather "all_gather"
-* @ref mpi::all_reduce "all_reduce"
-* @ref mpi::all_reduce_in_place "all_reduce_in_place"
-* @ref mpi::broadcast "broadcast"
-* @ref mpi::gather "gather"
-* @ref mpi::reduce "reduce"
-* @ref mpi::reduce_in_place "reduce_in_place"
-* @ref mpi::scatter "scatter"
-
+**mpi** provides several generic @ref coll_comm "collective MPI communication" routines.
 They offer a much simpler interface than their MPI C library analogs.
 For example, the following broadcasts a `std::vector<double>` from the process with rank 0 to all others:
 
@@ -61,18 +51,11 @@ MPI_Bcast(vec.data(), static_cast<int>(vec.size()), MPI_DOUBLE, 0, MPI_COMM_WORL
 
 Under the hood, the generic mpi::broadcast implementation calls the specialized
 @ref "mpi::mpi_broadcast(std::vector< T >&, mpi::communicator, int)".
-The other generic functions are implemented in the same way.
-See the "Functions" section in @ref coll_comm to check which datatypes are supported out of the box.
+Other generic functions in **mpi** work similarly.
+See the "Functions" section in @ref coll_comm to check which datatypes and MPI operations are supported out of the box.
 
 In case your datatype is not supported, you are free to provide your own specialization.
 
-Furthermore, there are several functions to simplify communicating generic, contiguous ranges:
-- mpi::broadcast_range,
-- mpi::gather_range,
-- mpi::reduce_in_place_range,
-- mpi::reduce_range and
-- mpi::scatter_range.
-
 ## Lazy MPI communication
 
 @ref mpi_lazy can be used to provide collective MPI communication for lazy expression types.
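The snippet referenced in this section is elided from the diff context; what follows is a hedged reconstruction of the comparison it describes (broadcasting a `std::vector<double>` from rank 0), based on the `MPI_Bcast` analog visible in the hunk header above:

```cpp
#include <mpi/mpi.hpp>
#include <vector>

int main(int argc, char *argv[]) {
  mpi::environment env(argc, argv);
  mpi::communicator world;

  std::vector<double> vec(5);
  if (world.rank() == 0) vec = {1.0, 2.0, 3.0, 4.0, 5.0};

  // generic library call; dispatches to the specialized
  // mpi_broadcast(std::vector<T> &, mpi::communicator, int)
  mpi::broadcast(vec, world);

  // roughly equivalent raw MPI C call:
  // MPI_Bcast(vec.data(), static_cast<int>(vec.size()), MPI_DOUBLE, 0, MPI_COMM_WORLD);
}
```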

doc/ex4.md (new file, +63 −0)

@@ -0,0 +1,63 @@
+@page ex4 Example 4: Provide custom specializations
+
+[TOC]
+
+In this example, we show how to write a specialized `mpi_reduce_into` for a custom type.
+
+```cpp
+#include <mpi/mpi.hpp>
+#include <iostream>
+#include <vector>
+
+// Custom type.
+class foo {
+  public:
+  // Constructor.
+  foo(int x = 5) : x_(x) {}
+
+  // Get the value stored in the class.
+  int x() const { return x_; }
+
+  // Specialization of mpi_reduce_into for the custom type.
+  friend void mpi_reduce_into(foo const &f_in, foo &f_out, mpi::communicator c = {}, int root = 0, bool all = false, MPI_Op op = MPI_SUM) {
+    mpi::reduce_into(f_in.x_, f_out.x_, c, root, all, op);
+  }
+
+  private:
+  int x_;
+};
+
+int main(int argc, char *argv[]) {
+  // initialize MPI environment
+  mpi::environment env(argc, argv);
+  mpi::communicator world;
+
+  // create a vector of foo objects
+  std::vector<foo> vec {foo{1}, foo{2}, foo{3}, foo{4}, foo{5}};
+
+  // reduce the vector of foo objects
+  auto result = mpi::reduce(vec, world);
+
+  // print the result on rank 0
+  if (world.rank() == 0) {
+    std::cout << "Reduced vector: ";
+    for (auto const &f : result) std::cout << f.x() << " ";
+    std::cout << "\n";
+  }
+}
+```
+
+Output (running with `-n 4`):
+
+```
+Reduced vector: 4 8 12 16 20
+```
+
+Note that by providing a simple `mpi_reduce_into` for our custom `foo` type, we are able to reduce a `std::vector` of
+`foo` objects without any additional work.
+
+Under the hood, each `foo` object is reduced separately using the above specialization.
+For large amounts of data or in performance-critical code sections, this might not be desired.
+In such a case, it is usually better to make the type MPI compatible such that the reduction can be done with a single
+call to the MPI C library.
+See @ref ex3 for more details.

doc/examples.md (+7 −4)

@@ -5,13 +5,16 @@
 - @ref ex1 "Example 1: Hello world!"
 - @ref ex2 "Example 2: Use monitor to communicate errors"
 - @ref ex3 "Example 3: Custom type and operator"
+- @ref ex4 "Example 4: Provide custom specializations"
 
 @section compiling Compiling the examples
 
-All examples have been compiled on a MacBook Pro with an Apple M2 Max chip and [open-mpi](https://www.open-mpi.org/) 4.1.5.
-We further used clang 16.0.6 together with cmake 3.27.2.
+All examples have been compiled on a MacBook Pro with an Apple M2 Max chip and [open-mpi](https://www.open-mpi.org/)
+5.0.1.
+We further used clang 19.1.7 together with cmake 3.31.5.
 
-Assuming that the actual example code is in a file `main.cpp`, the following generic `CMakeLists.txt` should work for all examples:
+Assuming that the actual example code is in a file `main.cpp`, the following generic `CMakeLists.txt` should work for
+all examples:
 
 ```cmake
 cmake_minimum_required(VERSION 3.20)
@@ -28,7 +31,7 @@ include (FetchContent)
 FetchContent_Declare(
   mpi
   GIT_REPOSITORY https://github.com/TRIQS/mpi.git
-  GIT_TAG 1.2.x
+  GIT_TAG 1.3.x
 )
 FetchContent_MakeAvailable(mpi)
 

doc/groups.dox (+40 −11)

@@ -51,17 +51,46 @@
  * @brief Generic and specialized implementations for a subset of collective MPI communications (broadcast, reduce,
  * gather, scatter).
  *
- * @details The generic functions (mpi::broadcast, mpi::reduce, mpi::scatter, ...) call their more specialized
- * counterparts (e.g. mpi::mpi_broadcast, mpi::mpi_reduce, mpi::mpi_scatter, ...).
- *
- * **mpi** provides (some) implementations for
- * - scalar types that have a corresponding mpi::mpi_type,
- * - `std::vector` and `std::array` types with MPI compatible value types,
- * - `std::string` and
- * - `std::pair`.
- *
- * Furthermore, there are several functions to simplify communicating generic, contiguous ranges: mpi::broadcast_range,
- * mpi::gather_range, mpi::reduce_in_place_range, mpi::reduce_range and mpi::scatter_range.
+ * @details **mpi** provides several generic collective communication routines as well as specializations for certain
+ * common types. The generic functions usually simply forward the call to one of the specializations (`mpi_broadcast`,
+ * `mpi_gather`, `mpi_gather_into`, `mpi_reduce`, `mpi_reduce_into`, `mpi_scatter` or `mpi_scatter_into`) using ADL but
+ * can also perform some additional checks. It is therefore recommended to always use the generic versions when
+ * possible.
+ *
+ * Here is a short overview of the available generic functions:
+ * - mpi::broadcast: Calls the specialization `mpi_broadcast`.
+ * - mpi::gather: Calls the specialization `mpi_gather` if it is implemented. Otherwise, it calls mpi::gather_into with
+ *   a default-constructed output object.
+ * - mpi::gather_into: Calls the specialization `mpi_gather_into`.
+ * - mpi::reduce: Calls the specialization `mpi_reduce` if it is implemented. Otherwise, it calls mpi::reduce_into with
+ *   a default-constructed output object.
+ * - mpi::reduce_in_place: Calls the specialization `mpi_reduce_into` with the same input and output object.
+ * - mpi::reduce_into: Calls the specialization `mpi_reduce_into`.
+ * - mpi::scatter: Calls the specialization `mpi_scatter` if it is implemented. Otherwise, it calls mpi::scatter_into
+ *   with a default-constructed output object.
+ * - mpi::scatter_into: Calls the specialization `mpi_scatter_into`.
+ *
+ * In case all processes should receive the result of the MPI operation, one can use the convenience functions
+ * mpi::all_gather, mpi::all_gather_into, mpi::all_reduce, mpi::all_reduce_in_place or mpi::all_reduce_into. They
+ * forward the given arguments to their "non-all" counterparts with the `all` argument set to true.
+ *
+ * **mpi** provides various specializations for several types. For example,
+ * - for MPI compatible types, i.e. for types that have a corresponding mpi::mpi_type, it provides an
+ *   @ref "mpi::mpi_broadcast(T &x, mpi::communicator, int)" "mpi_broadcast",
+ *   @ref "mpi::mpi_reduce(T const &, mpi::communicator, int, bool, MPI_Op)" "mpi_reduce",
+ *   @ref "mpi::mpi_reduce_into(T const &, T &, mpi::communicator, int, bool, MPI_Op)" "mpi_reduce_into",
+ *   @ref "mpi::mpi_gather(T const &, mpi::communicator, int, bool)" "mpi_gather" and an
+ *   @ref "mpi::mpi_gather_into(T const &, R &&, mpi::communicator, int, bool)" "mpi_gather_into".
+ * - for strings, it provides an @ref "mpi::mpi_broadcast(std::string &, mpi::communicator, int)" "mpi_broadcast"
+ *   and an @ref "mpi::mpi_gather_into(std::string const &, std::string &, mpi::communicator, int, bool)"
+ *   "mpi_gather_into".
+ *
+ * Users are encouraged to implement their own specializations for their custom types or in case a specialization is
+ * missing (see e.g. @ref ex4).
+ *
+ * Furthermore, there are several functions to simplify communicating (contiguous) ranges: mpi::broadcast_range,
+ * mpi::gather_range, mpi::reduce_range and mpi::scatter_range. Some of these range functions are more generic than
+ * others. Please check the documentation of the specific function for more details.
  */
 
 /**
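To make the dispatch described in this doc comment concrete, here is a short sketch (not from the commit; it assumes the `<mpi/mpi.hpp>` umbrella header and that `int` is MPI compatible) contrasting mpi::reduce with its "all" convenience counterpart:

```cpp
#include <mpi/mpi.hpp>
#include <iostream>

int main(int argc, char *argv[]) {
  mpi::environment env(argc, argv);
  mpi::communicator world;

  // mpi::reduce forwards to the mpi_reduce specialization for MPI-compatible
  // types; only the root process (rank 0) receives the full sum of the ranks.
  int sum_on_root = mpi::reduce(world.rank(), world);

  // mpi::all_reduce forwards to its "non-all" counterpart with all = true,
  // so every process receives the sum.
  int sum_everywhere = mpi::all_reduce(world.rank(), world);

  if (world.rank() == 0) std::cout << sum_on_root << " == " << sum_everywhere << "\n";
}
```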
