Strange initmpi behavior #12

Open
zuoshifan opened this issue Oct 23, 2014 · 2 comments

@zuoshifan
Collaborator

While the following test script passes,

from mpi4py import MPI
from scalapy import core

comm = MPI.COMM_WORLD

rank = comm.rank
size = comm.size

if size != 4:
    raise Exception("Test needs 4 processes.")


def test_Dup():

    core.initmpi([2, 2], block_shape=[5, 5])  # comment this line out to reproduce the failure described below

    newcomm = comm.Dup()
    core.initmpi([2, 2], block_shape=[5, 5], comm=newcomm)


if __name__ == '__main__':
    test_Dup()

it fails if I comment out core.initmpi([2, 2], block_shape=[5, 5]). The failure message is

scalapy.blacs.BLACSException: Grid initialisation failed.

This is strange because I think grid initialization on newcomm should not depend on the original grid initialization on MPI.COMM_WORLD. I don't know whether this is a bug or not.

@zuoshifan
Collaborator Author

It seems one must first initialize a BLACS grid on MPI_COMM_WORLD before one can initialize BLACS grids on other MPI communicators. Why?
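
As a workaround, a minimal sketch that repeats the pattern from the test script above (the [2, 2] grid and [5, 5] block shape are just the values used in this issue): initialise a grid on MPI.COMM_WORLD before touching any other communicator, and the later initmpi call on the duplicated communicator no longer raises the exception.

from mpi4py import MPI
from scalapy import core

comm = MPI.COMM_WORLD

# Run with 4 MPI processes so the 2x2 grid can be formed.
# Initialise the default grid on MPI.COMM_WORLD first ...
core.initmpi([2, 2], block_shape=[5, 5])

# ... and only then initialise grids on other communicators,
# e.g. a duplicate of COMM_WORLD.
newcomm = comm.Dup()
core.initmpi([2, 2], block_shape=[5, 5], comm=newcomm)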
