To give a simplified example:
If you have 50000 permutations, 5 jobs, and fsaverage source space data for a group of subjects, these dot products become heavy, since via the array split each job's `perms` would be of shape (10000, n_samples).
Symptoms:
computers die once they hit memory and swap limits
We should maybe change the parallelization and re-evaluate speed/memory tradeoffs.
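To make the memory footprint concrete, here is a rough back-of-the-envelope sketch; only the 50000-permutation / 5-job split comes from the example above, while the subject count and fsaverage vertex count are assumed purely for illustration:

```python
import numpy as np

# Only the 50000-permutation / 5-job split is from the example above;
# 20 subjects and 20484 fsaverage vertices (a single time point) are
# assumed numbers for illustration.
n_perms_per_job = 50000 // 5   # 10000 rows of sign flips per job
n_samples = 20                 # assumed number of subjects
n_tests = 20484                # assumed number of spatial tests (vertices)

perms = np.random.choice([-1.0, 1.0], size=(n_perms_per_job, n_samples))
X = np.random.randn(n_samples, n_tests)

# Each job materializes the full (n_perms_per_job, n_tests) product:
# stats = np.dot(perms, X)
print(n_perms_per_job * n_tests * 8 / 1e9, "GB per job (float64), times n_jobs")
```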
It turns out it's simply the dot products' memory consumption.
One big dot product (n_jobs=1) is still OK, but all the copies generated by the parallel jobs bring computers down.
We could write an inner loop that specifies a buffer size and makes sure the number of permutations processed at once does not exceed 1000 or so.
Yeah, it should be simple enough to do it in chunks; the speed penalty should be small. You can even just keep track of the max on each loop, since at the end that's what it's doing anyway.
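A minimal sketch of that chunked inner loop, assuming the quantity we ultimately need per permutation is the max over tests of `|perms @ X|`; the function name, the `buffer_size` default, and the shapes are illustrative, not the actual code in `permutations.py`:

```python
import numpy as np

def max_stat_chunked(X, perms, buffer_size=1000):
    """Running max of |perms @ X| over tests, computed in chunks.

    X is (n_samples, n_tests), perms is (n_permutations, n_samples) of
    sign flips; only `buffer_size` rows of the product exist at once.
    """
    n_perms = perms.shape[0]
    max_abs = np.empty(n_perms)
    for start in range(0, n_perms, buffer_size):
        stop = min(start + buffer_size, n_perms)
        # (buffer_size, n_tests) intermediate instead of (n_perms, n_tests)
        chunk = np.dot(perms[start:stop], X)
        max_abs[start:stop] = np.abs(chunk).max(axis=1)
    return max_abs
```

Only a `(buffer_size, n_tests)` intermediate exists at any time, so peak memory is bounded by the buffer size rather than by the total number of permutations, and the Python-level loop overhead should be negligible next to the BLAS calls.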
I suspect the reason is here:
https://github.com/mne-tools/mne-python/blob/master/mne/stats/permutations.py#L54