WIP: Interpolator class to cache interpolation matrix #4658

Closed

Conversation

christianbrodbeck
Member

Addressing #4651

This should be the bare bones of the refactoring.

@larsoner is this enough to make it savable?

And what additional information should we store? The list of bad channels might be good, to make sure it's the same as in inst before applying?
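
For illustration, a minimal sketch of what such a savable object might look like (the class name `Interpolator` follows the PR title, but the attributes, the `apply` method, and the use of the private `_data` buffer are assumptions for this sketch, not the PR's actual code):

```python
import numpy as np


class Interpolator:
    """Hypothetical container caching a bad-channel interpolation matrix."""

    def __init__(self, mapping, ch_names, bads):
        self.mapping = np.asarray(mapping)  # shape (n_bads, n_goods)
        self.ch_names = list(ch_names)      # all channel names, in order
        self.bads = list(bads)              # bad channel names

    def apply(self, inst):
        """Overwrite the bad channels of a preloaded Raw-like ``inst`` in place."""
        if inst.info['bads'] != self.bads:
            raise ValueError('Bad channels of inst differ from the ones '
                             'this interpolator was computed for.')
        goods = [ch for ch in self.ch_names if ch not in self.bads]
        good_idx = [inst.ch_names.index(ch) for ch in goods]
        bad_idx = [inst.ch_names.index(ch) for ch in self.bads]
        # data is assumed to be (n_channels, n_times)
        inst._data[bad_idx] = self.mapping @ inst._data[good_idx]
        return inst
```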

@codecov-io

codecov-io commented Oct 11, 2017

Codecov Report

Merging #4658 into master will decrease coverage by 0.02%.
The diff coverage is 72.5%.

@@            Coverage Diff             @@
##           master    #4658      +/-   ##
==========================================
- Coverage   87.67%   87.65%   -0.03%     
==========================================
  Files         351      351              
  Lines       66609    68483    +1874     
  Branches    11548    12469     +921     
==========================================
+ Hits        58401    60026    +1625     
- Misses       5253     5504     +251     
+ Partials     2955     2953       -2

@jasmainak
Member

practically speaking, how much speed gain do you observe for EEG and for MEG by caching these matrices?

@christianbrodbeck
Member Author

computing one interpolation matrix takes on the order of seconds for me

@jasmainak
Member

also for EEG?

@christianbrodbeck
Member Author

I haven't looked at EEG, only MEG.

@larsoner
Member

And what additional information should we store? The list of bad channels might be good, to make sure it's the same as in inst before applying?

I think the things we'd need to check would be:

  1. The set (and order) of good and bad channel names would need to match exactly for it to operate.
  2. The sensor coordinates in the head coordinate frame must match to numerical precision.
  3. The origin (for MEG, maybe also EEG?) of the approximation must match to numerical precision.

I think this is enough to guarantee that the same interpolator can be used.
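
A sketch of how those three checks could be implemented, assuming the cached object carries hypothetical `ch_names`, `bads`, `pos` (head-frame sensor positions), and `origin` attributes:

```python
import numpy as np


def check_compatibility(interp, info, origin, atol=1e-12):
    """Sketch of the three compatibility checks listed above."""
    # 1. Good/bad channel names must match exactly, including order.
    if interp.ch_names != info['ch_names'] or interp.bads != info['bads']:
        raise ValueError('Channel names or bad channels do not match.')
    # 2. Sensor locations in head coordinates must match to precision.
    pos = np.array([ch['loc'][:3] for ch in info['chs']])
    if not np.allclose(interp.pos, pos, atol=atol):
        raise ValueError('Sensor positions do not match.')
    # 3. The origin of the approximation must match to precision.
    if not np.allclose(interp.origin, origin, atol=atol):
        raise ValueError('Origins do not match.')
```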

@jasmainak
Member

jasmainak commented Oct 12, 2017

okay. IMO, it makes sense to cache only self_dots and cross_dots, as EEG interpolation is rather fast

@christianbrodbeck
Member Author

@jasmainak If we already go through the trouble of managing a cached object, is there an advantage to not caching the rest? I'm also thinking of epoch-wise interpolation, where you might want to apply an interpolator hundreds of times, so it might be preferable to store the highest level possible.
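
To illustrate the epoch-wise reuse described here, a rough sketch of re-applying one cached mapping matrix to every epoch (it assumes the hypothetical `Interpolator` sketch above and does not handle per-epoch bad channels, which is the limitation raised in the next comment):

```python
import numpy as np


def apply_to_epochs(interp, epochs):
    """Apply a cached interpolator (see sketch above) to epoched data."""
    data = epochs.get_data()  # (n_epochs, n_channels, n_times), a copy
    good_idx = [epochs.ch_names.index(ch) for ch in interp.ch_names
                if ch not in interp.bads]
    bad_idx = [epochs.ch_names.index(ch) for ch in interp.bads]
    # One matrix product per epoch; the expensive matrix is reused.
    data[:, bad_idx] = np.einsum('bg,egt->ebt',
                                 interp.mapping, data[:, good_idx])
    return data
```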

@jasmainak
Member

@christianbrodbeck from what I understood, you only store the mapping_matrix but not self_dots and cross_dots in the Interpolator object. The problem with this approach is that it won't help in epoch-wise interpolation if different epochs have different bad channels ... (if all epochs have the same bad channels, you might as well do the interpolation on the raw object once before epoching)

My first thought had been to compute an n_channels x n_channels mapping matrix and slice it, but it cannot be sliced to give you a mapping matrix of n_channels x n_bads. The math doesn't work out. You can only slice the self_dots and cross_dots and then you have to assemble the mapping_matrix again, but that already saves you significant computation
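
A rough numpy sketch of that idea, assuming the full n_channels x n_channels matrix of sensor dot products has been cached (the regularized pseudo-inverse is only illustrative, not MNE's actual regularization scheme):

```python
import numpy as np


def assemble_mapping(dots, good_idx, bad_idx, reg=1e-6):
    """Re-assemble a mapping matrix from cached sensor dot products."""
    self_dots = dots[np.ix_(good_idx, good_idx)]   # (n_goods, n_goods)
    cross_dots = dots[np.ix_(bad_idx, good_idx)]   # (n_bads, n_goods)
    # The inverse depends on which channels are good, so a precomputed
    # mapping matrix cannot simply be sliced; it has to be rebuilt here.
    inv = np.linalg.pinv(self_dots + reg * np.eye(len(good_idx)))
    return cross_dots @ inv                        # (n_bads, n_goods)
```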

@larsoner
Member

You can only slice the self_dots and cross_dots and then you have to assemble the mapping_matrix again, but that already saves you significant computation

If this is indeed the bottleneck, then you would only ever need one of these for each MEG system, since the sensors do not move. That would be nice. However, for EEG I doubt it's this simple since it uses some other interpolation mode.

@christianbrodbeck this one is pretty old, perhaps we should close and reconsider the best way to cache the interpolation next time someone hits a need for it?

@larsoner
Member

larsoner commented Jul 9, 2019

I'll close since it seems unlikely that this will be implemented soon, and rebases are needed anyway

larsoner closed this Jul 9, 2019