
Consider whether the big array fwbatch should be malloc'ed at plan or at execute #480

Open
ahbarnett opened this issue Jul 11, 2024 · 5 comments


@ahbarnett
Collaborator

ahbarnett commented Jul 11, 2024

Martin's HVOX users want a small plan object (not sure why, since the big array is going to get malloc'ed at some point anyway)...
Let's discuss whether to malloc fwbatch in each execute call instead, as Martin was trying to do in the DUCC-FFT PR #463 (the per-call allocation overhead is negligible).
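
For concreteness, a schematic sketch of the two strategies (Python pseudostructure for illustration only, not FINUFFT's actual internals; names like `EagerPlan`/`LazyPlan` are made up):

```python
# Schematic sketch of the two allocation strategies under discussion;
# this illustrates the trade-off, NOT FINUFFT's actual internals.
import numpy as np

class EagerPlan:
    """Allocate fwbatch once, at plan time (current behaviour)."""
    def __init__(self, fine_shape, batch):
        # lives as long as the plan does, even between execute calls
        self.fwbatch = np.empty((batch, *fine_shape), dtype=np.complex128)

    def execute(self, data):
        ...  # spread into self.fwbatch, FFT, deconvolve

class LazyPlan:
    """Allocate fwbatch inside each execute call (proposed alternative)."""
    def __init__(self, fine_shape, batch):
        self.fine_shape, self.batch = fine_shape, batch  # plan stays small

    def execute(self, data):
        fwbatch = np.empty((self.batch, *self.fine_shape),
                           dtype=np.complex128)  # freed when the call returns
        ...  # spread into fwbatch, FFT, deconvolve
```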

@mreineck
Collaborator

To set the record straight: I misinterpreted the remarks I heard from the HVOX authors (sorry about that, and thanks again for the correction, @SepandKashani!). In the most recent version of their particular application the memory overhead doesn't seem to matter any more.

The situation I imagine is the following:

Assume you want to perform an approximate inverse NUFFT from a large set of nonuniform points to a large 3D grid, using CG iteration. Then (as far as I know, but I may be wrong) two finufft plans are needed, one of type 1 and one of type 2, which are executed alternately in every CG iteration. Both plans allocate memory for a copy of the fine grid, but at any point in time only one of them is needed; depending on the problem size, this can become a huge overhead (see the sketch at the end of this comment).
When FFTW is used, this is a bit tricky to avoid, since an FFTW plan typically stores pointers to its input and output, so that memory must live at least as long as the FFTW plan. For ducc's FFT this is not the case, so the fine grid could be allocated on demand while a particular FINUFFT plan is executed.
I'd consider this a low-hanging-fruit quality-of-implementation issue, but if the goal is absolute maximum performance, I agree it is better to allocate once during plan generation and avoid the overhead of allocation/deallocation in every plan execution.
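
Here is a minimal sketch of the two-plan pattern using finufft's Python interface (sizes are arbitrary and the CG updates themselves are omitted):

```python
# Two-plan CG-style pattern with finufft's Python interface; sizes arbitrary.
# Both plans stay alive for all iterations, so each holds its own fine grid.
import numpy as np
import finufft

n = (128, 128, 128)                    # uniform 3D mode grid
m = 10**6                              # number of nonuniform points
x, y, z = np.random.uniform(-np.pi, np.pi, (3, m))

plan1 = finufft.Plan(1, n, isign=+1)   # points -> grid
plan2 = finufft.Plan(2, n, isign=-1)   # grid -> points
plan1.setpts(x, y, z)
plan2.setpts(x, y, z)

f = np.zeros(n, dtype=np.complex128)   # current iterate on the grid
for _ in range(10):                    # schematic normal-equation loop
    c = plan2.execute(f)               # A f    (type 2)
    g = plan1.execute(c)               # A^H c  (type 1)
    ...                                # CG updates using g omitted
```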

@SepandKashani
Contributor

Actually, if you want to solve an optimization problem of the form $x^{*} = \arg\min_{x} \lVert y - A x \rVert^{2}$ (or similar) using first-order methods, where $A$ is a NUFFT (of type 1, 2, or 3), then a single NUFFT plan suffices to evaluate the $A x$ and $A^{H} y$ needed by these methods. CG falls squarely into this category.
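
To spell out the pairing (notation mine: $\xi_j$ are the nonuniform points and $s = \pm 1$ is the sign), the forward map and its adjoint are

$$(A v)_j = \sum_{k} v_k \, e^{i s\, k \cdot \xi_j} \quad \text{(type 2)}, \qquad (A^{H} w)_k = \sum_{j} w_j \, e^{-i s\, k \cdot \xi_j} \quad \text{(type 1 with sign } -s\text{)},$$

so both directions share the same nonuniform points, the same fine grid, and the same spreading structures; only the order of the spread/FFT/deconvolve steps is reversed.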

I don't have this written up yet, but you can monitor this repo over the next two weeks; it will show how to achieve this.

@ahbarnett
Collaborator Author

ahbarnett commented Jul 16, 2024 via email

@SepandKashani
Contributor

My apologies for the delay: putting everything together took longer than expected.

> It is interesting that you can reuse one plan for t1 and t2... this definitely counts as a hack that we wouldn't expect others to know about :) (how do you change the plan type after planning, given the plan is a blind pointer?)

How to concretely re-use a T1 plan to perform a T2 (or a T3 to perform its adjoint T3$^{*}$) depends on how the NUFFT is implemented.
FINUFFT doesn't currently allow you to do this natively via the API, but adding it would be quite simple. [Not sure about the blind pointer issue you mentioned above.]
Moreover, the trick is useful when performing NUFFTs as part of a large optimization problem, since storing a single plan effectively halves the memory use.
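
As a sanity check, here is a minimal sketch verifying that the type-1/type-2 pair with opposite signs are adjoints, using finufft's current Python API (two plans, since single-plan reuse isn't exposed; sizes are arbitrary):

```python
# Numerical check of the T1/T2 adjoint pairing with finufft's current API.
import numpy as np
import finufft

n, m = (64, 64), 1000
xs = np.random.uniform(-np.pi, np.pi, (2, m))

fwd = finufft.Plan(2, n, isign=-1)   # A   : grid -> points
adj = finufft.Plan(1, n, isign=+1)   # A^H : points -> grid
fwd.setpts(*xs)
adj.setpts(*xs)

v = np.random.randn(*n) + 1j * np.random.randn(*n)
w = np.random.randn(m) + 1j * np.random.randn(m)

lhs = np.vdot(w, fwd.execute(v))     # <w, A v>
rhs = np.vdot(adj.execute(w), v)     # <A^H w, v>
print(abs(lhs - rhs) / abs(lhs))     # ~ the requested tolerance
```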

The implementations in the repo above use DUCC, since the number of transforms may vary at runtime; the same can be done with FFTW via up-front planning if desired. A plan-free sketch in the DUCC style follows below.
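
For illustration, the plan-free style with ducc0's nufft module looks roughly like this (argument names follow `ducc0.nufft.u2nu`/`nu2u`; treat the details as approximate and check the docs for your ducc0 version):

```python
# Plan-free DUCC-style sketch: no persistent plan, so the fine grid exists
# only while each call runs.
import numpy as np
import ducc0

n, m = (64, 64), 1000
coord = np.random.uniform(-np.pi, np.pi, (m, 2))
v = np.random.randn(*n) + 1j * np.random.randn(*n)

# A v (grid -> points), analogous to a FINUFFT type 2
c = ducc0.nufft.u2nu(grid=v, coord=coord, forward=True, epsilon=1e-6)

# A^H c (points -> grid), analogous to a FINUFFT type 1 with opposite sign
g = ducc0.nufft.nu2u(points=c, coord=coord, forward=False, epsilon=1e-6,
                     out=np.empty(n, dtype=np.complex128))
```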

Note that this version does not use FINUFFT, since the hierarchical-plan trick requires setting ($\gamma, S_{i}, X_{i}$) (in FINUFFT's notation) a bit differently; nevertheless, I have a good idea of how to add this support to FINUFFT too if there is interest.

In any case, I'm happy to contribute to FINUFFT once the tide settles a bit on my side.

@ahbarnett
Collaborator Author

ahbarnett commented Aug 13, 2024 via email
