
feat: nufftax-backed TransformerNUFFT as default; rename pynufft variant to TransformerNUFFTPyNUFFT#303

Merged
Jammy2211 merged 1 commit into main from feature/nufftax-default-transformer on May 10, 2026
Conversation

@Jammy2211
Collaborator

Summary

  • Replace the pynufft-backed TransformerNUFFT with a JAX-native nufftax-based implementation as the default. Preserve the original pynufft class as TransformerNUFFTPyNUFFT for backwards compatibility.
  • The new class matches TransformerDFT to ~1e-13 relative error across 13 stress-test configurations (odd/even/non-square sizes, varying pixel scales, sparse/dense uv, low/high frequencies). pynufft sits at ~6% relative gridding error on production 256×256 cases.
  • nufftax supports jax.jit, jax.grad, and jax.vmap, which unblocks end-to-end JAX-jitted interferometer likelihoods. The JIT-compiled forward NUFFT is ~20% faster than pynufft (6.1 ms vs 7.7 ms on a 256² image with 1000 visibilities).
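To make the comparison above concrete: both TransformerDFT and the new TransformerNUFFT compute the same forward operation, the non-uniform (type-2) discrete Fourier transform of the sky image at the observed uv-coordinates. Below is a minimal self-contained sketch of that operation with a dense matrix; the array names and sizes are illustrative and are not the PyAutoArray API (a NUFFT library such as nufftax approximates this same product much faster).

```python
import numpy as np

# Toy sketch (not the PyAutoArray API): the forward map both transformers
# compute is vis[k] = sum_j image[j] * exp(-2*pi*i * (u_k*x_j + v_k*y_j)).
rng = np.random.default_rng(0)
n_pix, n_vis = 16, 50

image = rng.normal(size=n_pix * n_pix)                    # flattened sky image
grid = rng.uniform(-0.5, 0.5, size=(n_pix * n_pix, 2))    # pixel coords (toy units)
uv = rng.uniform(-100.0, 100.0, size=(n_vis, 2))          # uv-coordinates

# Dense DFT matrix: each row maps the whole image to one visibility.
phase = -2j * np.pi * (uv @ grid.T)
dft_matrix = np.exp(phase)
visibilities = dft_matrix @ image                         # shape (n_vis,), complex
```

A NUFFT replaces the O(n_vis × n_pix) matrix product with gridding onto a regular FFT grid, which is where pynufft's ~6% kernel-interpolation error enters and where the new backend's ~1e-13 agreement with the dense DFT matters.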

API Changes

  • aa.TransformerNUFFT is now the JAX-native nufftax variant (default).
  • aa.TransformerNUFFTPyNUFFT exposes the legacy pynufft variant unchanged.
  • Interferometer.apply_sparse_operator() raises NotImplementedError when the dataset's transformer is the new TransformerNUFFT. The sparse-operator path depends on pynufft's kernel-deconvolved adjoint scale; users can opt back via transformer_class=aa.TransformerNUFFTPyNUFFT, or use TransformerDFT (which is what the JAX-likelihood scripts already do).
  • pyproject.toml: nufftax added to optional and dev extras alongside pynufft.
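The packaging bullet above might look like the following hypothetical pyproject.toml fragment; the exact extras layout, other entries, and any version pins are assumptions, not taken from the repo:

```toml
# Hypothetical sketch: nufftax added to the optional and dev extras
# alongside pynufft (other entries in each extra omitted).
[project.optional-dependencies]
optional = ["pynufft", "nufftax"]
dev = ["pynufft", "nufftax"]
```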

Behavior change

TransformerNUFFT.image_from now returns the strict mathematical adjoint of the forward transform (matching TransformerDFT.image_from exactly). The legacy pynufft class applied internal Kaiser-Bessel kernel deconvolution and IFFT normalization, so the absolute scale and sign of dirty images differ from before. Use TransformerNUFFTPyNUFFT if you need the legacy adjoint behavior.
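"Strict mathematical adjoint" has a checkable meaning: if the forward map is vis = A @ image, the adjoint dirty image is A.conj().T @ vis with no extra rescaling, and the two satisfy the inner-product identity ⟨A x, y⟩ = ⟨x, Aᴴ y⟩. The sketch below verifies this with a toy dense operator standing in for the transformer; all names here are illustrative, not PyAutoArray code.

```python
import numpy as np

# Toy stand-in for the transformer: a dense DFT-like operator A.
rng = np.random.default_rng(1)
A = np.exp(-2j * np.pi * rng.uniform(size=(40, 25)))

x = rng.normal(size=25)                             # real image
y = rng.normal(size=40) + 1j * rng.normal(size=40)  # visibilities

forward = A @ x              # forward transform (visibilities_from analogue)
adjoint = A.conj().T @ y     # strict adjoint (image_from analogue): no kernel
                             # deconvolution, no IFFT normalization

# Adjoint identity: <A x, y> == <x, A^H y>
# (np.vdot conjugates its first argument)
lhs = np.vdot(forward, y)
rhs = np.vdot(x, adjoint)
assert np.isclose(lhs, rhs)
```

The legacy pynufft path inserts kernel deconvolution and IFFT scaling between y and the returned image, so it does not satisfy this identity at the same absolute scale, which is exactly the behavior change described above.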

Companion PRs

  • PyAutoGalaxy: re-export TransformerNUFFTPyNUFFT (forthcoming PR on feature/nufftax-default-transformer).
  • PyAutoLens: re-export TransformerNUFFTPyNUFFT (forthcoming PR on feature/nufftax-default-transformer).

Both companion PRs should be merged after this one.

Scripts Changed

None in this repo. Workspace parity script autolens_workspace_test/scripts/interferometer/nufft.py already exists and now reports 0.0 residual between the new class and its inline nufftax helper.

Test plan

  • pytest test_autoarray/ — 750 / 750 passed
  • Workspace parity script — all four cases pass; new class is bit-for-bit identical to the script's nufftax helper
  • Hard-fail guard on apply_sparse_operator: new class raises, DFT and legacy pass through unchanged
  • Both classes are exported through the aa namespace; the Transformer Union type is extended to include both
  • black --check clean

🤖 Generated with Claude Code

…ant to TransformerNUFFTPyNUFFT

Replaces the pynufft-backed TransformerNUFFT with a JAX-native nufftax
implementation that matches TransformerDFT to ~1e-13 relative across
odd/even/non-square sizes (vs pynufft's ~6% gridding error on production
256x256 cases) and supports jit/grad/vmap. The original pynufft class is
preserved as TransformerNUFFTPyNUFFT for backwards compatibility.

apply_sparse_operator() raises NotImplementedError when called with the
new TransformerNUFFT; the sparse-operator path depends on pynufft's
kernel-deconvolved adjoint scale, so users must opt back to TransformerDFT
or TransformerNUFFTPyNUFFT for that path.

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
@Jammy2211 Jammy2211 merged commit f126249 into main May 10, 2026
4 checks passed
@Jammy2211 Jammy2211 deleted the feature/nufftax-default-transformer branch May 10, 2026 10:39