diff --git a/CHANGELOG.md b/CHANGELOG.md index 02e2e35..6a7c837 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -5,6 +5,63 @@ All notable changes to PyChebyshev will be documented in this file. The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). +## [0.21.1] - 2026-04-27 + +### Added + +- `ChebyshevTT.sobol_indices()` — first-order and total-order Sobol + sensitivity indices computed natively by contracting through the TT + coefficient cores. O(d · n · r²) per dim, no dense materialization. + Mirrors v0.20 `ChebyshevApproximation.sobol_indices` API; keys are + user-frame dim indices regardless of internal `_dim_order`. + +### Fixed + +- `ChebyshevTT.roots()`/`minimize()`/`maximize()` previously validated + `fixed` values against the storage-frame domain instead of the + user-frame physical domain. Under `with_auto_order()` or `reorder()` + with non-uniform per-dim domains, this could raise misleading errors + or accept invalid inputs. Now validates against user-frame domain. +- `ChebyshevTT.inner_product()` previously returned a meaningless + Frobenius product when `self._dim_order != other._dim_order`, with + no error. Now raises `ValueError` with a `reorder()` alignment hint, + matching v0.20.1 binary algebra behavior. +- `ChebyshevTT.get_evaluation_points()` previously returned columns in + storage order, breaking `eval(get_evaluation_points()[i])` for any + TT with non-identity `_dim_order`. Now returns columns in user-frame + order. +- `ChebyshevTT.eval_multi()` previously mutated `self._dim_order` via + try/finally, racing under concurrent calls (issue #19). Now uses a + private `_eval_storage_frame` helper with no mutation. +- `ChebyshevTT.integrate()` error messages on out-of-domain bounds + previously referenced the storage-frame dim index instead of the + user-frame index passed by the caller (issue #20). 
Now references + the user-frame index. +- `_algebra._check_compatible` previously raised "Domain mismatch" + when comparing two interpolants with mixed `tuple` vs `list` domain + syntax even when bounds were numerically identical (issue #22). Now + uses `np.allclose` for comparison. + +### Performance + +- `vectorized_eval_batch` now hoists the differentiation matrix matmul + outside the per-point loop. Significant speedup for derivative + batch evaluations on large point sets. +- `_calculus._optimize_1d` (used by all four classes' `minimize/maximize`) + now uses a single vectorized barycentric evaluation over critical + points + endpoints instead of a Python list comprehension. + +### Notes + +- Closes the v0.20+v0.20.1 `_dim_order` cluster on `ChebyshevTT`. + All TT methods that read `self.domain[d]` or `self.n_nodes[d]` now + consistently translate user-frame indices to storage-frame + internally. +- No breaking API changes: all "Fixed" items change wrong behavior to + correct behavior. The strict `inner_product` check now raises on + mismatched `_dim_order` (previously it silently returned wrong + numbers); this is a behavior change in the failure path only. + ## [0.21.0] - 2026-04-27 ### Added diff --git a/CLAUDE.md b/CLAUDE.md index 439624c..2625000 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -12,7 +12,7 @@ PyChebyshev is a pip-installable Python library for multi-dimensional Chebyshev # Setup uv sync -# Run tests (~1112 tests, ~115s due to 5D Black-Scholes builds) +# Run tests (~1133 tests, ~115s due to 5D Black-Scholes builds) uv run pytest tests/ -v # Run a single test @@ -84,6 +84,12 @@ The installable package. Public classes: `ChebyshevApproximation`, `ChebyshevSpl - v0.21 adds `ChebyshevSlider.roots()/minimize()/maximize()` and `ChebyshevTT.roots()/minimize()/maximize()`. After v0.21, all four classes support the full calculus surface (integrate + roots + min/max). 
+- v0.21.1 closes the v0.20+v0.20.1 `_dim_order` cluster on `ChebyshevTT`: + `roots/minimize/maximize` validate against user-frame domain; + `inner_product` raises on mismatched `_dim_order`; `get_evaluation_points` + returns user-frame columns; `eval_multi` no longer mutates `_dim_order`. + Perf: vectorized `_optimize_1d` and `vectorized_eval_batch` derivative + hoist. Adds `ChebyshevTT.sobol_indices()` parity (native TT contraction). ### Benchmark Scripts (project root) @@ -105,6 +111,7 @@ Not part of the library. Compare Chebyshev barycentric against alternative metho - `compare_calculus_completion.py` — PyChebyshev v0.17 Slider/TT integrate vs MoCaX 4.3.1 (no equivalent — beyond-MoCaX feature) - `compare_v018_tt_parity.py` — PyChebyshev v0.18 TT surface (extrude/slice/algebra/from_values/to_dense) vs MoCaX 4.3.1 - `compare_v019_build_diagnostics.py` — PyChebyshev v0.19 build optimization (parallel eval, progress bars, visualization) — no MoCaX equivalent +- `compare_v0211_dim_cluster.py` — PyChebyshev v0.21.1 TT `_dim_order` cluster fixes demo (no MoCaX equivalent — internal-correctness fixes) ### Tests (`tests/`) @@ -129,12 +136,13 @@ Not part of the library. Compare Chebyshev barycentric against alternative metho `get_evaluation_points`, `get_num_evaluation_points`), `peek_format_version`, `is_dimensionality_allowed`, `defer_build` + `set_original_function_values`, `Domain`/`Ns`/`SpecialPoints` typed helpers. -- `test_calculus_completion.py` — ~101 tests: `ChebyshevSlider.integrate/roots/minimize/maximize`, +- `test_calculus_completion.py` — ~106 tests: `ChebyshevSlider.integrate/roots/minimize/maximize`, `ChebyshevTT.integrate/roots/minimize/maximize` (full and partial, user-frame dim/fixed transparent under `_dim_order`), cross-class consistency checks, bounds validation. v0.21 additions: 57 tests across 9 new test classes covering Slider/TT roots/min/max parity - with Approximation/Spline. + with Approximation/Spline. 
v0.21.1 additions: 5 tests covering + user-frame domain validation fix under non-identity `_dim_order`. - `test_v018_tt_parity.py` — ~52 tests: `ChebyshevTT.nodes()`, `from_values()`, `extrude()`, `slice()`, algebra (`+`, `-`, `*` scalar, in-place, `__neg__`), `to_dense()`; cross-feature and round-trip checks. diff --git a/compare_v0211_dim_cluster.py b/compare_v0211_dim_cluster.py new file mode 100644 index 0000000..dbc7bf4 --- /dev/null +++ b/compare_v0211_dim_cluster.py @@ -0,0 +1,114 @@ +"""v0.21.1 demo: TT _dim_order cluster fixes. + +Demonstrates each correctness fix with a non-uniform-domain TT under +explicit reorder([2, 0, 1]) — the case that v0.20.1 / v0.21.0 tests +masked with uniform domains. +""" + +from __future__ import annotations + +import time + +import numpy as np + +from pychebyshev import ChebyshevApproximation, ChebyshevTT + + +def _check(label: str, ok: bool, detail: str = "") -> None: + status = "OK " if ok else "FAIL" + print(f" [{status}] {label}{(' — ' + detail) if detail else ''}") + + +def demo_inner_product_strict() -> None: + print("\n=== inner_product strict _dim_order check (Item B) ===") + def f(x, _): return x[0] ** 2 + x[1] + tt = ChebyshevTT(f, num_dimensions=2, domain=[(-1, 1), (-1, 1)], n_nodes=[5, 5]) + tt.build(verbose=False) + tt_p = tt.reorder([1, 0]) + try: + tt.inner_product(tt_p) + _check("ValueError raised on dim_order mismatch", False) + except ValueError as e: + _check("ValueError raised on dim_order mismatch", True, str(e)[:80]) + + +def demo_get_evaluation_points_round_trip() -> None: + print("\n=== get_evaluation_points user-frame round-trip (Item C) ===") + def f(x, _): return 0.3 * x[0] + 0.7 * x[1] - 0.2 * x[2] + tt = ChebyshevTT( + f, num_dimensions=3, + domain=[(-1, 1), (-2, 2), (-3, 3)], n_nodes=[5, 5, 5], + ) + tt.build(verbose=False) + tt_p = tt.reorder([2, 0, 1]) + points = tt_p.get_evaluation_points() + max_err = 0.0 + for i in range(0, len(points), 25): + pt = points[i] + expected = f(pt, None) + got = 
float(tt_p.eval(pt.tolist())) + max_err = max(max_err, abs(got - expected)) + _check("round-trip eval == f for non-identity _dim_order", + max_err < 1e-9, f"max_err={max_err:.2e}") + + +def demo_roots_user_frame_validation() -> None: + print("\n=== roots/min/max validate against user-frame domain (Item A) ===") + # User-frame dim 1 has domain [-2, 2]; storage-frame after reorder has different range + def f(x, _): return (x[0] - 0.4) * (1.0 + 0.0 * x[1] + 0.0 * x[2]) + tt = ChebyshevTT( + f, num_dimensions=3, + domain=[(-1, 1), (-2, 2), (-3, 3)], n_nodes=[8, 8, 8], + ) + tt.build(verbose=False) + tt_p = tt.reorder([2, 0, 1]) + # fixed=1.5 is valid in user-frame dim 1, NOT in storage-frame after reorder + try: + roots = tt_p.roots(dim=0, fixed={1: 1.5, 2: 0.0}) + _check("roots accepts user-frame-valid fixed value", + abs(float(roots[0]) - 0.4) < 1e-7, + f"root={float(roots[0]):.4f}") + except Exception as e: + _check("roots accepts user-frame-valid fixed value", False, + f"raised {type(e).__name__}: {e}") + + +def demo_integrate_error_user_frame() -> None: + print("\n=== integrate error message uses user-frame dim (Item E / #20) ===") + def f(x, _): return x[0] + x[1] + x[2] + tt = ChebyshevTT( + f, num_dimensions=3, + domain=[(-1, 1), (-2, 2), (-3, 3)], n_nodes=[5, 5, 5], + ) + tt.build(verbose=False) + tt_p = tt.reorder([2, 0, 1]) + try: + tt_p.integrate(dims=[1], bounds=[(5.0, 6.0)]) + _check("ValueError raised on out-of-domain bounds", False) + except ValueError as e: + msg = str(e) + _check("error references user-frame dim 1", + "dim 1" in msg, msg[:100]) + + +def demo_eval_multi_no_mutation() -> None: + print("\n=== eval_multi no longer mutates _dim_order (Item D / #19) ===") + import inspect + source = inspect.getsource(ChebyshevTT.eval_multi) + _check("eval_multi source contains no 'self._dim_order = ' assignment", + "self._dim_order = " not in source and "self._dim_order=" not in source) + + +def main() -> None: + print("PyChebyshev v0.21.1 cluster fix 
demo") + t0 = time.time() + demo_inner_product_strict() + demo_get_evaluation_points_round_trip() + demo_roots_user_frame_validation() + demo_integrate_error_user_frame() + demo_eval_multi_no_mutation() + print(f"\nTotal: {time.time() - t0:.2f}s") + + +if __name__ == "__main__": + main() diff --git a/docs/user-guide/calculus.md b/docs/user-guide/calculus.md index a7a41c2..aa7b378 100644 --- a/docs/user-guide/calculus.md +++ b/docs/user-guide/calculus.md @@ -570,6 +570,8 @@ roots_b = tt_optimized.roots(dim=0, fixed={1: 0.0, 2: 0.0}) np.testing.assert_array_almost_equal(roots_a, roots_b) ``` +> **Non-uniform domains:** v0.21.1 closed a latent bug where TT calculus methods validated `fixed` values against the storage-frame domain. With non-uniform per-dim domains and a non-identity `_dim_order` (after `with_auto_order` / `reorder`), this could either reject valid user-frame inputs or silently accept invalid ones. Since v0.21.1, validation always uses the user-frame physical domain. + ### Slider example ```python diff --git a/pyproject.toml b/pyproject.toml index c51f9bf..2ac2e98 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -4,7 +4,7 @@ build-backend = "hatchling.build" [project] name = "pychebyshev" -version = "0.21.0" +version = "0.21.1" description = "Fast multi-dimensional Chebyshev tensor interpolation with analytical derivatives" readme = "README.md" license = {text = "MIT"} diff --git a/src/pychebyshev/_algebra.py b/src/pychebyshev/_algebra.py index 4dd5141..7cd9c15 100644 --- a/src/pychebyshev/_algebra.py +++ b/src/pychebyshev/_algebra.py @@ -38,12 +38,15 @@ def _check_compatible(a, b) -> None: f"Dimension mismatch: {a.num_dimensions} vs {b.num_dimensions}" ) - if a.n_nodes != b.n_nodes: + if not np.array_equal(np.asarray(a.n_nodes, dtype=int), np.asarray(b.n_nodes, dtype=int)): raise ValueError( f"Node count mismatch: {a.n_nodes} vs {b.n_nodes}" ) - if a.domain != b.domain: + if not np.allclose( + np.asarray(a.domain, dtype=float), + 
np.asarray(b.domain, dtype=float), + ): raise ValueError( f"Domain mismatch: {a.domain} vs {b.domain}" ) diff --git a/src/pychebyshev/_calculus.py b/src/pychebyshev/_calculus.py index 1ae95b8..400cd20 100644 --- a/src/pychebyshev/_calculus.py +++ b/src/pychebyshev/_calculus.py @@ -133,7 +133,7 @@ def _compute_sub_interval_weights(n: int, t_lo: float, return weights_desc[::-1].copy() -def _normalize_bounds(dims, bounds, domain): +def _normalize_bounds(dims, bounds, domain, dim_labels=None): """Normalize and validate the *bounds* parameter for ``integrate()``. Parameters @@ -144,6 +144,12 @@ def _normalize_bounds(dims, bounds, domain): User-provided bounds specification. domain : list Per-dimension ``[lo, hi]`` bounds. + dim_labels : list of int or None + Optional override for dim labels used in error messages. When + ``None`` (default), error messages use the corresponding entry + from ``dims``. When provided, ``dim_labels[i]`` is used instead + of ``dims[i]`` so callers can present user-frame indices when + ``dims`` is in storage frame. 
Returns ------- @@ -173,14 +179,15 @@ def _normalize_bounds(dims, bounds, domain): result.append(None) continue lo, hi = bd + label = dim_labels[i] if dim_labels is not None else dims[i] if lo > hi: - raise ValueError(f"bounds lo={lo} > hi={hi} for dim {dims[i]}") + raise ValueError(f"bounds lo={lo} > hi={hi} for dim {label}") d = dims[i] dom_lo, dom_hi = domain[d] if lo < dom_lo - 1e-14 or hi > dom_hi + 1e-14: raise ValueError( f"bounds ({lo}, {hi}) outside domain [{dom_lo}, {dom_hi}] " - f"for dim {d}" + f"for dim {label}" ) lo = max(lo, dom_lo) hi = min(hi, dom_hi) @@ -259,8 +266,6 @@ def _optimize_1d(values: np.ndarray, nodes: np.ndarray, ------- (value, location) : (float, float) """ - from pychebyshev.barycentric import barycentric_interpolate - # Derivative values at nodes deriv_values = diff_matrix @ values @@ -271,11 +276,22 @@ def _optimize_1d(values: np.ndarray, nodes: np.ndarray, a, b = domain candidates = np.concatenate([[a], critical, [b]]) - # Evaluate original function at all candidates - vals = np.array([ - barycentric_interpolate(float(x), nodes, values, bary_weights) - for x in candidates - ]) + # Vectorized barycentric evaluation at all candidates simultaneously. + candidates_arr = np.asarray(candidates, dtype=float).reshape(-1) + diff = candidates_arr[:, None] - nodes[None, :] # shape (M, n) + abs_diff = np.abs(diff) + exact_mask = abs_diff < 1e-14 # (M, n) + has_exact = exact_mask.any(axis=1) # (M,) + # Replace zero diffs with 1.0 to avoid division by zero; overwritten below. + safe_diff = np.where(abs_diff < 1e-14, 1.0, diff) + w_over_diff = bary_weights[None, :] / safe_diff # (M, n) + numer = (w_over_diff * values[None, :]).sum(axis=1) + denom = w_over_diff.sum(axis=1) + vals = numer / denom # (M,) + # For candidates that hit a node exactly, take the node value directly. 
+ if has_exact.any(): + exact_node_idx = exact_mask.argmax(axis=1) + vals = np.where(has_exact, values[exact_node_idx], vals) idx = np.argmin(vals) if mode == "min" else np.argmax(vals) return float(vals[idx]), float(candidates[idx]) diff --git a/src/pychebyshev/_sensitivity.py b/src/pychebyshev/_sensitivity.py index e94b1d0..9f73a5c 100644 --- a/src/pychebyshev/_sensitivity.py +++ b/src/pychebyshev/_sensitivity.py @@ -138,3 +138,133 @@ def _compute_sobol_from_coeffs( }, "variance": variance, } + + +def _compute_sobol_from_tt_cores(cores: list) -> dict: + """Compute first-order + total-order Sobol indices from TT coefficient cores. + + Mathematically equivalent to ``_compute_sobol_from_coeffs`` applied to the + dense coefficient tensor, but contracts through TT cores in coefficient + space with cost O(d * n * r^2) instead of O(n^d). + + Parameters + ---------- + cores : list of np.ndarray + TT cores in Chebyshev coefficient space. Each core has shape + (r_{k-1}, n_k, r_k) with cores[0] starting from r_0=1 and + cores[-1] ending at r_d=1. + + Returns + ------- + dict + Same shape as ``_compute_sobol_from_coeffs``: + ``{"first_order": {d: index}, "total_order": {d: index}, "variance": float}`` + where keys are storage-frame dim indices (0..len(cores)-1). 
+ """ + d = len(cores) + pi = float(np.pi) + n_per_dim = [c.shape[1] for c in cores] + + # Per-dim Chebyshev inner-product weights: [pi, pi/2, pi/2, ..., pi/2] + w_full = [] + for n_k in n_per_dim: + w = np.full(n_k, pi / 2.0) + w[0] = pi + w_full.append(w) + + # ---- total_weighted_squared = sum over all alpha of + # coeffs[alpha]^2 * prod_k w_full[k][alpha_k] + M = np.array([[1.0]]) + for k in range(d): + A = cores[k] + Aw = A * w_full[k][None, :, None] + M = np.einsum("ij,ipa,jpb->ab", M, Aw, A) + total_weighted_squared = float(M[0, 0]) + + # ---- constant term c_0 (alpha = 0) -- contract along all-zero slices + v = np.array([1.0]) + for k in range(d): + v = v @ cores[k][:, 0, :] + c_0 = float(v[0]) + constant_weighted_squared = (c_0 ** 2) * (pi ** d) + + variance = total_weighted_squared - constant_weighted_squared + + if variance <= 0: + return { + "first_order": {j: 0.0 for j in range(d)}, + "total_order": {j: 0.0 for j in range(d)}, + "variance": float(max(variance, 0.0)), + } + + # ---- Precompute left and right partial self-inner-product matrices + # to reduce total-order computation from O(d^2 * n * r^2) to + # O(d * n * r^2). + # + # L[k] has shape (r_k, r_k): partial contraction of dims 0..k-1. + # R[k] has shape (r_k, r_k): partial contraction of dims k..d-1. + # + # L[0] = identity(1); L[k+1] = einsum("ij,ipa,jpb->ab", L[k], Aw_k, A_k) + # R[d] = identity(1); R[k] = einsum("ab,ipa,jpb->ij", R[k+1], Aw_k, A_k) + # + # For total-order[j]: the "alpha_j = 0" weighted sum decomposes as + # L[j] x cores[j][:,0,:] x R[j+1] (contracted with itself and scaled by pi). + # sum_alpha_j_zero = pi * einsum("ij,ia,jb,ab->", L[j], c_j0, c_j0, R[j+1]) + # where c_j0 = cores[j][:, 0, :]. 
+ + # Build L[0..d] + L = [None] * (d + 1) + L[0] = np.array([[1.0]]) + for k in range(d): + A_k = cores[k] + Aw_k = A_k * w_full[k][None, :, None] + L[k + 1] = np.einsum("ij,ipa,jpb->ab", L[k], Aw_k, A_k) + + # Build R[0..d] + R = [None] * (d + 1) + R[d] = np.array([[1.0]]) + for k in range(d - 1, -1, -1): + A_k = cores[k] + Aw_k = A_k * w_full[k][None, :, None] + R[k] = np.einsum("ab,ipa,jpb->ij", R[k + 1], Aw_k, A_k) + + first_order_energy = {} + total_order_energy = {} + + for j in range(d): + # ---- first-order energy[j]: alpha_j >= 1 AND alpha_k = 0 for k != j + # left boundary: row vector formed by chaining cores[k][:, 0, :] for k < j + left = np.array([1.0]) + for k in range(j): + left = left @ cores[k][:, 0, :] + # left has shape (r_j,) + + # right boundary: column vector formed by chaining cores[k][:, 0, :] for k > j + right = np.array([1.0]) + for k in range(d - 1, j, -1): + right = cores[k][:, 0, :] @ right + # right has shape (r_{j+1},) + + G_j = cores[j] + sum_squared = 0.0 + for m in range(1, n_per_dim[j]): + coef_m = float(left @ G_j[:, m, :] @ right) + sum_squared += coef_m * coef_m + weight_first = (pi / 2.0) * (pi ** (d - 1)) + first_order_energy[j] = sum_squared * weight_first + + # ---- total-order energy[j]: alpha_j >= 1 (other dims unrestricted) + # = total_weighted_squared - sum_{alpha_j = 0} weighted + # Using cached L[j] and R[j+1]: + # sum_alpha_j_zero = pi * einsum("ij,ia,jb,ab->", L[j], c_j0, c_j0, R[j+1]) + c_j0 = cores[j][:, 0, :] # shape (r_j, r_{j+1}) + sum_alpha_j_zero_weighted = pi * float( + np.einsum("ij,ia,jb,ab->", L[j], c_j0, c_j0, R[j + 1]) + ) + total_order_energy[j] = total_weighted_squared - sum_alpha_j_zero_weighted + + return { + "first_order": {j: first_order_energy[j] / variance for j in range(d)}, + "total_order": {j: total_order_energy[j] / variance for j in range(d)}, + "variance": float(variance), + } diff --git a/src/pychebyshev/_version.py b/src/pychebyshev/_version.py index 6a726d8..76f2458 100644 --- 
a/src/pychebyshev/_version.py +++ b/src/pychebyshev/_version.py @@ -1 +1 @@ -__version__ = "0.21.0" +__version__ = "0.21.1" diff --git a/src/pychebyshev/barycentric.py b/src/pychebyshev/barycentric.py index 98b5e50..d92ebe4 100644 --- a/src/pychebyshev/barycentric.py +++ b/src/pychebyshev/barycentric.py @@ -948,6 +948,47 @@ def vectorized_eval( return float(current) + def _apply_derivative_passes( + self, tensor: np.ndarray, derivative_order: List[int] | None + ) -> np.ndarray: + """Apply differentiation-matrix passes to the full coefficient tensor. + + For each dimension ``d`` with ``derivative_order[d] > 0``, applies the + transposed differentiation matrix ``D_T`` along axis ``d`` the required + number of times. Operates on the **full** tensor (no axes reduced), + using ``np.moveaxis`` to expose the target axis as the last axis for + efficient matmul. + + This helper is designed for use in :meth:`vectorized_eval_batch`, where + the derivative passes are hoisted outside the per-point loop. + :meth:`vectorized_eval` inlines the same logic per-dimension as it + reduces axes incrementally and does not call this helper. + + Parameters + ---------- + tensor : np.ndarray + Full coefficient tensor of shape matching ``self.tensor_values``. + derivative_order : list of int or None + Derivative order per dimension. ``None`` is treated as all-zeros. + + Returns + ------- + np.ndarray + Tensor with all derivative passes applied, same shape as input. 
+ """ + if derivative_order is None: + return tensor + result = tensor + for d in range(self.num_dimensions - 1, -1, -1): + deriv = derivative_order[d] + if deriv > 0: + D_T = self.diff_matrices[d].T + for _ in range(deriv): + arr = np.moveaxis(result, d, -1) + arr = arr @ D_T + result = np.moveaxis(arr, -1, d) + return result + def vectorized_eval_batch( self, points: np.ndarray, @@ -976,10 +1017,33 @@ def vectorized_eval_batch( derivative_order = self._resolve_derivative_args( derivative_order, derivative_id ) + if self.tensor_values is None: + raise RuntimeError("Call build() first") + + # Hoist: apply all derivative-matrix matmuls once — they are + # point-independent (depend only on the grid, not on query coords). + # Delegates to _apply_derivative_passes which operates on the full + # tensor using moveaxis to target each axis in turn. + tensor_with_derivs = self._apply_derivative_passes( + self.tensor_values, derivative_order + ) + + # Per-point: only the barycentric reduction (no derivative passes). 
+ _matmul = self._matmul_last_axis N = points.shape[0] results = np.empty(N) for i in range(N): - results[i] = self.vectorized_eval(points[i], derivative_order=derivative_order) + current = tensor_with_derivs + for d in range(self.num_dimensions - 1, -1, -1): + x = points[i, d] + diff = x - self.nodes[d] + exact = np.where(np.abs(diff) < 1e-14)[0] + if len(exact) > 0: + current = current[..., exact[0]] + else: + w_over_diff = self.weights[d] / diff + current = _matmul(current, w_over_diff) / np.sum(w_over_diff) + results[i] = float(current) return results def vectorized_eval_multi( diff --git a/src/pychebyshev/tensor_train.py b/src/pychebyshev/tensor_train.py index c6f0a59..2bd21ae 100644 --- a/src/pychebyshev/tensor_train.py +++ b/src/pychebyshev/tensor_train.py @@ -1472,7 +1472,10 @@ def inner_product(self, other: "ChebyshevTT") -> float: f"other must be a ChebyshevTT, got {type(other).__name__}" ) other._check_built() - if self.domain != other.domain: + if not np.allclose( + np.asarray(self.domain, dtype=float), + np.asarray(other.domain, dtype=float), + ): raise ValueError( "inner_product requires matching domains; " f"got {self.domain} vs {other.domain}" @@ -1482,6 +1485,14 @@ def inner_product(self, other: "ChebyshevTT") -> float: "inner_product requires matching n_nodes; " f"got {self.n_nodes} vs {other.n_nodes}" ) + if list(self._dim_order) != list(other._dim_order): + raise ValueError( + f"inner_product requires matching _dim_order: " + f"{self._dim_order} vs {other._dim_order}. " + f"Call other = other.reorder(self._dim_order) " + f"(or self = self.reorder(other._dim_order)) to align " + f"before computing inner_product." + ) M = np.array([[1.0]]) # (r_self_0, r_other_0) = (1, 1) for k in range(self.num_dimensions): @@ -1562,9 +1573,12 @@ def integrate( # Normalize bounds: validates against domain in *storage* frame using # storage positions (since `self.domain[storage_pos]` gives the # physical domain of the corresponding original dim after reorder). 
+ # Pass dims_sorted as dim_labels so error messages reference the + # user-frame dim the caller passed, not the storage-frame index. bounds_storage_dims = [storage_for[d] for d in dims_sorted] normalized_bounds = _normalize_bounds( - bounds_storage_dims, bounds, self.domain + bounds_storage_dims, bounds, self.domain, + dim_labels=dims_sorted, ) # Compute scaled quadrature weights per *storage* position. @@ -1720,6 +1734,18 @@ def _to_1d_chebyshev(self, sliced_1d): n_nodes=[int(sliced_1d.n_nodes[0])], ) + def _user_frame_domain(self) -> list: + """Return ``self.domain`` reordered into user-frame indexing. + + For canonical ``_dim_order``, this returns ``self.domain`` unchanged + (in semantic terms — a fresh list either way). + For non-identity ``_dim_order``, ``self.domain[d]`` is the + storage-frame domain at storage position ``d``; user-frame dim ``u`` + lives at storage position ``self._dim_order.index(u)``. + """ + return [self.domain[self._dim_order.index(u)] + for u in range(self.num_dimensions)] + def roots(self, dim=None, fixed=None): """Find all roots of the TT-approximated function along *dim*. 
@@ -1756,7 +1782,7 @@ def roots(self, dim=None, fixed=None): from pychebyshev._calculus import _validate_calculus_args dim, slice_params = _validate_calculus_args( - self.num_dimensions, dim, fixed, self.domain + self.num_dimensions, dim, fixed, self._user_frame_domain() ) sliced = self.slice(slice_params) if slice_params else self @@ -1797,7 +1823,7 @@ def minimize(self, dim=None, fixed=None): from pychebyshev._calculus import _validate_calculus_args dim, slice_params = _validate_calculus_args( - self.num_dimensions, dim, fixed, self.domain + self.num_dimensions, dim, fixed, self._user_frame_domain() ) sliced = self.slice(slice_params) if slice_params else self @@ -1838,7 +1864,7 @@ def maximize(self, dim=None, fixed=None): from pychebyshev._calculus import _validate_calculus_args dim, slice_params = _validate_calculus_args( - self.num_dimensions, dim, fixed, self.domain + self.num_dimensions, dim, fixed, self._user_frame_domain() ) sliced = self.slice(slice_params) if slice_params else self @@ -2134,23 +2160,59 @@ def eval(self, point: List[float]) -> float: # dim_order was set by with_auto_order(). Identity order is a no-op. 
canonical = list(range(self.num_dimensions)) if self._dim_order != canonical: - point = [point[self._dim_order[k]] for k in range(self.num_dimensions)] + point_storage = [ + point[self._dim_order[k]] for k in range(self.num_dimensions) + ] + else: + point_storage = list(point) - result = np.ones((1, 1)) - for d in range(self.num_dimensions): - # Map point[d] from [a, b] to [-1, 1] - a, b = self.domain[d] - scaled = 2.0 * (point[d] - a) / (b - a) - 1.0 - # Evaluate all Chebyshev polynomials T_0..T_{n-1} at scaled point - q = np.polynomial.chebyshev.chebval( - scaled, np.eye(self.n_nodes[d]) - ) # shape (n_d,) - # Contract polynomial values with coefficient core: - # v[i,k] = sum_j q[j] * core[i,j,k] → shape (r_{d-1}, r_d) - v = np.einsum("j,ijk->ik", q, self._coeff_cores[d]) - # Chain multiply: result = result @ v - result = result @ v - return float(result[0, 0]) + zero_deriv = [0] * self.num_dimensions + return self._eval_storage_frame(point_storage, zero_deriv) + + def _eval_storage_frame( + self, + point_storage: List[float], + derivative_order_storage: List[int], + ) -> float: + """Evaluate at a single point assuming storage-frame inputs. + + This is the structural workhorse used by both :meth:`eval` (after + a single user→storage permutation) and :meth:`eval_multi` (which + permutes once then calls this helper N times). Eliminates the + need for any temporary mutation of ``self._dim_order``. + + Parameters + ---------- + point_storage : list of float + Query coordinates in **storage frame** — i.e., already + permuted by ``self._dim_order`` if non-canonical. + derivative_order_storage : list of int + Derivative orders in **storage frame**. ``[0, 0, ..., 0]`` + triggers the core-contraction value path; otherwise the + finite-difference machinery is used. + + Returns + ------- + float + Interpolated value (or FD derivative) at the query point. 
+ """ + if all(d == 0 for d in derivative_order_storage): + result = np.ones((1, 1)) + for d in range(self.num_dimensions): + # Map point_storage[d] from [a, b] to [-1, 1] + a, b = self.domain[d] + scaled = 2.0 * (point_storage[d] - a) / (b - a) - 1.0 + # Evaluate all Chebyshev polynomials T_0..T_{n-1} at scaled point + q = np.polynomial.chebyshev.chebval( + scaled, np.eye(self.n_nodes[d]) + ) # shape (n_d,) + # Contract polynomial values with coefficient core: + # v[i,k] = sum_j q[j] * core[i,j,k] → shape (r_{d-1}, r_d) + v = np.einsum("j,ijk->ik", q, self._coeff_cores[d]) + # Chain multiply: result = result @ v + result = result @ v + return float(result[0, 0]) + return self._fd_derivative(point_storage, derivative_order_storage) def eval_batch(self, points: np.ndarray) -> np.ndarray: """Evaluate at multiple points simultaneously. @@ -2233,41 +2295,29 @@ def eval_multi( """ self._check_built() - # If a non-identity ``_dim_order`` is set (e.g., from - # :meth:`with_auto_order` or :meth:`reorder`), permute both the - # input ``point`` and each ``deriv_order`` from user-frame into - # storage frame so the FD machinery (which references - # ``self.domain`` / ``self.n_nodes`` in storage frame) operates - # consistently. Then temporarily neutralize ``_dim_order`` so - # the inner ``self.eval`` calls do not re-permute the - # already-storage-frame points. + # Structural fix (issue #19): permute user-frame coords ONCE into + # storage frame, then call :meth:`_eval_storage_frame` for each + # derivative. The previous implementation temporarily mutated + # ``self._dim_order`` via try/finally to neutralize re-permutation + # inside the inner ``self.eval`` calls, which raced under + # concurrent invocations. No mutation occurs here. 
canonical = list(range(self.num_dimensions)) if self._dim_order != canonical: - point = [point[self._dim_order[k]] for k in range(self.num_dimensions)] - derivative_orders = [ + point_storage = [ + point[self._dim_order[k]] for k in range(self.num_dimensions) + ] + derivs_storage = [ [d[self._dim_order[k]] for k in range(self.num_dimensions)] for d in derivative_orders ] - saved_dim_order = self._dim_order - self._dim_order = canonical - try: - results = [] - for deriv_order in derivative_orders: - if all(d == 0 for d in deriv_order): - results.append(self.eval(point)) - else: - results.append(self._fd_derivative(point, deriv_order)) - finally: - self._dim_order = saved_dim_order - return results - - results = [] - for deriv_order in derivative_orders: - if all(d == 0 for d in deriv_order): - results.append(self.eval(point)) - else: - results.append(self._fd_derivative(point, deriv_order)) - return results + else: + point_storage = list(point) + derivs_storage = [list(d) for d in derivative_orders] + + return [ + self._eval_storage_frame(point_storage, d_storage) + for d_storage in derivs_storage + ] def _fd_derivative(self, point: List[float], deriv_order: List[int]) -> float: """Compute a single derivative via central finite differences. @@ -2320,34 +2370,48 @@ def _nudge_point(self, point: List[float], d: int, h: float) -> List[float]: return pt def _fd_single_dim(self, point: List[float], d: int, order: int) -> float: - """Central FD for a single dimension.""" + """Central FD for a single dimension. + + Operates entirely in storage frame: ``point`` is assumed already + permuted, ``d`` is a storage-frame dim index, and the inner + evaluations go through :meth:`_eval_storage_frame` to skip the + user→storage permutation. 
+ """ h = self._fd_step(d) pt = self._nudge_point(point, d, h) + zero = [0] * self.num_dimensions if order == 1: pt_plus = list(pt) pt_minus = list(pt) pt_plus[d] += h pt_minus[d] -= h - return (self.eval(pt_plus) - self.eval(pt_minus)) / (2.0 * h) + return ( + self._eval_storage_frame(pt_plus, zero) + - self._eval_storage_frame(pt_minus, zero) + ) / (2.0 * h) elif order == 2: pt_plus = list(pt) pt_minus = list(pt) pt_plus[d] += h pt_minus[d] -= h - f_plus = self.eval(pt_plus) - f_center = self.eval(pt) - f_minus = self.eval(pt_minus) + f_plus = self._eval_storage_frame(pt_plus, zero) + f_center = self._eval_storage_frame(pt, zero) + f_minus = self._eval_storage_frame(pt_minus, zero) return (f_plus - 2.0 * f_center + f_minus) / (h * h) else: raise ValueError(f"Derivative order {order} not supported (use 1 or 2)") def _fd_cross_deriv(self, point: List[float], d1: int, d2: int) -> float: - """Central FD for mixed partial d^2f / dx_{d1} dx_{d2}.""" + """Central FD for mixed partial d^2f / dx_{d1} dx_{d2}. + + Operates entirely in storage frame. 
+ """ h1 = self._fd_step(d1) h2 = self._fd_step(d2) pt = self._nudge_point(point, d1, h1) pt = self._nudge_point(pt, d2, h2) + zero = [0] * self.num_dimensions def make_pt(delta1: float, delta2: float) -> List[float]: p = list(pt) @@ -2355,19 +2419,23 @@ def make_pt(delta1: float, delta2: float) -> List[float]: p[d2] += delta2 return p - f_pp = self.eval(make_pt(+h1, +h2)) - f_pm = self.eval(make_pt(+h1, -h2)) - f_mp = self.eval(make_pt(-h1, +h2)) - f_mm = self.eval(make_pt(-h1, -h2)) + f_pp = self._eval_storage_frame(make_pt(+h1, +h2), zero) + f_pm = self._eval_storage_frame(make_pt(+h1, -h2), zero) + f_mp = self._eval_storage_frame(make_pt(-h1, +h2), zero) + f_mm = self._eval_storage_frame(make_pt(-h1, -h2), zero) return (f_pp - f_pm - f_mp + f_mm) / (4.0 * h1 * h2) def _fd_nested( self, point: List[float], active_dims: List[Tuple[int, int]] ) -> float: - """Nested FD for higher-order or multi-dimensional derivatives.""" + """Nested FD for higher-order or multi-dimensional derivatives. + + Operates entirely in storage frame. + """ # Apply FD one dimension at a time if len(active_dims) == 0: - return self.eval(point) + zero = [0] * self.num_dimensions + return self._eval_storage_frame(point, zero) d, order = active_dims[0] remaining = active_dims[1:] @@ -2714,6 +2782,8 @@ def get_evaluation_points(self) -> np.ndarray: ------- np.ndarray Shape ``(N, num_dimensions)`` where ``N = prod(n_nodes)``. + Columns are in user-frame order: column ``d`` corresponds to + user-frame dim ``d`` regardless of internal ``_dim_order``. """ per_dim = [] for d in range(self.num_dimensions): @@ -2721,7 +2791,13 @@ def get_evaluation_points(self) -> np.ndarray: a, b = self.domain[d] per_dim.append(np.sort(0.5 * (a + b) + 0.5 * (b - a) * nodes_std)) grids = np.meshgrid(*per_dim, indexing="ij") - return np.stack([g.ravel() for g in grids], axis=-1).astype(np.float64) + # grids columns are in storage frame. 
Permute back to user frame: + # user-frame dim u corresponds to storage position + # self._dim_order.index(u), so grids[self._dim_order.index(u)] is + # the array of values for user-frame dim u. + user_frame_grids = [grids[self._dim_order.index(u)] + for u in range(self.num_dimensions)] + return np.stack([g.ravel() for g in user_frame_grids], axis=-1).astype(np.float64) def clone(self) -> "ChebyshevTT": """Return an independent deep copy of this interpolant. @@ -2744,6 +2820,53 @@ def clone(self) -> "ChebyshevTT": import copy return copy.deepcopy(self) + def sobol_indices(self) -> dict: + """Compute first-order + total-order Sobol sensitivity indices. + + Returns + ------- + dict + ``{"first_order": {dim: index}, "total_order": {dim: index}, + "variance": float}`` -- keyed by user-frame dim indices. + + Raises + ------ + RuntimeError + If ``build()`` has not been called. + + Notes + ----- + Computed natively by contracting through the TT coefficient cores; + no dense materialization. Cost O(d * n * r^2) per dim. Mathematically + equivalent to ``ChebyshevApproximation.sobol_indices()`` on the + dense version of the same function. + + Under non-identity ``_dim_order`` (after ``with_auto_order`` / + ``reorder``), result keys are user-frame dim indices. + """ + if not self._built: + raise RuntimeError("Call build() first") + + from pychebyshev._sensitivity import _compute_sobol_from_tt_cores + + # Helper returns dict keyed by storage-frame indices (0..d-1) + storage = _compute_sobol_from_tt_cores(self._coeff_cores) + + # Translate keys to user-frame: storage position s holds + # original-dim self._dim_order[s]. 
+ user_first = {} + user_total = {} + for s in range(self.num_dimensions): + user_d = self._dim_order[s] + user_first[user_d] = storage["first_order"][s] + user_total[user_d] = storage["total_order"][s] + + return { + "first_order": user_first, + "total_order": user_total, + "variance": storage["variance"], + } + @classmethod def from_values( cls, @@ -3179,7 +3302,10 @@ def _check_compatible_tt(self, other) -> None: raise ValueError( f"n_nodes mismatch: {self.n_nodes} vs {other.n_nodes}" ) - if self.domain != other.domain: + if not np.allclose( + np.asarray(self.domain, dtype=float), + np.asarray(other.domain, dtype=float), + ): raise ValueError( f"domain mismatch: {self.domain} vs {other.domain}" ) diff --git a/tests/test_algebra.py b/tests/test_algebra.py index ca23a35..49fbdd9 100644 --- a/tests/test_algebra.py +++ b/tests/test_algebra.py @@ -697,3 +697,50 @@ def test_portfolio_greeks(self, instruments): delta_strad = straddle.vectorized_eval(p, [1, 0]) weighted = 0.4 * delta_call + 0.3 * delta_put + 0.3 * delta_strad assert abs(delta_port - weighted) < 1e-10, f"Delta mismatch at {p}" + + +class TestMixedDomainSyntax: + """Issue #22: _check_compatible should treat tuple-of-tuples and + list-of-lists domain syntax as numerically equal.""" + + def test_chebyshev_algebra_tuple_vs_list_domain(self): + """ChebyshevApproximation: tuple domain + list domain combine OK.""" + def f(x, _): return x[0] + def g(x, _): return -x[0] + cheb_tuple = ChebyshevApproximation(f, 1, [(-1, 1)], [5]) + cheb_list = ChebyshevApproximation(g, 1, [[-1, 1]], [5]) + cheb_tuple.build(verbose=False) + cheb_list.build(verbose=False) + h = cheb_tuple + cheb_list + # f + g = x - x = 0 + assert abs(h.eval([0.5], [0])) < 1e-12 + + def test_slider_algebra_tuple_vs_list_domain(self): + """ChebyshevSlider: same parity through scalar mul + add.""" + def f(x, _): return x[0] + def g(x, _): return -x[0] + s_f = ChebyshevSlider( + f, num_dimensions=1, domain=[(-1, 1)], n_nodes=[5], + partition=[[0]], 
pivot_point=[0.0], + ) + s_g = ChebyshevSlider( + g, num_dimensions=1, domain=[(-1, 1)], n_nodes=[5], + partition=[[0]], pivot_point=[0.0], + ) + s_f.build(verbose=False) + s_g.build(verbose=False) + # 2.0 * s_g internally normalizes domain to list-of-lists; + # adding to s_f (still tuple-of-tuples) must succeed. + h = s_f + 2.0 * s_g + # f + 2*g = x - 2x = -x; at x=0.3 should be -0.3 + assert abs(h.eval([0.3], [0]) - (-0.3)) < 1e-12 + + def test_n_nodes_int_vs_list_difference_still_fails(self): + """Sanity: legitimate n_nodes mismatch still raises.""" + def f(x, _): return x[0] + cheb_5 = ChebyshevApproximation(f, 1, [[-1, 1]], [5]) + cheb_7 = ChebyshevApproximation(f, 1, [[-1, 1]], [7]) + cheb_5.build(verbose=False) + cheb_7.build(verbose=False) + with pytest.raises(ValueError, match="[Nn]ode"): + _ = cheb_5 + cheb_7 diff --git a/tests/test_barycentric.py b/tests/test_barycentric.py index 138e1fc..9a843c7 100644 --- a/tests/test_barycentric.py +++ b/tests/test_barycentric.py @@ -501,3 +501,48 @@ def test_per_dim_requires_build(self): ) with pytest.raises(RuntimeError, match="build"): cheb._error_estimate_per_dim() + + +class TestVectorizedEvalBatchDerivativeHoist: + """v0.21.1: vectorized_eval_batch should produce identical results + before and after the derivative-matrix hoist. 
Non-regression tests."""
+
+    def test_batch_eval_value_only(self):
+        """Batch eval with derivative_order=[0] returns same as per-point eval."""
+        def f(x, _): return x[0] ** 2 + 0.5 * x[0]
+        cheb = ChebyshevApproximation(f, 1, [(-1, 1)], [10])
+        cheb.build(verbose=False)
+        points = np.array([[-0.7], [-0.3], [0.0], [0.4], [0.8]])
+        batch = cheb.vectorized_eval_batch(points, derivative_order=[0])
+        per_point = np.array([
+            cheb.vectorized_eval([float(p[0])], derivative_order=[0])
+            for p in points
+        ])
+        np.testing.assert_array_almost_equal(batch, per_point, decimal=12)
+
+    def test_batch_eval_with_derivative(self):
+        """Batch eval with derivative_order=[1] returns same as per-point."""
+        def f(x, _): return x[0] ** 2 + 0.5 * x[0]
+        cheb = ChebyshevApproximation(f, 1, [(-1, 1)], [10])
+        cheb.build(verbose=False)
+        points = np.array([[-0.7], [-0.3], [0.0], [0.4], [0.8]])
+        batch = cheb.vectorized_eval_batch(points, derivative_order=[1])
+        per_point = np.array([
+            cheb.vectorized_eval([float(p[0])], derivative_order=[1])
+            for p in points
+        ])
+        np.testing.assert_array_almost_equal(batch, per_point, decimal=12)
+
+    def test_batch_eval_2d_with_derivative(self):
+        """2D function, mixed derivative — batch matches per-point."""
+        def f(x, _): return x[0] ** 2 + x[1] ** 2
+        cheb = ChebyshevApproximation(f, 2, [(-1, 1), (-1, 1)], [8, 8])
+        cheb.build(verbose=False)
+        points = np.array([[-0.5, 0.3], [0.2, -0.4], [0.7, 0.1]])
+        batch = cheb.vectorized_eval_batch(points, derivative_order=[1, 0])
+        per_point = np.array([
+            cheb.vectorized_eval([float(p[0]), float(p[1])],
+                                 derivative_order=[1, 0])
+            for p in points
+        ])
+        np.testing.assert_array_almost_equal(batch, per_point, decimal=12)
diff --git a/tests/test_calculus.py b/tests/test_calculus.py
index 054ae9e..0850ae6 100644
--- a/tests/test_calculus.py
+++ b/tests/test_calculus.py
@@ -804,3 +804,39 @@ def f(x, _): return abs(x[0])
         result = sp.integrate(bounds=(0.0, 0.5))
         expected = 0.125  # int_0^0.5 x dx = 0.5^2/2
         assert abs(result - expected) < 1e-10, f"Expected {expected}, got {result}"
+
+
+# ======================================================================
+# TestOptimize1DVectorization
+# ======================================================================
+
+class TestOptimize1DVectorization:
+    """v0.21.1: _optimize_1d should produce bit-identical results before
+    and after vectorization. Non-regression tests."""
+
+    def test_optimize_1d_quadratic_min_unchanged(self):
+        """min of (x-0.3)^2 on [-1,1] is 0 at x=0.3."""
+        def f(x, _): return (x[0] - 0.3) ** 2
+        cheb = ChebyshevApproximation(f, 1, [(-1, 1)], [10])
+        cheb.build(verbose=False)
+        val, loc = cheb.minimize()
+        assert abs(val - 0.0) < 1e-10
+        assert abs(loc - 0.3) < 1e-10
+
+    def test_optimize_1d_quadratic_max_unchanged(self):
+        """max of -(x-0.3)^2 + 5 on [-1,1] is 5 at x=0.3."""
+        def f(x, _): return -(x[0] - 0.3) ** 2 + 5.0
+        cheb = ChebyshevApproximation(f, 1, [(-1, 1)], [10])
+        cheb.build(verbose=False)
+        val, loc = cheb.maximize()
+        assert abs(val - 5.0) < 1e-10
+        assert abs(loc - 0.3) < 1e-10
+
+    def test_optimize_1d_endpoint_min(self):
+        """Linear x on [0,1] — min at x=0 (endpoint, no critical points)."""
+        def f(x, _): return x[0]
+        cheb = ChebyshevApproximation(f, 1, [(0, 1)], [5])
+        cheb.build(verbose=False)
+        val, loc = cheb.minimize()
+        assert abs(val - 0.0) < 1e-10
+        assert abs(loc - 0.0) < 1e-10
diff --git a/tests/test_calculus_completion.py b/tests/test_calculus_completion.py
index c2af0e1..858894f 100644
--- a/tests/test_calculus_completion.py
+++ b/tests/test_calculus_completion.py
@@ -311,6 +311,23 @@ def f(x, _):
         loaded = ChebyshevTT.load(str(path))
         assert loaded.eval([0.5]) == pytest.approx(result.eval([0.5]), abs=1e-12)
+
+    def test_out_of_domain_error_uses_user_frame_dim(self):
+        """Issue #20: error message must reference the user-frame dim
+        the user passed, not the storage-frame index."""
+        def f(x, _): return x[0] + x[1] + x[2]
+        tt = ChebyshevTT(
+            f, num_dimensions=3,
+            domain=[(-1, 1),
(-2, 2), (-3, 3)], n_nodes=[5, 5, 5], + ) + tt.build(verbose=False) + tt_reordered = tt.reorder([2, 0, 1]) + # User passes dims=[1] (user-frame dim 1, range [-2, 2]). + # Pre-fix: error would reference dim 2 (storage position). + with pytest.raises(ValueError) as exc_info: + tt_reordered.integrate(dims=[1], bounds=[(5.0, 6.0)]) + msg = str(exc_info.value) + assert "dim 1" in msg, f"Error message references wrong dim: {msg}" + # ============================================================================ # T6: Slider full integration (returns scalar) @@ -1282,6 +1299,67 @@ def f(x, _): return x[0] ** 2 - 0.25 roots_after = tt_loaded.roots() np.testing.assert_array_almost_equal(roots_before, roots_after, decimal=10) + def test_roots_non_uniform_domain_after_reorder(self): + """Non-uniform domain + reorder: fixed= must validate against + user-frame domain, and roots must be in user-frame coordinates.""" + # f(x, y, z) = (x - 0.4) (1 + 0*y + 0*z) + # User-frame dim 0: domain [-1, 1], root at x=0.4 + # User-frame dim 1: domain [-2, 2] + # User-frame dim 2: domain [-3, 3] + def f(x, _): return (x[0] - 0.4) * (1.0 + 0.0 * x[1] + 0.0 * x[2]) + tt = ChebyshevTT( + f, num_dimensions=3, + domain=[(-1, 1), (-2, 2), (-3, 3)], n_nodes=[8, 8, 8], + ) + tt.build(verbose=False) + tt_reordered = tt.reorder([2, 0, 1]) + # fixed[1] = 1.5 is valid in user-frame dim 1 ([-2, 2]). + # In storage-frame after reorder([2,0,1]), storage dim 1 is original dim 0 + # with domain [-1, 1]; 1.5 would be out of that storage-frame range. + # Pre-fix: validation against storage-frame raises misleading error. 
+ roots = tt_reordered.roots(dim=0, fixed={1: 1.5, 2: 0.0}) + assert len(roots) == 1 + assert abs(roots[0] - 0.4) < 1e-7 + + def test_roots_non_uniform_domain_user_frame_out_of_range_raises(self): + """A fixed value outside the user-frame domain must raise ValueError.""" + def f(x, _): return x[0] + x[1] + x[2] + tt = ChebyshevTT( + f, num_dimensions=3, + domain=[(-1, 1), (-2, 2), (-3, 3)], n_nodes=[5, 5, 5], + ) + tt.build(verbose=False) + tt_reordered = tt.reorder([2, 0, 1]) + # User-frame dim 1 has range [-2, 2]; passing 5.0 must fail + with pytest.raises(ValueError, match="domain"): + tt_reordered.roots(dim=0, fixed={1: 5.0, 2: 0.0}) + + def test_minimize_non_uniform_domain_after_reorder(self): + """Same non-uniform-domain pattern for minimize.""" + def f(x, _): return (x[0] - 0.3) ** 2 + 0.0 * x[1] + 0.0 * x[2] + tt = ChebyshevTT( + f, num_dimensions=3, + domain=[(-1, 1), (-2, 2), (-3, 3)], n_nodes=[8, 8, 8], + ) + tt.build(verbose=False) + tt_reordered = tt.reorder([2, 0, 1]) + val, loc = tt_reordered.minimize(dim=0, fixed={1: 1.5, 2: 0.0}) + assert abs(val - 0.0) < 1e-7 + assert abs(loc - 0.3) < 1e-7 + + def test_maximize_non_uniform_domain_after_reorder(self): + """Same non-uniform-domain pattern for maximize.""" + def f(x, _): return -(x[0] - 0.3) ** 2 + 0.0 * x[1] + 0.0 * x[2] + tt = ChebyshevTT( + f, num_dimensions=3, + domain=[(-1, 1), (-2, 2), (-3, 3)], n_nodes=[8, 8, 8], + ) + tt.build(verbose=False) + tt_reordered = tt.reorder([2, 0, 1]) + val, loc = tt_reordered.maximize(dim=0, fixed={1: 1.5, 2: 0.0}) + assert abs(val - 0.0) < 1e-7 + assert abs(loc - 0.3) < 1e-7 + # ============================================================================ # TestCrossClassCalculusConsistency diff --git a/tests/test_tensor_train.py b/tests/test_tensor_train.py index 9dd6ce4..cbe9a0a 100644 --- a/tests/test_tensor_train.py +++ b/tests/test_tensor_train.py @@ -587,6 +587,41 @@ def test_inner_product_raises_on_unbuilt_other(self): with pytest.raises(RuntimeError, 
match="Call build"): tt_a.inner_product(tt_b) + def test_inner_product_dim_order_mismatch_raises(self): + """Two TTs with different _dim_order must raise ValueError.""" + def f(x, _): return x[0] ** 2 + x[1] + x[2] ** 2 + tt_canonical = ChebyshevTT( + f, num_dimensions=3, domain=[(-1, 1)] * 3, n_nodes=[5, 5, 5] + ) + tt_canonical.build(verbose=False) + tt_reordered = tt_canonical.reorder([2, 0, 1]) + with pytest.raises(ValueError, match="dim_order"): + tt_canonical.inner_product(tt_reordered) + # Also the reverse direction + with pytest.raises(ValueError, match="dim_order"): + tt_reordered.inner_product(tt_canonical) + + def test_inner_product_after_explicit_reorder_alignment(self): + """After explicit reorder() to align dim_orders, inner_product + succeeds and matches the canonical-vs-canonical result.""" + def f(x, _): return x[0] ** 2 + x[1] + x[2] ** 2 + tt_a = ChebyshevTT( + f, num_dimensions=3, domain=[(-1, 1)] * 3, n_nodes=[5, 5, 5] + ) + tt_b = ChebyshevTT( + f, num_dimensions=3, domain=[(-1, 1)] * 3, n_nodes=[5, 5, 5] + ) + tt_a.build(verbose=False) + tt_b.build(verbose=False) + baseline = tt_a.inner_product(tt_b) + # Permute b, then realign before calling inner_product + tt_b_permuted = tt_b.reorder([2, 0, 1]) + tt_b_realigned = tt_b_permuted.reorder([0, 1, 2]) + aligned = tt_a.inner_product(tt_b_realigned) + assert abs(baseline - aligned) < 1e-9, ( + f"baseline {baseline} != aligned {aligned}" + ) + class TestALSInternals: """White-box tests for the internal ALS sweep primitive.""" @@ -951,3 +986,70 @@ def f(x, data=None): assert err_after <= err_before * 1.1 + 1e-14, ( f"completion increased error: before={err_before:.3e}, after={err_after:.3e}" ) + + +# ====================================================================== +# v0.21.1: Sobol indices via native TT contraction +# ====================================================================== + + +class TestSobolFromTTCores: + """v0.21.1: _compute_sobol_from_tt_cores must produce identical Sobol + 
indices to _compute_sobol_from_coeffs on the dense coefficient tensor."""
+
+    def test_separable_2d(self):
+        """Separable f(x, y) = x + y -- equal main-effect indices ~50/50."""
+        from pychebyshev import ChebyshevApproximation
+        from pychebyshev._sensitivity import _compute_sobol_from_tt_cores
+
+        def f(x, _):
+            return x[0] + x[1]
+
+        cheb = ChebyshevApproximation(f, 2, [(-1, 1), (-1, 1)], [7, 7])
+        cheb.build(verbose=False)
+        ref = cheb.sobol_indices()
+        tt = ChebyshevTT(f, num_dimensions=2, domain=[(-1, 1), (-1, 1)], n_nodes=[7, 7])
+        tt.build(verbose=False)
+        tt_indices = _compute_sobol_from_tt_cores(tt._coeff_cores)
+        # Check first_order and total_order dicts agree
+        for key in ["first_order", "total_order"]:
+            assert set(tt_indices[key].keys()) == set(ref[key].keys())
+            for d in tt_indices[key]:
+                assert abs(tt_indices[key][d] - ref[key][d]) < 1e-9, (
+                    f"{key}[{d}]: TT={tt_indices[key][d]}, Approx={ref[key][d]}"
+                )
+        # Variance: should match too (same coefficients)
+        assert abs(tt_indices["variance"] - ref["variance"]) < 1e-9 * abs(ref["variance"])
+
+    def test_dominant_dim_3d(self):
+        """3D function dominated by dim 0 -- first_order[0] near 1.0."""
+        from pychebyshev._sensitivity import _compute_sobol_from_tt_cores
+
+        def f(x, _):
+            return x[0] + 0.01 * x[1] + 0.01 * x[2]
+
+        tt = ChebyshevTT(f, num_dimensions=3, domain=[(-1, 1)] * 3, n_nodes=[8, 8, 8])
+        tt.build(verbose=False)
+        idx = _compute_sobol_from_tt_cores(tt._coeff_cores)
+        # Dim 0 should dominate
+        assert idx["first_order"][0] > 0.99
+        assert idx["first_order"][1] < 0.01
+        assert idx["first_order"][2] < 0.01
+
+    def test_quadratic_3d_parity(self):
+        """3D quadratic -- match Approximation indices to ~1e-8."""
+        from pychebyshev import ChebyshevApproximation
+        from pychebyshev._sensitivity import _compute_sobol_from_tt_cores
+
+        def f(x, _):
+            return x[0] ** 2 + 0.5 * x[1] ** 2 + 0.25 * x[2] ** 2
+
+        cheb = ChebyshevApproximation(f, 3, [(-1, 1)] * 3, [9, 9, 9])
+        cheb.build(verbose=False)
+        ref = cheb.sobol_indices()
+        tt = ChebyshevTT(f, num_dimensions=3, domain=[(-1, 1)] * 3, n_nodes=[9, 9, 9])
+        tt.build(verbose=False)
+        idx = _compute_sobol_from_tt_cores(tt._coeff_cores)
+        for key in ["first_order", "total_order"]:
+            for d in idx[key]:
+                assert abs(idx[key][d] - ref[key][d]) < 1e-8
diff --git a/tests/test_v0201_dim_threading.py b/tests/test_v0201_dim_threading.py
index 48c3619..c9cf26e 100644
--- a/tests/test_v0201_dim_threading.py
+++ b/tests/test_v0201_dim_threading.py
@@ -572,3 +572,114 @@ def test_save_load_pcb_not_supported_for_tt(self):
         restored = pickle.loads(blob)
         assert restored.dim_order == [2, 0, 1]
         assert abs(restored.eval([0.1, -0.2, 0.3]) - tt.eval([0.1, -0.2, 0.3])) < 1e-9
+
+
+class TestGetEvaluationPointsFrame:
+    """v0.21.1: get_evaluation_points must return columns in user-frame
+    order so that eval(get_evaluation_points()[i]) round-trips."""
+
+    def test_round_trip_canonical(self):
+        """Canonical (identity _dim_order) — round-trip works."""
+        def f(x, _): return 0.3 * x[0] + 0.7 * x[1] - 0.2 * x[2]
+        tt = ChebyshevTT(
+            f, num_dimensions=3,
+            domain=[(-1, 1), (-2, 2), (-3, 3)], n_nodes=[5, 5, 5],
+        )
+        tt.build(verbose=False)
+        points = tt.get_evaluation_points()
+        # Sanity: shape and column count
+        assert points.shape == (5 * 5 * 5, 3)
+        # Round-trip a sample of points
+        for i in [0, 7, 31, 50, 124]:
+            pt = points[i]
+            expected = f(pt, None)
+            got = float(tt.eval(pt.tolist()))
+            assert abs(got - expected) < 1e-9, (
+                f"point {i}={pt}: got {got}, expected {expected}"
+            )
+
+    def test_round_trip_after_reorder(self):
+        """Non-identity _dim_order via reorder([2, 0, 1]) — round-trip
+        must still work in user-frame coordinates."""
+        def f(x, _): return 0.3 * x[0] + 0.7 * x[1] - 0.2 * x[2]
+        tt = ChebyshevTT(
+            f, num_dimensions=3,
+            domain=[(-1, 1), (-2, 2), (-3, 3)], n_nodes=[5, 5, 5],
+        )
+        tt.build(verbose=False)
+        tt_reordered = tt.reorder([2, 0, 1])
+        # _dim_order is now non-identity
+        assert tt_reordered._dim_order != [0, 1, 2]
points = tt_reordered.get_evaluation_points() + assert points.shape == (5 * 5 * 5, 3) + # Round-trip in user-frame + for i in [0, 7, 31, 50, 124]: + pt = points[i] + expected = f(pt, None) + got = float(tt_reordered.eval(pt.tolist())) + assert abs(got - expected) < 1e-9, ( + f"point {i}={pt}: got {got}, expected {expected}" + ) + + def test_columns_match_user_frame_domain_after_reorder(self): + """After reorder, column d's range must match the user-frame + domain[d], not the storage-frame domain.""" + def f(x, _): return x[0] + x[1] + x[2] + tt = ChebyshevTT( + f, num_dimensions=3, + domain=[(-1, 1), (-2, 2), (-3, 3)], n_nodes=[5, 5, 5], + ) + tt.build(verbose=False) + tt_reordered = tt.reorder([2, 0, 1]) + points = tt_reordered.get_evaluation_points() + # Column 0 must be in user-frame dim 0 range: [-1, 1] + assert points[:, 0].min() >= -1.0 - 1e-12 + assert points[:, 0].max() <= 1.0 + 1e-12 + # Column 1 must be in user-frame dim 1 range: [-2, 2] + assert points[:, 1].min() >= -2.0 - 1e-12 + assert points[:, 1].max() <= 2.0 + 1e-12 + # Column 2 must be in user-frame dim 2 range: [-3, 3] + assert points[:, 2].min() >= -3.0 - 1e-12 + assert points[:, 2].max() <= 3.0 + 1e-12 + + +class TestEvalMultiNoDimOrderMutation: + """Issue #19: eval_multi must not mutate self._dim_order via try/finally + (race-prone under concurrent calls). 
The fix is structural: introduce + _eval_storage_frame and call it from eval_multi after permuting once.""" + + def test_eval_multi_source_does_not_assign_dim_order(self): + """Source-level check: the body of eval_multi contains no + 'self._dim_order = ' assignment.""" + import inspect + from pychebyshev.tensor_train import ChebyshevTT + + source = inspect.getsource(ChebyshevTT.eval_multi) + forbidden = [ + "self._dim_order = ", + "self._dim_order=", + ] + for pat in forbidden: + assert pat not in source, ( + f"eval_multi must not contain assignment '{pat}' " + f"after issue #19 fix" + ) + + def test_eval_multi_correctness_under_reorder(self): + """Behavioral check: eval_multi still returns correct values + for a reordered TT.""" + def f(x, _): return x[0] ** 2 + x[1] - x[2] + tt = ChebyshevTT( + f, num_dimensions=3, domain=[(-1, 1)] * 3, n_nodes=[7, 7, 7], + ) + tt.build(verbose=False) + tt_reordered = tt.reorder([2, 0, 1]) + # eval_multi for value + first-derivative-wrt-x0 (in user frame) + results = tt_reordered.eval_multi( + point=[0.3, 0.4, 0.5], + derivative_orders=[[0, 0, 0], [1, 0, 0]], + ) + # f(0.3, 0.4, 0.5) = 0.09 + 0.4 - 0.5 = -0.01 + # df/dx0(0.3, 0.4, 0.5) = 2*0.3 = 0.6 + assert abs(results[0] - (-0.01)) < 1e-9 + assert abs(results[1] - 0.6) < 1e-9 diff --git a/tests/test_v020_adaptive_refinement.py b/tests/test_v020_adaptive_refinement.py index 0e98d61..d89f85f 100644 --- a/tests/test_v020_adaptive_refinement.py +++ b/tests/test_v020_adaptive_refinement.py @@ -515,3 +515,85 @@ def test_sobol_rejects_inf_coeffs(self): coeffs = np.array([1.0, float("inf"), 0.0]) with pytest.raises(ValueError, match="NaN or Inf"): _compute_sobol_from_coeffs(coeffs, num_dimensions=1) + + +# ============================================================================ +# v0.21.1: ChebyshevTT.sobol_indices parity (native TT contraction) +# ============================================================================ + +class TestTTSobolParity: + """v0.21.1: 
ChebyshevTT.sobol_indices must match + ChebyshevApproximation.sobol_indices on the same function.""" + + def test_separable_2d(self): + def f(x, _): + return x[0] + x[1] + + cheb = ChebyshevApproximation(f, 2, [(-1, 1), (-1, 1)], [7, 7]) + tt = ChebyshevTT(f, num_dimensions=2, domain=[(-1, 1), (-1, 1)], n_nodes=[7, 7]) + cheb.build(verbose=False) + tt.build(verbose=False) + cheb_idx = cheb.sobol_indices() + tt_idx = tt.sobol_indices() + for key in ["first_order", "total_order"]: + for d in cheb_idx[key]: + assert abs(cheb_idx[key][d] - tt_idx[key][d]) < 1e-9 + + def test_quadratic_3d(self): + def f(x, _): + return x[0] ** 2 + 0.5 * x[1] ** 2 + 0.25 * x[2] ** 2 + + cheb = ChebyshevApproximation(f, 3, [(-1, 1)] * 3, [10, 10, 10]) + tt = ChebyshevTT(f, num_dimensions=3, domain=[(-1, 1)] * 3, n_nodes=[10, 10, 10]) + cheb.build(verbose=False) + tt.build(verbose=False) + cheb_idx = cheb.sobol_indices() + tt_idx = tt.sobol_indices() + for key in ["first_order", "total_order"]: + for d in cheb_idx[key]: + assert abs(cheb_idx[key][d] - tt_idx[key][d]) < 1e-7 + + def test_user_frame_after_reorder(self): + """sobol_indices keys must be user-frame, even after reorder.""" + def f(x, _): + return x[0] + 0.01 * x[1] + 0.01 * x[2] + + tt = ChebyshevTT(f, num_dimensions=3, domain=[(-1, 1)] * 3, n_nodes=[8, 8, 8]) + tt.build(verbose=False) + tt_reordered = tt.reorder([2, 0, 1]) + idx = tt_reordered.sobol_indices() + # Dim 0 (user-frame) should still dominate + assert idx["first_order"][0] > 0.99 + assert idx["first_order"][1] < 0.01 + assert idx["first_order"][2] < 0.01 + + def test_before_build_raises(self): + def f(x, _): + return x[0] + + tt = ChebyshevTT(f, num_dimensions=1, domain=[(-1, 1)], n_nodes=[5]) + with pytest.raises(RuntimeError, match="build"): + tt.sobol_indices() + + def test_non_uniform_domain_after_reorder(self): + """Cross-validate TT sobol against Approximation under non-uniform + per-dim domains AND non-identity _dim_order — the configuration that + v0.21.0 
cluster bugs hid behind uniform-domain test coverage.""" + def f(x, _): return (x[0] - 0.2) ** 2 + 0.5 * (x[1] / 2.0) ** 2 + 0.25 * (x[2] / 3.0) ** 2 + cheb = ChebyshevApproximation(f, 3, [(-1, 1), (-2, 2), (-3, 3)], [10, 10, 10]) + tt = ChebyshevTT( + f, num_dimensions=3, + domain=[(-1, 1), (-2, 2), (-3, 3)], n_nodes=[10, 10, 10], + ) + cheb.build(verbose=False) + tt.build(verbose=False) + tt_reordered = tt.reorder([2, 0, 1]) + # Reference (canonical Approximation) + ref = cheb.sobol_indices() + # Reordered TT — keys must still be in user frame + idx = tt_reordered.sobol_indices() + for key in ["first_order", "total_order"]: + for d in ref[key]: + assert abs(ref[key][d] - idx[key][d]) < 1e-7, ( + f"{key}[{d}]: cheb={ref[key][d]}, tt_reordered={idx[key][d]}" + )
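
Every `_dim_order` fix in this diff rests on one convention, stated in the `sobol_indices` comment above: storage position `s` holds user-frame dim `self._dim_order[s]`. A minimal standalone sketch of the two permutation directions, with hypothetical helper names that are not part of the PyChebyshev API:

```python
def to_storage(values_user, dim_order):
    """Permute user-frame values into storage frame.

    Convention (as documented in the diff): storage position s holds
    user-frame dim dim_order[s], so storage slot s takes the value
    values_user[dim_order[s]].
    """
    return [values_user[dim_order[s]] for s in range(len(dim_order))]


def to_user(values_storage, dim_order):
    """Inverse permutation: user-frame dim u lives at storage
    position dim_order.index(u)."""
    return [values_storage[dim_order.index(u)] for u in range(len(dim_order))]


# Round-trip under the reorder([2, 0, 1]) pattern used by the tests:
point_user = [0.1, 0.2, 0.3]
order = [2, 0, 1]  # storage slot 0 holds user dim 2, slot 1 holds dim 0, ...
point_storage = to_storage(point_user, order)
assert point_storage == [0.3, 0.1, 0.2]
assert to_user(point_storage, order) == point_user
```

The `eval_multi` and `sobol_indices` changes apply the `dim_order[s]` direction, while the `get_evaluation_points` fix applies the inverse `dim_order.index(u)` direction.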