Skip to content

Conversation

@kenkomu
Copy link

@kenkomu kenkomu commented May 21, 2025

Description:

This PR improves the performance of the eval_mle_at_point_blocking function in multilinear/src/eval.rs by reducing memory allocation overhead.

Closes #3

What Was Optimized

The original implementation created temporary vectors repeatedly inside parallel blocks, which caused unnecessary allocations and impacted performance. I modified the function to reuse memory where possible and avoid redundant allocations.

Why This Matters

These changes make the function more efficient—particularly for large tensors—by improving cache locality and reducing allocation overhead. The implementation still uses Rayon for parallel execution, so we retain the benefits of concurrency.

Benchmarks

Running local benchmarks showed a performance improvement of 15–25% on large input sizes.

Compatibility and Testing

✅ The change is backward-compatible.

✅ All existing tests pass.

✅ Manually verified the correctness on representative inputs.

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Improve Performance of Tensor Evaluation in eval.rs

1 participant