
Numba Optimizations#415

Open
DO-Ui wants to merge 3 commits into gr812b:develop from DO-Ui:numba-optimization

Conversation


@DO-Ui DO-Ui commented Feb 19, 2026

Description
Optimizations using numba kernels

  • Replaced Python-side calculations in primary_pulley_flyweight.py, secondary_pulley_torque_reactive.py, pulley_interface.py, and slip_model.py with calls to new Numba-accelerated kernels for force, torque, and slip computations.

  • Updated setup.py to add a numba optional dependency group, and documented the optional installation for runtime acceleration in README.md.

  • Refactored simulation_runner.py to cache model references and precompute lookup tables for CVT ratio, ratio derivatives, and engine torque.

  • Changed event function references in the simulation runner to use instance methods instead of global functions.
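The optional-dependency pattern behind these kernels can be sketched as follows — a minimal, illustrative `maybe_njit` fallback, not necessarily the PR's exact implementation:

```python
# Minimal sketch of an optional-numba fallback decorator; the names and
# structure are illustrative only.
try:
    from numba import njit

    def maybe_njit(*args, **kwargs):
        # numba is available: forward to the real decorator
        return njit(*args, **kwargs)

except ImportError:
    def maybe_njit(*args, **kwargs):
        # numba is absent: behave as a transparent no-op decorator
        if len(args) == 1 and callable(args[0]) and not kwargs:
            return args[0]  # bare @maybe_njit usage

        def wrap(func):
            return func  # parameterized @maybe_njit(cache=True, ...) usage
        return wrap


@maybe_njit(cache=True, fastmath=True)
def square(x: float) -> float:
    return x * x
```

With this shape, decorated functions behave identically whether or not numba is installed; only the execution speed differs.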

Copilot AI review requested due to automatic review settings February 19, 2026 18:42
@DO-Ui DO-Ui requested review from camdnnn and gr4cem as code owners February 19, 2026 18:42
Contributor

Copilot AI left a comment


Pull request overview

Introduces optional Numba-accelerated compute kernels and refactors the simulation runner to reduce per-step Python overhead during ODE integration.

Changes:

  • Added utils/numba_kernels.py with maybe_njit wrappers and kernels for slip, torque demand, and pulley force/torque calculations.
  • Refactored simulation_runner.py to cache model references and use precomputed lookup tables for CVT ratio/derivative and engine torque.
  • Updated pulley and slip model implementations to call the new kernels; added an optional numba extra and documented installation.
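The lookup-table idea in the runner refactor can be sketched roughly like this, assuming linear interpolation over a dense precomputed grid (the stand-in torque curve and RPM grid bounds are illustrative, not the project's actual engine model):

```python
import numpy as np

# Hypothetical stand-in for the engine torque model; placeholder curve only.
def engine_torque_model(rpm):
    return 50.0 + 0.01 * rpm  # placeholder linear torque curve

rpm_grid = np.linspace(1000.0, 8000.0, 2048)   # built once, up front
torque_lut = engine_torque_model(rpm_grid)     # vectorized precompute

def engine_torque(rpm: float) -> float:
    # Cheap interpolation inside the ODE hot loop instead of a full model
    # evaluation; np.interp clamps queries to the grid endpoints.
    return float(np.interp(rpm, rpm_grid, torque_lut))
```

The same pattern would apply to the CVT ratio and ratio-derivative tables: pay the model-evaluation cost once at setup, then do only table lookups per integration step.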

Reviewed changes

Copilot reviewed 8 out of 8 changed files in this pull request and generated 5 comments.

Show a summary per file
File Description
cvtModel/src/cvt_simulator/utils/numba_kernels.py New optional JIT-friendly kernels (with fallback when numba is absent).
cvtModel/src/cvt_simulator/simulation_runner.py Major runtime hot-path refactor: caching + LUTs + kernel calls + event functions moved to instance methods.
cvtModel/src/cvt_simulator/models/slip_model.py Replaces Python/Numpy-side slip + torque-demand computations with kernel calls.
cvtModel/src/cvt_simulator/models/pulley/secondary_pulley_torque_reactive.py Uses kernels for helix force + max torque computation.
cvtModel/src/cvt_simulator/models/pulley/pulley_interface.py Uses kernel for radial force computation.
cvtModel/src/cvt_simulator/models/pulley/primary_pulley_flyweight.py Uses kernels for flyweight force + max torque computation.
cvtModel/setup.py Adds extras_require["numba"].
cvtModel/README.md Documents optional .[dev,numba] install for acceleration.


Comment on lines +3 to +4
import numpy as np


Copilot AI Feb 19, 2026


numpy is imported but not used in this module, which will trigger flake8 F401 in CI. Remove the unused import numpy as np (or use it if intentionally needed).

Suggested change
import numpy as np

    def maybe_njit(*args, **kwargs):
        return njit(*args, **kwargs)

except ImportError:  # pragma: no cover - exercised when numba is installed

Copilot AI Feb 19, 2026


The # pragma: no cover note here is misleading: this except ImportError path runs when numba is not installed. Either remove the pragma or update the comment so coverage expectations/documentation match actual behavior.

Suggested change
except ImportError: # pragma: no cover - exercised when numba is installed
except ImportError: # pragma: no cover - exercised only when numba is not installed

self._pulley_calc_shift_max = min(MAX_SHIFT, secondary_max_shift)




Copilot AI Feb 19, 2026


This blank line contains trailing whitespace, which will trigger flake8 W293. Remove the extra spaces so the line is truly empty.

Suggested change

)

net_radial = primary_radial - secondary_radial
friction = self._cvt_shift_model._frictional_force(net_radial, shift_velocity)

Copilot AI Feb 19, 2026


simulation_runner is calling the private method CvtShiftModel._frictional_force(...) directly. This tightly couples the runner to an internal implementation detail; consider exposing a public method (or reusing get_breakdown) so changes inside CvtShiftModel don't silently break the runner.

Suggested change
friction = self._cvt_shift_model._frictional_force(net_radial, shift_velocity)
def _compute_shift_friction(model, normal_force, axial_velocity):
    """
    Compute the shift friction using the CVT shift model.

    Prefer a public `frictional_force` method if available, and fall back
    to the private `_frictional_force` for backward compatibility.
    """
    friction_method = getattr(model, "frictional_force", None)
    if friction_method is None:
        friction_method = getattr(model, "_frictional_force", None)
    if friction_method is None:
        raise AttributeError(
            "CvtShiftModel instance does not provide a friction computation method."
        )
    return friction_method(normal_force, axial_velocity)

friction = _compute_shift_friction(
    self._cvt_shift_model,
    net_radial,
    shift_velocity,
)

Comment on lines +23 to +44
@maybe_njit(cache=True, fastmath=True)
def slip_relative_speed_kernel(
    primary_angular_velocity: float,
    secondary_angular_velocity: float,
    cvt_ratio: float,
) -> float:
    return primary_angular_velocity - (secondary_angular_velocity * cvt_ratio)


@maybe_njit(cache=True, fastmath=True)
def slip_coupling_torque_kernel(
    relative_speed: float,
    torque_demand: float,
    t_max_capacity: float,
    slip_speed_smoothing: float,
) -> tuple[float, bool]:
    coulomb_torque = t_max_capacity * math.tanh(relative_speed / slip_speed_smoothing)
    alpha = min(max(abs(relative_speed) / slip_speed_smoothing, 0.0), 1.0)
    torque_demand_clamped = min(max(torque_demand, -t_max_capacity), t_max_capacity)
    coupling_torque = (1.0 - alpha) * torque_demand_clamped + alpha * coulomb_torque
    return coupling_torque, alpha > 0.0


Copilot AI Feb 19, 2026


This PR introduces several new kernel functions that are now part of the core dynamics (torque demand, slip coupling, etc.), but there are no unit tests validating their numerical behavior against known values / the previous Python implementations. Adding focused tests for these kernels would help catch regressions (e.g., smoothing/threshold edge cases, sign conventions, clamping).
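Such tests could look roughly like the following. The kernel body is copied verbatim from the diff above so the sketch is self-contained; in the repository the function would instead be imported from `cvt_simulator.utils.numba_kernels`:

```python
import math

# Copied from the diff above so this sketch runs standalone.
def slip_coupling_torque_kernel(relative_speed, torque_demand,
                                t_max_capacity, slip_speed_smoothing):
    coulomb_torque = t_max_capacity * math.tanh(relative_speed / slip_speed_smoothing)
    alpha = min(max(abs(relative_speed) / slip_speed_smoothing, 0.0), 1.0)
    torque_demand_clamped = min(max(torque_demand, -t_max_capacity), t_max_capacity)
    coupling_torque = (1.0 - alpha) * torque_demand_clamped + alpha * coulomb_torque
    return coupling_torque, alpha > 0.0


def test_locked_regime_passes_demand_through():
    # zero relative speed: alpha == 0, so the (clamped) demand passes through
    torque, slipping = slip_coupling_torque_kernel(0.0, 10.0, 50.0, 1.0)
    assert torque == 10.0 and not slipping


def test_demand_is_clamped_to_capacity():
    torque, _ = slip_coupling_torque_kernel(0.0, 999.0, 50.0, 1.0)
    assert torque == 50.0


def test_full_slip_saturates_to_coulomb_torque():
    # |relative_speed| >> smoothing: alpha == 1 and tanh saturates near +1
    torque, slipping = slip_coupling_torque_kernel(100.0, 0.0, 50.0, 1.0)
    assert slipping and abs(torque - 50.0) < 1e-3
```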

Owner

@gr812b gr812b left a comment


As far as I understand, numba's main purpose is to optimize all the operations that aren't numpy operations, with the JIT compiling them into C-like machine code. Solid and easy approach; the alternative is to use numpy numbers throughout instead of the current mix.

Happy to use this approach if the numba stuff is fixed to just be added at the top of existing functions instead of moving them all to one file

Owner


I like the idea of the @maybe_njit(cache=True, fastmath=True) decorator, but redefining all functions in this one spot is a real bad idea. It would be much better to just use that annotation throughout the codebase.

Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 7 out of 7 changed files in this pull request and generated 3 comments.



Comment on lines +45 to +47
exp_term = math.exp(mu_effective * wrap_angle)
capstan_term = (exp_term - 1.0) / (exp_term + 1.0)
radial_force_term = total_radial * radius / math.sin(wrap_angle / 2.0)

Copilot AI Feb 20, 2026


Inconsistent use of math.exp and math.sin in this numba kernel. In the _primary_flyweight_force_kernel above (lines 32-33), you use math.atan and math.tan, but this file also imports numpy as np. For consistency and to ensure numba can properly optimize all kernels in this file, consider using either all math functions or all np functions within numba kernels. The primary_flyweight_force_kernel uses math, so this kernel should also use math for consistency.
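For illustration, the capstan term written with `math`-module calls only, on plain scalar floats (which numba nopython kernels handle most predictably); the constants passed to it below are arbitrary examples:

```python
import math

# Illustrative scalar helper using only math-module calls, per the review
# note; not the project's actual kernel.
def capstan_term(mu_effective: float, wrap_angle: float) -> float:
    exp_term = math.exp(mu_effective * wrap_angle)
    # (e^x - 1) / (e^x + 1) is mathematically tanh(x / 2)
    return (exp_term - 1.0) / (exp_term + 1.0)
```

As a side note, the equivalent `math.tanh(mu_effective * wrap_angle / 2.0)` form avoids overflow of `math.exp` for large exponents.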

    def maybe_njit(*args, **kwargs):
        return njit(*args, **kwargs)

except ImportError:  # pragma: no cover - exercised when numba is installed

Copilot AI Feb 20, 2026


The pragma comment is misleading. It states "exercised when numba is installed" but this is the except ImportError block, which executes when numba is NOT installed. The comment should either be "exercised when numba is not installed" or be moved to line 1-7 which is exercised when numba is installed.

Suggested change
except ImportError: # pragma: no cover - exercised when numba is installed
except ImportError: # pragma: no cover - exercised when numba is not installed

Comment on lines +38 to +43
) -> tuple[float, bool]:
    coulomb_torque = t_max_capacity * math.tanh(relative_speed / slip_speed_smoothing)
    alpha = min(max(abs(relative_speed) / slip_speed_smoothing, 0.0), 1.0)
    torque_demand_clamped = min(max(torque_demand, -t_max_capacity), t_max_capacity)
    coupling_torque = (1.0 - alpha) * torque_demand_clamped + alpha * coulomb_torque
    return coupling_torque, alpha > 0.0

Copilot AI Feb 20, 2026


The second return value (boolean) from _coupling_torque_kernel is unused. If this value is not needed for any future functionality, consider simplifying the kernel to only return the coupling_torque. This would make the API cleaner and avoid confusion about the unused return value.

Suggested change
) -> tuple[float, bool]:
    coulomb_torque = t_max_capacity * math.tanh(relative_speed / slip_speed_smoothing)
    alpha = min(max(abs(relative_speed) / slip_speed_smoothing, 0.0), 1.0)
    torque_demand_clamped = min(max(torque_demand, -t_max_capacity), t_max_capacity)
    coupling_torque = (1.0 - alpha) * torque_demand_clamped + alpha * coulomb_torque
    return coupling_torque, alpha > 0.0
) -> float:
    coulomb_torque = t_max_capacity * math.tanh(relative_speed / slip_speed_smoothing)
    alpha = min(max(abs(relative_speed) / slip_speed_smoothing, 0.0), 1.0)
    torque_demand_clamped = min(max(torque_demand, -t_max_capacity), t_max_capacity)
    coupling_torque = (1.0 - alpha) * torque_demand_clamped + alpha * coulomb_torque
    return coupling_torque

Owner

@gr812b gr812b left a comment


Is there a reason you can't add those @ decorators to the functions that already exist? Ideally it would be best if we can just paste those @ above the existing functions instead of separating anything out if you catch my drift

Comment on lines +22 to +24
"numba": [
    "numba"
]
Owner


If numba is to be used in production it should likely just be added to the top default packages
