@SwayamInSync (Member)

Copilot Summary

This pull request adds full support for the float_power ufunc to the quad-precision dtype implementation, ensuring its behavior matches NumPy's expectations for floating-point types. The changes include the implementation of the float_power operation, updates to documentation, and comprehensive tests to verify correctness and edge cases.

Quad-precision ufunc implementation:

  • Added support for the float_power ufunc in init_quad_binary_ops, using the same implementation as power, since quad-precision is already a floating-point type.
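The rationale above can be illustrated with standard NumPy: for a dtype that is already floating-point, `power` and `float_power` compute the same thing, and the only user-visible difference is that `float_power` promotes integer inputs to a floating type. This is a hedged sketch using `float64` as the reference; the quad-precision dtype is expected to match this behavior.

```python
import numpy as np

# For floating-point inputs, power and float_power agree,
# which is why the same loop can back both ufuncs.
a = np.array([2.0, 4.0, 9.0])
b = np.array([0.5, 1.5, 2.0])
same = np.array_equal(np.power(a, b), np.float_power(a, b))  # True

# For integer inputs, float_power promotes to a float result,
# while power stays integral.
fp = np.float_power(2, 3)  # 8.0, floating dtype
p = np.power(2, 3)         # 8, integer dtype
```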

Documentation and release tracking:

  • Updated release_tracker.md to mark float_power as implemented and tested, reflecting its new support status.

Testing and validation:

  • Added extensive parameterized tests for float_power in test_quaddtype.py, covering a wide range of cases including integer and fractional exponents, negative bases, zero and infinity handling, NaN propagation, integer promotion, and array operations. These tests ensure the quad-precision implementation matches NumPy's behavior for all relevant scenarios.
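The edge cases listed above follow IEEE-style semantics. As a hedged sketch of what the tests check, here are the same cases evaluated with `float64` as the reference implementation; the quad-precision results are expected to agree.

```python
import numpy as np

# Suppress the expected divide/invalid warnings for the edge cases.
with np.errstate(invalid="ignore", divide="ignore"):
    zero_pos = np.float_power(0.0, 2.0)     # 0.0: zero base, positive exponent
    zero_neg = np.float_power(0.0, -1.0)    # inf: zero base, negative exponent
    inf_pos = np.float_power(np.inf, 2.0)   # inf
    inf_neg = np.float_power(np.inf, -2.0)  # 0.0
    nan_prop = np.float_power(np.nan, 2.0)  # nan: NaN propagates
    neg_frac = np.float_power(-8.0, 0.5)    # nan: negative base, fractional exponent
```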

@SwayamInSync (Member, Author)

No new implementation here, just registering power as float_power and writing test cases.

@juntyr (Contributor) left a comment


LGTM

@SwayamInSync (Member, Author)

Thanks @juntyr, merging this.

@SwayamInSync SwayamInSync merged commit 4e5d7d3 into numpy:main Oct 16, 2025
8 checks passed
@SwayamInSync SwayamInSync deleted the float_power branch October 16, 2025 14:59