feat: add complex dtype support for mean
#850
Conversation
For complex input to `mean`, PyTorch, JAX and CuPy all yield the same result. It's possible the behavior in NumPy may change (following CPython's change for division): numpy/numpy#26560.
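For reference, a minimal reproduction of the NumPy behavior described in the PR description below (output observed with current NumPy; per the linked issue, this may change):

```python
import numpy as np

x = np.array([1.0 + 0.0j, 2.0 + 0.0j, complex(np.nan, 0.0)])

# NumPy computes the sum and then divides by the length using complex
# division, so the NaN in the real component contaminates the imaginary
# component as well (nan * 0 is nan in the cross terms).
print(np.mean(x))  # (nan+nanj), not (nan+0j)
```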
I thought we had decided at some point that the distinction between things like `nan+0j` and `nan+nanj` is not significant. I've certainly taken that view in some places when implementing things, taking something like "the result should be nan" to mean `isnan(result)` is `True`.
Let me add a +1 LGTM to this proposal.
I think so too, and that is what resulted in the current spec for `isnan`, which says: "For complex floating-point operands, let `a = real(x_i)` and `b = imag(x_i)`; if `a` is `NaN` or `b` is `NaN`, the result is `True`."
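A quick illustration of those semantics (a sketch using NumPy, whose `isnan` follows this rule):

```python
import numpy as np

# isnan is True when either component is NaN, so nan+0j and nan+nanj
# are indistinguishable through isnan alone.
x = np.array([complex(np.nan, 0.0), complex(np.nan, np.nan), 1.0 + 0.0j])
print(np.isnan(x))  # [ True  True False]
```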
That sounds right to me.
The discussion concerned infinities and the one-infinity model. It did not extend to NaNs, apart from the behavior of `isnan`.
I've updated the proposed changes to include a note explaining that, to check whether a result is `NaN`, users should use `isnan`, rather than rely on the values of individual components.
The last change addresses the concern about special cases. I also quickly checked that all of NumPy, PyTorch, CuPy, JAX and Dask support complex input to `mean` and behave the same. So this should be good to go - thanks @kgryte, all!
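A rough sketch of such a cross-library check (assumes `numpy`, `torch`, and `jax` are installed; CuPy and Dask calls follow the same shape):

```python
import numpy as np
import torch
import jax.numpy as jnp

data = [1.0 + 0.0j, 2.0 + 0.0j, complex(np.nan, 0.0)]

# Each library accepts complex input to mean; compare the results.
print(np.mean(np.array(data, dtype=np.complex128)))
print(torch.mean(torch.tensor(data, dtype=torch.complex128)))
print(jnp.mean(jnp.array(data)))  # complex64 unless x64 mode is enabled
```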
This PR adds complex dtype support to `mean` and follows #846. If `mean` were implemented as `sum` followed by scalar division, then `NaN` components should only propagate relative to the respective component. However, in NumPy, given `array([1.+0.j, 2.+0.j, nan+0.j])`, NumPy will return `nan+nanj` when invoking `np.mean`. Accordingly, to check whether a result is `NaN`, users should use `isnan`, rather than rely on individual components.
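To make that recommendation concrete, a small sketch (NumPy used for illustration) contrasting the portable `isnan` check with a component-wise check:

```python
import numpy as np

x = np.array([1.0 + 0.0j, 2.0 + 0.0j, complex(np.nan, 0.0)])
m = np.mean(x)

# Portable: True whenever either component is NaN.
print(np.isnan(m))  # True

# Fragile: exactly which components end up NaN is not guaranteed by
# the spec (NumPy currently yields NaN in both components here).
print(np.isnan(m.real), np.isnan(m.imag))  # True True in NumPy today
```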