In dedupe/dedupe/labeler.py, line 93 (commit 54ecfe7), the line

self._features = numpy.delete(self._features, index, axis=0)

could use numpy.take() instead. While numpy.delete() is convenient, it allocates a new array and copies the data internally, which can introduce unnecessary memory and performance overhead, especially in performance-critical or high-frequency code paths.

Since the goal here is to exclude a single row, the same effect can be achieved with numpy.take() and an explicit list of the row indices to keep. This sidesteps numpy.delete()'s internal deletion logic and gives more direct control over index management.

Suggested replacement:

self._features = numpy.take(self._features, [i for i in range(self._features.shape[0]) if i != index], axis=0)

This produces the same result and may offer better performance on larger datasets.
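For anyone evaluating this change, here is a minimal sketch (the array shape and row index are arbitrary stand-ins, not values taken from labeler.py) that checks the two calls produce the same array and roughly compares their timings:

```python
import timeit

import numpy

# Arbitrary stand-ins for self._features and index, chosen only
# for illustration.
features = numpy.random.rand(100_000, 10)
index = 42  # row to exclude

# numpy.delete: allocates a new array without the given row.
deleted = numpy.delete(features, index, axis=0)

# numpy.take with an explicit list of the row indices to keep.
keep = [i for i in range(features.shape[0]) if i != index]
taken = numpy.take(features, keep, axis=0)

# Both approaches yield the same array.
assert numpy.array_equal(deleted, taken)

# Rough timing comparison; the list construction is included in the
# numpy.take timing so the comparison is fair. Results vary by machine
# and array size, so benchmark on representative data before switching.
n = 100
t_delete = timeit.timeit(lambda: numpy.delete(features, index, axis=0), number=n)
t_take = timeit.timeit(
    lambda: numpy.take(
        features,
        [i for i in range(features.shape[0]) if i != index],
        axis=0,
    ),
    number=n,
)
print(f"numpy.delete: {t_delete:.3f}s  numpy.take: {t_take:.3f}s")
```

For this single-row case, a boolean mask such as features[numpy.arange(features.shape[0]) != index] is another option that avoids building a Python list entirely; which variant is fastest will depend on the array size, so it is worth measuring rather than assuming.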