[SPARK-52877][PYTHON] Improve Python UDF Arrow Serializer Performance #51225
Conversation
From a cursory look, this seems to make sense.
I think we may need to update the corresponding documentation, especially for the type coercion difference. Maybe the migration guide.
For the type coercion table, I remember we documented it in the Python code docs. We may need to update that as well.
Merged to master.
What changes were proposed in this pull request?
This PR removes the pandas <> Arrow conversion in Arrow-optimized Python UDFs by using PyArrow directly, improving both performance and memory usage.
Why are the changes needed?
The Python UDF Arrow serializer has significant overhead from converting Arrow batches into pandas Series and converting UDF results back into a pandas DataFrame.
We can instead convert Python objects directly to Arrow and avoid the expensive pandas conversion, as sketched below.
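A minimal sketch of the idea (not the actual serializer code in this PR), contrasting the legacy pandas round-trip with a direct PyArrow conversion for a single-column batch and a per-row function `f`:

```python
import pyarrow as pa

# Illustrative sketch only; these helpers are not the actual Spark serializer API.

def legacy_eval(batch: pa.RecordBatch, f) -> pa.Array:
    # Legacy path: Arrow -> pandas -> Python -> pandas -> Arrow.
    series = batch.column(0).to_pandas()      # Arrow column to pandas Series
    results = series.map(f)                   # run the Python UDF per row
    return pa.Array.from_pandas(results)      # pandas Series back to Arrow

def direct_eval(batch: pa.RecordBatch, f) -> pa.Array:
    # New path: Arrow -> Python -> Arrow, skipping pandas entirely.
    results = [f(v) for v in batch.column(0).to_pylist()]
    return pa.array(results)                  # build the Arrow array directly

batch = pa.RecordBatch.from_pydict({"x": [1, 2, 3]})
assert direct_eval(batch, lambda v: v + 1).equals(legacy_eval(batch, lambda v: v + 1))
```

Skipping the two pandas hops removes the per-batch Series/DataFrame allocations, which is where both the CPU and memory savings come from.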
Does this PR introduce any user-facing change?
Yes. The type coercion behavior of Arrow-optimized Python UDFs changes. The PR description includes the legacy type coercion table, the new type coercion table, and a table diff that displays only the differing cells in the form <table1_value> vs. <table2_value>.
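As a hedged illustration of where those tables apply (assuming an active `spark` session; the behavior shown is not taken from the PR's tables), a UDF whose Python return value does not match its declared return type forces the serializer to coerce, and such mismatches are exactly the cells where the legacy and new serializers can differ:

```python
from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType

# The UDF returns a Python float while the declared return type is IntegerType,
# so the serializer must coerce the result; depending on the serializer
# (legacy pandas-based vs. direct PyArrow), the value may be truncated or the
# conversion may be rejected.
to_int = udf(lambda s: float(s) * 1.5, returnType=IntegerType(), useArrow=True)

df = spark.range(3).selectExpr("CAST(id AS STRING) AS s")
df.select(to_int("s")).show()
```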
How was this patch tested?
Correctness: added tests covering both the legacy and the new code paths for Arrow-batch eval.
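A rough sketch of how both code paths could be exercised in a test (an active `spark` session is assumed, and the config key below is hypothetical, standing in for whatever flag selects the legacy serializer):

```python
import unittest
from pyspark.sql.functions import udf
from pyspark.sql.types import LongType

# Hypothetical config key; it only stands in for the flag that selects the
# legacy pandas-based serializer.
LEGACY_FLAG = "spark.sql.legacy.execution.pythonUDF.pandas.conversion.enabled"

class ArrowUDFSerializerSuite(unittest.TestCase):
    def _run(self, use_legacy: bool):
        spark.conf.set(LEGACY_FLAG, str(use_legacy).lower())
        plus_one = udf(lambda x: x + 1, returnType=LongType(), useArrow=True)
        return spark.range(5).select(plus_one("id")).collect()

    def test_legacy_and_new_paths_agree(self):
        # The two serializers should produce identical results for simple types.
        self.assertEqual(self._run(True), self._run(False))
```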
Memory usage improvement:
According to a manual benchmark, the new path shows roughly a 1.25x improvement in memory usage compared with the legacy pandas <> Arrow conversion serialization.
Sample output from memory_profiler:
Benchmark details here
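For reference, a self-contained sketch (not the PR's actual benchmark) of using memory_profiler to compare the peak memory of the pandas round-trip against a direct PyArrow conversion:

```python
import pyarrow as pa
from memory_profiler import memory_usage

# Self-contained sketch, not the PR's benchmark: compare peak memory of a
# pandas round-trip vs. a direct PyArrow conversion for a simple per-row UDF.
data = pa.array(range(2_000_000))

def f(x):
    return x + 1

def via_pandas():
    series = data.to_pandas()                    # Arrow -> pandas
    pa.Array.from_pandas(series.map(f))          # pandas -> Arrow

def via_arrow():
    pa.array([f(v) for v in data.to_pylist()])   # Arrow -> Python -> Arrow

for fn in (via_pandas, via_arrow):
    peak = max(memory_usage((fn, (), {}), interval=0.05))
    print(f"{fn.__name__}: peak ~{peak:.0f} MiB")
```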
Was this patch authored or co-authored using generative AI tooling?
No