High Memory Usage and Long GC Times When Writing Parquet Files #3102
Comments
I noticed that when I set withDictionaryEncoding(false), the writer switches from using FallbackValuesWriter to PlainValuesWriter. These two have significantly different memory usage. It seems that using PlainValuesWriter might address my issue. Here is the context: I would like to know:
In general, dictionary encoding consumes a lot of memory because it buffers all entries. So yes, switching to PlainValuesWriter should reduce the memory footprint.
I'd say Parquet is not designed for small data, where the metadata overhead is non-trivial. It is more suitable for 100,000+ rows of data, which can take full advantage of columnar encoding and compression.
Yes, in our business scenario we split the total sample into multiple Parquet files, each with a fixed 500 rows but a varying number of columns. When the column count is high (over 30,000), we encounter GC pauses lasting over a minute.

I modified the configuration to use dictionary encoding only for BINARY and BOOLEAN columns, while setting withDictionaryEncoding(false) for the other column types. After this change, GC time improved significantly, dropping from minutes to normal millisecond levels.

However, I ran into another issue: after setting withDictionaryEncoding(false), the size of the generated Parquet files increased substantially. For a task with 800,000 rows and 30,000+ columns, the total file size grew from around 20 GB to 90 GB, while our business requirements cap it at 50 GB.

To address this, I found that ParquetFileWriter does not configure file compression by default. After adding builder.withCompressionCodec(CompressionCodecName.SNAPPY), the total size dropped to around 30 GB, which meets our requirements while also solving the GC issue.

Still, we occasionally see file sizes exceed 50 GB, which did not happen before switching to withDictionaryEncoding(false). It seems that builder.withCompressionCodec(CompressionCodecName.SNAPPY) sometimes compresses less effectively than the original setup (dictionary encoding enabled, no explicit compression codec). I suspect the compression ratio depends on the file content.
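For reference, here is a minimal sketch of the kind of configuration described above, using the parquet-java example Group API. The schema, column names, and output path are hypothetical, and the per-column withDictionaryEncoding(columnPath, enable) overload assumes a reasonably recent parquet-java release (1.12+):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.example.data.Group;
import org.apache.parquet.example.data.simple.SimpleGroupFactory;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.example.ExampleParquetWriter;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;
import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.MessageTypeParser;

public class PerColumnDictionaryExample {
  public static void main(String[] args) throws Exception {
    // Hypothetical schema: one BINARY column kept dictionary-encoded,
    // one DOUBLE column with dictionary encoding disabled.
    MessageType schema = MessageTypeParser.parseMessageType(
        "message sample {\n"
      + "  optional binary label (UTF8);\n"
      + "  optional double feature_0;\n"
      + "}");

    Configuration conf = new Configuration();

    try (ParquetWriter<Group> writer = ExampleParquetWriter
        .builder(new Path("/tmp/sample.parquet"))   // hypothetical output path
        .withConf(conf)
        .withType(schema)
        // Compress data pages; without this the writer defaults to UNCOMPRESSED.
        .withCompressionCodec(CompressionCodecName.SNAPPY)
        // Disable dictionary encoding globally ...
        .withDictionaryEncoding(false)
        // ... then re-enable it only for the BINARY column (parquet-java 1.12+).
        .withDictionaryEncoding("label", true)
        .build()) {
      SimpleGroupFactory factory = new SimpleGroupFactory(schema);
      Group g = factory.newGroup().append("label", "a").append("feature_0", 1.0d);
      writer.write(g);
    }
  }
}
```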
Describe the usage question you have. Please include as many useful details as possible.
In my project, I am using the following code to write Parquet files to the server:
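(The original snippet is not reproduced here. As an illustration only, a Group-based writer of this general shape, with a shared schema and per-thread file paths, might look like the sketch below; the schema string, paths, and thread count are hypothetical.)

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.example.data.Group;
import org.apache.parquet.example.data.simple.SimpleGroupFactory;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.example.ExampleParquetWriter;
import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.MessageTypeParser;

public class ParallelParquetWrite {
  // One immutable schema shared by all threads (30,000 columns in the real case).
  static final MessageType SCHEMA = MessageTypeParser.parseMessageType(
      "message row { optional double col_0; optional double col_1; }");

  static void writeFile(String filePath, int rows) throws Exception {
    Configuration conf = new Configuration();
    SimpleGroupFactory factory = new SimpleGroupFactory(SCHEMA);
    try (ParquetWriter<Group> writer = ExampleParquetWriter
        .builder(new Path(filePath))
        .withConf(conf)
        .withType(SCHEMA)
        .build()) {
      for (int i = 0; i < rows; i++) {
        Group g = factory.newGroup()
            .append("col_0", (double) i)
            .append("col_1", i * 2.0d);
        writer.write(g);
      }
    }
  }

  public static void main(String[] args) throws Exception {
    // Each thread writes its own file; only the filePath differs.
    Thread t1 = new Thread(() -> {
      try { writeFile("/tmp/part-0.parquet", 500); } catch (Exception e) { throw new RuntimeException(e); }
    });
    Thread t2 = new Thread(() -> {
      try { writeFile("/tmp/part-1.parquet", 500); } catch (Exception e) { throw new RuntimeException(e); }
    });
    t1.start(); t2.start();
    t1.join(); t2.join();
  }
}
```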
Each Parquet file contains 30000 columns. This code is executed by multiple threads simultaneously, which results in increased GC time. Upon analyzing memory usage, I found that the main memory consumers are related to the following chain:
InternalParquetRecordWriter -> ColumnWriterV1 -> FallbackValuesWriter -> PlainDoubleDictionaryValuesWriter -> IntList
Each thread writes to a file with the same table schema (header), differing only in the filePath.
I initially suspected that the memory usage was caused by the file buffer not being flushed in time. To address this, I tried configuring the writer with the following parameters:
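(The exact values tried are not recorded here. The adjustments referred to are the row-group and page buffer sizes exposed on ParquetWriter.Builder; the sketch below uses hypothetical values and reuses the schema/conf/filePath shape from the earlier sketch.)

```java
// Hypothetical tuning values; the sizes actually tried are not shown above.
try (ParquetWriter<Group> writer = ExampleParquetWriter
    .builder(new Path(filePath))
    .withConf(conf)
    .withType(schema)
    .withRowGroupSize(8 * 1024 * 1024)   // flush row groups at ~8 MB instead of the 128 MB default
    .withPageSize(64 * 1024)             // 64 KB data pages
    .withDictionaryPageSize(64 * 1024)   // cap each column's dictionary page at 64 KB
    .build()) {
  // write Group records as before
}
```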
However, these adjustments did not solve the issue. The program still experiences long GC pauses and excessive memory usage.
Expected Behavior
Efficient Parquet file writing with reduced GC time and optimized memory usage when multiple threads are writing files simultaneously.
Observed Behavior
• Increased GC time and excessive memory usage.
• Memory analysis indicates IntList under PlainDoubleDictionaryValuesWriter is the primary consumer of memory.
Request
What are the recommended strategies to mitigate excessive memory usage in this scenario?
Is there a way to share table schema-related objects across threads, or other optimizations to reduce memory overhead?
Please let me know if additional information is needed!