[VL][Delta] Add native deletion vector scan foundation #11900
malinjawi wants to merge 30 commits into apache:main
Conversation
Implements complete read-only Deletion Vector support for Delta Lake:
- C++ DV reader and row index finder (14 files)
- JNI bindings for native DV operations (2 files)
- Runtime integration for DV-aware scans (3 files)
- Scala preprocessing and scan preparation (3 files)
- Comprehensive test coverage (5 files)

This is the foundation for native DV merge-on-read (MoR) operations. The read-only implementation ensures no risk of data corruption while enabling performance testing of the native read path.

Key components:
- DeltaDeletionVectorReader: reads DV files from storage
- DeltaRowIndexFinder: identifies deleted rows during scans
- WholeStageResultIterator: integrates DV filtering into result iteration
- PreprocessTableWithDVs: prepares Delta tables with DVs for native execution

Total: 27 files (read-only DV infrastructure only)
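For orientation, here is a minimal sketch of how these pieces fit together on the native side. The class names come from the PR's component list above, but the signatures are assumptions, and `std::unordered_set` stands in for the RoaringBitmapArray used by the actual implementation:

```cpp
#include <cstdint>
#include <fstream>
#include <string>
#include <unordered_set>
#include <utility>

// Stand-in for the deserialized deletion vector; the real code uses a
// RoaringBitmapArray rather than a hash set.
using DeletionBitmap = std::unordered_set<int64_t>;

// Hypothetical reader shape: the real code parses Delta's RoaringBitmapArray
// serialization; a flat list of little-endian int64 row indexes stands in here.
class DeltaDeletionVectorReader {
 public:
  // `path`, `offset`, and `size` would come from the Delta log's
  // deletionVector descriptor attached to a data file.
  DeletionBitmap read(const std::string& path, int64_t offset, int64_t size) {
    std::ifstream in(path, std::ios::binary);
    in.seekg(offset);
    DeletionBitmap bitmap;
    for (int64_t consumed = 0; consumed + 8 <= size; consumed += 8) {
      int64_t rowIndex = 0;
      in.read(reinterpret_cast<char*>(&rowIndex), sizeof(rowIndex));
      bitmap.insert(rowIndex);
    }
    return bitmap;
  }
};

// Hypothetical finder shape: answers "is file row index N deleted?" while the
// scan iterates, so deleted rows can be dropped from result batches.
class DeltaRowIndexFinder {
 public:
  explicit DeltaRowIndexFinder(DeletionBitmap bitmap)
      : bitmap_(std::move(bitmap)) {}

  bool isDeleted(int64_t rowIndex) const {
    return bitmap_.count(rowIndex) > 0;
  }

 private:
  DeletionBitmap bitmap_;
};
```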
This patch adds a Docker image with a Maven cache for the Spark package/test jobs, covering Spark 3.3/3.4/3.5/4.0/4.1. A second patch will enable this cache in the CI jobs.
Clean up redundant logging. Fixes: apache#11863
Bump the actions to match Apache policies; also fix the cache image build. Signed-off-by: Yuan <[email protected]>
* [GLUTEN-6887][VL] Daily Update Velox Version (dft-2026_04_01)

  Upstream Velox's new commits:
  - 24e6ab97b by Chengcheng Jin, fix(cudf): Fix complex data type name in format conversion and add tests (Part 1) (#16818)
  - d92b90029 by Natasha Sehgal, refactor: Propagate CastRule cost through canCoerce (#16821)
  - 361a42252 by Rui Mo, fix(fuzzer): Reduce Spark aggregate fuzzer test pressure (#16964)
  - 2c2fe2ab7 by root, fix: Ignore string column statistics for parquet-mr versions before 1.8.2 (#16744)
  - 7faf27a86 by Chengcheng Jin, feat(cudf): Add a log to show the detailed fallback message (#16900)
  - e603315e5 by Chang chen, feat(parquet): Add type widening support for INT and Decimal types with configurable narrowing (#16611)
  - 1e1674dd8 by Rajeev Singh, docs: Add blog post for adaptive per-function CPU tracking (#16945)
  - 0c6b89d61 by Masha Basmanova, fix(build): Guard fuzzer examples subdirectory with VELOX_BUILD_TESTING (#16992)
  - 8d6355d8d by Pratik Pugalia, build: Improve build impact comment layout (#16971)
  - 44d561990 by Masha Basmanova, refactor: Add ConnectorRegistry class with tryGet and unregisterAll (#16977)
  - 793f13f16 by Rajeev Singh, feat(expr-eval): Adaptive per-function CPU sampling for Velox expression evaluation (#16646)
  - 1a4dc7a5a by Pratik Pugalia, fix: Off-by-one boundary bug in make_timestamp validation (#16944)
  - 7f2c75c26 by Pratik Pugalia, Fix incorrect substr length in Tokenizer::matchUnquotedSubscript (#16972)
  - 22b90045e by Masha Basmanova, docs: Add truncate markers to blog posts for cleaner listing page (#16975)

* Fix SPARK-18108: exclude partition columns from HiveTableHandle dataColumns. When Gluten created HiveTableHandle, it passed all columns (including partition columns) as dataColumns. This caused Velox's convertType() to validate partition column types against the Parquet file's physical types, failing when they differ (e.g., LongType in the file vs. IntegerType from partition inference). Fix: build dataColumns excluding partition columns (ColumnType::kPartitionKey); partition column values come from the partition path, not from the file.

* Point Velox to the PR3 branch with parquet type widening support.

* Update VeloxTestSettings for Velox PR2. With the OAP INT-narrowing commit replaced by upstream Velox PR #15173: remove 2 excludes now passing (LongType->IntegerType, LongType->DateType) and add 2 excludes for new failures (IntegerType->ShortType; the OAP behavior was removed). 63 excludes (net unchanged: -2 +2). Test results: 21 pass / 63 ignored.

* Disable the native writer for ParquetTypeWideningSuite. This suite tests the read path only, so disabling the native writer lets Spark's writer produce correct V2 encodings (DELTA_BINARY_PACKED/DELTA_BYTE_ARRAY). Removes 10 excludes for decimal widening tests that now pass. Remaining 38 excludes: 34 because the Velox native reader rejects incompatible decimal conversions regardless of reader config (no parquet-mr fallback), and 4 because Velox does not support the DELTA_BYTE_ARRAY encoding. Test results: 46 pass / 38 ignored.

* Override 33 type widening tests with expectError=true. The Velox native reader always behaves like Spark's vectorized reader, so tests that rely on parquet-mr behavior (vectorized=false) fail. Instead of just excluding these 33 tests, add testGluten overrides with expectError=true to verify that Velox correctly rejects incompatible conversions: 16 unsupported INT->Decimal conversions, 6 decimal precision narrowing cases, and 11 decimal precision+scale narrowing/mixed cases. VeloxTestSettings: 38 excludes (parent tests) + 33 testGluten overrides. Test results: 79 pass / 38 ignored (33 excluded parents + 5 truly excluded).

* Fix Velox rebase; ignore failing UTs; fix the ignore API usage. Signed-off-by: Yuan <[email protected]>

* Fix the ClickHouse TPC-DS queries: the test data on the ClickHouse side is not updated, so revert to the old queries. Fix q30.

---------
Signed-off-by: glutenperfbot <[email protected]>
Signed-off-by: Yuan <[email protected]>
Co-authored-by: glutenperfbot <[email protected]>
Co-authored-by: Chang chen <[email protected]>
Co-authored-by: Copilot <[email protected]>
Co-authored-by: Yuan <[email protected]>
…ession (apache#11679) Check the CrossRelNode's expression and fall back if the expression is not supported. Fixes apache#11678.
…back (apache#11720) This PR adds a config to control fallback validation for TimestampNTZType in the Velox backend and adds a test for localtimestamp(). Currently, the validator treats TimestampNTZType as unsupported and forces the query to fall back to Spark. This makes it hard to develop and test features related to TimestampNTZ, including functions like localtimestamp(). With this change, the validation rule can be temporarily disabled during development and testing. Related issue: apache#1433 Co-authored-by: Mariam-Almesfer <[email protected]>
…when effective row count < 2 (apache#11850) Co-authored-by: xumingyong <[email protected]>
…eordered' suite (apache#11884) After apache#9473, there is an issue when executing the suite 'Eliminate two aggregate joins with attribute reordered'.
Signed-off-by: glutenperfbot <[email protected]> Co-authored-by: glutenperfbot <[email protected]>
Since Spark 3.2 support was dropped a few months ago, the related tests can be removed now. related: apache#11379
zhztheplayer left a comment
@malinjawi Thanks.
A broad design question: can you check whether it's worth maintaining a native deletion vector reader vs. passing a serialized deletion vector from Java to C++, in terms of overall read performance?
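For concreteness, the alternative being weighed here looks roughly like this on the native side: the JVM materializes and serializes the DV, and C++ only deserializes the bytes handed over through JNI. This is a sketch under assumptions, not code from this PR; the JNI class path and the flat-int64 placeholder encoding are invented for illustration (Delta's real format is the 64-bit RoaringBitmapArray serialization):

```cpp
#include <jni.h>

#include <cstdint>
#include <cstring>
#include <unordered_set>
#include <vector>

using DeletionBitmap = std::unordered_set<int64_t>;

// Placeholder decoding: a flat list of little-endian int64 row indexes.
// The real bytes would be Delta's RoaringBitmapArray serialization.
static DeletionBitmap deserializeDeletionVector(const uint8_t* data,
                                                size_t len) {
  DeletionBitmap bitmap;
  for (size_t i = 0; i + 8 <= len; i += 8) {
    int64_t rowIndex;
    std::memcpy(&rowIndex, data + i, sizeof(rowIndex));
    bitmap.insert(rowIndex);
  }
  return bitmap;
}

// Hypothetical JNI entry point: the JVM passes the serialized DV once per
// split; native code keeps it behind an opaque handle for the scan to use.
extern "C" JNIEXPORT jlong JNICALL
Java_org_apache_gluten_delta_DeletionVectorJniWrapper_create(
    JNIEnv* env, jclass, jbyteArray serialized) {
  const jsize len = env->GetArrayLength(serialized);
  std::vector<uint8_t> buf(static_cast<size_t>(len));
  env->GetByteArrayRegion(serialized, 0, len,
                          reinterpret_cast<jbyte*>(buf.data()));
  auto* bitmap =
      new DeletionBitmap(deserializeDeletionVector(buf.data(), buf.size()));
  // Released by a matching close() JNI call (not shown).
  return reinterpret_cast<jlong>(bitmap);
}
```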
Thanks for the design question @zhztheplayer. I checked this at two benchmark layers:
I compared the native path vs. a JVM-materialized serialized-DV handoff path, with the following setup:
Here is what I found. Microbenchmark:
End-to-end Spark benchmark:
From my observation, the smaller run shows a modest advantage for the JVM-materialized handoff path, but the larger run narrows that gap. Based on that, I think keeping the native DV reader is still the right foundation for this PR. We can introduce caching to avoid repeated DV loads, rather than shifting the DV load boundary from native to the JVM. What we could also look at is a zero-copy JNI byte-transfer design. What do you think @zhztheplayer? cc: @zhouyuan
@malinjawi many thanks for taking the time to test this.
If the performance is at the same level, should we just pass DVs from Java to C++? That would save us the effort of maintaining the C++ DV reader ourselves. Or do you have other reasons to add the C++ DV reader?
Thanks @zhztheplayer. I looked at this more closely and tried to separate the architectural reasoning from the benchmark question. Based on the results so far, I do not think we have a strong performance argument for keeping the C++ DV reader by itself: as it stands, the observed advantage is small and actually shrank as total query work grew. The more important point is the breakdown inside that DV slice:
From an Amdahl's law perspective, even if we made the native reader perfect, the upside is limited, because the hotter path is still the native apply/filter, not materialization/loading (a rough worked example follows below).
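To make that bound concrete, here is the Amdahl arithmetic with purely illustrative numbers (the fractions below are assumptions for exposition, not the measured values): if the DV slice is a fraction $p$ of total query time and loading is a fraction $q$ of that slice, a perfect native loader caps the end-to-end speedup at

\[
\text{speedup}_{\max} = \frac{1}{1 - p\,q},
\qquad
p = 0.05,\; q = 0.2
\;\Rightarrow\;
\frac{1}{1 - 0.01} \approx 1.01,
\]

i.e., at most about a 1% end-to-end win even if loading became free.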
I think this also helps separate the native path into more specific responsibilities. Based on the current measurements, the hottest and most valuable native parts are the consume/apply/filter steps; the native reader/materialization side is easier to question, because it does not currently show a strong speed win. So my current view is:

- The lowest-maintenance design would be to let JVM / Delta utilities own DV materialization/loading and keep native code focused on consume/apply/filter (a rough sketch of that apply/filter step is at the end of this comment).
- The most self-contained approach would be to let native code own the scan end to end, since we can build on top of it as shown in the Velox POC and Gluten POC.

There are architectural reasons I still see for keeping the native DV reader, but they rest on future, not-yet-demonstrated wins rather than anything evident in this PR's benchmarks. So I think we could go either way with this implementation, and the design choice between the native reader and the JVM should now be made mostly on maintenance / integration / future tradeoffs. What do you think would be the best approach, all things considered, @zhouyuan @zhztheplayer?
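Here is the rough sketch of the apply/filter step I keep referring to. This is my framing, not code from this PR, and it reuses the `std::unordered_set` stand-in for the deletion bitmap; a Velox integration would instead fold this into the scan's row-number/selectivity machinery:

```cpp
#include <cstdint>
#include <unordered_set>
#include <vector>

using DeletionBitmap = std::unordered_set<int64_t>;

// Given the file row index of a batch's first row and the deletion bitmap,
// return the in-batch positions that survive. The caller then gathers or
// compacts the output vectors using these positions.
std::vector<int32_t> selectLiveRows(const DeletionBitmap& deleted,
                                    int64_t batchStartRowIndex,
                                    int32_t batchSize) {
  std::vector<int32_t> live;
  live.reserve(static_cast<size_t>(batchSize));
  for (int32_t i = 0; i < batchSize; ++i) {
    if (deleted.count(batchStartRowIndex + i) == 0) {
      live.push_back(i);  // position within the batch, not a file row index
    }
  }
  return live;
}
```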
@malinjawi thanks for the write-up. I understand the current status better now. Overall, a native DV IO layer might become beneficial in the future, once we meet more DV-heavy workloads than we do today, so we don't have to commit to one approach over the other. I would be inclined to start with the simpler, easier-to-maintain approach, i.e., passing DVs from Java to C++, and preserve the possibility of switching to a native DV layer in our design and roadmap. Just my 2 cents; insights are welcome here.
Issue: #11901
What changes are proposed in this pull request?
This PR adds the native Delta deletion vector scan foundation for the Velox backend in Gluten.
The architecture is based on the design in Delta Lake's Deletion Vectors High Level Design.
The scope is intentionally limited to the read path. It introduces the JVM and native plumbing required to read Delta tables with deletion vectors natively, while keeping DML and DV write-path work out of scope for a follow-up PR.
The main changes are:
- New native code under cpp/velox/compute/delta, including the RoaringBitmapArray helper needed by the native DV read path, plus tests.

This PR does not include DML or DV write-path support; as noted above, that is deferred to a follow-up PR.
How was this patch tested?
New C++ unit tests under cpp/velox/compute/delta/tests: DeltaConnectorTest, DeltaDeletionVectorReaderTest, DeltaSplitTest, DeltaUuidUtilsTest. These are added to the velox target and compile through the new Delta translation units. (A hypothetical sketch of one such test follows at the end of this description.)

Was this patch authored or co-authored using generative AI tooling?
Co-authored: IBM Bob
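For a flavor of the listed suites, here is a hypothetical sketch of the kind of assertion DeltaDeletionVectorReaderTest might make. The helper and the hash-set stand-in for the deletion bitmap are assumptions for illustration, not the PR's actual test code:

```cpp
#include <gtest/gtest.h>

#include <cstdint>
#include <unordered_set>

using DeletionBitmap = std::unordered_set<int64_t>;

// Minimal helper mirroring the "is this row deleted?" query the scan performs.
static bool isDeleted(const DeletionBitmap& bitmap, int64_t rowIndex) {
  return bitmap.count(rowIndex) > 0;
}

TEST(DeltaDeletionVectorReaderTest, MarksOnlyListedRowsDeleted) {
  const DeletionBitmap bitmap{2, 5, 7};
  EXPECT_TRUE(isDeleted(bitmap, 5));
  EXPECT_FALSE(isDeleted(bitmap, 4));
  EXPECT_FALSE(isDeleted(bitmap, 100));
}
```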