
Implement intermediate result blocked approach to aggregation memory management #15591


Draft · wants to merge 61 commits into main

Conversation

Rachelint
Contributor

@Rachelint Rachelint commented Apr 5, 2025

Which issue does this PR close?

Rationale for this change

As mentioned in #7065, we use a single Vec to manage aggregation intermediate results in both GroupAccumulator and GroupValues.

This is simple, but not efficient enough for high-cardinality aggregation: whenever the Vec is not large enough, we need to allocate a new, larger Vec and copy all data from the old one.

  • Copying a large amount of data (due to the high cardinality) is obviously expensive
  • It is also not CPU friendly (it invalidates caches and the TLB)

So this PR introduces a blocked approach for managing the aggregation intermediate results. In this approach we never resize a Vec; instead we split the data into blocks, and when the capacity is not enough, we simply allocate a new block. See #7065 for details.
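For illustration, a minimal sketch of the idea (the type and method names below are made up for this write-up, not the ones used in the PR):

    /// Illustrative blocked storage: capacity grows by appending fixed-size
    /// blocks, so existing values are never moved (unlike a resizing `Vec`).
    struct BlockedValues<T> {
        block_size: usize,
        blocks: Vec<Vec<T>>,
    }

    impl<T> BlockedValues<T> {
        fn new(block_size: usize) -> Self {
            Self { block_size, blocks: Vec::new() }
        }

        /// Append a value, allocating a new block when the last one is full
        fn push(&mut self, value: T) {
            let need_new_block = self
                .blocks
                .last()
                .map_or(true, |block| block.len() == self.block_size);
            if need_new_block {
                self.blocks.push(Vec::with_capacity(self.block_size));
            }
            self.blocks.last_mut().unwrap().push(value);
        }

        /// Read a value by its block id and offset within that block
        fn get(&self, block_id: usize, block_offset: usize) -> &T {
            &self.blocks[block_id][block_offset]
        }
    }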

What changes are included in this PR?

  • Implement the sketch for the blocked approach
  • Implement blocked-groups support in PrimitiveGroupsAccumulator and GroupValuesPrimitive as examples

Are these changes tested?

Covered by existing tests, plus new unit tests and new fuzz tests.

Are there any user-facing changes?

Two functions are added to the GroupValues and GroupAccumulator traits.

But as you can see, there are default implementations for both, so users can opt in to the blocked approach only when they want better performance for their UDAFs.

    /// Returns `true` if this accumulator supports blocked groups.
    fn supports_blocked_groups(&self) -> bool {
        false
    }

    /// Alter the block size in the accumulator
    ///
    /// If the target block size is `None`, it will use a single big
    /// block (think of it as one `Vec`) to manage the state.
    ///
    /// If the target block size is `Some(blk_size)`, it will try to
    /// set the block size to `blk_size`; this only succeeds when the
    /// accumulator supports blocked mode.
    ///
    /// NOTICE: after altering the block size, all previously stored data is cleared.
    ///
    fn alter_block_size(&mut self, block_size: Option<usize>) -> Result<()> {
        if block_size.is_some() {
            return Err(DataFusionError::NotImplemented(
                "this accumulator doesn't support blocked mode yet".to_string(),
            ));
        }

        Ok(())
    }
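
For illustration, an accumulator that wants the optimization could override them roughly like this (a hedged sketch: MyBlockedCountAccumulator and its fields are invented for this example, the methods are shown as inherent methods because the rest of the trait is elided, and Result is the DataFusion Result alias used in the snippet above):

    struct MyBlockedCountAccumulator {
        /// Current block size; `None` means a single big block
        block_size: Option<usize>,
        /// Counts stored block by block instead of in one big `Vec`
        counts: Vec<Vec<u64>>,
    }

    impl MyBlockedCountAccumulator {
        /// Opt in to blocked groups
        fn supports_blocked_groups(&self) -> bool {
            true
        }

        /// Accept any block size; previously stored data must be cleared,
        /// as documented on the trait method above
        fn alter_block_size(&mut self, block_size: Option<usize>) -> Result<()> {
            self.counts.clear();
            self.block_size = block_size;
            Ok(())
        }
    }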

@Rachelint Rachelint changed the title Impl Intermeidate result blocked approach framework Impl intermeidate result blocked approach framework Apr 5, 2025
@Rachelint Rachelint changed the title Impl intermeidate result blocked approach framework Impl intermeidate result blocked approach sketch Apr 5, 2025
@github-actions github-actions bot added the logical-expr Logical plan and expressions label Apr 5, 2025
@Dandandan
Contributor

Hi @Rachelint I think I have an alternative proposal that seems relatively easy to implement.
I'll share it with you once I have some time to validate the design (probably this evening).

@Rachelint
Contributor Author

Rachelint commented Apr 8, 2025

Hi @Rachelint I think I have an alternative proposal that seems relatively easy to implement. I'll share it with you once I have some time to validate the design (probably this evening).

Thanks a lot. The design in this PR indeed still introduces quite a few code changes...

At first I tried not to modify anything about GroupAccumulator:

  • Only implement the blocked logic in GroupValues
  • Then reorder the input batch according to the block indices returned from GroupValues
  • Apply the corresponding slice of the input batch to the related GroupAccumulator
  • And when a new block is needed, create a new GroupAccumulator (one block per GroupAccumulator)

But I found this way introduces too much extra cost...

Maybe we could place the block indices into values in merge/update_batch as an Array?
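
For context, a rough sketch of what a "blocked" group index could look like (my own illustration of the idea discussed in #7065, not code from this PR): the high bits select the block and the low bits select the offset inside the block.

    /// Illustrative packed group index: block id in the high 32 bits,
    /// offset within the block in the low 32 bits
    #[derive(Copy, Clone, Debug, PartialEq, Eq)]
    struct BlockedGroupIndex {
        block_id: u32,
        block_offset: u32,
    }

    impl BlockedGroupIndex {
        fn pack(self) -> u64 {
            ((self.block_id as u64) << 32) | (self.block_offset as u64)
        }

        fn unpack(raw: u64) -> Self {
            Self {
                block_id: (raw >> 32) as u32,
                block_offset: (raw & u32::MAX as u64) as u32,
            }
        }
    }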

@Rachelint Rachelint force-pushed the intermeidate-result-blocked-approach branch 2 times, most recently from cc37eba to f690940 Compare April 9, 2025 14:37
@github-actions github-actions bot added the functions Changes to functions implementation label Apr 10, 2025
@Rachelint Rachelint force-pushed the intermeidate-result-blocked-approach branch from 95c6a36 to a4c6f42 Compare April 10, 2025 11:10
@github-actions github-actions bot added the physical-expr Changes to the physical-expr crates label Apr 10, 2025
@Rachelint Rachelint force-pushed the intermeidate-result-blocked-approach branch 6 times, most recently from 2100a5b to 0ee951c Compare April 17, 2025 11:56
@Rachelint
Contributor Author

Rachelint commented Apr 17, 2025

I have finished development (and testing) of all the needed common structs!
The remaining work for this one:

  • Support blocked-related logic in GroupedHashAggregateStream (we can copy it from Sketch for aggregation intermediate results blocked management #11943)
  • Logic for deciding when we should enable this optimization
  • Example blocked versions of GroupAccumulator and GroupValues
  • Unit tests for the blocked GroupValuesPrimitive; it is a bit complex
  • Fuzz tests
  • Chore: fix docs, fix clippy, add more comments...

@Rachelint Rachelint force-pushed the intermeidate-result-blocked-approach branch 2 times, most recently from c51d409 to 2863809 Compare April 20, 2025 14:46
@github-actions github-actions bot added execution Related to the execution crate common Related to common crate sqllogictest SQL Logic Tests (.slt) labels Apr 21, 2025
@Rachelint
Contributor Author

It is very close, just need to add more tests!

@Rachelint Rachelint force-pushed the intermeidate-result-blocked-approach branch 3 times, most recently from 31d660d to 2b8dd1e Compare April 22, 2025 18:52
@Rachelint
Contributor Author

I suspect it may be due to the row-level random access of VecDeque?
I have replaced VecDeque with Vec.

And I am trying to run the benchmark in more environments rather than only my dev machine (x86 + CentOS + 6 cores).

@Rachelint
Contributor Author

Rachelint commented May 6, 2025

@jayzhan211 @alamb I think I nearly understand why.

It is related to target_partitions.

We maintain the intermediate results separately in each partition.

So when target_partitions is larger, the amount of intermediate results in each partition is smaller.

And when the intermediate results assigned to a partition are small enough, the blocked approach makes almost no difference (in the extreme case, the Vec used to store results never resizes).
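
A back-of-envelope way to see this (my own illustration, not a measurement from this PR, using an arbitrary example of 10M total groups split over 4 vs 32 partitions): a doubling Vec copies roughly n elements in total while growing to n entries, so the saving from never copying shrinks as the per-partition group count shrinks.

    /// Elements copied while a doubling `Vec` grows to `n` entries;
    /// the blocked approach copies none of them
    fn elements_copied_by_doubling(n: usize) -> usize {
        let mut cap = 1usize;
        let mut copied = 0usize;
        while cap < n {
            copied += cap; // old contents move into the new allocation
            cap *= 2;
        }
        copied
    }

    fn main() {
        // e.g. 10M total groups spread over 4 partitions vs 32 partitions
        for groups_per_partition in [2_500_000usize, 312_500] {
            println!(
                "{groups_per_partition} groups per partition -> ~{} elements copied",
                elements_copied_by_doubling(groups_per_partition)
            );
        }
    }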

Here are my benchmark results on a production machine (x86_64, 16 cores, 3699.178 MHz, 64 GB RAM) with different target_partitions:

  • target_partitions = 4 (amazing improvement!)
// blocked approach
Q7: SELECT "WatchID", MIN("ResolutionWidth") as wmin, MAX("ResolutionWidth") as wmax, SUM("IsRefresh") as srefresh FROM hits GROUP BY "WatchID" ORDER BY "WatchID" DESC LIMIT 10;
Query 7 iteration 0 took 4996.7 ms and returned 10 rows
Query 7 iteration 1 took 5177.6 ms and returned 10 rows
Query 7 iteration 2 took 5335.4 ms and returned 10 rows
Query 7 iteration 3 took 5232.7 ms and returned 10 rows
Query 7 iteration 4 took 5350.5 ms and returned 10 rows
Query 7 avg time: 5218.59 ms

// main
Q7: SELECT "WatchID", MIN("ResolutionWidth") as wmin, MAX("ResolutionWidth") as wmax, SUM("IsRefresh") as srefresh FROM hits GROUP BY "WatchID" ORDER BY "WatchID" DESC LIMIT 10;
Query 7 iteration 0 took 8484.9 ms and returned 10 rows
Query 7 iteration 1 took 8194.4 ms and returned 10 rows
Query 7 iteration 2 took 8317.9 ms and returned 10 rows
Query 7 iteration 3 took 8183.2 ms and returned 10 rows
Query 7 iteration 4 took 8225.7 ms and returned 10 rows
Query 7 avg time: 8281.19 ms
  • target_partitions = 8 (emmm... some improvement)
// blocked approach
Q7: SELECT "WatchID", MIN("ResolutionWidth") as wmin, MAX("ResolutionWidth") as wmax, SUM("IsRefresh") as srefresh FROM hits GROUP BY "WatchID" ORDER BY "WatchID" DESC LIMIT 10;
Query 7 iteration 0 took 2599.2 ms and returned 10 rows
Query 7 iteration 1 took 2857.9 ms and returned 10 rows
Query 7 iteration 2 took 3046.3 ms and returned 10 rows
Query 7 iteration 3 took 2766.0 ms and returned 10 rows
Query 7 iteration 4 took 2830.8 ms and returned 10 rows
Query 7 avg time: 2820.04 ms

// main
Q7: SELECT "WatchID", MIN("ResolutionWidth") as wmin, MAX("ResolutionWidth") as wmax, SUM("IsRefresh") as srefresh FROM hits GROUP BY "WatchID" ORDER BY "WatchID" DESC LIMIT 10;
Query 7 iteration 0 took 3820.0 ms and returned 10 rows
Query 7 iteration 1 took 3716.6 ms and returned 10 rows
Query 7 iteration 2 took 3728.3 ms and returned 10 rows
Query 7 iteration 3 took 3628.2 ms and returned 10 rows
Query 7 iteration 4 took 3912.6 ms and returned 10 rows
Query 7 avg time: 3761.15 ms
  • target_partitions = 32 (sadly, almost no improvement...)
// blocked approach
Q7: SELECT "WatchID", MIN("ResolutionWidth") as wmin, MAX("ResolutionWidth") as wmax, SUM("IsRefresh") as srefresh FROM hits GROUP BY "WatchID" ORDER BY "WatchID" DESC LIMIT 10;
Query 7 iteration 0 took 1383.7 ms and returned 10 rows
Query 7 iteration 1 took 1274.3 ms and returned 10 rows
Query 7 iteration 2 took 1321.7 ms and returned 10 rows
Query 7 iteration 3 took 1308.1 ms and returned 10 rows
Query 7 iteration 4 took 1310.6 ms and returned 10 rows
Query 7 avg time: 1319.68 ms

// main
Q7: SELECT "WatchID", MIN("ResolutionWidth") as wmin, MAX("ResolutionWidth") as wmax, SUM("IsRefresh") as srefresh FROM hits GROUP BY "WatchID" ORDER BY "WatchID" DESC LIMIT 10;
Query 7 iteration 0 took 1440.7 ms and returned 10 rows
Query 7 iteration 1 took 1430.3 ms and returned 10 rows
Query 7 iteration 2 took 1357.5 ms and returned 10 rows
Query 7 iteration 3 took 1352.6 ms and returned 10 rows
Query 7 iteration 4 took 1342.2 ms and returned 10 rows
Query 7 avg time: 1384.64 ms

@Rachelint Rachelint force-pushed the intermeidate-result-blocked-approach branch from e08a109 to 8807026 Compare May 6, 2025 09:01
@@ -93,20 +94,27 @@ where
opt_filter: Option<&BooleanArray>,
total_num_groups: usize,
) -> Result<()> {
Contributor

this seems rather small? won't larger values have lower overhead?

Contributor Author

this seems rather small? won't larger values have lower overhead?

Do you mean the block size?

Contributor

Yes, I mean block size. I would expect something like batch size (4-8k), or maybe even bigger, to have lower overhead? Did you run some experiments?

Contributor Author

@Rachelint Rachelint May 6, 2025

Yes, I mean block size. I would expect something like batch size (4-8k), or maybe even bigger, to have lower overhead? Did you run some experiments?

Yes, I tried it.

Now I set block_size = batch_size.

I tried a smaller batch_size like 1024, and this PR shows an improvement compared to main.

That is because this PR also eliminates the call to Array::slice, which is non-trivial due to the computation of the null count. See apache/arrow-rs#6155 for details.

Contributor Author

But it makes sense to make block_size a dedicated config.
I think it can bring an improvement in the case where batch_size is set to a too-small value while the group-by cardinality is actually large.

Contributor

Ah I see that batch size is used by default :).
Yeah, I think it makes sense to test it a bit further; maybe a slightly larger value (e.g. 2x, 4x batch size) will be beneficial when the cardinality is above the batch size.

Also, at some point it might make sense to think of it in terms of size in memory instead of number of elements (e.g. a block of u8 values might hold 16x more values than u128).

Contributor Author

I found the hash computation for primitives is actually non-trivial...
And in the high-cardinality case, the duplicated hash computation caused by rehashing is really expensive!

I am experimenting with also saving the hash in the hash table, like the multi-column group by does. I have actually found an improvement in the newly added query in extended.sql.
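
Just to illustrate the idea (a rough sketch of my own, not this PR's code; it assumes hashbrown's RawTable from the `raw` feature, and the struct and field names are invented):

    use hashbrown::raw::RawTable;

    /// Cache each group's hash next to its group index, so that when the
    /// table grows (rehashes) the stored hash is reused instead of being
    /// recomputed from the key
    struct PrimitiveGroups {
        /// (cached hash, group index)
        map: RawTable<(u64, usize)>,
        /// group values, indexed by group index
        values: Vec<u64>,
    }

    impl PrimitiveGroups {
        fn intern(&mut self, value: u64, hash: u64) -> usize {
            let values = &self.values;
            if let Some(&(_, group_idx)) = self.map.get(hash, |(_, g)| values[*g] == value) {
                return group_idx;
            }
            let group_idx = self.values.len();
            self.values.push(value);
            // On resize, the hasher closure just returns the cached hash
            self.map.insert(hash, (hash, group_idx), |(h, _)| *h);
            group_idx
        }
    }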

Contributor Author

Ah I see that batch size is used by default :). Yeah, I think it makes sense to test it a bit further; maybe a slightly larger value (e.g. 2x, 4x batch size) will be beneficial when the cardinality is above the batch size.

Also, at some point it might make sense to think of it in terms of size in memory instead of number of elements (e.g. a block of u8 values might hold 16x more values than u128).

It is really a good idea to split blocks by size rather than number of rows; I will experiment with it after trying some existing ideas later.
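
For illustration, sizing blocks by a byte budget instead of a row count could look roughly like this (my own sketch; the 1 MiB budget is an arbitrary example, not a value from this PR):

    use std::mem::size_of;

    /// Pick a per-block row count from a target block size in bytes, so a
    /// block of `u8` values holds many more rows than a block of `u128`
    fn rows_per_block<T>(target_block_bytes: usize) -> usize {
        let value_size = size_of::<T>().max(1);
        (target_block_bytes / value_size).max(1)
    }

    fn main() {
        let budget = 1024 * 1024; // e.g. 1 MiB per block (arbitrary)
        assert_eq!(rows_per_block::<u8>(budget), 1_048_576);
        assert_eq!(rows_per_block::<u128>(budget), 65_536);
    }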

/// The values for each group index
values: Vec<T::Native>,
values: Vec<Vec<T::Native>>,
Contributor

Could we use

Suggested change
values: Vec<Vec<T::Native>>,
values: Vec<Box<[T::Native]>>,

here?
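
For reference, one motivation behind this suggestion (my own note, not from the reviewer): a boxed slice drops the capacity field and any spare capacity, which also matches fixed-size blocks that never grow.

    use std::mem::size_of;

    fn main() {
        // On a 64-bit target: ptr + len + capacity vs. just a fat pointer
        assert_eq!(size_of::<Vec<u64>>(), 24);
        assert_eq!(size_of::<Box<[u64]>>(), 16);
    }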

@alamb
Contributor

alamb commented May 6, 2025

🤖: Benchmark completed
Details

It is really surprising that it shows slower?

FWIW the machine I am running this benchmark on has 16 not very good cores:

cat /proc/cpuinfo
...
processor	: 15
vendor_id	: GenuineIntel
cpu family	: 6
model		: 85
model name	: Intel(R) Xeon(R) CPU @ 3.10GHz
stepping	: 7
microcode	: 0xffffffff
cpu MHz		: 3100.320
cache size	: 25344 KB
physical id	: 0
siblings	: 16
core id		: 7
cpu cores	: 8
apicid		: 15
initial apicid	: 15
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat avx512_vnni md_clear arch_capabilities
bugs		: spectre_v1 spectre_v2 spec_store_bypass swapgs taa mmio_stale_data retbleed eibrs_pbrsb bhi
bogomips	: 6200.64
clflush size	: 64
cache_alignment	: 64
address sizes	: 46 bits physical, 48 bits virtual
power management:

@alamb
Contributor

alamb commented May 6, 2025

🤖 ./gh_compare_branch.sh Benchmark Script Running
Linux aal-dev 6.11.0-1013-gcp #13~24.04.1-Ubuntu SMP Wed Apr 2 16:34:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
Comparing intermeidate-result-blocked-approach (62157a9) to 6cc4953 diff
Benchmarks: tpch_mem clickbench_partitioned clickbench_extended
Results will be posted here when complete

@Rachelint
Contributor Author

FWIW the machine I am running this benchmark on has 16 not very good cores

OK, it may really be related to target_partitions.
I also found only a slight improvement locally when target_partitions = 16 (which defaults to the CPU count).

When target_partitions is large, the intermediate results Vec in each partition will be small, so the blocked approach makes little difference.

@alamb
Contributor

alamb commented May 6, 2025

🤖: Benchmark completed

Details

Comparing HEAD and intermeidate-result-blocked-approach
--------------------
Benchmark clickbench_extended.json
--------------------
┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Query        ┃       HEAD ┃ intermeidate-result-blocked-approach ┃        Change ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ QQuery 0     │  1940.88ms │                            1976.66ms │     no change │
│ QQuery 1     │   760.85ms │                             766.84ms │     no change │
│ QQuery 2     │  1501.65ms │                            1542.77ms │     no change │
│ QQuery 3     │   719.82ms │                             727.87ms │     no change │
│ QQuery 4     │  1484.50ms │                            1494.48ms │     no change │
│ QQuery 5     │ 15426.35ms │                           15206.88ms │     no change │
│ QQuery 6     │  2069.76ms │                            2128.21ms │     no change │
│ QQuery 7     │  2708.23ms │                            2071.61ms │ +1.31x faster │
└──────────────┴────────────┴──────────────────────────────────────┴───────────────┘
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃ Benchmark Summary                                   ┃            ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
│ Total Time (HEAD)                                   │ 26612.04ms │
│ Total Time (intermeidate-result-blocked-approach)   │ 25915.33ms │
│ Average Time (HEAD)                                 │  3326.50ms │
│ Average Time (intermeidate-result-blocked-approach) │  3239.42ms │
│ Queries Faster                                      │          1 │
│ Queries Slower                                      │          0 │
│ Queries with No Change                              │          7 │
└─────────────────────────────────────────────────────┴────────────┘
--------------------
Benchmark clickbench_partitioned.json
--------------------
┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Query        ┃       HEAD ┃ intermeidate-result-blocked-approach ┃        Change ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ QQuery 0     │     2.25ms │                               2.20ms │     no change │
│ QQuery 1     │    36.40ms │                              39.03ms │  1.07x slower │
│ QQuery 2     │    90.29ms │                              91.70ms │     no change │
│ QQuery 3     │    99.54ms │                             102.47ms │     no change │
│ QQuery 4     │   803.59ms │                             611.45ms │ +1.31x faster │
│ QQuery 5     │   874.76ms │                             860.26ms │     no change │
│ QQuery 6     │     2.29ms │                               2.29ms │     no change │
│ QQuery 7     │    43.79ms │                              45.26ms │     no change │
│ QQuery 8     │   902.84ms │                             913.05ms │     no change │
│ QQuery 9     │  1166.55ms │                            1218.38ms │     no change │
│ QQuery 10    │   265.93ms │                             276.09ms │     no change │
│ QQuery 11    │   307.22ms │                             315.94ms │     no change │
│ QQuery 12    │   929.85ms │                             927.87ms │     no change │
│ QQuery 13    │  1362.99ms │                            1364.08ms │     no change │
│ QQuery 14    │   831.28ms │                             855.27ms │     no change │
│ QQuery 15    │  1027.92ms │                             841.50ms │ +1.22x faster │
│ QQuery 16    │  1764.73ms │                            1738.44ms │     no change │
│ QQuery 17    │  1595.54ms │                            1615.47ms │     no change │
│ QQuery 18    │  3163.56ms │                            3090.69ms │     no change │
│ QQuery 19    │    83.33ms │                              84.19ms │     no change │
│ QQuery 20    │  1148.16ms │                            1163.96ms │     no change │
│ QQuery 21    │  1335.22ms │                            1362.75ms │     no change │
│ QQuery 22    │  2173.36ms │                            2251.51ms │     no change │
│ QQuery 23    │  8305.24ms │                            8570.07ms │     no change │
│ QQuery 24    │   481.58ms │                             478.64ms │     no change │
│ QQuery 25    │   400.16ms │                             404.96ms │     no change │
│ QQuery 26    │   542.87ms │                             547.19ms │     no change │
│ QQuery 27    │  1569.96ms │                            1610.78ms │     no change │
│ QQuery 28    │ 12626.33ms │                           12723.51ms │     no change │
│ QQuery 29    │   544.45ms │                             527.61ms │     no change │
│ QQuery 30    │   820.33ms │                             839.81ms │     no change │
│ QQuery 31    │   856.68ms │                             896.19ms │     no change │
│ QQuery 32    │  2686.42ms │                            2661.76ms │     no change │
│ QQuery 33    │  3404.03ms │                            3380.42ms │     no change │
│ QQuery 34    │  3384.26ms │                            3372.65ms │     no change │
│ QQuery 35    │  1279.60ms │                            1305.74ms │     no change │
│ QQuery 36    │   127.62ms │                             129.44ms │     no change │
│ QQuery 37    │    58.72ms │                              57.92ms │     no change │
│ QQuery 38    │   127.53ms │                             127.19ms │     no change │
│ QQuery 39    │   200.45ms │                             206.16ms │     no change │
│ QQuery 40    │    52.11ms │                              50.59ms │     no change │
│ QQuery 41    │    45.71ms │                              48.66ms │  1.06x slower │
│ QQuery 42    │    40.79ms │                              40.32ms │     no change │
└──────────────┴────────────┴──────────────────────────────────────┴───────────────┘
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃ Benchmark Summary                                   ┃            ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
│ Total Time (HEAD)                                   │ 57566.23ms │
│ Total Time (intermeidate-result-blocked-approach)   │ 57753.47ms │
│ Average Time (HEAD)                                 │  1338.75ms │
│ Average Time (intermeidate-result-blocked-approach) │  1343.10ms │
│ Queries Faster                                      │          2 │
│ Queries Slower                                      │          2 │
│ Queries with No Change                              │         39 │
└─────────────────────────────────────────────────────┴────────────┘
--------------------
Benchmark tpch_mem_sf1.json
--------------------
┏━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━┓
┃ Query        ┃     HEAD ┃ intermeidate-result-blocked-approach ┃    Change ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━┩
│ QQuery 1     │ 123.23ms │                             128.90ms │ no change │
│ QQuery 2     │  24.21ms │                              23.25ms │ no change │
│ QQuery 3     │  34.68ms │                              34.89ms │ no change │
│ QQuery 4     │  20.63ms │                              20.65ms │ no change │
│ QQuery 5     │  55.82ms │                              55.13ms │ no change │
│ QQuery 6     │  12.26ms │                              12.21ms │ no change │
│ QQuery 7     │ 104.79ms │                             105.61ms │ no change │
│ QQuery 8     │  26.97ms │                              26.69ms │ no change │
│ QQuery 9     │  61.07ms │                              61.85ms │ no change │
│ QQuery 10    │  56.94ms │                              57.98ms │ no change │
│ QQuery 11    │  12.90ms │                              12.63ms │ no change │
│ QQuery 12    │  46.09ms │                              44.05ms │ no change │
│ QQuery 13    │  29.68ms │                              30.13ms │ no change │
│ QQuery 14    │  10.47ms │                              10.19ms │ no change │
│ QQuery 15    │  26.18ms │                              25.38ms │ no change │
│ QQuery 16    │  23.65ms │                              23.74ms │ no change │
│ QQuery 17    │  97.76ms │                             101.06ms │ no change │
│ QQuery 18    │ 243.09ms │                             240.17ms │ no change │
│ QQuery 19    │  27.98ms │                              27.06ms │ no change │
│ QQuery 20    │  39.30ms │                              39.33ms │ no change │
│ QQuery 21    │ 169.96ms │                             166.55ms │ no change │
│ QQuery 22    │  18.31ms │                              18.76ms │ no change │
└──────────────┴──────────┴──────────────────────────────────────┴───────────┘
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━┓
┃ Benchmark Summary                                   ┃           ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━┩
│ Total Time (HEAD)                                   │ 1265.96ms │
│ Total Time (intermeidate-result-blocked-approach)   │ 1266.21ms │
│ Average Time (HEAD)                                 │   57.54ms │
│ Average Time (intermeidate-result-blocked-approach) │   57.56ms │
│ Queries Faster                                      │         0 │
│ Queries Slower                                      │         0 │
│ Queries with No Change                              │        22 │
└─────────────────────────────────────────────────────┴───────────┘

@Rachelint
Contributor Author

Rachelint commented May 6, 2025

🤖: Benchmark completed

@alamb I think the faster queries here are mainly due to the hash-reusing optimization.
I have submitted a new PR here:
#15962

But I also have some other ideas for this PR.
The general thought is to reduce the extra cost introduced by the blocked approach, and to try to keep only its benefits.
I will try it later today.

@alamb alamb marked this pull request as draft May 6, 2025 12:35
@alamb
Contributor

alamb commented May 6, 2025

Marking as draft as @Rachelint works on the next set of things

@Rachelint Rachelint force-pushed the intermeidate-result-blocked-approach branch from c42454e to d41f99e Compare May 8, 2025 06:33
@Rachelint
Contributor Author

Rachelint commented May 8, 2025

The current benchmark results on my 16-core production machine:

  • target_partitions = 8 (amazing 1.61x faster)
// main
Query 7 iteration 0 took 3724.4 ms and returned 10 rows
Query 7 iteration 1 took 3668.5 ms and returned 10 rows
Query 7 iteration 2 took 4003.0 ms and returned 10 rows
Query 7 iteration 3 took 3677.8 ms and returned 10 rows
Query 7 iteration 4 took 3912.1 ms and returned 10 rows
Query 7 avg time: 3797.17 ms

// blocked
Query 7 iteration 0 took 2278.9 ms and returned 10 rows
Query 7 iteration 1 took 2123.4 ms and returned 10 rows
Query 7 iteration 2 took 2400.0 ms and returned 10 rows
Query 7 iteration 3 took 2315.6 ms and returned 10 rows
Query 7 iteration 4 took 2716.3 ms and returned 10 rows
Query 7 avg time: 2366.87 ms
  • target_partitions = 16 (also 1.35x faster)
// main
Query 7 iteration 0 took 1955.1 ms and returned 10 rows
Query 7 iteration 1 took 2002.4 ms and returned 10 rows
Query 7 iteration 2 took 1762.5 ms and returned 10 rows
Query 7 iteration 3 took 1805.2 ms and returned 10 rows
Query 7 iteration 4 took 1758.7 ms and returned 10 rows
Query 7 avg time: 1856.78 ms

// blocked
Query 7 iteration 0 took 1392.8 ms and returned 10 rows
Query 7 iteration 1 took 1357.3 ms and returned 10 rows
Query 7 iteration 2 took 1378.4 ms and returned 10 rows
Query 7 iteration 3 took 1376.6 ms and returned 10 rows
Query 7 iteration 4 took 1367.1 ms and returned 10 rows
Query 7 avg time: 1374.47 ms
  • target_partitions = 32 (still 1.10x faster)
// main
Query 7 iteration 0 took 1471.3 ms and returned 10 rows
Query 7 iteration 1 took 1350.5 ms and returned 10 rows
Query 7 iteration 2 took 1361.4 ms and returned 10 rows
Query 7 iteration 3 took 1393.6 ms and returned 10 rows
Query 7 iteration 4 took 1427.4 ms and returned 10 rows
Query 7 avg time: 1400.82 ms

// blocked
Query 7 iteration 0 took 1289.6 ms and returned 10 rows
Query 7 iteration 1 took 1281.9 ms and returned 10 rows
Query 7 iteration 2 took 1293.4 ms and returned 10 rows
Query 7 iteration 3 took 1258.8 ms and returned 10 rows
Query 7 iteration 4 took 1279.4 ms and returned 10 rows
Query 7 avg time: 1280.62 ms

@Rachelint
Contributor Author

BTW, I also simplified the code, although it does not help performance.

For example NullState: I found we actually don't need to introduce the blocked approach for it (doing so even leads to a slight regression).
I removed the related code to keep it nearly unchanged.

@Rachelint Rachelint force-pushed the intermeidate-result-blocked-approach branch from dc94961 to 9d0b73b Compare May 8, 2025 19:07
@Rachelint Rachelint force-pushed the intermeidate-result-blocked-approach branch from 6ce6a7f to 7f529b9 Compare May 8, 2025 19:55
Labels
common (Related to common crate), core (Core DataFusion crate), documentation (Improvements or additions to documentation), functions (Changes to functions implementation), logical-expr (Logical plan and expressions), physical-expr (Changes to the physical-expr crates), sqllogictest (SQL Logic Tests (.slt))