Excessive data retention on fast benchmarks #61

@jdmarshall

Description

I don't think tinybench's warmup time is anywhere near adequate for letting V8 settle (optimize) the code under test. 'Solving' that problem by deoptimizing tinybench is fighting the wrong fight.

In an attempt to get more accurate data out of tinybench, I've been using `time: 2000`, or in some cases `6000`, for some test suites.

The problem is that since some leaf-node tests run 2, 3, even 4 orders of magnitude faster than the more complex tests, some of my tests generate thousands of samples while others generate several million. The extra memory pressure is causing weird race conditions in the code under test.

Now, while switching the telemetry calculation to an algorithm that keeps running totals in O(k) or O(log n) memory is probably a lot to ask, retaining the samples for task A while running task C is a waste of resources.
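For what it's worth, constant-memory running totals are a well-known technique (Welford's online algorithm). A minimal sketch, not tinybench's actual code, of an accumulator that yields mean, standard deviation, min, and max without retaining any samples:

```javascript
// Welford's online algorithm: updates mean and sum of squared deviations
// incrementally, so memory stays O(1) no matter how many samples arrive.
class RunningStats {
  constructor() {
    this.n = 0;
    this.mean = 0;
    this.m2 = 0; // running sum of squared deviations from the mean
    this.min = Infinity;
    this.max = -Infinity;
  }

  push(x) {
    this.n += 1;
    const delta = x - this.mean;
    this.mean += delta / this.n;
    this.m2 += delta * (x - this.mean);
    if (x < this.min) this.min = x;
    if (x > this.max) this.max = x;
  }

  get variance() {
    // sample variance; 0 until we have at least two samples
    return this.n > 1 ? this.m2 / (this.n - 1) : 0;
  }

  get stdDev() {
    return Math.sqrt(this.variance);
  }
}
```

Quantiles are the one statistic this can't give exactly, but streaming approximations (e.g. t-digest) cover that in O(k) memory, which is where the "k or log n" caveat comes from.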

Expected behavior: disabling all but one task should produce the same success or failure modes in the code under test. Summaries should be calculated at the end of each task and the working data discarded.
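The summarize-then-discard behavior described above could look roughly like this. This is a hypothetical sketch, not tinybench internals; `task.samples` here stands in for whatever per-task sample array the runner collects:

```javascript
// Reduce a task's raw samples to a small summary object, then free the
// sample array so it can't pressure the heap while later tasks run.
function summarize(samples) {
  const n = samples.length;
  const sorted = [...samples].sort((a, b) => a - b);
  const mean = samples.reduce((sum, x) => sum + x, 0) / n;
  const variance =
    samples.reduce((sum, x) => sum + (x - mean) ** 2, 0) / (n - 1 || 1);
  return {
    n,
    mean,
    sd: Math.sqrt(variance),
    min: sorted[0],
    max: sorted[n - 1],
    p50: sorted[Math.floor(n / 2)],
  };
}

function finishTask(task) {
  task.summary = summarize(task.samples);
  task.samples.length = 0; // discard the working data before the next task
}
```

The point is that the multi-million-sample arrays only ever exist one task at a time, instead of accumulating across the whole run.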

Actual behavior: all raw samples are retained for the entire run.

Workaround: use the `iterations` option instead of `time`.
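Concretely, bounding each task by iteration count rather than wall-clock time keeps the sample counts (and therefore memory) comparable between fast and slow tasks. The option names below follow tinybench's documented constructor options; the values are illustrative, not a recommendation:

```javascript
// Fixed iteration counts: every task produces roughly the same number of
// samples, instead of fast tasks producing millions in the same time budget.
const benchOptions = {
  iterations: 1000,       // run each task a fixed number of times
  warmupIterations: 100,  // let V8 optimize before measurement starts
};
// would be passed as e.g. `new Bench(benchOptions)` — the import is omitted
// here to keep this fragment self-contained
```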
