
Waiting for runner's bench_func and bench_command functions to complete instead of receiving outputs individually? #141

Open

Description

@HunterAP23

I'm trying to use pyperf to benchmark some functions and save their results to a CSV file in my own format.
I'm using this example code from the documentation:

#!/usr/bin/env python3
import pyperf
import time


def func():
    time.sleep(0.001)

runner = pyperf.Runner()
runner.bench_func('sleep', func)

I want to benchmark the same function with multiprocessing.pool.Pool, multiprocessing.pool.ThreadPool, concurrent.futures.ProcessPoolExecutor, and concurrent.futures.ThreadPoolExecutor, varying parameters such as the number of CPU cores and the chunksize for the map functions (see the sketch below).
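
Here is roughly the parameter sweep I have in mind. This is only a sketch: run_pool, the pool classes I picked, and the worker/chunksize values are placeholders, and I'm assuming bench_func forwards extra positional arguments to the benchmarked callable:

#!/usr/bin/env python3
# Sketch of the intended sweep. run_pool and the parameter grid are
# placeholders; bench_func is assumed to pass the extra positional
# arguments through to the benchmarked callable.
import multiprocessing.pool
import time

import pyperf


def work(_):
    time.sleep(0.001)


def run_pool(pool_cls, processes, chunksize):
    # Build the pool, map the workload across it, then tear the pool down.
    with pool_cls(processes) as pool:
        pool.map(work, range(100), chunksize=chunksize)


if __name__ == "__main__":
    runner = pyperf.Runner()
    for pool_cls in (multiprocessing.pool.Pool, multiprocessing.pool.ThreadPool):
        for processes in (1, 2, 4):
            for chunksize in (1, 10):
                name = f"{pool_cls.__name__}-procs{processes}-chunk{chunksize}"
                runner.bench_func(name, run_pool, pool_cls, processes, chunksize)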

The issue is that storing the return value of runner.bench_func in a variable and printing that variable produces output like this:

<Benchmark 'python_startup' with 1 runs>
.<Benchmark 'python_startup' with 1 runs>
.<Benchmark 'python_startup' with 1 runs>
.<Benchmark 'python_startup' with 1 runs>
.<Benchmark 'python_startup' with 1 runs>
.<Benchmark 'python_startup' with 1 runs>
.<Benchmark 'python_startup' with 1 runs>
.<Benchmark 'python_startup' with 1 runs>
.<Benchmark 'python_startup' with 1 runs>
.<Benchmark 'python_startup' with 1 runs>
.<Benchmark 'python_startup' with 1 runs>
.<Benchmark 'python_startup' with 1 runs>
.<Benchmark 'python_startup' with 1 runs>
.<Benchmark 'python_startup' with 1 runs>
.<Benchmark 'python_startup' with 1 runs>
.<Benchmark 'python_startup' with 1 runs>
.<Benchmark 'python_startup' with 1 runs>
.<Benchmark 'python_startup' with 1 runs>
.<Benchmark 'python_startup' with 1 runs>
.<Benchmark 'python_startup' with 1 runs>
.<Benchmark 'python_startup' with 1 runs>
.
python_startup: Mean +- std dev: 24.5 ms +- 0.4 ms
<Benchmark 'python_startup' with 21 runs>

Instead, I want to suppress this output and wait for all runs to complete before moving on with the rest of the program.

Is there some other way of storing the results of a benchmark that I can't find in the documentation? Or is there a way to force the runner to wait for a benchmark to complete before returning?
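
Ideally I'd like to collect the returned Benchmark objects and only write my CSV once everything has finished, along the lines of the sketch below. I'm assuming the Benchmark methods get_name(), get_nrun(), mean() and stdev() here, and since my print statement apparently runs once per worker process, I expect the file write could also execute inside pyperf's worker subprocesses:

#!/usr/bin/env python3
# Sketch only: collect the Benchmark objects returned by bench_func and write
# the CSV after every call has returned. Benchmark.get_name(), get_nrun(),
# mean() and stdev() are assumed; the write below may also run in pyperf's
# worker subprocesses, since the whole script appears to be re-executed there.
import csv
import time

import pyperf


def func():
    time.sleep(0.001)


if __name__ == "__main__":
    runner = pyperf.Runner()
    results = []
    for name in ("sleep-1", "sleep-2"):
        bench = runner.bench_func(name, func)
        if bench is not None:  # defensive check; assuming bench_func may return None in some modes
            results.append(bench)

    # Write one CSV row per benchmark after all bench_func calls have returned.
    with open("results.csv", "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["name", "runs", "mean_s", "stdev_s"])
        for bench in results:
            writer.writerow([bench.get_name(), bench.get_nrun(), bench.mean(), bench.stdev()])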
