LinuxPerf, @profile, and other experiments #377

@willow-ahrens

This issue is to document the various PRs surrounding LinuxPerf and other extensible benchmarking in BenchmarkTools. I've seen many great approaches, with various differences in semantics and interfaces. It seems that #375 profiles each eval loop (toggling on and off with a boolean), #347 is a generalized version of the same (it's unclear whether it can support more than one extension at a time, such as profiling and perfing together), and #325 only perfs a single execution.

I recognize that different experiments require different setups. A sampling profiler requires a warmup and a minimum runtime, but probably doesn't need fancy tuning. A wall-clock time benchmark requires a warmup and a fancy eval loop where the number of evaluations is tuned, and maybe a GC scrub. What does LinuxPerf actually need? Are there any other experiments we also want to run (other than LinuxPerf)? Do we need to use metaprogramming to inline the LinuxPerf calls, or are function calls wrapping the samplefunc sufficient here?
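To make the last question concrete, here is a minimal sketch of the two shapes I have in mind. The `perf_enable!`/`perf_disable!` hooks are hypothetical stand-ins for whatever LinuxPerf setup/teardown we settle on, not the real LinuxPerf.jl API, and `perf_sample`/`@perf_loop` are illustrative names only.

```julia
# Hypothetical counter hooks; in practice these would start/stop perf events.
perf_enable!() = nothing
perf_disable!() = nothing

# Option A: a plain function wrapping the samplefunc. The extension is just a
# higher-order function, so no metaprogramming is needed, but the wrapper call
# sits between the counters and the measured evals.
function perf_sample(samplefunc, evals)
    perf_enable!()
    result = samplefunc(evals)
    perf_disable!()
    return result
end

# Option B: metaprogramming that splices the hooks directly into the generated
# eval loop, so the counters bracket the loop with no extra call overhead.
macro perf_loop(evals, body)
    quote
        perf_enable!()
        for _ in 1:$(esc(evals))
            $(esc(body))
        end
        perf_disable!()
    end
end

# Both measure the same kernel; the open question is whether the extra call
# layer in Option A perturbs the counters enough to matter.
kernel() = sum(rand(100))
perf_sample(evals -> (for _ in 1:evals; kernel(); end), 1000)
@perf_loop 1000 kernel()
```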
@vchuravy @DilumAluthge @topolarity @Zentrik
