Dynamic control over ninja parallelism, or generic dynamic features #416

@rgommers

Description

First, the specific feature we need in SciPy: set the number of parallel compile jobs for ninja to equal the number of available physical cores (n_phys). We have a number of reports of the ninja default setting, which is 2*n_phys + 2, either giving out-of-memory errors or outright crashing the build machine.

For local development this was not difficult to address (see scipy/scipy#18451), because SciPy has a developer interface where we can run custom code before invoking Meson. However, for pip install & co there is currently no way to do that. So a user installing some random package that happens to depend on SciPy may get a crash or hang that we can't easily prevent. Hardcoding, say, -j4 is not great either: no single value works on low-end machines without throwing away a lot of performance on high-end machines. The optimal setting seems to always be close to the number of physical cores.
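For illustration, a minimal sketch of the kind of pre-build logic involved, assuming psutil is available for physical-core detection (this is not the literal scipy/scipy#18451 code; the pip invocation uses meson-python's existing compile-args config setting):

```python
import os
import subprocess

try:
    import psutil  # assumption: psutil is used here for physical-core counts

    n_jobs = psutil.cpu_count(logical=False) or os.cpu_count() or 2
except ImportError:
    # Fall back to logical cores when psutil is unavailable.
    n_jobs = os.cpu_count() or 2

# Forward the value to ninja through meson-python's compile-args knob.
subprocess.run(
    ["pip", "install", ".", f"--config-settings=compile-args=-j{n_jobs}"],
    check=True,
)
```

This works for a developer driving the build themselves; the point of this issue is that nothing equivalent can run when pip builds the package as a dependency.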

So, ideally meson-python would expose a hook to package authors that can run arbitrary Python code and then set compile-args accordingly.

I also see a more general pattern here, with gh-159 being another example of the need to execute code first and then set some build-related property (the package version in that case). It's a very similar case, so rather than a specific solution for each of these two problems, we could consider a general mechanism for this kind of thing. Maybe a single Python function to execute, which must return a dict containing a pre-determined set of keys, including version, compile-args, and the other *-args knobs.
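To make that concrete, here is a hypothetical sketch of what such a hook could look like; the pyproject.toml table, key names, and function signature are all invented for illustration, since meson-python has no such hook today:

```python
# Hypothetical hook module, e.g. wired up from pyproject.toml as
#   [tool.meson-python]
#   dynamic-settings = "mypkg_build_hooks:settings"
# (the table name and key are invented for illustration).
import os


def settings() -> dict:
    """Return build settings computed at build time.

    The keys mirror the existing static knobs: 'version',
    'compile-args', and the other *-args settings.
    """
    n_jobs = os.cpu_count() or 2  # real code would prefer physical cores
    return {
        "compile-args": [f"-j{n_jobs}"],
        # "version": ...,  # e.g. for the gh-159 use case
    }
```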

Thoughts?
