
Scaling to large datasets #37

@rth

This issue aims to discuss scaling of PyKrige to large datasets (which could impact, for instance, the optimization approaches in issue #35).

Here are approximate (and possibly inaccurate) time complexity estimates for the different processing steps of the kriging process in 2D, according to these benchmarks (adapted from PR #36), applied to a 5k-10k point dataset which only has 2 measurement points for each parameter (a rough timing sketch follows the list below):

  • Calculation (training) of the kriging model: ~O(N_train²)
  • Prediction from a trained model (no moving window):
    • all backends: ~O(N_test*N_train^1.5)
  • Prediction from a trained model (with window):
    • loop and C backends: ~O(N_test*N_nn^(1~2)), where N_nn is the number of nearest neighbours in the moving window
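
For reproduction, here is a rough timing sketch along these lines. It is not the benchmark script from PR #36; it assumes the `pykrige.ok.OrdinaryKriging` API with the `backend` and `n_closest_points` keyword arguments, and the dataset sizes, variogram model and window size are arbitrary:

```python
import time

import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.RandomState(0)
n_test = 2000
x_test, y_test = rng.uniform(0, 1, (2, n_test))

for n_train in [500, 1000, 2000, 4000]:
    x, y = rng.uniform(0, 1, (2, n_train))
    z = np.sin(10 * x) + np.cos(10 * y) + 0.1 * rng.randn(n_train)

    t0 = time.time()
    ok = OrdinaryKriging(x, y, z, variogram_model="linear")  # "training" step
    t_train = time.time() - t0

    t0 = time.time()
    ok.execute("points", x_test, y_test, backend="vectorized")  # no window
    t_pred = time.time() - t0

    t0 = time.time()
    ok.execute("points", x_test, y_test, backend="loop", n_closest_points=50)
    t_window = time.time() - t0

    print("N_train=%5d  train=%6.2fs  predict=%6.2fs  predict(window)=%6.2fs"
          % (n_train, t_train, t_pred, t_window))
```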

For reference, these can be compared with the approximate time complexities of the underlying linear algebra operations that may limit performance: ~O(n³) for a dense linear solve or matrix-matrix product and ~O(n²) for a matrix-vector product (though the constant factors would be quite different).
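
As a quick sanity check on those exponents, the scaling of a dense solve (the kind of LAPACK call that can dominate the training step) can be estimated empirically; this is purely an illustrative sketch with arbitrary sizes:

```python
import time

import numpy as np


def solve_time(n, repeats=3):
    rng = np.random.RandomState(0)
    a = rng.rand(n, n) + n * np.eye(n)  # diagonally dominant, well conditioned
    b = rng.rand(n)
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        np.linalg.solve(a, b)
        times.append(time.perf_counter() - t0)
    return min(times)


n1, n2 = 1000, 2000
t1, t2 = solve_time(n1), solve_time(n2)
# fit t ~ c * n**p through the two measurements
p = np.log(t2 / t1) / np.log(n2 / n1)
print("empirical exponent p ~ %.2f" % p)
```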

This may be of interest to @kvanlombeek and @basaks, as discussed in issue #29. The training part indeed doesn't scale well with the dataset size and also affects the prediction time. The total run time for the attached benchmarks is 48 min wall time and 187 min CPU time (on a 4-core CPU), so most of the critical operations do take advantage of a multi-threaded BLAS for the linear algebra.
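
To see how much of that wall-time/CPU-time gap comes from BLAS threading (and to time a single-threaded run for comparison), something like the following can be used; it assumes the third-party threadpoolctl package is installed:

```python
import numpy as np
from threadpoolctl import threadpool_info, threadpool_limits

# report which BLAS numpy is linked against and how many threads it uses
for lib in threadpool_info():
    print(lib["internal_api"], lib.get("version"), "num_threads:", lib["num_threads"])

a = np.random.rand(3000, 3000)
b = np.random.rand(3000)
with threadpool_limits(limits=1):  # force single-threaded BLAS for comparison
    np.linalg.solve(a, b)
```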

Any suggestions on how we could improve scaling (or general performance) are very welcome.
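
One direction that might help on the prediction side (and matches the ~O(N_test*N_nn^(1~2)) line above) is to build the moving window with a k-d tree, so that each local kriging system has a size independent of N_train. A hypothetical standalone sketch, not PyKrige's internal implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.RandomState(0)
xy_train = rng.uniform(0, 1, (10000, 2))
xy_test = rng.uniform(0, 1, (2000, 2))

tree = cKDTree(xy_train)                 # ~O(N_train log N_train) build
n_nn = 50
dist, idx = tree.query(xy_test, k=n_nn)  # ~O(N_test log N_train) lookup

# idx[i] holds the indices of the 50 training points that would be used to
# krige test point i; each local system is then (n_nn + 1) x (n_nn + 1)
# instead of (N_train + 1) x (N_train + 1).
print(idx.shape)  # (2000, 50)
```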
