-
After fitting each MMM, you can use the ArviZ and PyMC model comparison functions:

```python
import arviz as az
import pymc as pm

from pymc_marketing.mmm import MMM

first_model: MMM
second_model: MMM

models: list[MMM] = [first_model, second_model]

# Attach pointwise log-likelihoods, which az.compare needs
for model in models:
    pymc_model: pm.Model = model.model
    with pymc_model:
        pm.compute_log_likelihood(model.idata)

df_compare = az.compare({
    f"model_{i}": model.idata
    for i, model in enumerate(models)
})
```

References:
Related to #352
-
@wd60622 Thanks for the sample code, it worked well. Just one question: how does this method compare to out-of-sample R^2 or MAPE? In my case, I have the luxury of being able to split my data into training and testing, but at the end of the day I need to use the model with the best performance on the most up-to-date data. Thanks!
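For the out-of-sample angle mentioned here, one common pattern is to hold out the most recent periods and score each model's predictions with MAPE. This is a generic sketch, not pymc-marketing API: `y_test` and the `predictions` dict are hypothetical arrays standing in for a hold-out split and each model's forecasts.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

# Hypothetical hold-out: the most recent periods of the target series
y_test = np.array([100.0, 120.0, 90.0, 110.0])

# Hypothetical out-of-sample predictions from each fitted model
predictions = {
    "model_0": np.array([98.0, 125.0, 88.0, 107.0]),
    "model_1": np.array([110.0, 100.0, 70.0, 130.0]),
}

scores = {name: mape(y_test, pred) for name, pred in predictions.items()}
best = min(scores, key=scores.get)  # lowest MAPE wins
```

Note the two criteria answer different questions: ELPD-LOO estimates predictive accuracy from the posterior without refitting, while hold-out MAPE directly measures error on data the model never saw, so they can disagree.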
-
Hi @ulfaslak,
Do you have any examples of the ArviZ LOO cross-validation you mentioned in the Q&A of your Time-Varying Coefficients in PyMC-Marketing talk?
You mentioned 10 samples and an in-sample test, but I'm not sure how to set it up and iterate through some parameters without it becoming a computationally expensive task.
Thanks!