examples/gaussian_processes/HSGP-Basic.myst.md (+24 −15)
@@ -5,9 +5,9 @@ jupytext:
     format_name: myst
     format_version: 0.13
 kernelspec:
-  display_name: pymc-examples
+  display_name: pymc-dev
   language: python
-  name: pymc-examples
+  name: pymc-dev
 ---
 
 (hsgp)=
@@ -58,6 +58,12 @@ import matplotlib.pyplot as plt
 import numpy as np
 import pymc as pm
 import pytensor.tensor as pt
+
+# Sample on the CPU
+%env CUDA_VISIBLE_DEVICES=''
+# import jax
+# import numpyro
+# numpyro.set_host_device_count(6)
 ```
 
 ```{code-cell} ipython3
@@ -325,12 +331,12 @@ In practice, you'll need to infer the lengthscale from the data, so the HSGP nee
 For example, if you're using the `Matern52` covariance and your data ranges from $x=-5$ to $x=95$, and the bulk of your lengthscale prior is between $\ell=1$ and $\ell=50$, then the smallest recommended values are $m=543$ and $c=3.7$, as you can see below:
@@ -585,7 +594,7 @@ Before sampling and looking at the results, there are a few things to pay attent
 
 First, `prior_linearized` returns the eigenvector basis, `phi`, and the square root of the power spectrum at the eigenvalues, `sqrt_psd`. You have to construct the HSGP approximation from these. The following are the relevant lines of code, showing both the centered and non-centered parameterization.
     f = pm.Deterministic("f", phi @ (beta * sqrt_psd))
 ```
-where we use a $\text{Gamma}(\alpha=2, \beta=0.1)$ prior for $\nu$, which places around 50% probability that $\nu > 30$, the point where a Student-T roughly becomes indistinguishable from a Gaussian.
+where we use a $\text{Gamma}(\alpha=2, \beta=0.1)$ prior for $\nu$, which places around 50% probability that $\nu > 30$, the point where a Student-T roughly becomes indistinguishable from a Gaussian. See [this link](https://github.com/stan-dev/stan/wiki/prior-choice-recommendations#prior-for-degrees-of-freedom-in-students-t-distribution) for more information.
 
 +++
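To make the construction in that hunk concrete, here is a self-contained NumPy sketch (not the notebook's PyMC code) of the pieces the text names: the eigenvector basis `phi`, the square root of the power spectral density at the eigenvalues `sqrt_psd`, and the centered combination `f = phi @ (beta * sqrt_psd)`. The Matérn-5/2 spectral density formula is the standard one from the HSGP literature (Riutort-Mayol et al.); variable names mirror the diff but the rest is illustrative.

```python
import numpy as np

def matern52_psd(omega, ell, eta):
    # 1D power spectral density of the Matern 5/2 kernel
    return eta**2 * (16 / 3) * 5**2.5 / ell**5 * (5 / ell**2 + omega**2) ** -3

def hsgp_basis(x, m, L):
    # Laplace eigenfunctions on [-L, L]; x is assumed centered
    j = np.arange(1, m + 1)
    sqrt_lam = j * np.pi / (2 * L)  # square roots of the eigenvalues
    phi = np.sqrt(1 / L) * np.sin(sqrt_lam * (x[:, None] + L))
    return phi, sqrt_lam

rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 200)
m, c = 100, 1.5                      # basis size and boundary factor
L = c * np.max(np.abs(x))
phi, sqrt_lam = hsgp_basis(x, m, L)
sqrt_psd = np.sqrt(matern52_psd(sqrt_lam, ell=1.0, eta=1.0))
beta = rng.standard_normal(m)        # standard-normal coefficients
f = phi @ (beta * sqrt_psd)          # one draw from the approximate GP prior
```

In the notebook's centered parameterization this last line is wrapped in `pm.Deterministic`, with `beta` a standard-normal random variable instead of a fixed draw.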
@@ -639,7 +648,7 @@ az.plot_trace(
 );
 ```
 
-Sampling went great, but, interestingly, we seem to have a bias in the model, for `eta`, `ell` and `sigma`. It's not the focus of this notebook, but it'd be interesting to dive into this in a real use-case.
+Sampling went great, but, interestingly, we seem to have a bias in the posterior for `sigma`. It's not the focus of this notebook, but it'd be interesting to dive into this in a real use-case.