🏗️ Export normalizing flows to standalone package Zuko
francois-rozet committed Oct 17, 2022
1 parent 8543612 commit 822c3cc
Showing 49 changed files with 475 additions and 2,976 deletions.
5 changes: 2 additions & 3 deletions .github/ISSUE_TEMPLATE/bug-report.md
@@ -1,8 +1,7 @@
 ---
 name: "🐛 Bug report"
-about: "Report a bug to help LAMPE improve"
+about: "Report an issue that seems like a bug"
 labels: bug
-
 ---

 ### Description
@@ -14,7 +13,7 @@ A clear description of what the bug is.
 A minimal working example demonstrating the current behavior.

 ```python
-from lampe.module import component
+from lampe import component
 ...
 y = component(x) # bug happens
 ```
1 change: 0 additions & 1 deletion .github/ISSUE_TEMPLATE/documentation.md
@@ -2,7 +2,6 @@
 name: "📝 Documentation"
 about: "Report an issue related to the documentation"
 labels: documentation
-
 ---

 ### Description
1 change: 0 additions & 1 deletion .github/ISSUE_TEMPLATE/feature-request.md
@@ -2,7 +2,6 @@
 name: "✨ Feature request"
 about: "Suggest a feature or improvement"
 labels: enhancement
-
 ---

 ### Description
14 changes: 7 additions & 7 deletions README.md
@@ -1,10 +1,10 @@
-<p align="center"><img src="https://raw.githubusercontent.com/francois-rozet/lampe/master/sphinx/images/banner.svg" width="100%"></p>
+![LAMPE's banner](https://raw.githubusercontent.com/francois-rozet/lampe/master/sphinx/images/banner.svg)
 
 # LAMPE
 
-`lampe` is a simulation-based inference (SBI) package that focuses on amortized estimation of posterior distributions, without relying on explicit likelihood functions; hence the name *Likelihood-free AMortized Posterior Estimation* (LAMPE). The package provides [PyTorch](https://pytorch.org) implementations of modern amortized simulation-based inference algorithms like neural ratio estimation (NRE), neural posterior estimation (NPE) and more. Similar to PyTorch, the philosophy of LAMPE is to avoid obfuscation and expose all components, from network architecture to optimizer, to the user such that they are free to modify or replace anything they like.
+LAMPE is a simulation-based inference (SBI) package that focuses on amortized estimation of posterior distributions, without relying on explicit likelihood functions; hence the name *Likelihood-free AMortized Posterior Estimation* (LAMPE). The package provides [PyTorch](https://pytorch.org) implementations of modern amortized simulation-based inference algorithms like neural ratio estimation (NRE), neural posterior estimation (NPE) and more. Similar to PyTorch, the philosophy of LAMPE is to avoid obfuscation and expose all components, from network architecture to optimizer, to the user such that they are free to modify or replace anything they like.
 
-As part of the inference pipeline, LAMPE provides components to efficiently [store and load data](lampe/data.py) from disk, [diagnose predictions](lampe/diagnostics.py) and [display results](lampe/plots.py) graphically. The package also implements [normalizing flows](lampe/nn/flows.py) from scratch in a way that is both easy to understand and extend.
+As part of the inference pipeline, `lampe` provides components to efficiently [store and load data](lampe/data.py) from disk, [diagnose predictions](lampe/diagnostics.py) and [display results](lampe/plots.py) graphically.
 
 ## Installation
 
@@ -20,10 +20,10 @@ Alternatively, if you need the latest features, you can install it from the repo
 pip install git+https://github.com/francois-rozet/lampe
 ```

-## Contributing
-
-If you have a question, an issue or would like to contribute, please read our [contributing guidelines](CONTRIBUTING.md).
-
 ## Documentation
 
 The documentation is made with [Sphinx](https://www.sphinx-doc.org) and [Furo](https://github.com/pradyunsg/furo) and is hosted at [francois-rozet.github.io/lampe](https://francois-rozet.github.io/lampe).
+
+## Contributing
+
+If you have a question, an issue or would like to contribute, please read our [contributing guidelines](CONTRIBUTING.md).
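For context on the pipeline this README describes, here is a minimal end-to-end training sketch. The toy `prior` and `simulator`, the problem dimensions, and the use of `NPELoss` are illustrative assumptions, not code from this commit:

```python
import torch

from lampe.data import JointLoader
from lampe.distributions import BoxUniform
from lampe.inference import NPE, NPELoss

# Toy problem with 3 parameters and 3 observables (assumed).
prior = BoxUniform(-torch.ones(3), torch.ones(3))

def simulator(theta: torch.Tensor) -> torch.Tensor:
    # A stochastic simulator, implicitly defining p(x | theta).
    return theta + 0.1 * torch.randn_like(theta)

loader = JointLoader(prior, simulator, batch_size=256)
estimator = NPE(3, 3)  # a posterior estimator q(theta | x)
loss = NPELoss(estimator)
optimizer = torch.optim.Adam(estimator.parameters(), lr=1e-3)

for step, (theta, x) in enumerate(loader):  # the loader is infinite
    optimizer.zero_grad()
    loss(theta, x).backward()
    optimizer.step()

    if step == 255:  # stop manually
        break
```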
49 changes: 24 additions & 25 deletions lampe/data.py
@@ -48,23 +48,16 @@ def __iter__(self) -> Iterator[Tuple[Tensor, Tensor]]:
             yield theta, x
 
 
-def JointLoader(
-    prior: Distribution,
-    simulator: Callable,
-    batch_size: int = 2**10,  # 1024
-    vectorized: bool = False,
-    numpy: bool = False,
-    **kwargs,
-) -> DataLoader:
-    r"""Creates a data loader of batched pairs :math:`(\theta, x)` generated by
-    a prior distribution :math:`p(\theta)` and a simulator.
+class JointLoader(DataLoader):
+    r"""Creates an infinite data loader of batched pairs :math:`(\theta, x)` generated
+    by a prior distribution :math:`p(\theta)` and a simulator.
 
     The simulator is a stochastic function taking (a vector of) parameters
     :math:`\theta`, in the form of a NumPy array or a PyTorch tensor, as input and
-    returning an observation :math:`x` as output, which (implicitly) defines a
+    returning an observation :math:`x` as output, which implicitly defines a
     likelihood distribution :math:`p(x | \theta)`. Together with the prior, they form
     a joint distribution :math:`p(\theta, x) = p(\theta) p(x | \theta)` from which
-    the pairs :math:`(\theta, x)` are independently drawn.
+    pairs :math:`(\theta, x)` are independently drawn.
 
     Arguments:
         prior: A prior distribution :math:`p(\theta)`.
@@ -74,26 +67,32 @@ def JointLoader(
         numpy: Whether the simulator requires NumPy or PyTorch inputs.
         kwargs: Keyword arguments passed to :class:`torch.utils.data.DataLoader`.
 
-    Returns:
-        An infinite data loader of batched pairs :math:`(\theta, x)`.
-
     Example:
-        >>> loader = joint_loader(prior, simulator, numpy=True, num_workers=4)
+        >>> loader = JointLoader(prior, simulator, numpy=True, num_workers=4)
         >>> for theta, x in loader:
         ...     theta, x = theta.cuda(), x.cuda()
         ...     something(theta, x)
     """
 
-    return DataLoader(
-        IterableJointDataset(
-            prior,
-            simulator,
-            batch_shape=(batch_size,) if vectorized else (),
-            numpy=numpy,
-        ),
-        batch_size=None if vectorized else batch_size,
-        **kwargs,
-    )
+    def __init__(
+        self,
+        prior: Distribution,
+        simulator: Callable,
+        batch_size: int = 2**10,  # 1024
+        vectorized: bool = False,
+        numpy: bool = False,
+        **kwargs,
+    ):
+        super().__init__(
+            IterableJointDataset(
+                prior,
+                simulator,
+                batch_shape=(batch_size,) if vectorized else (),
+                numpy=numpy,
+            ),
+            batch_size=None if vectorized else batch_size,
+            **kwargs,
+        )
 
 
 class H5Dataset(IterableDataset):
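As a complement to the docstring above, a sketch of how the `vectorized` and `numpy` options might be combined, assuming a toy NumPy simulator that processes a whole batch of parameters at once:

```python
import numpy as np
import torch

from lampe.data import JointLoader
from lampe.distributions import BoxUniform

def simulator(theta: np.ndarray) -> np.ndarray:
    # Batched NumPy simulator: theta has shape (batch_size, 3).
    return theta + 0.1 * np.random.standard_normal(theta.shape)

prior = BoxUniform(-torch.ones(3), torch.ones(3))

# vectorized=True hands the simulator a whole batch at once, and
# numpy=True converts tensors to arrays (and back) around the call.
loader = JointLoader(prior, simulator, batch_size=4, vectorized=True, numpy=True)

theta, x = next(iter(loader))
print(theta.shape, x.shape)  # torch.Size([4, 3]) torch.Size([4, 3])
```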
25 changes: 11 additions & 14 deletions lampe/diagnostics.py
@@ -15,9 +15,8 @@ def expected_coverage_mc(
     pairs: Iterable[Tuple[Tensor, Tensor]],
     n: int = 1024,
 ) -> Tuple[Tensor, Tensor]:
-    r"""Estimates the expected coverages of a posterior estimator :math:`q(\theta | x)`
-    over pairs :math:`(\theta^*, x^*)` drawn from the joint distribution
-    :math:`p(\theta, x)`.
+    r"""Estimates by Monte Carlo (MC) the expected coverages of a posterior estimator
+    :math:`q(\theta | x)` over pairs :math:`(\theta^*, x^*) \sim p(\theta, x)`.
 
     The expected coverage at credible level :math:`1 - \alpha` is the probability
     of :math:`\theta^*` to be included in the highest density region of total probability
@@ -36,12 +35,11 @@ def expected_coverage_mc(
     .. math:: P \big( \theta^* \in \Theta_{q(\theta | x^*)}(1 - \alpha) \big)
         = P(r^* \leq 1 - \alpha) .
 
-    In practice, Monte Carlo (MC) estimations of :math:`r^*` are used.
+    In practice, Monte Carlo estimations of :math:`r^*` are used.
 
     References:
-        Averting A Crisis In Simulation-Based Inference
-        (Hermans et al., 2021)
-        https://arxiv.org/abs/2110.06581
+        | Averting A Crisis In Simulation-Based Inference (Hermans et al., 2021)
+        | https://arxiv.org/abs/2110.06581
 
     Arguments:
         posterior: A posterior estimator :math:`q(\theta | x)`.
@@ -51,7 +49,7 @@
     Returns:
         A vector of increasing credible levels and their respective expected coverages.
 
-    Examples:
+    Example:
         >>> posterior = lampe.inference.NPE(3, 4)
         >>> testset = lampe.data.H5Dataset('test.h5')
         >>> levels, coverages = expected_coverage_mc(posterior.flow, testset)
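To make the estimation of :math:`r^*` concrete, here is a simplified sketch of the Monte Carlo computation the docstring describes, assuming `posterior(x)` returns a distribution object with `sample` and `log_prob` methods (an assumed interface, not the function's actual implementation):

```python
import torch

def mc_ranks(posterior, pairs, n=1024):
    # For each pair, r* is the proportion of posterior samples whose
    # density exceeds that of the true parameter theta*.
    ranks = []

    for theta_star, x_star in pairs:
        q = posterior(x_star)              # q(theta | x*)
        samples = q.sample((n,))           # theta ~ q(theta | x*)
        mask = q.log_prob(samples) >= q.log_prob(theta_star)
        ranks.append(mask.float().mean())  # r* in [0, 1]

    return torch.stack(ranks)

# The expected coverage at credible level 1 - alpha is then the
# empirical probability P(r* <= 1 - alpha), i.e. the ECDF of the ranks.
```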
@@ -84,13 +82,12 @@ def expected_coverage_ni(
     domain: Tuple[Tensor, Tensor],
     **kwargs,
 ) -> Tuple[Tensor, Tensor]:
-    r"""Estimates the expected coverages of a posterior estimator :math:`q(\theta | x)`
-    over pairs :math:`(\theta^*, x^*)` drawn from the joint distribution
-    :math:`p(\theta, x)`.
+    r"""Estimates by numerical integration (NI) the expected coverages of a posterior
+    estimator :math:`q(\theta | x)` over pairs :math:`(\theta^*, x^*) \sim p(\theta, x)`.
 
     Equivalent to :func:`expected_coverage_mc` but the proportions :math:`r^*` are
-    approximated by numerical integration (NI) over the domain, which is useful
-    when the posterior estimator can be evaluated but not be sampled from.
+    approximated by numerical integration over the domain, which is useful when the
+    posterior estimator can be evaluated but not be sampled from.
 
     Arguments:
         posterior: A posterior estimator :math:`\log q(\theta | x)`.
@@ -101,7 +98,7 @@
     Returns:
         A vector of increasing credible levels and their respective expected coverages.
 
-    Examples:
+    Example:
         >>> domain = (torch.zeros(3), torch.ones(3))
         >>> prior = lampe.distributions.BoxUniform(*domain)
         >>> ratio = lampe.inference.NRE(3, 4)
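For intuition, a sketch of how one such proportion :math:`r^*` could be approximated by numerical integration over a one-dimensional domain. The callable `log_prob(theta, x)` standing in for the estimator is an assumed interface:

```python
import torch

def ni_rank(log_prob, theta_star, x_star, domain, bins=64):
    # r* is the fraction of (normalized) posterior mass lying in the
    # region at least as dense as theta*.
    lower, upper = domain
    grid = torch.linspace(lower.item(), upper.item(), bins).unsqueeze(-1)

    log_q = log_prob(grid, x_star)  # log q(theta | x*) over the grid
    mass = torch.softmax(log_q.flatten(), dim=0)  # normalized cell masses
    denser = log_q.flatten() >= log_prob(theta_star, x_star)

    return mass[denser].sum()  # r* in [0, 1]
```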
[43 more changed files not shown]