
deel-torchlip is an open-source Python API for building and training Lipschitz neural networks. It is built on top of PyTorch.
Explore deel-torchlip docs »
deel-torchlip provides:
- Easy-to-use Lipschitz layers -- deel-torchlip layers are custom PyTorch layers and are very user-friendly. No need to be an expert in Lipschitz networks!
- Custom losses for robustness -- The provided losses help improve adversarial robustness in classification tasks by increasing the margins between the network outputs (see our paper for more information).
- Certified robustness -- A key advantage of Lipschitz networks is the near-costless computation of robustness certificates: radii around each input within which no adversarial perturbation can change the prediction.
For TensorFlow/Keras users, we released the deel-lip package offering a similar implementation based on Keras.
- 📚 Table of contents
- 🔥 Tutorials
- 🚀 Quick Start
- 📦 What's Included
- 👍 Contributing
- 👀 See Also
- 🙏 Acknowledgments
- 👨‍🎓 Creator
- 🗞️ Citation
- 📝 License
We propose some tutorials to get familiar with the library and its API:
The latest release can be installed using `pip`. The `torch` package will also be installed as a dependency. If `torch` is already present, be sure that its version is compatible with the deel-torchlip version.

```console
$ pip install deel-torchlip
```
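If the installation went well, the package should be importable. A quick sanity check (the `__version__` attribute is assumed to be exposed by the package):

```python
# Quick import check; __version__ is assumed to exist on the package.
from deel import torchlip

print(torchlip.__version__)
```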
Creating a Lipschitz network is similar to building a PyTorch model: standard layers are replaced with their Lipschitz counterparts from deel-torchlip. PyTorch layers that are already Lipschitz, such as `torch.nn.ReLU()` or `torch.nn.Flatten()`, can still be used in Lipschitz networks.
```python
import torch
from deel import torchlip

# Build a Lipschitz network with 4 layers, that can be used in a training loop,
# like any torch.nn.Sequential network
model = torchlip.Sequential(
    torchlip.SpectralConv2d(
        in_channels=3, out_channels=16, kernel_size=(3, 3), padding="same"
    ),
    torchlip.GroupSort2(),
    torch.nn.Flatten(),
    # in_features must match the flattened conv output (16 * H * W);
    # 16 * 32 * 32 assumes 32x32 RGB inputs (e.g. CIFAR-10)
    torchlip.SpectralLinear(16 * 32 * 32, 64),
)
```
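Once built, the model behaves like any PyTorch module. The snippet below is a minimal sketch, not part of the library API: it forwards a random batch (shapes assume the 32x32 inputs used above) and derives certified robustness radii from the output margins, using the fact that for a 1-Lipschitz network no ℓ2 perturbation smaller than (top1 − top2)/√2 can change the predicted class.

```python
# Minimal sanity check (assumes 32x32 RGB inputs, matching the model above)
x = torch.randn(8, 3, 32, 32)
logits = model(x)  # shape (8, 64)

# For a 1-Lipschitz network, the gap between the two largest logits yields a
# certified l2 radius for each sample: (top1 - top2) / sqrt(2).
top2 = torch.topk(logits, k=2, dim=1).values
certificates = (top2[:, 0] - top2[:, 1]) / (2 ** 0.5)
print(certificates)
```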
The `deel-torchlip` library proposes 1-Lipschitz layers equivalent to the `torch.nn` ones.
| `torch.nn` | 1-Lipschitz? | deel-torchlip equivalent | Comments |
| --- | --- | --- | --- |
| `torch.nn.Linear` | no | `SpectralLinear`<br>`FrobeniusLinear` | `SpectralLinear` and `FrobeniusLinear` are similar when there is a single output. |
| `torch.nn.Conv2d` | no | `SpectralConv2d`<br>`FrobeniusConv2d` | `SpectralConv2d` also implements Björck normalization. |
| `torch.nn.Conv1d` | no | `SpectralConv1d` | `SpectralConv1d` also implements Björck normalization. |
| `MaxPooling`<br>`GlobalMaxPooling` | yes | n/a | |
| `torch.nn.AvgPool2d`<br>`torch.nn.AdaptiveAvgPool2d` | no | `ScaledAvgPool2d`<br>`ScaledAdaptiveAvgPool2d`<br>`ScaledL2NormPool2d`<br>`ScaledAdaptativeL2NormPool2d` | The Lipschitz constant is bounded by `sqrt(pool_h * pool_w)`. |
| `Flatten` | yes | n/a | |
| `torch.nn.ConvTranspose2d` | no | `SpectralConvTranspose2d` | `SpectralConvTranspose2d` also implements Björck normalization. |
| `torch.nn.BatchNorm1d`<br>`torch.nn.BatchNorm2d`<br>`torch.nn.BatchNorm3d` | no | `BatchCentering` | This layer applies a bias based on batch statistics, but no normalization factor (1-Lipschitz). |
| `torch.nn.LayerNorm` | no | `LayerCentering` | This layer applies a bias based on per-sample statistics, but no normalization factor (1-Lipschitz). |
| Residual connections | no | `LipResidual` | Learns a factor for mixing the residual and a 1-Lipschitz branch. |
| `torch.nn.Dropout` | no | None | The Lipschitz constant is bounded by the dropout factor. |
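These layers are drop-in replacements for their `torch.nn` counterparts. As an informal, library-independent check (a sketch in plain PyTorch, not an official API), the 1-Lipschitz property can be probed empirically by comparing output and input distances on random pairs:

```python
import torch
from deel import torchlip

layer = torchlip.SpectralLinear(128, 64)

# A 1-Lipschitz layer satisfies ||f(x1) - f(x2)|| <= ||x1 - x2|| for all x1, x2.
x1, x2 = torch.randn(100, 128), torch.randn(100, 128)
with torch.no_grad():
    ratios = (layer(x1) - layer(x2)).norm(dim=1) / (x1 - x2).norm(dim=1)
print(ratios.max())  # should stay below 1, up to numerical tolerance
```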
The `deel-torchlip` library also proposes classification losses:
| Type | `torch.nn` | deel-torchlip equivalent | Comments |
| --- | --- | --- | --- |
| Binary classification | `torch.nn.BCEWithLogitsLoss` | `HKRLoss` | `alpha`: regularization factor in [0, 1] between the hinge and the KR losses; `min_margin`: minimal margin for the hinge loss. |
| Multiclass classification | `torch.nn.CrossEntropyLoss` | `HKRMulticlassLoss`<br>`SoftHKRMulticlassLoss` | `alpha`: regularization factor in [0, 1] between the hinge and the KR losses; `min_margin`: minimal margin for the hinge loss; `temperature` for the softmax computation (`SoftHKRMulticlassLoss` only). |
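A minimal training step with `HKRLoss` might look as follows. This is a sketch under assumptions: the `alpha` and `min_margin` values are arbitrary, the loss is assumed to be called as `criterion(input, target)`, and binary targets are assumed to be encoded in {-1, 1} as in the HKR formulation.

```python
import torch
from deel import torchlip

# Small illustrative binary classifier (sizes are arbitrary)
model = torchlip.Sequential(
    torchlip.SpectralLinear(10, 32),
    torchlip.GroupSort2(),
    torchlip.SpectralLinear(32, 1),
)
criterion = torchlip.HKRLoss(alpha=0.95, min_margin=1.0)  # illustrative values
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy data: binary labels encoded in {-1, 1}, as assumed by the HKR loss
x = torch.randn(64, 10)
y = (x.sum(dim=1, keepdim=True) > 0).float() * 2 - 1

# One training step, like any PyTorch model
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```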
Contributions are welcome! You can open an issue or fork this repository and propose a pull request. The development environment with all required dependencies can be installed by running:

```console
$ make prepare-dev
```
Code formatting and linting are performed with `black` and `flake8`. Tests are run with `pytest`. These three commands are gathered in:

```console
$ make test
```
Finally, commits should respect pre-commit hooks. To be sure that your code changes will be accepted, you can run the following target:

```console
$ make check_all
```
More from the DEEL project:
- Xplique, a Python library exclusively dedicated to explaining neural networks.
- deel-lip, a Python library for training k-Lipschitz neural networks with TensorFlow/Keras.
- Influenciae, a Python toolkit dedicated to computing influence values for the discovery of potentially problematic samples in a dataset.
- oodeel, a Python library for post-hoc deep out-of-distribution (OOD) detection on already-trained neural network image classifiers.
- the DEEL White paper, a summary by the DEEL team on the challenges of certifiable AI and the role of data quality, representativity, and explainability for this purpose.
The main contributors of the deel-torchlip library are:
This library was built to support the work presented in our CVPR 2021 paper *Achieving robustness in classification using optimal transport with hinge regularization*. If you use our library for your work, please cite our paper 😉
```bibtex
@misc{2006.06520,
  Author = {Mathieu Serrurier and Franck Mamalet and Alberto González-Sanz and Thibaut Boissin and Jean-Michel Loubes and Eustasio del Barrio},
  Title = {Achieving robustness in classification using optimal transport with hinge regularization},
  Year = {2020},
  Eprint = {arXiv:2006.06520},
}
```
The package is released under the MIT license.