AI security testing tools can be divided into *general-purpose* tools, which can run a variety of adversarial attacks in the image domain or at the feature level of any model, and *domain-specific* tools, which enable security testing directly on the input source.
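As a concrete illustration of the feature-level evasion attacks these tools automate, here is a minimal one-step FGSM-style sketch against a toy linear classifier. Everything below (the model, names, and numbers) is illustrative and not tied to any specific library's API:

```python
import numpy as np

def fgsm_linear(w, x, y, eps):
    """One-step FGSM against a linear model f(x) = w.x.

    For a label y in {+1, -1} and margin loss L = -y * (w.x),
    the gradient w.r.t. x is -y * w, so the attack steps along
    eps * sign(-y * w) to maximize the loss inside an L_inf ball.
    """
    grad = -y * w
    return x + eps * np.sign(grad)

w = np.array([1.0, -2.0])                  # toy linear classifier
x = np.array([0.5, 0.1])                   # w.x = 0.3 > 0  -> class +1
x_adv = fgsm_linear(w, x, y=1, eps=0.5)
print(np.sign(w @ x), np.sign(w @ x_adv))  # prints: 1.0 -1.0
```

The libraries below implement far stronger iterative and adaptive versions of this idea, for deep models rather than linear ones.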
## General-purpose tools

**Adversarial Library**

- A powerful library of various adversarial attack resources in PyTorch. It contains efficient implementations of several state-of-the-art attacks, at the expense of a less OOP-structured design.
- Tool Link: [Adversarial Library on GitHub](https://github.com/jeromerony/adversarial-library)

**Foolbox**

- Tool for creating adversarial examples and evaluating model robustness, compatible with PyTorch, TensorFlow, and JAX.
- Tool Link: [Foolbox on GitHub](https://github.com/bethgelab/foolbox)

**SecML-Torch**

- Tool for evaluating the adversarial robustness of deep learning models. Based on PyTorch, it includes debugging functionalities and interfaces to customize attacks and conduct trustworthy security evaluations.
- Tool Link: [SecML-Torch on GitHub](https://github.com/pralab/secml-torch)
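A common workflow with these libraries is to sweep a range of perturbation budgets and report *robust accuracy*: the fraction of points still classified correctly under a worst-case perturbation of bounded size. For a linear model the worst case is available in closed form, so the whole computation fits in a short self-contained sketch (pure NumPy; the model and data are made up for illustration, and this is not any tool's actual API):

```python
import numpy as np

def robust_accuracy_linear(w, b, X, y, eps):
    """Exact robust accuracy of a linear model under L_inf perturbations.

    For f(x) = w.x + b, the worst-case perturbation of size eps shrinks
    the signed margin y * (w.x + b) by exactly eps * ||w||_1.
    """
    margins = y * (X @ w + b) - eps * np.abs(w).sum()
    return float((margins > 0).mean())

w = np.array([1.0, -1.0])
b = 0.0
X = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, -2.0], [-0.2, 0.1]])
y = np.array([1, -1, 1, -1])
for eps in (0.0, 0.2, 0.5):
    # robust accuracy drops as the budget grows: 1.0, 0.75, 0.25
    print(eps, robust_accuracy_linear(w, b, X, y, eps))
```

For deep models no closed form exists, which is exactly why the attack libraries above approximate the worst case with gradient-based search.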
## Domain-specific tools
**Maltorch**

- Python library for computing security evaluations of Windows malware detectors implemented in PyTorch. The library contains most of the attacks proposed in the literature, as well as pre-trained models that can be used to test them.
- Tool Link: [Maltorch on GitHub](https://github.com/zangobot/maltorch)
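Attacks on static malware detectors typically operate at the byte level, for example by appending padding bytes that shift the detector's score without altering the program's behavior. A toy sketch of that idea (both the "detector" and the attack here are invented for illustration and have nothing to do with Maltorch's actual API):

```python
import numpy as np

def toy_score(payload):
    """Stand-in detector: fraction of high bytes (>= 0x80) in the input."""
    arr = np.frombuffer(bytes(payload), dtype=np.uint8)
    return float((arr >= 0x80).mean())

def padding_attack(payload, threshold, pad_byte=0x00, max_pad=1024):
    """Append padding bytes until the score drops below the threshold.

    Bytes appended past the end of a well-formed file do not change what
    the program does when executed, but they dilute its byte statistics.
    """
    padded = bytearray(payload)
    while toy_score(padded) >= threshold and len(padded) - len(payload) < max_pad:
        padded.append(pad_byte)
    return bytes(padded)

sample = bytes([0x90] * 60 + [0x10] * 40)  # 60% "suspicious" bytes
adv = padding_attack(sample, threshold=0.5)
print(toy_score(sample), toy_score(adv))   # 0.6 vs. a value below 0.5
```

Real attacks are more surgical (header fields, slack space, section injection), but the evasion principle is the same.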
**Waf-a-MoLE**

- Python library for computing adversarial SQL injections against Web Application Firewalls.
- Tool Link: [Waf-a-MoLE on GitHub](https://github.com/AvalZ/WAF-A-MoLE)
**TextAttack**

- Python framework specifically designed to evaluate and enhance the adversarial robustness of NLP models.
- Tool Link: [TextAttack on GitHub](https://github.com/QData/TextAttack)
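Word-level attacks of this kind typically search over meaning-preserving word substitutions until the model's prediction flips. A self-contained toy version against a trivial keyword classifier (everything here, including the classifier and the synonym table, is invented for illustration and is not the TextAttack API):

```python
def keyword_classifier(text):
    """Stand-in sentiment model: counts hard-coded polarity words."""
    neg = {"terrible", "awful", "bad"}
    pos = {"great", "good", "fine"}
    words = text.lower().split()
    score = sum(w in pos for w in words) - sum(w in neg for w in words)
    return "positive" if score >= 0 else "negative"

SYNONYMS = {"terrible": ["dreadful"], "awful": ["atrocious"], "bad": ["poor"]}

def greedy_substitute(text, model, target):
    """Swap words for synonyms, left to right, until the model outputs target."""
    words = text.split()
    for i, w in enumerate(words):
        if model(" ".join(words)) == target:
            break  # prediction already flipped
        for cand in SYNONYMS.get(w.lower(), []):
            # toy greedy step: accept the first available synonym swap
            words = words[:i] + [cand] + words[i + 1:]
            break
    return " ".join(words)

adv = greedy_substitute("a terrible and awful movie", keyword_classifier, "positive")
print(keyword_classifier(adv), "->", adv)  # positive -> a dreadful and atrocious movie
```

Real frameworks replace the hard-coded synonym table with embedding-based or masked-language-model substitutions and add semantic-similarity constraints on the result.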
## Jack-of-all-trades
**Adversarial Robustness Toolbox (ART)**

- Framework for adversarial attack generation, detection, and mitigation for AI models.
- Tool Link: [Adversarial Robustness Toolbox on GitHub](https://github.com/Trusted-AI/adversarial-robustness-toolbox)

## Inactive tools

We also list some libraries that were widely used in past years but are now inactive, unmaintained, and likely buggy.
**CleverHans**

- Library for computing adversarial evasion attacks against models deployed in PyTorch, TensorFlow/Keras, and JAX.
- Tool Link: [CleverHans on GitHub](https://github.com/cleverhans-lab/cleverhans)
**DeepSec** (bugged)

- Security evaluation toolkit focused on deep learning models for adversarial example detection and defense. It has been strongly criticized as buggy, as is visible from the (still) open [issues](https://github.com/ryderling/DEEPSEC/issues).
- Tool Link: [DeepSec on GitHub](https://github.com/ryderling/DEEPSEC)