
Commit c0c38b5

Merge pull request #32 from zangobot/main
Include more testing tools, dividing them between general-purpose and domain-specific
2 parents 5268eff + 0749eed

Document/content/tests/AITG-MOD-01_Testing_for_Evasion_Attacks.md

Lines changed: 32 additions & 9 deletions
- Regularly evaluate models using adversarial robustness tools to proactively detect and mitigate vulnerabilities.
- Continuously update and refine input validation and sanitization strategies to counter evolving adversarial techniques.

#### Suggested Tools for this Specific Test
AI security testing tools can be divided into *general-purpose* tools, which can be used to test a variety of adversarial attacks in the image domain or at the feature level of any model, and *domain-specific* tools, which enable security testing directly on the input source.

## General-purpose tools
- **Adversarial Library**
- A powerful PyTorch library of adversarial attack resources. It contains some of the most efficient implementations of several state-of-the-art attacks, at the expense of a less object-oriented structure.
- Tool Link: [Adversarial Library on GitHub](https://github.com/jeromerony/adversarial-library)
- **Foolbox**
- Tool for creating adversarial examples and evaluating model robustness, compatible with PyTorch, TensorFlow, and JAX (see the sketch after this list).
- Tool Link: [Foolbox on GitHub](https://github.com/bethgelab/foolbox)
- **SecML-Torch**
- Tool for evaluating the adversarial robustness of deep learning models. Based on PyTorch, it includes debugging functionalities and interfaces to customize attacks and conduct trustworthy security evaluations.
- Tool Link: [SecML-Torch on GitHub](https://github.com/pralab/secml-torch)
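
To make the general-purpose workflow concrete, below is a minimal Foolbox sketch that measures robust accuracy under an L-infinity PGD attack; the pre-trained torchvision ResNet-18, the bundled ImageNet sample batch, and the 8/255 budget are illustrative assumptions, not requirements of this test.

```python
# Minimal sketch (assumed setup): robust accuracy of a pre-trained
# torchvision ResNet-18 under an L-infinity PGD attack, using Foolbox.
import torchvision.models as models
import foolbox as fb

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# Wrap the PyTorch model; preprocessing uses standard ImageNet statistics.
preprocessing = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], axis=-3)
fmodel = fb.PyTorchModel(model, bounds=(0, 1), preprocessing=preprocessing)

# Small batch of sample images bundled with Foolbox.
images, labels = fb.utils.samples(fmodel, dataset="imagenet", batchsize=8)

# Run PGD under an L-infinity budget of 8/255 and measure attack success.
attack = fb.attacks.LinfPGD()
raw_advs, clipped_advs, success = attack(fmodel, images, labels, epsilons=8 / 255)

robust_accuracy = 1 - success.float().mean().item()
print(f"Robust accuracy at eps=8/255: {robust_accuracy:.2%}")
```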

## Domain-specific tools
- **Maltorch**
- Python library for computing security evaluations against Windows malware detectors implemented in PyTorch. The library contains most of the attacks proposed in the literature, along with pre-trained models that can be used to test them.
- Tool Link: [Maltorch on GitHub](https://github.com/zangobot/maltorch)
- **Waf-a-MoLE**
- Python library for generating adversarial SQL injection payloads that evade Web Application Firewalls.
- Tool Link: [Waf-a-MoLE on GitHub](https://github.com/AvalZ/WAF-A-MoLE)
- **TextAttack**
- Python framework specifically designed to evaluate and enhance the adversarial robustness of NLP models (see the sketch after this list).
- Tool Link: [TextAttack on GitHub](https://github.com/QData/TextAttack)
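
As an example of domain-specific testing on text, the sketch below runs TextAttack's TextFooler recipe against a HuggingFace sequence classifier; the specific checkpoint (textattack/bert-base-uncased-imdb), the IMDB dataset, and the number of examples are assumptions made for illustration only.

```python
# Minimal sketch (assumed setup): attack a HuggingFace sentiment classifier
# with the TextFooler recipe from TextAttack on a few IMDB test samples.
import transformers
import textattack
from textattack.attack_recipes import TextFoolerJin2019

# Placeholder checkpoint: any sequence-classification model/tokenizer pair works.
model_name = "textattack/bert-base-uncased-imdb"
model = transformers.AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
model_wrapper = textattack.models.wrappers.HuggingFaceModelWrapper(model, tokenizer)

# Build the TextFooler attack and run it on a handful of examples.
attack = TextFoolerJin2019.build(model_wrapper)
dataset = textattack.datasets.HuggingFaceDataset("imdb", split="test")
attack_args = textattack.AttackArgs(num_examples=10, log_to_csv="textfooler_results.csv")
attacker = textattack.Attacker(attack, dataset, attack_args)
attacker.attack_dataset()  # prints per-example results and a summary table
```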

## Jack-of-all-trades
- **Adversarial Robustness Toolbox (ART)**
- Framework for adversarial attack generation, detection, and mitigation for AI models (see the sketch below).
- Tool Link: [Adversarial Robustness Toolbox](https://github.com/Trusted-AI/adversarial-robustness-toolbox)
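
The sketch below shows the same evasion flow through ART's estimator API; the tiny untrained PyTorch network and the random stand-in data are placeholders chosen only to keep the example self-contained, and would be replaced by the actual model and dataset under test.

```python
# Minimal sketch (assumed setup): FGSM evasion with ART against a placeholder
# PyTorch classifier; swap in the real model and test data for an actual run.
import numpy as np
import torch
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Placeholder (untrained) classifier for 28x28 grayscale inputs, 10 classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Craft adversarial examples with the Fast Gradient Sign Method (eps = 0.2).
x_test = np.random.rand(16, 1, 28, 28).astype(np.float32)  # stand-in data
attack = FastGradientMethod(estimator=classifier, eps=0.2)
x_adv = attack.generate(x=x_test)

# Compare predictions on clean vs. adversarial inputs.
clean_preds = classifier.predict(x_test).argmax(axis=1)
adv_preds = classifier.predict(x_adv).argmax(axis=1)
print(f"Predictions changed by FGSM: {(clean_preds != adv_preds).mean():.2%}")
```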

## Outdated libraries
We also list some libraries that were widely used in the past but are now inactive, unmaintained, and likely buggy.
- **CleverHans**
- Library for computing adversarial evasion attacks against models deployed in PyTorch, TensorFlow/Keras, and JAX.
- Tool Link: [CleverHans on GitHub](https://github.com/cleverhans-lab/cleverhans)

- **DeepSec** (BUGGED)
- Security evaluation toolkit focused on deep learning models for adversarial example detection and defense. It has been strongly criticized as buggy, as shown by the still-open [issues](https://github.com/ryderling/DEEPSEC/issues).
- Tool Link: [DeepSec on GitHub](https://github.com/ryderling/DEEPSEC)

#### References
