Read the paper here!
https://doi.org/10.5281/zenodo.17274437
**Resilient by Design: Investigating Backdoor Vulnerabilities in Malware Detection Systems**

This report presents a large-scale empirical study of backdoor (data-poisoning) attacks against ML-based malware detectors. Across 420 experiments spanning poisoning ratios, trigger types, and model architectures, the work finds that static malware classifiers exhibit surprisingly strong natural resistance to backdoor attacks, with attack success rates generally below 4%. Tree-based models (LightGBM) show superior robustness compared with neural networks, and simple defences such as ensemble averaging and clean-tuning further reduce attack effectiveness. The study draws on the EMBER feature pipeline and provides code and experimental details to reproduce the results.
- Poison the Training Data: Inject backdoor samples into the dataset.
- Train the Model: Train a malware classifier on the poisoned dataset.
- Test on Clean Data: Evaluate the model’s performance on unpoisoned data.
- Test on Backdoor Data: Assess the model’s vulnerability to backdoor samples.
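The four steps above can be sketched end to end. This is a toy illustration, not the repo's EMBER pipeline: the trigger, features, and nearest-centroid "classifier" are stand-in assumptions chosen only to make the mechanics runnable.

```python
import random

random.seed(0)

TRIGGER = 99.0  # toy trigger: an out-of-range value planted in one feature

def make_sample(label):
    # Benign (0) samples cluster near 0.0, malicious (1) near 1.0.
    base = float(label)
    return [base + random.gauss(0, 0.1) for _ in range(4)], label

def poison(features):
    # Step 1: plant the trigger and mislabel the sample as benign.
    patched = list(features)
    patched[0] = TRIGGER
    return patched, 0

# Build a training set and poison a fraction of the malicious samples.
train = [make_sample(random.randint(0, 1)) for _ in range(200)]
train = [poison(f) if lbl == 1 and random.random() < 0.1 else (f, lbl)
         for f, lbl in train]

# Step 2: "train" a nearest-centroid classifier on the poisoned data.
def centroid(samples):
    feats = [f for f, _ in samples]
    return [sum(col) / len(col) for col in zip(*feats)]

c0 = centroid([s for s in train if s[1] == 0])
c1 = centroid([s for s in train if s[1] == 1])

def predict(features):
    d0 = sum((a - b) ** 2 for a, b in zip(features, c0))
    d1 = sum((a - b) ** 2 for a, b in zip(features, c1))
    return 0 if d0 < d1 else 1

# Step 3: accuracy on clean (unpoisoned) malicious test samples.
clean = [make_sample(1) for _ in range(50)]
clean_acc = sum(predict(f) == 1 for f, _ in clean) / len(clean)

# Step 4: attack success rate = backdoored malware classified as benign.
backdoored = [(poison(f)[0], 1) for f, _ in clean]
attack_success = sum(predict(f) == 0 for f, _ in backdoored) / len(backdoored)
print(f"clean accuracy: {clean_acc:.2f}, attack success: {attack_success:.2f}")
```

In the real experiments the trigger is embedded in EMBER features and the classifiers are LightGBM and neural networks; this sketch only shows why a poisoned label flip can pull the decision boundary toward the trigger.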
- Update and rebuild the container:
  ./update_build.sh
  # This removes the existing container, builds a new one,
  # then runs and enters the container.
- Run the unit tests:
  python -m unittest discover -s scripts/unit_tests
  # or: ./unit_tests.sh
- Execute the pipeline detailed below.
- Create a config.yaml file.
- Run the pipeline:
  python -m scripts.pipeline --config config.yaml --log data/pipeline.log &
  # or: ./run_pipeline.sh
- Run the grid search:
  python scripts/grid_search.py --grid_search

The test suite now generates two types of confusion matrices:
- A standard confusion matrix with three categories (benign, malicious, and backdoor malicious).
- A simplified “square” confusion matrix focusing on benign vs. malicious only.

The suite also calculates updated metrics (Accuracy, Precision, Recall, F1 Score, ROC AUC) for each variant, providing a more detailed view of how the model performs against backdoored samples.
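The collapse from the three-category matrix to the “square” one can be sketched as follows. The label encoding (0 = benign, 1 = malicious, 2 = backdoored malicious) is an assumption for illustration, not necessarily the suite's internal encoding:

```python
from collections import Counter

def square_confusion(y_true, y_pred):
    """Collapse 3-category labels (0=benign, 1=malicious, 2=backdoored
    malicious) into a 2x2 benign-vs-malicious confusion matrix."""
    to_binary = lambda y: 0 if y == 0 else 1
    counts = Counter((to_binary(t), to_binary(p)) for t, p in zip(y_true, y_pred))
    # Rows = true class, columns = predicted class.
    return [[counts[(0, 0)], counts[(0, 1)]],
            [counts[(1, 0)], counts[(1, 1)]]]

# A backdoored malicious sample predicted benign counts as a miss
# in the malicious row of the square matrix.
print(square_confusion([0, 1, 2, 2], [0, 1, 0, 1]))
```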
The test suite evaluates the trained model across the following data types:
- Clean Data:
  - Unpoisoned benign samples
  - Unpoisoned malicious samples
- Poisoned Data:
  - Poisoned benign samples
  - Poisoned malicious samples
The test suite provides the following evaluation metrics:
- Accuracy
- Precision
- Recall
- F1 Score
- ROC AUC
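For reference, all five metrics can be computed from predictions and scores with no dependencies; this standalone sketch is not the test suite's own code (which may use a library such as scikit-learn):

```python
def binary_metrics(y_true, y_pred, y_score):
    """Accuracy, Precision, Recall, F1, and ROC AUC for binary labels
    (1 = malicious), given hard predictions and continuous scores."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    # ROC AUC via the rank-sum (Mann-Whitney U) formulation: the
    # probability a random malicious sample outscores a random benign one.
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    auc = wins / (len(pos) * len(neg)) if pos and neg else 0.0
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1, "roc_auc": auc}
```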
The following plots are generated during testing:
- Confusion Matrix
- ROC Curve
The data is organized into the following directories:
data/
├── raw/ # Contains unprocessed executables
│ ├── benign/
│ └── malicious/
├── poisoned/ # Contains poisoned executables
│   ├── <backdoor_name>/
│   │   ├── benign/
│   │   └── malicious/
│   └── <backdoor_name>/
└── ember/ # Contains the poisoned dataset in EMBER format
    ├── train.jsonl
    └── test.jsonl
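EMBER-format files store one JSON record per line, so the splits above can be loaded with a short helper; the field names inside each record depend on the EMBER version and are not shown here:

```python
import json

def load_jsonl(path):
    """Load an EMBER-style .jsonl file: one JSON record per non-empty line."""
    records = []
    with open(path, "r", encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:
                records.append(json.loads(line))
    return records

# e.g. train = load_jsonl("data/ember/train.jsonl")
```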
Please use the BibTeX entry below to cite the research report:
@techreport{burke2025resilient,
  author      = {Hamish Burke},
  title       = {Resilient by Design: Investigating Backdoor Vulnerabilities in Malware Detection Systems},
  institution = {Victoria University of Wellington},
  year        = {2025},
  month       = {January},
  doi         = {10.5281/zenodo.17274437},
}