# $PD^3F$: A Pluggable and Dynamic DoS-Defense Framework Against Resource Consumption Attacks Targeting Large Language Models
Large Language Models (LLMs), due to their substantial computational requirements, are vulnerable to resource consumption attacks, which can severely degrade server performance or even cause crashes, as demonstrated by denial-of-service (DoS) attacks designed for LLMs. However, existing works lack mitigation strategies against such threats, leaving unresolved security risks for real-world LLM deployments. To this end, we propose the Pluggable and Dynamic DoS-Defense Framework ($PD^3F$).

## Requirements
- Python 3.8+
- CUDA-compatible GPU (recommended)
- PyTorch 2.0+
- Transformers library
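As a quick sanity check, the prerequisites above can be probed with a short snippet. This is an illustrative sketch, not part of the repository; the `check_environment` helper is our own name:

```python
import importlib.util
import sys

def check_environment(min_python=(3, 8)):
    """Report whether the prerequisites listed above appear to be met."""
    report = {"python_ok": sys.version_info[:2] >= min_python}
    # find_spec returns None when a package is not installed,
    # so this works even before `pip install -r requirements.txt`.
    for pkg in ("torch", "transformers"):
        report[pkg] = importlib.util.find_spec(pkg) is not None
    if report["torch"]:
        import torch
        # A CUDA-capable GPU is recommended but not strictly required.
        report["cuda"] = torch.cuda.is_available()
    return report
```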
## Installation

1. Clone the repository:

```bash
git clone <repository-url>
cd PDF_defense
```

2. Install dependencies:

```bash
pip install -r requirements.txt
```

## Usage

You can use the provided bash script to test our approach:

```bash
bash run__main.bash
```

Arguments:
- `--test_mode`: Attack method to test (`GCG`, `autodos`, `pdos`)
- `--input_file`: Path to the benign request test data
- `--model_name`: Target model name (`Gemma-2-9b`, `Gemma-2-27b`, `Llama-3-8B`, `Llama-3-70B`, `Mistral-7b`, `Qwen-2.5-7b`, `Qwen-2.5-14b`, `Qwen-2.5-32b`, `Qwen-2.5-72b`)
- `--log_dir`: Log storage path
- `--data_limit`: Number of simulated users
- `--questioner_num`: Number of simulated user requests
- `--attack_num`: Number of simulated attackers
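For reference, the flags above could be parsed with an `argparse` setup along these lines. This is a sketch mirroring the documented options, not the repository's actual parser; the defaults shown are illustrative:

```python
import argparse

def build_parser():
    # Flag names mirror the options documented above; defaults are illustrative.
    p = argparse.ArgumentParser(description="PD^3F defense test runner (sketch)")
    p.add_argument("--test_mode", choices=["GCG", "autodos", "pdos"],
                   help="Attack method to test")
    p.add_argument("--input_file", help="Path to the benign request test data")
    p.add_argument("--model_name", default="Llama-3-8B",
                   help="Target model name")
    p.add_argument("--log_dir", default="logs", help="Log storage path")
    p.add_argument("--data_limit", type=int, default=10,
                   help="Number of simulated users")
    p.add_argument("--questioner_num", type=int, default=5,
                   help="Number of simulated user requests")
    p.add_argument("--attack_num", type=int, default=1,
                   help="Number of simulated attackers")
    return p
```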
## Configuration

Update the model paths in `config.py` according to your local setup:

```python
self.MODEL_PATHS = {
    'Gemma-2-9b': 'model/Gemma/gemma-2-9b-it',
    'Gemma-2-27b': 'model/Gemma/gemma-2-27b-it',
    'Llama-3-8B': 'model/Llama/meta-Llama-3.1-8B-Instruct',
    'Llama-3-70B': 'model/Llama/Llama-3.1-70B-Instruct',
    'Mistral-7b': 'model/Mistral/Mistral-7B-Instruct-v0.2',
    'Qwen-2.5-7b': 'model/Qwen/Qwen2.5-7B-Instruct',
    'Qwen-2.5-14b': 'model/Qwen/Qwen2.5-14B-Instruct',
    'Qwen-2.5-32b': 'model/Qwen/Qwen2.5-32B-Instruct',
    'Qwen-2.5-72b': 'model/Qwen/Qwen2.5-72B-Instruct'
}
```

## Logs

- `logs/`: Reasoning process details
- `logs/main/`: Specific test results data and indicators
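A model name passed on the command line has to resolve against the `MODEL_PATHS` mapping above; a lookup with a clear error for unsupported names might look like this. The `Config` class and `resolve_model_path` helper here are hypothetical illustrations, not the repository's actual `config.py`:

```python
class Config:
    # Mapping copied from the config.py snippet above; adjust to your local layout.
    MODEL_PATHS = {
        'Gemma-2-9b': 'model/Gemma/gemma-2-9b-it',
        'Gemma-2-27b': 'model/Gemma/gemma-2-27b-it',
        'Llama-3-8B': 'model/Llama/meta-Llama-3.1-8B-Instruct',
        'Llama-3-70B': 'model/Llama/Llama-3.1-70B-Instruct',
        'Mistral-7b': 'model/Mistral/Mistral-7B-Instruct-v0.2',
        'Qwen-2.5-7b': 'model/Qwen/Qwen2.5-7B-Instruct',
        'Qwen-2.5-14b': 'model/Qwen/Qwen2.5-14B-Instruct',
        'Qwen-2.5-32b': 'model/Qwen/Qwen2.5-32B-Instruct',
        'Qwen-2.5-72b': 'model/Qwen/Qwen2.5-72B-Instruct',
    }

    def resolve_model_path(self, model_name: str) -> str:
        # Fail fast with the list of supported names instead of a bare KeyError.
        try:
            return self.MODEL_PATHS[model_name]
        except KeyError:
            supported = ", ".join(sorted(self.MODEL_PATHS))
            raise ValueError(
                f"Unknown model '{model_name}'; supported: {supported}"
            ) from None
```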
## Citation

If you use this code in your research, please cite:

```bibtex
@article{zhang2025pd,
  title={$PD^{3}F$: A Pluggable and Dynamic DoS-Defense Framework Against Resource Consumption Attacks Targeting Large Language Models},
  author={Zhang, Yuanhe and Wang, Xinyue and Gao, Haoran and Zhou, Zhenhong and Meng, Fanyu and Zhang, Yuyao and Su, Sen},
  journal={arXiv preprint arXiv:2505.18680},
  year={2025}
}
```

## Contact

For questions or issues, please open an issue on the repository or contact the maintainers.
Disclaimer: This research is conducted for academic purposes to improve the security and robustness of AI systems. Users are responsible for ensuring compliance with applicable laws and ethical guidelines.
