Add ICML2025 FL Paper #190
Conversation
KarhouTam left a comment
Thank you for contributing to FL-bench. It would be better if you could show some experimental results (and the commands to run them).
Of course, this is optional. Anyway, thanks for your upcoming contribution.
Certainly, I would be more than happy to provide more detailed information. May I ask where I should add the experimental results and the command lines to run them?
Just adding them in this PR conversation is fine.
Here are partial experiments from our paper along with the commands to run them. We hope this PR can be merged into your repository.

1. Utility Experiments

We compare FedCEO with baseline methods under different privacy settings (controlled by the `noise_multiplier` flag).

Table: Test Accuracy (%) on EMNIST and CIFAR-10
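As background on what the `noise_multiplier` setting controls: in differentially private training, each update is typically clipped in L2 norm and then perturbed with Gaussian noise whose standard deviation scales with the multiplier. A minimal sketch of that mechanism (function and parameter names here are illustrative, not taken from the FedCEO implementation):

```python
import random

def gaussian_perturb(grad, clip_norm=1.0, noise_multiplier=2.0, rng=None):
    """Clip a gradient vector to clip_norm in L2 norm, then add
    Gaussian noise with std = noise_multiplier * clip_norm (DP-SGD style)."""
    rng = rng or random.Random(0)
    norm = sum(g * g for g in grad) ** 0.5
    scale = min(1.0, clip_norm / max(norm, 1e-12))  # shrink only if norm > clip_norm
    clipped = [g * scale for g in grad]
    std = noise_multiplier * clip_norm
    return [g + rng.gauss(0.0, std) for g in clipped]
```

A larger multiplier (e.g. 2.0 in the command below) means stronger privacy but lower utility, which is the trade-off the table compares.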
Command to run the utility experiment:

```shell
# CIFAR-10
nohup python -u FedCEO.py --privacy True --noise_multiplier 2.0 --flag True --dataset "cifar10" --model "cnn" --lamb 0.6 --r 1.04 --interval 10 > ./logs/log_fedceo_noise=2.0_cifar10_LeNet.log 2>&1 &
```

2. Privacy Experiments

We use the DLG attack (Zhu et al., 2019) to evaluate privacy leakage. Lower PSNR indicates better privacy protection.

Figure: Privacy Attack Results on CIFAR-10 (PSNR in dB, lower is better)
Command to run the privacy experiment:

```shell
# privacy exps
nohup python -u attack_FedCEO.py --privacy True --noise_multiplier 2.0 --flag True --dataset "cifar10" --model "cnn" --index 100 --gpu "" > ./logs/log_attack_fedceo_noise=1.0_cifar10_LeNet.log 2>&1 &
```
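For readers unfamiliar with the metric: PSNR compares the attacker's reconstructed image against the original, so a lower value means the DLG attack recovered less. A minimal sketch of the computation (this helper is illustrative, not part of the FedCEO code):

```python
import math

def psnr(original, reconstructed, max_val=1.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences."""
    mse = sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")  # identical images: unbounded PSNR
    return 10 * math.log10(max_val ** 2 / mse)

# A reconstruction close to the original gives a high PSNR (more leakage);
# a poor reconstruction gives a low PSNR (better privacy protection).
```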
LGTM so far.
Hello, I noticed that this paper focuses on DP-Federated Learning. I’m curious if there are any plans to extend this repo to include support for DP-Federated Learning algorithms. |
Of course, we are planning to expand this library by including basic DPFL algorithms (such as DP-FedAvg) and some cutting-edge conference algorithms (like our FedCEO) to further enhance the privacy performance of FL-bench. If you are also interested, feel free to join us for discussion and collaboration~ |
I’m also developing some DP-FL algorithms and would be interested in contributing to this repo. Could you let me know how I can get involved?
Thank you for your interest in FL-bench @6lyc @ASUKaiwenFang. I opened an RFC-style issue (#192) to discuss DP-related features. Feel free to share your comments, suggestions, or reach out to me for assistance. |

Hello,
Our paper, "Clients Collaborate: Flexible Differentially Private Federated Learning with Guaranteed Improvement of Utility-Privacy Trade-off," has been accepted by ICML 2025. We would like to add the arXiv link and the corresponding code to your repository.
Thank you.