
Conversation


@6lyc 6lyc commented Aug 31, 2025

Hello,
Our paper titled "Clients Collaborate: Flexible Differentially Private Federated Learning with Guaranteed Improvement of Utility-Privacy Trade-off" has been accepted to ICML 2025. We would like to add the arXiv link and corresponding code to your repository.

Thank you.

Owner

@KarhouTam KarhouTam left a comment


Thank you for contributing to FL-bench. It would be better if you could show some experimental results (and the commands to run them).
Of course, this is optional. Thanks for your upcoming contribution.

Author

6lyc commented Sep 2, 2025

Certainly, I would be more than happy to provide more detailed information. May I ask where I should add the experimental results and the command lines to run them?

@KarhouTam
Owner

Just adding them in this PR conversation is fine.

Author

6lyc commented Sep 7, 2025

Here are partial experiments from our paper along with the commands to run them. We hope this PR can be merged into your repository.

1. Utility Experiments

We compare FedCEO with baseline methods under different privacy settings (controlled by σ_g). FedCEO consistently achieves the highest test accuracy.

Table: Test Accuracy (%) on EMNIST and CIFAR-10

| Dataset | Model | σ_g | UDP-FedAvg | PPSGD | CENTAUR | FedCEO (ϑ=1) | FedCEO (ϑ>1) |
|---|---|---|---|---|---|---|---|
| EMNIST | MLP-2-Layers | 1.0 | 76.59% | 77.01% | 77.26% | 77.14% | 78.05% |
| | | 1.5 | 69.91% | 70.78% | 71.86% | 71.56% | 72.44% |
| | | 2.0 | 60.32% | 61.51% | 62.12% | 63.38% | 64.20% |
| CIFAR-10 | LeNet-5 | 1.0 | 43.87% | 49.24% | 50.14% | 50.09% | 54.16% |
| | | 1.5 | 34.34% | 47.56% | 46.90% | 48.89% | 50.00% |
| | | 2.0 | 26.88% | 34.61% | 36.70% | 37.39% | 45.35% |

Command to Run Utility Experiment:

```shell
# CIFAR-10
nohup python -u FedCEO.py --privacy True --noise_multiplier 2.0 --flag True --dataset "cifar10" --model "cnn" --lamb 0.6 --r 1.04 --interval 10 > ./logs/log_fedceo_noise=2.0_cifar10_LeNet.log 2>&1 &
```
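For readers unfamiliar with how σ_g (the `--noise_multiplier` flag) shapes the privacy guarantee: a minimal sketch of the standard Gaussian mechanism applied to a clipped client update is shown below. The function and parameter names are illustrative assumptions, not taken from FedCEO's codebase.

```python
import numpy as np

def privatize_update(update: np.ndarray, clip_norm: float, sigma_g: float,
                     rng: np.random.Generator) -> np.ndarray:
    """Clip a client update to L2 norm `clip_norm`, then add Gaussian noise
    with std sigma_g * clip_norm (the standard Gaussian-mechanism recipe)."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, sigma_g * clip_norm, size=update.shape)
    return clipped + noise

rng = np.random.default_rng(0)
raw = np.ones(10) * 3.0                        # L2 norm ~9.49, above the bound
noiseless = privatize_update(raw, clip_norm=1.0, sigma_g=0.0, rng=rng)
print(np.linalg.norm(noiseless))               # ~1.0: the update was clipped
```

A larger σ_g adds more noise per update, which strengthens the differential-privacy guarantee but degrades utility, matching the accuracy drop from σ_g = 1.0 to 2.0 in the table above.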

2. Privacy Experiments

We use the DLG attack (Zhu et al., 2019) to evaluate privacy leakage. Lower PSNR indicates better privacy protection.

Figure: Privacy Attack Results on CIFAR-10 (PSNR in dB, lower is better)


Command to Run Privacy Experiment:

```shell
# privacy exps
nohup python -u attack_FedCEO.py --privacy True --noise_multiplier 2.0 --flag True --dataset "cifar10" --model "cnn" --index 100 --gpu "" > ./logs/log_attack_fedceo_noise=1.0_cifar10_LeNet.log 2>&1 &
```
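For context on the metric: PSNR compares the DLG-reconstructed image against the original, so a lower value means the attacker recovered less. A minimal sketch of the computation (illustrative names, not the attack script's actual code):

```python
import numpy as np

def psnr(original: np.ndarray, reconstructed: np.ndarray,
         max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB. Lower PSNR means the
    reconstruction is further from the original, i.e. less leakage."""
    mse = np.mean((original - reconstructed) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

img = np.zeros((3, 32, 32))
bad_guess = img + 0.1                # constant error of 0.1 -> MSE = 0.01
print(psnr(img, bad_guess))          # 10 * log10(1 / 0.01) = 20.0 dB
```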

@KarhouTam
Owner

LGTM so far.
Looking forward to the code contribution.

@ASUKaiwenFang

Hello, I noticed that this paper focuses on DP-Federated Learning. I’m curious if there are any plans to extend this repo to include support for DP-Federated Learning algorithms.

Author

6lyc commented Sep 15, 2025

@ASUKaiwenFang

Of course, we are planning to expand this library by including basic DPFL algorithms (such as DP-FedAvg) and some cutting-edge conference algorithms (like our FedCEO) to further enhance the privacy performance of FL-bench.

If you are also interested, feel free to join us for discussion and collaboration!
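As a rough illustration of what a basic DP-FedAvg server step involves (a sketch under assumed names, not FL-bench's actual API): each client update is clipped, the clipped updates are averaged, and Gaussian noise calibrated to the clipping bound is added to the average.

```python
import numpy as np

def dp_fedavg_round(client_updates, clip_norm, sigma_g, rng):
    """One illustrative DP-FedAvg aggregation step: clip each client
    update to `clip_norm`, average, then add Gaussian noise whose std
    is scaled down by the number of participating clients."""
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, sigma_g * clip_norm / len(client_updates),
                       size=avg.shape)
    return avg + noise

rng = np.random.default_rng(0)
updates = [np.ones(4) * 2.0, np.zeros(4)]      # one large update, one empty
print(dp_fedavg_round(updates, clip_norm=1.0, sigma_g=2.0, rng=rng))
```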

@ASUKaiwenFang

I’m also developing some DP-FL algorithms and would be interested in contributing to this repo. Could you let me know how I can get involved?

Owner

KarhouTam commented Sep 16, 2025

Thank you for your interest in FL-bench @6lyc @ASUKaiwenFang. I opened an RFC-style issue (#192) to discuss DP-related features. Feel free to share your comments, suggestions, or reach out to me for assistance.

@KarhouTam KarhouTam added enhancement New feature or request differential-privacy Fix, feature and algorithm implementation that relate to differential privacy labels Sep 21, 2025
