A new repository with improved readability is under construction at https://github.com/CenturionMKIII/PDHG-Net-New.git
Solving large-scale linear programming (LP) problems is an important task in various areas such as communication networks, power systems, finance, and logistics. Recently, two distinct approaches have emerged to expedite LP solving: (i) first-order methods (FOMs) and (ii) learning to optimize (L2O). In this work, we propose an FOM-unrolled neural network (NN) called PDHG-Net, together with a two-stage L2O method, to solve large-scale LP problems. The new architecture, PDHG-Net, is designed by unrolling the recently emerged PDHG method into a neural network, combined with channel-expansion techniques borrowed from graph neural networks. We prove that PDHG-Net can recover the PDHG algorithm and can therefore approximate optimal solutions of LP instances with a polynomial number of neurons. We propose a two-stage inference approach: first use PDHG-Net to generate an approximate solution, and then apply the PDHG algorithm to further improve it. Experiments show that our approach can significantly accelerate LP solving, achieving up to a 3× speedup compared to FOMs for large-scale LP problems. This work has been accepted by ICML 2024.
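For intuition, below is a minimal, hedged PyTorch sketch of one PDHG-unrolled layer with channel expansion and of the two-stage inference pipeline. The layer width, the weight matrices `Wx`, `Ux`, `Wy`, `Uy`, the learnable step sizes `tau`/`sigma`, and the helper `pdhg_refine` are illustrative assumptions, not the exact implementation in this repository.

```python
import torch
import torch.nn as nn


class PDHGLayer(nn.Module):
    """One PDHG-unrolled block acting on d-channel primal/dual features."""

    def __init__(self, d: int):
        super().__init__()
        self.Wx = nn.Linear(d, d, bias=False)   # mixes primal feature channels
        self.Ux = nn.Linear(d, d, bias=False)   # mixes the A^T y message channels
        self.Wy = nn.Linear(d, d, bias=False)   # mixes dual feature channels
        self.Uy = nn.Linear(d, d, bias=False)   # mixes the A x message channels
        self.tau = nn.Parameter(torch.tensor(0.1))    # learnable primal step size
        self.sigma = nn.Parameter(torch.tensor(0.1))  # learnable dual step size

    def forward(self, X, Y, A, c, b):
        # X: (n, d) primal features, Y: (m, d) dual features,
        # A: (m, n) constraint matrix, c: (n, 1) objective, b: (m, 1) right-hand side.
        # Primal step, mirroring x <- max(0, x - tau * (c - A^T y)):
        X_new = torch.relu(self.Wx(X) + self.Ux(A.T @ Y) - self.tau * c)
        # Dual step with PDHG extrapolation, mirroring y <- y + sigma * (b - A (2 x_new - x)):
        Y_new = self.Wy(Y) + self.sigma * (b - A @ self.Uy(2 * X_new - X))
        return X_new, Y_new


def two_stage_solve(net, A, c, b, pdhg_refine):
    # Stage 1: PDHG-Net predicts an approximate primal-dual solution.
    x0, y0 = net(A, c, b)
    # Stage 2: warm-start a standard PDHG solver from the prediction
    # (pdhg_refine stands in for the FOM solver used in the paper).
    return pdhg_refine(A, c, b, x_init=x0, y_init=y0)
```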
All code is located in the src folder.
For the packing experiments:
- Generate training samples using genIpsol.py or genLbsol.py. Detailed arguments can be found in the code.
- Train the model using training_ip.py/training_ip_small.py/training_lb.py/training_lb_small.py.
- Test the performance using test-packing-ip.py/test-packing-lb.py/test-packing-lbsmall.py. Log files will be stored in the designated directories. A minimal end-to-end sketch of this workflow is given below.
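The following is a minimal sketch of the packing workflow, driven from Python. The scripts likely require additional command-line arguments (see each script), and the src/ paths and the choice of the IP variant are assumptions for illustration.

```python
import subprocess

# 1. Generate training samples (IP-labelled variant; genLbsol.py is analogous).
#    The required command-line arguments are documented inside the script.
subprocess.run(["python", "src/genIpsol.py"], check=True)

# 2. Train the model on the generated samples.
subprocess.run(["python", "src/training_ip.py"], check=True)

# 3. Evaluate; log files are written to the designated directories.
subprocess.run(["python", "src/test-packing-ip.py"], check=True)
```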
For the PageRank experiments:
- Generate training samples using pagerank111_gen.py. Detailed arguments can be found in the code.
- Train the model using pagerank222_train.py, or pagerank222_train_save_memory.py for better memory management.
- Test the performance using test-packing.py. Log files will be stored in the designated directories. A sketch of this workflow is given below.
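The PageRank workflow follows the same pattern; again, the script arguments and src/ paths are assumptions to be checked against the code.

```python
import subprocess

# Generate instances, train (memory-saving variant shown), then test.
for script in ["src/pagerank111_gen.py",
               "src/pagerank222_train_save_memory.py",
               "src/test-packing.py"]:
    subprocess.run(["python", script], check=True)
```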
If you would like to use this repository, please consider citing our work with the following BibTeX entry:
@inproceedings{lipdhg,
  title={PDHG-Unrolled Learning-to-Optimize Method for Large-Scale Linear Programming},
  author={Li, Bingheng and Yang, Linxin and Chen, Yupeng and Wang, Senmiao and Chen, Qian and Mao, Haitao and Ma, Yao and Wang, Akang and Ding, Tian and Tang, Jiliang and others},
  booktitle={Forty-first International Conference on Machine Learning},
  year={2024}
}