Solving complex optimization problems with machine learning, commonly referred to as Learn-to-Optimize (L2O), has attracted significant attention due to growing industrial demands. However, existing L2O methods typically rely on large labeled datasets, and generating those labels incurs substantial computational overhead. Furthermore, unlike traditional optimization algorithms, these methods often target a single dataset, limiting their ability to generalize to out-of-distribution instances. To address these limitations, we propose KUTF, an unsupervised learning framework that incorporates the KKT conditions into its loss function, enabling effective training on convex optimization problems without labeled data. Additionally, KUTF mimics the behavior of classical iterative optimization algorithms, unlocking auto-regressive refinement that progressively improves solution quality. We evaluate KUTF on several publicly available datasets containing optimization problems from diverse domains. Extensive experiments show that KUTF achieves over a 10x speedup compared to conventional solvers, highlighting its efficiency and strong generalization to out-of-distribution instances.
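The two ideas in the abstract, an unsupervised loss built from KKT residuals and auto-regressive refinement of a candidate solution, can be sketched on a toy equality-constrained convex QP. Everything below (the problem data, kkt_loss, and plain gradient descent standing in for KUTF's learned update) is an illustrative assumption, not the actual implementation in this repository.

```python
import numpy as np

# Illustrative sketch only: a KKT-residual loss for the QP
#   minimize 0.5 x^T Q x + c^T x   subject to   A x = b,
# plus a refinement loop that drives the residual toward zero.

def kkt_loss(Q, c, A, b, x, lam):
    """Squared KKT residual; zero exactly at a primal-dual optimum."""
    stat = Q @ x + c + A.T @ lam   # stationarity of the Lagrangian
    feas = A @ x - b               # primal feasibility
    return float(stat @ stat + feas @ feas)

def kkt_grad(Q, c, A, b, x, lam):
    """Analytic gradient of kkt_loss with respect to (x, lam)."""
    stat = Q @ x + c + A.T @ lam
    feas = A @ x - b
    return 2 * (Q.T @ stat + A.T @ feas), 2 * (A @ stat)

# Tiny hypothetical QP whose optimum is x* = (0, 1) with multiplier lam* = 2.
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

x, lam = np.zeros(2), np.zeros(1)
for _ in range(500):  # gradient steps stand in for the learned iterative update
    gx, gl = kkt_grad(Q, c, A, b, x, lam)
    x, lam = x - 0.1 * gx, lam - 0.1 * gl

print(round(kkt_loss(Q, c, A, b, x, lam), 8))  # residual driven to ~0
```

Because the loss is zero if and only if the KKT conditions hold, it can supervise training directly from problem data, with no solver-generated labels, which is the property the framework exploits.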
To generate datasets, use the script ./ins/src/gen_ins.py; it includes detailed usage instructions.
After generating instances, use ./src/julia/PDQP.jl/gen_bat.py to generate a task file that collects training/testing samples from the generated cases; it also includes detailed usage instructions. Then run ./src/extract_sample.py -t XXXX to extract pickle files, where XXXX is the dataset name. (For example, if you have gen_train_XXXX in your ./ins folder, use -t XXXX.)
Use ./src/train_new.py to train the unsupervised model, ./src/train_supervised.py to train the supervised model, and ./src/train_gnn.py to train the GNN model mentioned above.
After training, first generate predictions by running ./src/predict_*.py. Then use ./src/julia/PDQP.jl/gen_bat.py to generate a batch file that runs the tests.
TODO