This repository contains the code for the paper Improving Flow Matching by Aligning Flow Divergence. We use parts of the code from Lipman et al. (2023), Davtyan et al. (2023), Stark et al. (2024), and Finzi et al. (2023).
Conditional Flow Matching (CFM) is an efficient, simulation-free method for training flow-based generative models, but it struggles to accurately learn the probability paths. We address this by introducing a new PDE-based error characterization and show that the total variation between the learned and true paths can be bounded by combining the CFM loss with a divergence loss. This leads to a new training objective (FDM) that jointly matches the flow and its divergence, significantly improving performance on tasks such as dynamical system modeling, DNA sequence generation, and video generation, without sacrificing efficiency.
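For intuition, below is a minimal PyTorch sketch of such a joint objective on the conditional OT path of Lipman et al. (2023), whose conditional field has a closed-form divergence; the learned field's divergence is estimated with a Hutchinson probe. `fdm_loss`, `v_theta`, `lam`, and `SIGMA_MIN` are illustrative names and settings, not the repository's API, and the paper's exact objective may differ.

```python
import torch

SIGMA_MIN = 1e-4  # assumed value, not taken from the paper


def fdm_loss(v_theta, x1, lam=1.0):
    """Joint flow + divergence matching loss (illustrative sketch).

    v_theta: learned vector field, (t: (B,), x: (B, d)) -> (B, d).
    x1: data batch of shape (B, d); lam weights the divergence term.
    """
    B, d = x1.shape
    t = torch.rand(B, device=x1.device)                # t ~ U[0, 1]
    x0 = torch.randn_like(x1)                          # noise endpoint
    sig = 1.0 - (1.0 - SIGMA_MIN) * t                  # sigma_t of the OT path
    xt = (sig[:, None] * x0 + t[:, None] * x1).requires_grad_(True)

    # Conditional OT field and its closed-form divergence (Lipman et al., 2023):
    # u_t(x|x1) = (x1 - (1 - sigma_min) x) / sigma_t,
    # div u_t   = -d (1 - sigma_min) / sigma_t
    u = (x1 - (1.0 - SIGMA_MIN) * xt) / sig[:, None]
    div_u = -d * (1.0 - SIGMA_MIN) / sig

    v = v_theta(t, xt)
    # Hutchinson estimator: E_eps[eps^T J_v eps] = div v, Rademacher probe
    eps = torch.randint_like(xt, low=0, high=2) * 2.0 - 1.0
    (jvp,) = torch.autograd.grad(v, xt, grad_outputs=eps, create_graph=True)
    div_v = (jvp * eps).sum(dim=1)

    cfm = ((v - u.detach()) ** 2).sum(dim=1).mean()    # standard CFM term
    div = ((div_v - div_u) ** 2).mean()                # divergence-matching term
    return cfm + lam * div
```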
We validate the efficacy and efficiency of the proposed FDM in enhancing FM on the following benchmark tasks.
In this experiment, we train FM and FDM to sample 2D synthetic checkerboard data. See the subdirectory README.
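For reference, the 2D checkerboard dataset is commonly generated as in the following sketch (the widely used FFJORD-style sampler; the subdirectory's generator may differ):

```python
import torch

def sample_checkerboard(n):
    """Draw n points from the standard 2D checkerboard density."""
    x1 = torch.rand(n) * 4 - 2                       # uniform over [-2, 2]
    x2 = torch.rand(n) - torch.randint(0, 2, (n,)) * 2
    x2 = x2 + torch.floor(x1) % 2                    # shift alternate columns
    return torch.stack([x1, x2], dim=1) * 2          # scale to [-4, 4]^2
```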
In this experiment, we demonstrate that FDM enhances FM with the conditional OT path and with the Dirichlet path (Stark et al., 2024) on the probability simplex for DNA sequence generation, both with and without guidance, following the experiments in Stark et al. (2024). See the subdirectory README.
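As a rough illustration, the Dirichlet conditional path of Stark et al. (2024) has the form p_t(x | x1 = e_i) = Dir(x; 1 + t · e_i), which can be sampled as below. This sketches only the probability path (the conditional vector field involves additional machinery); the function and argument names are illustrative, not the repository's API.

```python
import torch
from torch.distributions import Dirichlet

def sample_dirichlet_path(x1_idx, t, K):
    """Sample x_t ~ Dir(1 + t * e_{x1}) per position (illustrative).

    x1_idx: (B,) class indices (e.g. nucleotides, K = 4); t: (B,) times >= 0.
    """
    alpha = torch.ones(x1_idx.shape[0], K)
    # Concentrate mass on the vertex e_{x1} as t grows; uniform at t = 0.
    alpha[torch.arange(x1_idx.shape[0]), x1_idx] += t
    return Dirichlet(alpha).sample()
```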
In this experiment, we compare FDM against FM and the DM of Finzi et al. (2023) on the Lorenz and FitzHugh-Nagumo models (Farazmand & Sapsis, 2019). See the subdirectory README.
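For context, training trajectories for such systems are obtained by numerically integrating the governing ODEs. Below is a sketch for the classic Lorenz-63 system with its standard parameters; the experiment's actual setup, including the FitzHugh-Nagumo configuration of Farazmand & Sapsis (2019), lives in the subdirectory.

```python
import torch

def lorenz_trajectories(n, steps=2000, dt=0.01,
                        sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz-63 system with RK4 from random initial states.

    Returns a (n, steps + 1, 3) tensor of trajectories.
    """
    def f(s):
        x, y, z = s[:, 0], s[:, 1], s[:, 2]
        return torch.stack(
            [sigma * (y - x), x * (rho - z) - y, x * y - beta * z], dim=1)

    s = torch.randn(n, 3)                 # random initial conditions
    traj = [s]
    for _ in range(steps):
        k1 = f(s)
        k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2)
        k4 = f(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(s)
    return torch.stack(traj, dim=1)
```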
We train a latent FM (Davtyan et al., 2023) and a latent FDM for video prediction. We use a pre-trained VQGAN (Esser et al., 2021) to encode each video frame into the latent space and to decode generated latents back into frames. See the subdirectory README.
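Schematically, the latent pipeline looks like the sketch below: frames are encoded with the VQGAN, flow matching runs in latent space conditioned on past frames (as in Davtyan et al., 2023), and sampled latents are decoded back to pixels. All names (`vqgan.encode`, `v_theta`) and shapes are assumptions, not the repository's API.

```python
import torch

def latent_fm_step(v_theta, vqgan, frames, past=2):
    """One latent flow-matching training step for video prediction (illustrative).

    frames: (B, T, C, H, W) with T > past; conditions on `past` context frames
    and regresses the straight-line flow toward the next frame's latent.
    """
    B, T = frames.shape[:2]
    with torch.no_grad():
        z = vqgan.encode(frames.flatten(0, 1))      # assumed per-frame encoder
        z = z.unflatten(0, (B, T))
    ctx, z1 = z[:, :past], z[:, past]               # context latents, target latent
    z0 = torch.randn_like(z1)                       # noise endpoint
    t = torch.rand(B, device=z1.device)
    tv = t.view(B, *([1] * (z1.dim() - 1)))
    zt = (1 - tv) * z0 + tv * z1                    # straight-line interpolant
    u = z1 - z0                                     # conditional target field
    v = v_theta(t, zt, ctx)                         # assumed conditional field net
    return ((v - u) ** 2).flatten(1).sum(dim=1).mean()
```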