The maker `particle_filter.build_filter` shows that the functions `propagate_sample` and `log_potential` are called separately in a filtering step. Although this is textbook standard, I think it is more reasonable to merge the two into one call, e.g., `propose_and_log_potential()`. As in,
p(x_k | x_{k-1}) = N(x_k | f(x_{k-1}), V)
q(x_k | x_{k-1}, y) = some approximation to p(x_k | x_{k-1}) p(y_k | x_k)

Choose the proposal and potential as

M(x_k | x_{k-1}) = q(x_k | x_{k-1}, y)
G(x_k, x_{k-1}) = p(x_k | x_{k-1}) p(y_k | x_k) / q(x_k | x_{k-1}, y)
It is very common that p and q share the same evaluation of f(x_{k-1}). With the split interface we have to evaluate f at least twice, unnecessarily: once to propose from q, and again to evaluate p (and q) inside the potential G. This is going to be slow when f is a neural net; you don't want to double the NFEs when a single evaluation suffices. Or does the current framework support passing intermediate values between the two callbacks (in an obvious way)?
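To make the suggestion concrete, here is a minimal sketch of what a merged `propose_and_log_potential()` could look like. Everything here is hypothetical (the function signature, the placeholder transition mean `f`, and the choice of q as a precision-weighted Gaussian combining f(x_{k-1}) and y, which assumes a linear-Gaussian observation model); the point is only that f is evaluated once and its result is reused for both the proposal and the potential.

```python
import numpy as np

def f(x):
    # Placeholder for the (expensive) transition mean, e.g. a neural net.
    return np.tanh(x)

def log_normal(x, mean, var):
    # Log density of an isotropic Gaussian N(x | mean, var * I).
    d = x.shape[-1]
    return -0.5 * (np.sum((x - mean) ** 2, axis=-1) / var
                   + d * np.log(2 * np.pi * var))

def propose_and_log_potential(rng, x_prev, y, V=0.1, R=0.2):
    """Draw x_k ~ q and return (x_k, log G) with one evaluation of f.

    Hypothetical interface. q is taken to be the Gaussian proportional
    to N(x_k | f(x_prev), V) * N(y | x_k, R), i.e. a product-of-Gaussians
    approximation to p(x_k | x_{k-1}) p(y_k | x_k).
    """
    mean_p = f(x_prev)                 # the one and only call to f
    # Precision-weighted combination of the prior mean and the observation.
    var_q = 1.0 / (1.0 / V + 1.0 / R)
    mean_q = var_q * (mean_p / V + y / R)
    x_new = mean_q + np.sqrt(var_q) * rng.standard_normal(x_prev.shape)
    # log G = log p(x_k | x_prev) + log p(y | x_k) - log q(x_k | x_prev, y),
    # reusing mean_p instead of calling f again.
    log_g = (log_normal(x_new, mean_p, V)
             + log_normal(y, x_new, R)
             - log_normal(x_new, mean_q, var_q))
    return x_new, log_g
```

With the current split interface, `propagate_sample` would compute `mean_p` to sample `x_new`, and `log_potential` would have to recompute it to evaluate the p and q densities; merging the two calls removes the duplicate NFE.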