Include objective function values and gradients at nominal parameter values #259

Open

dweindl opened this issue Dec 18, 2024 · 4 comments

@dweindl (Member) commented Dec 18, 2024

As mentioned in #208, I think the relevance of this repository could be increased further by including objective function values and gradients at the provided nominal parameter values, supporting its use as a PEtab integration test suite.

I would like to include:

  • the likelihood (potentially for each individual simulation condition + total)
  • the prior
  • the unnormalized posterior
  • the respective gradients
  • ...?

The specific format is to be discussed. I'd suggest using some easily extensible JSON/YAML file in each problem directory.
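For illustration, a minimal sketch of what such a file could contain (file name, field names, and all numbers are hypothetical):

```yaml
# <problem_dir>/reference_values.yaml -- hypothetical name and layout
llh:
  total: -123.456789
  by_condition:                  # optional, where tools can provide it
    condition_1: -60.0
    condition_2: -63.456789
prior: -1.234
unnormalized_posterior: -124.690789   # llh + prior
grad_llh: [0.12, -3.4, 5.6e-3]        # w.r.t. the estimated parameters
settings:
  rtol: 1.0e-12                       # integration tolerances used
  atol: 1.0e-12
```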

I could provide the results from pypesto/amici. Would somebody be willing to do the same using another tool for validation?

@FFroehlich (Collaborator) commented

@sebapersson ?

@sebapersson (Contributor) commented

I would be happy to validate with PEtab.jl, and I fully agree that many of these things should be included.

From experience comparing gradients against AMICI, we have to use really small tolerances (around 1e-12, for both the ODE solver and steady-state termination) to get consistent results, as models with pre-equilibration in particular are numerically tricky.
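To illustrate the point about tolerances (not PEtab.jl's or AMICI's actual API; a generic sketch using SciPy on a made-up two-state model):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x, p):
    # made-up linear two-state model, purely for illustration
    return [-p[0] * x[0], p[0] * x[0] - p[1] * x[1]]

p = (0.5, 0.2)
y0, t_span = [1.0, 0.0], (0.0, 10.0)
# default-ish vs. tight tolerances; gradients computed on top of such
# solutions tend to agree across tools only in the tight setting
loose = solve_ivp(rhs, t_span, y0, args=(p,), rtol=1e-3, atol=1e-6)
tight = solve_ivp(rhs, t_span, y0, args=(p,), rtol=1e-12, atol=1e-12)
print(np.max(np.abs(loose.y[:, -1] - tight.y[:, -1])))  # solver-induced error
```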

The only things I wonder about are:

  • Is the unnormalized prior needed? If one uses built-in functions, at least in Julia they do not return the unnormalized prior.
  • Do we need the likelihood per simulation condition? It can be tricky to extract for most tools.
  • Do we also want gradients for the really large models (e.g., Chen and Froehlich)? As adjoint sensitivity analysis is a bit fickle in Julia, I think at least getting the gradient for the Froehlich model will prove tricky.

@dweindl (Member, Author) commented Dec 18, 2024

> I would be happy to validate with PEtab.jl, and I fully agree that many of these things should be included.

Great. Let's talk when you are available.

> From experience comparing gradients against AMICI, we have to use really small tolerances (around 1e-12, for both the ODE solver and steady-state termination) to get consistent results, as models with pre-equilibration in particular are numerically tricky.

Right, there will certainly be numerical differences. I would try to deposit reference values obtained with tight integration tolerances and leave it up to users to decide what differences they are willing to accept.
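As a sketch of what that could look like on the user side (all numbers made up for illustration):

```python
import numpy as np

# deposited reference values (made up for illustration)
ref = {"llh": -123.456789, "grad_llh": [0.1234, -2.3456, 4.2e-3]}

# values computed by the tool under test (made up, slightly off)
llh = -123.456791
grad = np.array([0.1234, -2.3455, 4.2e-3])

# the acceptable deviation is the user's decision, not part of the reference
assert np.isclose(llh, ref["llh"], rtol=0.0, atol=1e-4)
assert np.allclose(grad, ref["grad_llh"], rtol=1e-3, atol=1e-8)
```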

> * Is the unnormalized prior needed? If one uses built-in functions, at least in Julia they do not return the unnormalized prior.
>
> * Do we need the likelihood per simulation condition? It can be tricky to extract for most tools.
>
> * Do we also want gradients for the really large models (e.g., Chen and Froehlich)? As adjoint sensitivity analysis is a bit fickle in Julia, I think at least getting the gradient for the Froehlich model will prove tricky.

Strictly needed, no. However, from my own experience, if you have a failing test, it's nice if you can quickly narrow down the problem. For example: Is the problem in the initial conditions of the simulations? Does it occur later in the simulation? Is it some specific condition? Is the likelihood okay, but only the posterior differs?
Certain intermediate results may only be available from certain tools. I don't see this as a big problem. The bare minimum would be the objective function value. Additional quantities could be provided along with some confidence rating (something like "obtained from petab.jl==1.2.3, amici==2.3.4", "only tested with some_tool").
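Such a rating could, for example, be attached per quantity in the proposed file (fields hypothetical):

```yaml
llh:
  value: -123.456789
  obtained_from: [petab.jl==1.2.3, amici==2.3.4]
grad_llh:
  value: [0.12, -3.4, 5.6e-3]
  obtained_from: [some_tool]   # i.e. "only tested with some_tool"
```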

@sebapersson (Contributor) commented

> Strictly needed, no. However, from my own experience, if you have a failing test, it's nice if you can quickly narrow down the problem. For example: Is the problem in the initial conditions of the simulations? Does it occur later in the simulation? Is it some specific condition? Is the likelihood okay, but only the posterior differs?
> Certain intermediate results may only be available from certain tools. I don't see this as a big problem. The bare minimum would be the objective function value. Additional quantities could be provided along with some confidence rating (something like "obtained from petab.jl==1.2.3, amici==2.3.4", "only tested with some_tool").

Fair point, we can then include the nllh per simulation condition as well as the prior to help pinpoint issues. And agreed, we can provide at least the objective function value for each problem.

> Great. Let's talk when you are available.

Great, I have some things to wrap up before the holidays, and then I will be gone until the 3rd of February, but after that we can coordinate everything.
