
[mpm] Add Transfer #22807

Open
wants to merge 1 commit into master

Conversation

@xuchenhan-tri (Contributor) commented Mar 25, 2025

Add related testing infra -- ComputeTotalMassAndMomentum for particles.
Fix a bug in SparseGrid::GetGridData().



@xuchenhan-tri added the release notes: none label (this pull request should not be mentioned in the release notes) on Mar 25, 2025
@xuchenhan-tri (Contributor, Author) left a comment

+@amcastro-tri for feature review please

Reviewable status: LGTM missing from assignee amcastro-tri, needs platform reviewer assigned, needs at least two assigned reviewers

@amcastro-tri (Contributor) left a comment

thank you for the references @xuchenhan-tri!

Checkpoint before lunch

Reviewed 8 of 13 files at r1.
Reviewable status: 23 unresolved discussions, LGTM missing from assignee amcastro-tri, needs platform reviewer assigned, needs at least two assigned reviewers


multibody/mpm/particle_data.cc line 42 at r1 (raw file):

}

/* Computes A: ε where ε is the Levi-Civita tensor. */

minor,

Also, consider stating explicitly what indexes get contracted, maybe like wₖ=Aᵢⱼεᵢⱼₖ

Suggestion:

A:ε
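
(For concreteness, a minimal Eigen sketch of that contraction, wₖ = Aᵢⱼεᵢⱼₖ; the helper name is hypothetical and not taken from the file under review:)

    #include <Eigen/Dense>

    /* Returns w with wₖ = Aᵢⱼεᵢⱼₖ, i.e. A contracted with the Levi-Civita
       tensor over its first two indices. */
    Eigen::Vector3d ContractWithLeviCivita(const Eigen::Matrix3d& A) {
      return Eigen::Vector3d(A(1, 2) - A(2, 1),
                             A(2, 0) - A(0, 2),
                             A(0, 1) - A(1, 0));
    }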

multibody/mpm/particle_data.cc line 176 at r1 (raw file):

    const ParticleData<T>& particle_data, const T& dx) {
  MassAndMomentum<T> result;
  const T D = dx * dx * 0.25;

are we okay with baking in the assumption of quadratic kernels here?
If mostly used for testing, consider at least a comment/warning in the documentation of this function

Code quote:

const T D = dx * dx * 0.25;
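
(To make the quadratic-kernel assumption concrete, a standalone sanity check, not part of the PR, showing where the 0.25 comes from: for quadratic B-spline weights, Dₚ = Σᵢ wᵢₚ(xᵢ − xₚ)² reduces to dx²/4 in 1D.)

    #include <cmath>
    #include <cstdio>

    int main() {
      const double dx = 0.1;
      const double xp = 0.234;  // Arbitrary particle position.
      const int base = static_cast<int>(std::round(xp / dx));  // Nearest node.
      double D = 0.0;
      for (int i = base - 1; i <= base + 1; ++i) {
        const double xi = i * dx;
        const double t = std::abs(xp - xi) / dx;  // t <= 1.5 for these nodes.
        // Quadratic B-spline weight.
        const double w = t < 0.5 ? 0.75 - t * t : 0.5 * (1.5 - t) * (1.5 - t);
        D += w * (xi - xp) * (xi - xp);
      }
      std::printf("D = %g vs dx*dx/4 = %g\n", D, dx * dx * 0.25);  // Both 0.0025.
      return 0;
    }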

multibody/mpm/mock_sparse_grid.h line 97 at r1 (raw file):

  }

  void IterateGrid(const std::function<void(GridData<T>*)>& func);

wouldn't it make sense for this to also provide a Vector3 for the position of that grid data? ditto in SparseGrid


multibody/mpm/transfer.cc line 25 at r1 (raw file):

void Transfer<Grid>::ParticleToGrid() {
  /* U == T when Grid == SparseGrid<T>.
     U == double when Grid == MockSparseGrid<T>. */

isn't this reversed?

Code quote:

  /* U == T when Grid == SparseGrid<T>.
     U == double when Grid == MockSparseGrid<T>. */

multibody/mpm/transfer.cc line 29 at r1 (raw file):

  using PadNodeType = typename Grid::PadNodeType;
  using PadDataType = typename Grid::PadDataType;
  auto p2g_kernel = [this](int data_index, const PadNodeType& grid_x,

nit, consider particle_index, or even just p, to match notation below, since we also have grid data.

Code quote:

data_index

multibody/mpm/transfer.cc line 39 at r1 (raw file):

    const bool participating = particle_data.in_constraint()[data_index];
    const BsplineWeights<U> bspline =
        MakeBsplineWeights(x, static_cast<U>(grid_->dx()));

x is of type Vector3<T>, shouldn't this be using T instead of U?
i.e.

Suggestion:

    const BsplineWeights<T> bspline =
        MakeBsplineWeights(x, static_cast<T>(grid_->dx()));

multibody/mpm/transfer.cc line 42 at r1 (raw file):

    for (int i = 0; i < 3; ++i) {
      for (int j = 0; j < 3; ++j) {
        for (int k = 0; k < 3; ++k) {

consider reserving i for the "grid index" (which I realize in practice means nothing, only in math).
And here, for the local pad indexes, consider a, b, c

Code quote:

    for (int i = 0; i < 3; ++i) {
      for (int j = 0; j < 3; ++j) {
        for (int k = 0; k < 3; ++k) {

multibody/mpm/transfer.cc line 43 at r1 (raw file):

      for (int j = 0; j < 3; ++j) {
        for (int k = 0; k < 3; ++k) {
          const U& w = bspline.weight(i, j, k);

ditto, T?

Code quote:

  const U& w = bspline.weight(i, j, k);

multibody/mpm/transfer.cc line 43 at r1 (raw file):

      for (int j = 0; j < 3; ++j) {
        for (int k = 0; k < 3; ++k) {
          const U& w = bspline.weight(i, j, k);

To match notation with papers and proof below, consider

Suggestion:

const U& w_ip = 

multibody/mpm/transfer.cc line 51 at r1 (raw file):

           equivalence here:

           The new grid momentum is given by mvᵢⁿ + fᵢdt with mvᵢⁿ being the

should we add a time superscript to the grid force as well?

Code quote:

by mvᵢⁿ + fᵢdt 

multibody/mpm/transfer.cc line 55 at r1 (raw file):

           particles. That is,

           mvᵢⁿ = Σₚ mₚvₚ + Cₚ(xᵢ - xₚ) wᵢₚ (equation 178 [Jiang et al. 2016])

nit,

Suggestion:

 mvᵢⁿ = Σₚ mₚ(vₚ + Cₚ(xᵢ - xₚ))wᵢₚ 

multibody/mpm/transfer.cc line 65 at r1 (raw file):

           Noting that
             Fₚ = (I + dtCₚ)Fₚⁿ (equation 17 [Hu et al. 2018])
             Cₚ = Bₚ * D⁻¹ (equation 173 [Jiang et al. 2016]), and

Eq. 173 in Jiang shows the P2G momentum transfer equation

Code quote:

  Cₚ = Bₚ * D⁻¹ (equation 173 [Jiang et al. 2016]), and

multibody/mpm/transfer.cc line 70 at r1 (raw file):

           we compute -∂E/∂xᵢ and get

            fᵢ = -∑ₚ Vₚ * Pₚ * Fₚⁿᵀ * D⁻¹ * (xᵢ − xₚ) * wᵢₚ

isn't a dt factor missing?

Code quote:

 fᵢ = -∑ₚ Vₚ * Pₚ * Fₚⁿᵀ * D⁻¹ * (xᵢ − xₚ) * wᵢₚ

multibody/mpm/transfer.cc line 76 at r1 (raw file):

           reveals that mvᵢⁿ + fᵢdt is given by the equation in the code below.
          */
          const T mi = m * w;

mi sounds a whole lot like the "grid" mass, at least in papers.
Consider m_ip or similar (contribution to m_i from p)

Code quote:

const T mi = m * w;

multibody/mpm/transfer.cc line 78 at r1 (raw file):

          const T mi = m * w;
          (*grid_data)[i][j][k].v +=
              mi * v + (m * C - D_inverse_dt_ * tau_volume) * (xi - x) * w;

I'm having problems grokking this. Since you already went through so much trouble above, could you complete the proof all the way to the final result?

Code quote:

(m * C - D_inverse_dt_ * tau_volume) 

multibody/mpm/transfer.cc line 89 at r1 (raw file):

template <typename Grid>
void Transfer<Grid>::GridToParticle() {

comments from above on U, data_index, and similar minor notation also apply here.


multibody/mpm/transfer.cc line 114 at r1 (raw file):

          const U& w = bspline.weight(i, j, k);
          v += w * vi;
          C += (w * vi) * (xi - x).transpose();

minor, consider adding references with equation numbers as above. I found that very useful, even in this trivial case.


multibody/mpm/transfer.cc line 120 at r1 (raw file):

    x += v * dt_;
    C *= D_inverse_;
    F += C * dt_ * F;

maybe that's the common usage of the "transfer" language? it wasn't clear to me from the papers.
But here in my mind two things happen: transfer + time stepping (same above for P2G).
Consider a name that reflects that. Maybe GridToParticleAndStep()?


multibody/mpm/transfer.h line 43 at r1 (raw file):

   `grid` and `particle_data` must outlive the Transfer object.
   @pre grid and particles are not null and dt > 0. */
  Transfer(T dt, Grid* grid, ParticleData<T>* particle_data);

It looks like dt and dx are all you need at construction.
Why not pass grid and particles as arguments to the transfer functions below?
An advantage would be that you could clearly mark these arguments const/non-const as needed, plus there would not be any lifetime requirements.
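
(To make the suggestion concrete, a hypothetical shape of that alternative API; the names are illustrative only, not the code under review:)

      /* Construction only captures the step parameters. */
      explicit Transfer(T dt);
      /* Grid and particle data are per-call arguments, so constness is explicit
         and no lifetime requirement is imposed on the caller. */
      void ParticleToGrid(const ParticleData<T>& particle_data, Grid* grid);
      void GridToParticle(const Grid& grid, ParticleData<T>* particle_data);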


multibody/mpm/transfer.h line 49 at r1 (raw file):

  const ParticleData<T>& particle_data() const { return *particle_data_; }

  /* The Particle to grid transfer (P2G). After the call to P2G, the grid store

minor,

Suggestion:

stores

multibody/mpm/transfer.h line 50 at r1 (raw file):

  /* The Particle to grid transfer (P2G). After the call to P2G, the grid store
   the mass and momentum transfered from the particles using APIC. */

clarify that v will store momentum, not velocities


multibody/mpm/transfer.h line 62 at r1 (raw file):

  T dt_{};
  Grid* grid_{};
  ParticleData<T>* particle_data_{};

minor,

Suggestion:

  Grid* grid_{nullptr};
  ParticleData<T>* particle_data_{nullptr};

multibody/mpm/transfer.h line 64 at r1 (raw file):

  ParticleData<T>* particle_data_{};
  /* The D inverse matrix in computing the affine matrix. See page 42 in the MPM
   course notes referenced in the class documentation. */

Thanks for the reference!
are we okay with assuming only quadratic interpolation functions?
I imagine the answer is always yes in practice, so please consider adding a note/warning in the class documentation.

Commit: Add related testing infra -- ComputeTotalMassAndMomentum for particles; Fix a bug in SparseGrid::GetGridData()

@xuchenhan-tri (Contributor, Author) left a comment

Reviewable status: 17 unresolved discussions, LGTM missing from assignee amcastro-tri, needs platform reviewer assigned, needs at least two assigned reviewers


multibody/mpm/mock_sparse_grid.h line 97 at r1 (raw file):

Previously, amcastro-tri (Alejandro Castro) wrote…

wouldn't it make sense for this to also provide a Vector3 for the position of that grid data? ditto in SparseGrid

For many operations (e.g. turning momentum to velocity and adding external forces into the momentum), the position of the grid is not necessary. We can add an overload that provides more information when necessary.
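
(A hypothetical shape such an overload could take, not part of this PR; whether the callback receives an integer node coordinate or a world-frame position is an open design choice:)

      void IterateGrid(
          const std::function<void(const Vector3<int>& node, GridData<T>*)>& func);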


multibody/mpm/particle_data.cc line 42 at r1 (raw file):

Previously, amcastro-tri (Alejandro Castro) wrote…

minor,

Also, consider stating explicitly what indexes get contracted, maybe like wₖ=Aᵢⱼεᵢⱼₖ

Done


multibody/mpm/particle_data.cc line 176 at r1 (raw file):

Previously, amcastro-tri (Alejandro Castro) wrote…

are we okay with baking in the assumption of quadratic kernels here?
If mostly used for testing, consider at least a comment/warning in the documentation of this function

I added a note in the header.

The assumption of quadratic kernels has been built into this module since the very beginning. The Bspline weights class is explicitly quadratic, the pad data structure is designed to serve quadratic interpolation, etc.


multibody/mpm/transfer.h line 43 at r1 (raw file):

Previously, amcastro-tri (Alejandro Castro) wrote…

It looks like dt and dx is all you need at construction.
Why not to pass grid and particles as arguments to the transfer function below?
An advantage would be that you can clearly mark const/non-const these arguments as needed?, plus thre would no be any lifetime requirements.

working


multibody/mpm/transfer.h line 49 at r1 (raw file):

Previously, amcastro-tri (Alejandro Castro) wrote…

minor,

Done


multibody/mpm/transfer.h line 50 at r1 (raw file):

Previously, amcastro-tri (Alejandro Castro) wrote…

clarify that v will store momentum, not velocities

Done


multibody/mpm/transfer.h line 62 at r1 (raw file):

Previously, amcastro-tri (Alejandro Castro) wrote…

minor,

Both spellings are acceptable


multibody/mpm/transfer.h line 64 at r1 (raw file):

Previously, amcastro-tri (Alejandro Castro) wrote…

Thanks for the reference!
are we okay with assuming only quadratic interpolation functions?
I imagine the answer is always yes in practice, so please consider adding note/warning in the class documentation.

Done.

See the thread above for related discussions.


multibody/mpm/transfer.cc line 25 at r1 (raw file):

Previously, amcastro-tri (Alejandro Castro) wrote…

isn't this reversed?

I don't think so. Can you elaborate on the concern?


multibody/mpm/transfer.cc line 29 at r1 (raw file):

Previously, amcastro-tri (Alejandro Castro) wrote…

nit, consider particle_index, or even just p, to match notation below, since we also have grid data.

Done, thanks.


multibody/mpm/transfer.cc line 39 at r1 (raw file):

Previously, amcastro-tri (Alejandro Castro) wrote…

x is of type Vector3<T>, shouldn't this be using T instead of U?
i.e.

T is the particle/grid data scalar type, which can be autodiff. There's no autodiff support for BsplineWeights.


multibody/mpm/transfer.cc line 42 at r1 (raw file):

Previously, amcastro-tri (Alejandro Castro) wrote…

consider to reserve i for "grid index" (which I realize in practice means nothing, only in math)
And here for local pad indexes consider a,b,c

Done


multibody/mpm/transfer.cc line 43 at r1 (raw file):

Previously, amcastro-tri (Alejandro Castro) wrote…

ditto, T?

See other discussions.


multibody/mpm/transfer.cc line 43 at r1 (raw file):

Previously, amcastro-tri (Alejandro Castro) wrote…

To match notation with papers and proof below, consider

Done.


multibody/mpm/transfer.cc line 51 at r1 (raw file):

Previously, amcastro-tri (Alejandro Castro) wrote…

should we add time superscript to the grid force as well?

Done.


multibody/mpm/transfer.cc line 55 at r1 (raw file):

Previously, amcastro-tri (Alejandro Castro) wrote…

nit,

Done, thanks


multibody/mpm/transfer.cc line 65 at r1 (raw file):

Previously, amcastro-tri (Alejandro Castro) wrote…

Eq. 173 in Jiang, shows the P2G momentum transfer equation

The term C isn't explicitly defined. You have to observe that Cₚ = Bₚ * D⁻¹ from the context.


multibody/mpm/transfer.cc line 70 at r1 (raw file):

Previously, amcastro-tri (Alejandro Castro) wrote…

isn't a dt factor misssing?

This is the energy's derivative w.r.t. position, and it's a force, not an impulse.


multibody/mpm/transfer.cc line 76 at r1 (raw file):

Previously, amcastro-tri (Alejandro Castro) wrote…

mi sounds a whole lot as "grid" mass, at least in papers.
Consider m_ip or similar (contribution to m_i from p)

Done.


multibody/mpm/transfer.cc line 78 at r1 (raw file):

Previously, amcastro-tri (Alejandro Castro) wrote…

I'm having problesm grokking this. Since you arelady went trhough so much trouble above, could you complete the proof until the final result?

Hmm, this is just replacing Vₚ * Pₚ * Fₚⁿᵀ with tau_v0 and grouping terms a little differently (which I describe in words). There's not much more to show.
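
(For readers following along, the grouping spelled out, with τₚ ≡ Vₚ Pₚ Fₚⁿᵀ, i.e. tau_volume, and recalling that D⁻¹ is a scalar multiple of the identity for quadratic kernels, so it commutes freely:)

    mvᵢⁿ + fᵢⁿdt = Σₚ wᵢₚ(mₚvₚ + mₚCₚ(xᵢ − xₚ)) − dt Σₚ wᵢₚ τₚ D⁻¹(xᵢ − xₚ)
                 = Σₚ wᵢₚ [mₚvₚ + (mₚCₚ − dt D⁻¹ τₚ)(xᵢ − xₚ)]

which is the per-particle contribution `mi * v + (m * C - D_inverse_dt_ * tau_volume) * (xi - x) * w` accumulated in the code.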


multibody/mpm/transfer.cc line 89 at r1 (raw file):

Previously, amcastro-tri (Alejandro Castro) wrote…

comments from above on U, data_index` and similar minor notation, also apply here.

Done (except the comment on U)


multibody/mpm/transfer.cc line 114 at r1 (raw file):

Previously, amcastro-tri (Alejandro Castro) wrote…

minor, consider adding references with equation numbers as above. I found that very usueful, even in this trivial case.

Done


multibody/mpm/transfer.cc line 120 at r1 (raw file):

Previously, amcastro-tri (Alejandro Castro) wrote…

maybe that's the common usage of the "transfer" language? it wasn't clear to me from papers.
But here in my mind two things happen: transfer + time stepping (same abover fro P2G).
Consider a name that reflects that. Maybe GridToParticleAndStep()?

That would be deviating from the convention and I don't want to do that. I think it would bring more confusion than clarity. Some would argue that the stepping happens in the grid update, not the interpolation, and G2P is only interpolating the "deformed grid" back to the particles.

@xuchenhan-tri (Contributor, Author) left a comment

Reviewable status: 17 unresolved discussions, LGTM missing from assignee amcastro-tri, needs platform reviewer assigned, needs at least two assigned reviewers


multibody/mpm/transfer.h line 43 at r1 (raw file):

Previously, xuchenhan-tri wrote…

working

I tried it out, but I'm not sure how much value this gets in practice. We get to mark the const/non-const a bit more clearly, but we need to preprocess the grid somewhere else to prepare it for the transfer. Right now we are doing that in the constructor.

I don't see the lifetime requirement as too big an issue. The transfer class is lightweight and you can always throw the old away and grab a new one. WDYT?

@amcastro-tri (Contributor) left a comment

Reviewed 4 of 4 files at r2.
Reviewable status: 7 unresolved discussions, LGTM missing from assignee amcastro-tri, needs platform reviewer assigned, needs at least two assigned reviewers


multibody/mpm/mock_sparse_grid.h line 97 at r1 (raw file):

Previously, xuchenhan-tri wrote…

For many operations (e.g. turning momentum to velocity and adding external forces into the momentum), the position of the grid is not necessary. We can add an overload that provides more information when necessary.

Fair, I imagine if you introduce the new signature it'll make this one obsolete.


multibody/mpm/particle_data.cc line 176 at r1 (raw file):

Previously, xuchenhan-tri wrote…

I added a note in the header.

The assumption of quadratic kernels have been built into this module since the very beginning. The Bspline weights class is explicitly quadratic, the pad data structure is designed to serve quadratic interpolation, etc.

yes, that's true actually. I just figured it was worth documenting somewhere in this scope.


multibody/mpm/transfer.h line 43 at r1 (raw file):

Previously, xuchenhan-tri wrote…

I tried it out, but I'm not sure how much value this gets in practice. We get to mark the const/non-const a bit more clearly, but we need to preprocess the grid somewhere else to prepare it for the transfer. Right now we are doing that in the constructor.

I don't see the lifetime requirement as too big an issue. The transfer class is lightweight and you can always throw the old away and grab a new one. WDYT?

I see your point. A tradeoff could be that you take a mutable grid data to prepare it as part of the construction of this object, but do not keep a reference to it. Instead, you pass grid and particle data as function arguments.
Non-blocking though, just for your consideration.


multibody/mpm/transfer.h line 62 at r1 (raw file):

Previously, xuchenhan-tri wrote…

Both spellings are acceptable

I would've sworn this was in our GSG. It's not. Thanks.


multibody/mpm/transfer.cc line 25 at r1 (raw file):

Previously, xuchenhan-tri wrote…

I don't think so. Can you elaborate the concern?

humm.... I guess I thought you wanted U = T when using the mock grid? I thought that for unit testing with the mock grid, and in particular with T = AutoDiff, "everything" would resolve to AutoDiffXd. It takes me a bit by surprise that you want the nodes to be of type double in that case?

For SparseGrid<T>, where T can only be double or float, it makes sense that U = T.
So.... wouldn't we want U = T always, regardless of the grid type?


multibody/mpm/transfer.cc line 39 at r1 (raw file):

Previously, xuchenhan-tri wrote…

T is the particle/grid data scalar type, which can be autodiff. There's no autodiff support for bsplineweights.

ah.. is that why the U = double above? I wonder, if you did allow autodiff for bsplines, wouldn't that simplify code like this by eliminating the coordination of two different scalar types?
Or is there a constraint in bsplines that I don't see right now that would block this idea?


multibody/mpm/transfer.cc line 65 at r1 (raw file):

Previously, xuchenhan-tri wrote…

The term C isn't explicitly defined. You have to observe that Cₚ = Bₚ * D⁻¹ from the context.

ah... I see what you mean. I was thinking you tried to reference an explicit definition of C, which obviously 173 is not.


multibody/mpm/transfer.cc line 70 at r1 (raw file):

Previously, xuchenhan-tri wrote…

This is the energy's derivative w.r.t. position, and it's force, not impulse.

I see a collection of problems, though I still don't understand the origin. I'll point them out and I'm sure you can tell me what's going on:

  1. If I follow your text (i.e. chain rule through those expressions), I get: fᵢⁿ = -∑ₚ Vₚ * Pₚ * Fₚⁿᵀ * D⁻¹ * wᵢₚ * dt * vᵢ (where vᵢ stems from the gradient through Bₚ and dt comes from the gradient through Fₚ).
  2. You wrote fᵢⁿ = -∂E/∂xᵢ, but then below, to use the chain rule, you are using an approximation for Fₚⁿ⁺¹? i.e. Fₚⁿ⁺¹ = Fₚ = (I + dtCₚ)Fₚⁿ. So... then we are computing an approximation for fᵢⁿ⁺¹?
  3. Finally I realize I do not understand what ∂E/∂xᵢ means when grid nodes do not move, maybe that's the root cause of my confusion?

multibody/mpm/transfer.cc line 78 at r1 (raw file):

Previously, xuchenhan-tri wrote…

Hmm, this is just replacing Vₚ * Pₚ * Fₚⁿᵀ with tau_v0 and grouping terms a little differently (which I describe in words). There's not much more to show.

ah, I forgot to merge with the equation at the very top, i.e. (mvᵢ)ⁿ⁺¹ = mvᵢⁿ + fᵢⁿdt


multibody/mpm/transfer.cc line 120 at r1 (raw file):

Previously, xuchenhan-tri wrote…

That would a deviating from the convention and I don't want to do that. I think it will bring more confusion than clarity. Some would argue that the stepping happens in the grid update, not the interpolation, and G2P is only interpolating the "deformed grid" to the particle

In my mind, when papers talk about P2G/G2P, they refer to the bare bones interpolation back and forth.
For instance, here you have:

  • x += v * dt_;
  • F += C * dt_ * F

Isn't that stepping in time? So this class does more than "Transfer", no?

@amcastro-tri (Contributor) left a comment

Working... everyone seems to have their own MPM flavor, I am trying to understand yours.
That's the easy part, equations.

There is one more higher level design thing I am trying to understand, and that is time-stepping vs transfer. Are they unavoidably coupled as done here? or could they belong to separate files?

I'll study some more docs and come back to you on that one.

Reviewable status: 7 unresolved discussions, LGTM missing from assignee amcastro-tri, needs platform reviewer assigned, needs at least two assigned reviewers

@xuchenhan-tri (Contributor, Author) left a comment

Are they unavoidably coupled as done here? or could they belong to separate files?

I don't see a reason to separate them; the grid state is already at the next time step when we start G2P. G2P is only collecting the next-step state back to the particles.

Reviewable status: 8 unresolved discussions, LGTM missing from assignee amcastro-tri, needs platform reviewer assigned, needs at least two assigned reviewers


multibody/mpm/transfer.h line 43 at r1 (raw file):

Previously, amcastro-tri (Alejandro Castro) wrote…

I see your point. I tradeoff could be that you do take a mutable gridata to prepare it as part of the construction of this object, but do not keep a reference to it. Instead, you do pass grid and particle data as function arguments.
Non-blocking though, just for your consideration.

I'm experimenting with that idea in my follow-up PRs. Let me come back to this once we are happy with the rest of the PR.


multibody/mpm/transfer.cc line 25 at r1 (raw file):

Previously, amcastro-tri (Alejandro Castro) wrote…

humm.... I guess I thought you wanted U = T when using the mock grid? I thought that for unit testing with the mock grid, and in particular with T=AutoDiff, that "everything" would resolve to AutoDiffXd. It takes me a bit by surprise that you want nodes to be of type double in such case?

For SparseGrid<T>, where T can only be double or float, then it makese sense U = T.
So.... wouldn't we want U = T always? regardless of the grid type?

The grid dx is a fixed parameter. It doesn't make sense to differentiate it.

Furthermore, the PadNodes are not differentiable. They're not even continuous. As particles move across cell boundaries, the affected pad of grid nodes changes discretely.


multibody/mpm/transfer.cc line 39 at r1 (raw file):

Previously, amcastro-tri (Alejandro Castro) wrote…

ah.. is that why the U = double above? I wonder, if you do allow autodiff for bsplines? wouldn't that simplify codes like this where you could eliminate the coordination of two different scalar types?
Or is there a constraint in bsplines I don't see right now for that'd block this idea?

I guess it's possible to make an autodiffxd specialization for bspline and implement the derivatives. But it's pointless because 1. it doesn't eliminate the two scalars like you said, and 2. the derivatives are useless because they will all be zero in my tests.


multibody/mpm/transfer.cc line 70 at r1 (raw file):

Previously, amcastro-tri (Alejandro Castro) wrote…

I see a collection of problems, though I still don't unerstand the origin. I'll point them out and I'm sure you can tell me what's going on:

  1. If I follow your text (i.e. chain rule through those expressions), I get: fᵢⁿ = -∑ₚ Vₚ * Pₚ * Fₚⁿᵀ * D⁻¹ * wᵢₚ * dt * vᵢ (where vᵢ stems from the gradient through Bₚ and dt comes from the gradient through Fₚ).
  2. You wrote fᵢⁿ = -∂E/∂xᵢ, but then below, to use the chain rule, you are using an approximation for Fₚⁿ⁺¹? i.e. Fₚⁿ⁺¹ = Fₚ = (I + dtCₚ)Fₚⁿ. So... then we are computing an approximation for fᵢⁿ⁺¹?
  3. Finally I realize I do not understand what ∂E/∂xᵢ means when grid nodes do not move, maybe that's the root cause of my confusion?

A common way to think about the interpolation is to imagine the grid nodes moving by defining

xᵢⁿ⁺¹ = xᵢⁿ + dt vᵢⁿ⁺¹ and interpolating the next time step's particle position from the "moved" grid positions.

In other words, the independent variable is really vᵢ. So when I write fᵢⁿ = −∂E/∂xᵢ I really mean fᵢⁿ = −∂E/∂vᵢ * 1/dt.
I can see how this introduces confusion, because in Bₚ = ∑ᵢ wᵢₚ vᵢ(xᵢ − xₚ)ᵀ, the xᵢ is xᵢⁿ, not xᵢⁿ⁺¹, and it's constant. The derivative should hit vᵢ instead.

Let me know if that clears up the confusion for you. If so, I'll update the comment accordingly.
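
(Writing that out under the moving-nodes convention above: with Fₚ = (I + dt Bₚ D⁻¹)Fₚⁿ and Bₚ = Σᵢ wᵢₚ vᵢ(xᵢ − xₚ)ᵀ, the energy depends on vᵢ only through Fₚ, so)

    ∂E/∂vᵢ = Σₚ Vₚ Pₚ : ∂Fₚ/∂vᵢ = dt Σₚ wᵢₚ Vₚ Pₚ Fₚⁿᵀ D⁻¹ (xᵢ − xₚ)
    fᵢ = −(1/dt) ∂E/∂vᵢ = −Σₚ Vₚ Pₚ Fₚⁿᵀ D⁻¹ (xᵢ − xₚ) wᵢₚ

(i.e. the dt from differentiating through Fₚ cancels the 1/dt, recovering the force expression in the code comment.)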


multibody/mpm/transfer.cc line 78 at r1 (raw file):

Previously, amcastro-tri (Alejandro Castro) wrote…

ah, I forgot to merge with the equation at the very top, i.e. (mvᵢ)ⁿ⁺¹ = mvᵢⁿ + fᵢⁿdt

Done?


multibody/mpm/transfer.cc line 120 at r1 (raw file):

Previously, amcastro-tri (Alejandro Castro) wrote…

In my mind, when papers talk about P2G/G2P, they refer to the bare bones interpolation back and forth.
For intance, here you have:

  • x += v * dt_;
  • F += C * dt_ * F

Isn't that stepping in time? so this class does more than "Transfer", no?

Stepping in time has already happened when we interpolate the grid velocity back to the particle (i.e. vₚ is already at vₚⁿ⁺¹, and Cₚ is at Cₚⁿ⁺¹).

I don't really mind splitting the position update and the velocity update, but I'm not seeing a good reason to do that.
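
(In equations, what these lines do, in the same notation as the P2G comment: the transfer part is the two sums, and the "stepping" part is just evaluating the already-updated grid state at the particles:)

    vₚⁿ⁺¹ = Σᵢ wᵢₚ vᵢⁿ⁺¹
    Cₚⁿ⁺¹ = (Σᵢ wᵢₚ vᵢⁿ⁺¹ (xᵢ − xₚⁿ)ᵀ) D⁻¹
    xₚⁿ⁺¹ = xₚⁿ + dt vₚⁿ⁺¹
    Fₚⁿ⁺¹ = (I + dt Cₚⁿ⁺¹) Fₚⁿ        (equation 17 in [Hu et al. 2018], cited above)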

@xuchenhan-tri (Contributor, Author) left a comment

Reviewable status: 8 unresolved discussions, LGTM missing from assignee amcastro-tri, needs platform reviewer assigned, needs at least two assigned reviewers


a discussion (no related file):
@amcastro-tri to be sure we are on the same page, this PR is waiting for you to understand the transfer math better. (This is my understanding from our f2f last week. Please correct me if I'm mistaken.)

@amcastro-tri (Contributor) left a comment

Reviewed 5 of 13 files at r1.
Reviewable status: 5 unresolved discussions, needs platform reviewer assigned, needs at least two assigned reviewers


a discussion (no related file):

Previously, xuchenhan-tri wrote…

@amcastro-tri to be sure we are on the same page, this PR is waiting for you to understand the transfer math better. (This is my understanding from our f2f last week. Please correct me if I'm mistaken.)

Yes, the f2f we had last week was excellent, it really helped me with the review of the papers.

:lgtm:

multibody/mpm/transfer.h line 43 at r1 (raw file):
Previously, xuchenhan-tri wrote…

I'm experimenting that idea on my follow-up PRs. Let me come back to this once we are happy with the rest of the PR.

SGTM


multibody/mpm/transfer.cc line 25 at r1 (raw file):

Previously, xuchenhan-tri wrote…

The grid dx is a fixed parameter. It doesn't make sense to differentiate it.

Furthermore, the PadNodes are not differentiable. It's not even continuous. As particle move across cell boundary, the affected pad of grid nodes change discretely.

Yes, that I know. I was just wondering if not having to spell out separate scalar types would be a win from a purely software perspective. Non-blocking (performance is not a critical issue here since it's for testing purposes only).


multibody/mpm/transfer.cc line 39 at r1 (raw file):

1. it doesn't eliminate the two scalars like you said

I see, that's the part I thought to be true. Thanks

2. the derivatives are useless because they will all be zero in my tests.

Yes, the math is the obvious part.


multibody/mpm/test/particle_sorter_test.cc line 218 at r2 (raw file):

  /* Set all particle velocity to (10, 0, 0). */
  for (int i = 0; i < particle_data.num_particles(); ++i) {
    particle_data.mutable_v()[i] = Vector3<T>(10.0, 0.0, 0.0);

why this change in this PR?


multibody/mpm/test/transfer_test.cc line 83 at r2 (raw file):

                               const MassAndMomentum<double>& particle,
                               const MassAndMomentum<double>& expected) {
  const double kTol = 1e-12;

minor, wondering if this one could be tighter


multibody/mpm/test/transfer_test.cc line 116 at r2 (raw file):

  EXPECT_TRUE(CompareMatrices(particle_data.v()[0], vel, 1e-14));
  EXPECT_TRUE(CompareMatrices(particle_data.x()[0], x0 + vel * dt, 1e-14));
  EXPECT_TRUE(CompareMatrices(particle_data.C()[0], Matrix3d::Zero(), 1e-12));

is it expected to see such precision loss in a problem with only one particle?


multibody/mpm/test/particle_data_test.cc line 223 at r2 (raw file):

  }
  EXPECT_TRUE(
      CompareMatrices(result.linear_momentum, total_volume * kRho * v, 1e-14));

minor, wondering if this is the tightest tolerance we can use. Above we used machine epsilon.

@amcastro-tri (Contributor) left a comment

+@jwnimmer-tri for platform review please

Reviewable status: 5 unresolved discussions, LGTM missing from assignee jwnimmer-tri(platform)

@jwnimmer-tri (Collaborator) left a comment

Reviewed 8 of 13 files at r1, 2 of 4 files at r2, all commit messages.
Reviewable status: 8 unresolved discussions, LGTM missing from assignee jwnimmer-tri(platform)


multibody/mpm/test/particle_data_test.cc line 215 at r2 (raw file):

  const MassAndMomentum<T> result = ComputeTotalMassAndMomentum(this->dut_, dx);
  if constexpr (std::is_same_v<T, AutoDiffXd>) {

BTW Do we need the if constexpr? I thought ExtractDoubleOrThrow was overloaded on T=double so we didn't need to add these branches everywhere. Maybe the worry is the lack of a float overload? If so, let's just add that overload and avoid this extra branching?
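
(A hypothetical sketch of what such a float overload might look like, mirroring the existing double overload; illustrative only, not part of this PR:)

      /* Widens a float to double; never throws, by analogy with the double overload. */
      inline double ExtractDoubleOrThrow(float scalar) {
        return static_cast<double>(scalar);
      }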


multibody/mpm/particle_data.h line 214 at r2 (raw file):

 interpolation.
 @param[in] particle_data  The particle data.
 @param[in] dx             The background grid dx [meters] of the particles. */

nit Documentation needs @tparam T The scalar type, ... of some form.

I imagine the answer is "See ParticleData" or similar.


multibody/mpm/particle_data.h line 216 at r2 (raw file):

 @param[in] dx             The background grid dx [meters] of the particles. */
template <typename T>
MassAndMomentum<T> ComputeTotalMassAndMomentum(const ParticleData<T>& particles,

BTW Why is "total energy" a member method but "total mass & momentum" a free function?
