
Mirror of upstream PR #33506 #200


Open · wants to merge 106 commits into `main`

Conversation


@kushxg (Owner) commented on Jul 1, 2025

Mirrored from facebook/react PR facebook#33506

sammy-SC and others added 30 commits May 12, 2025 17:39
This is a fix for a problem where React retains shadow nodes longer than
it needs to. The behaviour is shown in a React Native test:
https://github.com/facebook/react-native/blob/main/packages/react-native/src/private/__tests__/utilities/__tests__/ShadowNodeReferenceCounter-itest.js#L169

# Problem
When React commits a new shadow tree, old shadow nodes are stored inside
`fiber.alternate.stateNode`. This is not cleared until React clones the
node again. This can be problematic if a mutation deletes a subtree; in
that case, `fiber.alternate.stateNode` retains the entire subtree until
the next update. In the case of image nodes, this means retaining entire
images.

So when React goes from revision A: `<View><View /></View>` to revision
B: `<View />`, `fiber.alternate.stateNode` will be pointing to the Shadow
Node that represents revision A.


![image](https://github.com/user-attachments/assets/076b677e-d152-4763-8c9d-4f923212b424)


# Fix
To fix this, this PR adds a new feature flag
`enableEagerAlternateStateNodeCleanup`. When enabled,
`alternate.stateNode` is proactively pointed towards finishedWork's
stateNode, releasing resources sooner.

I have verified this fixes the issue [demonstrated by React Native
tests](https://github.com/facebook/react-native/blob/main/packages/react-native/src/private/__tests__/utilities/__tests__/ShadowNodeReferenceCounter-itest.js#L169).
All existing React tests pass when the flag is enabled.
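
A minimal sketch of the idea behind the flag, assuming simplified Fiber fields (this is illustrative, not React's actual commit-phase code):

```js
// Hypothetical helper: after committing `finishedWork`, point the
// alternate's stateNode at the same host instance so the pre-commit
// shadow node can be released by the host platform (e.g. Fabric).
function eagerlyCleanUpAlternateStateNode(finishedWork) {
  const alternate = finishedWork.alternate;
  if (
    alternate !== null &&
    alternate.stateNode !== null &&
    alternate.stateNode !== finishedWork.stateNode
  ) {
    alternate.stateNode = finishedWork.stateNode;
  }
}
```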
…ook#33028)

## Summary
Our builds generate files with a `.mjs` file extension. These are
currently filtered out by `ReactFlightWebpackPlugin`, so I am updating it
to support this file extension.

This fixes facebook#33155

## How did you test this change?
I built the plugin with this change and used `yalc` to test it in my
project. I confirmed the expected files now show up in
`react-client-manifest.json`
)

Enabled in the experimental channel.

We know these are critical semantics to enforce at the HTML level, since
if you don't, you can't add explicit boundaries after the fact. However,
this might have to go in a major release to allow for upgrading.
…ackTrace (facebook#33143)

When we get the source location for "View source for this element", we
should be using the enclosing function of the callsite of the child, so
that we don't just point to some random line within the component.

This is similar to the technique in facebook#33136.

This technique is now clearly better than the fake-throw technique, when
available, so I now favor the owner technique. The only problem is that
it's only available in DEV and only if the component has a child that's
owned (and not filtered).

We could implement this same technique for the error that's thrown in
the fake-throwing solution. However, we really shouldn't need that at
all, because for client components we should be able to call
`inspect(fn)`, at least in Chrome, which is even better.
…3159)

This keeps track of the transition lane allocated for this event. I want
to be able to use the current one within sync work flushing to know
which lane needs its loading indicator cleared.

It's also a bit weird that transition work scheduled inside sync updates
in the same event isn't entangled with other transitions in that event
when `flushSync` is.

Therefore, this moves the reset to after flushing.

It should have no impact. Just splitting it out into a separate PR for
an abundance of caution.

The only thing this might affect is the case where the React internals
throw and the lane doesn't reset afterwards. But it doesn't really have
to reset, and they're all entangled anyway.
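
A hedged sketch of the bookkeeping described here; the names and shapes are assumptions standing in for React's internals, not the actual code:

```js
// Placeholders standing in for React's internals in this sketch.
const NoLane = 0;
let nextTransitionLane = 1;
const claimNextTransitionLane = () => nextTransitionLane++;
const flushSyncWorkAcrossRoots = () => {};

let currentEventTransitionLane = NoLane;

function requestTransitionLane() {
  // Reuse one Transition lane per event so Transitions scheduled in the
  // same event stay entangled.
  if (currentEventTransitionLane === NoLane) {
    currentEventTransitionLane = claimNextTransitionLane();
  }
  return currentEventTransitionLane;
}

function flushSyncWorkAtEndOfEvent() {
  try {
    // Sync work can still read currentEventTransitionLane here, e.g. to
    // know which lane's loading indicator to clear.
    flushSyncWorkAcrossRoots();
  } finally {
    // Reset after flushing rather than when the lane is requested.
    currentEventTransitionLane = NoLane;
  }
}
```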
…onLane (facebook#33188)

When we're entangled with an async action lane we use that lane instead
of the currentEventTransitionLane. Conversely, if we start a new async
action lane we reuse the currentEventTransitionLane.

So they're basically supposed to be in sync but they're not if you
resolve the async action and then schedule new stuff in the same event.
Then you end up with two transitions in the same event with different
lanes.

By stashing it like this we fix that but it also gives us an opportunity
to check just the currentEventTransitionLane to see if this event
scheduled any regular Transition updates or Async Transitions.
Stacked on facebook#33159.

This implements `onDefaultTransitionIndicator`.

The sequence is:

1) In `markRootUpdated` we schedule Transition updates as needing
`indicatorLanes` on the root. This tracks the lanes that currently need
an indicator to either start or remain going until this lane commits.
2) Track mutations during any commit. We use the same hook that view
transitions use here but instead of tracking it just per view transition
scope, we also track a global boolean for the whole root.
3) If a sync/default commit had any mutations, then we clear the
indicator lane for the `currentEventTransitionLane`. This requires that
the lane is still active while we do these commits. See facebook#33159. In other
words, a sync update gets associated with the current transition and it
is assumed to be rendering the loading state for that corresponding
transition so we don't need a default indicator for this lane.
4) At the end of `processRootScheduleInMicrotask`, right before we're
about to enter a new "event transition lane" scope, it is no longer
possible to render any more loading states for the current transition
lane. That's when we invoke `onDefaultTransitionIndicator` for any roots
that have new indicator lanes.
5) When we commit, we remove the finished lanes from `indicatorLanes`
and once that reaches zero again, then we can clean up the default
indicator. This approach means that you can start multiple different
transitions while an indicator is still going but it won't stop/restart
each time. Instead, it'll wait until all are done before stopping.

Follow ups:

- [x] Default updates are currently not enough to cancel because those
aren't flushed in the same microtask. That's unfortunate. facebook#33186
- [x] Handle async actions before the setState. Since these don't
necessarily have a root this is tricky. facebook#33190
- [x] Disable for `useDeferredValue`. ~Since it also goes through
`markRootUpdated` and schedules a Transition lane it'll get a default
indicator even though it probably shouldn't have one.~ EDIT: Turns out
this just works because it doesn't go through `markRootUpdated` when
work is left behind.
- [x] Implement built-in DOM version by default. facebook#33162
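
For context, a hedged usage sketch of the option this implements, assuming the experimental `createRoot` option (the exact signature may differ; the spinner helpers are hypothetical):

```js
import {createRoot} from 'react-dom/client';

// Hypothetical app-specific indicator helpers.
function showGlobalSpinner() { document.body.classList.add('pending'); }
function hideGlobalSpinner() { document.body.classList.remove('pending'); }

const root = createRoot(document.getElementById('root'), {
  // Invoked when a default Transition needs an indicator (step 4 above);
  // the returned cleanup runs once the indicator lanes drain (step 5).
  onDefaultTransitionIndicator() {
    showGlobalSpinner();
    return () => hideGlobalSpinner();
  },
});
```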
…n was scheduled (facebook#33186)

Stacked on facebook#33160.

The purpose of this is to avoid calling `onDefaultTransitionIndicator`
when a Default priority update acts as the loading indicator, but still
call it when unrelated Default updates happens nearby.

When we schedule Default priority work, it gets batched with other
events in more or less the same frame. This helps optimize by doing less
work. However, that batching means that we can't separate the work of one
setState from another. If we considered all Default priority work in
a frame when determining whether to show the default indicator, we might
never show it in cases like a recurring timer updating something.

This instead flushes the Default priority work eagerly along with the
sync work at the end of the event, if this event scheduled any
Transition work. This is then used to determine if the default indicator
needs to be shown.
…acebook#33162)

Stacked on facebook#33160.

By default, if `onDefaultTransitionIndicator` is not overridden, this
will trigger a fake Navigation event using the Navigation API. This is
intercepted to create an on-going navigation until we complete the
Transition. Basically each default Transition is simulated as a
Navigation.

This triggers the native browser loading state (in Chrome at least). So
now, by default, the browser spinner spins during a Transition if no other
loading state is provided. Firefox and Safari haven't shipped the Navigation
API yet, and even behind the flag Safari has, it doesn't actually trigger the
native loading state.

To ensure that you can still use other Navigations concurrently, we
don't start our fake Navigation if there's one on-going already, and
similarly if our fake Navigation gets interrupted by another. We wait
for on-going ones to finish and then start a new fake one if we're
still supposed to be pending.

There might be other routers on the page that listen for and intercept
Navigation events. Typically you'd expect them not to trigger a refetch
when navigating to the same state. However, if they want to detect this,
we provide the `"react-transition"` string in the `info` field for this
purpose.
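
A heavily hedged sketch of how such a fake Navigation could look with the Navigation API; this is illustrative only, not the built-in implementation:

```js
// Starts a same-URL navigation that stays on-going until the returned
// callback is invoked, which keeps the browser's native loading state
// spinning (in Chrome). No-ops if the API is missing or another
// navigation is already on-going.
function startFakeNavigation() {
  if (typeof navigation === 'undefined' || navigation.transition) {
    return () => {};
  }
  let done = () => {};
  const onNavigate = event => {
    if (event.info === 'react-transition' && event.canIntercept) {
      // Intercept so the navigation stays pending until we resolve it.
      event.intercept({handler: () => new Promise(resolve => (done = resolve))});
    }
  };
  navigation.addEventListener('navigate', onNavigate);
  try {
    const {finished} = navigation.navigate(window.location.href, {
      history: 'replace',
      info: 'react-transition', // lets routers recognize and ignore it
    });
    finished.catch(() => {}); // ignore interruption by a real navigation
  } finally {
    navigation.removeEventListener('navigate', onNavigate);
  }
  return () => done();
}
```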
…o root associated (facebook#33190)

Stacked on facebook#33160, facebook#33162, facebook#33186 and facebook#33188.

We have a special case that's awkward for default indicators. When you
start a new async Transition from `React.startTransition`, there's
not yet any root associated with the Transition, because you haven't
necessarily called `setState` on anything until the promise resolves.
That's what `entangleAsyncAction` handles by creating a lane that
everything entangles with until all async actions are done.

If there are no sync updates before the end of the event, we should
trigger a default indicator until either the async action completes
without an update, or, if it gets entangled with some roots, until those
roots are done.
Not sure where this was coming from.
…sition (facebook#33191)

And that doesn't disable with `update="none"`.

The principle here is that we want the content of a Portal to animate if
other things are animating with it, but if other things aren't animating,
then we don't animate it.
…ook#33200)

This is a partial revert of facebook#33094. It's true that we don't need the
server and client ViewTransition names to line up. However the server
does need to be able to generate deterministic names for itself. The
cheapest way to do that is using the useId algorithm. When it's used by
the server, the client needs to also materialize an ID even if it
doesn't use it.
…3194)

Removes the `isFallback` flag on Tasks and tracks it on the
formatContext instead.

Less memory, and avoids passing and tracking extra arguments to all the
pushStartInstance branches that don't need it.

We'll need to be able to track more Suspense related contexts on this
for View Transitions anyway.
…facebook#33206)

Stacked on facebook#33194 and facebook#33200.

When Suspense boundaries reveal during streaming, the Fizz runtime will
be responsible for animating the reveal if necessary (not in this PR).
However, for the future runtime to know what to do, it needs to know
about the `<ViewTransition>` configuration to apply.

Of course, these are virtual nodes that disappear from the HTML. We could
model them as comments like we do with other virtual nodes like Suspense
and Activity. However, that doesn't let us target them with
querySelector and CSS (for no-JS transitions). We also don't have to
model every ViewTransition since not every combination can happen using
only the server runtime. So instead this collapses `<ViewTransition>`
and applies the configuration to the inner DOM nodes.

```js
<ViewTransition name="hi">
  <div />
  <div />
</ViewTransition>
```

Becomes:

```html
<div vt-name="hi" vt-update="auto"></div>
<div vt-name="hi_1" vt-update="auto"></div>
```

I use a `vt-` prefix as opposed to `data-` to keep these virtual
attributes away from user-specific ones, but we're effectively claiming
this namespace.

There are four triggers `vt-update`, `vt-enter`, `vt-exit` and
`vt-share`. The server resolves which ones might apply to this DOM node.
The value represents the class name (after resolving
view-transition-type mappings) or `"auto"` if no specific class name is
needed but this is still a trigger.

The value can also be `"none"`. This is different from missing because,
for example, a `vt-update="none"` will block mutations inside it from
triggering the boundary, whereas a missing `vt-update` would bubble up
to be handled by a parent.

`vt-name` is technically only necessary when `vt-share` is specified to
find a pair. However, since an explicit name can also be used to target
specific CSS selectors, we include it even for other cases.

We want to exclude as many of these annotations as possible.

`vt-enter` can only affect the first DOM node inside a Suspense
boundary's content since the reveal would cause it to enter but nothing
deeper inside. Similarly `vt-exit` can only affect the first DOM node
inside a fallback. So for every other case we can exclude them. (For
future MPA ViewTransitions of the whole document it might also be
something we annotate to children inside the `<body>` as well.) Ideally
we'd only include `vt-enter` for Suspense boundaries that actually
flushed a fallback but since we prepare all that content earlier it's
hard to know.

`vt-share` can be anywhere inside a fallback or content. Technically we
don't have to include it outside the root-most Suspense boundary or for
boundaries that are inlined into the root shell. However, this is tricky
to detect. It would also not be correct for future MPA ViewTransitions,
because in that case the shared scenario can affect anything in the two
documents, so it needs to be on every node everywhere, which is
effectively what we do. If a `share` class is specified but it has no
explicit name, we can exclude it since it can't match anything.

`vt-update` is only necessary if something below or a sibling might
update, like a Suspense boundary. However, since we don't know when
rendering a segment whether it'll later asynchronously add a Suspense
boundary, we have to assume that anywhere might have one. So
these are always included. We collapse to the innermost one when
directly nested, though, since that's the one that ends up winning.

There are some weird edge cases that can't be fully modeled by the lack
of virtual nodes.
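
Since these end up as plain attributes, they can be targeted with `querySelector` and CSS attribute selectors, as noted above. A small illustrative example:

```js
// Find the elements the server annotated as shared-element candidates
// and read their pairing names (attribute names as described above).
const shared = document.querySelectorAll('[vt-share]:not([vt-share="none"])');
for (const el of shared) {
  console.log(el.getAttribute('vt-name'), '->', el.getAttribute('vt-share'));
}
```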
For debugging purposes, log author_association
Noop detection for xplat syncs broke because `eslint-plugin-react-hooks`
uses versions like:

- `0.0.0-experimental-d85f86cf-20250514`

But xplat expects them to be of the form:

- `19.2.0-native-fb-63d664b2-20250514`

This PR fixes the noop by ignoring
`eslint-plugin-react-hooks/package.json` changes. This means we won't
create a sync if only that package.json changes, but that should be rare
and we can follow up with better detection if needed.

[Example failed
action](https://github.com/facebook/react/actions/runs/15032346805/job/42247414406):

<img width="1031" alt="Screenshot 2025-05-15 at 11 31 17 AM"
src="https://github.com/user-attachments/assets/d902079c-1afe-4e18-af1d-25e60e28929e"
/>

I believe the regression was caused by
facebook#33104
…cebook#33295)

We decremented `allPendingTasks` after invoking `onShellReady`, which
means that in that scope the work wasn't considered fully complete.

Since the pattern for flushing in Node.js is to start piping in
`onShellReady`, and that's how you get sync behavior, this led us to
think that we had more work left to do. For example, we emitted the
`writeShellTimeInstruction` in this scenario before.
…ion (facebook#33293)

When needed.

For the external runtime we always include this wrapper.

For others, we only include it if we have any ViewTransitions that could
be affected. If we discover the ViewTransitions late, then we can upgrade
an already-emitted instruction.

This doesn't yet do anything useful with it; that's coming in a follow-up.
This is just the mechanism for how it gets installed.
So they can be shared by the server. Incorporates the types from
DefinitelyTyped too.
…ebook#33306)

Basically we track a `SuspenseListRow` on the task. These keep track of
"pending tasks" that block the row. A row is blocked by:

- First, itself completing rendering.
- A previous row completing.
- Any tasks inside the row and before the Suspense boundary inside the
row. This is mainly because we don't yet know if we'll discover more
Suspense boundaries.
- The previous row's Suspense boundaries completing.

If a boundary might get outlined, then we can't consider it completed
until we have written it, because that determines whether other future
boundaries in the row can finish.

This is just handling basic semantics. Features not supported yet that
need follow-ups later:

- CSS dependencies of previous rows should be added as dependencies of
future rows' Suspense boundaries, because otherwise, if the client is
blocked on CSS, a previous row could be blocked without the server
knowing it.
- I need a second pass on nested SuspenseList semantics.
- `revealOrder="together"`
- `tail="hidden"`/`tail="collapsed"`. This needs some new runtime
semantics in the Fizz runtime, and needs hydration to handle
missing rows in the HTML. This should also be future-compatible with
AsyncIterable, where we don't know how many rows there are upfront.
- Need to double check resuming semantics.

---------

Co-authored-by: Sebastian "Sebbie" Silbermann <[email protected]>
We were printing "Custom" instead of "hook".
We support AsyncIterable (especially in a cached form, like one coming
from Flight) as children.

This fixes some warnings and bugs when it's passed to SuspenseList.

Ideally SuspenseList with `tail="hidden"` should support unblocking
before the full result has resolved but that's an optimization on top.
We also might want to change semantics for this for
`revealOrder="backwards"` so it becomes possible to stream items in
reverse order.
Follow up to facebook#33306.

If we're nested inside a SuspenseList and we have a row, then we can
point our last row to block the parent row and unblock the parent when
the last child unblocks.
Now that we've removed Rust from the codebase, remove this leftover script.

Also remove some dupes and Rust-related files from `.gitignore`.
Stacked on facebook#33308.

For "together" mode, we can be a self-blocking row that adds all its
boundaries to the blocked set, but there's no parent row that unblocks
it.

A particular quirk of this mode is that it's not enough to just unblock
them all on the server together, because if one boundary downloads all
its HTML and then issues a complete instruction, it'll appear before the
others while streaming in. What we actually want is to reveal them all
in a single batch.

This implementation takes a shortcut by unblocking the rows in
`flushPartialBoundary`. That ensures that all the segments of every
boundary have a chance to flush before we start emitting any of the
complete boundary instructions. Once the last one unblocks, all the
complete boundary instructions are queued. Ideally this would be a
single `<script>` tag so that they can't be split up even if we get a
chunk containing only some of them.

~A downside of this approach is that we always outline these boundaries.
We could inline them if they all complete before the parent flushes.
E.g. by checking if the row is blocked only by its own boundaries and if
all the boundaries would fit without getting outlined, then we can
inline them all at once.~ I went ahead and did this because it solves an
issue with `renderToString` where it doesn't support the script runtime
so it can only handle this if inlined.
…future rows (facebook#33312)

Stacked on facebook#33311.

When a row contains Suspense boundaries that themselves depend on CSS,
they will not resolve until the CSS has loaded on the client. We need
future rows in a list to be blocked until this happens. We could do
something in the runtime but a simpler approach is to just add those CSS
dependencies to all those boundaries as well.

To do this, we first hoist the HoistableState from a completed boundary
onto its parent row. Then, when the row finishes, we hoist it onto the
next row and onto any boundaries within that row.
unstubbable and others added 30 commits June 5, 2025 15:08
When I added the `ready_for_review` event in facebook#32344, no notifications
for opened draft PRs were sent due to some other condition. This is not
the case anymore, so we need to exclude draft PRs from triggering a
notification when the workflow is run because of an `opened` event. This
event is still needed because the `ready_for_review` event only fires
when an existing draft PR is converted to a non-draft state. It does not
trigger for pull requests that are opened directly as ready-for-review.
…ook#33447)

## Summary

Enables the `enableEagerAlternateStateNodeCleanup` feature flag for all
variants, while maintaining the `__VARIANT__` for the internal React
Native flavor for backtesting reasons.

## How did you test this change?

```
$ yarn test
```
This shouldn't be needed now that the lint rule was moved.
Block the view transition on suspensey images, up to 500ms, just like the
client.

We can't use `decode()` because of a bug in Chrome where those are blocked
on `startViewTransition` finishing. Instead we rely on sync decoding, and
on the image being live when it's animating in, and we assume it doesn't
start out visible.

However, we can block the View Transition from starting on the `"load"`
or `"error"` events.

The nice thing about blocking inside `startViewTransition` is that we
have already done the layout, so we only have to wait on images that are
within the viewport at this point. We might want to do that in Fiber
too. If many images don't have a fixed size but need to load first, they
can all end up in the viewport. We might consider only doing this for
images that have a fixed size, or only for a maximum number that don't
have a fixed size.
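
A hedged sketch of that blocking, roughly mirroring the description (the 500ms cap comes from the text above; the helpers and usage are illustrative assumptions):

```js
// Wait for in-viewport images to fire "load" or "error", capped at
// 500ms, before letting the view transition proceed.
function waitForSuspenseyImages(container, timeoutMs = 500) {
  const pending = [];
  for (const img of container.querySelectorAll('img')) {
    const rect = img.getBoundingClientRect();
    const inViewport =
      rect.bottom > 0 && rect.top < window.innerHeight &&
      rect.right > 0 && rect.left < window.innerWidth;
    if (inViewport && !img.complete) {
      pending.push(
        new Promise(resolve => {
          img.addEventListener('load', resolve, {once: true});
          img.addEventListener('error', resolve, {once: true});
        }),
      );
    }
  }
  return Promise.race([
    Promise.all(pending),
    new Promise(resolve => setTimeout(resolve, timeoutMs)),
  ]);
}

// Usage sketch: layout has happened inside the update callback, so only
// images currently in the viewport are awaited.
// document.startViewTransition(async () => {
//   applyDomUpdates(); // hypothetical
//   await waitForSuspenseyImages(document.body);
// });
```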
…undaries (facebook#33454)

We want to make sure that we can block the reveal of a well-designed
complete shell reliably. In the Suspense model, client transitions don't
have any way to implicitly resolve. This means you need to use Suspense
or SuspenseList to explicitly split the document. Relying on implicit
splitting would mean you can't add a Suspense boundary later where needed,
so we highly encourage the use of them around large content.

However, if you have constructed too large a shell (e.g. by not adding
any Suspense boundaries at all), then that might take too long to render
on the client. We shouldn't punish users (or overzealous metrics
tracking tools like search engines) in that scenario.

This opts out of render blocking if the shell ends up too large to be
intentional and too slow to load. Instead it deopts to showing the
content split up in arbitrary ways (browser default). It only does this
for SSR, not client navigations, so it's not something to rely on.

In fact, we issue an error to `onError`. This error is recoverable in
that the document is still produced. It's up to your framework to decide
if this errors the build or just surface it for action later.

What should be the limit though? There's a trade-off here. If this limit
is too low, then you can't fit a reasonably well-built UI within it
without getting errors. If it's too high, then things that accidentally
fall below it might take too long to load.

I came up with 512kB of uncompressed shell HTML. See the comment in the
code for the rationale for this number. TL;DR: Data and theory indicate
that having this much content inside `rel="expect"` doesn't meaningfully
change metrics. Research on above-the-fold content on various websites
indicates that this can comfortably fit all of them, which should be
enough for any intentional initial paint.
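
A minimal sketch of the kind of check described; the 512kB figure is from the text above, and the names are assumptions:

```js
const MAX_RENDER_BLOCKING_SHELL_BYTES = 512 * 1024; // uncompressed HTML

// Decide whether to emit the render-blocking <link rel="expect"> for
// this shell, reporting a recoverable error when we opt out.
function shouldBlockRenderOnShell(shellByteSize, onError) {
  if (shellByteSize > MAX_RENDER_BLOCKING_SHELL_BYTES) {
    onError(
      new Error(
        'The rendered shell exceeds the render-blocking size limit. ' +
          'Add Suspense boundaries to split up the document.',
      ),
    );
    return false; // deopt to the browser's default progressive reveal
  }
  return true;
}
```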
…ebook#33456)

Follow up to facebook#33442. This is the bundled version.

To keep type-check passes from exploding and to keep the maintenance of
the annoying `paths: []` list small, this doesn't add these to the Flow
type checks. We might miss some config, but every combination should
already be covered by one of the other passes.

I also don't add any Jest tests, because to test these double-export
entry points we need conditional importing to cover builds and
non-builds, which turns out to be difficult for the Flight builds, so
these aren't covered by any basic build tests.

This approach is what I'm going for, for the other bundlers too.
Adding throttling or delaying on images can obviously impact metrics.
However, it's all in the name of a better actual user experience overall.
(Note that it's not strictly worse even for metrics. Often it's actually
strictly better due to less work being done overall thanks to batching.)

Metrics can impact things like search ranking but I believe this is on a
curve. If you're already pretty good, then a slight delay won't suddenly
make you rank in a completely different category. Similarly, if you're
already pretty bad then a slight delay won't make it suddenly way worse.
It's still in the same realm. It's just one weight of many. I don't
think this will make a meaningful practical impact and if it does,
that's probably a bug in the weights that will get fixed.

However, because there's a race to try to "make everything green" in
terms of web vitals, if you go from green to yellow only because of some
throttling or suspensey images, it can feel bad. Therefore this
implements a heuristic where if the only reason we'd miss a specific
target is because of throttling or suspensey images, then we shorten the
timeout to hit the metric. This is a worse user experience because it
can lead to extra flashing but feeling good about "green" matters too.

If you then have another reveal that happens to be the largest
contentful paint after that, then that's throttled again so that it
doesn't become flashy after that. If you've already missed the deadline
then you're not going to hit your metric target anyway. It can affect
average but not median.

This is mainly about LCP. It doesn't affect FCP since that doesn't have
a throttle. If your LCP is the same as your FCP then it also doesn't
matter.

We assume that `performance.now()`'s zero point starts at the "start of
the navigation" which makes this simple. Even if we used the
`PerformanceNavigationTiming` API it would just tell us the same thing.

This is only implemented for Fizz, since these metrics currently tend to
be tracked only for initial loads, but with soft-nav tracking we could
consider implementing the same for Fiber throttles.
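
A hedged sketch of that heuristic; the target and throttle values are illustrative assumptions:

```js
const LCP_TARGET_MS = 2500; // a common "good" LCP threshold (assumption)
const THROTTLE_MS = 300;    // illustrative reveal throttle

// performance.now() is measured from the start of the navigation, so the
// remaining budget to the metric target is simply target minus elapsed.
function getRevealTimeout() {
  const remaining = LCP_TARGET_MS - performance.now();
  if (remaining <= 0) {
    // Already past the target: the metric is missed either way, so keep
    // the full throttle and avoid extra flashing.
    return THROTTLE_MS;
  }
  // Shorten the throttle only if it alone would push us past the target.
  return Math.min(THROTTLE_MS, remaining);
}
```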
When deeply nested Suspense boundaries inside a fallback of another
boundary resolve, it is possible to encounter situations where you either
attempt to flush an aborted segment or you have a boundary without any
root segment. We intended for both of these conditions to be impossible
to arrive at legitimately; however, it turns out that in this situation you
can. The fix is two-fold:

1. Allow flushing aborted segments by simply skipping them. This does
remove some protection against future misconfiguration of React, because
it is no longer an invariant that you should never attempt to flush an
aborted segment, but there are legitimate cases where this can come up,
and simply omitting the segment is fine because we know that the user will
never observe it. A semantically better solution would be to avoid
flushing boundaries inside an unneeded fallback, but to do this we would
need to track all boundaries inside a fallback or create back pointers,
which add to memory overhead and possibly make GC harder to do
efficiently. By flushing extra we're maintaining the status quo and only
suffer in performance, not with broken semantics.

2. When queuing completed segments, allow for queuing aborted segments,
and if we are eliding the enqueued segment, allow child segments that
errored to be enqueued too. This means we can maintain the
invariant that a boundary must have a root segment the first time we
flush it; it just might be aborted (see point 1 above).

This change has two seemingly similar test cases to exercise this fix.
The reason we need both is that when you have empty segments you hit
different code paths within Fizz and so each one (without this fix)
triggers a different error pathway.

This change also includes a fix to our tests where we were not
appropriately setting the CSP nonce back to null at the start of each test,
so in some contexts scripts would not run for some tests.
Reverts facebook#33457, facebook#33456 and facebook#33442.

There are too many issues with wrappers, lazy init, stateful modules,
duplicate instantiation of async_hooks and duplication of code.

Instead, we'll just do a wrapper polyfill that uses Node Streams
internally.

I kept the client indirection files that I added for consistency with
the server though.
…k#33473)

This effectively lets us consume Web Streams in a Node build. In fact
the Node entry point is now just adding Node stream APIs.

For the client, this is simple because the configs are not actually
stream type specific. The server is a little trickier.
New take on facebook#33441.

This uses a wrapper instead of a separate bundle.
…k#33474)

This needs some tweaks to the implementation and a conversion but simple
enough.

---------

Co-authored-by: Hendrik Liebau <[email protected]>
…facebook#33478)

I noticed that the ThirdPartyComponent in the fixture was showing the
wrong stack and that the `"use third-party"` badge is in the wrong location.

<img width="628" alt="Screenshot 2025-06-06 at 11 22 11 PM"
src="https://github.com/user-attachments/assets/f0013380-d79e-4765-b371-87fd61b3056b"
/>

When creating the initial JSX inside the third-party server, we should
make sure that it has no owner. In a real cross-server environment you
get this by default by just executing in a different context. But since
the fixture example is inside the same AsyncLocalStorage as the parent,
it already has an owner which gets transferred. So we should make sure
that where we create the JSX it has no owner, to simulate this.

When we then parse a null owner on the receiving side, we replace its
owner/stack with the owner/stack of the call to `createFrom...` to
connect them. This worked fine with only two environments. The bug was
that when we did this and then transferred the result to a third
environment, we took the originally parsed stack trace. We should instead
parse a new one from the replaced stack in the current environment.

The second bug was that the `"use third-party"` badge ends up in the
wrong place when we do this kind of thing, because the stack of the
thing entering the new environment is the call to `createFrom...`, which
is in the old environment, even though the component itself executes in
the new environment. So to see if there's a change, we should be
comparing the current environment of the task to the owner's environment
instead of the next environment after the task.

After:

<img width="494" alt="Screenshot 2025-06-07 at 1 13 28 AM"
src="https://github.com/user-attachments/assets/e2e870ba-f125-4526-a853-bd29f164cf09"
/>
Technically the async call graph spans basically all the way back to the
start of the app potentially, but we don't want to include everything.
Similarly we don't want to include everything from previous components
in every child component. So we need some heuristics for filtering out
data.

What we roughly want to be able to inspect is what might contribute to a
Suspense loading sequence, even if it didn't contribute this time, e.g. due
to a race condition.

One flaw with the previous approach was that awaiting a cached promise
in a sibling that happened to finish after another sibling would be
excluded. However, in a different race condition that might end up being
used, so I wanted to include an empty "await" in that scenario to keep
some association with that component.

However, for data that resolved fully before the request even started,
it's a little different. This can be things that are part of the start-up
sequence of the app or externally cached data. We decided that this
should be excluded because it doesn't contribute to the loading sequence
in the expected scenario, i.e. when it's cached. Things that end up being
cache misses would still be included. If you want to test externally
cached data misses, then it's up to you or the framework to simulate
those, e.g. by dropping the cache. This also helps cut down some noise,
since static/cached data can be excluded in visualizations.

I also apply this principle to forwarding debug info. If you reuse a
cached RSC payload, then the Server Component render time and its awaits
get clamped to the caller as if it had zero render/await time. The I/O
entry is still back-dated, but if it was fully resolved before we started
then it's completely excluded.
…allow processing function props (facebook#32119)

## Summary

In react-native, props that are passed as functions get converted to a
boolean (`true`). This is the default pattern for event handlers in
react-native.
However, there are reasons why you might want to opt out of this
behavior and instead pass along the actual function as the prop.
Right now, there is no way to do this, and props that are functions
always get set to `true`.
The `ViewConfig` attributes already have an API for a `process`
function. I simply moved the check for the process function up, so if a
ViewConfig's prop attribute configures a process function, it is always
called first.
This provides an API to opt out of the default behavior.

This is the accompanied PR for react-native:

- facebook/react-native#48777

## How did you test this change?


I modified the code manually in a template react-native app and
confirmed it's working. This is a code path you only need in very special
cases, so it's a bit hard to provide a test for this. I recorded a
video where you can see that the changes are active and the prop is
being passed as a native value.

For this I created a custom native component with a view config that
looked like this:

```js
const viewConfig = {
  uiViewClassName: 'CustomView',
  bubblingEventTypes: {},
  directEventTypes: {},
  validAttributes: {
    nativeProp: {
      process: (nativeProp) => {
        // Identity function that simply returns the prop function callback
        // to opt out of this prop being set to `true` as it's a function.
        return nativeProp;
      },
    },
  },
}
```



https://github.com/user-attachments/assets/493534b2-a508-4142-a760-0b1b24419e19

Additionally, I made sure that this doesn't conflict with any existing
view configs in react-native. In general, this shouldn't be a breaking
change, as for existing view configs it didn't make a difference whether you
simply set `myProp: true` or `myProp: { process: () => {...} }`, because
as soon as the prop was detected to be a function, the config
wouldn't be used (which is what this PR fixes).
Probably everyone, including the react-native core components, uses
`myProp: true` for callback props, so this change should be fine.
…#33486)

The prettier check for this file is currently failing on `main`, after
facebook#32119 was merged.
…33450)

Summary: useEffectEvent values are not meant to be added to the
dependency array.
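
For reference, a hedged example of the pattern the rule targets, assuming the experimental `useEffectEvent` API; `createConnection` and `showNotification` are hypothetical helpers:

```js
import {useEffect, experimental_useEffectEvent as useEffectEvent} from 'react';
import {createConnection, showNotification} from './chat'; // hypothetical

function ChatRoom({roomId, theme}) {
  const onConnected = useEffectEvent(() => {
    // Always sees the latest `theme` without re-triggering the Effect.
    showNotification('Connected!', theme);
  });

  useEffect(() => {
    const connection = createConnection(roomId);
    connection.on('connected', onConnected);
    connection.connect();
    return () => connection.disconnect();
    // `onConnected` is an Effect Event value, so it must not be added to
    // the dependency array; only `roomId` belongs there.
  }, [roomId]);

  return null;
}
```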
This adds some I/O to go get the third party thing to test how it
overlaps.

With facebook#33482, this is what it looks like. The await gets cut off when the
third party component starts rendering. I.e. after the latency to start.

<img width="735" alt="Screenshot 2025-06-08 at 5 42 46 PM"
src="https://github.com/user-attachments/assets/f68d9a84-05a1-4125-b3f0-8f3e4eaaa5c1"
/>

This doesn't fully simulate everything because it should actually also
simulate each chunk of the stream coming back too. We could wrap the
ReadableStream to simulate that. In that scenario, it would probably get
some awaits on the chunks at the end too.
…paces (facebook#33409)

## Summary

Problem #1: Running the `link-compiler.sh` bash script via the `"prebuild"`
script fails if a developer has cloned the `react` repo into a folder path
that contains _any_ spaces. Three tests fail because of this.
<img width="1003" alt="fail-1"
src="https://github.com/user-attachments/assets/1fbfa9ce-4f84-48d7-b49c-b6e967b8c7ca"
/>
<img width="1011" alt="fail-2"
src="https://github.com/user-attachments/assets/0a8c6371-a2df-4276-af98-38f4784cf0da"
/>
<img width="1027" alt="fail-3"
src="https://github.com/user-attachments/assets/1c4f4429-800c-4b44-b3da-a59ac85a16b9"
/>

For example, my current folder is:
`/Users/wes/Development/Open Source Contributions/react`

The link compiler error returns:
`./scripts/react-compiler/link-compiler.sh: line 15: cd:
/Users/wes/Development/Open: No such file or directory`

Problem #2: One test in `ReactChildren-test.js` fails due to the existing
stack trace regex, which should be lightly revised.

`([^(\[\n]+)[^\n]*/g` is more robust for stack traces: it captures the
function/class name (with dots) and does not break on spaces in file
paths.
`([\S]+)[^\n]*/g` is simpler but breaks if there are spaces and doesn't
handle dotted names well.

Additionally, we trim the whitespace off the name to resolve extra
spaces breaking this test as well:

```
-     in div (at **)
+     in div  (at **)
```

<img width="987" alt="fail-4"
src="https://github.com/user-attachments/assets/56a673bc-513f-4458-95b2-224129c77144"
/>

All of the above tests pass if I hyphenate my local folder:
`/Users/wes/Development/Open-Source-Contributions/react`

I selfishly want to keep spaces in my folder names. 🫣

## How did you test this change?

**npx yarn prebuild**

Before:
<img width="896" alt="Screenshot at Jun 01 11-42-56"
src="https://github.com/user-attachments/assets/4692775c-1e5c-4851-9bd7-e12ed5455e47"
/>

After:
<img width="420" alt="Screenshot at Jun 01 11-43-42"
src="https://github.com/user-attachments/assets/4e303c00-02b7-4540-ba19-927b2d7034fb"
/>

**npx yarn test**
**npx yarn test
./packages/react/src/\_\_tests\_\_/ReactChildren-test.js**
**npx yarn test -r=xplat --env=development --variant=true --ci
--shard=3/5**

Before:
<img width="438" alt="before"
src="https://github.com/user-attachments/assets/f5eedb22-18c3-4124-a04b-daa95c0f7652"
/>

After:
<img width="439" alt="after"
src="https://github.com/user-attachments/assets/a94218ba-7c6a-4f08-85d3-57540e9d0029"
/>

<img width="650" alt="Screenshot at Jun 02 18-03-39"
src="https://github.com/user-attachments/assets/3eae993c-a56b-46c8-ae02-d249cb053fe7"
/>

<img width="685" alt="Screenshot at Jun 03 12-53-47"
src="https://github.com/user-attachments/assets/5b2caa33-d3dc-4804-981d-52cb10b6226f"
/>
… in shadow dom scenario (facebook#33487)

## Summary

Minor changes around CSS and styling of the Settings dialog.

1. The `:root` selector was updated to `:is(:root, :host)` to make CSS
variables available on the Shadow Root.
2. CSS tweaks around the Settings dialog: removed references to deleted
styles, removed unused styles, and ironed out styling for cases when input
styles are enhanced by the user agent stylesheet.


## How did you test this change?

| Before | After |
|--------|--------|
| ![Screenshot 2025-06-09 at 15 35
55](https://github.com/user-attachments/assets/1ac5d002-744b-4b10-9501-d4f2a7c827d2)
| ![Screenshot 2025-06-09 at 15 26
12](https://github.com/user-attachments/assets/8cc07cda-99a5-4930-973b-b139b193e349)
|
| ![Screenshot 2025-06-09 at 15 36
02](https://github.com/user-attachments/assets/1af4257c-928d-4ec6-a614-801cc1936f4b)
| ![Screenshot 2025-06-09 at 15 26
25](https://github.com/user-attachments/assets/7a3a0f7c-5f3d-4567-a782-dd37368a15ae)
|
| ![Screenshot 2025-06-09 at 15 36
05](https://github.com/user-attachments/assets/a1e00381-2901-4e22-b1c6-4a3f66ba78c9)
| ![Screenshot 2025-06-09 at 15 26
30](https://github.com/user-attachments/assets/bdefce68-cbb5-4b88-b44c-a74f28533f7d)
|
| ![Screenshot 2025-06-09 at 15 36
12](https://github.com/user-attachments/assets/4eda6234-0ef0-40ca-ad9d-5990a2b1e8b4)
| ![Screenshot 2025-06-09 at 15 26
37](https://github.com/user-attachments/assets/5cac305e-fd29-460c-b0b8-30e477b8c26e)
|
Includes facebook#31412.

The issue is that `pushTreeFork` stores some global state when reconciling
children. This gets popped by `popTreeContext` in `completeWork`.
Normally `completeWork` returns its own Fiber again if it wants to do
a second pass, which will call `pushTreeFork` again in the next pass.
However, `SuspenseList` doesn't return itself; it returns the next child
to work on.

The fix is to keep track of the count and push it again when we
return the next child to attempt.

There are still some outstanding issues with hydration. For example, the
backwards test still has the wrong behavior in it, because it hydrates
backwards and so it picks up the DOM nodes in reverse order.
`tail="hidden"` also doesn't work correctly.

There's also another issue with `useId` and `AsyncIterable` in
SuspenseList when there's an unknown number of children. We don't
support those showing one at a time yet, though, so it's not an issue yet.
To fix it, we need to add a variable total count to the `useId` algorithm,
e.g. by falling back to varint encoding.

---------

Co-authored-by: Rick Hanlon <[email protected]>
Co-authored-by: Ricky <[email protected]>
facebook#33482)

Previously you weren't guaranteed to have only advancing time entries;
you could jump back in time. Now it omits unnecessary duplicates and
clamps automatically if you emit an earlier time entry, to enforce
forwards order only.

The reason I didn't do this originally is that `await` can jump around in
the order, because we're trying to encode a graph into a flat timeline
for simplicity of the protocol and consumers.

```js
async function a() {
  await fetch1();
  await fetch2();
}

async function b() {
  await fetch3();
}

async function foo() {
  const p = a();
  await b();
  return p;
}
```

This can effectively create two parallel sequences:

```
--1.................----2.......--
------3......---------------------
```

This can now be flattened to either:

```
--1.................3---2.......--
```

Or:

```
------3......1......----2.......--
```

Depending on which one we visit first. Regardless, information is lost.

I'd say that the second one is a worse encoding of this scenario because
it pretends that we weren't waiting for part of the timespan that we
were. To solve this, I think we should probably make `emitAsyncSequence`
create a temporary flat list and then sort it by start time before
emitting.

Although we weren't actually blocked, since there was some CPU time that
was able to proceed to get to 3. So maybe the second one is actually
better. If we wanted that consistently, we'd have to figure out what the
intersection was.
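
A hedged sketch of the clamping and deduping described at the top of this change (not the real Flight code):

```js
// Emitted timestamps only ever move forward: exact duplicates are
// omitted and backwards jumps are clamped to the last emitted time.
let lastEmittedTime = -Infinity;

function emitTimingMarker(emit, time) {
  if (time === lastEmittedTime) {
    return; // omit unnecessary duplicates
  }
  if (time < lastEmittedTime) {
    time = lastEmittedTime; // clamp to enforce forwards order only
  }
  lastEmittedTime = time;
  emit({type: 'time', time});
}
```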

---------

Co-authored-by: Hendrik Liebau <[email protected]>
- Add shouldIgnoreValue() checks in ReactDOMInput.js and ReactDOMTextarea.js
- Filter out symbols and functions from value and defaultValue props
- Prevents stringification errors and unwanted string representations
- Maintains backward compatibility for all other value types

Fixes facebook#11734
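
A minimal sketch of the check being added; the real helpers live in `ReactDOMInput.js`/`ReactDOMTextarea.js`, and this simplified shape is an assumption:

```js
// Symbols and functions can't be meaningfully stringified into an
// input's value, so they are ignored rather than rendered.
function shouldIgnoreValue(value) {
  return typeof value === 'symbol' || typeof value === 'function';
}

function getSanitizedValue(props) {
  const {value, defaultValue} = props;
  if (value != null && !shouldIgnoreValue(value)) {
    return '' + value;
  }
  if (defaultValue != null && !shouldIgnoreValue(defaultValue)) {
    return '' + defaultValue;
  }
  return ''; // fall back to an empty value for ignored types
}
```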