Mirror of upstream PR #33575 #191

Open · wants to merge 161 commits into main

Conversation

kushxg (Owner) commented Jul 1, 2025

Mirrored from facebook/react PR facebook#33575

sammy-SC and others added 30 commits May 12, 2025 17:39
This is a fix for a problem where React retains shadow nodes longer than
it needs to. The behaviour is demonstrated in this React Native test:
https://github.com/facebook/react-native/blob/main/packages/react-native/src/private/__tests__/utilities/__tests__/ShadowNodeReferenceCounter-itest.js#L169

# Problem
When React commits a new shadow tree, old shadow nodes are stored inside
`fiber.alternate.stateNode`. This is not cleaned up until React clones
the node again. This can be problematic if a mutation deletes a subtree;
in that case `fiber.alternate.stateNode` retains the entire subtree
until the next update. In the case of image nodes, this means retaining
entire images.

So when React goes from revision A: `<View><View /></View>` to revision
B: `<View />`, `fiber.alternate.stateNode` will still point to the shadow
node that represents revision A.


![image](https://github.com/user-attachments/assets/076b677e-d152-4763-8c9d-4f923212b424)


# Fix
To fix this, this PR adds a new feature flag,
`enableEagerAlternateStateNodeCleanup`. When enabled,
`alternate.stateNode` is proactively pointed at finishedWork's
stateNode, releasing resources sooner.

I have verified this fixes the issue [demonstrated by React Native
tests](https://github.com/facebook/react-native/blob/main/packages/react-native/src/private/__tests__/utilities/__tests__/ShadowNodeReferenceCounter-itest.js#L169).
All existing React tests pass when the flag is enabled.
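
A rough sketch of the idea behind the flag (illustrative only; the flag name comes from the PR description, the surrounding commit function is hypothetical and not the actual React source):

```js
// Hedged sketch of the eager cleanup described above.
function commitEagerAlternateCleanup(finishedWork) {
  if (enableEagerAlternateStateNodeCleanup) {
    const alternate = finishedWork.alternate;
    if (alternate !== null && alternate.stateNode !== finishedWork.stateNode) {
      // Point the stale alternate at the committed shadow node so the
      // previous revision's subtree can be released before the next clone.
      alternate.stateNode = finishedWork.stateNode;
    }
  }
}
```
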
…ook#33028)

## Summary
Our builds generate files with a `.mjs` file extension. These are
currently filtered out by `ReactFlightWebpackPlugin`, so I am updating it
to support this file extension.

This fixes facebook#33155

## How did you test this change?
I built the plugin with this change and used `yalc` to test it in my
project. I confirmed that the expected files now show up in
`react-client-manifest.json`.
)

Enabled in experimental channel.

We know these are critical semantics to enforce at the HTML level, since
if you don't, you can't add explicit boundaries after the fact. However,
this might have to go in a major release to allow for upgrading.
…ackTrace (facebook#33143)

When we get the source location for "View source for this element" we
should be using the enclosing function of the callsite of the child, so
that we don't just point to some random line within the component.

This is similar to the technique in facebook#33136.

This technique is now better than the fake-throw technique when
available, so I now favor the owner technique. The only problem is that
it's only available in DEV, and only if the component has a child that's
owned (and not filtered).

We could implement this same technique for the error that's thrown in
the fake-throwing solution. However, we really shouldn't need that at
all because for client components we should be able to call
`inspect(fn)`, at least in Chrome, which is even better.
…3159)

This keeps track of the transition lane allocated for this event. I want
to be able to use the current one within sync work flushing to know
which lane needs its loading indicator cleared.

It's also a bit weird that transition work scheduled inside sync updates
in the same event isn't entangled with other transitions in that event
the way `flushSync` is.

Therefore this moves it to reset after flushing.

It should have no impact. Just splitting it out into a separate PR for
an abundance of caution.

The only thing this might affect is the case where React internals throw
and the lane doesn't reset afterwards. But it doesn't really have to
reset, and they're all entangled anyway.
…onLane (facebook#33188)

When we're entangled with an async action lane we use that lane instead
of the currentEventTransitionLane. Conversely, if we start a new async
action lane we reuse the currentEventTransitionLane.

So they're basically supposed to be in sync but they're not if you
resolve the async action and then schedule new stuff in the same event.
Then you end up with two transitions in the same event with different
lanes.

By stashing it like this we fix that but it also gives us an opportunity
to check just the currentEventTransitionLane to see if this event
scheduled any regular Transition updates or Async Transitions.
Stacked on facebook#33159.

This implements `onDefaultTransitionIndicator`.

The sequence is:

1) In `markRootUpdated` we schedule Transition updates as needing
`indicatorLanes` on the root. This tracks the lanes that currently need
an indicator to either start or remain going until this lane commits.
2) Track mutations during any commit. We use the same hook that view
transitions use here but instead of tracking it just per view transition
scope, we also track a global boolean for the whole root.
3) If a sync/default commit had any mutations, then we clear the
indicator lane for the `currentEventTransitionLane`. This requires that
the lane is still active while we do these commits. See facebook#33159. In other
words, a sync update gets associated with the current transition and it
is assumed to be rendering the loading state for that corresponding
transition so we don't need a default indicator for this lane.
4) At the end of `processRootScheduleInMicrotask`, right before we're
about to enter a new "event transition lane" scope, it is no longer
possible to render any more loading states for the current transition
lane. That's when we invoke `onDefaultTransitionIndicator` for any roots
that have new indicator lanes.
5) When we commit, we remove the finished lanes from `indicatorLanes`
and once that reaches zero again, then we can clean up the default
indicator. This approach means that you can start multiple different
transitions while an indicator is still going but it won't stop/restart
each time. Instead, it'll wait until all are done before stopping.

Follow ups:

- [x] Default updates are currently not enough to cancel because those
aren't flushed in the same microtask. That's unfortunate. facebook#33186
- [x] Handle async actions before the setState. Since these don't
necessarily have a root this is tricky. facebook#33190
- [x] Disable for `useDeferredValue`. ~Since it also goes through
`markRootUpdated` and schedules a Transition lane it'll get a default
indicator even though it probably shouldn't have one.~ EDIT: Turns out
this just works because it doesn't go through `markRootUpdated` when
work is left behind.
- [x] Implement built-in DOM version by default. facebook#33162
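
For reference, a hedged usage sketch of overriding the default indicator described above, assuming it's exposed as a root option (the "not overridden" wording suggests this); the exact option surface may differ, and `showGlobalSpinner`/`hideGlobalSpinner` are hypothetical:

```js
import { createRoot } from 'react-dom/client';

// Hedged sketch: wiring a custom default Transition indicator.
const root = createRoot(document.getElementById('root'), {
  onDefaultTransitionIndicator() {
    showGlobalSpinner(); // hypothetical app-level spinner
    // Return a cleanup to stop the indicator once the transitions that
    // needed it have committed (step 5 above).
    return () => hideGlobalSpinner();
  },
});
```
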
…n was scheduled (facebook#33186)

Stacked on facebook#33160.

The purpose of this is to avoid calling `onDefaultTransitionIndicator`
when a Default priority update acts as the loading indicator, but still
call it when unrelated Default updates happen nearby.

When we schedule Default priority work, it gets batched with other
events in the same frame, more or less. This helps optimize by doing less
work. However, that batching means we can't separate the work of one
setState from another. If we considered all Default priority work in a
frame when determining whether to show the default indicator, we might
never show it in cases like a recurring timer updating something.

This instead flushes the Default priority work eagerly along with the
sync work at the end of the event, if this event scheduled any
Transition work. This is then used to determine if the default indicator
needs to be shown.
…acebook#33162)

Stacked on facebook#33160.

By default, if `onDefaultTransitionIndicator` is not overridden, this
will trigger a fake Navigation event using the Navigation API. This is
intercepted to create an on-going navigation until we complete the
Transition. Basically each default Transition is simulated as a
Navigation.

This triggers the native browser loading state (in Chrome at least). So
now by default the browser spinner spins during a Transition if no other
loading state is provided. Firefox and Safari haven't shipped the
Navigation API yet, and even behind the flag Safari has, it doesn't
actually trigger the native loading state.

To ensure that you can still use other Navigations concurrently, we
don't start our fake Navigation if there's one ongoing already, and
similarly if our fake Navigation gets interrupted by another. We wait
for ongoing ones to finish and then start a new fake one if we're
still supposed to be pending.

There might be other routers on the page that listen for and intercept
Navigation events. Typically you'd expect them not to trigger a refetch
when navigating to the same state. However, in case they want to detect
this, we provide the `"react-transition"` string in the `info` field for
this purpose.
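
A hedged sketch of how a router on the page could detect these simulated navigations via the standard Navigation API's `info` field (the `"react-transition"` string is the one described above; the routing logic is hypothetical):

```js
// Hedged sketch: a router ignoring React's simulated default-indicator
// navigations by checking the `info` field on the NavigateEvent.
if (typeof navigation !== 'undefined') {
  navigation.addEventListener('navigate', (event) => {
    if (event.info === 'react-transition') {
      // Same-state navigation simulated by React for its loading indicator;
      // don't refetch or re-route.
      return;
    }
    // ...normal routing logic would go here.
  });
}
```
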
…o root associated (facebook#33190)

Stacked on facebook#33160, facebook#33162, facebook#33186 and facebook#33188.

We have a special case that's awkward for default indicators. When you
start a new async Transition from `React.startTransition`, there's not
yet any root associated with the Transition, because you haven't
necessarily called `setState` on anything until the promise resolves.
That's what `entangleAsyncAction` handles by creating a lane that
everything entangles with until all async actions are done.

If there are no sync updates before the end of the event, we should
trigger a default indicator until either the async action completes
without an update or, if it gets entangled with some roots, until those
roots are done.
Not sure where this was coming from.
…sition (facebook#33191)

And that doesn't disable with `update="none"`.

The principle here is that we want the content of a Portal to animate if
other things are animating with it, but if other things aren't animating,
then we don't animate it.
…ook#33200)

This is a partial revert of facebook#33094. It's true that we don't need the
server and client ViewTransition names to line up. However, the server
does need to be able to generate deterministic names for itself. The
cheapest way to do that is using the useId algorithm. When it's used by
the server, the client needs to also materialize an ID even if it
doesn't use it.
…3194)

Removes the `isFallback` flag on Tasks and tracks it on the
formatContext instead.

Less memory, and it avoids passing and tracking extra arguments to all the
pushStartInstance branches that don't need them.

We'll need to be able to track more Suspense related contexts on this
for View Transitions anyway.
…facebook#33206)

Stacked on facebook#33194 and facebook#33200.

When Suspense boundaries reveal during streaming, the Fizz runtime will
be responsible for animating the reveal if necessary (not in this PR).
However, for the future runtime to know what to do it needs to know
about the `<ViewTransition>` configuration to apply.

Of course, these are virtual nodes that disappear from the HTML. We could
model them as comments like we do with other virtual nodes like Suspense
and Activity. However, that wouldn't let us target them with
querySelector and CSS (for no-JS transitions). We also don't have to
model every ViewTransition since not every combination can happen using
only the server runtime. So instead this collapses `<ViewTransition>`
and applies the configuration to the inner DOM nodes.

```js
<ViewTransition name="hi">
  <div />
  <div />
</ViewTransition>
```

Becomes:

```html
<div vt-name="hi" vt-update="auto"></div>
<div vt-name="hi_1" vt-update="auto"></div>
```

I use the `vt-` prefix, as opposed to `data-`, to keep these virtual
attributes away from user-specific ones, but we're effectively claiming
this namespace.

There are four triggers `vt-update`, `vt-enter`, `vt-exit` and
`vt-share`. The server resolves which ones might apply to this DOM node.
The value represents the class name (after resolving
view-transition-type mappings) or `"auto"` if no specific class name is
needed but this is still a trigger.

The value can also be `"none"`. This is different from missing because
for example an `vt-update="none"` will block mutations inside it from
triggering the boundary where as a missing `vt-update` would bubble up
to be handled by a parent.

`vt-name` is technically only necessary when `vt-share` is specified to
find a pair. However, since an explicit name can also be used to target
specific CSS selectors, we include it even for other cases.
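
Since the collapsed annotations end up as plain attributes on real DOM nodes, they can be targeted like any other attribute; a small hedged sketch (attribute names as described above, selector usage illustrative):

```js
// Hedged sketch: selecting server-annotated view-transition nodes by the
// virtual attributes described above (e.g. for no-JS transition tooling).
const named = document.querySelector('[vt-name="hi"]');
const enterTargets = document.querySelectorAll('[vt-enter]');
```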

We want to exclude as many of these annotations as possible.

`vt-enter` can only affect the first DOM node inside a Suspense
boundary's content, since the reveal would cause it to enter but nothing
deeper inside. Similarly, `vt-exit` can only affect the first DOM node
inside a fallback. So for every other case we can exclude them. (For
future MPA ViewTransitions of the whole document it might also be
something we annotate on children inside the `<body>` as well.) Ideally
we'd only include `vt-enter` for Suspense boundaries that actually
flushed a fallback, but since we prepare all that content earlier it's
hard to know.

`vt-share` can be anywhere inside a fallback or content. Technically we
don't have to include it outside the root-most Suspense boundary or for
boundaries that are inlined into the root shell. However, this is tricky
to detect. It would also not be correct for future MPA ViewTransitions,
because in that case the shared scenario can affect anything in the two
documents, so it needs to be on every node everywhere, which is
effectively what we do. If a `share` class is specified but it has no
explicit name, we can exclude it since it can't match anything.

`vt-update` is only necessary if something below or a sibling might
update, like a Suspense boundary. However, since we don't know while
rendering a segment whether it'll later asynchronously add a Suspense
boundary, we have to assume that anywhere might have a child. So these
are always included. We collapse to use the innermost one when directly
nested though, since that's the one that ends up winning.

There are some weird edge cases that can't be fully modeled given the
lack of virtual nodes.
For debugging purposes, log author_association
Noop detection for xplat syncs broke because `eslint-plugin-react-hooks`
uses versions like:

- `0.0.0-experimental-d85f86cf-20250514`

But xplat expects them to be of the form:

- `19.2.0-native-fb-63d664b2-20250514`

This PR fixes the noop by ignoring
`eslint-plugin-react-hooks/package.json` changes. This means we won't
create a sync if only that package.json changes, but that should be rare
and we can follow up with better detection if needed.

[Example failed
action](https://github.com/facebook/react/actions/runs/15032346805/job/42247414406):

<img width="1031" alt="Screenshot 2025-05-15 at 11 31 17 AM"
src="https://github.com/user-attachments/assets/d902079c-1afe-4e18-af1d-25e60e28929e"
/>

I believe the regression was caused by
facebook#33104
…cebook#33295)

We decremented `allPendingTasks` after invoking `onShellReady`, which
means that in that scope it wasn't considered fully complete.

Since the pattern for flushing in Node.js is to start piping in
`onShellReady` and that's how you can get sync behavior, this led us to
think that we had more work left to do. For example we emitted the
`writeShellTimeInstruction` in this scenario before.
…ion (facebook#33293)

When needed.

For the external runtime we always include this wrapper.

For others, we only include it if there are ViewTransitions affecting the
content. If we discover the ViewTransitions late, then we can upgrade an
already emitted instruction.

This doesn't yet do anything useful with it, that's coming in a follow
up. This is just the mechanism for how it gets installed.
So they can be shared by the server. Incorporates the types from
DefinitelyTyped too.
…ebook#33306)

Basically we track a `SuspenseListRow` on the task. These keep track of
"pending tasks" that block the row. A row is blocked by:

- First itself completing rendering.
- A previous row completing.
- Any tasks inside the row and before the Suspense boundary inside the
row. This is mainly because we don't yet know if we'll discover more
SuspenseBoundaries.
- Previous row's SuspenseBoundaries completing.

If a boundary might get outlined, then we can't consider it completed
until we have written it, because that determines whether other future
boundaries in the row can finish.

This is just handling basic semantics. Features not supported yet that
need follow ups later:

- CSS dependencies of previous rows should be added as dependencies of
future rows' Suspense boundaries, because otherwise, if the client is
blocked on CSS, a previous row could be blocked without the server
knowing it.
- I need a second pass on nested SuspenseList semantics.
- `revealOrder="together"`
- `tail="hidden"`/`tail="collapsed"`. This needs some new runtime
semantics to the Fizz runtime and to allow the hydration to handle
missing rows in the HTML. This should also be future compatible with
AsyncIterable where we don't know how many rows upfront.
- Need to double check resuming semantics.

---------

Co-authored-by: Sebastian "Sebbie" Silbermann <[email protected]>
We were printing "Custom" instead of "hook".
We support AsyncIterable (especially when it's in a cached form like one
coming from Flight) as children.

This fixes some warnings and bugs when passed to SuspenseList.

Ideally SuspenseList with `tail="hidden"` should support unblocking
before the full result has resolved but that's an optimization on top.
We also might want to change semantics for this for
`revealOrder="backwards"` so it becomes possible to stream items in
reverse order.
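
A hedged sketch of what this enables, assuming the AsyncIterable-as-children support described above; `fetchRow` and `Row` are hypothetical, and the SuspenseList export may carry an `unstable_` prefix in the experimental channel:

```js
import { unstable_SuspenseList as SuspenseList } from 'react';

// Hedged sketch: an async generator passed directly as SuspenseList children.
async function* rows() {
  for (let i = 0; i < 3; i++) {
    yield <Row key={i} data={await fetchRow(i)} />; // hypothetical data fetch
  }
}

function List() {
  return (
    <SuspenseList revealOrder="forwards" tail="hidden">
      {rows()}
    </SuspenseList>
  );
}
```
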
Follow up to facebook#33306.

If we're nested inside a SuspenseList and we have a row, then we can
point our last row to block the parent row and unblock the parent when
the last child unblocks.
We removed Rust from the codebase for now, so remove this leftover script.

Also remove some dupes and Rust-related files from `.gitignore`.
Stacked on facebook#33308.

For "together" mode, we can be a self-blocking row that adds all its
boundaries to the blocked set, but there's no parent row that unblocks
it.

A particular quirk of this mode is that it's not enough to just unblock
them all on the server together, because if one boundary downloads all
its HTML and then issues a complete instruction, it'll appear before the
others while streaming in. What we actually want is to reveal them all
in a single batch.

This implementation takes a shortcut by unblocking the rows in
`flushPartialBoundary`. That ensures that all the segments of every
boundary have a chance to flush before we start emitting any of the
complete boundary instructions. Once the last one unblocks, all the
complete boundary instructions are queued. Ideally this would be a
single `<script>` tag so that they can't be split up even if we get a
chunk containing some of them.

~A downside of this approach is that we always outline these boundaries.
We could inline them if they all complete before the parent flushes.
E.g. by checking if the row is blocked only by its own boundaries and if
all the boundaries would fit without getting outlined, then we can
inline them all at once.~ I went ahead and did this because it solves an
issue with `renderToString` where it doesn't support the script runtime
so it can only handle this if inlined.
…future rows (facebook#33312)

Stacked on facebook#33311.

When a row contains Suspense boundaries that themselves depend on CSS,
they will not resolve until the CSS has loaded on the client. We need
future rows in a list to be blocked until this happens. We could do
something in the runtime but a simpler approach is to just add those CSS
dependencies to all those boundaries as well.

To do this, we first hoist the HoistableState from a completed boundary
onto its parent row. Then, when the row finishes, we hoist it onto the
next row and onto any boundaries within that row.
Joseph Savona and others added 30 commits June 18, 2025 15:47
Now that we have support for defining aliasing signatures in
moduleTypeProvider, which uses string names for
receiver/args/returns/etc, we can reuse that same form for builtin
declarations. The declarations are written in the unparsed form and then
parsed/validated when registered (in the addFunction/addHook call).

This also required fleshing out configs/schemas for more effect types.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/33530).
* facebook#33571
* facebook#33558
* facebook#33547
* facebook#33543
* facebook#33533
* facebook#33532
* __->__ facebook#33530
Start of docs describing the effects and the inference rules.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/33533).
* facebook#33571
* facebook#33558
* facebook#33547
* facebook#33543
* __->__ facebook#33533
* facebook#33532
* facebook#33530
…tion expressions (facebook#33543)

Adds some typed helpers to represent aliasing, assign, capture,
createfrom, and mutate effects along with representative runtime
behavior, and then adds tests to demonstrate that we model
capture->createfrom and createfrom->capture correctly.

There is one case (createfrom->capture in a lambda) where we infer a
less precise effect, but in the more conservative direction (we include
more code/deps than necessary rather than fewer).
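
A hedged, plain-JS illustration of the runtime behavior these effects model; these are paraphrases of the capture/createfrom idea, not the actual typed test helpers:

```js
// Hedged illustration of "capture" and "createFrom" data flow at runtime.
function captureInto(container, value) {
  // `container` captures `value`: later mutations of `value` are observable
  // through `container`.
  container.captured = value;
  return container;
}

function createFrom(source) {
  // The result is created from `source`: it may alias part of `source`, so
  // mutating the result can be observable through `source`.
  return source.captured;
}
```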

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/33543).
* facebook#33571
* facebook#33558
* facebook#33547
* __->__ facebook#33543
By accident we were only ever checking the compiled output, but the
intention was in general to be able to compare memoization with/without
forget.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/33547).
* facebook#33571
* facebook#33558
* __->__ facebook#33547
…facebook#33558)

Ensures that effects are well-formed with respect to the rules:
* For a given instruction, each place is only initialized once (with one
of Create, CreateFrom, Assign)
* Ensures that Alias targets are already initialized within the same
instruction (should have a Create before them)
* Preserves Create and similar instructions
* Avoids duplicate instructions when inferring effects of function
expressions

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/33558).
* facebook#33571
* __->__ facebook#33558
* facebook#33547
Removes unnecessary debugging code in the new inference passes now that
they've stabilized more.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/33571).
* __->__ facebook#33571
* facebook#33558
* facebook#33547
Add detection and warning for incorrect Map/Set polyfill implementations
that don't properly support non-string keys. This helps developers identify
issues with polyfills that break React's internal mechanisms.

- Add BadMapPolyfill module to detect problematic implementations
- Include comprehensive tests for the detection logic
- Warn users when their polyfills may cause React to malfunction

Fixes issues where bad polyfills silently break React functionality.
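
A hedged sketch of the kind of check such detection can use (illustrative only; the actual BadMapPolyfill implementation may differ):

```js
// Hedged sketch: detect a Map polyfill that mishandles non-string keys.
function hasBadMapPolyfill() {
  try {
    const nonStringKey = {};
    const testMap = new Map([[nonStringKey, 'value']]);
    // A correct Map keys by identity; a polyfill that stringifies keys will
    // fail this lookup or collide distinct object keys.
    return testMap.get(nonStringKey) !== 'value' || testMap.size !== 1;
  } catch (e) {
    return true;
  }
}
```
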
…cebook#33560)

## Summary

Make this flag dynamic, so it can be controlled internally.

## How did you test this change?

Build, observe that `console.timeStamp` is only present in FB artifacts
and `enableComponentPerformanceTrack` is referenced.
On pages that have a high number of server components (e.g. common when
doing syntax highlighting), the debug outlining can produce extremely
large RSC payloads. For example a documentation page I was working on
had a 13.8 MB payload. I noticed that a majority of this was the source
code for the same function components repeated over and over again (over
4000 times) within `$E()` eval commands.

This PR deduplicates the same functions by serializing by reference,
similar to what is already done for objects. Doing this reduced the
payload size of my page from 13.8 MB to 4.6 MB, and resulted in only 31
evals instead of over 4000. As a result it reduced development page load
and hydration time from 4 seconds to 1.5 seconds. It also means the
deserialized functions will have reference equality just as they did on
the server.
…acebook#33583)

Stacked on facebook#33539.

Stores dedupes of `renderConsoleValue` in a separate set. This allows us
to dedupe objects safely since we can't write objects using this
algorithm if they might also be referenced by the "real" serialization.

Also renamed it to `renderDebugModel` since it's not just for console
anymore.
There's a memory leak in DebugNode where the `Error` objects that we
instantiate retain their callstacks, which can have Promises on them. In
fact, it's very likely, since the current callsite has the "resource" on
it, which is the Promise itself. If those Promises are retained then
their `destroy` async hook is never fired, which means we never clean up
our map, which can contain the `Error` object, creating a cycle that
can't be cleaned up.

This fix is just eagerly reifying and parsing the stacks.

I totally expect this to be crazy slow since there are so many Promises
that we otherwise wouldn't need to visit. We'll need to optimize it
somehow, perhaps by being smarter about which ones we might need stacks
for. However, at least it doesn't leak indefinitely.
…cebook#33591)

It turns out this was being compiled to a `_defineProperty` helper by
Babel or Closure. We're supposed to have it error the build when we use
features like this that might get compiled.

We should stick to simple ES5 features.
…k#33588)

We already support serializing the values of instrumented Promises as
debug values such as in console logs. However, we don't support plain
native promises.

This waits a microtask to see if we can read the value within that time
and, if so, emit it. This is so that we can still close the connection.

Otherwise, we emit a "halted" row into its row id, which replaces the old
"Infinite Promise" reference.

We could potentially wait until the end of the render before cancelling
so that if it resolves before we exit we can still include its value but
that would require a bit more work. Ideally we'd have a way to get these
lazily later anyway.
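
A hedged sketch of the "wait one microtask" idea (not the Flight Server's actual code; status names are illustrative):

```js
// Hedged sketch: give a native promise one microtask to produce a value;
// otherwise treat it as "halted" so the connection can still close.
function peekPromiseValue(promise) {
  return new Promise((resolve) => {
    let settled = false;
    promise.then(
      (value) => { settled = true; resolve({ status: 'fulfilled', value }); },
      (reason) => { settled = true; resolve({ status: 'rejected', reason }); },
    );
    queueMicrotask(() => {
      // Runs after the then-callback of an already-settled promise.
      if (!settled) resolve({ status: 'halted' });
    });
  });
}
```
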
This adds better support for serializing class instances as Debug
values.

It adds a new marker on the object `{ "": "$P...", ... }` which
indicates which constructor's prototype to use for this object's
prototype. It doesn't encode arbitrary prototypes and it doesn't encode
any of the properties on the prototype. It might get some of the
properties from the prototype by virtue of the fact that `toString` on a
`class` constructor includes the whole class body.

This will ensure that the instance gets the right name in logs.

Additionally, this now also invokes getters if they're enumerable on the
prototype. This lets us reify values that can only be read from native
classes.

---------

Co-authored-by: Hendrik Liebau <[email protected]>
…nformation (facebook#33592)

Stacked on facebook#33588, facebook#33589 and facebook#33590.

This lets us automatically show the resolved value in the UI.

<img width="863" alt="Screenshot 2025-06-22 at 12 54 41 AM"
src="https://github.com/user-attachments/assets/a66d1d5e-0513-4767-910c-5c7169fc2df4"
/>

We can also show rejected I/O that may or may not have been handled with
the error message.

<img width="838" alt="Screenshot 2025-06-22 at 12 55 06 AM"
src="https://github.com/user-attachments/assets/e0a8b6ae-08ba-46d8-8cc5-efb60956a1d1"
/>

To get this working we need to keep the Promise around for longer so
that we can access it once we want to emit an async sequence. I do this
by storing WeakRefs, but to ensure that the Promise doesn't get
garbage collected, I keep a WeakMap of each Promise to the Promise that it
depended on. This lets the VM still clean up any Promise chains that
have leaves that are cleaned up. So this makes Promises live until the
last Promise downstream is done. At that point we can go back up the
chain to read the values out of them.
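
A hedged sketch of that retention pattern (illustrative, not the actual Flight code):

```js
// Hedged sketch: WeakRefs let us reach a promise without pinning it, while a
// WeakMap entry from an awaiting promise to the promise it awaited keeps the
// chain alive only as long as its downstream end is alive.
const promiseRefs = new Map();   // debug id -> WeakRef<Promise>
const awaitedBy = new WeakMap(); // awaiting Promise -> awaited Promise

function recordAwait(debugId, awaiting, awaited) {
  promiseRefs.set(debugId, new WeakRef(awaiting));
  awaitedBy.set(awaiting, awaited); // `awaited` lives while `awaiting` lives
}
```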

Additionally, to get the best possible value we don't want to get a
Promise that's used by internals of a third-party function. We want the
value that the first party gets to observe. To do this I had to change
the logic for which "await" to use, to be the one that is the first
await that happened in user space. It's not enough that the await has
any first party at all on the stack - it has to be the very first frame.
This is a little sketchy because it relies on the `.then()` call or
`await` call not having any third party wrappers. But it gives the best
object since it hides all the internals. For example, when you call
`fetch()` we now log the actual `Response` object.
## Summary

Follow-up to facebook#33525

Fixes `Unsupported tag: "false"`
(https://github.com/facebook/react/actions/runs/15773778995/job/44463562733#step:13:12)
which also affects nightly releases.

## How did you test this change?

- [x] Run successful, manual prerelease from this branch:
https://github.com/facebook/react/actions/runs/15774083406
It wasn't immediately obvious to me that all the exports here are
related to legacy context, so I'm renaming for clarity.

Modern context lives in `ReactFiberNewContext`, which we could probably
also rename in a separate step to just Context.
…ok#33620)

This frees some memory that will be even more important in a follow up.

Currently, all `ReactPromise` instances hold onto their original
`Response`. The `Response` holds onto all objects that were in that
response since they're needed in case the parsed content ends up
referring to an existing object. If everything you retain are plain
objects then that's fine and the `Response` gets GC:ed, but if you're
retaining a `Promise` itself then it holds onto the whole `Response`.

The only thing that needs this reference at all is a
`ResolvedModelChunk` since it will lazily initialize e.g. by calling
`.then` on itself and so we need to know where to find any sibling
chunks it may refer to. However, we can just store the `Response` on the
`reason` field for this particular state.

That way when all lazy values are touched and initialized the `Response`
is freed. We also free up some memory by getting rid of the extra field.
…end in DEV (facebook#33627)

This adds plumbing for opening a stream from the Flight Client to the
Flight Server so it can ask for more data on-demand. In this mode, the
Flight Server keeps the connection open as long as the client is still
alive and there's more objects to load. It retains any depth limited
objects so that they can be asked for later. In this first PR it just
releases the object when it's discovered on the server and doesn't
actually lazy load it yet. That's coming in a follow up.

This strategy is built on the model that each request has its own
channel for this. Instead of some global registry. That ensures that
referential identity is preserved within a Request and the Request can
refer to previously written objects by reference.

The fixture implements a WebSocket per request but it doesn't have to be
done that way. It can be multiplexed through an existing WebSocket for
example. The current protocol is just a Readable(Stream) on the server
and a WritableStream on the client. It could even be sent through an HTTP
request body if browsers implemented full duplex (which they don't).

This PR only implements the direction of messages from Client to Server.
However, I also plan on adding a Debug Channel in the other direction to
allow debug info to (optionally) be sent from Server to Client through
this channel instead of through the main RSC request. So the
`debugChannel` option will be able to take a writable or readable stream,
or both.

---------

Co-authored-by: Hendrik Liebau <[email protected]>
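
A hedged sketch of a per-request debug channel like the fixture's, adapting a WebSocket into a WritableStream on the client side (the actual fixture and the `debugChannel` option shape may differ):

```js
// Hedged sketch: adapt a per-request WebSocket into a WritableStream that
// could serve as the client-to-server debug channel.
function webSocketWritable(url) {
  const ws = new WebSocket(url);
  return new WritableStream({
    start() {
      // Wait for the socket to open before accepting writes.
      return new Promise((resolve, reject) => {
        ws.addEventListener('open', () => resolve(), { once: true });
        ws.addEventListener('error', (e) => reject(e), { once: true });
      });
    },
    write(chunk) {
      ws.send(chunk);
    },
    close() {
      ws.close();
    },
  });
}
```
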
Substantially improves the last major known issue with the new inference
model's implementation: inferring effects of function expressions. I
knowingly used a really simple (dumb) approach in
InferFunctionExpressionAliasingEffects but it worked surprisingly well
on a ton of code. However, investigating during the sync I saw that
the algorithm was literally running out of memory, or crashing from
arrays that exceeded the maximum capacity. We were accumulating data
flow in a way that could lead to lists of data flow captures compounding
on themselves and growing very large very quickly. Plus, we were
incorrectly recording some data flow, leading to cases where we reported
false positive "can't mutate frozen value" errors, for example.

So I went back to the drawing board. InferMutationAliasingRanges already
builds up a data flow graph which it uses to figure out what values
would be affected by mutations of other values, and update mutable
ranges. Well, the key question that we really want to answer for
inferring a function expression's aliasing effects is which values
alias/capture where. Per the docs I wrote up, we only have to record
such aliasing _if they are observable via mutations_. So, lightbulb:
simulate mutations of the params, free variables, and return of the
function expression and see which params/free-vars would be affected!
That's what we do now, giving us precise information about which such
values alias/capture where. When the "into" is a param/context-var we
use Capture; when the destination is the return we use Alias to be
conservative.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/33584).
* facebook#33626
* facebook#33625
* facebook#33624
* __->__ facebook#33584
…s Owner Stack (facebook#33629)

This is using the same trick as facebook#30798 but for runtime code too. It's
essentially zero cost.

This lets us include a source location for parent stacks of Server
Components when it has an owned child's location, either from JSX or
I/O.

Ironically, a Component that throws an error will likely itself not get
the stack because it won't have any JSX rendered yet.
…ing a render (facebook#33632)

When we abort a render we don't really have much information about the
task that was aborted, because before a Promise resolves there's no
indication of what would have resolved it. In particular we don't know
which I/O would've ultimately called resolve().

However, we can at least emit any information we do have at the point
where we emit it. At the least, the stack of the topmost Promise.

Currently we synchronously flush at the end of an `abort()` but we
should ideally schedule the flush in a macrotask and emit this debug
information right before that. That way we would give an opportunity for
any `cacheSignal()` abort to trigger rejections all the way up, and those
rejections inform the awaited stack.

---------

Co-authored-by: Hendrik Liebau <[email protected]>