forked from facebook/react
[pull] main from facebook:main #177
Open

pull wants to merge 135 commits into code:main from facebook:main
Conversation
When I added the `ready_for_review` event in #32344, no notifications for opened draft PRs were sent due to some other condition. This is not the case anymore, so we need to exclude draft PRs from triggering a notification when the workflow is run because of an `opened` event. This event is still needed because the `ready_for_review` event only fires when an existing draft PR is converted to a non-draft state. It does not trigger for pull requests that are opened directly as ready-for-review.
## Summary

Enables the `enableEagerAlternateStateNodeCleanup` feature flag for all variants, while maintaining the `__VARIANT__` for the internal React Native flavor for backtesting reasons.

## How did you test this change?

```
$ yarn test
```
This shouldn't be needed now that the lint rule was moved.
Block the view transition on suspensey images, for up to 500ms, just like the client. We can't use `decode()` because of a bug in Chrome where those are blocked on `startViewTransition` finishing. Instead we rely on sync decoding, and on the image being live when it's animating in, assuming it doesn't start out visible. However, we can block the View Transition from starting on the `"load"` or `"error"` events. The nice thing about blocking inside `startViewTransition` is that we have already done the layout, so at that point we only have to wait on images that are within the viewport. We might want to do that in Fiber too. If many images don't have a fixed size but need to load first, they can all end up in the viewport. We might consider only doing this for images that have a fixed size, or only a bounded number of images that don't.
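A rough sketch of the general approach, not React's actual implementation: before letting the transition start, wait (up to a timeout) on the `"load"`/`"error"` events of not-yet-loaded images that are currently within the viewport.

```js
// Hedged sketch: wait up to `timeoutMs` for in-viewport images that haven't
// finished loading yet, resolving on their "load" or "error" events.
function waitForSuspenseyImages(root, timeoutMs = 500) {
  const pending = [];
  for (const img of root.querySelectorAll('img')) {
    const rect = img.getBoundingClientRect();
    const inViewport =
      rect.bottom > 0 &&
      rect.right > 0 &&
      rect.top < window.innerHeight &&
      rect.left < window.innerWidth;
    if (inViewport && !img.complete) {
      pending.push(
        new Promise(resolve => {
          img.addEventListener('load', resolve, {once: true});
          img.addEventListener('error', resolve, {once: true});
        })
      );
    }
  }
  // Resolve when all pending images have settled or when the timeout hits.
  return Promise.race([
    Promise.all(pending),
    new Promise(resolve => setTimeout(resolve, timeoutMs)),
  ]);
}

// Blocking inside startViewTransition means layout has already happened,
// so only in-viewport images are waited on:
// document.startViewTransition(async () => {
//   applyDomUpdate(); // hypothetical DOM mutation
//   await waitForSuspenseyImages(document, 500);
// });
```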
…undaries (#33454) We want to make sure that we can block the reveal of a well designed complete shell reliably. In the Suspense model, client transitions don't have any way to implicitly resolve. This means you need to use Suspense or SuspenseList to explicitly split the document. Relying on implicit splitting would mean you can't add a Suspense boundary later where needed. So we highly encourage the use of them around large content.

However, if you have constructed a too large shell (e.g. by not adding any Suspense boundaries at all) then that might take too long to render on the client. We shouldn't punish users (or overzealous metrics tracking tools like search engines) in that scenario. This opts out of render blocking if the shell ends up too large to be intentional and too slow to load. Instead it deopts to showing the content split up in arbitrary ways (browser default). It only does this for SSR, and not client navs, so it's not something to rely on. In fact, we issue an error to `onError`. This error is recoverable in that the document is still produced. It's up to your framework to decide if this errors the build or just surfaces it for action later.

What should the limit be, though? There's a trade-off here. If this limit is too low then you can't fit a reasonably well built UI within it without getting errors. If it's too high then things that accidentally fall below it might take too long to load. I came up with 512kB of uncompressed shell HTML. See the comment in code for the rationale for this number. TL;DR: Data and theory indicate that having this much content inside `rel="expect"` doesn't meaningfully change metrics. Research of above-the-fold content on various websites indicates that this can comfortably fit all of them, which should be enough for any intentional initial paint.
Follow up to #33442. This is the bundled version. To keep type check passes from exploding and to keep the maintenance of the annoying `paths: []` list small, this doesn't add these to Flow type checks. We might miss some config, but every combination should already be covered by one of the other passes. I also don't add any Jest tests, because to test these double export entry points we need conditional importing to cover builds and non-builds, which turns out to be difficult for the Flight builds, so these aren't covered by any basic build tests. This approach is what I'm going for, for the other bundlers too.
Missed these the last time.
Adding throttling or delays on images can obviously impact metrics. However, it's all in the name of a better actual user experience overall. (Note that it's not strictly worse even for metrics. Often it's actually strictly better due to less work being done overall, thanks to batching.) Metrics can impact things like search ranking, but I believe this is on a curve. If you're already pretty good, then a slight delay won't suddenly make you rank in a completely different category. Similarly, if you're already pretty bad, then a slight delay won't make it suddenly way worse. It's still in the same realm. It's just one weight of many. I don't think this will make a meaningful practical impact, and if it does, that's probably a bug in the weights that will get fixed.

However, because there's a race to try to "make everything green" in terms of web vitals, if you go from green to yellow only because of some throttling or suspensey images, it can feel bad. Therefore this implements a heuristic where, if the only reason we'd miss a specific target is because of throttling or suspensey images, then we shorten the timeout to hit the metric. This is a worse user experience because it can lead to extra flashing, but feeling good about "green" matters too. If you then have another reveal that happens to be the largest contentful paint after that, then that's throttled again so that it doesn't become flashy after that. If you've already missed the deadline then you're not going to hit your metric target anyway. It can affect the average but not the median.

This is mainly about LCP. It doesn't affect FCP since that doesn't have a throttle. If your LCP is the same as your FCP then it also doesn't matter. We assume that `performance.now()`'s zero point starts at the "start of the navigation", which makes this simple. Even if we used the `PerformanceNavigationTiming` API it would just tell us the same thing. This only implements it for Fizz, since these metrics tend to currently only be tracked for initial loads, but with soft navs tracking we could consider implementing the same for Fiber throttles.
When deeply nested Suspense boundaries inside a fallback of another boundary resolve, it is possible to encounter situations where you either attempt to flush an aborted segment or you have a boundary without any root segment. We intended for both of these conditions to be impossible to arrive at legitimately, however it turns out that in this situation you can. The fix is two-fold:

1. Allow flushing aborted segments by simply skipping them. This does remove some protection against future misconfiguration of React, because it is no longer an invariant that you should never attempt to flush an aborted segment, but there are legitimate cases where this can come up and simply omitting the segment is fine because we know that the user will never observe this. A semantically better solution would be to avoid flushing boundaries inside an unneeded fallback, but to do this we would need to track all boundaries inside a fallback or create back pointers, which add to memory overhead and possibly make GC harder to do efficiently. By flushing extra we're maintaining the status quo and only suffer in performance, not with broken semantics.
2. When queuing completed segments, allow for queueing aborted segments, and if we are eliding the enqueued segment, allow for child segments that are errored to be enqueued too. This means that we can maintain the invariant that a boundary must have a root segment the first time we flush it; it just might be aborted (see point 1 above).

This change has two seemingly similar test cases to exercise this fix. The reason we need both is that when you have empty segments you hit different code paths within Fizz, and so each one (without this fix) triggers a different error pathway. This change also includes a fix to our tests where we were not appropriately setting CSPnonce back to null at the start of each test, so in some contexts scripts would not run for some tests.
Reverts #33457, #33456 and #33442. There are too many issues with wrappers, lazy init, stateful modules, duplicate instantiation of async_hooks and duplication of code. Instead, we'll just do a wrapper polyfill that uses Node Streams internally. I kept the client indirection files that I added for consistency with the server though.
This effectively lets us consume Web Streams in a Node build. In fact the Node entry point is now just adding Node stream APIs. For the client, this is simple because the configs are not actually stream type specific. The server is a little trickier.
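As a rough illustration of the general idea (not React's actual wrapper), a Web `ReadableStream` can be surfaced to Node consumers via Node's own bridge, assuming a Node version where `stream.Readable.fromWeb` is available:

```js
// Hedged sketch: expose a Node-style Readable on top of a Web ReadableStream so
// existing Node consumers (pipe(), 'data' events) keep working. `toNodeStream`
// is an illustrative name, not part of React's API.
import {Readable} from 'node:stream';

export function toNodeStream(webReadableStream) {
  return Readable.fromWeb(webReadableStream);
}
```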
New take on #33441. This uses a wrapper instead of a separate bundle.
This needs some tweaks to the implementation and a conversion but simple enough. --------- Co-authored-by: Hendrik Liebau <[email protected]>
…#33478) I noticed that the ThirdPartyComponent in the fixture was showing the wrong stack and the `"use third-party"` badge was in the wrong location.

<img width="628" alt="Screenshot 2025-06-06 at 11 22 11 PM" src="https://github.com/user-attachments/assets/f0013380-d79e-4765-b371-87fd61b3056b" />

When creating the initial JSX inside the third party server, we should make sure that it has no owner. In a real cross-server environment you get this by default just by executing in a different context. But since the fixture example is inside the same AsyncLocalStorage as the parent, it already has an owner which gets transferred. So we should make sure that where we create the JSX it has no owner, to simulate this. When we then parse a null owner on the receiving side, we replace its owner/stack with the owner/stack of the call to `createFrom...` to connect them.

This worked fine with only two environments. The bug was that when we did this and then transferred the result to a third environment, we took the original parsed stack trace. We should instead parse a new one from the replaced stack in the current environment.

The second bug was that the `"use third-party"` badge ends up in the wrong place when we do this kind of thing. That's because the stack of the thing entering the new environment is the call to `createFrom...`, which is in the old environment even though the component itself executes in the new environment. So to see if there's a change we should be comparing the current environment of the task to the owner's environment instead of the next environment after the task.

After:

<img width="494" alt="Screenshot 2025-06-07 at 1 13 28 AM" src="https://github.com/user-attachments/assets/e2e870ba-f125-4526-a853-bd29f164cf09" />
Technically the async call graph potentially spans all the way back to the start of the app, but we don't want to include everything. Similarly, we don't want to include everything from previous components in every child component. So we need some heuristics for filtering out data. What we roughly want to be able to inspect is what might contribute to a Suspense loading sequence, even if it didn't this time, e.g. due to a race condition.

One flaw with the previous approach was that awaiting a cached promise in a sibling that happened to finish after another sibling would be excluded. However, in a different race condition that might end up being used, so I wanted to include an empty "await" in that scenario to have some association from that component.

However, for data that resolved fully before the request even started, it's a little different. This can be things that are part of the start up sequence of the app, or externally cached data. We decided that this should be excluded because it doesn't contribute to the loading sequence in the expected scenario, i.e. if it's cached. Things that end up being cache misses would still be included. If you want to test externally cached data misses, then it's up to you or the framework to simulate those, e.g. by dropping the cache. This also helps cut down on noise, since static / cached data can be excluded in visualizations.

I also apply this principle to forwarding debug info. If you reuse a cached RSC payload, then the Server Component render time and its awaits get clamped to the caller as if it has zero render/await time. The I/O entry is still backdated, but if it was fully resolved before we started then it's completely excluded.
…allow processing function props (#32119)

## Summary

In react-native, props that are passed as functions get converted to a boolean (`true`). This is the default pattern for event handlers in react-native. However, there are reasons why you might want to opt out of this behavior and instead pass along the actual function as the prop. Right now there is no way to do this, and props that are functions always get set to `true`.

The `ViewConfig` attributes already have the API for a `process` function. I simply moved the check for the process function up, so if a ViewConfig's prop attribute configured a process function, it is always called first. This provides an API to opt out of the default behavior.

This is the accompanying PR for react-native:
- facebook/react-native#48777

## How did you test this change?

I modified the code manually in a template react-native app and confirmed it's working. This is a code path you only need in very special cases, so it's a bit hard to provide a test for it. I recorded a video where you can see that the changes are active and the prop is being passed as a native value. For this I created a custom native component with a view config that looked like this:

```js
const viewConfig = {
  uiViewClassName: 'CustomView',
  bubblingEventTypes: {},
  directEventTypes: {},
  validAttributes: {
    nativeProp: {
      process: (nativeProp) => {
        // Identity function that simply returns the prop function callback
        // to opt out of this prop being set to `true` as it's a function
        return nativeProp;
      },
    },
  },
};
```

https://github.com/user-attachments/assets/493534b2-a508-4142-a760-0b1b24419e19

Additionally, I made sure that this doesn't conflict with any existing view configs in react-native. In general, this shouldn't be a breaking change: for existing view configs it didn't make a difference whether you set `myProp: true` or `myProp: { process: () => {...} }`, because as soon as the prop was detected to be a function the config wouldn't be used (which is what this PR fixes). Probably everyone, including the react-native core components, uses `myProp: true` for callback props, so this change should be fine.
The prettier check for this file is currently failing on `main`, after #32119 was merged.
Summary: useEffectEvent values are not meant to be added to the dep array
This adds some I/O to go get the third party thing to test how it overlaps. With #33482, this is what it looks like. The await gets cut off when the third party component starts rendering. I.e. after the latency to start. <img width="735" alt="Screenshot 2025-06-08 at 5 42 46 PM" src="https://github.com/user-attachments/assets/f68d9a84-05a1-4125-b3f0-8f3e4eaaa5c1" /> This doesn't fully simulate everything because it should actually also simulate each chunk of the stream coming back too. We could wrap the ReadableStream to simulate that. In that scenario, it would probably get some awaits on the chunks at the end too.
…paces (#33409)

## Summary

Problem #1: Running the `link-compiler.sh` bash script via the `"prebuild"` script fails if a developer has cloned the `react` repo into a folder that contains _any_ spaces. 3 tests fail because of this.

<img width="1003" alt="fail-1" src="https://github.com/user-attachments/assets/1fbfa9ce-4f84-48d7-b49c-b6e967b8c7ca" />
<img width="1011" alt="fail-2" src="https://github.com/user-attachments/assets/0a8c6371-a2df-4276-af98-38f4784cf0da" />
<img width="1027" alt="fail-3" src="https://github.com/user-attachments/assets/1c4f4429-800c-4b44-b3da-a59ac85a16b9" />

For example, my current folder is: `/Users/wes/Development/Open Source Contributions/react`

The link compiler error returns: `./scripts/react-compiler/link-compiler.sh: line 15: cd: /Users/wes/Development/Open: No such file or directory`

Problem #2: 1 test in `ReactChildren-test.js` fails due to the existing stack trace regex, which should be lightly revised. `([^(\[\n]+)[^\n]*/g` is more robust for stack traces: it captures the function/class name (with dots) and does not break on spaces in file paths. `([\S]+)[^\n]*/g` is simpler but breaks if there are spaces and doesn't handle dotted names well. Additionally, we trim the whitespace off the name to resolve extra spaces breaking this test as well:

```
- in div (at **)
+ in div (at **)
```

<img width="987" alt="fail-4" src="https://github.com/user-attachments/assets/56a673bc-513f-4458-95b2-224129c77144" />

All of the above tests pass if I hyphenate my local folder: `/Users/wes/Development/Open-Source-Contributions/react`

I selfishly want to keep spaces in my folder names. 🫣

## How did you test this change?

**npx yarn prebuild**

Before:
<img width="896" alt="Screenshot at Jun 01 11-42-56" src="https://github.com/user-attachments/assets/4692775c-1e5c-4851-9bd7-e12ed5455e47" />

After:
<img width="420" alt="Screenshot at Jun 01 11-43-42" src="https://github.com/user-attachments/assets/4e303c00-02b7-4540-ba19-927b2d7034fb" />

**npx yarn test**
**npx yarn test ./packages/react/src/\_\_tests\_\_/ReactChildren-test.js**
**npx yarn test -r=xplat --env=development --variant=true --ci --shard=3/5**

Before:
<img width="438" alt="before" src="https://github.com/user-attachments/assets/f5eedb22-18c3-4124-a04b-daa95c0f7652" />

After:
<img width="439" alt="after" src="https://github.com/user-attachments/assets/a94218ba-7c6a-4f08-85d3-57540e9d0029" />
<img width="650" alt="Screenshot at Jun 02 18-03-39" src="https://github.com/user-attachments/assets/3eae993c-a56b-46c8-ae02-d249cb053fe7" />
<img width="685" alt="Screenshot at Jun 03 12-53-47" src="https://github.com/user-attachments/assets/5b2caa33-d3dc-4804-981d-52cb10b6226f" />
…in shadow dom scenario (#33487)

## Summary

Minor changes around CSS and styling of the Settings dialog.

1. The `:root` selector was updated to `:is(:root, :host)` to make CSS variables available on the Shadow Root.
2. CSS tweaks around the Settings dialog: removed references to deleted styles, removed unused styles, ironed out styling for cases when input styles are enhanced by the user agent stylesheet.

## How did you test this change?

| Before | After |
|--------|-------|
|  |  |
|  |  |
|  |  |
|  |  |
Includes #31412. The issue is that `pushTreeFork` stores some global state when reconciling children. This gets popped by `popTreeContext` in `completeWork`. Normally `completeWork` returns its own `Fiber` again if it wants to do a second pass, which will call `pushTreeFork` again in the next pass. However, `SuspenseList` doesn't return itself, it returns the next child to work on. The fix is to keep track of the count and push it again when we return the next child to attempt.

There are still some outstanding issues with hydration. For example, the backwards test still has the wrong behavior in it because it hydrates backwards and so it picks up the DOM nodes in reverse order. `tail="hidden"` also doesn't work correctly. There's also another issue with `useId` and `AsyncIterable` in SuspenseList when there's an unknown number of children. We don't support those showing one at a time yet though, so it's not an issue yet. To fix it we need to add a variable total count to the `useId` algorithm, e.g. by falling back to varint encoding.

---------

Co-authored-by: Rick Hanlon <[email protected]>
Co-authored-by: Ricky <[email protected]>
#33482) Previously you weren't guaranteed to have only advancing time entries (you could jump back in time), but now it omits unnecessary duplicates and clamps automatically if you emit a previous time entry, to enforce forwards order only. The reason I didn't do this originally is because `await` can jump in the order, since we're trying to encode a graph into a flat timeline for simplicity of the protocol and consumers.

```js
async function a() {
  await fetch1();
  await fetch2();
}
async function b() {
  await fetch3();
}
async function foo() {
  const p = a();
  await b();
  return p;
}
```

This can effectively create two parallel sequences:

```
--1.................----2.......--
------3......---------------------
```

This can now be flattened to either:

```
--1.................3---2.......--
```

Or:

```
------3......1......----2.......--
```

Depending on which one we visit first. Regardless, information is lost. I'd say that the second one is a worse encoding of this scenario because it pretends that we weren't waiting for part of the timespan that we were. To solve this I think we should probably make `emitAsyncSequence` create a temporary flat list and then sort it by start time before emitting. Although we weren't actually blocked, since there was some CPU time that was able to proceed to get to 3. So maybe the second one is actually better. If we wanted that consistently we'd have to figure out what the intersection was.

---------

Co-authored-by: Hendrik Liebau <[email protected]>
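A hedged sketch of the clamping behavior described above (names are illustrative, not React's internals): duplicate timestamps are omitted, and any timestamp that would move backwards is clamped to the previous one.

```js
function createForwardOnlyEmitter(write) {
  let lastTime = -Infinity;
  return function emit(entry) {
    let time = entry.time;
    if (time === lastTime) {
      // Omit unnecessary duplicate entries.
      return;
    }
    if (time < lastTime) {
      // Clamp backwards jumps to enforce forwards order only.
      time = lastTime;
    }
    lastTime = time;
    write({...entry, time});
  };
}
```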
The flag is fully rolled out.
…ents (#33675) We generally treat these types of fields as optional on ReactDebugInfo and should on ReactElement too. That way we can consume prod payloads from third parties.
`react-stack-bottom-frame` -> `react_stack_bottom_frame`. This survives `@babel/plugin-transform-function-name`, but now frames will be displayed as `at Object.react_stack_bottom_frame (...)` in V8. Checks that were relying on an exact function name match were updated to use either `.indexOf()` or `.includes()`. For backwards compatibility, both React DevTools and Flight Client will look for both options. I am not so sure about the latter and whether the React version is locked.
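A minimal illustration of the backwards-compatible check (an illustrative helper, not the actual DevTools or Flight code): match on either frame name instead of an exact function-name comparison.

```js
function isReactBottomFrame(frameText) {
  // Accept both the old and the new name of the bottom stack frame.
  return (
    frameText.includes('react_stack_bottom_frame') ||
    frameText.includes('react-stack-bottom-frame')
  );
}
```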
No longer used after #33648
…meline protocol (#33501)

The React API is just that we now accept this protocol as an alternative to a native `AnimationTimeline` to be passed to `startGestureTransition`. This is specifically the DOM version.

```js
interface CustomTimeline {
  currentTime: number;
  animate(animation: Animation): void | (() => void);
}
```

Instead of passing this to the `Animation` that we start to control the View Transition keyframes, we instead invert the control and pass the `Animation` to this one. This lets any custom implementation drive the updates. It can do so by updating the time every frame or by letting it run a time-based animation (such as a momentum scroll). In this case I added a basic polyfill for `ScrollTimeline` in the example, but we'll need a better one.
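A hedged sketch of what a `CustomTimeline` implementation could look like, using scroll position as the driver (a rough stand-in for the `ScrollTimeline` polyfill mentioned above; everything except the `{currentTime, animate}` shape is illustrative):

```js
function createScrollTimelinePolyfill(scroller) {
  const getProgress = () => {
    const max = scroller.scrollHeight - scroller.clientHeight;
    return max > 0 ? scroller.scrollTop / max : 0;
  };
  return {
    get currentTime() {
      // Illustrative unit choice: progress expressed as 0-100.
      return getProgress() * 100;
    },
    animate(animation) {
      // Inverted control: instead of letting the Animation run on its own clock,
      // drive it by updating its currentTime on every scroll.
      const duration = animation.effect.getComputedTiming().duration;
      const update = () => {
        animation.currentTime = getProgress() * duration;
      };
      update();
      scroller.addEventListener('scroll', update, {passive: true});
      // Cleanup function matches the `void | (() => void)` return type.
      return () => scroller.removeEventListener('scroll', update);
    },
  };
}
```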
View Transitions has this annoying quirk where it adds `width` and `height` automatically when generating keyframes, even when they're not needed. This causes them to deopt from running on the compositor thread in both Chrome and Safari. @bramus has a [good article on it](https://www.bram.us/2025/02/07/view-transitions-applied-more-performant-view-transition-group-animations/). In React we can automatically rewrite the keyframes when we're starting a View Transition to drop the `width` and `height` when they have the same value across the keyframes and the same value as the pseudo element. To compare against the pseudo element, we first apply the new keyframes without the width/height and then read it back to see if it has changed. For gestures, we have already cancelled the previous animation, so we can just read out from that.
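A hedged sketch of the rewrite (illustrative, not React's actual code): if `width` and `height` are constant across the generated keyframes, strip them so the animation can stay on the compositor thread. The real change also compares against the pseudo element by applying the stripped keyframes and reading the result back, which is omitted here.

```js
function stripConstantSize(keyframes) {
  const widths = new Set(keyframes.map(frame => frame.width));
  const heights = new Set(keyframes.map(frame => frame.height));
  if (widths.size === 1 && heights.size === 1) {
    // width/height don't change across frames, so they don't need to be animated.
    return keyframes.map(({width, height, ...rest}) => rest);
  }
  return keyframes;
}
```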
<img width="634" alt="Screenshot 2025-06-27 at 1 13 20 PM" src="https://github.com/user-attachments/assets/dc8c488b-4a23-453f-918f-36b245364934" /> We have to be careful with performance in DEV. It can slow down DX since these are ran whether you're currently running a performance trace or not. It can also show up as misleading since these add time to the "Remaining Effects" entry. I'm not adding all props to the entries. Instead, I'm only adding the changed props after diffing and none for initial mount. I'm trying to as much as possible pick a fast path when possible. I'm also only logging this for the "render" entries and not the effects. If we did something for effects, it would be more like checking with dep changed. This could still have a negative effect on dev performance since we're now also using the slower `performance.measure` API when there's a diff.
…3659) Stacked on #33658. Unfortunately `console.timeStamp` has the same bug that `performance.measure` used to have, where equal start/end times stack in call order instead of reverse call order. We rely on that in general, so we should really switch it all back. But there is one case in particular where we always add the same start/end time, and that's for the "triggers": Mount/Unmount/Reconnect/Disconnect. Switching to `console.timeStamp` broke this because they now showed below the thing that mounted.

After:

<img width="726" alt="Screenshot 2025-06-27 at 3 31 16 PM" src="https://github.com/user-attachments/assets/422341c8-bef6-4909-9403-933d76b71508" />

Also fixed a bug where clamped update times could end up logging zero-width entries that stacked up on top of each other, causing a two-row scheduler lane which should always be one row.
…the Promise value (#33662) It's useful to be able to distinguish between different invocations of common helper libraries (like fetch) without having to click through each one. This adds a heuristic to extract a useful description of I/O from the Promise value. We try to find things like `getUser(id) -> User` where `User.id` is the id, or `fetch(url) -> Response` where `Response.url` is the url. For urls we use the filename (or the hostname if there is none) as the short name, if it can fit. The full url is in the tooltip.

<img width="845" alt="Screenshot 2025-06-27 at 7 58 20 PM" src="https://github.com/user-attachments/assets/95f10c08-13a8-449e-97e8-52f0083a65dc" />
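A hedged sketch of the kind of heuristic described (the function and its fallbacks are illustrative, not React's implementation): derive a short label from the resolved value, preferring a URL's filename or hostname, then an `id`-like field.

```js
function describeIOValue(value) {
  if (value === null || typeof value !== 'object') {
    return null;
  }
  if (typeof value.url === 'string') {
    // e.g. fetch(url) -> Response, where Response.url is the url.
    try {
      const url = new URL(value.url);
      const filename = url.pathname.split('/').pop();
      // Prefer the filename; fall back to the hostname when the path is empty.
      return filename || url.hostname;
    } catch {
      return value.url;
    }
  }
  if (value.id != null) {
    // e.g. getUser(id) -> User, where User.id is the id.
    return String(value.id);
  }
  return null;
}
```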
Stacked on #33501. This disables the use of ScrollTimeline when detected in Safari in the recommended SwipeRecognizer approach. I'm instead using a polyfill using touch events on iOS. Safari seems set to [release ScrollTimeline soon](https://webkit.org/blog/16993/news-from-wwdc25-web-technology-coming-this-fall-in-safari-26-beta/). Unfortunately it's not really what you'd expect. First of all, [it's not running in sync with the scroll](https://bugs.webkit.org/show_bug.cgi?id=288402) which is kind of its main point. Instead, it is running at 60fps and out of sync with the scroll just like JS. In fact, it is worse than JS because with JS you can at least spawn CSS animations that run at 120fps. So our polyfill can respond to touches at 60fps while gesturing and then run at 120fps upon release. That's better than with ScrollTimeline. Second, [there's a bug which interrupts scrolling if you start a ViewTransition](https://bugs.webkit.org/show_bug.cgi?id=288795) when the element is being removed as part of that. The element can still respond to touches so in a polyfill this isn't an issue. But it essentially makes it useless to use ScrollTimeline with swipe-away gestures. So we're better off in every scenario by not using it. The UA detection is a bit unfortunate. Not sure if there's something more specific but we also had to do a UA detection for Chrome for View Transitions. Those are the only two we have in all of React. 
…nce Track (#33660) Stacked on #33658 and #33659. If we detect that a component is receiving only deeply equal objects, then we highlight it as potentially problematic and worth looking into (a rough sketch of the condition follows after the list below).

<img width="1055" alt="Screenshot 2025-06-27 at 4 15 28 PM" src="https://github.com/user-attachments/assets/e96c6a05-7fff-4fd7-b59a-36ed79f8e609" />

It's fairly conservative and can bail out for a number of reasons:

- We only log it on the first parent that triggered this case, since other children could be indirect causes.
- If the children have changed then we bail out, since this component will rerender anyway. This means that it won't warn for a lot of cases that receive plain DOM children, since the DOM children won't themselves get logged.
- If the component's total render time including children is 100ms or less, then we skip warning because rerendering might not be a big deal.
- We don't warn if you have shallow equality but could memoize the JSX element itself, since we don't typically recommend that and React Compiler doesn't do that. It only warns if you have nested objects too.
- If the depth of the objects is deeper than the ~3 levels that we print diffs for, then we don't warn since we don't know if they were equal (although we might still warn on a child).
- If the component had any updates scheduled on itself (e.g. setState) then we don't warn since it would rerender anyway. This should really consider Context updates too, but we don't do that atm. Technically you should still memoize the incoming props even if you also had unrelated updates, since it could apply to deeper bailouts.
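A hedged sketch of the deep-equality condition referenced above (illustrative names, capped at roughly the depth we print diffs for, as mentioned in the bail-out list):

```js
function receivedOnlyDeeplyEqualProps(prevProps, nextProps, maxDepth = 3) {
  if (prevProps === nextProps) {
    // Identical references would have bailed out of rendering already.
    return false;
  }
  return deepEqual(prevProps, nextProps, maxDepth);
}

function deepEqual(a, b, depth) {
  if (a === b) return true;
  if (
    depth === 0 ||
    typeof a !== 'object' ||
    typeof b !== 'object' ||
    a === null ||
    b === null
  ) {
    // Too deep or not comparable: be conservative and don't claim equality.
    return false;
  }
  const aKeys = Object.keys(a);
  const bKeys = Object.keys(b);
  if (aKeys.length !== bKeys.length) return false;
  return aKeys.every(key => deepEqual(a[key], b[key], depth - 1));
}
```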
…arty (#33685) If a FlightClient runs inside a FlightServer like fetching from a third party and that logs, then we currently double badge them since we just add on another badge. The issue is that this might be unnecessarily noisy but we also transfer the original format of the current server into the second badge. This extracts our own badge and then adds the environment name as structured data which lets the client decide how to format it. Before: <img width="599" alt="Screenshot 2025-07-02 at 2 30 07 PM" src="https://github.com/user-attachments/assets/4bf26a29-b3a8-4024-8eb9-a3f90dbff97a" /> After: <img width="590" alt="Screenshot 2025-07-02 at 2 32 56 PM" src="https://github.com/user-attachments/assets/f06bbb6d-fbb1-4ae6-b0e3-775849fe3c53" />
## Summary

Follow-up to #33517. With #33517, we now preserve at least some minimal indent. This actually doesn't work with the current setup, because we don't allow the container to overflow, so deeply nested elements will go off the screen.

With these changes, we completely change the approach:
- The layout will be static and it will have a constant indentation that will always be preserved.
- The container will allow overflows, so users will be able to scroll horizontally and vertically.
- We will implement automatic horizontal and vertical scrolls, if the selected element is not in the viewport.
- New: added a vertical delimiter that can be used for simpler visual navigation.

## Demo

### Current public release

https://github.com/user-attachments/assets/58645d42-c6b8-408b-b76f-95fb272f2e1e

### With #33517

https://github.com/user-attachments/assets/845285c8-5a01-4739-bcd7-ffc089e771bf

### This PR

https://github.com/user-attachments/assets/72086b84-8d84-4626-94b3-e22e114e028e
Changes from 6.1.3: * feat: static Components panel layout ([hoxyq](https://github.com/hoxyq) in [#33696](#33696)) * fix: support optionality of structured stack trace function name ([hoxyq](https://github.com/hoxyq) in [#33697](#33697)) * fix: rename bottom stack frame ([hoxyq](https://github.com/hoxyq) in [#33680](#33680))
…33700) Discovered while testing with Hermes.
Follow-up to #33652. Don't know how the others were missed. Double-checked that the Profiler works in dev mode. Now all hooks start with an `!isProfiling` check and return early if it's true.
Same as 6.1.4, but with 2 hotfixes: * fix: check if profiling for all profiling hooks ([hoxyq](https://github.com/hoxyq) in [#33701](#33701)) * fix: fallback to reading string stack trace when failed ([hoxyq](https://github.com/hoxyq) in [#33700](#33700))
This delays the abort by splitting it into a first step that just flags a task as aborted and tracks the time that we aborted. This first step also invokes the `cacheSignal()` abort handler. Then, in a macrotask, we finish flushing the abort (or halt). This ensures that any microtasks after the abort signal can finish flushing, which may emit rejections or fulfill (e.g. if you try/catch the abort or if it was allSettled). These rejections are themselves signals for which promise was blocked on what promise, which forms a graph that we can use for debug info.

Notably this doesn't include any additional data in the output, since we don't include any data produced after the abort. It just uses the additional execution to collect more debug info. The abort itself might not have been spawned from I/O, but it's still interesting to mark Promises that aborted as interesting since they may have been blocked on I/O. So we take the innermost Promise that resolved after the end time (presumably due to the abort signal, but it could also just have finished after, which is still after the abort). Since the microtasks can spawn new Promises after the ones that reject, we ignore any of those that started after the abort.
…ned by then callback (#33713) When a `.then()` callback returns another Promise, there's effectively another "await" on that Promise that happens in the internals but that was not modeled. In effect the Promise returned by `.then()` is blocked on both the original Promise AND the promise returned by the callback. This models that by cloning the original node and treat that as the await on the original Promise. Then we use the existing Node to await the new Promise but its "previous" points to the clone. That way we have a forked node that awaits both. --------- Co-authored-by: Sebastian Sebbie Silbermann <[email protected]>
…ore listed (#33714) These are part of the internals of Promises and async functions even if anonymous functions are otherwise not ignore listed.
If I/O is not awaited in user space in a "previous" path, we used to just drop it on the floor. There are a few strategies we could apply here. My first commit just emits it without an await, but that would mean we don't have an await stack when there's no I/O in a follow up. I went with a strategy where the "previous" I/O is used only if the "next" didn't have I/O. This may still drop I/O on the floor if there are two back to back within internals, for example. It would only log the first one, even though the outer await may have started earlier. It may also log deeper in the "next" path if that had user space stacks, and then the outer await will appear as if it awaited after. So it's not perfect.
…#33718) When we have a debug channel open, it can ask for more objects. It doesn't close until all lazy objects have been explicitly asked for. If an object containing lazy references gets garbage-collected before those objects are asked for or released, then the channel will never close. This ensures that if there are no more PendingChunk and no more ResolvedModelChunk then we can close the connection.

There are two sources of retaining the Response object. On one side we have a handle to it from the stream coming from the server. On the other side we have a handle to it from ResolvedModelChunk to ask for more data when we lazily parse a model. This PR makes a weak handle from the stream to the Response. However, it keeps a strong reference alive whenever we're waiting on a pending chunk, because then the stream might be the root if the only listeners are the callbacks passed to the promise and there are no references to the promise itself. The pending chunks count can end up being zero even if we might get more data, because the references might be inside lazy chunks. In this case the lazy chunks keep the Response alive. When a lazy chunk gets parsed it can find more chunks that then end up pending, to keep the response strongly alive until they resolve.
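A hedged sketch of the retention scheme (illustrative, not the actual Flight Client code): the stream side holds only a `WeakRef` to the response, plus a strong reference whenever chunks are pending, so the response stays collectable once only the stream could reach it.

```js
function createResponseRetainer(response) {
  const weakResponse = new WeakRef(response);
  let strongResponse = null; // keeps the response alive while chunks are pending
  let pendingChunks = 0;

  return {
    retain() {
      if (pendingChunks++ === 0) {
        strongResponse = weakResponse.deref() ?? null;
      }
    },
    release() {
      if (--pendingChunks === 0) {
        // Only lazy, not-yet-parsed chunks (if any) keep the response alive now.
        strongResponse = null;
      }
    },
    get() {
      return strongResponse ?? weakResponse.deref() ?? null;
    },
  };
}
```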
Same as #33716 but without the separate close signal. We'll need the ref count for separate debug channel anyway but I'm not sure we'll need the separate close signal.
…33719) Stacked on #33718. Alternative to #33716. The issue with flushing the Server Components track in its current form is that we need to decide how long to wait before flushing whatever we have. That's because the root's end time will be determined by the end time of that last child. However, if a child isn't actually used then we don't necessarily need to include it in the Server Components track since it wasn't blocking the initial render. This waits for 100ms after the last pending chunk is resolved and if nothing is invoking any more lazy initializers after that then we log the Server Components track with the information we have at that point. We also don't eagerly initialize any chunks that wasn't already initialized so if nothing was rendered, then nothing will be logged. This is somewhat an artifact of the current visualization. If we did another transposed form we wouldn't necessarily need to wait until the end and can log things as they're discovered.
Content in Suspense fallbacks is really not considered part of the Suspense, but since it does have some behavior it should be marked somehow, separately from the Suspense content. A follow-up would be to do the same in Fiber.
This is rolled out to 100%. Let me merge it though.
Good to assert these include the component stack
See Commits and Changes for more details.
Created by pull[bot] (v2.0.0-alpha.1)
Can you help keep this open source service alive? 💖 Please sponsor : )