WebGPURenderer: Optimize WebXR render path. #31134
base: dev
Conversation
📦 Bundle size: full ESM build, minified and gzipped.
🌳 Bundle size after tree-shaking: minimal build including a renderer, camera, empty scene, and dependencies.
I traced the performance down to an issue in Quest browser. I'm investigating why I'm seeing this weird behavior.

Surface 4 | 1000x1760 | color 32bit, depth 24bit, stencil 8bit, MSAA 4, Mode: 2 (SwBinning) | 21 384x256 bins (22 rendered) | 0.48 ms | 64 stages: Render: 0.196ms, StoreColor: 0.052ms, Blit: 0.006ms, StoreDepthStencil: 0.055ms

Surface 6 is the first render pass of the rollercoaster and 7 is the tone mapping. I don't know what is causing surfaces 4, 5, 8 and 9 to trigger, but they're the root of the slow experience.
@Mugen87 I was able to fix the slowdown. Setting the renderer size also changed the pixel backing store of the canvas, and that made it very slow. I also added code to discard depth to gain some performance, as well as a simple test file to validate WebXR without using multiview; that path was broken.
The Windows failure also happens without my changes. See #31145
@@ -1250,6 +1252,7 @@ class Renderer {
frameBufferTarget.scissor.multiplyScalar( this._pixelRatio );
frameBufferTarget.scissorTest = this._scissorTest;
frameBufferTarget.multiview = outputRenderTarget !== null ? outputRenderTarget.multiview : false;
frameBufferTarget.resolveDepthBuffer = outputRenderTarget !== null ? outputRenderTarget.resolveDepthBuffer : true;
This indicates if we need to resolve the depth buffer. I think this needs to be updated for devices that prefer to get depth (i.e. AVP), but I have no way to test this.
What does AVP stand for?
Apple Vision Pro.
It works without a depth buffer but it has better reprojection if one is provided.
src/renderers/common/Renderer.js
Outdated
@@ -1380,7 +1383,7 @@ class Renderer {
renderContext.viewportValue.height >>= activeMipmapLevel;
renderContext.viewportValue.minDepth = minDepth;
renderContext.viewportValue.maxDepth = maxDepth;
renderContext.viewport = renderContext.viewportValue.equals( _screen ) === false;
renderContext.viewport = renderContext.viewportValue.equals( _screen ) === false || this._forceViewPort;
The viewport and screen are both set by setSize, so we need to catch the case where we want them to be set explicitly.
Do you really need _forceViewPort? Since you are not changing the drawing buffer when resizing the renderer during XRManager.renderLayers(), the configured viewport (which uses the layer's width and height) should be different than the drawing buffer size and thus set RenderContext.viewport to true.

RenderContext.viewport only indicates that the viewport update should use the value from the render context (and not the drawing buffer size).
drawingBufferSize is calculated as follows:
return target.set( this._width * this._pixelRatio, this._height * this._pixelRatio ).floor();
so it also uses width and height. Maybe this is wrong? I was worried to change it since it's a big change.
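Standalone, the quoted calculation amounts to the following. This is a plain-object sketch of the arithmetic only, not the actual Vector2-based three.js method:

```javascript
// Sketch of the quoted calculation: the drawing buffer size is the logical
// renderer size scaled by the pixel ratio, floored per component.
function getDrawingBufferSize( width, height, pixelRatio ) {

	return {
		width: Math.floor( width * pixelRatio ),
		height: Math.floor( height * pixelRatio )
	};

}
```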
The implementation of getDrawingBufferSize() is correct, imo. I had to read the code multiple times to understand why you need _forceViewPort.

I think you can remove it when the following lines:

state.viewport( 0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight );
state.viewport( 0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight );

are refactored to:

const { width, height } = this.getDrawingBufferSize( _drawingBufferSize );
state.viewport( 0, 0, width, height );

If you don't update the DOM element when setting the size for layers, gl.drawingBufferWidth and gl.drawingBufferHeight are incorrect. Using getDrawingBufferSize() should report the correct values.
this._renderer.setOutputRenderTarget( layer.renderTarget );
this._renderer.setRenderTarget( null );
renderer.setOutputRenderTarget( layer.renderTarget );
:-\
const { frameBufferTarget, quad } = this._frameBufferTargets.get( layer.renderTarget ) || { frameBufferTarget: null, quad: null };
if ( ! frameBufferTarget ) {
renderer._quad = new QuadMesh( new NodeMaterial() );
Needed because otherwise there's confusion between multiview and regular VR rendering.
@@ -904,7 +935,8 @@ class XRManager extends EventDispatcher {
const projectionlayerInit = {
colorFormat: gl.RGBA8,
depthFormat: glDepthFormat,
scaleFactor: this._framebufferScaleFactor,
clearOnAccess: false
clearOnAccess gives a small benefit because it skips the step where the browser explicitly clears the buffer upon framebuffer attach.
const fb = renderTargetContextData.framebuffers[ renderContext.getCacheKey() ];
state.bindFramebuffer( gl.DRAW_FRAMEBUFFER, fb );
gl.invalidateFramebuffer( gl.DRAW_FRAMEBUFFER, renderTargetContextData.depthInvalidationArray );
This discards the depth buffer before we do the tone mapping steps. Without this, the depth is flushed to main memory, which adds overhead.
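For context, the attachment list passed to gl.invalidateFramebuffer() could be built like this. The GL constants are the standard WebGL2 values; buildDepthInvalidationArray is a hypothetical helper for illustration, not the PR's actual code:

```javascript
// WebGL2 attachment enums (values per the OpenGL ES 3.0 spec).
const GL_DEPTH_ATTACHMENT = 0x8D00;
const GL_DEPTH_STENCIL_ATTACHMENT = 0x821A;

// Hypothetical helper: build the list handed to gl.invalidateFramebuffer()
// so a tiled GPU can drop depth/stencil instead of storing it to memory.
function buildDepthInvalidationArray( hasStencil ) {

	// A combined depth/stencil attachment must be invalidated via
	// DEPTH_STENCIL_ATTACHMENT; otherwise invalidate depth alone.
	return hasStencil ? [ GL_DEPTH_STENCIL_ATTACHMENT ] : [ GL_DEPTH_ATTACHMENT ];

}
```

On tiled architectures like the Quest's Adreno GPU, invalidating before the resolve is what removes the StoreDepthStencil cost seen in the trace earlier in this thread.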
src/renderers/common/XRManager.js
Outdated
for ( const layer of this._layers ) {
layer.renderTarget.isXRRenderTarget = this._session !== null;
layer.renderTarget.hasExternalTextures = layer.renderTarget.isXRRenderTarget;
renderer.setSize( layer.renderTarget.width, layer.renderTarget.height, false, false );
I think you can move this line into the if block below. setSize() only matters if rendering into the default framebuffer. When rendering into a render target defined via setRenderTarget(), the dimensions from the render target are used.
I wonder if we should change the renderer so setOutputRenderTarget() behaves like setRenderTarget(). Meaning, when an output render target is defined, the dimensions of the internal framebuffer target are derived from it. Right now, we only do this for depth:

three.js/src/renderers/common/Renderer.js, line 1246 in 2d978cb:

frameBufferTarget.setSize( width, height, outputRenderTarget !== null ? outputRenderTarget.depth : 1 );

However, keeping the dimensions of the internal framebuffer target in sync with the output render target could simplify things by making certain setSize() calls obsolete.

We use setOutputRenderTarget() only in the context of XR right now (to define a different output target than the default framebuffer), so if you think that design would be helpful we can maybe give it a shot.
"I think you can move this line into the if block below."

done
"I wonder if we should change the renderer so setOutputRenderTarget() behaves like setRenderTarget(). Meaning, when an output render target is defined, the dimensions of the internal framebuffer target are derived from it."

Maybe we can try that in a follow-up PR? I'm a bit worried that some code might look at the renderer's size, so if we don't update it, it might make the wrong decisions.
cc @Mugen87
This fixes some of the performance issues, but since it reaches directly into the renderer, it seems brittle.