diff --git a/README.md b/README.md
index 09e7f12..fa4f1da 100644
--- a/README.md
+++ b/README.md
@@ -3,443 +3,84 @@ WebGL Deferred Shading
**University of Pennsylvania, CIS 565: GPU Programming and Architecture, Project 6**

-* (TODO) YOUR NAME HERE
-* Tested on: (TODO) **Google Chrome 222.2** on
-  Windows 22, i7-2222 @ 2.22GHz 22GB, GTX 222 222MB (Moore 2222 Lab)
+* Levi Cai
+* Tested on: **Google Chrome** on
+  Windows 8, i7-5500U @ 2.4GHz, 12GB, NVIDIA GeForce 940M 2GB

### Live Online

-[![](img/thumb.png)](http://TODO.github.io/Project6-WebGL-Deferred-Shading)
+[![](img/thumb.png)](http://arizonat.github.io/Project6-WebGL-Deferred-Shading/)

### Demo Video

-[![](img/video.png)](TODO)
+(Apologies for the low-quality video; none of the screen recorders seemed to work properly.)

-### (TODO: Your README)
+https://www.youtube.com/watch?v=8UpVNrFkYcw&feature=em-upload_owner

-*DO NOT* leave the README to the last minute! It is a crucial part of the
-project, and we will not be able to grade you without a good README.
+### Features and Optimizations

-This assignment has a considerable amount of performance analysis compared
-to implementation work. Complete the implementation early to leave time!
+All features turned on at once:
+![](img/all.PNG)

-Instructions (delete me)
-========================
+## Toon Shading (with Ramp Shading option)

-This is due at midnight on the evening of Tuesday, October 27.
+Toon shading was implemented in the deferred shading stage. It takes the diffuse cosine term and applies a step function to it, so that only a limited number of intensities are available to a pixel. This was implemented primarily with if-statements and could probably be improved by replacing the if-statements with boolean math. There is an additional option to use ramp shading for the smaller intensity values so that they are more gradual (it simply uses the raw cosine output in that range). Outlines are created by comparing the depth of a pixel to the depths of its neighbors; if they differ by more than a threshold, the pixel is coloured white. The line thickness can be tuned by choosing which neighbors to consider.

-**Summary:** In this project, you'll be introduced to the basics of deferred
-shading and WebGL. You'll use GLSL and WebGL to implement a deferred shading
-pipeline and various lighting and visual effects.
+However, there seems to be very little performance impact.

-**Recommendations:**
-Take screenshots as you go. Use them to document your progress in your README!
+![](img/toon_shading.PNG)

-Read (or at least skim) the full README before you begin, so that you know what
-to expect and what to prepare for.
+## Naive Bloom Effect

-### Running the code
+This effect is accomplished with a simple one-pass post-processing stage. The output of the deferred shader is sampled and each pixel's intensity is increased based on its neighbors' intensities. The amount added is determined by a simple 5x5 block kernel (set to all 0.01's in `deferredRender.js`, but this can be tweaked). This is slow in the sense that each fragment must sample and accumulate 25 values in a loop. However, this did not have as dramatic an impact on performance as one might expect (the difference was not measurable in FPS/ms with stats.js).

-If you have Python, you should be able to run `server.py` to start a server.
-Then, open [`http://localhost:10565/`](http://localhost:10565/) in your browser.
+![](img/bloom.PNG)
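+
+A minimal sketch of the gather described above (illustrative only; the actual pass, with the project's uniforms and kernel values, is `glsl/post/one.frag.glsl` further down in this diff, and `u_resolution` here is just a stand-in name for the real width/height uniforms):
+
+```glsl
+#version 100
+precision highp float;
+
+uniform sampler2D u_color;   // lit output of the deferred pass
+uniform vec2 u_resolution;   // framebuffer size (stand-in name)
+
+varying vec2 v_uv;
+
+void main() {
+    vec4 color = texture2D(u_color, v_uv);
+    // Gather a 5x5 neighborhood and add a small uniform fraction of each
+    // neighbor (a "block" kernel): 25 texture reads per fragment.
+    for (int i = -2; i <= 2; i++) {
+        for (int j = -2; j <= 2; j++) {
+            vec2 offset = vec2(float(i), float(j)) / u_resolution;
+            color += 0.01 * texture2D(u_color, v_uv + offset);
+        }
+    }
+    gl_FragColor = clamp(color, 0.0, 1.0);
+}
+```
+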
-This project requires a WebGL-capable web browser with support for
-`WEBGL_draw_buffers`. You can check for support on
-[WebGL Report](http://webglreport.com/).
+## Bloom Effect with 2-Pass Separable Filter

-Google Chrome seems to work best on all platforms. If you have problems running
-the starter code, use Chrome or Chromium, and make sure you have updated your
-browser and video drivers.
+An improvement to the efficiency of the bloom effect is to use a separable kernel and render it in two separate passes. To do this I added an additional post-processing stage. In the first post-processing stage, a 5x1 kernel is applied; then, in the second stage, a 1x5 kernel is applied. Since this can be thought of as a Gaussian convolution, which is separable (the 2D kernel is the outer product of a 5x1 and a 1x5 kernel), the two passes accomplish the same effect as the single pass above, but each pass only needs 5 texture reads per pixel (10 in total), compared to the 25 from before.

-In Moore 100C, both Chrome and Firefox work.
-See below for notes on profiling/debugging tools.
+![](img/bloom_post2.PNG)

-Use the screenshot button to save a screenshot.
+## Box Scissor Testing vs Sphere Proxy Geometry Optimization

-## Requirements
+The first optimization to make is to only shade fragments that are actually reachable by each light source. There are a variety of ways to determine which fragments are reachable, however.

-**Ask on the mailing list for any clarifications.**
+The most naive implementation is a simple scissor test, which creates a bounding box around each light source (with the sphere of the light's radius inscribed in the box). If a fragment lies outside of a light's box, it does not need to compute contributions from that light source. We can see a debug view of the boxes here (analysis further below):

-In this project, you are given code for:
+![](img/debug_scissor.PNG)

-* Loading OBJ files and color/normal map textures
-* Camera control
-* Partial implementation of deferred shading including many helper functions
+A tighter optimization is to bound the light sphere itself using proxy geometry. This is accomplished with a new vertex shader that scales and translates a unit sphere to each light's position and radius; the transformed sphere's triangles are then rasterized for shading. Though there are now many more triangles to rasterize, we no longer waste time computing light contributions in the regions that the bounding box covers but the sphere does not (bounding box - sphere = the amount of wasted computation per light). (Again, analysis further below.)

-### Required Tasks
+![](img/debug_spheres.PNG)

-**Before doing performance analysis,** you must disable debug mode by changing
-`debugMode` to `false` in `framework.js`. Keep it enabled when developing - it
-helps find WebGL errors *much* more easily.
+### Debug Views

-You will need to perform the following tasks:
+Depth:

-* Complete the deferred shading pipeline so that the Blinn-Phong and Post1
-  shaders recieve the correct input. Go through the Starter Code Tour **before
-  continuing!**
+![](img/depth.PNG)

-**Effects:**
+Positions:

-* Implement deferred Blinn-Phong shading (diffuse + specular) for point lights
-  * With normal mapping (code provided)
-  * For deferred shading, you want to use a lighting model for the point lights
-    which has a limited radius - so that adding a scissor or proxy geometry
-    will not cause parts of the lighting to disappear. It should look very
-    similar both with and without scissor/proxy optimization.
Here is a - convenient lighting model, but you can also use others: - * `float attenuation = max(0.0, u_lightRad - dist_from_surface_to_light);` +![](img/position.PNG) -* Implement one of the following effects: - * Bloom using post-process blur (box or Gaussian) [1] - * Toon shading (with ramp shading + simple depth-edge detection for outlines) +Normals: -**Optimizations:** +![](img/surface_normal.PNG) -* Scissor test optimization: when accumulating shading from each point - light source, only render in a rectangle around the light. - * Show a debug view for this (showing scissor masks clearly), e.g. by - modifying and using `red.frag.glsl` with additive blending and alpha = 0.1. - * Code is provided to compute this rectangle for you, and there are - comments at the relevant place in `deferredRender.js` with more guidance. - * **NOTE:** The provided scissor function is not very accurate - it is a - quick hack which results in some errors (as can be seen in the live - demo). +### Performance Analysis and Additional Optimizations -* Optimized g-buffer format - reduce the number and size of g-buffers: - * Ideas: - * Pack values together into vec4s - * Use 2-component normals - * Quantize values by using smaller texture types instead of gl.FLOAT - * Reduce number of properties passed via g-buffer, e.g. by: - * Applying the normal map in the `copy` shader pass instead of - copying both geometry normals and normal maps - * Reconstructing world space position using camera matrices and X/Y/depth - * For credit, you must show a good optimization effort and record the - performance of each version you test, in a simple table. - * It is expected that you won't need all 4 provided g-buffers for a basic - pipeline - make sure you disable the unused ones. - * See mainly: `copy.frag.glsl`, `deferred/*.glsl`, `deferredSetup.js` - -### Extra Tasks - -You must do at least **10 points** worth of extra features (effects or -optimizations/analysis). - -**Effects:** - -* (3pts) The effect you didn't choose above (bloom or toon shading) - -* (3pts) Screen-space motion blur (blur along velocity direction) [3] - -* (2pts) Allow variability in additional material properties - * Include other properties (e.g. specular coeff/exponent) in g-buffers - * Use this to render objects with different material properties - * These may be uniform across one model draw call, but you'll have to show - multiple models - -**Optimizations/Analysis:** - -* (2pts) Improved screen-space AABB for scissor test - (smaller/more accurate than provided - but beware of CPU/GPU tradeoffs) - -* (3pts) Two-pass **Gaussian** blur using separable convolution (using a second - postprocess render pass) to improve bloom or other 2D blur performance - -* (4-6pts) Light proxies - * (4pts) Instead of rendering a scissored full-screen quad for every light, - render some proxy geometry which covers the part of the screen affected by - the light (e.g. a sphere, for an attenuated point light). - * A model called `sphereModel` is provided which can be drawn in the same - way as the code in `drawScene`. (Must be drawn with a vertex shader which - scales it to the light radius and translates it to the light position.) - * (+2pts) To avoid lighting geometry far behind the light, render the proxy - geometry (e.g. sphere) using an inverted depth test - (`gl.depthFunc(gl.GREATER)`) with depth writing disabled (`gl.depthMask`). - This test will pass only for parts of the screen for which the backside of - the sphere appears behind parts of the scene. 
- * Note that the copy pass's depth buffer must be bound to the FBO during - this operation! - * Show a debug view for this (showing light proxies) - * Compare performance of this, naive, and scissoring. - -* (8pts) Tile-based deferred shading with detailed performance comparison - * On the CPU, check which lights overlap which tiles. Then, render each tile - just once for all lights (instead of once for each light), applying only - the overlapping lights. - * The method is described very well in - [Yuqin & Sijie's README](https://github.com/YuqinShao/Tile_Based_WebGL_DeferredShader/blob/master/README.md#algorithm-details). - * This feature requires allocating the global light list and tile light - index lists as shown at this link. These can be implemented as textures. - * Show a debug view for this (number of lights per tile) - -* (6pts) Deferred shading without multiple render targets - (i.e. without WEBGL_draw_buffers). - * Render the scene once for each target g-buffer, each time into a different - framebuffer object. - * Include a detailed performance analysis, comparing with/without - WEBGL_draw_buffers (like in the - [Mozilla blog article](https://hacks.mozilla.org/2014/01/webgl-deferred-shading/)). - -* (2-6pts) Compare performance to equivalently-lit forward-rendering: - * (2pts) With no forward-rendering optimizations - * (+2pts) Coarse, per-object back-to-front sorting of geometry for early-z - * (Of course) must render many objects to test - * (+2pts) Z-prepass for early-z - -This extra feature list is not comprehensive. If you have a particular idea -that you would like to implement, please **contact us first** (preferably on -the mailing list). - -**Where possible, all features should be switchable using the GUI panel in -`ui.js`.** - -### Performance & Analysis - -**Before doing performance analysis,** you must disable debug mode by changing -`debugMode` to `false` in `framework.js`. Keep it enabled when developing - it -helps find WebGL errors *much* more easily. - -Optimize your JavaScript and/or GLSL code. Chrome/Firefox's profiling tools -(see Resources section) will be useful for this. For each change -that improves performance, show the before and after render times. - -For each new *effect* feature (required or extra), please -provide the following analysis: - -* Concise overview write-up of the feature. -* Performance change due to adding the feature. - * If applicable, how do parameters (such as number of lights, etc.) - affect performance? Show data with simple graphs. - * Show timing in milliseconds, not FPS. -* If you did something to accelerate the feature, what did you do and why? -* How might this feature be optimized beyond your current implementation? - -For each *performance* feature (required or extra), please provide: - -* Concise overview write-up of the feature. -* Detailed performance improvement analysis of adding the feature - * What is the best case scenario for your performance improvement? What is - the worst? Explain briefly. - * Are there tradeoffs to this performance feature? Explain briefly. - * How do parameters (such as number of lights, tile size, etc.) affect - performance? Show data with graphs. - * Show timing in milliseconds, not FPS. - * Show debug views when possible. - * If the debug view correlates with performance, explain how. - -### Starter Code Tour - -You'll be working mainly in `deferredRender.js` using raw WebGL. Three.js is -included in the project for various reasons. 
You won't use it for much, but its -matrix/vector types may come in handy. - -It's highly recommended that you use the browser debugger to inspect variables -to get familiar with the code. At any point, you can also -`console.log(some_var);` to show it in the console and inspect it. - -The setup in `deferredSetup` is already done for you, for many of the features. -If you want to add uniforms (textures or values), you'll change them here. -Therefore, it is recommended that you review the comments to understand the -process, BEFORE starting work in `deferredRender`. - -In `deferredRender`, start at the **START HERE!** comment. -Work through the appropriate `TODO`s as you go - most of them are very -small. Test incrementally (after implementing each part, instead of testing -all at once). -* (The first thing you should be doing is implementing the fullscreen quad!) -* See the note in the Debugging section on how to test the first part of the - pipeline incrementally. - -Your _next_ first goal should be to get the debug views working. -Add code in `debug.frag.glsl` to examine your g-buffers before trying to -render them. (Set the debugView in the UI to show them.) - -For editing JavaScript, you can use a simple editor with syntax highlighting -such as Sublime, Vim, Emacs, etc., or the editor built into Chrome. - -* `js/`: JavaScript files for this project. - * `main.js`: Handles initialization of other parts of the program. - * `framework.js`: Loads the scene, camera, etc., and calls your setup/render - functions. Hopefully, you won't need to change anything here. - * `deferredSetup.js`: Deferred shading pipeline setup code. - * `createAndBind(Depth/Color)TargetTexture`: Creates empty textures for - binding to frame buffer objects as render targets. - * `deferredRender.js`: Your deferred shading pipeline execution code. - * `renderFullScreenQuad`: Renders a full-screen quad with the given shader - program. - * `ui.js`: Defines the UI using - [dat.GUI](https://workshop.chromeexperiments.com/examples/gui/). - * The global variable `cfg` can be accessed anywhere in the code to read - configuration values. - * `utils.js`: Utilities for JavaScript and WebGL. - * `abort`: Aborts the program and shows an error. - * `loadTexture`: Loads a texture from a URL into WebGL. - * `loadShaderProgram`: Loads shaders from URLs into a WebGL shader program. - * `loadModel`: Loads a model into WebGL buffers. - * `readyModelForDraw`: Configures the WebGL state to draw a model. - * `drawReadyModel`: Draws a model which has been readied. - * `getScissorForLight`: Computes an approximate scissor rectangle for a - light in world space. -* `glsl/`: GLSL code for each part of the pipeline: - * `clear.*.glsl`: Clears each of the `NUM_GBUFFERS` g-buffers. - * `copy.*.glsl`: Performs standard rendering without any fragment shading, - storing all of the resulting values into the `NUM_GBUFFERS` g-buffers. - * `quad.vert.glsl`: Minimal vertex shader for rendering a single quad. - * `deferred.frag.glsl`: Deferred shading pass (for lighting calculations). - Reads from each of the `NUM_GBUFFERS` g-buffers. - * `post1.frag.glsl`: First post-processing pass. -* `lib/`: JavaScript libraries. -* `models/`: OBJ models for testing. Sponza is the default. -* `index.html`: Main HTML page. -* `server.bat` (Windows) or `server.py` (OS X/Linux): - Runs a web server at `localhost:10565`. - -### The Deferred Shading Pipeline - -See the comments in `deferredSetup.js`/`deferredRender.js` for low-level guidance. 
- -In order to enable and disable effects using the GUI, upload a vec4 uniform -where each component is an enable/disable flag. In JavaScript, the state of the -UI is accessible anywhere as `cfg.enableEffect0`, etc. - -**Pass 1:** Renders the scene geometry and its properties to the g-buffers. -* `copy.vert.glsl`, `copy.frag.glsl` -* The framebuffer object `pass_copy.fbo` must be bound during this pass. -* Renders into `pass_copy.depthTex` and `pass_copy.gbufs[i]`, which need to be - attached to the framebuffer. - -**Pass 2:** Performs lighting and shading into the color buffer. -* `quad.vert.glsl`, `deferred/blinnphong-pointlight.frag.glsl` -* Takes the g-buffers `pass_copy.gbufs`/`depthTex` as texture inputs to the - fragment shader, on uniforms `u_gbufs` and `u_depth`. -* `pass_deferred.fbo` must be bound. -* Renders into `pass_deferred.colorTex`. - -**Pass 3:** Performs post-processing. -* `quad.vert.glsl`, `post/one.frag.glsl` -* Takes `pass_BlinnPhong_PointLight.colorTex` as a texture input `u_color`. -* Renders directly to the screen if there are no additional passes. - -More passes may be added for additional effects (e.g. combining bloom with -motion blur) or optimizations (e.g. two-pass Gaussian blur for bloom) - -#### Debugging - -If there is a WebGL error, it will be displayed on the developer console and -the renderer will be aborted. To find out where the error came from, look at -the backtrace of the error (you may need to click the triangle to expand the -message). The line right below `wrapper @ webgl-debug.js` will point to the -WebGL call that failed. - -When working in the early pipeline (before you have a lit render), it can be -useful to render WITHOUT post-processing. To do this, you have to make sure -that there is NO framebuffer bound while rendering to the screen (that is, bind -null) so that the output will display to the screen instead of saving into a -texture. Writing to gl_FragData[0] is the same as writing to gl_FragColor, so -you'll see whatever you were storing into the first g-buffer. - -#### Changing the number of g-buffers - -Note that the g-buffers are just `vec4`s - you can put any values you want into -them. However, if you want to change the total number of g-buffers (add more -for additional effects or remove some for performance), you will need to make -changes in a number of places: - -* `deferredSetup.js`/`deferredRender.js`: search for `NUM_GBUFFERS` -* `copy.frag.glsl` -* `deferred.frag.glsl` -* `clear.frag.glsl` - - -## Resources - -* [1] Bloom: - [GPU Gems, Ch. 21](http://http.developer.nvidia.com/GPUGems/gpugems_ch21.html) -* [2] Screen-Space Ambient Occlusion: - [Floored Article](http://floored.com/blog/2013/ssao-screen-space-ambient-occlusion.html) -* [3] Post-Process Motion Blur: - [GPU Gems 3, Ch. 27](http://http.developer.nvidia.com/GPUGems3/gpugems3_ch27.html) - -**Also see:** The articles linked in the course schedule. - -### Profiling and debugging tools - -Built into Firefox: -* Canvas inspector -* Shader Editor -* JavaScript debugger and profiler - -Built into Chrome: -* JavaScript debugger and profiler - -Plug-ins: -* Web Tracing Framework - **Does not currently work with multiple render targets**, - which are used in the starter code. -* (Chrome) [Shader Editor](https://chrome.google.com/webstore/detail/shader-editor/ggeaidddejpbakgafapihjbgdlbbbpob) - -Libraries: -* Stats.js (already included) - -Firefox can also be useful - it has a canvas inspector, WebGL profiling and a -shader editor built in. 
+
-## README

+One optimization used was to reduce the number of g-buffers from 4 to 3. This was achieved by splitting the albedo `vec3` and packing each of its components into the spare fourth channel of one of the remaining g-buffers (see `glsl/copy.frag.glsl` below).

-Replace the contents of this README.md in a clear manner with the following:

+Below is a comparison between the number of lights and the optimization technique used. It is easy to see that the number of lights greatly affects the runtime, as many more fragments must be shaded. However, it is also clear that the spherical optimization is better than the naive bounding-box method, which, in turn, is significantly better than no optimization at all.

-* A brief description of the project and the specific features you implemented.
-* At least one screenshot of your project running.
-* A 30+ second video of your project running showing all features.
-  [Open Broadcaster Software](http://obsproject.com) is recommended.
-  (Even though your demo can be seen online, using multiple render targets
-  means it won't run on many computers. A video will work everywhere.)
-* A performance analysis (described below).
+![](img/fps_lights.png)

-### Performance Analysis

+The following comparison was made by zooming out from the building, in order to show that the number of fragments that must be rasterized greatly affects performance. As we zoom out, fewer fragments must be rendered (shown as a % of the screen covered), and hence performance is much better.

-See above.

+![](img/fps_scissor_fragments.png)

-### GitHub Pages

+We can also see from the Chrome profiler that a fairly significant portion of time is spent passing uniforms into the shaders.

-Since this assignment is in WebGL, you can make your project easily viewable by
-taking advantage of GitHub's project pages feature.

-Once you are done with the assignment, create a new branch:

-`git branch gh-pages`

-Push the branch to GitHub:

-`git push origin gh-pages`

-Now, you can go to `.github.io/` to see your
-renderer online from anywhere. Add this link to your README.

-## Submit

-1. Open a GitHub pull request so that we can see that you have finished.
-   The title should be "Submission: YOUR NAME".
-   * **ADDITIONALLY:**
-     In the body of the pull request, include a link to your repository.
-2. Send an email to the TA (gmail: kainino1+cis565@) with:
-   * **Subject**: in the form of `[CIS565] Project N: PENNKEY`.
-   * Direct link to your pull request on GitHub.
-   * Estimate the amount of time you spent on the project.
-   * If there were any outstanding problems, briefly explain.
-   * **List the extra features you did.**
-   * Feedback on the project itself, if any.

-### Third-Party Code Policy

-* Use of any third-party code must be approved by asking on our mailing list.
-* If it is approved, all students are welcome to use it. Generally, we approve
-  use of third-party code that is not a core part of the project. For example,
-  for the path tracer, we would approve using a third-party library for loading
-  models, but would not approve copying and pasting a CUDA function for doing
-  refraction.
-* Third-party code **MUST** be credited in README.md.
-* Using third-party code without its approval, including using another
-  student's code, is an academic integrity violation, and will, at minimum,
-  result in you receiving an F for the semester.
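+
+One way to reduce this cost (an untested sketch against the per-light loop in `deferredRender.js` below; it assumes the `cameraPos`, `settings`, `width`, and `height` values already computed in `R.pass_deferred.render`) is to upload the loop-invariant uniforms once per frame and only the per-light uniforms inside the loop, since uniform values are per-program state in WebGL and persist across draw calls:
+
+```js
+// Upload the uniforms that do not change per light once, outside the loop.
+gl.useProgram(R.prog_BlinnPhong_PointLight.prog);
+gl.uniform3fv(R.prog_BlinnPhong_PointLight.u_cameraPos, cameraPos);
+gl.uniform4fv(R.prog_BlinnPhong_PointLight.u_settings, settings);
+gl.uniform1f(R.prog_BlinnPhong_PointLight.u_camera_width, width);
+gl.uniform1f(R.prog_BlinnPhong_PointLight.u_camera_height, height);
+
+for (var i = 0; i < R.lights.length; i++) {
+    // Only the per-light values are uploaded each iteration.
+    gl.uniform3fv(R.prog_BlinnPhong_PointLight.u_lightPos, R.lights[i].pos);
+    gl.uniform3fv(R.prog_BlinnPhong_PointLight.u_lightCol, R.lights[i].col);
+    gl.uniform1f(R.prog_BlinnPhong_PointLight.u_lightRad, R.lights[i].rad);
+    renderFullScreenQuad(R.prog_BlinnPhong_PointLight);
+}
+```
+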
+![](img/chrome_profiler.PNG) diff --git a/glsl/clear.frag.glsl b/glsl/clear.frag.glsl index b4e4ff3..53d8f4c 100644 --- a/glsl/clear.frag.glsl +++ b/glsl/clear.frag.glsl @@ -3,7 +3,7 @@ precision highp float; precision highp int; -#define NUM_GBUFFERS 4 +#define NUM_GBUFFERS 3 void main() { for (int i = 0; i < NUM_GBUFFERS; i++) { diff --git a/glsl/copy.frag.glsl b/glsl/copy.frag.glsl index 0f5f8f7..09ddeb7 100644 --- a/glsl/copy.frag.glsl +++ b/glsl/copy.frag.glsl @@ -10,6 +10,12 @@ varying vec3 v_position; varying vec3 v_normal; varying vec2 v_uv; +#define NUM_GBUFFERS 3 + void main() { - // TODO: copy values into gl_FragData[0], [1], etc. + vec3 normap = texture2D(u_normap, v_uv).xyz; + vec3 colmap = texture2D(u_colmap, v_uv).xyz; + gl_FragData[0] = vec4(v_position, colmap.x); + gl_FragData[1] = vec4(v_normal, colmap.y); + gl_FragData[2] = vec4(normap, colmap.z); } diff --git a/glsl/deferred/ambient.frag.glsl b/glsl/deferred/ambient.frag.glsl index 1fd4647..a94cfe2 100644 --- a/glsl/deferred/ambient.frag.glsl +++ b/glsl/deferred/ambient.frag.glsl @@ -3,7 +3,7 @@ precision highp float; precision highp int; -#define NUM_GBUFFERS 4 +#define NUM_GBUFFERS 3 uniform sampler2D u_gbufs[NUM_GBUFFERS]; uniform sampler2D u_depth; @@ -14,7 +14,6 @@ void main() { vec4 gb0 = texture2D(u_gbufs[0], v_uv); vec4 gb1 = texture2D(u_gbufs[1], v_uv); vec4 gb2 = texture2D(u_gbufs[2], v_uv); - vec4 gb3 = texture2D(u_gbufs[3], v_uv); float depth = texture2D(u_depth, v_uv).x; // TODO: Extract needed properties from the g-buffers into local variables diff --git a/glsl/deferred/blinnphong-pointlight.frag.glsl b/glsl/deferred/blinnphong-pointlight.frag.glsl index b24a54a..d93d1dd 100644 --- a/glsl/deferred/blinnphong-pointlight.frag.glsl +++ b/glsl/deferred/blinnphong-pointlight.frag.glsl @@ -2,10 +2,17 @@ precision highp float; precision highp int; -#define NUM_GBUFFERS 4 +#define NUM_GBUFFERS 3 +uniform vec4 u_settings; + +uniform vec3 u_cameraPos; uniform vec3 u_lightCol; uniform vec3 u_lightPos; + +uniform float u_camera_width; +uniform float u_camera_height; + uniform float u_lightRad; uniform sampler2D u_gbufs[NUM_GBUFFERS]; uniform sampler2D u_depth; @@ -21,19 +28,83 @@ vec3 applyNormalMap(vec3 geomnor, vec3 normap) { } void main() { + /* vec4 gb0 = texture2D(u_gbufs[0], v_uv); vec4 gb1 = texture2D(u_gbufs[1], v_uv); vec4 gb2 = texture2D(u_gbufs[2], v_uv); - vec4 gb3 = texture2D(u_gbufs[3], v_uv); float depth = texture2D(u_depth, v_uv).x; - // TODO: Extract needed properties from the g-buffers into local variables + */ + + vec2 guv = gl_FragCoord.xy / vec2(u_camera_width, u_camera_height); + + vec4 gb0 = texture2D(u_gbufs[0], guv); + vec4 gb1 = texture2D(u_gbufs[1], guv); + vec4 gb2 = texture2D(u_gbufs[2], guv); + float depth = texture2D(u_depth, guv).x; + + vec3 pos = gb0.xyz; // World-space position + vec3 geomnor = gb1.xyz; // Normals of the geometry as defined, without normal mapping + vec3 color = vec3(gb0.w,gb1.w,gb2.w); // The color map - unlit "albedo" (surface color) + vec3 normap = gb2.xyz; // The raw normal map (normals relative to the surface they're on) + vec3 nor = applyNormalMap(geomnor, normap); // The true normals as we want to light them - with the normal map applied to the geometry normals (applyNormalMap above) + + float toonShading = u_settings[0]; + float rampShading = u_settings[1]; // If nothing was rendered to this pixel, set alpha to 0 so that the // postprocessing step can render the sky color. 
if (depth == 1.0) { - gl_FragColor = vec4(0, 0, 0, 0); + gl_FragColor = vec4(0, 0, 0, 1); return; } - gl_FragColor = vec4(0, 0, 1, 1); // TODO: perform lighting calculations + if (toonShading == 1.0){ + float thresh = 0.005; + float neighbor; + + for (int i = -1; i <= 1; i++){ + for (int j = -1; j <= 1; j++){ + if (i == 0 && j == 0) continue; + neighbor = texture2D(u_depth, guv + vec2(float(i)/u_camera_width, float(j)/u_camera_height)).x; + + if (abs(depth - neighbor) > thresh){ + gl_FragColor = vec4(1.0); + return; + } + } + } + } + + float dist = length(u_lightPos - pos); + + // Diffuse + vec3 lightDir = normalize(u_lightPos - pos); + float diffuse = dot(nor, lightDir); + + // Specular + vec3 cameraDir = normalize(u_cameraPos - pos); + vec3 halfVector = normalize(lightDir + cameraDir); + float specular = dot(nor, halfVector); + + vec3 fragColor = color.rgb * u_lightCol; + + // Toon shading + if (toonShading == 1.0){ + if (diffuse > 0.6){ + fragColor *= 0.6; + } else if (diffuse > 0.3) { + fragColor *= 0.3; + } else { + //fragColor *= 0.1; + fragColor *= (1.0-rampShading)*0.1 + (rampShading)*diffuse; + } + fragColor *= max(0.0,(u_lightRad - dist)) * 0.3; + + // Normal shading + } else { + fragColor *= diffuse * max(0.0,(u_lightRad - dist)) * 0.3; + fragColor += color.rgb * specular * max(0.0,(u_lightRad - dist)) * 0.3; + } + + gl_FragColor = vec4(fragColor, 1.0); } diff --git a/glsl/deferred/debug.frag.glsl b/glsl/deferred/debug.frag.glsl index 9cbfae4..55a61a6 100644 --- a/glsl/deferred/debug.frag.glsl +++ b/glsl/deferred/debug.frag.glsl @@ -2,7 +2,7 @@ precision highp float; precision highp int; -#define NUM_GBUFFERS 4 +#define NUM_GBUFFERS 3 uniform int u_debug; uniform sampler2D u_gbufs[NUM_GBUFFERS]; @@ -24,15 +24,13 @@ void main() { vec4 gb0 = texture2D(u_gbufs[0], v_uv); vec4 gb1 = texture2D(u_gbufs[1], v_uv); vec4 gb2 = texture2D(u_gbufs[2], v_uv); - vec4 gb3 = texture2D(u_gbufs[3], v_uv); float depth = texture2D(u_depth, v_uv).x; - // TODO: Extract needed properties from the g-buffers into local variables - // These definitions are suggested for starting out, but you will probably want to change them. 
- vec3 pos; // World-space position - vec3 geomnor; // Normals of the geometry as defined, without normal mapping - vec3 colmap; // The color map - unlit "albedo" (surface color) - vec3 normap; // The raw normal map (normals relative to the surface they're on) - vec3 nor; // The true normals as we want to light them - with the normal map applied to the geometry normals (applyNormalMap above) + + vec3 pos = gb0.xyz; // World-space position + vec3 geomnor = gb1.xyz; // Normals of the geometry as defined, without normal mapping + vec3 colmap = vec3(gb0.w,gb1.w,gb2.w); // The color map - unlit "albedo" (surface color) + vec3 normap = gb2.xyz; // The raw normal map (normals relative to the surface they're on) + vec3 nor = applyNormalMap(geomnor, normap); // The true normals as we want to light them - with the normal map applied to the geometry normals (applyNormalMap above) if (u_debug == 0) { gl_FragColor = vec4(vec3(depth), 1.0); diff --git a/glsl/post/one.frag.glsl b/glsl/post/one.frag.glsl index 94191cd..3d10e44 100644 --- a/glsl/post/one.frag.glsl +++ b/glsl/post/one.frag.glsl @@ -8,13 +8,48 @@ varying vec2 v_uv; const vec4 SKY_COLOR = vec4(0.01, 0.14, 0.42, 1.0); +uniform float u_width; +uniform float u_height; +uniform vec4 u_settings; + +uniform float u_block_kernel[25]; +uniform float u_kernel[5]; + void main() { - vec4 color = texture2D(u_color, v_uv); + vec2 guv = gl_FragCoord.xy / vec2(u_width,u_height); + vec4 color = texture2D(u_color, guv); if (color.a == 0.0) { gl_FragColor = SKY_COLOR; return; } - gl_FragColor = color; + // Naive 1-pass bloom filter + if (u_settings.x == 1.0){ + vec2 n_uv; + vec4 n_color; + float k; + + for (int i=-2; i <= 2; i++){ + for (int j=-2; j <= 2; j++){ + n_uv = guv + vec2(float(i)/800.0, float(j)/600.0); + n_color = texture2D(u_color, n_uv); + k = u_block_kernel[(i+2)+(j+2)*5]; + color += k*n_color; + } + } + + // 2-pass separable filter (vertical first) + } else if (u_settings.y == 1.0) { + vec2 n_uv; + vec4 n_color; + + for (int i=-2; i <= 2; i++){ + n_uv = guv + vec2(0.0, float(i)/600.0); + n_color = texture2D(u_color, n_uv); + color += u_kernel[i+2]*n_color; + } + } + + gl_FragColor = clamp(color, 0.0, 1.0); } diff --git a/glsl/post/two.frag.glsl b/glsl/post/two.frag.glsl new file mode 100644 index 0000000..362de32 --- /dev/null +++ b/glsl/post/two.frag.glsl @@ -0,0 +1,35 @@ +#version 100 +precision highp float; +precision highp int; + +uniform sampler2D u_color; + +varying vec2 v_uv; + +const vec4 SKY_COLOR = vec4(0.01, 0.14, 0.42, 1.0); + +uniform float u_width; +uniform float u_height; +uniform vec4 u_settings; + +uniform float u_kernel[5]; + +void main() { + vec4 color = texture2D(u_color, v_uv); + + // Separable filter + if (u_settings[1] == 1.0){ + vec2 n_uv; + vec4 n_color; + + for (int i=-2; i <= 2; i++){ + n_uv = v_uv + vec2(float(i)/800.0, 0.0); + n_color = texture2D(u_color, n_uv); + color.r += u_kernel[i+2]*n_color.r; + color.g += u_kernel[i+2]*n_color.g; + color.b += u_kernel[i+2]*n_color.b; + } + } + + gl_FragColor = clamp(color, 0.0, 1.0); +} diff --git a/glsl/red.frag.glsl b/glsl/red.frag.glsl index f8ef1ec..f0e249c 100644 --- a/glsl/red.frag.glsl +++ b/glsl/red.frag.glsl @@ -3,5 +3,5 @@ precision highp float; precision highp int; void main() { - gl_FragColor = vec4(1, 0, 0, 1); -} + gl_FragColor = vec4(1.0, 0.0, 0.0, 0.1); +} \ No newline at end of file diff --git a/glsl/sphere.vert.glsl b/glsl/sphere.vert.glsl new file mode 100644 index 0000000..5f2cfc9 --- /dev/null +++ b/glsl/sphere.vert.glsl @@ -0,0 +1,22 @@ +#version 100 
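+// Vertex shader for the light proxy geometry: it scales the unit sphere by the
+// light radius (u_lightTrans.w) and translates it to the light position
+// (u_lightTrans.xyz) before projecting with u_cameraMat.
+// Note: gl_Position is still in clip space here (no perspective divide), so the
+// v_uv computed from it is not a true screen-space UV; the deferred fragment
+// shader recomputes its UV from gl_FragCoord instead.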
+precision highp float; +precision highp int; + +attribute vec3 a_position; + +uniform mat4 u_cameraMat; +uniform vec4 u_lightTrans; // [pos, radius] + +varying vec2 v_uv; + +void main() { + + float scale = u_lightTrans.w; + vec3 translation = u_lightTrans.xyz; + + vec4 position = vec4( (a_position * scale) + translation, 1.0); + + gl_Position = u_cameraMat * position; // gl_Position should be in NDC coordinates + + v_uv = gl_Position.xy * 0.5 + 0.5; +} diff --git a/img/all.PNG b/img/all.PNG new file mode 100644 index 0000000..70f49d6 Binary files /dev/null and b/img/all.PNG differ diff --git a/img/bloom.PNG b/img/bloom.PNG new file mode 100644 index 0000000..0ea2759 Binary files /dev/null and b/img/bloom.PNG differ diff --git a/img/bloom_post2.PNG b/img/bloom_post2.PNG new file mode 100644 index 0000000..4adc3c6 Binary files /dev/null and b/img/bloom_post2.PNG differ diff --git a/img/chrome_profiler.PNG b/img/chrome_profiler.PNG new file mode 100644 index 0000000..02e8cc4 Binary files /dev/null and b/img/chrome_profiler.PNG differ diff --git a/img/color.PNG b/img/color.PNG new file mode 100644 index 0000000..922a06f Binary files /dev/null and b/img/color.PNG differ diff --git a/img/debug_scissor.PNG b/img/debug_scissor.PNG new file mode 100644 index 0000000..3866491 Binary files /dev/null and b/img/debug_scissor.PNG differ diff --git a/img/debug_spheres.PNG b/img/debug_spheres.PNG new file mode 100644 index 0000000..a9ad07f Binary files /dev/null and b/img/debug_spheres.PNG differ diff --git a/img/depth.PNG b/img/depth.PNG new file mode 100644 index 0000000..5b2cda1 Binary files /dev/null and b/img/depth.PNG differ diff --git a/img/fps_lights.png b/img/fps_lights.png new file mode 100644 index 0000000..ca75782 Binary files /dev/null and b/img/fps_lights.png differ diff --git a/img/fps_scissor_fragments.png b/img/fps_scissor_fragments.png new file mode 100644 index 0000000..1237aac Binary files /dev/null and b/img/fps_scissor_fragments.png differ diff --git a/img/fps_scissor_lights.png b/img/fps_scissor_lights.png new file mode 100644 index 0000000..9c1259f Binary files /dev/null and b/img/fps_scissor_lights.png differ diff --git a/img/geometry.PNG b/img/geometry.PNG new file mode 100644 index 0000000..c095791 Binary files /dev/null and b/img/geometry.PNG differ diff --git a/img/initial_spheres.PNG b/img/initial_spheres.PNG new file mode 100644 index 0000000..a9ad07f Binary files /dev/null and b/img/initial_spheres.PNG differ diff --git a/img/position.PNG b/img/position.PNG new file mode 100644 index 0000000..d3c959b Binary files /dev/null and b/img/position.PNG differ diff --git a/img/surface_normal.PNG b/img/surface_normal.PNG new file mode 100644 index 0000000..df1616f Binary files /dev/null and b/img/surface_normal.PNG differ diff --git a/img/thumb.png b/img/thumb.png index 9ec8ed0..7690493 100644 Binary files a/img/thumb.png and b/img/thumb.png differ diff --git a/img/toon_shading.PNG b/img/toon_shading.PNG new file mode 100644 index 0000000..45bef0a Binary files /dev/null and b/img/toon_shading.PNG differ diff --git a/img/toon_shading_ramp.PNG b/img/toon_shading_ramp.PNG new file mode 100644 index 0000000..a0fe84a Binary files /dev/null and b/img/toon_shading_ramp.PNG differ diff --git a/js/deferredRender.js b/js/deferredRender.js index b1f238b..dd90dc5 100644 --- a/js/deferredRender.js +++ b/js/deferredRender.js @@ -10,7 +10,9 @@ !R.prog_Ambient || !R.prog_BlinnPhong_PointLight || !R.prog_Debug || - !R.progPost1)) { + !R.progPost1 || + !R.progPost2 || + !R.progRedSphere)) { 
console.log('waiting for programs to load...'); return; } @@ -23,18 +25,6 @@ R.lights[i].pos[1] = (R.lights[i].pos[1] + R.light_dt - mn + mx) % mx + mn; } - // Execute deferred shading pipeline - - // CHECKITOUT: START HERE! You can even uncomment this: - //debugger; - - { // TODO: this block should be removed after testing renderFullScreenQuad - gl.bindFramebuffer(gl.FRAMEBUFFER, null); - // TODO: Implement/test renderFullScreenQuad first - renderFullScreenQuad(R.progRed); - return; - } - R.pass_copy.render(state); if (cfg && cfg.debugView >= 0) { @@ -43,10 +33,9 @@ R.pass_debug.render(state); } else { // * Deferred pass and postprocessing pass(es) - // TODO: uncomment these - //R.pass_deferred.render(state); - //R.pass_post1.render(state); - + R.pass_deferred.render(state); + R.pass_post1.render(state); + R.pass_post2.render(state); // OPTIONAL TODO: call more postprocessing passes, if any } }; @@ -56,22 +45,22 @@ */ R.pass_copy.render = function(state) { // * Bind the framebuffer R.pass_copy.fbo - // TODO: ^ + gl.bindFramebuffer(gl.FRAMEBUFFER, R.pass_copy.fbo); // * Clear screen using R.progClear - TODO: renderFullScreenQuad(R.progClear); + renderFullScreenQuad(R.progClear); + // * Clear depth buffer to value 1.0 using gl.clearDepth and gl.clear - // TODO: ^ - // TODO: ^ + gl.clearDepth(1.0); + gl.clear(gl.DEPTH_BUFFER_BIT); // * "Use" the program R.progCopy.prog - // TODO: ^ - // TODO: Write glsl/copy.frag.glsl + gl.useProgram(R.progCopy.prog); var m = state.cameraMat.elements; // * Upload the camera matrix m to the uniform R.progCopy.u_cameraMat // using gl.uniformMatrix4fv - // TODO: ^ + gl.uniformMatrix4fv(R.progCopy.u_cameraMat, gl.FALSE, m); // * Draw the scene drawScene(state); @@ -114,24 +103,102 @@ gl.clearDepth(1.0); gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT); - // * _ADD_ together the result of each lighting pass - // Enable blending and use gl.blendFunc to blend with: // color = 1 * src_color + 1 * dst_color - // TODO: ^ + gl.enable(gl.BLEND); + if (cfg && (cfg.debugScissor || cfg.debugSphere) ){ + gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA); + } else { + gl.blendFunc(gl.ONE, gl.ONE); + } // * Bind/setup the ambient pass, and render using fullscreen quad bindTexturesForLightPass(R.prog_Ambient); renderFullScreenQuad(R.prog_Ambient); // * Bind/setup the Blinn-Phong pass, and render using fullscreen quad - bindTexturesForLightPass(R.prog_BlinnPhong_PointLight); - - // TODO: add a loop here, over the values in R.lights, which sets the + if (cfg && cfg.enableSphere){ + bindTexturesForLightPass(R.prog_BlinnPhong_PointLightSphere); + } else { + bindTexturesForLightPass(R.prog_BlinnPhong_PointLight); + } + + // loop here, over the values in R.lights, which sets the // uniforms R.prog_BlinnPhong_PointLight.u_lightPos/Col/Rad etc., // then does renderFullScreenQuad(R.prog_BlinnPhong_PointLight). 
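+        // Two paths below: with the scissor optimization, each light renders a
+        // full-screen quad clipped to its screen-space rectangle; with the
+        // sphere optimization, each light instead rasterizes proxy sphere
+        // geometry scaled and translated to the light. The debug toggles draw
+        // the scissor rects / proxy spheres in translucent red instead of
+        // shading them.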
+ var cameraPos = [state.cameraPos.x, state.cameraPos.y, state.cameraPos.z]; + var settings = [cfg.enableToonShading, cfg.enableRampShading, 0, 0]; + + if (cfg && !cfg.enableSphere){ + gl.enable(gl.SCISSOR_TEST); + } + + for(var i = 0; i < R.lights.length; i++){ + // Scissor Testing + if (cfg && !cfg.enableSphere){ + var sc = getScissorForLight(state.viewMat, state.projMat, R.lights[i]); + if (sc == null || sc[2] < 0 || sc[3] < 0){ + continue; + } + + gl.scissor(sc[0],sc[1],sc[2],sc[3]); + + if (cfg && cfg.debugScissor){ + renderFullScreenQuad(R.progRed); + continue; + } + + gl.uniform3fv(R.prog_BlinnPhong_PointLight.u_cameraPos, cameraPos); + gl.uniform4fv(R.prog_BlinnPhong_PointLight.u_settings, settings); + + gl.uniform1f(R.prog_BlinnPhong_PointLight.u_camera_width, width); + gl.uniform1f(R.prog_BlinnPhong_PointLight.u_camera_height, height); + + gl.uniform3fv(R.prog_BlinnPhong_PointLight.u_lightPos, R.lights[i].pos); + gl.uniform3fv(R.prog_BlinnPhong_PointLight.u_lightCol, R.lights[i].col); + gl.uniform1f(R.prog_BlinnPhong_PointLight.u_lightRad, R.lights[i].rad); + + renderFullScreenQuad(R.prog_BlinnPhong_PointLight); + + // Sphere Testing + } else { + + gl.uniform3fv(R.prog_BlinnPhong_PointLightSphere.u_cameraPos, cameraPos); + gl.uniform4fv(R.prog_BlinnPhong_PointLightSphere.u_settings, settings); + + gl.uniform1f(R.prog_BlinnPhong_PointLightSphere.u_camera_width, width); + gl.uniform1f(R.prog_BlinnPhong_PointLightSphere.u_camera_height, height); + + gl.uniform3fv(R.prog_BlinnPhong_PointLightSphere.u_lightPos, R.lights[i].pos); + gl.uniform3fv(R.prog_BlinnPhong_PointLightSphere.u_lightCol, R.lights[i].col); + gl.uniform1f(R.prog_BlinnPhong_PointLightSphere.u_lightRad, R.lights[i].rad); + + var lightsTrans = R.lights[i].pos; + lightsTrans = lightsTrans.concat(R.lights[i].rad); + + gl.uniform4fv(R.prog_BlinnPhong_PointLightSphere.u_lightTrans, lightsTrans); + gl.uniformMatrix4fv(R.prog_BlinnPhong_PointLightSphere.u_cameraMat, gl.FALSE, state.cameraMat.elements); + + if (cfg && cfg.debugSphere){ + gl.uniform4fv(R.progRedSphere.u_lightTrans, lightsTrans); + gl.uniformMatrix4fv(R.progRedSphere.u_cameraMat, gl.FALSE, state.cameraMat.elements); + + readyModelForDraw(R.progRedSphere, R.sphereModel); + drawReadyModel(R.sphereModel); + } else { + gl.uniform4fv(R.prog_BlinnPhong_PointLightSphere.u_lightTrans, lightsTrans); + gl.uniformMatrix4fv(R.prog_BlinnPhong_PointLightSphere.u_cameraMat, gl.FALSE, state.cameraMat.elements); + + readyModelForDraw(R.prog_BlinnPhong_PointLightSphere, R.sphereModel); + drawReadyModel(R.sphereModel); + } + } + } - // TODO: In the lighting loop, use the scissor test optimization + if (cfg && !cfg.enableSphere){ + gl.disable(gl.SCISSOR_TEST); + } + // In the lighting loop, use the scissor test optimization // Enable gl.SCISSOR_TEST, render all lights, then disable it. // // getScissorForLight returns null if the scissor is off the screen. 
@@ -163,7 +230,9 @@ */ R.pass_post1.render = function(state) { // * Unbind any existing framebuffer (if there are no more passes) - gl.bindFramebuffer(gl.FRAMEBUFFER, null); + //gl.bindFramebuffer(gl.FRAMEBUFFER, null); + gl.bindFramebuffer(gl.FRAMEBUFFER, R.pass_post1.fbo); + //gl.bindFramebuffer(gl.FRAMEBUFFER, null); // * Clear the framebuffer depth to 1.0 gl.clearDepth(1.0); @@ -174,16 +243,60 @@ // * Bind the deferred pass's color output as a texture input // Set gl.TEXTURE0 as the gl.activeTexture unit - // TODO: ^ + gl.activeTexture(gl.TEXTURE0); + // Bind the TEXTURE_2D, R.pass_deferred.colorTex to the active texture unit - // TODO: ^ + gl.bindTexture(gl.TEXTURE_2D, R.pass_deferred.colorTex); + // Configure the R.progPost1.u_color uniform to point at texture unit 0 + gl.uniform1f(R.progPost1.u_width, width); + gl.uniform1f(R.progPost1.u_height, height); gl.uniform1i(R.progPost1.u_color, 0); + gl.uniform4fv(R.progPost1.u_settings, [cfg.enableBloom, cfg.enablePost2, 0.0, 0.0]); + gl.uniform1fv(R.progPost1.u_kernel, new Float32Array([0.01,0.02,0.03,0.02,0.01])); + if(cfg && cfg.enableBloom){ + gl.uniform1fv(R.progPost1.u_block_kernel, new Float32Array([0.01,0.01,0.01,0.01,0.01, + 0.01,0.01,0.01,0.01,0.01, + 0.01,0.01,0.01,0.01,0.01, + 0.01,0.01,0.01,0.01,0.01, + 0.01,0.01,0.01,0.01,0.01])); + } // * Render a fullscreen quad to perform shading on renderFullScreenQuad(R.progPost1); }; + /** + * 'post1' pass: Perform (second) pass of post-processing + */ + R.pass_post2.render = function(state) { + // * Unbind any existing framebuffer (if there are no more passes) + gl.bindFramebuffer(gl.FRAMEBUFFER, null); + + // * Clear the framebuffer depth to 1.0 + gl.clearDepth(1.0); + gl.clear(gl.DEPTH_BUFFER_BIT); + + // * Bind the postprocessing shader program + gl.useProgram(R.progPost2.prog); + + // * Bind the deferred pass's color output as a texture input + // Set gl.TEXTURE0 as the gl.activeTexture unit + gl.activeTexture(gl.TEXTURE0); + + // Bind the TEXTURE_2D, R.pass_deferred.colorTex to the active texture unit + //gl.bindTexture(gl.TEXTURE_2D, R.pass_deferred.colorTex); + gl.bindTexture(gl.TEXTURE_2D, R.pass_post1.colorTex); + + // Configure the R.progPost1.u_color uniform to point at texture unit 0 + gl.uniform1i(R.progPost2.u_color, 0); + gl.uniform4fv(R.progPost2.u_settings, [cfg.enableBloom, cfg.enablePost2, 0.0, 0.0]); + gl.uniform1fv(R.progPost2.u_kernel, new Float32Array([0.01,0.02,0.03,0.02,0.01])); + + // * Render a fullscreen quad to perform shading on + renderFullScreenQuad(R.progPost2); + }; + var renderFullScreenQuad = (function() { // The variables in this function are private to the implementation of // renderFullScreenQuad. They work like static local variables in C++. @@ -204,13 +317,12 @@ var init = function() { // Create a new buffer with gl.createBuffer, and save it as vbo. - // TODO: ^ - + vbo = gl.createBuffer(); // Bind the VBO as the gl.ARRAY_BUFFER - // TODO: ^ + gl.bindBuffer(gl.ARRAY_BUFFER, vbo); // Upload the positions array to the currently-bound array buffer // using gl.bufferData in static draw mode. 
- // TODO: ^ + gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW); }; return function(prog) { @@ -223,16 +335,18 @@ gl.useProgram(prog.prog); // Bind the VBO as the gl.ARRAY_BUFFER - // TODO: ^ + gl.bindBuffer(gl.ARRAY_BUFFER, vbo); + // Enable the bound buffer as the vertex attrib array for // prog.a_position, using gl.enableVertexAttribArray - // TODO: ^ + gl.enableVertexAttribArray(prog.a_position); + // Use gl.vertexAttribPointer to tell WebGL the type/layout for // prog.a_position's access pattern. - // TODO: ^ + gl.vertexAttribPointer(prog.a_position, 3, gl.FLOAT, false, 0, 0); // Use gl.drawArrays (or gl.drawElements) to draw your quad. - // TODO: ^ + gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4); // Unbind the array buffer. gl.bindBuffer(gl.ARRAY_BUFFER, null); diff --git a/js/deferredSetup.js b/js/deferredSetup.js index 65136e0..ebaf12f 100644 --- a/js/deferredSetup.js +++ b/js/deferredSetup.js @@ -6,9 +6,12 @@ R.pass_debug = {}; R.pass_deferred = {}; R.pass_post1 = {}; + R.pass_post2 = {}; R.lights = []; - R.NUM_GBUFFERS = 4; + R.NUM_GBUFFERS = 3; + + R.useSphereOptimization = true; /** * Set up the deferred pipeline framebuffer objects and textures. @@ -18,6 +21,7 @@ loadAllShaderPrograms(); R.pass_copy.setup(); R.pass_deferred.setup(); + R.pass_post1.setup(); }; // TODO: Edit if you want to change the light initial positions @@ -25,7 +29,7 @@ R.light_max = [14, 18, 6]; R.light_dt = -0.03; R.LIGHT_RADIUS = 4.0; - R.NUM_LIGHTS = 20; // TODO: test with MORE lights! + R.NUM_LIGHTS = 50; // TODO: test with MORE lights! var setupLights = function() { Math.seedrandom(0); @@ -98,6 +102,25 @@ gl.bindFramebuffer(gl.FRAMEBUFFER, null); }; + /** + * Create/configure framebuffer between "post1" and "post2" stages + */ + R.pass_post1.setup = function() { + // * Create the FBO + R.pass_post1.fbo = gl.createFramebuffer(); + // * Create, bind, and store a single color target texture for the FBO + R.pass_post1.colorTex = createAndBindColorTargetTexture( + R.pass_post1.fbo, gl_draw_buffers.COLOR_ATTACHMENT0_WEBGL); + + // * Check for framebuffer errors + abortIfFramebufferIncomplete(R.pass_post1.fbo); + // * Tell the WEBGL_draw_buffers extension which FBO attachments are + // being used. (This extension allows for multiple render targets.) + gl_draw_buffers.drawBuffersWEBGL([gl_draw_buffers.COLOR_ATTACHMENT0_WEBGL]); + + gl.bindFramebuffer(gl.FRAMEBUFFER, null); + }; + /** * Loads all of the shader programs used in the pipeline. 
*/ @@ -125,6 +148,16 @@ R.progRed = { prog: prog }; }); + loadShaderProgram(gl, 'glsl/sphere.vert.glsl', 'glsl/red.frag.glsl', + function(prog){ + var p = { prog: prog }; + + p.u_cameraMat = gl.getUniformLocation(prog, 'u_cameraMat'); + p.u_lightTrans = gl.getUniformLocation(prog, 'u_lightTrans'); + + R.progRedSphere = p; + }); + loadShaderProgram(gl, 'glsl/quad.vert.glsl', 'glsl/clear.frag.glsl', function(prog) { // Create an object to hold info about this shader program @@ -136,29 +169,84 @@ R.prog_Ambient = p; }); + loadPostProgram('one', function(p) { + p.u_width = gl.getUniformLocation(p.prog, 'u_width'); + p.u_height = gl.getUniformLocation(p.prog, 'u_height'); + p.u_color = gl.getUniformLocation(p.prog, 'u_color'); + p.u_settings = gl.getUniformLocation(p.prog, 'u_settings'); + p.u_kernel = gl.getUniformLocation(p.prog, 'u_kernel'); + p.u_block_kernel = gl.getUniformLocation(p.prog, 'u_block_kernel'); + // Save the object into this variable for access later + R.progPost1 = p; + }); + + loadPostProgram('two', function(p) { + p.u_color = gl.getUniformLocation(p.prog, 'u_color'); + p.u_settings = gl.getUniformLocation(p.prog, 'u_settings'); + p.u_kernel = gl.getUniformLocation(p.prog, 'u_kernel'); + + R.progPost2 = p; + }); + + loadDeferredSphereProgram('blinnphong-pointlight', function(p) { + // Save the object into this variable for access later + p.u_cameraPos = gl.getUniformLocation(p.prog, 'u_cameraPos'); + p.u_settings = gl.getUniformLocation(p.prog, 'u_settings'); + p.u_camera_width = gl.getUniformLocation(p.prog, 'u_camera_width'); + p.u_camera_height = gl.getUniformLocation(p.prog, 'u_camera_height'); + p.u_lightPos = gl.getUniformLocation(p.prog, 'u_lightPos'); + p.u_lightCol = gl.getUniformLocation(p.prog, 'u_lightCol'); + p.u_lightRad = gl.getUniformLocation(p.prog, 'u_lightRad'); + + p.u_cameraMat = gl.getUniformLocation(p.prog, 'u_cameraMat'); + p.u_lightTrans = gl.getUniformLocation(p.prog, 'u_lightTrans'); + + R.prog_BlinnPhong_PointLightSphere = p; + }); + loadDeferredProgram('blinnphong-pointlight', function(p) { // Save the object into this variable for access later + p.u_cameraPos = gl.getUniformLocation(p.prog, 'u_cameraPos'); + p.u_settings = gl.getUniformLocation(p.prog, 'u_settings'); + p.u_camera_width = gl.getUniformLocation(p.prog, 'u_camera_width'); + p.u_camera_height = gl.getUniformLocation(p.prog, 'u_camera_height'); p.u_lightPos = gl.getUniformLocation(p.prog, 'u_lightPos'); p.u_lightCol = gl.getUniformLocation(p.prog, 'u_lightCol'); p.u_lightRad = gl.getUniformLocation(p.prog, 'u_lightRad'); + R.prog_BlinnPhong_PointLight = p; }); - + loadDeferredProgram('debug', function(p) { p.u_debug = gl.getUniformLocation(p.prog, 'u_debug'); // Save the object into this variable for access later R.prog_Debug = p; }); - loadPostProgram('one', function(p) { - p.u_color = gl.getUniformLocation(p.prog, 'u_color'); - // Save the object into this variable for access later - R.progPost1 = p; - }); - // TODO: If you add more passes, load and set up their shader programs. 
}; + + var loadDeferredSphereProgram = function(name, callback){ + loadShaderProgram(gl, 'glsl/sphere.vert.glsl', + 'glsl/deferred/' + name + '.frag.glsl', + function(prog) { + // Create an object to hold info about this shader program + var p = { prog: prog }; + + // Retrieve the uniform and attribute locations + p.u_gbufs = []; + for (var i = 0; i < R.NUM_GBUFFERS; i++) { + p.u_gbufs[i] = gl.getUniformLocation(prog, 'u_gbufs[' + i + ']'); + } + p.u_depth = gl.getUniformLocation(prog, 'u_depth'); + + p.a_position = gl.getAttribLocation(prog, 'a_position'); + + callback(p); + }); + }; + var loadDeferredProgram = function(name, callback) { loadShaderProgram(gl, 'glsl/quad.vert.glsl', 'glsl/deferred/' + name + '.frag.glsl', @@ -178,6 +266,7 @@ }); }; + var loadPostProgram = function(name, callback) { loadShaderProgram(gl, 'glsl/quad.vert.glsl', 'glsl/post/' + name + '.frag.glsl', diff --git a/js/framework.js b/js/framework.js index 7c008df..9b3dc15 100644 --- a/js/framework.js +++ b/js/framework.js @@ -76,6 +76,9 @@ var width, height; }); gl = renderer.context; + // TODO: For performance measurements, disable debug mode! + var debugMode = false; + if (debugMode) { $('#dlbutton button').attr('disabled', false); $('#debugmodewarning').css('display', 'block'); diff --git a/js/ui.js b/js/ui.js index 05c1852..0b4bd51 100644 --- a/js/ui.js +++ b/js/ui.js @@ -6,8 +6,14 @@ var cfg; var Cfg = function() { // TODO: Define config fields and defaults here this.debugView = -1; + this.enableSphere = true; + this.debugSphere = false; this.debugScissor = false; - this.enableEffect0 = false; + this.enableToonShading = false; + this.enableRampShading = false; + this.enableBloom = false; + this.enableSphere = false; + this.enablePost2 = false; }; var init = function() { @@ -24,10 +30,15 @@ var cfg; '4 Normal map': 4, '5 Surface normal': 5 }); + gui.add(cfg, 'enableSphere'); gui.add(cfg, 'debugScissor'); + gui.add(cfg, 'debugSphere'); - var eff0 = gui.addFolder('EFFECT NAME HERE'); - eff0.add(cfg, 'enableEffect0'); + var eff0 = gui.addFolder('Effects'); + eff0.add(cfg, 'enableToonShading'); + eff0.add(cfg, 'enableRampShading'); + eff0.add(cfg, 'enableBloom'); + eff0.add(cfg, 'enablePost2'); // TODO: add more effects toggles and parameters here };