diff --git a/README.md b/README.md index da4c7e1..b11074e 100644 --- a/README.md +++ b/README.md @@ -1,58 +1,26 @@ ------------------------------------------------------------------------------ CIS565: Project 6 -- Deferred Shader ------------------------------------------------------------------------------- -Fall 2014 -------------------------------------------------------------------------------- -Due Wed, 11/12/2014 at Noon -------------------------------------------------------------------------------- - -------------------------------------------------------------------------------- -NOTE: -------------------------------------------------------------------------------- -This project requires any graphics card with support for a modern OpenGL -pipeline. Any AMD, NVIDIA, or Intel card from the past few years should work -fine, and every machine in the SIG Lab and Moore 100 is capable of running -this project. - -This project also requires a WebGL capable browser. The project is known to -have issues with Chrome on windows, but Firefox seems to run it fine. -------------------------------------------------------------------------------- INTRODUCTION: ------------------------------------------------------------------------------- +In this project, I wrote a basic deferred shading with GLSL and OpenGL. In this deferred lighting pipeline, I implemented some simple effects, including Diffuse and Blinn-Phong shading, Bloom, "Toon" shading, and Screen Space Ambient Occlusion. -In this project, you will get introduced to the basics of deferred shading. You will write GLSL and OpenGL code to perform various tasks in a deferred lighting pipeline such as creating and writing to a G-Buffer. +![ScreenShot](https://github.com/liying3/Project6-DeferredShader/blob/master/results/p_bloom1(fps%201).PNG) ------------------------------------------------------------------------------- CONTENTS: ------------------------------------------------------------------------------- -The Project5 root directory contains the following subdirectories: +The root directory contains the following subdirectories: * js/ contains the javascript files, including external libraries, necessary. * assets/ contains the textures that will be used in the second half of the assignment. * resources/ contains the screenshots found in this readme file. - This Readme file edited as described above in the README section. - ------------------------------------------------------------------------------- -OVERVIEW: +Control: ------------------------------------------------------------------------------- -The deferred shader you will write will have the following stages: - -Stage 1 renders the scene geometry to the G-Buffer -* pass.vert -* pass.frag - -Stage 2 renders the lighting passes and accumulates to the P-Buffer -* quad.vert -* diffuse.frag -* diagnostic.frag - -Stage 3 renders the post processing -* post.vert -* post.frag - The keyboard controls are as follows: WASDRF - Movement (along w the arrow keys) * W - Zoom in @@ -65,156 +33,114 @@ WASDRF - Movement (along w the arrow keys) * v - Down * < - Left * > - Right + +Effect switch control: * 1 - World Space Position * 2 - Normals * 3 - Color * 4 - Depth +* 5 - Bloom (one-pass) +* 6 - Bloom (two-pass) +* 7 - "Toon" Shading +* 8 - Screen Space Ambient Occlusion +* 9 - Diffuse and Blinn-Phong shading * 0 - Full deferred pipeline There are also mouse controls for camera rotation. 
------------------------------------------------------------------------------- -REQUIREMENTS: +Basic Features: ------------------------------------------------------------------------------- +I've implemented the following basic features: +* Diffuse and Blinn-Phong shading -In this project, you are given code for: -* Loading .obj file -* Deferred shading pipeline -* GBuffer pass +Diffuse and Blinn-Phong shading are straightforward: the vertex normal, light position, and camera position are passed to the fragment shader and used to calculate the shaded color. -You are required to implement: -* Either of the following effects - * Bloom - * "Toon" Shading (with basic silhouetting) -* Screen Space Ambient Occlusion -* Diffuse and Blinn-Phong shading +Below are the per-vertex normals of suzanne.obj: -**NOTE**: Implementing separable convolution will require another link in your pipeline and will count as an extra feature if you do performance analysis with a standard one-pass 2D convolution. The overhead of rendering and reading from a texture _may_ offset the extra computations for smaller 2D kernels. +![ScreenShot](https://github.com/liying3/Project6-DeferredShader/blob/master/results/s_normal(fps%2060).PNG) -You must implement two of the following extras: -* The effect you did not choose above -* Compare performance to a normal forward renderer with - * No optimizations - * Coarse sort geometry front-to-back for early-z - * Z-prepass for early-z -* Optimize g-buffer format, e.g., pack things together, quantize, reconstruct z from normal x and y (because it is normalized), etc. - * Must be accompanied with a performance analysis to count -* Additional lighting and pre/post processing effects! (email first please, if they are good you may add multiple). +Diffuse and Blinn-Phong shading: -------------------------------------------------------------------------------- -RUNNING THE CODE: -------------------------------------------------------------------------------- +![ScreenShot](https://github.com/liying3/Project6-DeferredShader/blob/master/results/s_diffuse.PNG) -Since the code attempts to access files that are local to your computer, you -will either need to: +* Bloom (one-pass 2D convolution and two-pass separable convolution) +http://http.developer.nvidia.com/GPUGems/gpugems_ch21.html -* Run your browser under modified security settings, or -* Create a simple local server that serves the files +First, a 3x3 Sobel operator extracts the edges of the model, which serve as the glowing part. Then an 11x11 blur filter convolves the glowing part; lastly, the blurred glow is added back onto the original image. +I also implemented a two-pass separable convolution, adding another post-processing fragment shader (post2.frag) to handle the second pass. The two-pass convolution is more efficient than the one-pass 2D convolution; a sketch of the general two-pass idea is shown below.
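To illustrate the separable idea, here is a minimal sketch of a single blur pass. It is not the exact shader in this repo (post.frag and post2.frag below combine the blur with a Sobel-based glow mask), and the uniform name `u_inputTex` and the flat box weights are placeholders. The same shader structure runs twice: once stepping horizontally over the shaded scene, once stepping vertically over the intermediate result.

```glsl
// Sketch of one pass of a two-pass (separable) blur for WebGL / GLSL ES 1.0.
// Pass 1 samples along x and writes to an intermediate texture; pass 2 reads
// that texture and samples along y. Names like u_inputTex are illustrative.
precision highp float;

uniform sampler2D u_inputTex;   // shaded scene (pass 1) or intermediate blur (pass 2)
uniform int u_width;            // viewport width in pixels
uniform int u_height;           // viewport height in pixels

varying vec2 v_texcoord;

void main() {
    vec3 sum = vec3(0.0);
    // 11-tap blur along one axis; use vec2(0.0, float(i) / float(u_height)) in pass 2
    for (int i = -5; i <= 5; i++) {
        vec2 offset = vec2(float(i) / float(u_width), 0.0);
        sum += texture2D(u_inputTex, v_texcoord + offset).rgb;
    }
    gl_FragColor = vec4(sum / 11.0, 1.0);
}
```

Splitting an NxN kernel into an N-tap horizontal pass plus an N-tap vertical pass reduces the texture reads per pixel from N*N to 2N, which is why the two-pass version wins as the kernel grows, despite the cost of the extra render pass.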
-FIREFOX: change ``strict_origin_policy`` to false in about:config +Bloom (5x5 blur filter): -CHROME: run with the following argument : `--allow-file-access-from-files` +![ScreenShot](https://github.com/liying3/Project6-DeferredShader/blob/master/results/s_bloom(r%3D2).PNG) -(You can do this on OSX by running Chrome from /Applications/Google -Chrome/Contents/MacOS with `open -a "Google Chrome" --args ---allow-file-access-from-files`) +Bloom (11x11 blur filter): -* To check if you have set the flag properly, you can open chrome://version and - check under the flags +![ScreenShot](https://github.com/liying3/Project6-DeferredShader/blob/master/results/s_bloom1.PNG) -RUNNING A SIMPLE SERVER: -If you have Python installed, you can simply run a simple HTTP server off your -machine from the root directory of this repository with the following command: +* "Toon" Shading (with basic silhouetting) -`python -m SimpleHTTPServer` +First, discretize the color based on the diffuse shading. Then, use a Sobel operator to extract the edges and darken them. Finally, combine the results of these two parts. -------------------------------------------------------------------------------- -RESOURCES: -------------------------------------------------------------------------------- +![ScreenShot](https://github.com/liying3/Project6-DeferredShader/blob/master/results/s_tooon.PNG) -The following are articles and resources that have been chosen to help give you -a sense of each of the effects: +* Screen Space Ambient Occlusion -* Bloom : [GPU Gems](http://http.developer.nvidia.com/GPUGems/gpugems_ch21.html) -* Screen Space Ambient Occlusion : [Floored - Article](http://floored.com/blog/2013/ssao-screen-space-ambient-occlusion.html) +I followed the algorithm in this article for SSAO: +http://john-chapman-graphics.blogspot.co.uk/2013/01/ssao-tutorial.html -------------------------------------------------------------------------------- -README -------------------------------------------------------------------------------- -All students must replace or augment the contents of this Readme.md in a clear -manner with the following: +First, generate a sample kernel together with a random noise vector, and orient the samples within a normal-oriented hemisphere. Then, project each sample point into screen space to get its coordinates in the depth buffer. Next, read sampleDepth out of the depth buffer: if it is in front of the sample position, the sample contributes to occlusion; if sampleDepth is behind the sample position, the sample doesn't contribute to the occlusion factor. A sketch of this sampling loop is shown below. -* A brief description of the project and the specific features you implemented. -* At least one screenshot of your project running. -* A 30 second or longer video of your project running. To create the video you - can use [Open Broadcaster Software](http://obsproject.com) -* A performance evaluation (described in detail below).
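For clarity, here is a minimal, hedged sketch of that SSAO sampling loop. The real shader is post.frag further down; `u_kernel` here is an assumed uniform holding CPU-generated hemisphere offsets, the per-pixel random rotation is omitted, and the depth comparison is done in window space.

```glsl
// Illustrative SSAO fragment shader, simplified from the approach described
// above; see post.frag in this repo for the actual implementation.
// u_kernel is an assumed uniform: offsets inside a unit hemisphere around +z.
precision highp float;

#define KERNEL_SIZE 16
#define RADIUS 0.02

uniform vec3 u_kernel[KERNEL_SIZE];
uniform sampler2D u_positionTex;
uniform sampler2D u_normalTex;
uniform sampler2D u_depthTex;
uniform mat4 u_mvp;

varying vec2 v_texcoord;

void main() {
    vec3 pos    = texture2D(u_positionTex, v_texcoord).xyz;
    vec3 normal = normalize(texture2D(u_normalTex, v_texcoord).xyz);

    // Build a basis so the hemisphere samples point along the surface normal
    // (the article rotates this basis with a per-pixel random vector; omitted here)
    vec3 tangent   = normalize(cross(normal, vec3(0.0, 1.0, 0.001)));
    vec3 bitangent = cross(normal, tangent);
    mat3 tbn       = mat3(tangent, bitangent, normal);

    float occlusion = 0.0;
    for (int i = 0; i < KERNEL_SIZE; i++) {
        // Offset the fragment position by a rotated kernel sample
        vec3 samplePos = pos + tbn * u_kernel[i] * RADIUS;

        // Project the sample into screen space to find where to read the depth buffer
        vec4 clipPos      = u_mvp * vec4(samplePos, 1.0);
        vec2 uv           = (clipPos.xy / clipPos.w) * 0.5 + 0.5;
        float sampleDepth = (clipPos.z / clipPos.w) * 0.5 + 0.5;

        // If the stored geometry is in front of the sample, the sample is occluded
        float sceneDepth = texture2D(u_depthTex, uv).r;
        occlusion += (sceneDepth < sampleDepth) ? 1.0 : 0.0;
    }

    occlusion = 1.0 - occlusion / float(KERNEL_SIZE);
    gl_FragColor = vec4(vec3(occlusion), 1.0);
}
```

The actual implementation in post.frag generates the kernel and noise with a GLSL rand() (credited in the third-party section below) and adds a range check so that distant geometry does not darken the fragment.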
+SSAO: + +![ScreenShot](https://github.com/liying3/Project6-DeferredShader/blob/master/results/s_occlusion.PNG) + +Final result with SSAO: + +![ScreenShot](https://github.com/liying3/Project6-DeferredShader/blob/master/results/s_ssao.PNG) + + +Another model: sponza.obj + +Normal: + +![ScreenShot](https://github.com/liying3/Project6-DeferredShader/blob/master/results/p_normal(fps%2025).PNG) + +Diffuse and Blinn-Phong shading: + +![ScreenShot](https://github.com/liying3/Project6-DeferredShader/blob/master/results/p_blinn.PNG) + +Bloom: + +![ScreenShot](https://github.com/liying3/Project6-DeferredShader/blob/master/results/p_bloom1(fps%201).PNG) + +"Toon" shading: + +![ScreenShot](https://github.com/liying3/Project6-DeferredShader/blob/master/results/p_toon(fps%201).PNG) ------------------------------------------------------------------------------- PERFORMANCE EVALUATION ------------------------------------------------------------------------------- -The performance evaluation is where you will investigate how to make your -program more efficient using the skills you've learned in class. You must have -performed at least one experiment on your code to investigate the positive or -negative effects on performance. +The following chart shows the FPS for each feature: + +![ScreenShot](https://github.com/liying3/Project6-DeferredShader/blob/master/results/FPS.JPG) -We encourage you to get creative with your tweaks. Consider places in your code -that could be considered bottlenecks and try to improve them. +I also compared the performance of the one-pass and two-pass bloom effects with different filter sizes. The following chart shows that two-pass bloom is more efficient than one-pass bloom. -Each student should provide no more than a one page summary of their -optimizations along with tables and or graphs to visually explain any -performance differences. +![ScreenShot](https://github.com/liying3/Project6-DeferredShader/blob/master/results/chart%20bloom.JPG) ------------------------------------------------------------------------------- THIRD PARTY CODE POLICY ------------------------------------------------------------------------------- -* Use of any third-party code must be approved by asking on the Google groups. - If it is approved, all students are welcome to use it. Generally, we approve - use of third-party code that is not a core part of the project. For example, - for the ray tracer, we would approve using a third-party library for loading - models, but would not approve copying and pasting a CUDA function for doing - refraction. -* Third-party code must be credited in README.md. -* Using third-party code without its approval, including using another - student's code, is an academic integrity violation, and will result in you - receiving an F for the semester. +* stats.js +A library for visualizing real-time FPS and frame timing. +https://github.com/mrdoob/stats.js/ +* random noise in GLSL: +http://byteblacksmith.com/improvements-to-the-canonical-one-liner-glsl-rand-for-opengl-es-2-0/ ------------------------------------------------------------------------------- -SELF-GRADING ------------------------------------------------------------------------------- -* On the submission date, email your grade, on a scale of 0 to 100, to Harmony, - harmoli+cis565@seas.upenn.edu, with a one paragraph explanation. Be concise and - realistic. Recall that we reserve 30 points as a sanity check to adjust your - grade. Your actual grade will be (0.7 * your grade) + (0.3 * our grade).
We - hope to only use this in extreme cases when your grade does not realistically - reflect your work - it is either too high or too low. In most cases, we plan - to give you the exact grade you suggest. -* Projects are not weighted evenly, e.g., Project 0 doesn't count as much as - the path tracer. We will determine the weighting at the end of the semester - based on the size of each project. - - ---- -SUBMISSION ---- -As with the previous projects, you should fork this project and work inside of -your fork. Upon completion, commit your finished project back to your fork, and -make a pull request to the master repository. You should include a README.md -file in the root directory detailing the following - -* A brief description of the project and specific features you implemented -* At least one screenshot of your project running. -* A link to a video of your project running. -* Instructions for building and running your project if they differ from the - base code. -* A performance writeup as detailed above. -* A list of all third-party code used. -* This Readme file edited as described above in the README section. - ---- ACKNOWLEDGEMENTS --- diff --git a/assets/deferred/diffuse.frag b/assets/deferred/diffuse.frag index ef0c5fc..4053c43 100644 --- a/assets/deferred/diffuse.frag +++ b/assets/deferred/diffuse.frag @@ -19,5 +19,6 @@ void main() { // Write a diffuse shader and a Blinn-Phong shader // NOTE : You may need to add your own normals to fulfill the second's requirements + gl_FragColor = vec4(texture2D(u_colorTex, v_texcoord).rgb, 1.0); } diff --git a/assets/shader/deferred/diffuse.frag b/assets/shader/deferred/diffuse.frag index ef0c5fc..e94aeed 100644 --- a/assets/shader/deferred/diffuse.frag +++ b/assets/shader/deferred/diffuse.frag @@ -7,8 +7,13 @@ uniform sampler2D u_depthTex; uniform float u_zFar; uniform float u_zNear; + uniform int u_displayType; +uniform vec3 u_lightCol; +uniform vec3 u_lightPos; +uniform vec3 u_eyePos; + varying vec2 v_texcoord; float linearizeDepth( float exp_depth, float near, float far ){ @@ -19,5 +24,16 @@ void main() { // Write a diffuse shader and a Blinn-Phong shader // NOTE : You may need to add your own normals to fulfill the second's requirements - gl_FragColor = vec4(texture2D(u_colorTex, v_texcoord).rgb, 1.0); -} + + vec3 lightDir = normalize(texture2D(u_positionTex, v_texcoord).rgb - u_lightPos); + float diffuseTerm = dot(-lightDir, normalize(texture2D(u_normalTex, v_texcoord).rgb)); + + vec3 viewDir = normalize(texture2D(u_positionTex, v_texcoord).rgb - u_eyePos); + vec3 refDir = normalize(reflect(lightDir, texture2D(u_normalTex, v_texcoord).rgb)); + float specTerm = clamp(dot(refDir, -viewDir), 0.0, 1.0); + + vec3 diff = u_lightCol * diffuseTerm * texture2D(u_colorTex, v_texcoord).rgb; + vec3 spec = u_lightCol * pow(specTerm, 100.0); + + gl_FragColor = min(vec4(0.5 * diff + 3.0 * spec, 1.0), vec4(1,1,1,1)); +} \ No newline at end of file diff --git a/assets/shader/deferred/normPass.frag b/assets/shader/deferred/normPass.frag index b41d6ed..819a8f8 100644 --- a/assets/shader/deferred/normPass.frag +++ b/assets/shader/deferred/normPass.frag @@ -3,5 +3,5 @@ precision highp float; varying vec3 v_normal; void main(void){ - gl_FragColor = vec4(v_normal, 1.0); + gl_FragColor = vec4(v_normal, 0.0); } diff --git a/assets/shader/deferred/post.frag b/assets/shader/deferred/post.frag index 52edda2..f12a789 100644 --- a/assets/shader/deferred/post.frag +++ b/assets/shader/deferred/post.frag @@ -1,17 +1,198 @@ precision highp float; +#define DISPLAY_BLOOM 
5 +#define DISPLAY_BLOOM2 6 +#define DISPLAY_TOON 7 +#define DISPLAY_AMBIENT_OCCU 8 +#define DISPLAY_DIFFUSE 9 +#define DISPLAY_AMBIENT 0 + +#define KernelSize 64 +#define Radius 0.02 +#define BlurSize 4 + uniform sampler2D u_shadeTex; +uniform sampler2D u_normalTex; +uniform sampler2D u_positionTex; +uniform sampler2D u_depthTex; +uniform sampler2D u_colorTex; + +uniform mat4 u_mvp; + +uniform float u_zFar; +uniform float u_zNear; varying vec2 v_texcoord; +uniform int u_displayType; + +uniform int u_width; +uniform int u_height; + +uniform vec3 u_lightCol; +uniform vec3 u_lightPos; +uniform vec3 u_eyePos; + float linearizeDepth( float exp_depth, float near, float far ){ return ( 2.0 * near ) / ( far + near - exp_depth * ( far - near ) ); } +float edgeDetect(vec2 texcoord) +{ + vec2 G = vec2(0,0); + vec3 Gx = vec3(0,0,0); + vec3 Gy = vec3(0,0,0); + for (int i = -1; i <= 1; i++) + { + for (int j = -1; j <= 1; j++) + { + vec3 c = texture2D(u_shadeTex, texcoord + vec2( float(i)/float(u_width), float(j)/float(u_height))).rgb; + Gx += float((i == 0) ? j + j : j) * c; + Gy += float((j == 0) ? -i - i : -i) * c; + } + } + return sqrt(dot(Gx, Gx) + dot(Gy, Gy)); +} + +highp float rand(vec2 co) +{ + highp float a = 12.9898; + highp float b = 78.233; + highp float c = 43758.5453; + highp float dt= dot(co.xy ,vec2(a,b)); + highp float sn= mod(dt,3.14); + return fract(sin(sn) * c); + + //return fract(sin(dot(co.xy ,vec2(12.9898,78.233))) * 43758.5453); +} void main() { // Currently acts as a pass filter that immmediately renders the shaded texture // Fill in post-processing as necessary HERE // NOTE : You may choose to use a key-controlled switch system to display one feature at a time - gl_FragColor = vec4(texture2D( u_shadeTex, v_texcoord).rgb, 1.0); -} + + vec3 color; + float occlusion; + + if( u_displayType == DISPLAY_BLOOM ) + { + vec3 normal = normalize(texture2D(u_normalTex, v_texcoord).rgb); + color = vec3(0,0,0); + float weight = 0.0; + for (int i = -5; i<= 5; i++) + { + for (int j = -5; j <= 5; j++) + { + vec2 texcoord = v_texcoord + vec2(float(i) / float(u_width), float(j) / float(u_height)); + float g = edgeDetect(texcoord); + if (g > 0.9) + { + weight += 1.0 - length(vec2(i,j)) / length(vec2(5,5)); + } + } + } + gl_FragColor = vec4(0.8*texture2D( u_shadeTex, v_texcoord).rgb+float(weight)/50.0*vec3(1.0, 1.0, 0.0), 1.0); + } + else if (u_displayType == DISPLAY_BLOOM2) + { + + vec3 normal = normalize(texture2D(u_normalTex, v_texcoord).rgb); + color = vec3(0,0,0); + float weight = 0.0; + for (int i = -5; i<= 5; i++) + { + vec2 texcoord = v_texcoord + vec2(float(i) / float(u_width)); + float g = edgeDetect(texcoord); + if (g > 0.9) + { + weight += 1.0 - abs(float(i)) / 5.0; + } + } + gl_FragColor = vec4(0.8*texture2D( u_shadeTex, v_texcoord).rgb+float(weight)/5.0*vec3(1.0, 1.0, 0.0), 1.0); + + } + else if (u_displayType == DISPLAY_TOON) + { + color = texture2D( u_shadeTex, v_texcoord).rgb; + + vec3 normal = normalize(texture2D(u_normalTex, v_texcoord).rgb); + vec3 lightDir = normalize(texture2D(u_positionTex, v_texcoord).rgb - u_lightPos); + float intensity = dot(-lightDir, normal); + + // Discretize color + if (intensity > 0.95) + color = vec3(1,1,1) * color; + else if (intensity > 0.5) + color = vec3(0.9,0.9,0.9) * color; + else if (intensity > 0.05) + color = vec3(0.35,0.35,0.35) * color; + else + color = vec3(0.2,0.2,0.2) * color; + + //silhouetting, sobel detect + float line = 0.3 * edgeDetect(v_texcoord); + + gl_FragColor = vec4(color-line, 1.0); + + } + else if (u_displayType == 
DISPLAY_AMBIENT_OCCU || u_displayType == DISPLAY_AMBIENT) + { + occlusion = 0.0; + + vec3 orgColor = texture2D(u_colorTex, v_texcoord).rgb; + if (orgColor.r == 1.0) + { + float depth = linearizeDepth( texture2D(u_depthTex, v_texcoord).r, u_zNear, u_zFar); + vec3 pos = vec3(texture2D(u_positionTex, v_texcoord).xy, depth); + + vec3 normal = normalize(texture2D(u_normalTex, v_texcoord).rgb); + + for(int i = 0; i < KernelSize; i++) + { + //generate kernel samples + vec3 kernel = normalize(vec3(rand(vec2(pos.x, float(i)*0.1357)), + rand(vec2(pos.y, float(i)*0.2468)), + (rand(vec2(pos.z, float(i)*0.1479)) + 1.0 ) / 2.0 )); + + float scale = float(i) / float(KernelSize); + scale = mix(0.1, 1.0, scale * scale); + kernel *= scale ; + + //random noise + vec3 rvec = vec3(rand(vec2(pos.x, float(i)*0.1234)), + rand(vec2(pos.y, float(i)*0.5678)), + 0.0) ; + + //orientation normal + vec3 tangent = normalize(rvec - normal * dot(rvec, normal)); + vec3 bitangent = cross(normal, tangent); + mat3 tbn = mat3(tangent, bitangent, normal); + + //occlusion factor + vec3 sample = tbn * kernel; + sample = vec3(sample.x, sample.y, -sample.z) * Radius + pos; + + vec4 offset = vec4(sample, 1); + offset = u_mvp * offset; + offset.xy /= offset.w; + offset.xy = offset.xy * 0.5 + 0.5; + + float sampleDepth = texture2D(u_depthTex, offset.xy ).r; + sampleDepth = linearizeDepth( sampleDepth, u_zNear, u_zFar ); + + float rangeCheck= abs(pos.z - sampleDepth) < Radius ? 1.0 : 0.0; + occlusion += (sampleDepth <= sample.z ? 1.0 : 0.0) * rangeCheck; + } + occlusion = 1.0 - occlusion/float(KernelSize); + } + + if (u_displayType == DISPLAY_AMBIENT) + color = texture2D( u_shadeTex, v_texcoord).rgb; + else + color = vec3(0.8,0.8,0.8); + + gl_FragColor = vec4(occlusion * color, 1.0); + } + else + gl_FragColor = vec4(texture2D( u_shadeTex, v_texcoord).rgb, 1.0); +} \ No newline at end of file diff --git a/assets/shader/deferred/post2.frag b/assets/shader/deferred/post2.frag new file mode 100644 index 0000000..9cccda4 --- /dev/null +++ b/assets/shader/deferred/post2.frag @@ -0,0 +1,55 @@ +precision highp float; + +#define DISPLAY_BLOOM2 6 + +uniform sampler2D u_shadeTex; +uniform sampler2D u_normalTex; + +varying vec2 v_texcoord; + +uniform int u_displayType; + +uniform int u_width; +uniform int u_height; + +float linearizeDepth( float exp_depth, float near, float far ){ + return ( 2.0 * near ) / ( far + near - exp_depth * ( far - near ) ); +} + +float edgeDetect(vec2 texcoord) +{ + vec2 G = vec2(0,0); + vec3 Gx = vec3(0,0,0); + vec3 Gy = vec3(0,0,0); + for (int i = -1; i <= 1; i++) + { + for (int j = -1; j <= 1; j++) + { + vec3 c = texture2D(u_shadeTex, texcoord + vec2( float(i)/float(u_width), float(j)/float(u_height))).rgb; + Gx += float((i == 0) ? j + j : j) * c; + Gy += float((j == 0) ? 
-i - i : -i) * c; + } + } + return sqrt(dot(Gx, Gx) + dot(Gy, Gy)); +} + +void main() +{ + if (u_displayType == DISPLAY_BLOOM2) + { + vec3 normal = normalize(texture2D(u_normalTex, v_texcoord).rgb); + vec3 color = vec3(0,0,0); + float weight = 0.0; + for (int j = -5; j<= 5; j++) + { + vec2 texcoord = v_texcoord + vec2(float(j) / float(u_height)); + float g = edgeDetect(texcoord); + if (g > 0.9) + { + weight += 1.0 - abs(float(j)) / 5.0; + } + } + gl_FragColor = vec4(0.8*texture2D( u_shadeTex, v_texcoord).rgb + float(weight)/5.0*vec3(1.0, 1.0, 0.0), 1.0); + } + +} \ No newline at end of file diff --git a/index.html b/index.html index dd0ffef..fecc3ed 100644 --- a/index.html +++ b/index.html @@ -14,6 +14,7 @@ + diff --git a/js/core/fbo-util.js b/js/core/fbo-util.js index 42abe4c..2a5a54d 100644 --- a/js/core/fbo-util.js +++ b/js/core/fbo-util.js @@ -111,7 +111,8 @@ CIS565WEBGLCORE.createFBO = function(){ gl.bindFramebuffer(gl.FRAMEBUFFER, null); fbo[FBO_PBUFFER] = gl.createFramebuffer(); - gl.bindFramebuffer(gl.FRAMEBUFFER, fbo[FBO_PBUFFER]); + gl.bindFramebuffer(gl.FRAMEBUFFER, fbo[FBO_PBUFFER]); + gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.TEXTURE_2D, depthTex, 0); gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, textures[4], 0); FBOstatus = gl.checkFramebufferStatus(gl.FRAMEBUFFER); @@ -124,7 +125,8 @@ CIS565WEBGLCORE.createFBO = function(){ // Set up GBuffer Normal fbo[FBO_GBUFFER_NORMAL] = gl.createFramebuffer(); - gl.bindFramebuffer(gl.FRAMEBUFFER, fbo[FBO_GBUFFER_NORMAL]); + gl.bindFramebuffer(gl.FRAMEBUFFER, fbo[FBO_GBUFFER_NORMAL]); + gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.TEXTURE_2D, depthTex, 0); gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, textures[1], 0); FBOstatus = gl.checkFramebufferStatus(gl.FRAMEBUFFER); @@ -138,6 +140,7 @@ CIS565WEBGLCORE.createFBO = function(){ // Set up GBuffer Color fbo[FBO_GBUFFER_COLOR] = gl.createFramebuffer(); gl.bindFramebuffer(gl.FRAMEBUFFER, fbo[FBO_GBUFFER_COLOR]); + gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.TEXTURE_2D, depthTex, 0); gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, textures[2], 0); FBOstatus = gl.checkFramebufferStatus(gl.FRAMEBUFFER); diff --git a/js/ext/stats.js b/js/ext/stats.js new file mode 100644 index 0000000..90b2a27 --- /dev/null +++ b/js/ext/stats.js @@ -0,0 +1,149 @@ +/** + * @author mrdoob / http://mrdoob.com/ + */ + +var Stats = function () { + + var startTime = Date.now(), prevTime = startTime; + var ms = 0, msMin = Infinity, msMax = 0; + var fps = 0, fpsMin = Infinity, fpsMax = 0; + var frames = 0, mode = 0; + + var container = document.createElement( 'div' ); + container.id = 'stats'; + container.addEventListener( 'mousedown', function ( event ) { event.preventDefault(); setMode( ++ mode % 2 ) }, false ); + container.style.cssText = 'width:80px;opacity:0.9;cursor:pointer'; + + var fpsDiv = document.createElement( 'div' ); + fpsDiv.id = 'fps'; + fpsDiv.style.cssText = 'padding:0 0 3px 3px;text-align:left;background-color:#002'; + container.appendChild( fpsDiv ); + + var fpsText = document.createElement( 'div' ); + fpsText.id = 'fpsText'; + fpsText.style.cssText = 'color:#0ff;font-family:Helvetica,Arial,sans-serif;font-size:9px;font-weight:bold;line-height:15px'; + fpsText.innerHTML = 'FPS'; + fpsDiv.appendChild( fpsText ); + + var fpsGraph = document.createElement( 'div' ); + fpsGraph.id = 'fpsGraph'; + fpsGraph.style.cssText = 
'position:relative;width:74px;height:30px;background-color:#0ff'; + fpsDiv.appendChild( fpsGraph ); + + while ( fpsGraph.children.length < 74 ) { + + var bar = document.createElement( 'span' ); + bar.style.cssText = 'width:1px;height:30px;float:left;background-color:#113'; + fpsGraph.appendChild( bar ); + + } + + var msDiv = document.createElement( 'div' ); + msDiv.id = 'ms'; + msDiv.style.cssText = 'padding:0 0 3px 3px;text-align:left;background-color:#020;display:none'; + container.appendChild( msDiv ); + + var msText = document.createElement( 'div' ); + msText.id = 'msText'; + msText.style.cssText = 'color:#0f0;font-family:Helvetica,Arial,sans-serif;font-size:9px;font-weight:bold;line-height:15px'; + msText.innerHTML = 'MS'; + msDiv.appendChild( msText ); + + var msGraph = document.createElement( 'div' ); + msGraph.id = 'msGraph'; + msGraph.style.cssText = 'position:relative;width:74px;height:30px;background-color:#0f0'; + msDiv.appendChild( msGraph ); + + while ( msGraph.children.length < 74 ) { + + var bar = document.createElement( 'span' ); + bar.style.cssText = 'width:1px;height:30px;float:left;background-color:#131'; + msGraph.appendChild( bar ); + + } + + var setMode = function ( value ) { + + mode = value; + + switch ( mode ) { + + case 0: + fpsDiv.style.display = 'block'; + msDiv.style.display = 'none'; + break; + case 1: + fpsDiv.style.display = 'none'; + msDiv.style.display = 'block'; + break; + } + + }; + + var updateGraph = function ( dom, value ) { + + var child = dom.appendChild( dom.firstChild ); + child.style.height = value + 'px'; + + }; + + return { + + REVISION: 12, + + domElement: container, + + setMode: setMode, + + begin: function () { + + startTime = Date.now(); + + }, + + end: function () { + + var time = Date.now(); + + ms = time - startTime; + msMin = Math.min( msMin, ms ); + msMax = Math.max( msMax, ms ); + + msText.textContent = ms + ' MS (' + msMin + '-' + msMax + ')'; + updateGraph( msGraph, Math.min( 30, 30 - ( ms / 200 ) * 30 ) ); + + frames ++; + + if ( time > prevTime + 1000 ) { + + fps = Math.round( ( frames * 1000 ) / ( time - prevTime ) ); + fpsMin = Math.min( fpsMin, fps ); + fpsMax = Math.max( fpsMax, fps ); + + fpsText.textContent = fps + ' FPS (' + fpsMin + '-' + fpsMax + ')'; + updateGraph( fpsGraph, Math.min( 30, 30 - ( fps / 100 ) * 30 ) ); + + prevTime = time; + frames = 0; + + } + + return time; + + }, + + update: function () { + + startTime = this.end(); + + } + + } + +}; + +if ( typeof module === 'object' ) { + + module.exports = Stats; + +} \ No newline at end of file diff --git a/js/main.js b/js/main.js index 4140ae1..40d2822 100644 --- a/js/main.js +++ b/js/main.js @@ -21,6 +21,7 @@ var passProg; // Shader program for G-Buffer var shadeProg; // Shader program for P-Buffer var diagProg; // Shader program from diagnostic var postProg; // Shader for post-process effects +var postProg2; // Multi-Pass programs var posProg; @@ -32,6 +33,21 @@ var zNear = 20; var zFar = 2000; var texToDisplay = 1; +var lightColor = [1.0, 1.0, 1.0]; +var lightPos = [0.0, 0.5, 0.5]; + +var eyePos = [0.0, 1.0, 1.0]; + +var stats = new Stats(); +stats.setMode(0); // 0: fps, 1: ms +stats.domElement.style.position = 'absolute'; +stats.domElement.style.right = '0px'; +stats.domElement.style.top = '108px'; +document.body.appendChild(stats.domElement ); +var renderloop = document.createElement('div'); +renderloop.innerHTML = 'render'; +stats.domElement.appendChild(renderloop); + var main = function (canvasId, messageId) { var canvas; @@ -58,11 +74,15 @@ var main = 
function (canvasId, messageId) { CIS565WEBGLCORE.run(gl); }; + var renderLoop = function () { window.requestAnimationFrame(renderLoop); + stats.begin(); render(); + stats.end(); }; + var render = function () { if (fbo.isMultipleTargets()) { renderPass(); @@ -73,6 +93,8 @@ var render = function () { if (!isDiagnostic) { renderShade(); renderPost(); + if (texToDisplay == 6) + renderPost2(); } else { renderDiagnostic(); } @@ -170,12 +192,14 @@ var renderMulti = function () { drawModel(posProg, 1); - gl.disable(gl.DEPTH_TEST); + //gl.disable(gl.DEPTH_TEST); fbo.unbind(gl); gl.useProgram(null); fbo.bind(gl, FBO_GBUFFER_NORMAL); - gl.clear(gl.COLOR_BUFFER_BIT); + //gl.clear(gl.COLOR_BUFFER_BIT); + gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT); + gl.enable(gl.DEPTH_TEST); gl.useProgram(normProg.ref()); @@ -193,7 +217,9 @@ var renderMulti = function () { fbo.unbind(gl); fbo.bind(gl, FBO_GBUFFER_COLOR); - gl.clear(gl.COLOR_BUFFER_BIT); + //gl.clear(gl.COLOR_BUFFER_BIT); + gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT); + gl.enable(gl.DEPTH_TEST); gl.useProgram(colorProg.ref()); @@ -215,25 +241,29 @@ var renderShade = function () { gl.clear(gl.COLOR_BUFFER_BIT); // Bind necessary textures - //gl.activeTexture( gl.TEXTURE0 ); //position - //gl.bindTexture( gl.TEXTURE_2D, fbo.texture(0) ); - //gl.uniform1i( shadeProg.uPosSamplerLoc, 0 ); + gl.activeTexture( gl.TEXTURE0 ); //position + gl.bindTexture( gl.TEXTURE_2D, fbo.texture(0) ); + gl.uniform1i( shadeProg.uPosSamplerLoc, 0 ); - //gl.activeTexture( gl.TEXTURE1 ); //normal - //gl.bindTexture( gl.TEXTURE_2D, fbo.texture(1) ); - //gl.uniform1i( shadeProg.uNormalSamplerLoc, 1 ); + gl.activeTexture( gl.TEXTURE1 ); //normal + gl.bindTexture( gl.TEXTURE_2D, fbo.texture(1) ); + gl.uniform1i( shadeProg.uNormalSamplerLoc, 1 ); gl.activeTexture( gl.TEXTURE2 ); //color gl.bindTexture( gl.TEXTURE_2D, fbo.texture(2) ); gl.uniform1i( shadeProg.uColorSamplerLoc, 2 ); - //gl.activeTexture( gl.TEXTURE3 ); //depth - //gl.bindTexture( gl.TEXTURE_2D, fbo.depthTexture() ); - //gl.uniform1i( shadeProg.uDepthSamplerLoc, 3 ); + gl.activeTexture( gl.TEXTURE3 ); //depth + gl.bindTexture( gl.TEXTURE_2D, fbo.depthTexture() ); + gl.uniform1i( shadeProg.uDepthSamplerLoc, 3 ); // Bind necessary uniforms - //gl.uniform1f( shadeProg.uZNearLoc, zNear ); - //gl.uniform1f( shadeProg.uZFarLoc, zFar ); + gl.uniform1f( shadeProg.uZNearLoc, zNear ); + gl.uniform1f( shadeProg.uZFarLoc, zFar ); + + gl.uniform3fv(shadeProg.uLightColLoc, lightColor); + gl.uniform3fv(shadeProg.uLightPosLoc, lightPos); + gl.uniform3fv(shadeProg.uEyePosLoc, eyePos); drawQuad(shadeProg); @@ -279,13 +309,68 @@ var renderPost = function () { gl.clear(gl.COLOR_BUFFER_BIT); // Bind necessary textures + gl.activeTexture( gl.TEXTURE0 ); //position + gl.bindTexture( gl.TEXTURE_2D, fbo.texture(0) ); + gl.uniform1i( postProg.uPosSamplerLoc, 0 ); + + gl.activeTexture( gl.TEXTURE1 ); //normal + gl.bindTexture( gl.TEXTURE_2D, fbo.texture(1) ); + gl.uniform1i( postProg.uNormalSamplerLoc, 1 ); + + gl.activeTexture( gl.TEXTURE2 ); //color + gl.bindTexture( gl.TEXTURE_2D, fbo.texture(2) ); + gl.uniform1i( postProg.uColorSamplerLoc, 2 ); + + gl.activeTexture( gl.TEXTURE3 ); //depth + gl.bindTexture( gl.TEXTURE_2D, fbo.depthTexture() ); + gl.uniform1i( postProg.uDepthSamplerLoc, 3 ); + gl.activeTexture( gl.TEXTURE4 ); gl.bindTexture( gl.TEXTURE_2D, fbo.texture(4) ); gl.uniform1i(postProg.uShadeSamplerLoc, 4 ); - + + gl.uniform1f( postProg.uZNearLoc, zNear ); + gl.uniform1f( postProg.uZFarLoc, zFar ); + + gl.uniform1i( 
postProg.uDisplayTypeLoc, texToDisplay ); + + gl.uniform1i(postProg.uWidthLoc, canvas.width); + gl.uniform1i(postProg.uHeightLoc, canvas.height); + + gl.uniform3fv(postProg.uLightColLoc, lightColor); + gl.uniform3fv(postProg.uLightPosLoc, lightPos); + gl.uniform3fv(postProg.uEyePosLoc, eyePos); + + var mvpMat = mat4.create(); + mat4.multiply( mvpMat, persp, camera.getViewTransform() ); + gl.uniformMatrix4fv( postProg.uMVPLoc, false, mvpMat ); + drawQuad(postProg); }; +var renderPost2 = function () { + gl.useProgram(postProg2.ref()); + + gl.disable(gl.DEPTH_TEST); + gl.clear(gl.COLOR_BUFFER_BIT); + + // Bind necessary textures + gl.activeTexture( gl.TEXTURE1 ); //normal + gl.bindTexture( gl.TEXTURE_2D, fbo.texture(1) ); + gl.uniform1i( postProg2.uNormalSamplerLoc, 1 ); + + gl.activeTexture( gl.TEXTURE4 ); + gl.bindTexture( gl.TEXTURE_2D, fbo.texture(4) ); + gl.uniform1i(postProg2.uShadeSamplerLoc, 4 ); + + gl.uniform1i( postProg2.uDisplayTypeLoc, texToDisplay ); + + gl.uniform1i(postProg2.uWidthLoc, canvas.width); + gl.uniform1i(postProg2.uHeightLoc, canvas.height); + + drawQuad(postProg2); +}; + var initGL = function (canvasId, messageId) { var msg; @@ -320,6 +405,7 @@ var initCamera = function () { switch(e.keyCode) { case 48: isDiagnostic = false; + texToDisplay = 0; break; case 49: isDiagnostic = true; @@ -337,6 +423,26 @@ var initCamera = function () { isDiagnostic = true; texToDisplay = 4; break; + case 53: + isDiagnostic = false; + texToDisplay = 5; //bloom + break; + case 54: + isDiagnostic = false; + texToDisplay = 6; //bloom2 + break; + case 55: + isDiagnostic = false; + texToDisplay = 7; //toon + break; + case 56: + isDiagnostic = false; + texToDisplay = 8; //SSAO + break; + case 57: + isDiagnostic = false; + texToDisplay = 9; //diffuse + break; } } }; @@ -466,6 +572,10 @@ var initShaders = function () { shadeProg.uZNearLoc = gl.getUniformLocation( shadeProg.ref(), "u_zNear" ); shadeProg.uZFarLoc = gl.getUniformLocation( shadeProg.ref(), "u_zFar" ); shadeProg.uDisplayTypeLoc = gl.getUniformLocation( shadeProg.ref(), "u_displayType" ); + + shadeProg.uLightColLoc = gl.getUniformLocation( shadeProg.ref(), "u_lightCol" ); + shadeProg.uLightPosLoc = gl.getUniformLocation( shadeProg.ref(), "u_lightPos" ); + shadeProg.uEyePosLoc = gl.getUniformLocation( shadeProg.ref(), "u_eyePos" ); }); CIS565WEBGLCORE.registerAsyncObj(gl, shadeProg); @@ -475,10 +585,43 @@ var initShaders = function () { postProg.addCallback( function() { postProg.aVertexPosLoc = gl.getAttribLocation( postProg.ref(), "a_pos" ); postProg.aVertexTexcoordLoc = gl.getAttribLocation( postProg.ref(), "a_texcoord" ); - + postProg.uPosSamplerLoc = gl.getUniformLocation( postProg.ref(), "u_positionTex"); + postProg.uNormalSamplerLoc = gl.getUniformLocation( postProg.ref(), "u_normalTex"); + postProg.uDepthSamplerLoc = gl.getUniformLocation( postProg.ref(), "u_depthTex"); postProg.uShadeSamplerLoc = gl.getUniformLocation( postProg.ref(), "u_shadeTex"); + postProg.uColorSamplerLoc = gl.getUniformLocation( postProg.ref(), "u_colorTex"); + + postProg.uZNearLoc = gl.getUniformLocation( postProg.ref(), "u_zNear" ); + postProg.uZFarLoc = gl.getUniformLocation( postProg.ref(), "u_zFar" ); + + postProg.uDisplayTypeLoc = gl.getUniformLocation( postProg.ref(), "u_displayType" ); + postProg.uWidthLoc = gl.getUniformLocation( postProg.ref(), "u_width" ); + postProg.uHeightLoc = gl.getUniformLocation( postProg.ref(), "u_height" ); + + postProg.uLightColLoc = gl.getUniformLocation( postProg.ref(), "u_lightCol" ); + postProg.uLightPosLoc = 
gl.getUniformLocation( postProg.ref(), "u_lightPos" ); + postProg.uEyePosLoc = gl.getUniformLocation( postProg.ref(), "u_eyePos" ); + + postProg.uMVPLoc = gl.getUniformLocation( postProg.ref(), "u_mvp" ); }); CIS565WEBGLCORE.registerAsyncObj(gl, postProg); + + + // Create shader program for post-process2 + postProg2 = CIS565WEBGLCORE.createShaderProgram(); + postProg2.loadShader(gl, "assets/shader/deferred/quad.vert", "assets/shader/deferred/post2.frag"); + postProg2.addCallback( function() { + postProg2.aVertexPosLoc = gl.getAttribLocation( postProg2.ref(), "a_pos" ); + postProg2.aVertexTexcoordLoc = gl.getAttribLocation( postProg2.ref(), "a_texcoord" ); + postProg2.uNormalSamplerLoc = gl.getUniformLocation( postProg2.ref(), "u_normalTex"); + postProg2.uShadeSamplerLoc = gl.getUniformLocation( postProg2.ref(), "u_shadeTex"); + + postProg2.uDisplayTypeLoc = gl.getUniformLocation( postProg2.ref(), "u_displayType" ); + postProg2.uWidthLoc = gl.getUniformLocation( postProg2.ref(), "u_width" ); + postProg2.uHeightLoc = gl.getUniformLocation( postProg2.ref(), "u_height" ); + }); + CIS565WEBGLCORE.registerAsyncObj(gl, postProg2); + }; var initFramebuffer = function () { diff --git a/results/FPS.JPG b/results/FPS.JPG new file mode 100644 index 0000000..d0b2aaa Binary files /dev/null and b/results/FPS.JPG differ diff --git a/results/chart bloom.JPG b/results/chart bloom.JPG new file mode 100644 index 0000000..2d70aff Binary files /dev/null and b/results/chart bloom.JPG differ diff --git a/results/p_blinn.PNG b/results/p_blinn.PNG new file mode 100644 index 0000000..a4509b0 Binary files /dev/null and b/results/p_blinn.PNG differ diff --git a/results/p_bloom1(fps 1).PNG b/results/p_bloom1(fps 1).PNG new file mode 100644 index 0000000..48c084c Binary files /dev/null and b/results/p_bloom1(fps 1).PNG differ diff --git a/results/p_normal(fps 25).PNG b/results/p_normal(fps 25).PNG new file mode 100644 index 0000000..fad49f5 Binary files /dev/null and b/results/p_normal(fps 25).PNG differ diff --git a/results/p_toon(fps 1).PNG b/results/p_toon(fps 1).PNG new file mode 100644 index 0000000..7a24130 Binary files /dev/null and b/results/p_toon(fps 1).PNG differ diff --git a/results/s_bloom(r=2).PNG b/results/s_bloom(r=2).PNG new file mode 100644 index 0000000..5d108c0 Binary files /dev/null and b/results/s_bloom(r=2).PNG differ diff --git a/results/s_bloom1.PNG b/results/s_bloom1.PNG new file mode 100644 index 0000000..fda3cf2 Binary files /dev/null and b/results/s_bloom1.PNG differ diff --git a/results/s_bloom2.PNG b/results/s_bloom2.PNG new file mode 100644 index 0000000..064bb78 Binary files /dev/null and b/results/s_bloom2.PNG differ diff --git a/results/s_depth.PNG b/results/s_depth.PNG new file mode 100644 index 0000000..711ccc5 Binary files /dev/null and b/results/s_depth.PNG differ diff --git a/results/s_diffuse.PNG b/results/s_diffuse.PNG new file mode 100644 index 0000000..1fe5064 Binary files /dev/null and b/results/s_diffuse.PNG differ diff --git a/results/s_normal(fps 60).PNG b/results/s_normal(fps 60).PNG new file mode 100644 index 0000000..c00541b Binary files /dev/null and b/results/s_normal(fps 60).PNG differ diff --git a/results/s_occlusion.PNG b/results/s_occlusion.PNG new file mode 100644 index 0000000..30bedf0 Binary files /dev/null and b/results/s_occlusion.PNG differ diff --git a/results/s_pos (fps 60).PNG b/results/s_pos (fps 60).PNG new file mode 100644 index 0000000..be3e77e Binary files /dev/null and b/results/s_pos (fps 60).PNG differ diff --git a/results/s_ssao.PNG 
b/results/s_ssao.PNG new file mode 100644 index 0000000..355aee0 Binary files /dev/null and b/results/s_ssao.PNG differ diff --git a/results/s_tooon.PNG b/results/s_tooon.PNG new file mode 100644 index 0000000..e2a7b91 Binary files /dev/null and b/results/s_tooon.PNG differ