165 changes: 27 additions & 138 deletions README.md
@@ -3,37 +3,18 @@ CIS565: Project 6 -- Deferred Shader
-------------------------------------------------------------------------------
Fall 2014
-------------------------------------------------------------------------------
Due Wed, 11/12/2014 at Noon
-------------------------------------------------------------------------------

-------------------------------------------------------------------------------
NOTE:
-------------------------------------------------------------------------------
This project requires any graphics card with support for a modern OpenGL
pipeline. Any AMD, NVIDIA, or Intel card from the past few years should work
fine, and every machine in the SIG Lab and Moore 100 is capable of running
this project.
[Youtube](https://www.youtube.com/watch?v=ggUH_oqFYuo&feature=youtu.be)

This project also requires a WebGL-capable browser. The project is known to
have issues with Chrome on Windows, but Firefox seems to run it fine.
[Live Demo](http://xjma.github.io/Project6-DeferredShader/)

-------------------------------------------------------------------------------
INTRODUCTION:
-------------------------------------------------------------------------------

In this project, you will get introduced to the basics of deferred shading. You will write GLSL and OpenGL code to perform various tasks in a deferred lighting pipeline such as creating and writing to a G-Buffer.

-------------------------------------------------------------------------------
CONTENTS:
-------------------------------------------------------------------------------
The Project6 root directory contains the following subdirectories:

* js/ contains the necessary JavaScript files, including external libraries.
* assets/ contains the textures that will be used in the second half of the
assignment.
* resources/ contains the screenshots found in this readme file.
In this project, I write GLSL and OpenGL code to perform various tasks in a deferred lighting pipeline, such as creating and writing to a G-buffer. This project requires a graphics card that supports a deferred shading pipeline.

This Readme file edited as described above in the README section.
![blinn](https://raw.githubusercontent.com/XJMa/Project6-DeferredShader/master/screenshots/diffuseSpec.jpg)

-------------------------------------------------------------------------------
OVERVIEW:
@@ -69,154 +50,62 @@ WASDRF - Movement (along with the arrow keys)
* 2 - Normals
* 3 - Color
* 4 - Depth
* 5 - Blinn-Phong shading
* 6 - Bloom shading
* 7 - Toon shading
* 8 - SSAO
* 0 - Full deferred pipeline

There are also mouse controls for camera rotation.

-------------------------------------------------------------------------------
REQUIREMENTS:
Blinn-Phong:
-------------------------------------------------------------------------------

In this project, you are given code for:
* Loading .obj file
* Deferred shading pipeline
* GBuffer pass
The diffuse and specular (Blinn-Phong) shading is implemented in the lighting pass, and the accumulation stage writes the result to the P-buffer.

You are required to implement:
* Either of the following effects
* Bloom
* "Toon" Shading (with basic silhouetting)
* Screen Space Ambient Occlusion
* Diffuse and Blinn-Phong shading

**NOTE**: Implementing separable convolution will require another link in your pipeline and will count as an extra feature if you do performance analysis with a standard one-pass 2D convolution. The overhead of rendering and reading from a texture _may_ offset the extra computations for smaller 2D kernels.

You must implement two of the following extras:
* The effect you did not choose above
* Compare performance to a normal forward renderer with
* No optimizations
* Coarse sort geometry front-to-back for early-z
* Z-prepass for early-z
* Optimize g-buffer format, e.g., pack things together, quantize, reconstruct z from normal x and y (because it is normalized), etc.
* Must be accompanied with a performance analysis to count
* Additional lighting and pre/post processing effects! (email first please, if they are good you may add multiple).
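One of the G-buffer packing ideas above — reconstructing the normal's z component from x and y — can be sketched in GLSL. This is an illustrative sketch, assuming view-space normals whose z component faces the camera (non-negative):

```glsl
// Pack: store only normal.xy in the G-buffer (view-space normals
// usually face the camera, so z >= 0 is assumed here).
vec2 packNormal(vec3 n) {
    return n.xy;
}

// Unpack: since the normal is unit length, x*x + y*y + z*z = 1.0,
// z can be reconstructed from the stored x and y.
vec3 unpackNormal(vec2 nxy) {
    float z = sqrt(max(0.0, 1.0 - dot(nxy, nxy)));
    return vec3(nxy, z);
}
```

This halves the storage for normals at the cost of a square root per fragment on read, and loses normals that genuinely face away from the camera.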
![blinn](https://raw.githubusercontent.com/XJMa/Project6-DeferredShader/master/screenshots/diffuseSpec.jpg)

-------------------------------------------------------------------------------
RUNNING THE CODE:
Bloom
-------------------------------------------------------------------------------
Bloom is a post-processing effect. Normally, bloom is implemented by first rendering the glow sources to a texture and then blurring that texture. Here I simply treat the whole object as a glow source and apply a Gaussian convolution to the color read from the G-buffer.
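A minimal sketch of this single-pass Gaussian convolution, assuming a `u_texelSize` uniform (1/resolution) alongside the existing `u_colorTex`; the kernel size and sigma are illustrative:

```glsl
precision highp float;

uniform sampler2D u_colorTex;   // color from the G-buffer
uniform vec2 u_texelSize;       // assumed uniform: 1.0 / resolution

varying vec2 v_texcoord;

void main() {
    const float sigma = 2.0;
    vec3 sum = vec3(0.0);
    float weightSum = 0.0;
    // 5x5 Gaussian kernel, weights evaluated on the fly
    for (int i = -2; i <= 2; i++) {
        for (int j = -2; j <= 2; j++) {
            float w = exp(-float(i * i + j * j) / (2.0 * sigma * sigma));
            vec2 offset = vec2(float(i), float(j)) * u_texelSize;
            sum += w * texture2D(u_colorTex, v_texcoord + offset).rgb;
            weightSum += w;
        }
    }
    vec3 blurred = sum / weightSum;
    // add the blurred glow on top of the original color
    vec3 base = texture2D(u_colorTex, v_texcoord).rgb;
    gl_FragColor = vec4(base + blurred, 1.0);
}
```

Note the constant loop bounds, which GLSL ES 1.00 requires so the loops can be unrolled.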

Since the code attempts to access files that are local to your computer, you
will either need to:

* Run your browser under modified security settings, or
* Create a simple local server that serves the files


FIREFOX: change ``security.fileuri.strict_origin_policy`` to false in about:config

CHROME: run with the following argument : `--allow-file-access-from-files`

(You can do this on OSX by running Chrome from /Applications/Google
Chrome/Contents/MacOS with `open -a "Google Chrome" --args
--allow-file-access-from-files`)

* To check if you have set the flag properly, you can open chrome://version and
check under the flags

RUNNING A SIMPLE SERVER:

If you have Python installed, you can simply run a simple HTTP server off your
machine from the root directory of this repository with the following command:

`python -m SimpleHTTPServer` (Python 2; with Python 3, use `python -m http.server`)
![bloom](https://raw.githubusercontent.com/XJMa/Project6-DeferredShader/master/screenshots/bloom.jpg)

-------------------------------------------------------------------------------
RESOURCES:
"Toon" Shading (with basic silhouetting)
-------------------------------------------------------------------------------

The following are articles and resources that have been chosen to help give you
a sense of each of the effects:
Toon shading is a non-photorealistic rendering technique used to give three-dimensional models a cartoonish or hand-drawn appearance. To make the result look cartoonish we want only a few colors in the final rendering, so I round the colors in the scene to a small, fixed color set. Basic silhouetting is achieved by comparing the depth of the object with that of the background to detect edges.
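The two steps — color quantization and depth-based silhouetting — can be sketched as a fragment shader. This is an illustrative sketch: `u_texelSize`, the number of quantization levels, and the edge threshold are assumptions, not the project's actual values:

```glsl
precision highp float;

uniform sampler2D u_colorTex;
uniform sampler2D u_depthTex;
uniform vec2 u_texelSize;       // assumed uniform: 1.0 / resolution

varying vec2 v_texcoord;

void main() {
    // quantize each color channel to a small set of levels
    const float levels = 4.0;
    vec3 color = texture2D(u_colorTex, v_texcoord).rgb;
    vec3 toon = floor(color * levels) / levels;

    // basic silhouette: a large depth jump against a neighbor marks an edge
    float d  = texture2D(u_depthTex, v_texcoord).x;
    float dx = texture2D(u_depthTex, v_texcoord + vec2(u_texelSize.x, 0.0)).x;
    float dy = texture2D(u_depthTex, v_texcoord + vec2(0.0, u_texelSize.y)).x;
    if (abs(dx - d) > 0.01 || abs(dy - d) > 0.01) {
        toon = vec3(0.0);   // draw the silhouette in black
    }
    gl_FragColor = vec4(toon, 1.0);
}
```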

* Bloom : [GPU Gems](http://http.developer.nvidia.com/GPUGems/gpugems_ch21.html)
* Screen Space Ambient Occlusion : [Floored
Article](http://floored.com/blog/2013/ssao-screen-space-ambient-occlusion.html)
![toon](https://raw.githubusercontent.com/XJMa/Project6-DeferredShader/master/screenshots/toon.jpg)

-------------------------------------------------------------------------------
README
Screen Space Ambient Occlusion
-------------------------------------------------------------------------------
All students must replace or augment the contents of this Readme.md in a clear
manner with the following:
Ambient occlusion is an approximation of the amount by which a point on a surface is occluded by the surrounding geometry. To estimate it, I take random sample positions within a hemisphere oriented along the surface normal at each pixel. Each sample position is then projected into screen space to look up the depth stored in the depth buffer. If the stored depth is smaller than the sample position's depth — that is, existing geometry is closer to the camera than the sample — occlusion accumulates.
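The sampling loop described above can be sketched as follows. This is a hypothetical sketch, not the project's shader: `u_projection`, `u_kernel`, the radius, and the depth bias are assumptions, and the depth comparison assumes view-space positions stored in the G-buffer with the camera looking down -z:

```glsl
precision highp float;

uniform sampler2D u_positionTex;  // view-space positions from the G-buffer
uniform sampler2D u_normalTex;
uniform mat4 u_projection;        // assumed uniform: camera projection matrix

// assumed uniform: precomputed random sample offsets
const int KERNEL_SIZE = 16;
uniform vec3 u_kernel[16];

varying vec2 v_texcoord;

void main() {
    vec3 position = texture2D(u_positionTex, v_texcoord).xyz;
    vec3 normal   = normalize(texture2D(u_normalTex, v_texcoord).xyz);

    float occlusion = 0.0;
    const float radius = 0.5;     // illustrative sampling radius
    for (int i = 0; i < KERNEL_SIZE; i++) {
        // flip samples below the surface into the normal-oriented hemisphere
        vec3 s = u_kernel[i];
        if (dot(s, normal) < 0.0) s = -s;
        vec3 samplePos = position + s * radius;

        // project the sample into screen space to look up the stored depth
        vec4 clip = u_projection * vec4(samplePos, 1.0);
        vec2 uv = (clip.xy / clip.w) * 0.5 + 0.5;

        // stored geometry closer to the camera than the sample => occluded
        float storedZ = texture2D(u_positionTex, uv).z;
        if (storedZ > samplePos.z + 0.02) occlusion += 1.0;
    }
    occlusion = 1.0 - occlusion / float(KERNEL_SIZE);
    gl_FragColor = vec4(vec3(occlusion), 1.0);
}
```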

* A brief description of the project and the specific features you implemented.
* At least one screenshot of your project running.
* A 30 second or longer video of your project running. To create the video you
can use [Open Broadcaster Software](http://obsproject.com)
* A performance evaluation (described in detail below).
![ssao](https://raw.githubusercontent.com/XJMa/Project6-DeferredShader/master/screenshots/SSAO.jpg)

-------------------------------------------------------------------------------
PERFORMANCE EVALUATION
-------------------------------------------------------------------------------
The performance evaluation is where you will investigate how to make your
program more efficient using the skills you've learned in class. You must have
performed at least one experiment on your code to investigate the positive or
negative effects on performance.

We encourage you to get creative with your tweaks. Consider places in your code
that could be considered bottlenecks and try to improve them.

Each student should provide no more than a one page summary of their
optimizations along with tables and or graphs to visually explain any
performance differences.
![performance1](https://raw.githubusercontent.com/XJMa/Project6-DeferredShader/master/screenshots/performance1.jpg)

-------------------------------------------------------------------------------
THIRD PARTY CODE POLICY
-------------------------------------------------------------------------------
* Use of any third-party code must be approved by asking on the Google groups.
If it is approved, all students are welcome to use it. Generally, we approve
use of third-party code that is not a core part of the project. For example,
for the ray tracer, we would approve using a third-party library for loading
models, but would not approve copying and pasting a CUDA function for doing
refraction.
* Third-party code must be credited in README.md.
* Using third-party code without its approval, including using another
student's code, is an academic integrity violation, and will result in you
receiving an F for the semester.
In diagnostic mode (showing normals, position, etc.) I simply output the value read from the G-buffer without light accumulation or post-processing. From the chart above we can see that stages 2 and 3 of the deferred shading pipeline are quite computationally intense. I think the performance suffers because I implemented the deferred shader as a simple one-pass pipeline and my browser does not support draw buffers, so every stage is computed whether or not its result is used. Implementing a separable convolution would definitely help improve performance.
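For reference, a separable convolution replaces the single 2D blur with two 1D passes (horizontal then vertical), reducing a k×k kernel's texture reads per pixel from k² to 2k. A sketch of the horizontal pass, assuming a `u_texelSize` uniform and an intermediate render target between passes (the vertical pass is identical with the offset applied on y):

```glsl
precision highp float;

uniform sampler2D u_inputTex;   // assumed: output of the previous pass
uniform vec2 u_texelSize;       // assumed uniform: 1.0 / resolution

varying vec2 v_texcoord;

void main() {
    const float sigma = 2.0;
    vec3 sum = vec3(0.0);
    float weightSum = 0.0;
    // 1D Gaussian along x; the second pass repeats this along y
    for (int i = -4; i <= 4; i++) {
        float w = exp(-float(i * i) / (2.0 * sigma * sigma));
        vec2 offset = vec2(float(i) * u_texelSize.x, 0.0);
        sum += w * texture2D(u_inputTex, v_texcoord + offset).rgb;
        weightSum += w;
    }
    gl_FragColor = vec4(sum / weightSum, 1.0);
}
```

The trade-off is an extra render target and pipeline stage, which is why for very small kernels a single 2D pass can still win.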

-------------------------------------------------------------------------------
SELF-GRADING
-------------------------------------------------------------------------------
* On the submission date, email your grade, on a scale of 0 to 100, to Harmony,
[email protected], with a one paragraph explanation. Be concise and
realistic. Recall that we reserve 30 points as a sanity check to adjust your
grade. Your actual grade will be (0.7 * your grade) + (0.3 * our grade). We
hope to only use this in extreme cases when your grade does not realistically
reflect your work - it is either too high or too low. In most cases, we plan
to give you the exact grade you suggest.
* Projects are not weighted evenly, e.g., Project 0 doesn't count as much as
the path tracer. We will determine the weighting at the end of the semester
based on the size of each project.
![performance2](https://raw.githubusercontent.com/XJMa/Project6-DeferredShader/master/screenshots/performance2.jpg)

As expected, using more kernel samples to compute SSAO slows down the computation, but the slowdown is not as pronounced as I expected. I wanted to test with even larger kernels, but my laptop cannot handle kernel sizes above 90.

---
SUBMISSION
Reference
---
As with the previous projects, you should fork this project and work inside of
your fork. Upon completion, commit your finished project back to your fork, and
make a pull request to the master repository. You should include a README.md
file in the root directory detailing the following

* A brief description of the project and specific features you implemented
* At least one screenshot of your project running.
* A link to a video of your project running.
* Instructions for building and running your project if they differ from the
base code.
* A performance writeup as detailed above.
* A list of all third-party code used.
* This Readme file edited as described above in the README section.
BLOOM: http://http.developer.nvidia.com/GPUGems/gpugems_ch21.html

---
ACKNOWLEDGEMENTS
---
SSAO: http://john-chapman-graphics.blogspot.co.uk/2013/01/ssao-tutorial.html

Many thanks to Cheng-Tso Lin, whose framework for CIS700 we used for this
assignment.
34 changes: 30 additions & 4 deletions assets/deferred/diffuse.frag
@@ -8,16 +8,42 @@ uniform sampler2D u_depthTex
uniform float u_zFar;
uniform float u_zNear;
uniform int u_displayType;
uniform vec4 u_Light;

varying vec2 v_texcoord;

float linearizeDepth( float exp_depth, float near, float far ){
    return ( 2.0 * near ) / ( far + near - exp_depth * ( far - near ) );
}

void main()
{
    // Write a diffuse shader and a Blinn-Phong shader
    // NOTE : You may need to add your own normals to fulfill the second's requirements
    gl_FragColor = vec4(texture2D(u_colorTex, v_texcoord).rgb, 1.0);

    // Diffuse calculation
    vec4 lightColor = vec4(0.5, 0.5, 0.5, 1.0);
    vec3 normal = texture2D(u_normalTex, v_texcoord).xyz;

    vec3 position = texture2D(u_positionTex, v_texcoord).xyz;
    vec3 lightDir = normalize(u_Light.xyz - position);
    vec3 diffuseColor = texture2D(u_colorTex, v_texcoord).rgb;
    float diffuseTerm = clamp(abs(dot(normalize(normal), normalize(lightDir))), 0.0, 1.0);
    float specular = 0.0;

    // Blinn-Phong specular term from the half vector
    vec3 viewDir = normalize(-position);
    vec3 halfDir = normalize(lightDir + viewDir);
    float specAngle = max(dot(halfDir, normal), 0.0);
    specular = pow(specAngle, 80.0);

    // change background color
    float depth = texture2D( u_depthTex, v_texcoord ).x;
    depth = linearizeDepth( depth, u_zNear, u_zFar );

    if (depth > 0.99) {
        gl_FragColor = vec4(vec3(0.0), 1.0);
    } else {
        gl_FragColor = vec4(diffuseTerm * diffuseColor + specular * vec3(1.0), 1.0);
    }
}
49 changes: 49 additions & 0 deletions assets/deferred/diffuse.frag.bak
@@ -0,0 +1,49 @@
precision highp float;

uniform sampler2D u_positionTex;
uniform sampler2D u_normalTex;
uniform sampler2D u_colorTex;
uniform sampler2D u_depthTex;

uniform float u_zFar;
uniform float u_zNear;
uniform int u_displayType;
uniform vec4 u_Light;

varying vec2 v_texcoord;

float linearizeDepth( float exp_depth, float near, float far ){
return ( 2.0 * near ) / ( far + near - exp_depth * ( far - near ) );
}

void main()
{
    // Diffuse calculation
    vec4 lightColor = vec4(0.5, 0.5, 0.5, 1.0);
    vec3 normal = texture2D(u_normalTex, v_texcoord).xyz;

    vec3 position = texture2D(u_positionTex, v_texcoord).xyz;
    vec3 lightDir = normalize(u_Light.xyz - position);
    vec3 diffuseColor = texture2D(u_colorTex, v_texcoord).rgb;
    float diffuseTerm = clamp(abs(dot(normalize(normal), normalize(lightDir))), 0.0, 1.0);
    float specular = 0.0;

    vec3 viewDir = normalize(-position);
    vec3 halfDir = normalize(lightDir + viewDir);
    float specAngle = max(dot(halfDir, normal), 0.0);
    specular = pow(specAngle, 80.0);

    // change background color
    float depth = texture2D( u_depthTex, v_texcoord ).x;
    depth = linearizeDepth( depth, u_zNear, u_zFar );

    if (depth > 0.99) {
        gl_FragColor = vec4(vec3(0.0), 1.0);
    } else {
        //gl_FragColor = vec4(diffuseTerm * diffuseColor + specular * vec3(1.0), 1.0);
    }
}
2 changes: 1 addition & 1 deletion assets/shader/deferred/diagnostic.frag
@@ -34,7 +34,7 @@ void main()
else if( u_displayType == DISPLAY_COLOR )
gl_FragColor = color;
else if( u_displayType == DISPLAY_NORMAL )
gl_FragColor = vec4( normal, 1 );
gl_FragColor = vec4( normalize(normal), 1 );
else
gl_FragColor = vec4( position, 1 );
}
40 changes: 40 additions & 0 deletions assets/shader/deferred/diagnostic.frag.bak
@@ -0,0 +1,40 @@
precision highp float;

#define DISPLAY_POS 1
#define DISPLAY_NORMAL 2
#define DISPLAY_COLOR 3
#define DISPLAY_DEPTH 4

uniform sampler2D u_positionTex;
uniform sampler2D u_normalTex;
uniform sampler2D u_colorTex;
uniform sampler2D u_depthTex;

uniform float u_zFar;
uniform float u_zNear;
uniform int u_displayType;

varying vec2 v_texcoord;

float linearizeDepth( float exp_depth, float near, float far ){
return ( 2.0 * near ) / ( far + near - exp_depth * ( far - near ) );
}

void main()
{
    vec3 normal = texture2D( u_normalTex, v_texcoord ).xyz;
    vec3 position = texture2D( u_positionTex, v_texcoord ).xyz;
    vec4 color = texture2D( u_colorTex, v_texcoord );
    float depth = texture2D( u_depthTex, v_texcoord ).x;

    depth = linearizeDepth( depth, u_zNear, u_zFar );

    if( u_displayType == DISPLAY_DEPTH )
        gl_FragColor = vec4( depth, depth, depth, 1 );
    else if( u_displayType == DISPLAY_COLOR )
        gl_FragColor = color;
    else if( u_displayType == DISPLAY_NORMAL )
        gl_FragColor = vec4( normal, 1 );
    else
        gl_FragColor = vec4( position, 1 );
}