[GSoC 2011 - Accepted] Modern Illumination Techniques
Posted: Fri Mar 18, 2011 9:02 pm
Personal Details
Name: Andrei Radu
Email: andrei.radu@siggraph.org
OGRE Forum username: andrei_radu
Project Proposal
My proposal focuses on implementing several modern illumination techniques that are either related to, or benefit from, deferred shading. Most of these have been used in shipped games, while a couple are still considered 'bleeding edge'. My list of features is prioritized, and I tried to make sure that each milestone delivers a usable and thoroughly documented piece of code.
Light Pre-pass
This technique is an alternative to deferred shading, promising lower memory bandwidth requirements and somewhat higher material system flexibility, at the expense of an extra geometry pass (which is relatively cheap). It was first introduced by Wolfgang Engel on his blog[1], and later developed through several articles in the ShaderX and GPU Pro series.
The rendering process starts by rendering normals and depth information into a slim G-buffer (the normals can be stored in polar coordinates, which leaves 16 bits for depth; this should be enough precision, and requires only a single RGBA8 render target; we use a 16-bit floating point render target for clarity). Next, for each light in the scene, N·L·light_color and (R·V)^n (for the Phong model) are rendered into a light buffer (also a single RGBA8 target, or RGBA16F to support HDR). A second geometry pass is then performed (reusing the Z-buffer from the first pass, so there is no overdraw), and the lighting equation is reconstructed using the material info.
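As a rough illustration of the slim G-buffer layout described above, here is a minimal CPU-side sketch in Python (the exact channel assignment is my assumption; a real implementation would do this packing in the shaders):

```python
import math

def pack_gbuffer_texel(normal, linear_depth):
    """Pack a unit normal (as two polar angles) and a linear depth in [0,1]
    into four 8-bit channels, as in the slim G-buffer described above."""
    nx, ny, nz = normal
    theta = math.acos(max(-1.0, min(1.0, nz)))      # polar angle in [0, pi]
    phi = math.atan2(ny, nx) % (2.0 * math.pi)      # azimuth in [0, 2*pi)
    r = int(theta / math.pi * 255.0 + 0.5)
    g = int(phi / (2.0 * math.pi) * 255.0 + 0.5)
    # 16-bit depth split across the remaining two channels
    d = int(linear_depth * 65535.0 + 0.5)
    b, a = d >> 8, d & 0xFF
    return r, g, b, a

def unpack_gbuffer_texel(r, g, b, a):
    """Inverse of the packing above: recover the normal and linear depth."""
    theta = r / 255.0 * math.pi
    phi = g / 255.0 * 2.0 * math.pi
    normal = (math.sin(theta) * math.cos(phi),
              math.sin(theta) * math.sin(phi),
              math.cos(theta))
    depth = ((b << 8) | a) / 65535.0
    return normal, depth
```

The round trip loses a little angular precision (8 bits per angle), which is usually acceptable for lighting.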
This technique has been used in several shipped games, and a rather impressive demo involving a large number of light sources can be found at [2]. The performance gain is larger when post-processing effects like motion blur or precomputed ambient occlusion are applied (since these would require a larger G-buffer in a classical deferred shading approach).
In my implementation, the alpha channel of the light buffer contains the specular term raised to a light-specific power, multiplied by N·L and attenuation. The specular term (N·H) is recovered at merge time by dividing by the luminance of the diffuse channel.
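A minimal sketch of this merge step, assuming the light buffer layout just described (scalar channel values and Rec. 709 luminance weights are my assumptions, not necessarily what the final shaders will use):

```python
def merge_specular(light_rgb, light_a, eps=1e-4):
    """Recover an approximate colored specular term from the light buffer.
    RGB holds N.L * attenuation * light_color and A holds
    spec^n * N.L * attenuation, so dividing A by the luminance of RGB
    cancels the shared N.L * attenuation factor; remodulating by RGB
    restores the light's color and falloff."""
    lum = 0.2126 * light_rgb[0] + 0.7152 * light_rgb[1] + 0.0722 * light_rgb[2]
    if lum < eps:
        return (0.0, 0.0, 0.0)   # unlit pixel: no specular to recover
    spec = light_a / lum
    return tuple(spec * c for c in light_rgb)
```

For a white light with N·L·attenuation = 0.5 and a specular term of 0.8, the buffer holds RGB = (0.5, 0.5, 0.5) and A = 0.4, and the merge yields (0.4, 0.4, 0.4) as expected.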
Inferred Lighting[3]
This builds on the previous technique and uses smaller-resolution buffers. The main issue that arises is that rope-like artifacts appear on object edges (due to sampling issues). The proposed solution is an additional buffer that stores both linear depth and a semi-unique ID for each surface (two adjacent surfaces shouldn't have the same ID). This buffer is used in the final material pass to bias the bilinear filtering that occurs when sampling the smaller light buffer.
The main purpose of this is reducing the cost of illumination, since lighting is performed at a lower resolution. I used a light buffer with 0.75 the height and 0.75 the width of the viewport, so a ~44% reduction in lighting calculations. Artifacts are present along curved surfaces, and lighting detail on pixel-sized objects or highlights is lost (temporal aliasing occurs because of this as well).
Another important point is that performance can easily be traded for visual accuracy (we simply use a smaller or larger light/G-buffer). To the best of my knowledge, this has not yet been used in a shipped game.
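The depth/ID-based filtering bias can be sketched as follows (the tap/weight representation and the depth tolerance are illustrative assumptions, not the paper's exact formulation):

```python
def dsf_sample(taps, weights, pixel_depth, pixel_id, depth_tolerance=0.01):
    """Discontinuity-sensitive filtering sketch: 'taps' are the four bilinear
    samples from the low-res light buffer, each (value, depth, surface_id),
    and 'weights' are the usual bilinear weights. Taps whose depth or ID
    disagree with the full-res pixel are rejected and the remaining weights
    renormalized, suppressing the rope-like edge artifacts described above."""
    total, value = 0.0, 0.0
    for (v, d, sid), w in zip(taps, weights):
        if sid == pixel_id and abs(d - pixel_depth) < depth_tolerance:
            total += w
            value += w * v
    if total == 0.0:
        # no matching tap: fall back to the highest-weight (nearest) sample
        return max(zip(weights, taps))[1][0]
    return value / total
```

On an edge between surface 7 (lit value 1.0) and surface 3 (lit value 10.0), a pixel on surface 7 correctly receives 1.0 instead of a blend of the two.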
Reflective Shadow Maps
Reflective shadow maps are used, in this implementation, as a form of instant radiosity. The purpose is to provide one-bounce global illumination for diffuse surfaces. The technique starts from the observation that every point that would contribute one-bounce GI is captured by the light's shadow map. So we store extra information in the shadow map (world-space position, normal and flux - basically the amount of energy that reaches that point) and generate Virtual Point Lights (VPLs) based on it. We then use these point lights to light the scene. Since several hundred point lights are needed for good results, using a deferred technique is mandatory.
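A simplified sketch of the VPL generation step, assuming scalar flux per RSM texel (in practice flux is a colour and the sampling would run on the GPU):

```python
import random

def generate_vpls(rsm, num_vpls, seed=0):
    """Importance-sample the reflective shadow map into virtual point lights.
    'rsm' is a list of texels, each (world_pos, normal, flux); texels with
    more flux are more likely to be picked, and each chosen VPL's intensity
    is scaled so the total reflected energy is preserved regardless of how
    many VPLs we generate."""
    rng = random.Random(seed)
    total_flux = sum(t[2] for t in rsm)
    vpls = []
    for _ in range(num_vpls):
        # inverse-CDF sampling proportional to flux
        u = rng.random() * total_flux
        acc = 0.0
        for pos, normal, flux in rsm:
            acc += flux
            if acc >= u:
                vpls.append((pos, normal, total_flux / num_vpls))
                break
    return vpls
```

Because each VPL carries total_flux / num_vpls, the VPL count can be driven by scene settings and light intensity (as in the schedule below) without changing the overall energy.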
Light Propagation Volumes[4]
This is a global illumination approximation technique that can be used for low-frequency lighting. The first step is partitioning the scene into a grid. Normally the grid would have varying cell sizes, but I will only implement a fixed-size grid. The scene is then rendered into a set of Reflective Shadow Maps. Each VPL's position is then determined and converted to a spherical harmonics representation[6]. The VPLs in each cell are then 'accumulated' (we perform a summation in SH space). Low-frequency direct illumination can be approximated as well by creating VPLs from environment maps or area lights.
Next, the depth and surface normal at each point are used to reconstruct a coarse approximation of the scene's geometry (surface information stored in the RSMs is also used; additional information can be obtained via depth peeling). This geometry information is used per cell to obtain occlusion data for each incoming direction, which is also converted into SH space and accumulated.
Implementation Details: The whole technique can be implemented as a single compositor. The grids will consist of flattened 3D textures (2D textures that contain several slices), and a custom rendering pass will be used to inject the RSMs into them (sample the RSM and render the point lights directly in SH coordinates). Light propagation is computed in a fixed number of passes, using ping-pong buffers (in each pass, each cell looks at its six neighbors).
Two open-source stand-alone implementations exist at [7] and [8].
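The injection and propagation steps above can be sketched as follows (first two SH bands only; the grid is an abstract cell-to-coefficients map, and the 0.25 transfer factor is an arbitrary placeholder rather than the paper's propagation kernel):

```python
import math

# First two SH bands (4 coefficients) - enough for the low-frequency
# lighting that LPV targets. C0/C1 are the standard SH basis constants.
C0 = 0.5 * math.sqrt(1.0 / math.pi)
C1 = 0.5 * math.sqrt(3.0 / math.pi)

def sh_project_direction(d):
    """Evaluate the 4 SH basis functions for a unit direction d."""
    return (C0, -C1 * d[1], C1 * d[2], -C1 * d[0])

def inject_vpl(grid, cell, normal, intensity):
    """Accumulate a VPL into its grid cell: the SH basis evaluated along the
    surface normal stands in for a clamped cosine lobe, and accumulation is
    a plain summation in SH space, as described above."""
    sh = sh_project_direction(normal)
    grid[cell] = tuple(g + intensity * b for g, b in zip(grid[cell], sh))

def propagate(grid, neighbors):
    """One ping-pong propagation pass: each cell gathers a fraction of the
    SH energy of its neighbors (six in 3D; 'neighbors' maps cell -> list).
    Writing into a fresh dict mirrors the ping-pong buffers."""
    out = {}
    for cell, coeffs in grid.items():
        gathered = list(coeffs)
        for n in neighbors[cell]:
            for i in range(4):
                gathered[i] += 0.25 * grid[n][i]
        out[cell] = tuple(gathered)
    return out
```

Running a fixed number of `propagate` passes spreads a VPL's energy outward cell by cell, which is exactly the behavior the compositor's ping-pong passes will implement on the GPU.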
Demos
The demo extends the deferred shading demo with a drop-down that selects the lighting technique. The options will be: forward rendering, deferred shading, deferred lighting, inferred lighting, spherical harmonics lighting and light propagation volumes. Additionally, when deferred shading, deferred lighting or inferred lighting is selected, there will be an option to activate Reflective Shadow Maps. Debugging outputs are also extended to allow viewing the light buffer (for deferred lighting and inferred lighting) and object IDs (for inferred lighting).
Schedule
The only other commitment I have this summer is my third-year exams, and I'll finish those around the 10th of June. That gives me roughly 10 weeks of actual work.
• Preparation period – Find out what the community expects from this project. Get familiar with the inner workings of the compositor framework.
• [done] Weeks 1-[s]2[/s] 4 (June 12th) – Implement and optimize Light Pre-Pass and Inferred Lighting
• [s]Week 3 (June 26th) – Implement Depth Peeling (useful for both LPV and SSDO)[/s]
• [s]Week 4 (July 3rd) – Implement SSDO[/s]
• Week 5 (July 10th) – Implement the rendering of 'fat' shadow maps
• Week 6 (July 17th) – Implement RSM sampling and VPL generation - VPLs will work as a special kind of deferred light (they provide light over a hemisphere); after each shadow-casting light is processed and its shadow map generated, the VPLs for that light are generated and rendered. The number of VPLs will depend on scene settings and the light's intensity.
• Week 7 (July 24th) – Tweak VPL generation to obtain good visuals and solve issues - I expect issues due to the relatively small number of VPLs that can be generated while keeping real-time performance; each VPL's contribution needs to be clamped to avoid unnatural highlights. Implement Spherical Harmonic Lighting (a step towards implementing LPV) - generate a set of coefficients for the cathedral scene and use them for (directional-only) lighting.
• Weeks 8-9 (July 31st) – Build a spatial grid containing the scene and render the geometry into it - this involves rendering each shadow map into a 'flat' 3D texture
• Week 10 (August 14th) – Implement propagation and occlusion between grid cells, and generate the SH coefficients for each of them. Use these coefficients to light the scene.
Some further development ideas:
• The specular term in deferred lighting can be computed in other ways with various tradeoffs (an extra render target can be used, or the diffuse term can be compressed).
• LPV can be optimized by allowing cascaded grids (recursively subdivided; think of an octree).
• The SH representation from LPV can be used as a basis for further SSDO development (instead of accessing a cube map for each direction, we could use the per-cell representation).
Why I'm The Person For This Project
I am a third-year computer science student, and the courses I have taken span from object-oriented programming and software engineering to computer graphics and computer architecture. I consider myself well prepared in all of these fields, especially since I have taken a special interest in them.
I'm also quite a talkative and sociable person, and I enjoy working with others. I am currently involved in a large-scale virtual reality project that uses an Ogre-based client. You can find out more about my work on my blog[9] or my projects page[10].
Why OGRE?
I have a great interest in computer graphics and engine design, and I hope to work in these fields in the near future; working with Ogre has helped me improve in both. I have also learned a lot from Ogre, in coding style as well as engine design (I often find myself skimming through open-source engines' Doxygen documentation to see how different modules work together).
Anything Else
My proposal might look a little ambitious, or even unfeasible, but I believe that I have the skills and motivation needed to implement it. Plus, I don't have anything planned until early September.
[1] http://diaryofagraphicsprogrammer.blogs ... derer.html
[2] http://www.confettispecialfx.com/river- ... ts#more-92
[3] http://graphics.cs.uiuc.edu/~kircher/in ... _paper.pdf
[4] http://www6.incrysis.com/Light_Propagation_Volumes.pdf
[5] Reflective Shadow Maps
[6] http://www.ppsloan.org/publications/StupidSH36.pdf
[7] http://lee.fov120.com/lpv.zip
[8] http://blog.blackhc.net/wp-content/uplo ... totype.zip
[9] http://andreiradu.blogspot.com
[10] http://graphics.cs.pub.ro/~andreiradu
Andrei