I'll be trying to implement this extremely soon.
TaaTT4 wrote: ↑
Fri Dec 13, 2019 1:36 am
xrgo wrote: ↑
Thu Dec 12, 2019 8:56 pm
now the next step is to process the cubemaps like SolarPortal did but it seems a bit over my skills
I believe it's out of my skills too. I don't know if/how much SolarPortal can share with us, but if he could provide us the compute shader he uses to process the cubemap, I guess it would be easy enough to integrate it into the PCC workflow.
I'll explain it in plain English.
What we're doing right now is (explaining specular for now):
- Grab a cubemap
- Generate the mipmaps with a simple filter (i.e. a regular downscale, or a Gaussian one if you're fancy, which gives blurrier, less blocky results)
- In the pixel shader, we sample the cubemap (using mip = roughness)
- Perform a simple fake BRDF approximation to blend this sampled value from previous step into the final result
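The runtime side of those four steps can be sketched like this (a minimal Python sketch of the idea; the function names are mine, not Ogre's):

```python
def get_mip_from_roughness(roughness: float, num_mips: int) -> float:
    # Hypothetical mapping: roughness 0 -> mip 0 (sharp reflection),
    # roughness 1 -> last mip (blurriest). Real engines tune this curve.
    return roughness * (num_mips - 1)

def fake_specular_ibl(prefiltered_rgb, f0):
    # Step 4 as it currently stands: just tint the mip-sampled colour by the
    # specular colour (F0). This is the crude blend the post says goes wrong
    # at higher roughness values.
    return [c * f for c, f in zip(prefiltered_rgb, f0)]
```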
What needs to be fixed:
This is a bad approximation. It looks cool, but that's it. For mip 0 (i.e. roughness = 0) there's virtually no difference.
But once the roughness becomes higher, the mips we sample are wrong.
We need to improve step 2 (mipmap generation) and the step 4 formula (BRDF).
Fixing Mipmaps (step 2)
Mipmaps must be generated by evaluating lots of samples from the previous mip.
The best way is to think of this mipmap generation as a smaller sphere (mip N+1) integrating the BRDF output coming from a bigger sphere (mip N).
In 2D, it would look like the smaller circle shooting rays to sample the bigger one:
IBLBaker has easy-to-understand code for this.
Its function "ImportanceSample" shoots "ConvolutionSampleCount" rays and averages them together with a particular formula (this 'particular formula' is tuned to match the BRDF's behaviour).
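A hedged Python sketch of the same idea (my names, not IBLBaker's code): shoot many GGX-distributed rays around the reflection direction and average what they hit, weighted by NdotL, assuming N = V = R as the usual split-sum approximation does:

```python
import math
import random

def importance_sample_ggx(u1, u2, roughness):
    # GGX-distributed half-vector around +Z (tangent space, N = (0, 0, 1)).
    a = roughness * roughness
    phi = 2.0 * math.pi * u1
    cos_t = math.sqrt((1.0 - u2) / (1.0 + (a * a - 1.0) * u2))
    sin_t = math.sqrt(max(0.0, 1.0 - cos_t * cos_t))
    return (sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t)

def prefilter_direction(env, roughness, num_samples=1024, seed=0):
    # One output texel of mip N+1: average many samples of the finer mip
    # (represented by env(direction)) with an NdotL-weighted formula.
    rng = random.Random(seed)
    total = weight = 0.0
    for _ in range(num_samples):
        h = importance_sample_ggx(rng.random(), rng.random(), roughness)
        # Reflect V = (0, 0, 1) about H to get the light direction L.
        l = (2.0 * h[2] * h[0], 2.0 * h[2] * h[1], 2.0 * h[2] * h[2] - 1.0)
        n_dot_l = l[2]
        if n_dot_l > 0.0:
            total += env(l) * n_dot_l
            weight += n_dot_l
    return total / max(weight, 1e-6)
```

A constant environment stays constant after filtering; a low roughness keeps the rays tight around the reflection vector, which is exactly the blur-per-mip behaviour we want.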
Other resources that explain this step:
https://placeholderart.wordpress.com/20 ... -lighting/
http://chanhaeng.blogspot.com/2018/08/p ... r-ibl.html
Fixing Merging in Pixel Shader (step 4)
When we integrated the larger sphere into a smaller sphere, we had two things: the Normal (perpendicular to the sphere) and the incoming Light direction (the direction of each ray; it's blue in the picture).
However, we need two more that we didn't have at that time: the Eye direction (only known at render time) and the roughness (which varies per material).
Therefore the cubemap integration is a precomputed value that is incomplete. However, it can be completed at render time with another 2D texture which stores lots of combinations of NdotV (Normal dot View) on one axis, and Roughness on the other.
IblBrdf.hlsl performs this step:
the function integrate() generates that lookup texture, storing the coefficients indexed by NdotV along the X axis and by Roughness along the Y axis.
The resulting texture looks like this, and doesn't change with scenes/materials/etc:
This lookup texture can be generated once and then stored on disk.
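A hedged CPU-side sketch of what integrate() computes for one texel of that lookup table (Python, following Karis' split-sum notes rather than IblBrdf.hlsl itself; the Schlick-GGX geometry term with k = alpha/2 is an assumption of this sketch):

```python
import math
import random

def importance_sample_ggx(u1, u2, roughness):
    # GGX-distributed half-vector around +Z (tangent space, N = (0, 0, 1)).
    a = roughness * roughness
    phi = 2.0 * math.pi * u1
    cos_t = math.sqrt((1.0 - u2) / (1.0 + (a * a - 1.0) * u2))
    sin_t = math.sqrt(max(0.0, 1.0 - cos_t * cos_t))
    return (sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t)

def g_smith(n_dot_v, n_dot_l, roughness):
    # Schlick-GGX geometry term, k = alpha/2 variant often used for IBL.
    k = (roughness * roughness) / 2.0
    gv = n_dot_v / (n_dot_v * (1.0 - k) + k)
    gl = n_dot_l / (n_dot_l * (1.0 - k) + k)
    return gv * gl

def integrate_brdf(n_dot_v, roughness, num_samples=2048, seed=0):
    # One texel of the 2D lookup table: returns (scale, bias) so the shader
    # can reconstruct the specular response as F0 * scale + bias.
    rng = random.Random(seed)
    v = (math.sqrt(max(0.0, 1.0 - n_dot_v * n_dot_v)), 0.0, n_dot_v)
    a_sum = b_sum = 0.0
    for _ in range(num_samples):
        h = importance_sample_ggx(rng.random(), rng.random(), roughness)
        v_dot_h = v[0] * h[0] + v[1] * h[1] + v[2] * h[2]
        l = tuple(2.0 * v_dot_h * h[i] - v[i] for i in range(3))
        n_dot_l, n_dot_h = l[2], h[2]
        if n_dot_l > 0.0:
            g = g_smith(n_dot_v, n_dot_l, roughness)
            g_vis = g * max(v_dot_h, 0.0) / max(n_dot_h * n_dot_v, 1e-6)
            fc = (1.0 - max(v_dot_h, 0.0)) ** 5  # Schlick Fresnel factor
            a_sum += (1.0 - fc) * g_vis
            b_sum += fc * g_vis
    return a_sum / num_samples, b_sum / num_samples
```

Evaluating this over a grid of (NdotV, roughness) pairs and saving the two channels produces the red/green lookup texture described above.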
Thus, once we have these two things (filtered cubemap and brdf 2D lookup table), we need to perform:
Code:

// Pick the mip that matches this material's roughness
float  lod              = getMipLevelFromRoughness( roughness );
float3 prefilteredColor = textureCubeLod( PrefilteredEnvMap, refVec, lod );
// Fetch the precomputed scale & bias for this NdotV/roughness pair
float2 envBRDF          = texture2D( BRDFIntegrationMap, float2( NdotV, roughness ) ).xy;
// F is the specular colour (Fresnel at F0); envBRDF completes the integration
float3 indirectSpecular = prefilteredColor * (F * envBRDF.x + envBRDF.y);
What I'll be doing is integrating IBLBaker's code into Ogre.
cmgen looks gigantic because there's a lot of C++ code to load textures from files, a lot of code to sample them (i.e. sampling uvw coordinates by hand) and to save the result; the actual work is much smaller and easier.
Diffuse IBL is the same as Specular IBL (integrate a bigger sphere into a smaller sphere), but it is much easier because:
- We often assume it doesn't change with roughness (this is wrong, but nobody cares)
- There is no eye direction
- Only the normal and light vectors are required (both known at baking/integration time)
- The BRDF is much simpler (a simple NdotL)
Additionally, diffuse IBL looks very blurry by nature, hence a 64x64 cubemap is often enough to capture everything. More resolution doesn't really add much quality.
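The bullet points above boil down to a cosine-weighted convolution of the environment. A hedged Monte Carlo sketch in Python (a sketch of the idea, not Ogre's or IBLBaker's actual code; env here returns a single scalar channel):

```python
import math
import random

def diffuse_irradiance(env, normal, num_samples=4096, seed=0):
    # Integrate the environment over the hemisphere around `normal` with the
    # simple NdotL (Lambert) BRDF. Uniform sphere sampling, pdf = 1/(4*pi).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(num_samples):
        # Uniform direction on the unit sphere.
        z = 2.0 * rng.random() - 1.0
        phi = 2.0 * math.pi * rng.random()
        r = math.sqrt(max(0.0, 1.0 - z * z))
        d = (r * math.cos(phi), r * math.sin(phi), z)
        n_dot_l = sum(a * b for a, b in zip(normal, d))
        if n_dot_l > 0.0:
            total += env(d) * n_dot_l
    # Normalized so a constant environment of 1 yields 1
    # (the Lambert 1/pi factor is folded in).
    return 4.0 * total / num_samples
```

Running this once per output texel of a tiny (e.g. 64x64) cubemap is the whole diffuse bake.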
And SH (Spherical Harmonics) can be used to lossily compress those 64x64x6 pixels into just 9 floats. But SH is not strictly necessary.
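That SH compression can be sketched like so (Python; the standard 9-coefficient band-2 real SH basis, with function names of my own invention):

```python
import math
import random

def sh_basis(d):
    # Real SH basis functions for bands 0-2 (the usual 9-coefficient set).
    x, y, z = d
    return [
        0.282095,
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z,
        0.546274 * (x * x - y * y),
    ]

def project_to_sh9(env, num_samples=8192, seed=0):
    # Monte Carlo projection of env(direction) onto 9 SH coefficients:
    # the lossy compression of a whole cubemap into 9 floats per channel.
    rng = random.Random(seed)
    coeffs = [0.0] * 9
    for _ in range(num_samples):
        z = 2.0 * rng.random() - 1.0
        phi = 2.0 * math.pi * rng.random()
        r = math.sqrt(max(0.0, 1.0 - z * z))
        d = (r * math.cos(phi), r * math.sin(phi), z)
        val = env(d)
        for i, y in enumerate(sh_basis(d)):
            coeffs[i] += val * y
    # pdf of uniform sphere sampling is 1/(4*pi)
    return [c * 4.0 * math.pi / num_samples for c in coeffs]

def eval_sh9(coeffs, d):
    # Reconstruct the (low-frequency) environment in direction d.
    return sum(c * y for c, y in zip(coeffs, sh_basis(d)))
```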
cmgen is the source code of the tool used in Filament to process cubemaps.
Thanks for the link. I think I'm going to need it if we implement SH (I loathe writing SH coefficient generation).