Argh, WTF Cg?
Here's part of my new Oculus distortion shader, ported from the official DX11 HLSL to SM2.0 Cg:
Code:
// Sample each colour channel at its own UV coords to correct chromatic aberration
float blue  = tex2D(RT, tcBlue).b  * 1.0000;
float green = tex2D(RT, tcGreen).g * 1.0000;
float red   = tex2D(RT, tcRed).r   * 1.0000;
return float4(red, green, blue, 1);
There's a bunch of stuff above that's different but not important to see. Ignore the *1.0000 bit; I'll explain that in a moment.
So we have one render texture: the render of a single viewport (left or right eye). To correct the chromatic aberration I need to read each colour channel from a different set of UV coords.
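For context, those coords come out of the usual per-channel warp, roughly like the SDK's chromatic aberration pass does it (from memory, so parameter names like LensCenter, ScaleIn, Scale, HmdWarpParam and ChromAbParam may not match my port exactly):

Code:
// Barrel-warp the UV, then scale the warped radius per channel.
float2 theta  = (uv - LensCenter) * ScaleIn;
float rSq     = theta.x * theta.x + theta.y * theta.y;
float2 theta1 = theta * (HmdWarpParam.x + HmdWarpParam.y * rSq
                       + HmdWarpParam.z * rSq * rSq
                       + HmdWarpParam.w * rSq * rSq * rSq);
// Red and blue are scaled in/out relative to green to undo the colour fringing.
float2 tcRed   = LensCenter + Scale * theta1 * (ChromAbParam.x + ChromAbParam.y * rSq);
float2 tcGreen = LensCenter + Scale * theta1;
float2 tcBlue  = LensCenter + Scale * theta1 * (ChromAbParam.z + ChromAbParam.w * rSq);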
I've rendered the tcBlue, tcGreen and tcRed texture coords to the screen, and they are all different.
But when I combine the channels as in the last line, I get a result that looks like all three channels are using the same texture coords.
If I change the above code to use 1.00001 instead, it works correctly: each channel is now sampled at its own coords and the three channel images overlap each other as expected. Why does scaling the brightness of a colour channel by a non-1.0 value change its texture coordinates?
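For reference, the working version is identical apart from that multiplier:

Code:
// Nudging the multiplier off exactly 1.0 stops the three reads collapsing into one.
float blue  = tex2D(RT, tcBlue).b  * 1.00001;
float green = tex2D(RT, tcGreen).g * 1.00001;
float red   = tex2D(RT, tcRed).r   * 1.00001;
return float4(red, green, blue, 1);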
I bet if I renamed the file to .hlsl and changed the program definition to HLSL it would work fine (I've had that happen in the past).
It's like it's caching the texture read and reusing it instead of doing three reads.
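If that's what's happening (I haven't dug into the generated asm to confirm), another workaround worth trying might be folding the three reads into a single expression, so there are no identical-looking float temporaries for the optimiser to merge. Untested sketch:

Code:
// Untested: one expression, no named intermediates to common-subexpression away.
return float4(tex2D(RT, tcRed).r,
              tex2D(RT, tcGreen).g,
              tex2D(RT, tcBlue).b,
              1.0);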