Ray tracing support for Ogre 2.2+

Discussion area about developing with Ogre-Next (2.1, 2.2 and beyond)


Post by dark_sylinc »

RenderSystem is the wrong place to do it. It's too low level.

See VctVoxelizer (Components/Hlms/Pbs/include/Vct/OgreVctVoxelizer.h) and VctCascadedVoxelizer (Components/Hlms/Pbs/include/Vct/OgreVctCascadedVoxelizer.h); the latter accepts static and dynamic objects to generate the GI.

Of course you'd need to add RenderSystem interfaces to perform the RayTracing API calls, but RayTracing itself should be encapsulated somewhere else.

The only thing I am wondering about is that VctCascadedVoxelizer requires manually adding each Item to be tracked. I am thinking that is perhaps a mistake: per-object GI settings should live in Item, while VctCascadedVoxelizer gets attached to SceneManager to be the active GI.

But overall RayTracing should be encapsulated into its own interface that is plugged into SceneManager and makes the necessary RenderSystem calls.
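
To illustrate the shape of that encapsulation (a hypothetical sketch of my own; none of these names exist in Ogre-Next):

Code: Select all

// Hypothetical interface, plugged into SceneManager the same way
// VctCascadedVoxelizer is attached to be the active GI.
class RaytracingWorld
{
public:
    virtual ~RaytracingWorld() {}

    // Mirror VctCascadedVoxelizer: track the objects that participate.
    virtual void addItem( Ogre::Item *item, bool isStatic ) = 0;
    virtual void removeItem( Ogre::Item *item ) = 0;

    // Called once per frame; implementations translate this into the new
    // RenderSystem entry points (AS builds/refits, ray dispatches).
    virtual void update( Ogre::SceneManager *sceneManager,
                         Ogre::RenderSystem *renderSystem ) = 0;
};

// Hypothetical usage: sceneManager->setRaytracingWorld( &myRtWorld );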

Post by Hotshot5000 »

As I was playing with reconstructing from depth, I realised that the far-corner normals provided in the reconstruct-from-depth sample are interpolated via the cameraDir variable passed from the vertex shader to the fragment shader. The only issue is that in a compute shader I am not aware of any way to have this interpolation done automatically.

So that would mean I need to do the interpolation myself, based on the x, y position in clip space provided by gl_GlobalInvocationID.xy. Kind of like barycentric triangle coords, but using the screen quad. Is this worth it, or will it kill performance?

Is there an easier way to get the cameraDir (farCorners) available in the compute shader, ready to be used as view rays to be multiplied with linearDepth?
Just using the inverseViewProj matrix doesn't seem to work, as the unprojected x, y positions don't seem to account for the actual view frustum defined by the farCorners.

Also, I need all of this to end up in world space, as ray.origin and ray.direction must be in world space for the rays to be traced correctly.

Post by dark_sylinc »

You'll have to do the interpolation by hand.

This is quite easy. The unoptimized formula is:

Code: Select all

result =
    lerp(
        lerp( v00, v10, fract( uv.x ) ),
        lerp( v01, v11, fract( uv.x ) ),
        fract( uv.y ) );

Where fract returns the decimal part, e.g. fract( 13.456 ) = 0.456

The formula can be optimized by expanding it, cancelling some terms, and using algebraic properties to reduce the number of multiplies & additions, but this form is the most intuitive for understanding what's going on.

You'll have to send all 4 corners (which are normally passed via the "normals" vertex component) to the compute shader.
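
For reference, a minimal C++ sketch of that manual bilinear interpolation (my code, not from Ogre; it assumes u = pixel.x / width and v = pixel.y / height, so both are already in [0; 1] and fract() can be dropped):

Code: Select all

struct Vec3 { float x, y, z; };

// lerp( a, b, t ) = a + (b - a) * t
static Vec3 lerp( const Vec3 &a, const Vec3 &b, float t )
{
    return Vec3{ a.x + ( b.x - a.x ) * t,
                 a.y + ( b.y - a.y ) * t,
                 a.z + ( b.z - a.z ) * t };
}

// Manual bilinear interpolation of the 4 far-plane corners for the compute
// thread at normalized coordinates (u, v). Corner naming follows the
// formula above.
static Vec3 interpolateCorners( const Vec3 &v00, const Vec3 &v10,
                                const Vec3 &v01, const Vec3 &v11,
                                float u, float v )
{
    return lerp( lerp( v00, v10, u ),
                 lerp( v01, v11, u ),
                 v );
}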

Post by Hotshot5000 »

Thanks for the support! Now the cube casts a shadow onto the plane using traced rays.

Shadow texture:
https://imgur.com/iHyOu84
Red is where a ray cast from the depth-reconstructed position didn't hit anything. White represents contact with a triangle.

Normal rendertarget:
https://imgur.com/Si8aDEv

Post by Hotshot5000 »

Didn't have much time to continue working on this, but I managed to get dynamic shadows showing:

https://imgur.com/a/ClHQE8n

Right now it's only refitting the acceleration structure (AS), since it assumes the objects in the scene don't change position too much (currently they only rotate around an axis). It's not optimized in any way, but at least there is something on the screen. The red background marks intersection misses, where a ray hits no objects.
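
To clarify what I mean by refitting versus rebuilding, a rough C++ sketch (the AccelerationStructure type and its methods are placeholders of my own, not Metal or Ogre API):

Code: Select all

struct AccelerationStructure
{
    void refit()   { /* update node bounds in place: cheap, but trace quality degrades as objects drift */ }
    void rebuild() { /* full rebuild: expensive, but restores trace quality */ }
};

// Hypothetical policy: refitting is enough while objects merely rotate or
// move slightly; a rebuild is needed once they stray far from the positions
// the AS was originally built around.
void updateAS( AccelerationStructure &as, bool onlySmallTransforms )
{
    if( onlySmallTransforms )
        as.refit();
    else
        as.rebuild();
}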

I will continue the work here and keep posting updates when I get something interesting working.

EDIT: Forgot to mention this is working only for a directional light.

Post by Hotshot5000 »

First video
Second video

I have noticed that the shadows seem off and there are some artifacts, probably because of a lack of floating-point precision. The two videos are based on the same rendering as the image from the previous post, except that the light has been changed to a directional light coming down vertically.

In the first video you can see the shadow texture produced by the compute shader that traces rays from world space to the light source (in this case the light direction is (0, -1, 0), pointing straight down). Red means the ray didn't hit anything; white means it did. The min_distance for intersection detection is 0, and you can see a lot of errors, especially on the rotating balls. The "shadow" seems to creep upward on the right side for some reason, and moving the camera somehow influences this. When I move above the ball, you can see that the shadow now covers the right half of the ball, even though the light is behind the camera, so the ball shouldn't be shadowed from that angle.

In the second video I've increased min_distance to 0.01 and the issue is not as glaring, but it's still there somewhat. The shadows on the ground look fine in both cases, but the self-shadowing looks really bad.

The compute shader reconstructs the world-space position of each pixel from the depth, as in the ReconstructFromDepth tutorial. The camera corners are sent as CompositorPassQuadDef::WORLD_SPACE_CORNERS_CENTERED.
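
As a side note, this is roughly how the C++ side can fill the projectionParams the shader expects (a sketch; getProjectionParamsAB() is, as far as I can tell, the helper Ogre-Next's depth-reconstruction code uses for exactly this linearDepth = y / (depth - x) formula, and mCamera / rtInput are the same objects as in the C++ snippet further down):

Code: Select all

// (A, B) pair consumed by the shader's
// linearDepth = projectionParams.y / (fDepth - projectionParams.x) line.
const Ogre::Vector2 projParams = mCamera->getProjectionParamsAB();
rtInput->projectionParams.x = projParams.x;
rtInput->projectionParams.y = projParams.y;

// The "centered" corners are the far-plane corners relative to the camera
// position, so the shader can do worldPos = cameraPos + interp * linearDepth.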

Cleaned up compute intersection code:

Code: Select all

#include <metal_stdlib>
#include <simd/simd.h>

#define GEOMETRY_MASK_TRIANGLE 1
#define GEOMETRY_MASK_SPHERE   2
#define GEOMETRY_MASK_LIGHT    4

#define GEOMETRY_MASK_GEOMETRY (GEOMETRY_MASK_TRIANGLE | GEOMETRY_MASK_SPHERE)

#define RAY_MASK_PRIMARY   (GEOMETRY_MASK_GEOMETRY | GEOMETRY_MASK_LIGHT)
#define RAY_MASK_SHADOW    GEOMETRY_MASK_GEOMETRY
#define RAY_MASK_SECONDARY GEOMETRY_MASK_GEOMETRY

using namespace metal;
using namespace raytracing;

struct INPUT
{
    float4x4 invProjectionMat;
    float4x4 invViewMat;
    float4 cameraCorner0;
    float4 cameraCorner1;
    float4 cameraCorner2;
    float4 cameraCorner3;
    float4 cameraPos;
    float4 cameraRight;
    float4 cameraUp;
    float4 cameraFront;
    float2 projectionParams;
    float width;
    float height;
};

struct Light
{
    
    float4 position;    //.w contains the objLightMask
    float4 diffuse;     //.w contains numNonCasterDirectionalLights
    float3 specular;
    float3 attenuation;

    //Spotlights:
    //  spotDirection.xyz is direction
    //  spotParams.xyz contains falloff params
    float4 spotDirection;
    float4 spotParams;

#define lightTexProfileIdx spotDirection.w
};

float origin()      { return 1.0f / 32.0f; }
float float_scale() { return 1.0f / 65536.0f; }
float int_scale()   { return 256.0f; }

// Taken from Ray Tracing Gems (2019), chapter 6:
// "A Fast and Robust Method for Avoiding Self-Intersection".
// Normal points outward for rays exiting the surface, else is flipped.
float3 offset_ray( const float3 p, const float3 n )
{
    int3 of_i = int3( int_scale() * n.x, int_scale() * n.y, int_scale() * n.z );

    float3 p_i = float3(
        as_type<float>( as_type<int>( p.x ) + ( ( p.x < 0 ) ? -of_i.x : of_i.x ) ),
        as_type<float>( as_type<int>( p.y ) + ( ( p.y < 0 ) ? -of_i.y : of_i.y ) ),
        as_type<float>( as_type<int>( p.z ) + ( ( p.z < 0 ) ? -of_i.z : of_i.z ) ) );

    return float3( fabs( p.x ) < origin() ? p.x + float_scale() * n.x : p_i.x,
                   fabs( p.y ) < origin() ? p.y + float_scale() * n.y : p_i.y,
                   fabs( p.z ) < origin() ? p.z + float_scale() * n.z : p_i.z );
}

kernel void main_metal
(
    depth2d<@insertpiece(texture0_pf_type), access::read> depthTexture [[texture(0)]],
    texture2d<@insertpiece(texture1_pf_type), access::read> normalsTexture [[texture(1)]],
    // sampler samplerState [[sampler(0)]],
    texture2d<float, access::write> shadowTexture [[texture(UAV_SLOT_START)]], // Destination

    //constant float2 &projectionParams [[buffer(PARAMETER_SLOT)]], // TODO: PARAMETER_SLOT should be const buffer??
    constant Light *lights, // TODO: replace with correct light source.
    constant INPUT *in,

    instance_acceleration_structure accelerationStructure,
    intersection_function_table<triangle_data, instancing> intersectionFunctionTable,

    ushort3 gl_LocalInvocationID  [[thread_position_in_threadgroup]],
    ushort3 gl_GlobalInvocationID [[thread_position_in_grid]],
    ushort3 gl_WorkGroupID        [[threads_per_threadgroup]]
)
{
    ushort3 pixelPos = gl_GlobalInvocationID;

    float fDepth = depthTexture.read( pixelPos.xy );
    float3 fNormal = normalize( normalsTexture.read( pixelPos.xy ).xyz * 2.0 - 1.0 );
    fNormal.z = -fNormal.z; // Normal should be left-handed.

    float linearDepth = in->projectionParams.y / ( fDepth - in->projectionParams.x );

    // The ray to cast.
    ray shadowRay;

    // Pixel coordinates for this thread.
    float2 pixel = (float2)gl_GlobalInvocationID.xy;
    float2 uv = float2( pixel.x / in->width, pixel.y / in->height );

    // Manual bilinear interpolation of the 4 far corners (see above).
    float3 interp = mix( mix( in->cameraCorner0.xyz, in->cameraCorner2.xyz, uv.x ),
                         mix( in->cameraCorner1.xyz, in->cameraCorner3.xyz, uv.x ),
                         uv.y );

    float3 worldSpacePosition = in->cameraPos.xyz + interp * linearDepth;

    // Create an intersector to test for intersection between the ray and the geometry in the scene.
    intersector<triangle_data, instancing> i;

    // Shadow rays check only whether there is an object between the intersection point
    // and the light source. Tell Metal to return after finding any intersection.
    i.accept_any_intersection( true );
    i.assume_geometry_type( geometry_type::triangle );
    i.force_opacity( forced_opacity::opaque );

    typename intersector<triangle_data, instancing>::result_type intersection;

    // Rays start at the reconstructed surface position, pushed off the surface
    // to avoid self-intersection.
    // Note: w = 1.0f transforms the normal as a point (it picks up the view
    // matrix's translation); a direction would normally use w = 0.0f.
    float4 normalInWorldSpace = in->invViewMat * float4( fNormal.xyz, 1.0f );
    shadowRay.origin = offset_ray( worldSpacePosition.xyz, normalInWorldSpace.xyz );
    // Using worldSpacePosition.xyz directly yields no different result.

    //for( int lightIndex = 0; lightIndex < /*lightCount*/1; ++lightIndex )
    //{

    // Shadow ray towards the light (hardcoded directional light pointing straight down).
    shadowRay.direction = normalize( float3( 0.0f, 1.0f, 0.0f ) );

    // Don't limit intersection distance.
    shadowRay.max_distance = INFINITY;
    shadowRay.min_distance = 0.01f;

    intersection = i.intersect( shadowRay, accelerationStructure, RAY_MASK_SHADOW );

    if( intersection.type == intersection_type::triangle )
    {
        shadowTexture.write( float4( 1.0f, 1.0f, 1.0f, 1.0f ), gl_GlobalInvocationID.xy );
    }
    else
    {
        shadowTexture.write( float4( 1.0f, 0.0f, 0.0f, 1.0f ), gl_GlobalInvocationID.xy );
    }
}

Btw the repo is here on branch RTShadows.

EDIT: I will try to shoot the rays directly from the camera and ignore the depthTexture, to see if I get different results...

EDIT 2: I am doing something very wrong somewhere. Shooting rays directly from the camera produces a shadow texture that is compressed on the Y axis.

Image here

Code: Select all

// Metal compute shader
// First method to cast rays. 
float2 uv = float2( pixel.x / in->width, pixel.y / in->height );
uv.x = uv.x * 2.0f - 1.0f;
uv.y = ( 1.0f - uv.y ) * 2.0f - 1.0f;

// The rays start at the camera position.
shadowRay.origin = in->cameraPos.xyz;

// Map normalized pixel coordinates into the camera's coordinate system.
shadowRay.direction = normalize( uv.x * normalize( in->cameraRight.xyz ) +
                                 uv.y * normalize( in->cameraUp.xyz ) +
                                 normalize( in->cameraFront.xyz ) );

// Second method to cast rays. Same result as the first method, as in the linked image.
float imageAspectRatio = in->width / in->height; // assuming width > height
float Px = ( 2 * ( ( pixel.x + 0.5 ) / in->width ) - 1 ) * tan( in->fovY / 2 ) * imageAspectRatio;
float Py = ( 1 - 2 * ( ( pixel.y + 0.5 ) / in->height ) ) * tan( in->fovY / 2 );

float3 rayOrigin = float3( in->cameraPos.xyz );
float3 rayDirection = ( in->invViewMat * float4( Px, Py, -1, 1 ) ).xyz;
rayDirection = normalize( rayDirection );

// C++ side where the camera data, width and height are written.
const Ogre::Vector3 cameraPos = mCamera->getDerivedPosition();
Ogre::Vector3 cameraRight = mCamera->getRight();
Ogre::Vector3 cameraUp = mCamera->getUp();
Ogre::Vector3 cameraFront = mCamera->getDirection();

rtInput->width = mRenderWindow->getWidth();
rtInput->height = mRenderWindow->getHeight();
rtInput->fovY = mCamera->getFOVy().valueRadians();

EDIT 3: Camera::getAspectRatio() returns the default 1.333, not the actual 1.77 of the 16:9 resolution in use. It doesn't really change anything, as the aspect ratio is calculated in the compute program.

Post by Hotshot5000 »

Found the issue. I was sending the invViewProjection matrix in the invViewMatrix slot, so I was actually multiplying the NDC with invProjMat and then with invViewProjMat.
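
In other words, the intended transform chain versus what the bug produced (C++-style pseudomath; Vec4 / Mat4 and the variable names are stand-ins for whatever math types are in use):

Code: Select all

// Intended: NDC -> view space -> world space.
Vec4 ndc        = Vec4( ndcX, ndcY, -1.0f, 1.0f );
Vec4 viewSpace  = invProjMat * ndc;
viewSpace       = viewSpace / viewSpace.w;
Vec4 worldSpace = invViewMat * viewSpace; // must be invView, NOT invViewProj

// The bug: the buffer meant to hold invViewMat actually held invViewProjMat,
// so the effective transform was invViewProjMat * invProjMat * ndc.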

Shooting rays from the camera seems more accurate than using the linear depth, but I still don't understand why the issue exists... The results should be the same whether I shoot rays from the reconstructed world-space position directly, or shoot them from the camera first and then from the hit point to the light source.

It can be seen here that the shadow doesn't creep up the right side of the ball like it did in the linear-depth attempt at shooting rays to the light. (Red means the first ray from the camera didn't hit anything, green means the second ray from world space didn't hit anything, and white means it hit something opaque.)

I am not even sure it would make any difference in a real-world scenario, or whether a player would even notice these errors. I would prefer to use the depth reconstruction, as it shoots fewer rays in the end, so it should be faster...
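
For a rough sense of the ray budget (my own arithmetic, assuming one light and one sample per pixel at 1920x1080):

Code: Select all

// Depth reconstruction: 1 shadow ray per pixel.
// Camera-first:         1 primary ray + 1 shadow ray per pixel.
const int pixels               = 1920 * 1080; // 2,073,600
const int raysDepthReconstruct = pixels * 1;  // ~2.07M rays per frame
const int raysCameraFirst      = pixels * 2;  // ~4.15M rays per frame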