As others have said, OgreNext has no Mesh Shader support (Mesh Shaders are D3D12/Vulkan only; I don't know about Metal) and Ogre 1 has initial mesh shader support.
But once we go past that:
hedphelym wrote: ↑Wed Feb 26, 2025 9:52 am
https://developer.nvidia.com/blog/nvidi ... n-samples/
Has anyone of you looked into it already?
I have not started yet, other than getting it to compile (their sample project etc.); I'm planning on looking into it more and moving forward, to understand what integrating it into Ogre involves.
Mega Geometry is useful in a fully GPU-driven pipeline. This is brilliantly explained in Erik Jansson's talk "GPU-driven Rendering with Mesh Shaders in Alan Wake 2".
The overall idea is that meshes are divided into "meshlets" (small chunks of geometry, typically up to 64 vertices each) that are frustum culled (and optionally occlusion culled) and appended to a buffer, all of this done by a Compute Shader (or a series of them). This buffer is then rendered via indirect rendering. If the buffer is empty, the indirect draw call simply has nothing to render because everything was culled.
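To make the culling step concrete, here is a minimal CPU-side sketch of what the compute pass does per meshlet. None of these types are OgreNext or NVIDIA API; they are hypothetical. On the GPU each thread would run the sphere test for one meshlet and atomically append survivors to the compacted buffer:

```cpp
#include <array>
#include <cstdint>
#include <vector>

// Hypothetical illustration: each meshlet carries a bounding sphere so the
// compute shader can cull it cheaply, plus the index range it draws.
struct Plane   { float nx, ny, nz, d; };  // n·p + d >= 0 means "inside"
struct Meshlet { float cx, cy, cz, radius; uint32_t firstIndex, indexCount; };

static bool sphereInsideFrustum( const std::array<Plane, 6> &frustum,
                                 const Meshlet &m )
{
    for( const Plane &p : frustum )
    {
        const float dist = p.nx * m.cx + p.ny * m.cy + p.nz * m.cz + p.d;
        if( dist < -m.radius )
            return false;  // fully behind one plane -> culled
    }
    return true;
}

// Emulates the compute pass: compact surviving meshlets into the buffer
// that the indirect draw will later consume.
std::vector<Meshlet> cullMeshlets( const std::array<Plane, 6> &frustum,
                                   const std::vector<Meshlet> &meshlets )
{
    std::vector<Meshlet> visible;
    for( const Meshlet &m : meshlets )
        if( sphereInsideFrustum( frustum, m ) )
            visible.push_back( m );
    return visible;
}
```

The key difference from mesh-level culling is granularity: a single huge mesh no longer passes or fails as a whole; only its visible meshlets survive.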
This means C++ issues only a few calls (the compute dispatches and a few indirect draw calls) and that's it; most of the rendering code lives in shaders (hence "GPU-driven").
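The "empty buffer renders nothing" trick works because the compute shader writes the draw arguments themselves. As an illustration, this is the layout of Vulkan's real VkDrawIndexedIndirectCommand, reproduced as a plain struct (the mesh-shader path would use VkDrawMeshTasksIndirectCommandEXT instead, but the principle is the same):

```cpp
#include <cstdint>

// Mirrors VkDrawIndexedIndirectCommand. The culling compute shader writes
// this struct into the indirect buffer; if everything was culled it writes
// indexCount = 0 and the indirect draw becomes a no-op, with no CPU
// round-trip or readback needed.
struct DrawIndexedIndirectCommand
{
    uint32_t indexCount;     // filled by the compute shader after culling
    uint32_t instanceCount;  // usually 1
    uint32_t firstIndex;
    int32_t  vertexOffset;
    uint32_t firstInstance;
};

static_assert( sizeof( DrawIndexedIndirectCommand ) == 20,
               "must match the GPU-side layout byte for byte" );
```

The CPU side then shrinks to roughly "dispatch compute, barrier, indirect draw", regardless of how many meshlets end up visible.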
This would obsolete many parts of OgreNext, like MovableObject::cullFrustum and RenderQueue::render: the former because that culling now happens in shaders at a finer granularity (MovableObject::cullFrustum works at mesh level; the Compute Shader would work at meshlet level), and the latter because it would need to be rewritten to issue the Compute Shader dispatches and the indirect draws. Even though RenderQueue already supports indirect buffers, it only fills them from C++, and it would need to be refactored so that C++ and the shaders agree on how batches of work (split by state) are rendered.
It also obsoletes the way LODs are handled, since LOD selection now happens per meshlet.
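A hypothetical sketch of what per-meshlet LOD selection could look like (this is not an OgreNext API, just the common screen-space-error approach): each meshlet stores the geometric error its simplification introduced, and the shader keeps the coarsest meshlet whose projected error stays under a pixel threshold, instead of swapping a whole mesh at fixed distances:

```cpp
#include <cmath>

// Hypothetical per-meshlet LOD data: the object-space error introduced by
// simplifying this meshlet, and its current distance from the camera.
struct MeshletLod
{
    float error;     // object-space simplification error
    float distance;  // camera distance to the meshlet's bound centre
};

// Project the object-space error to screen pixels (simple pinhole model).
inline float projectedErrorPx( const MeshletLod &m, float viewportHeightPx,
                               float verticalFovRadians )
{
    const float screenScale =
        viewportHeightPx / ( 2.0f * std::tan( verticalFovRadians * 0.5f ) );
    return m.error * screenScale / m.distance;
}

// A meshlet is acceptable if its error is invisible at the current distance.
inline bool lodAcceptable( const MeshletLod &m, float viewportHeightPx,
                           float verticalFovRadians, float thresholdPx = 1.0f )
{
    return projectedErrorPx( m, viewportHeightPx, verticalFovRadians ) <=
           thresholdPx;
}
```

Because the decision is per meshlet, a single large object can be coarse far from the camera and detailed up close at the same time, which is impossible with whole-mesh LOD swaps.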
This way of rendering scales much better to extremely large triangle counts, and works beautifully in dense yet heavily-occluded areas like forests.
However we have to pause for a sec to ask a few questions:
- Do we support both methods or just one? I don't think we have the manpower for both, and requiring Mesh Shaders means dropping support for anything older than Radeon RX 6000 (RDNA2) and GeForce RTX 2000 (Turing) (and I don't even remember when Intel introduced Mesh Shaders).
- Do we have the time to rewrite all of that?
- What about mobile? OgreNext's biggest strength right now is excellent Vulkan mobile support (both in terms of compatibility and performance).
- Is this a priority? Are our users rendering triangle counts so large that they can't be handled by Ogre/OgreNext's current algorithms?
Perhaps there are ways I'm not seeing to easily add optional mesh shader + meshlet support without being intrusive. I've been surprised by contributors before (like when someone dropped a massive PR adding OpenGL OgreNext support on macOS, even though I thought that was literally impossible).
I'm adding context to understand what Mega Geometry is used for.
However, something I completely skipped is Ray Tracing support, where Mega Geometry can also be useful. We don't have Ray Tracing support right now (I think somebody was working on a PR; IIRC it was user Hotshot5000), but adding it is a less ambitious endeavour than replacing our entire pipeline.
Cheers
Matias