How to implement your own hlms

Post by bishopnator »

Ogre Version: 2.3.4
Operating System: Win10
Render System: d3d11 / gl3plus

Hi, I'm struggling a little bit here - I came across this wiki page: https://wiki.ogre3d.org/HLMS+Materials, where the relevant section is actually empty. I am still trying to figure out all the details of how to achieve different rendering in different windows, as I mentioned here: viewtopic.php?t=97253. In general the answer there moved me forward, however there are still a couple of open points.

From that answer I got the impression that it is impossible to switch the hlms for a particular Ogre::Item/Ogre::Renderable per window - using the listeners or sub-classing the current hlms, I can set/unset different properties which influence how the vs/gs/ps are generated from the template files (see the sketch below). This is fine in itself, but it does not integrate well with the current HlmsPbs/HlmsUnlit implementations: I need a particular object rendered the way HlmsPbs renders it, but in another window rendered the way HlmsUnlit does. Theoretically I could pack the content of PixelShader_ps.* / VertexShader_vs.* into pieces and write my own PixelShader_ps.* / VertexShader_vs.* files where I insert the original content from either HlmsPbs or HlmsUnlit depending on conditions (properties).
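
To make the property part concrete, this is roughly what I have in mind on the C++ side - a subclass of HlmsPbs that sets a custom property per renderable, which the template files could then test with @property. This is only a sketch based on my reading of OgreHlmsPbs.h, so the exact signatures may be off, and "my_unlit_path" is just a made-up property name:

Code: Select all

#include <OgreHlmsPbs.h>

// Sketch only: derive from HlmsPbs and toggle a custom shader property per
// renderable. The template files could then wrap code in
// @property( my_unlit_path ) ... @end blocks.
class MyHlmsPbs : public Ogre::HlmsPbs
{
public:
    MyHlmsPbs( Ogre::Archive *dataFolder, Ogre::ArchiveVec *libraryFolders ) :
        Ogre::HlmsPbs( dataFolder, libraryFolders ) {}

protected:
    void calculateHashForPreCreate( Ogre::Renderable *renderable,
                                    Ogre::PiecesMap *inOutPieces ) override
    {
        Ogre::HlmsPbs::calculateHashForPreCreate( renderable, inOutPieces );

        // Made-up property; in reality the value would have to come from
        // somewhere (e.g. a custom parameter stored on the renderable).
        setProperty( "my_unlit_path", 1 );
    }
};

But as far as I understand, this hash is computed per renderable, not per window, which is exactly why the per-window switch does not work this way.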

However, as a starting point I would like to port techniques I have written in another engine to Ogre - for that it would be great to begin with a fresh new hlms implementation to which I can iteratively add more shader code, properties, conditions etc. to render the objects (lines, infinite lines, converting 3d objects to wireframe using geometry shaders, and so on). The skeleton after this paragraph shows roughly how far I got.
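
This is just the class shell with the methods that looked pure virtual to me in OgreHlms.h stubbed out, registered as HLMS_USER0. I am not claiming it is complete or correct (I may well be missing required overrides, and the signatures are copied from memory), it is only meant to show where my questions below fit in:

Code: Select all

#include <OgreHlms.h>
#include <OgreHlmsDatablock.h>

// Skeleton of a fresh hlms registered as HLMS_USER0. Everything interesting
// (pass buffers, per-instance data, a custom datablock type) is still missing.
class MyHlms : public Ogre::Hlms
{
public:
    MyHlms( Ogre::Archive *dataFolder, Ogre::ArchiveVec *libraryFolders ) :
        Ogre::Hlms( Ogre::HLMS_USER0, "MyHlms", dataFolder, libraryFolders ) {}

    Ogre::uint32 fillBuffersFor( const Ogre::HlmsCache *cache,
                                 const Ogre::QueuedRenderable &queuedRenderable,
                                 bool casterPass, Ogre::uint32 lastCacheHash,
                                 Ogre::uint32 lastTextureHash ) override
    { return 0; }   // old path, not interesting for me

    Ogre::uint32 fillBuffersForV1( const Ogre::HlmsCache *cache,
                                   const Ogre::QueuedRenderable &queuedRenderable,
                                   bool casterPass, Ogre::uint32 lastCacheHash,
                                   Ogre::CommandBuffer *commandBuffer ) override
    { return 0; }   // v1 objects, not interesting for me

    Ogre::uint32 fillBuffersForV2( const Ogre::HlmsCache *cache,
                                   const Ogre::QueuedRenderable &queuedRenderable,
                                   bool casterPass, Ogre::uint32 lastCacheHash,
                                   Ogre::CommandBuffer *commandBuffer ) override
    { return 0; }   // <- this is the part I am asking about below

protected:
    Ogre::HlmsDatablock *createDatablockImpl( Ogre::IdString datablockName,
                                              const Ogre::HlmsMacroblock *macroblock,
                                              const Ogre::HlmsBlendblock *blendblock,
                                              const Ogre::HlmsParamVec &paramVec ) override
    {
        // TODO: return my own HlmsDatablock subclass here.
        return 0;
    }
};

Registration would then presumably go through Root::getHlmsManager()->registerHlms(), the same way the samples register HlmsPbs/HlmsUnlit.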

What I struggle with in Ogre-next is finding the requirements on the shaders - how Ogre prepares the data and what the custom shaders must fulfill so that objects get rendered properly.

Let's focus only on the V2 functionality - I checked the Ogre::RenderQueue::renderGL3 method and it somehow accumulates identical geometry into an instanced draw. The vertex shader by default accepts my own vertex attributes + drawId (as instanced data), and then there is some const buffer which accumulates the per-instance attributes (like a world matrix per object) - somewhere I saw that a maximum of 1024 instances are accumulated (not sure if I read that correctly).

So Ogre-next has some "fixed" functionality outside of the hlms implementation, in Ogre::RenderQueue (and probably elsewhere), which every hlms implementation must follow/integrate with/support.

If I wanted to implement an hlms only for d3d11, what would the minimal VertexShader_vs.hlsl and PixelShader_ps.hlsl look like in order to support the built-in accumulation of indexed instances? (Without any indirection through the pieces or trying to reuse the code placed in CrossPlatformSettings_piece_all.hlsl.)

It would also be great to see a very simple hlms implementation (C++) which uses those simple *.hlsl files - specifically what Hlms::fillBuffersForV2 should look like. My current, probably naive, understanding of that method is sketched below.
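
To show where exactly I am lost, this is my guess at what would replace the fillBuffersForV2 stub in the skeleton above, pieced together from skimming HlmsUnlit::fillBuffersForV2 - the buffer mapping/binding, overflow handling and the exact return value are precisely the things I do not understand yet (the member names in the comments come from HlmsBufferManager, assuming the hlms derives from it):

Code: Select all

// NOT working code - just my guess at the responsibilities of fillBuffersForV2.
Ogre::uint32 MyHlms::fillBuffersForV2( const Ogre::HlmsCache *cache,
                                       const Ogre::QueuedRenderable &queuedRenderable,
                                       bool casterPass, Ogre::uint32 lastCacheHash,
                                       Ogre::CommandBuffer *commandBuffer )
{
    // 1. If the hlms type changed since the last draw (lastCacheHash), (re)bind
    //    the pass-level buffers (view/projection matrices, etc.) via commands
    //    added to commandBuffer.

    // 2. Write the per-instance data into the currently mapped buffer
    //    (mCurrentMappedConstBuffer / mCurrentMappedTexBuffer from
    //    HlmsBufferManager?), calling mapNextConstBuffer() & co. when it is full.
    const Ogre::Matrix4 &worldMat =
        queuedRenderable.movableObject->_getParentNodeFullTransform();
    (void)worldMat;  // presumably memcpy'd into the mapped buffer (16 floats?)

    // 3. Return the index of this instance inside the accumulated buffer, so the
    //    render system can turn it into the drawId the vertex shader receives
    //    (this is the part I am least sure about).
    return 0;
}

If somebody could correct / fill in steps 1-3 for a truly minimal hlms, that would answer most of my questions.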

Additionally, Ogre::HlmsBufferManager seems to cover some of the common hlms implementation, but it looks insufficient in that it only fits an hlms which uses vertex/pixel shaders (no geometry or tessellation shaders) - HlmsBufferManager::mapNextConstBuffer adds commands that bind the buffer only for the vertex and pixel shaders. What I don't quite understand here is why it is important to bind the same data to both shader stages. If my const buffer contains all the data needed for all variations of the vertex shaders, I could bind that const buffer once at the beginning of rendering, do the same for the pixel shaders (with a second const buffer instance), and save some memory (my understanding of const buffers is limited, so it is possible that I missed some crucial precondition that explains why it is implemented the way it is).
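
To illustrate what I mean by binding per stage, something along these lines is what I naively had in mind (again just a sketch; the CbShaderBuffer usage is copied from how I think the existing hlms implementations use it, and slot 2 is arbitrary) - one const buffer with everything the vertex shaders need, another one for the pixel shaders, each bound once per pass:

Code: Select all

#include <CommandBuffer/OgreCommandBuffer.h>
#include <CommandBuffer/OgreCbShaderBuffer.h>
#include <Vao/OgreConstBufferPacked.h>

// Bind one const buffer for all vertex-shader data and a separate one for all
// pixel-shader data, once at the start of the pass, instead of binding the same
// buffer to both stages per draw.
void bindPerStageBuffersOnce( Ogre::CommandBuffer *commandBuffer,
                              Ogre::ConstBufferPacked *vsConstBuffer,
                              Ogre::ConstBufferPacked *psConstBuffer )
{
    *commandBuffer->addCommand<Ogre::CbShaderBuffer>() = Ogre::CbShaderBuffer(
        Ogre::VertexShader, 2, vsConstBuffer, 0,
        (Ogre::uint32)vsConstBuffer->getTotalSizeBytes() );

    *commandBuffer->addCommand<Ogre::CbShaderBuffer>() = Ogre::CbShaderBuffer(
        Ogre::PixelShader, 2, psConstBuffer, 0,
        (Ogre::uint32)psConstBuffer->getTotalSizeBytes() );
}

Is there a reason (alignment, the way drawId offsets into the const buffer, API limits?) why HlmsBufferManager does not allow something like this?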