Scene rendering in editor

Problems building or running the engine, queries about how to use features etc.
User avatar
bishopnator
Gnome
Posts: 348
Joined: Thu Apr 26, 2007 11:43 am
Location: Slovakia / Switzerland
x 16

Scene rendering in editor

Post by bishopnator »

Ogre Version: ogre-next 2.3.3
Operating System: Windows 11
Render System: D3D11

Hi all, I am trying to figure out how I can achieve some rendering techniques from my other 3D editor using Ogre3D. The editor displays 3D objects and can display additional helper geometry on top of them - like selected objects, or construction geometry used to preview the current modeling operation. The current implementation renders the content into the same render target as layers - each layer can be rendered directly, or it can cache its content in another render target, in which case it is blended with the previous layers in the window using a screen quad.

Let's consider the following rendering:
Image
It is possible to recognize 3 layers:

  1. the bottom-most layer is the 9 boxes - the content of the scene
  2. the middle layer contains the 2 selected boxes, rendered on top of the previous layer
  3. the top layer contains the blue triangles and red squares - these are the points where the user can click to adjust the selected objects

Here is also the content extracted from the window:

  1. scene:
    Image

  2. selected objects:
    Image

  3. top layer:
    Image

I remember that some time ago, when I used Ogre 1.x, I used Ogre::RenderQueueListener - the "layered" objects were assigned to different render queues and I cleared the depth buffer in the listener between rendering the queues. In ogre-next, however, it seems that the listener is disconnected - I found some references to it, but with a comment that the "hacky" listener should be removed and that it probably only remains because of the OverlayManager.

Without splitting the objects into separate render queues, I can imagine placing multiple viewports in the render target - but what is an efficient way to filter out objects from the scene manager? Is it then recommended to create multiple Ogre::SceneManager instances, one per layer? Is it allowed to share scene nodes between scene managers? E.g. the selected objects are always the ones from the scene, but they are placed on a different layer.

The editor uses a different rendering engine - there, the camera holds the root node to be rendered, so there is better control over which camera renders what. In Ogre, the Camera is created by the SceneManager and by default it sees the whole content. Consider a huge scene here - say 10K+ objects - where the user selects only a few of them.

User avatar
dark_sylinc
OGRE Team Member
OGRE Team Member
Posts: 5537
Joined: Sat Jul 21, 2007 4:55 pm
Location: Buenos Aires, Argentina
x 1395

Re: Scene rendering in editor

Post by dark_sylinc »

Hi!

Although there are multiple possible ways to achieve this, it sounds like what you want is the following (always using the same SceneManager):

Code: Select all

compositor_node ExampleRenderingNode
{
	in 0 rt_renderwindow

	target rt_renderwindow
	{
		pass render_scene
		{
			load
			{
				colour			clear
				depth			clear
				stencil			clear
			}
			store
			{
				colour	store_or_resolve
				depth	dont_care
				stencil	dont_care
			}
	
			profiling_id	"Scene Pass"
			identifier		54 // Number is arbitrary but must be in sync with C++
			
			rq_first		1
			rq_last			10
		}
		pass render_scene
		{
			load
			{
				colour			load
				depth			clear
				stencil			clear
			}
			store
			{
				colour	store_or_resolve
				depth	dont_care
				stencil	dont_care
			}

			profiling_id	"Selected Objects"
			identifier		55 // Number is arbitrary but must be in sync with C++
			
			rq_first		10
			rq_last			20
		}
		pass render_scene
		{
			load
			{
				colour			load
				depth			clear
				stencil			clear
			}
			store
			{
				colour	store_or_resolve
				depth	dont_care
				stencil	dont_care
			}

			profiling_id	"Top Layer"
			identifier		56 // Number is arbitrary but must be in sync with C++
			
			rq_first		20
			rq_last			30
		}
	}
}

Then you use MovableObject::setRenderQueueGroup to assign them to the right render queue.
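A minimal sketch of that layer-to-queue mapping (the layer names and helper are hypothetical, but the queue IDs mirror the rq_first/rq_last ranges in the script above, where rq_first is inclusive and rq_last is exclusive; in real code the result would be passed to MovableObject::setRenderQueueGroup):

Code: Select all

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical layer IDs; the queue ranges mirror the compositor script
// above ("Scene Pass" rq 1-10, "Selected Objects" rq 10-20, "Top Layer" rq 20-30).
enum class Layer { Scene, SelectedObjects, TopLayer };

// Map a layer to the first render queue of its range. With Ogre you would
// then call movableObject->setRenderQueueGroup( queueForLayer( layer ) ).
inline std::uint8_t queueForLayer( Layer layer )
{
    switch( layer )
    {
    case Layer::Scene:           return 1u;   // rendered by pass 54
    case Layer::SelectedObjects: return 10u;  // rendered by pass 55
    case Layer::TopLayer:        return 20u;  // rendered by pass 56
    }
    return 1u;  // unreachable; keeps compilers quiet
}
```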

As for the identifier (54, 55 and 56) you can use a workspace listener to identify them:

Code: Select all

class MyListener : public CompositorWorkspaceListener
{
public:
    void passPreExecute( CompositorPass *pass ) override
    {
        if( pass->getDefinition()->mIdentifier == 54 )
        {
            // Main layer is about to be rendered.
        }
    }
};

Note that you're a bit limited in what you can do in those listeners. Full debug mode will trigger asserts if, for example, you move a SceneNode (because it's too late to move that SceneNode). BUT you can use SceneNode::_getFullTransformUpdated to fix that and move the SceneNode regardless (although if you plan on moving, say, 1000 nodes, that would be slow).

Another example: altering the camera inside the listener is probably the worst thing to do, because the shadow nodes have already been evaluated, which means the shadows will no longer be correct if the camera is modified.

User avatar
bishopnator
Gnome
Posts: 348
Joined: Thu Apr 26, 2007 11:43 am
Location: Slovakia / Switzerland
x 16

Re: Scene rendering in editor

Post by bishopnator »

Thanks, it seems to work somehow. I am trying to set up the rendering purely with C++, without using scripts. In an editor-like environment, the compositor is dynamic - e.g. it depends on the render mode of each "layer". I see that there is a definition of a workspace (Ogre::CompositorWorkspaceDef) and an instance of a workspace (Ogre::CompositorWorkspace). If I consider multiple windows with the same content but different cameras and different clear colors (the background behind the scene), do I need to create multiple instances of Ogre::CompositorWorkspaceDef? At the moment my app has only a single window and I am setting the clear color on the Ogre::CompositorWorkspace:

Code: Select all

	// Find the clear pass.
	if (m_pOgreCompositorWorkspace->getNodeSequence().empty())
		return;
	if (m_pOgreCompositorWorkspace->getNodeSequence()[0]->_getPasses().empty())
		return;
	auto* pOgreCompositorPassClear = dynamic_cast<Ogre::CompositorPassClear*>(m_pOgreCompositorWorkspace->getNodeSequence()[0]->_getPasses().front());
	if (pOgreCompositorPassClear == nullptr)
		return;
	auto* pOgreRenderPassDesc = pOgreCompositorPassClear->getRenderPassDesc();
	if (pOgreRenderPassDesc == nullptr)
		return;

	// Set the clear color.
	pOgreRenderPassDesc->setClearColour(ToOgre(color));

But when a window is resized, the clear color is copied back from the Ogre::CompositorWorkspaceDef - I don't see any entry point (listener) where I can insert customization code to overwrite the values coming from the Ogre::CompositorWorkspaceDef.

The reinitialization after the resize happens here:

Code: Select all

	OgreNextMain_d.dll!Ogre::CompositorPass::setupRenderPassDesc(const Ogre::RenderTargetViewDef * rtv) Line 256	C++
 	OgreNextMain_d.dll!Ogre::CompositorPass::setupRenderPassDesc(const Ogre::RenderTargetViewDef * rtv) Line 234	C++
 	OgreNextMain_d.dll!Ogre::CompositorPass::notifyRecreated(const Ogre::TextureGpu * channel) Line 760	C++
 	OgreNextMain_d.dll!Ogre::CompositorNode::finalTargetResized02(const Ogre::TextureGpu * finalTarget) Line 848	C++
 	OgreNextMain_d.dll!Ogre::CompositorWorkspace::_update(const bool bInsideAutoreleasePool) Line 789	C++
 	OgreNextMain_d.dll!Ogre::CompositorManager2::_updateImplementation() Line 784	C++
 	OgreNextMain_d.dll!Ogre::RenderSystem::updateCompositorManager(Ogre::CompositorManager2 * compositorManager) Line 1344	C++
 	OgreNextMain_d.dll!Ogre::CompositorManager2::_update() Line 706	C++
 	OgreNextMain_d.dll!Ogre::Root::_updateAllRenderTargets() Line 1596	C++
 	OgreNextMain_d.dll!Ogre::Root::renderOneFrame() Line 1128	C++

I checked the implementation of those methods and I don't see any listener which I could attach to the "resizing" event to modify the Ogre::CompositorWorkspace instance.

Am I missing something? Should I always create a unique Ogre::CompositorWorkspaceDef/Ogre::CompositorWorkspace pair for every window? Consider having 4 windows in the editor - according to the "view type" (front, top, left, etc.) I would like to set a custom color to indicate the camera orientation.

I also noticed in Sample_Tutorial00_Basic that while a window is being resized, I don't see the window's content (the background is black, and when I finish resizing, the background is refreshed with the blue color) - is it possible to change this behavior so that the content is always visible during resizing?

User avatar
dark_sylinc
OGRE Team Member
OGRE Team Member
Posts: 5537
Joined: Sat Jul 21, 2007 4:55 pm
Location: Buenos Aires, Argentina
x 1395

Re: Scene rendering in editor

Post by dark_sylinc »

Hi!

I am trying to setup the rendering purely with c++ without using scripts. In editor like environment, the compositor is dynamic

Personally, I suggest that you have a set of compositor nodes created via scripts, which you can then edit via C++ or connect at runtime using the workspace.

Simply because the scripts are much easier to work with, while the C++ interface is hard to deal with.

Code: Select all

	auto* pOgreCompositorPassClear = dynamic_cast<Ogre::CompositorPassClear*>(m_pOgreCompositorWorkspace->getNodeSequence()[0]->_getPasses().front());
	if (pOgreCompositorPassClear == nullptr)
		return;

You can use compositorPass->getDefinition()->getType() to determine which type of pass you're dealing with (rather than downcasting via dynamic_cast).
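The pattern looks roughly like this - sketched with stand-in types so the snippet stays self-contained (in Ogre the enum is Ogre::CompositorPassType and the definition comes from pass->getDefinition()):

Code: Select all

```cpp
#include <cassert>

// Stand-ins for Ogre::CompositorPassType / CompositorPassDef / CompositorPass.
enum class PassType { Clear, Scene, Quad };

struct PassDef
{
    PassType type;
    PassType getType() const { return type; }
};

struct Pass
{
    PassDef def;
    const PassDef *getDefinition() const { return &def; }
};

// Cheap type check: compares an enum stored in the definition,
// no RTTI / dynamic_cast needed.
inline bool isClearPass( const Pass &pass )
{
    return pass.getDefinition()->getType() == PassType::Clear;
}
```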

Am I missing something? Should I create always a unique Ogre::CompositorWorkspaceDef/Ogre::CompositorWorkspace for every window? Consider to have 4 windows in the editor - according to the "view type" (front, top, left, etc.) I would like to set custom color to indicate the camera orientation.

HdrUtils::setSkyColour in the samples shows how to change the clear colour, and indeed for maximum compatibility you're supposed to change the definition too, not just the RenderPassDescriptor.

You have various options:

  1. Use only one copy and use pass->getRenderPassDesc()->mColour[0].texture->addListener() (where pass is an Ogre::CompositorPass) to listen to TextureGpu changes. You will see that the texture transitions to OnStorage and then to Resident again in CompositorNode::finalTargetResized01. Shortly after this, CompositorNode::finalTargetResized02 will recreate the RenderPassDescriptor. Perhaps we could add a listener in finalTargetResized02 to make this much easier?
  2. Create multiple clones, each in its own node (I'll talk more about this below).

Creating multiple clones

I'm going to explain this mostly because I said "I suggest that you have a set of compositor nodes created via scripts that you can edit via C++ or connect at runtime using the workspace". This is how I suggest you approach most Compositor problems:

Code: Select all

// Clearing in its own pass isn't mobile-friendly, however this
// is flexible for an editor.
//
// You can create this particular node in C++
compositor_node ClearNode
{
	in 0 rt_renderwindow
	
	target rt_renderwindow
	{
		pass clear
		{
			load
			{
				colour	clear
				depth	dont_care
				stencil	dont_care
			}
			store
			{
				colour	store_or_resolve
				depth	dont_care
				stencil	dont_care
			}
	
			profiling_id	"Clear Pass"
		}
	}
	
	out 0 rt_renderwindow
}

compositor_node MainRenderingNode
{
	in 0 rt_renderwindow

	target rt_renderwindow
	{
		pass render_scene
		{
			load
			{
				colour			load
				depth			clear
				stencil			clear
			}
			store
			{
				colour	store_or_resolve
				depth	dont_care
				stencil	dont_care
			}
	
			profiling_id	"Scene Pass"
			identifier		54 // Number is arbitrary but must be in sync with C++
			
			rq_first		1
			rq_last			10
		}
	}
	
	out 0 rt_renderwindow
}

compositor_node DebugRenderingNode
{
	in 0 rt_renderwindow
	
	target rt_renderwindow
	{
		pass render_scene
		{
			load
			{
				colour			load
				depth			clear
				stencil			clear
			}
			store
			{
				colour	store_or_resolve
				depth	dont_care
				stencil	dont_care
			}

			profiling_id	"Selected Objects"
			identifier		55 // Number is arbitrary but must be in sync with C++
			
			rq_first		10
			rq_last			20
		}

		pass render_scene
		{
			load
			{
				colour			load
				depth			clear
				stencil			clear
			}
			store
			{
				colour	store_or_resolve
				depth	dont_care
				stencil	dont_care
			}

			profiling_id	"Top Layer"
			identifier		56 // Number is arbitrary but must be in sync with C++
			
			rq_first		20
			rq_last			30
		}
	}
}

// The workspace will also be created in C++
workspace ExampleWorkspace
{
	connect_external 0 ClearNode 0
	connect ClearNode 0 MainRenderingNode 0
	connect MainRenderingNode 0 DebugRenderingNode 0
}

In this example you can create ClearNode & ExampleWorkspace in C++:

Code: Select all

createClearNode(); // See the Postprocessing example on how to create nodes and passes programmatically.
Ogre::CompositorWorkspaceDef *workspaceDef = compositorManager->addWorkspaceDefinition( "MyRuntimeWorkspace" );
workspaceDef->connectExternal( 0u, "MyRuntimeClearNode", 0u );
workspaceDef->connect( "MyRuntimeClearNode", 0u, "MainRenderingNode", 0u );
if( debug_view_on )
   workspaceDef->connect( "MainRenderingNode", 0u, "DebugRenderingNode", 0u );

This way, having blocks of pre-made nodes which you mix and match programmatically is much simpler than creating every single pass programmatically.

Other ideas: Using execution masks

For toggling your debug views you could use execution masks instead:

Code: Select all

compositor_node ExampleRenderingNode
{
	in 0 rt_renderwindow

	target rt_renderwindow
	{
		pass render_scene
		{
			load
			{
				colour			load // We assume it's cleared by your runtime-generated clear node
				depth			clear
				stencil			clear
			}
			store
			{
				colour	store_or_resolve
				depth	dont_care
				stencil	dont_care
			}
	
			profiling_id	"Scene Pass"
			identifier		54 // Number is arbitrary but must be in sync with C++
			
			rq_first		1
			rq_last			10
		}
		pass render_scene
		{
			execution_mask  0x02
			load
			{
				colour			load
				depth			clear
				stencil			clear
			}
			store
			{
				colour	store_or_resolve
				depth	dont_care
				stencil	dont_care
			}

			profiling_id	"Selected Objects"
			identifier		55 // Number is arbitrary but must be in sync with C++
			
			rq_first		10
			rq_last			20
		}
		pass render_scene
		{
			execution_mask  0x04
			load
			{
				colour			load
				depth			clear
				stencil			clear
			}
			store
			{
				colour	store_or_resolve
				depth	dont_care
				stencil	dont_care
			}

			profiling_id	"Top Layer"
			identifier		56 // Number is arbitrary but must be in sync with C++
			
			rq_first		20
			rq_last			30
		}
	}
}

Notice the "execution_mask" on the last two passes. Then from C++:

Code: Select all

if( select_objs_layer )
   mask |= 0x02;
if( top_layer )
   mask |= 0x04;
workspace->setExecutionMask( mask );
workspace->_notifyBarriersDirty(); // Just in case, for maximum Vulkan compatibility

The great advantage of this method is that you don't recreate resources (destroying a workspace and creating it again recreates textures), so it's much cheaper to toggle.
You can also combine node composition, as shown in the previous section, with execution masks. They're not mutually exclusive.
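The toggle logic can be wrapped in a small helper. A sketch: the 0x02/0x04 bits are the ones from the script above, and the 0x01 base bit is an assumption on my part - the scene pass has no explicit execution_mask, and (if I recall the manual correctly) such passes default to a mask of 0xFF, so any non-zero workspace mask keeps it running:

Code: Select all

```cpp
#include <cassert>
#include <cstdint>

// Bits matching the execution_mask values in the compositor script above.
constexpr std::uint8_t kMaskScene    = 0x01;  // base bit: keeps the main pass running
constexpr std::uint8_t kMaskSelected = 0x02;  // "Selected Objects" pass
constexpr std::uint8_t kMaskTopLayer = 0x04;  // "Top Layer" pass

// Returns the mask to hand to workspace->setExecutionMask().
inline std::uint8_t computeExecutionMask( bool selectedLayerOn, bool topLayerOn )
{
    std::uint8_t mask = kMaskScene;
    if( selectedLayerOn )
        mask |= kMaskSelected;
    if( topLayerOn )
        mask |= kMaskTopLayer;
    return mask;
}
```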

User avatar
bishopnator
Gnome
Posts: 348
Joined: Thu Apr 26, 2007 11:43 am
Location: Slovakia / Switzerland
x 16

Re: Scene rendering in editor

Post by bishopnator »

Is there any performance problem if I pack each node into its own separate CompositorWorkspace? If I think about the layers from my initial post, each layer is independent from the previous one - so it would be possible to create a CompositorWorkspaceDef/CompositorWorkspace pair for each layer. The CompositorWorkspaceDef defines the "technique" for how the content of the layer is rendered (with possibly multiple connected nodes to achieve the desired rendering effect).

I am trying to create a "library-like" implementation and at the moment of writing it should be as flexible as possible. I find it a little restricting that the CompositorWorkspace, when instantiated, needs a SceneManager. From the implementation side it seems that it is only used for looking up cameras, and hence is a bit of an "overkill". If I have multiple nodes inside the CompositorWorkspace and I can provide the cameras myself, the SceneManager in the CompositorWorkspace is not needed. The problem is that whenever a node is reinitialized from its definition, it tries to search for the camera using the CompositorWorkspace's SceneManager.

To be more specific - my goal is to implement a "Window" class which has support for adding layers, each layer has at least following properties:

  • it has its own camera (multiple layers can use the same camera, but it shouldn't be limited to that)
  • viewport placement (position, size + anchor points to respond automatically to changes of the window)
  • render mode (with a simple description of the type: wireframe, priority rendering (like in Inkscape for 2D geometry), shaded, hidden lines, etc.)
  • visibility of geometry types (I want to predefine the types, like lines, 2D faces, 3D faces, edges of 3D objects, texts, infinite lines, circles, etc.)
  • render queue IDs to define a range of entities from a SceneManager (not necessarily the same SceneManager across layers)
  • ... possible other properties

For now the most flexible system for me would be to have separate CompositorWorkspace instances per layer, since the bound SceneManager is not known up front - I can get it from the camera passed to the layer, but it won't be restricted to be the same SceneManager between layers.
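As a sketch of what such a per-layer description could look like (all names here are hypothetical and just mirror the bullet points above; in the real implementation the camera would be an Ogre::Camera* and the queue IDs would feed rq_first/rq_last of a render_scene pass):

Code: Select all

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical per-layer description for the "Window" class described above.
struct LayerDesc
{
    enum class RenderMode { Wireframe, Shaded, HiddenLines };

    void         *camera       = nullptr;                  // may be shared between layers
    float         viewport[4]  = { 0.f, 0.f, 1.f, 1.f };   // x, y, width, height (relative)
    RenderMode    renderMode   = RenderMode::Shaded;
    std::uint32_t geometryMask = 0xFFFFFFFFu;              // visibility of geometry types
    std::uint8_t  rqFirst      = 1u;                       // first render queue (inclusive)
    std::uint8_t  rqLast       = 10u;                      // last render queue (exclusive)
};
```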

To be able to render an empty window as well, it looks like I need to create some default scene manager just to be able to create the "clear" workspace.

So my question is - what kind of performance overhead is there? Should I really force the usage of a single SceneManager and create only a single CompositorWorkspaceDef/CompositorWorkspace for my Window implementation?

If I have a CompositorWorkspaceDef/CompositorWorkspace per layer - when I remove or add a layer, I don't have to deal with the other nodes; I just destroy a workspace and create a new one (there is a "position" parameter in CompositorManager2::addWorkspace).
If I have a CompositorWorkspaceDef/CompositorWorkspace per window - when I remove or add a layer, I need to adjust the CompositorWorkspaceDef, update the nodes, reconnect them properly, etc., which seems to be more complex.

In both cases it would be possible to use your suggested solution of defining the nodes in script files and reusing them from C++ code.