[2.1] Compositor and non continuous rendering

Discussion area about developing with Ogre2 branches (2.1, 2.2 and beyond)
MadWatch
Halfling
Posts: 64
Joined: Sat Jul 06, 2013 11:25 am
x 4

[2.1] Compositor and non continuous rendering

Post by MadWatch »

Hello everyone.

I'm considering using Ogre 2.1 for my next project, so I'm taking a look at everything that has changed since 1.9 (which I'm used to). I have some questions regarding the new compositor system and, especially, about the possibility of updating workspaces independently.

Let's say I have a scene and two views (e.g. two windows, or one window split in two). I want the first view to display the scene as seen by the human eye, so it must produce frames as fast as the system allows. I want the second view to display the scene as seen by a drone sensor that, for latency reasons, can only produce one image per second, so I want to limit the framerate of this view to 1.

As far as I understand, to achieve this I have to create two workspaces, one for each view, correct? But how can I tell each workspace to update at a different rate? Is it safe to call CompositorWorkspace::_update() manually instead of calling Root::renderOneFrame()?

What if I want one of my workspaces to be embedded into a Qt (or whatever) widget? Qt will want to decide when the workspace is rendered (that is, when the paintEvent() method of the widget is called). I currently embed Ogre 1.9 into a custom Qt widget by calling RenderWindow::update() inside paintEvent() and it works great. But it seems that this is no longer possible with Ogre 2.1.

Is it possible to move, resize, show or hide workspaces at runtime? Let's say I have one fullscreen window and I want to switch between one fullscreen view and 4 split views (à la Mario Kart) when the user presses a button.

Last question: is it possible to use a texture produced by a workspace as input for another one? Let's say my drone has a photo camera and it must take a photo when the user presses a button. When a photo is taken it isn't necessarily displayed in the window right away; it must be stored in memory to be displayed some time later (when the user presses another button). I would achieve that by creating a workspace for my photo camera and setting its final render target to a texture. But then, can I use that texture in another workspace to display it in the window?

Thank you very much.

dark_sylinc
OGRE Team Member
Posts: 4211
Joined: Sat Jul 21, 2007 4:55 pm
Location: Buenos Aires, Argentina
x 802

Re: [2.1] Compositor and non continuous rendering

Post by dark_sylinc »

Hi!

As for updating at 1 Hz:
You will have to call addWorkspace with "bEnabled = false" so that Ogre doesn't update it automatically, and then manually call the following every second:

Code:

workspace->_beginUpdate( false );
workspace->_update();
workspace->_endUpdate( false );

If possible, do this before CompositorManager2::_update gets called (it usually gets called as a consequence of calling Root::_updateAllRenderTargets or Root::renderOneFrame).

Note: do not update the workspace while another workspace is being updated (i.e. from inside an Ogre listener).
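
In other words, something along these lines (just a sketch; root, sceneManager, sensorRenderTarget, sensorCamera, mAccumTime and the "SensorWorkspace" definition name are placeholders for whatever you use in your project):

Code:

// Create the sensor workspace disabled so Ogre won't update it automatically.
Ogre::CompositorManager2 *compositorManager = root->getCompositorManager2();
Ogre::CompositorWorkspace *sensorWorkspace = compositorManager->addWorkspace(
        sceneManager, sensorRenderTarget, sensorCamera,
        "SensorWorkspace", false /*bEnabled*/ );

// Somewhere in your main loop, before Root::renderOneFrame gets called:
mAccumTime += timeSinceLastFrame;
if( mAccumTime >= 1.0f )
{
    mAccumTime -= 1.0f;
    sensorWorkspace->_beginUpdate( false );
    sensorWorkspace->_update();
    sensorWorkspace->_endUpdate( false );
}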

As for the Qt widget:
Same applies. Instead of calling RenderWindow::update, call CompositorWorkspace::_update. If there's only one widget with Ogre's render window, then I'd suggest calling Root::renderOneFrame.
This function must be called at some point (whether inside Qt or outside), since it eventually leads to clearing data that must be cleared once per frame. If for some reason renderOneFrame messes up your Qt state, then manually perform what is done inside renderOneFrame, except for whatever part is messing up the state.
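
For example, a minimal sketch of the Qt side (OgreWidget is a hypothetical widget class holding a pointer to Ogre::Root, and it is assumed to be the only Ogre widget in the application):

Code:

// Hypothetical Qt widget: drive Ogre from paintEvent, much like
// RenderWindow::update() used to be called there in 1.9.
void OgreWidget::paintEvent( QPaintEvent * /*event*/ )
{
    // Renders all enabled workspaces and performs the once-per-frame cleanup.
    mRoot->renderOneFrame();
}
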
MadWatch wrote: Is it possible to move, resize, show or hide workspaces at runtime? Let's say I have one fullscreen window and I want to switch between one fullscreen view and 4 split views (à la Mario Kart) when the user presses a button.
Yes. Beware that resizing often means recreating lots of resources, which is a slow operation. That is an API restriction anyway.
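
For the show/hide part specifically, toggling can be as simple as flipping the workspaces' enabled flags (a sketch only; the member names are made up, and it assumes you created the fullscreen workspace and the four split-view workspaces up front):

Code:

// Hypothetical toggle between one fullscreen view and 4 split views.
// All five workspaces were created beforehand via addWorkspace.
void toggleSplitScreen( bool useSplitScreen )
{
    mFullscreenWorkspace->setEnabled( !useSplitScreen );
    for( size_t i = 0; i < 4; ++i )
        mSplitWorkspaces[i]->setEnabled( useSplitScreen );
}
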
MadWatch wrote: Last question: is it possible to use a texture produced by a workspace as input for another one?
The answer is "yesish". Workspaces allow for one "input" which is considered to be where you want to render the output. Technically you can use that as both input and output. However I was in another project some time ago where I wanted to use multiple inputs for the workspace and realized this wasn't possible, so it's certainly something I missed in my design.

However, you can work around it (it's a little painful to do, though):
  • One option is to have the 'main' workspace create the RTT as a local texture and, once the workspace is initialized, iterate through its nodes and passes to grab the actual texture pointer, then send that as input/output to your sensor workspace. This way the texture will be visible to both compositors.
  • Another option is to grab the textures from the workspaces once they are instantiated, and manually set the materials/datablocks with their pointers (see the sketch below).
The disadvantage is that resizing is a little painful, because the workspaces need to be destroyed so you can alter the definition and instantiate them again, whereas normally this would happen automatically.
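
In case it helps, here is a rough sketch of the second option for your photo camera case (2.1 API; the texture size, pixel format and every name here are placeholders, and compositorManager, sceneManager and photoCamera are assumed to exist already):

Code:

// 1. Create the photo RTT manually so its lifetime isn't tied to a workspace.
Ogre::TexturePtr photoTex = Ogre::TextureManager::getSingleton().createManual(
        "DronePhotoTex", Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME,
        Ogre::TEX_TYPE_2D, 1024, 768, 0, Ogre::PF_R8G8B8A8, Ogre::TU_RENDERTARGET );

// 2. The photo workspace renders into that texture; leave it disabled and
//    update it manually whenever the user takes a photo.
Ogre::RenderTarget *photoRt = photoTex->getBuffer()->getRenderTarget();
Ogre::CompositorWorkspace *photoWorkspace = compositorManager->addWorkspace(
        sceneManager, photoRt, photoCamera, "PhotoWorkspace", false );

// 3. The workspace that draws to the window just samples photoTex through
//    whatever material/datablock you assign it to (e.g. a quad in your UI scene).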

It would work beautifully if the compositor supported multiple inputs. But unfortunately it doesn't yet. Shouldn't be too hard to add though.

MadWatch
Halfling
Posts: 64
Joined: Sat Jul 06, 2013 11:25 am
x 4

Re: [2.1] Compositor and non continuous rendering

Post by MadWatch »

Thank you for this detailed answer.
dark_sylinc wrote: It would work beautifully if the compositor supported multiple inputs. But unfortunately it doesn't yet. Shouldn't be too hard to add though.
If that feature were to be added, would it be possible to change one of a workspace's inputs at runtime (like connecting it to the output of a different workspace depending on what the user does)?

Anyway, I'm going to need to resize views a lot for this project, so I don't think I will split my window into several workspaces. I think the only way left is to render all my drone's sensors into textures, create a second scene with some quads in it, apply the textures to them, and display them on screen through an ortho camera. That way I can move and resize them to my heart's content. That's probably not as efficient, but it's more flexible.
