Exploring 2D gui rendering (looking for guides/examples)

Discussion area about developing with Ogre-Next (2.1, 2.2 and beyond)


mrmclovin
Gnome
Posts: 324
Joined: Sun May 11, 2008 9:27 pm
x 20

Exploring 2D gui rendering (looking for guides/examples)

Post by mrmclovin »

I'm looking for documentation or examples showing how one would render GUI textures in 2.1. I have searched the forums and looked at some code from Farso (https://github.com/farrer/farso) and MyGUI, but I find it hard to understand because I'm lacking the general picture of how the rendering pipeline works when it comes to 2D. In particular, I have these questions:

What are the stages a texture goes through, from texture creation to ending up on the screen at position x,y?
  • Are there any scene nodes involved?
  • Are vertex buffers etc. necessary even though it's 2D only?
  • At what stage do you define the screen position where the texture should end up?
  • Where does Hlms Unlit fit into the picture?
I'm currently investigating whether it's possible to render SDL2 textures (which have been drawn onto by Cairo graphics) with Ogre. But first I need to learn how stuff works in Ogre.

I think it would be easier to figure out the details if I just could get some help understanding the bigger picture.

Thanks.
farrer
Halfling
Posts: 64
Joined: Mon Sep 12, 2011 7:35 pm
x 13

Re: Exploring 2D gui rendering (looking for guides/examples)

Post by farrer »

Regarding your questions, here's how I've implemented it in Farso, which, as far as I've seen, is very different from how it's done in MyGUI:
mrmclovin wrote: Are there any scene nodes involved?
Yes. In Farso each 'root' widget (one without a parent) is associated with a renderer. The renderer has its own Renderable and MovableObject, which are attached to a SceneNode so they can be rendered.
mrmclovin wrote: Are vertex buffers etc. necessary even though it's 2D only?
Yes. In fact, to render anything you always need to define its vertices. In Farso, since it uses an Ogre::Renderable implementation, each one needs its own VertexArrayObject (for 2D they'll be very simple; you can check OgreWidgetRenderable::createVAO() for that). Also, for 2D rendering, the renderable must be set to use the identity world matrix:

Code: Select all

bool getUseIdentityWorldMatrix(void) const { return true; };
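To give an idea of what such a 2D VAO looks like, here's a minimal sketch for an Ogre 2.1 Renderable subclass, loosely following Ogre's custom renderable samples (the class and function names are only illustrative, not Farso's actual code):

Code: Select all

/* Build a textured quad VAO. 'vaoManager' can be obtained from
 * Root::getSingleton().getRenderSystem()->getVaoManager(). */
void MyWidgetRenderable::createQuadVao(Ogre::VaoManager *vaoManager,
                                       float width, float height)
{
   /* Vertex format: position (xyz) + texture coordinates (uv). */
   Ogre::VertexElement2Vec vertexElements;
   vertexElements.push_back(Ogre::VertexElement2(Ogre::VET_FLOAT3, Ogre::VES_POSITION));
   vertexElements.push_back(Ogre::VertexElement2(Ogre::VET_FLOAT2, Ogre::VES_TEXTURE_COORDINATES));

   /* 4 vertices, 5 floats each: top-left at the origin, 'width' x 'height'
    * in clip-space units (y grows downwards on screen, hence -height). */
   const float quad[4 * 5] =
   {
      0.0f,    0.0f,    0.0f,   0.0f, 0.0f,   /* top-left     */
      width,   0.0f,    0.0f,   1.0f, 0.0f,   /* top-right    */
      0.0f,   -height,  0.0f,   0.0f, 1.0f,   /* bottom-left  */
      width,  -height,  0.0f,   1.0f, 1.0f    /* bottom-right */
   };
   float *vertices = reinterpret_cast<float*>(
         OGRE_MALLOC_SIMD(sizeof(quad), Ogre::MEMCATEGORY_GEOMETRY));
   memcpy(vertices, quad, sizeof(quad));

   /* keepAsShadow = true: the buffer takes ownership of the SIMD allocation. */
   Ogre::VertexBufferPacked *vertexBuffer =
         vaoManager->createVertexBuffer(vertexElements, 4, Ogre::BT_DEFAULT,
                                        vertices, true);
   Ogre::VertexBufferPackedVec vertexBuffers;
   vertexBuffers.push_back(vertexBuffer);

   /* Two triangles forming the quad. */
   Ogre::uint16 *indices = reinterpret_cast<Ogre::uint16*>(
         OGRE_MALLOC_SIMD(sizeof(Ogre::uint16) * 6, Ogre::MEMCATEGORY_GEOMETRY));
   const Ogre::uint16 quadIndices[6] = { 0, 2, 1,  1, 2, 3 };
   memcpy(indices, quadIndices, sizeof(quadIndices));
   Ogre::IndexBufferPacked *indexBuffer = vaoManager->createIndexBuffer(
         Ogre::IndexBufferPacked::IT_16BIT, 6, Ogre::BT_IMMUTABLE, indices, true);

   Ogre::VertexArrayObject *vao = vaoManager->createVertexArrayObject(
         vertexBuffers, indexBuffer, Ogre::OT_TRIANGLE_LIST);

   mVaoPerLod[0].push_back(vao);   /* normal pass */
   mVaoPerLod[1].push_back(vao);   /* shadow pass: reuse the same geometry */
}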
mrmclovin wrote: At what stage do you define the screen position where the texture should end up?
Not sure if it's the best thing to do, but I define it directly in the VAO and let the SceneNode just sit at (0.0f, 0.0f, 0.0f). (Maybe I should use the scene node for setting the position instead... I need to recheck this part; it was implemented with a 'make it work, check it later' approach.)
mrmclovin wrote: Where does Hlms Unlit fit into the picture?
In the WidgetRenderer, I set its renderable to use the respective datablock. The widget, when dirty, is rendered to a surface (I use an SDL surface for that, so this happens on the CPU/RAM side) and then blitted to the related Ogre::Texture (GPU side).

The relevant parts:

The Ogre texture is created manually (only once) with:

Code: Select all

   /* 2D texture, no mipmaps, dynamic write-only usage */
   this->texture = Ogre::TextureManager::getSingleton().createManual(
         name,
         Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME,
         Ogre::TEX_TYPE_2D, realWidth, realHeight, 0, Ogre::PF_A8R8G8B8,
         Ogre::TU_DYNAMIC_WRITE_ONLY);
The texture is assigned to the datablock (only once) with:

Code: Select all

   /* Using Ogre's Unlit HLMS */
   Ogre::HlmsManager* hlmsManager = Ogre::Root::getSingleton().getHlmsManager();
   Ogre::HlmsUnlit* hlmsUnlit = static_cast<Ogre::HlmsUnlit*>(
         hlmsManager->getHlms(Ogre::HLMS_UNLIT));

   /* Macroblock with depth write, depth check and culling disabled */
   Ogre::HlmsMacroblock macroBlock;
   macroBlock.mDepthCheck = false;
   macroBlock.mDepthWrite = false;
   macroBlock.mCullMode = Ogre::CULL_NONE;

   Ogre::HlmsBlendblock blendBlock;
   blendBlock.mDestBlendFactor = Ogre::SBF_ONE_MINUS_SOURCE_ALPHA;
   
   /* Let's create and define our datablock */
   datablock = static_cast<Ogre::HlmsUnlitDatablock*>(
         hlmsUnlit->createDatablock(name, name,
            macroBlock, blendBlock, Ogre::HlmsParamVec()));
   /* Texture unit 0 (PBSM_DIFFUSE == 0); Unlit only cares about the unit index */
   datablock->setTexture(Ogre::PBSM_DIFFUSE, 0, texture);
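The renderable itself then picks up the datablock with the usual Ogre::Renderable::setDatablock() call (the variable name here is just illustrative):

Code: Select all

renderable->setDatablock(datablock);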
The SDL surface is blitted to the Ogre::Texture (only when the widget is dirty) with:

Code: Select all

void OgreWidgetRenderer::uploadSurface()
{
   /* The idea here is similar to the OpenGL implementation: we just update
    * the texture with the contents of the rendering surface (represented
    * by its PixelBox below). */
   OgreSurface* ogreSurface = static_cast<OgreSurface*>(surface);

   ogreSurface->lock();
   texture->getBuffer()->blitFromMemory(pixelBox);
   ogreSurface->unlock();
}
The PixelBox is simple:

Code: Select all

pixelBox = Ogre::PixelBox(width, height, 1, Ogre::PF_A8B8G8R8, ogreSurface->getSurface()->pixels);
Here, ogreSurface->getSurface() is a pointer to the SDL_Surface.

mrmclovin wrote: I'm currently investigating whether it's possible to render SDL2 Textures (which has been drawn onto by Cairo graphics) with Ogre.
It probably is. But SDL2 textures are created with whichever backend SDL decides is best (for example, it could be an OpenGL texture, but it could also be a Direct3D one, which would be a problem if you are on Windows using, say, Ogre's OpenGL renderer while the SDL texture was created as a Direct3D one). Using an SDL_Surface is easier, and, since surfaces live in system memory rather than on the GPU like SDL textures, it's cheaper to access their contents if you need to change something before sending it to the GPU.

Not sure if I'm being clear; if not, feel free to ask about whatever is confusing and I'll try to explain better.
mrmclovin
Gnome
Posts: 324
Joined: Sun May 11, 2008 9:27 pm
x 20

Re: Exploring 2D gui rendering (looking for guides/examples)

Post by mrmclovin »

First of all, thank you for taking the time to write your awesome answer and the relevant code snippets! Appreciate it!
farrer wrote: Yes. In fact, to render anything you always need to define its vertices. In Farso, since it uses an Ogre::Renderable implementation, each one needs its own VertexArrayObject (for 2D they'll be very simple
Since surfaces in SDL are rectangles - would it make sense to create a VAO containing vertices for a 2D "Unit square" which can be scaled up and down depending on the desired dimensions of the widget? This VAO could then be re-used by all widgets. (I don't know how to handle transparency and alpha channels though, but I hope it can be done in the shaders in a later rendering stage).
farrer wrote: Not sure if it's the best thing to do, but I define it directly in the VAO and let the SceneNode just sit at (0.0f, 0.0f, 0.0f)
Does that imply that if you want to re-position a widget, you'd have to update the VAO as well?
If you want two instances of that same widget, do they have separate VAOs?
farrer wrote: In the WidgetRenderer, I set its renderable to use the respective datablock. The widget, when dirty, is rendered to a surface (I use an SDL surface for that, so this happens on the CPU/RAM side) and then blitted to the related Ogre::Texture (GPU side).
This is a very interesting part, and I'm going to look more at your code tomorrow (today I barely got home before I needed to go to bed again...).
farrer wrote: Using an SDL_Surface is easier, and, since surfaces live in system memory rather than on the GPU like SDL textures, it's cheaper to access their contents if you need to change something before sending it to the GPU.
You are right, it definitely does not make any sense to use SDL2 textures when they would need to be brought back and uploaded again via Ogre, so I'm going to stick with SDL_Surface. But I fear I might hit a problem with incompatible pixel formats between the SDL surface and Cairo if I want to use alpha channels... we'll see how it can be solved.

Have you implemented any transparency and alpha stuff, like rendering a round circle widget etc.?
farrer
Halfling
Posts: 64
Joined: Mon Sep 12, 2011 7:35 pm
x 13

Re: Exploring 2D gui rendering (looking for guides/examples)

Post by farrer »

mrmclovin wrote: Since surfaces in SDL are rectangles - would it make sense to create a VAO containing vertices for a 2D "Unit square" which can be scaled up and down depending on the desired dimensions of the widget? This VAO could then be re-used by all widgets. (I don't know how to handle transparency and alpha channels though, but I hope it can be done in the shaders in a later rendering stage).
Transparency and anything material-related is handled via HLMS datablocks. About the single shared VAO: yes, that should be possible, but I prefer to create one with the exact size for each renderable rather than sharing them.
mrmclovin wrote: Does that imply that if you want to re-position a widget, you'd have to update the VAO as well?
If you want two instances of that same widget, do they have separate VAOs?
That was a hack. Fortunately, while explaining to you how I'd implemented it, I remembered that it should be revised, and I've now fixed it. In short, before, I was forcing the renderable to always use the identity world matrix:

Code: Select all

bool getUseIdentityWorldMatrix(void) const { return true; };
This obviously ignores any SceneNode transformations. What I should have used instead was the identity projection:

Code: Select all

bool getUseIdentityProjection(void) const { return true; };
Now I can finally define all transforms via the SceneNode instead of by the hackish, inefficient resetting of the VAO (note: when setting the SceneNode coordinates, remember to pass x,y in [-1,1] clip space).
mrmclovin wrote: You are right, it definitely does not make any sense to use SDL2 textures when they would need to be brought back and uploaded again via Ogre, so I'm going to stick with SDL_Surface. But I fear I might hit a problem with incompatible pixel formats between the SDL surface and Cairo if I want to use alpha channels... we'll see how it can be solved.
Just keep them consistent and created with the same format and you're fine. If the SDL_Surface isn't yours (you receive it from somewhere else) and is in a different format, you should convert it to the desired format before blitting to the Ogre::Texture (or create the Ogre::Texture with the same format when that's possible).
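With SDL2 the conversion is a one-liner; a minimal sketch (SDL_PIXELFORMAT_ARGB8888 here is just an example target format):

Code: Select all

/* Convert a surface of unknown format before building the PixelBox;
 * the last argument of SDL_ConvertSurfaceFormat is unused and must be 0. */
SDL_Surface *converted = SDL_ConvertSurfaceFormat(src, SDL_PIXELFORMAT_ARGB8888, 0);
if (converted != NULL)
{
   /* ... point the PixelBox at converted->pixels and blitFromMemory ... */
   SDL_FreeSurface(converted);
}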
mrmclovin wrote: Have you implemented any transparency and alpha stuff, like rendering a round circle widget etc.?
As I've said, that is done via HLMS datablocks, more specifically with this (check the full snippet in the previous post):

Code: Select all

blendBlock.mDestBlendFactor = Ogre::SBF_ONE_MINUS_SOURCE_ALPHA;
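(With the default mSourceBlendFactor of SBF_ONE this behaves as premultiplied-alpha blending, which is what Cairo's ARGB32 produces. If your source is not premultiplied, I believe you'd want classic alpha blending instead:)

Code: Select all

blendBlock.mSourceBlendFactor = Ogre::SBF_SOURCE_ALPHA;
blendBlock.mDestBlendFactor = Ogre::SBF_ONE_MINUS_SOURCE_ALPHA;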
mrmclovin
Gnome
Posts: 324
Joined: Sun May 11, 2008 9:27 pm
x 20

Re: Exploring 2D gui rendering (looking for guides/examples)

Post by mrmclovin »

farrer wrote: Tue Nov 14, 2017 3:05 pm Just keep them consistent and created with the same format and you're fine. If the SDL_Surface isn't yours (you receive it from somewhere else) and is in a different format, you should convert it to the desired format before blitting to the Ogre::Texture (or create the Ogre::Texture with the same format when that's possible).
The format problem was easier than I thought. Somehow I was under the impression that SDL_Surface did not support Cairo's ARGB32 format, but last night when I re-reviewed the SDL docs I saw there are a bunch of formats. I must have looked at SDL 1 earlier or really overlooked it. Transparency was not a problem at all, as I had feared it would be. Frankly, it worked out of the box :D
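For anyone else trying this, wrapping a Cairo ARGB32 image surface in an SDL_Surface ends up roughly like the sketch below (the function name is just illustrative):

Code: Select all

#include <SDL.h>
#include <cairo.h>

/* Cairo's CAIRO_FORMAT_ARGB32 (premultiplied alpha, native-endian 32-bit)
 * matches SDL_PIXELFORMAT_ARGB8888, so the pixel buffer can be wrapped
 * without copying (requires SDL >= 2.0.5). Remember to cairo_surface_flush()
 * after drawing and before reading the pixels. */
SDL_Surface* wrapCairoSurface(cairo_surface_t* cs)
{
   return SDL_CreateRGBSurfaceWithFormatFrom(
         cairo_image_surface_get_data(cs),
         cairo_image_surface_get_width(cs),
         cairo_image_surface_get_height(cs),
         32, /* bits per pixel */
         cairo_image_surface_get_stride(cs),
         SDL_PIXELFORMAT_ARGB8888);
}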

Thank you for your code snippets, they really helped me get the gist of the process. Now here's a screenshot of my current results:

[Image: screenshot of the current results]

It does not look good, but it works! The ugly stuff is probably due to some error I made in setting up the VAO with texture coordinates. But I'm pleased to see my rounded background works.
farrer wrote: Tue Nov 14, 2017 3:05 pm (note: when setting the SceneNode coordinates, remember to pass x,y in [-1,1] clip space).
Yes, now I have a lot of work to do figuring out the different transformations and which coordinate systems make the most sense. For example, my layout system works in pixels, so I need to do some conversions back and forth.

I could not override getUseIdentityProjection() in Renderable though - it seems it's not virtual in 2.1. So I just used setUseIdentityProjection() on the renderable instead.
farrer
Halfling
Posts: 64
Joined: Mon Sep 12, 2011 7:35 pm
x 13

Re: Exploring 2D gui rendering (looking for guides/examples)

Post by farrer »

Glad it is working!

About:
mrmclovin wrote:I could not override getUseIdentityProjection() in Renderable though - it seems it's not virtual in 2.1. So I just used setUseIdentityProjection() on the renderable instead.
You're right. Also, mUseIdentityProjection is a protected attribute, so you can just set it in your renderable implementation (in fact that's what I was already doing in Farso's Renderable constructor, and I didn't remember/check that... :? )
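I.e., something like this in the constructor (the class name is just for illustration):

Code: Select all

MyWidgetRenderable::MyWidgetRenderable()
{
   /* mUseIdentityProjection is a protected member of Ogre::Renderable,
    * so a subclass can set it directly instead of using the setter. */
   mUseIdentityProjection = true;
}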

Out of curiosity, what is your final intention: to use Cairo as a bare-bones primitive-drawing layer for creating a GUI, or is it something else?
mrmclovin
Gnome
Posts: 324
Joined: Sun May 11, 2008 9:27 pm
x 20

Re: Exploring 2D gui rendering (looking for guides/examples)

Post by mrmclovin »

farrer wrote: Wed Nov 15, 2017 10:37 pm You're right. Also, mUseIdentityProjection is a protected attribute, so you can just set it in your renderable implementation (in fact that's what I was already doing in Farso's Renderable constructor, and I didn't remember/check that... :? )
Oh, great tip!
farrer wrote: Wed Nov 15, 2017 10:37 pm Out of curiosity, what is your final intention: to use Cairo as a bare-bones primitive-drawing layer for creating a GUI, or is it something else?
I want to port my simple GUI system, which I've had for a 2D project. I really enjoy Cairo's API, and it produces good results when rendering fonts; together with Pango it handles line breaking and other text layout problems. Another reason is that I want the GUI layout system to be based on the CSS3 Flexbox standard.

And it's just a lot of fun now that I've broken through the first barrier :D
mrmclovin
Gnome
Posts: 324
Joined: Sun May 11, 2008 9:27 pm
x 20

Re: Exploring 2D gui rendering (looking for guides/examples)

Post by mrmclovin »

farrer wrote: Tue Nov 14, 2017 3:05 pm Now I can finally define all transforms via the SceneNode instead of by the hackish, inefficient resetting of the VAO (note: when setting the SceneNode coordinates, remember to pass x,y in [-1,1] clip space).
With getUseIdentityProjection() returning true, it seems that the position of the SceneNode represents the position where it will end up in clip space. Would you know where to store the transform if I want to change that? E.g. if my SceneNode's position is (0, 0, 0), then I'd like it to end up in the upper-left corner of the viewport. Right now, (0, 0, 0) is in the middle of the viewport.
farrer
Halfling
Posts: 64
Joined: Mon Sep 12, 2011 7:35 pm
x 13

Re: Exploring 2D gui rendering (looking for guides/examples)

Post by farrer »

mrmclovin wrote: Wed Nov 15, 2017 11:45 pm With getUseIdentityProjection() returning true, it seems that the position of the SceneNode represents the position where it will end up in clip space. Would you know where to store the transform if I want to change that? E.g. if my SceneNode's position is (0, 0, 0), then I'd like it to end up in the upper-left corner of the viewport. Right now, (0, 0, 0) is in the middle of the viewport.
The simplest thing to do is to transform the coordinates before applying them to the scene node. For example, if you are using absolute pixel (screen) coordinates, with top-left at (0, 0) and bottom-right at (screenWidth, screenHeight), just do something like:

Code: Select all

void YourClass::setPosition(Ogre::Real x, Ogre::Real y)
{
   /* Map [0, screenWidth] x [0, screenHeight] to [-1, 1] clip space (y flipped). */
   Ogre::Real xt = -1.0f + (x / screenWidth) * 2.0f;
   Ogre::Real yt = 1.0f - (y / screenHeight) * 2.0f;

   sceneNode->setPosition(xt, yt, 0.0f);
}
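If you also keep the widget size in pixels, the same mapping works for the node's scale, assuming you went with your unit-quad idea (that part is an assumption on my side, not how Farso does it):

Code: Select all

void YourClass::setSize(Ogre::Real w, Ogre::Real h)
{
   /* A 1x1 quad scaled so it covers w x h pixels in [-1,1] clip space. */
   sceneNode->setScale((w / screenWidth) * 2.0f, (h / screenHeight) * 2.0f, 1.0f);
}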
mrmclovin
Gnome
Posts: 324
Joined: Sun May 11, 2008 9:27 pm
x 20

Re: Exploring 2D gui rendering (looking for guides/examples)

Post by mrmclovin »

farrer wrote: Thu Nov 16, 2017 11:55 am
mrmclovin wrote: Wed Nov 15, 2017 11:45 pm With getUseIdentityProjection() returning true, it seems that the position of the SceneNode represents the position where it will end up in clip space. Would you know where to store the transform if I want to change that? E.g. if my SceneNode's position is (0, 0, 0), then I'd like it to end up in the upper-left corner of the viewport. Right now, (0, 0, 0) is in the middle of the viewport.
The simplest thing to do is to transform the coordinates before applying them to the scene node. For example, if you are using absolute pixel (screen) coordinates, with top-left at (0, 0) and bottom-right at (screenWidth, screenHeight), just do something like:

Code: Select all

void YourClass::setPosition(Ogre::Real x, Ogre::Real y)
{
   /* Map [0, screenWidth] x [0, screenHeight] to [-1, 1] clip space (y flipped). */
   Ogre::Real xt = -1.0f + (x / screenWidth) * 2.0f;
   Ogre::Real yt = 1.0f - (y / screenHeight) * 2.0f;

   sceneNode->setPosition(xt, yt, 0.0f);
}
Right, that's a simple and straightforward solution. I was just curious whether it's possible to set such a transform directly, like you could with world transforms in Ogre 1.x.
farrer
Halfling
Posts: 64
Joined: Mon Sep 12, 2011 7:35 pm
x 13

Re: Exploring 2D gui rendering (looking for guides/examples)

Post by farrer »

mrmclovin wrote: Thu Nov 16, 2017 12:17 pm Right, that's a simple and straightforward solution. I was just curious whether it's possible to set such a transform directly, like you could with world transforms in Ogre 1.x.
I don't think it's possible the 2.x way, as getWorldTransforms() seems to be used only for v1-compatible render queues (and is only implemented for v1 renderables).