New GLX rendersystem

Felix Bellaby
Halfling
Posts: 49
Joined: Sat Jan 26, 2008 1:22 pm

New GLX rendersystem

Post by Felix Bellaby » Tue Mar 18, 2008 2:12 am

I have submitted a complete rewrite of the Ogre GLX rendersystem to the SourceForge patch tracking system. Since this is a substantial set of changes that is likely to affect all Linux users, I would like to encourage anyone who is interested to give it a go.

I set out to substantially reduce the considerable divergence between the OgreWGL and OgreGLX implementations and encountered a number of bugs in OgreGLX along the way. The attached files completely replace the OgreGLX support located in the RenderSystem/GL/*/GLX/ directories. The attached minor patches to glew.cpp, OgreWindowEventUtilities.cpp and the gtk and Xt ConfigDialog code must be applied at the same time.

Changes to the GLX RenderWindow API
-----------------------------------

The following parameters are now available for RenderWindow creation:

1 name = String
2 fullscreen = bool
3 width = uint
4 height = uint
5 misc = NameValuePairList
a title = String
b left = int
c top = int
d displayFrequency = uint
e Vsync = bool
f FSAA = int
g parentWindowHandle = Window
h externalWindowHandle = Window (deprecated)
i currentGLContext = bool
j externalGLControl = bool
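
For illustration, creating a window with several of these parameters might look like the following sketch (root is the Ogre::Root instance; the name, sizes and values are arbitrary, and the exact casing of the vsync key should be checked against the list above):

Code: Select all

#include <Ogre.h>

Ogre::NameValuePairList misc;
misc["title"] = "My App";            // window title
misc["left"]  = "100";               // initial x position
misc["top"]   = "100";               // initial y position
misc["displayFrequency"] = "60";     // used when switching video modes
misc["vsync"] = "true";              // limit swaps to the vertical blank
misc["FSAA"]  = "4";                 // 4x full scene anti-aliasing

// name, width, height, fullscreen, misc params
Ogre::RenderWindow* window =
    root->createRenderWindow("MainWindow", 1024, 768, false, &misc);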

The fullscreen support has been updated to use current WM protocols, so Ogre can now go fullscreen without breaking the desktop. The video mode can be switched if desired, with width, height and displayFrequency providing precise control. These parameters are now determined in advance of displaying the Ogre config dialog, and the available frequencies update as different screen sizes are selected. The RenderWindow API now includes an implementation of setFullScreen, so switching between video modes and fullscreen display can happen on the fly.
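
Switching on the fly might then look like this minimal sketch, assuming window is an existing RenderWindow:

Code: Select all

window->setFullscreen(true, 1280, 1024);    // switch video mode and go fullscreen
// ...
window->setFullscreen(false, 1024, 768);    // back to a 1024x768 window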

The vsync parameter has been implemented using the GLX equivalent of the WGL extension used by the Win32 API; it brings the frame rate into agreement with the displayFrequency by making use of the vertical sync.

The Full Scene anti-aliasing (FSAA) support has been repaired and should work on all platforms. The available sampling levels are determined in advance of the config dialog. Ogre looks even better with anti-aliasing.

The parentWindowHandle has been reduced to a window handle. Ogre will still accept the old display:screen:window(:visual) format and behave as before. The extra fields are ignored.

The externalWindowHandle will still work as before, but there is no good reason to use it under GLX. The parentWindowHandle can normally be used in its place. In other cases, currentGLContext is a better choice.
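
For example, embedding Ogre in an existing toolkit window might look like this sketch, where parent is assumed to be an X Window id obtained from the application's toolkit (e.g. GTK or Qt):

Code: Select all

Ogre::NameValuePairList misc;
misc["parentWindowHandle"] = Ogre::StringConverter::toString((unsigned long)parent);
Ogre::RenderWindow* window =
    root->createRenderWindow("Embedded", 640, 480, false, &misc);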

I would strongly recommend that applications leave Ogre to do all the GLX work on their behalf. However, this is not possible when the application has to use GL outside the Ogre library, and I have added the last two parameters to handle this case.

Using GL outside the Ogre library
---------------------------------

It is vitally important when using GL under X to ensure that all GL and GLX commands are issued over a single Display connection. This constraint is necessary because GLX provides mechanisms that subvert the normal X server / X client model in favour of increased performance. Failing to follow this principle usually results in segfaults within libGL that occur in different places under different drivers.

In order to avoid these problems, the patched code requires that applications that use GL outside the Ogre library MUST either:

1 Avoid using GL and GLX until they have created their first Ogre RenderTarget and obtained the Ogre Display pointer using glXGetCurrentDisplay() (see the first sketch below), OR

2 Create a current GL context before selecting the Ogre GL RenderSystem, so that Ogre can (automatically) obtain and share their Display pointer.
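
A sketch of the first approach (assuming the render system has already been selected and Root initialised without an automatic window):

Code: Select all

#include <GL/glx.h>

// Let Ogre open the Display connection by creating the first window...
Ogre::RenderWindow* window =
    root->createRenderWindow("Main", 800, 600, false);

// ...then reuse that same connection for all GL/GLX work outside Ogre
Display* dpy = glXGetCurrentDisplay();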

The currentGLContext = true parameter to createRenderWindow allows the application to create a RenderTarget that draws onto the current GLXDrawable using a clone of the current GLXContext. Applications may use this approach to direct Ogre rendering onto a GLXPixmap or GLXPbuffer.
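
A sketch of the second approach, assuming dpy, drawable and context have been created by the application itself:

Code: Select all

glXMakeCurrent(dpy, drawable, context);    // application-owned context

root->setRenderSystem(root->getRenderSystemByName("OpenGL Rendering Subsystem"));
root->initialise(false);

Ogre::NameValuePairList misc;
misc["currentGLContext"] = "true";
Ogre::RenderWindow* target =
    root->createRenderWindow("External", 640, 480, false, &misc);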

This approach is cleaner than specifying external window handles or context pointers and provides a means for high level language wrappers like ogre4j to set up RenderTargets without illegitimately accessing C pointers.

I believe that it would make sense to include a currentGLContext parameter within the OgreWGL API as well. It would direct Ogre to render onto the current HDC using a clone of the current HGLRC. There might even be a case for trying to extend this approach to the DirectX RenderSystems.

Since the currentGLContext parameter essentially allows the application to create a RenderTarget rather than a RenderWindow, it might make more sense to implement it as a means of instantiating the RenderTarget class. The new class might be designed to act as a user accessible substitute for the GLContext classes.

The externalGLControl parameter turns off Ogre control over buffer swapping and vsync in an analogous way to WGL.
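
With external control the application takes over the swap itself; a sketch (dpy is the shared Display connection discussed above):

Code: Select all

Ogre::NameValuePairList misc;
misc["externalGLControl"] = "true";
Ogre::RenderWindow* window =
    root->createRenderWindow("AppControlled", 800, 600, false, &misc);

root->renderOneFrame();                        // Ogre renders but does not swap
glXSwapBuffers(dpy, glXGetCurrentDrawable());  // the application swaps explicitly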

Using WindowEventUtilities::messagePump()
-----------------------------------------

Ogre listens for basic window management events on the top level windows that it creates and relays the events on to any WindowEventListeners attached by the application. Ogre will also listen for these events on any RenderWindows created as children of a parentWindowHandle. As a result, applications can control child RenderWindows either by sending them XEvents or by calling the RenderWindow methods in the normal way.
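
A typical arrangement might look like the following sketch (appRunning is an application flag; the listener methods are the standard WindowEventListener overrides):

Code: Select all

class MyListener : public Ogre::WindowEventListener
{
public:
    void windowResized(Ogre::RenderWindow* rw) { /* adjust viewports */ }
    bool windowClosing(Ogre::RenderWindow* rw) { return true; /* allow close */ }
    void windowClosed(Ogre::RenderWindow* rw)  { /* release resources */ }
};

MyListener listener;
Ogre::WindowEventUtilities::addWindowEventListener(window, &listener);

while (appRunning) {
    Ogre::WindowEventUtilities::messagePump();  // relay pending window events
    root->renderOneFrame();
}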

The patched Ogre uses a separate Display connection for all its Window Event processing to avoid interfering with any event processing by the application.

The WindowEventUtilities code is very heavily platform and render system dependent. I think that it would make sense to move it into the RenderSystem plugin. This would allow some efficiency savings and reduce the amount of internal detail that has to be exposed through the getCustomAttribute method.

Using Ogre on systems with multiple X screens
---------------------------------------------

Ogre applications must ensure that all of the RenderTargets created using an Ogre Root are located on the same X screen, because X uses separate address spaces for each screen. Ogre will use the default screen on the default display (i.e. the login screen) unless the application has created a current GL context on another display/screen before selecting the RenderSystem.
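
For example, an application might steer Ogre onto the second screen of the local display like this (a sketch only; choosing the FBConfig fbc and creating the drawable are abbreviated):

Code: Select all

Display* dpy = XOpenDisplay(":0.1");    // screen 1 on the local display

// ... choose an FBConfig (fbc) and create a drawable on dpy ...
GLXContext ctx = glXCreateNewContext(dpy, fbc, GLX_RGBA_TYPE, NULL, True);
glXMakeCurrent(dpy, drawable, ctx);

// Ogre will now share this Display connection and screen
root->setRenderSystem(root->getRenderSystemByName("OpenGL Rendering Subsystem"));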

Unresolved differences between WGL and GLX
------------------------------------------

There are five RenderWindow parameters from the Ogre WGL/DirectX API that I have not tried to implement in Ogre GLX:

1 gamma
2 border
3 outerDimension
4 colourDepth
5 depthBuffer

The gamma parameter would require the use of hardware specific X extensions and would involve a lot of work. There are already GUIs available to manipulate gamma control on a screen wide basis.

The other parameters could be implemented, but I have not got around to them. It might be sensible to add parameters that allow the selection of stencil buffers, etc. I think that any additional parameters should be decided on a cross platform basis.

Misc Bug fixes
--------------

There are some bug fixes in the new code, and the following features should now be working correctly on platforms that support them:

1 Pbuffers, including float buffers.
2 Window manager interaction (moving, sizing and closing).
3 Option refreshing in the GTK and Xt config dialogs.

Code Revisions
--------------

Internally, the code revisions might be summarised as follows:

1 The previously implicit limitation to one screen and X server is now explicit.
2 The handle and context based parameters are validated before use and exceptions are raised.
3 The GLXUtils code has been moved into GLXGLSupport.
4 GLXEW is used throughout to handle the GLX extensions.
5 FBConfigs are used throughout.

I think that these revisions neaten the coding and help provide support for more general render targets.

Testing
-------

I developed the code on an nVidia 8800 using the 100.14.19 nVidia drivers. In addition, I have tested it on an ATI Radeon 9600 with the 2.0.6958 fglrx drivers. The Xorg 1.3.0 server was used in both cases.

I found some limitations on my test ATI platform:

1 FSAA only works in fullscreen.
2 The framebuffer configs that claim to support 8 FSAA samples do not work.
3 Frame rate limited vsync does not work.

I have not had sufficient time to determine the cause of these limitations or whether they might be resolved. I would particularly appreciate feedback from anyone who is familiar enough with the ATI hardware and drivers to help resolve these points.

NB: Your drivers may allow you to take over control of anti-aliasing from your applications. You will need to give it back to test the Ogre FSAA support. The same goes for vsync. Furthermore, you may need to apply the following patch to OgreGLRenderSystem.cpp before you can test the new Pbuffer implementation, because Ogre currently ignores any Pbuffer code on systems with FBOs.

Code: Select all

Index: RenderSystems/GL/src/OgreGLRenderSystem.cpp
===================================================================
RCS file: /cvsroot/ogre/ogrenew/RenderSystems/GL/src/OgreGLRenderSystem.cpp,v
retrieving revision 1.245
diff -r1.245 OgreGLRenderSystem.cpp
506c506
< else
---
> // else

xavier
OGRE Retired Moderator
Posts: 9481
Joined: Fri Feb 18, 2005 2:03 am
Location: Dublin, CA, US

Post by xavier » Tue Mar 18, 2008 2:53 am

/me nominates Felix for project member in charge of the Linux version. ;)

morricone
Gnoblar
Posts: 24
Joined: Tue Mar 15, 2005 8:03 pm

Post by morricone » Sun Mar 23, 2008 9:38 pm

Hi,

I'm having trouble compiling with the patch applied:

Code: Select all

g++ -DHAVE_CONFIG_H -I. -I. -I../../../../OgreMain/include -I../../../../RenderSystems/GL/include -I../../../../RenderSystems/GL/include/GLX -I../../../../RenderSystems/GL/src/GLSL/include -I../../../../OgreMain/include -fvisibility=hidden -fvisibility-inlines-hidden -DOGRE_GCC_VISIBILITY -g -O2 -MT OgreGLXGLSupport.lo -MD -MP -MF .deps/OgreGLXGLSupport.Tpo -c OgreGLXGLSupport.cpp  -fPIC -DPIC -o .libs/OgreGLXGLSupport.o
/usr/include/GL/glx.h:172: error: declaration of C function 'void glXCopyContext(Display*, __GLXcontextRec*, __GLXcontextRec*, long unsigned int)' conflicts with
../../../../RenderSystems/GL/include/GL/glxew.h:122: error: previous declaration 'void glXCopyContext(Display*, __GLXcontextRec*, __GLXcontextRec*, GLuint)' here
/usr/include/GL/glx.h:343: error: conflicting declaration 'typedef struct GLXPbufferClobberEvent GLXPbufferClobberEvent'
../../../../RenderSystems/GL/include/GL/glxew.h:244: error: 'GLXPbufferClobberEvent' has a previous declaration as 'typedef struct GLXPbufferClobberEvent GLXPbufferClobberEvent'
/usr/include/GL/glx.h:345: error: redefinition of 'union __GLXEvent'
../../../../RenderSystems/GL/include/GL/glxew.h:245: error: previous definition of 'union __GLXEvent'
/usr/include/GL/glx.h:348: error: invalid type in declaration before ';' token
/usr/include/GL/glx.h:348: error: conflicting declaration 'typedef int GLXEvent'
../../../../RenderSystems/GL/include/GL/glxew.h:248: error: 'GLXEvent' has a previous declaration as 'typedef union __GLXEvent GLXEvent'
I'm using the nvidia driver version 169.12. I will try to compile with the older drivers and see whether that works.

100.14.19 doesn't work, either.

sparkprime
Ogre Magi
Posts: 1137
Joined: Mon May 07, 2007 3:43 am
Location: Ossining, New York

Post by sparkprime » Wed Mar 26, 2008 4:01 am

compiled straight off for me

g++ (GCC) 4.1.3 20070929 (prerelease) (Ubuntu 4.1.2-16ubuntu2)

ubuntu is an up-to-date gutsy

ii nvidia-glx-new 100.14.19+2.6.22.4-14.10 NVIDIA binary XFree86 4.x/X.Org 'new' driver
ii nvidia-glx-new-dev 100.14.19+2.6.22.4-14.10 NVIDIA binary XFree86 4.x/X.Org 'legacy' driver development file
ii nvidia-kernel-common 20051028+1ubuntu7 NVIDIA binary kernel module common files


edit: I should say I'm using a custom makefile to build Ogre though

sparkprime
Ogre Magi
Posts: 1137
Joined: Mon May 07, 2007 3:43 am
Location: Ossining, New York

Post by sparkprime » Wed Mar 26, 2008 4:56 am

It works well in my app. I haven't tried the demos yet. A few things though:

With the GLX config box, I get "Error: Shell widget menu has zero width and/or height" if I click on the display frequency widget before selecting a resolution. The refresh rate widget is initially blank, but fills in with a refresh rate "51" when I select 1280x1024. xrandr reports:

Code: Select all

spark@failure:~$ xrandr
Screen 0: minimum 320 x 240, current 1280 x 1024, maximum 2640 x 1024
default connected 1280x1024+0+0 (normal left inverted right) 0mm x 0mm
   2640x1024      50.0     56.0     51.0  
   2304x768       51.0     50.0  
   1600x600       52.0  
   1280x480       53.0  
   1152x384       54.0  
   640x240        55.0  
   1280x1024      51.0* 
   1024x768       57.0  
   800x600        58.0  
   640x480        59.0  
   512x384        60.0  
   320x240        61.0  
(Note that the refresh rates are completely fabricated by the nvidia driver since I'm using dynamic twinview)


edit: This is now fixed (see next post). My HUD is initialising wrongly; it looks like it's being fed the wrong window size for the initial positioning. If I resize the window, the sizes are correct from that point onwards. I'm using the ion3 window manager, which might make things more complicated since it rarely allows windows to be the initial rectangle they ask for. The old GLX worked in this respect though.

It seems like RenderSystem::getWaitForVerticalBlank initialises to true no matter what, and setting it to false does nothing. By setting it to True/False in ogre.cfg I can turn it off, however. __GL_SYNC_TO_VBLANK and nvidia-settings seem to always override ogre.cfg. Am I using the right interface to set vsync at runtime?

edit: This is now fixed (see next post). I don't seem to be getting a windowClosed event into my WindowEventListener. I noticed this when moving to Shoggoth before I applied this patch, but I think you might be able to help fix it anyway. It works fine on Windows and in Eihort.

I am a major fan of the GLX config dialog. It looks a lot better than the GTK version, and it's more lightweight. But I have been using this patch for some time:

Code: Select all

Index: OgreMain/src/GLX/OgreConfigDialog.cpp
===================================================================
RCS file: /cvsroot/ogre/ogrenew/OgreMain/src/GLX/OgreConfigDialog.cpp,v
retrieving revision 1.5.2.1
diff -u -r1.5.2.1 OgreConfigDialog.cpp
--- OgreMain/src/GLX/OgreConfigDialog.cpp       5 May 2007 22:22:07 -0000       1.5.2.1
+++ OgreMain/src/GLX/OgreConfigDialog.cpp       22 Feb 2008 22:37:39 -0000
@@ -234,7 +234,7 @@
                XtNmaxHeight, mHeight,
                XtNallowShellResize, False,
                XtNborderWidth, 0,
-               XtNoverrideRedirect, True,
+               XtNoverrideRedirect, False,
                NULL, NULL);
 
        /* Find out display and screen used */
I don't see why it should override the window manager - it is a dialog like any other. Unless someone can think of a reason, maybe this change could go in too.
Last edited by sparkprime on Wed Mar 26, 2008 3:41 pm, edited 1 time in total.

sparkprime
Ogre Magi
Posts: 1137
Joined: Mon May 07, 2007 3:43 am
Location: Ossining, New York

Post by sparkprime » Wed Mar 26, 2008 3:33 pm

It seems whoever added the windowClosing() support forgot to reset the iterator before going into the second loop to send the windowClosed() event.

There is potentially a similar problem with ConfigureNotify.

It seems that by applying the following patch, I have also corrected my "initial width/height are wrong" bug above.

This patch fixes the new glx rendersystem, even though the problem persists from Shoggoth before I patched it to introduce the new glx rendersystem. It seems to successfully apply to CVS Shoggoth too, but only if you use the -l (ignore whitespace) parameter to the "patch" utility. (edit: no, it applies it to some random code further down, don't do this :) )

Code: Select all

--- ../cvs_shoggoth_local/OgreMain/src/OgreWindowEventUtilities.cpp     2008-03-26 14:30:10.867410581 +0000
+++ OgreMain/src/OgreWindowEventUtilities.cpp   2008-03-26 14:25:05.404079239 +0000
@@ -267,6 +267,7 @@
                                        close = false;
                        }
                        if (!close) return;
+                        start = WindowEventUtilities::_msListeners.lower_bound(win);
 
                        for( ; start != end; ++start )
                                (start->second)->windowClosed(win);
@@ -302,6 +303,7 @@
                        for( ; start != end; ++start )
                                (start->second)->windowMoved(win);
                }
+                start = WindowEventUtilities::_msListeners.lower_bound(win);
 
                if (newWidth != oldWidth || newHeight != oldHeight)
                {
One final thing: a lot of the new GLX rendersystem code seems to use 8 char hard tabs, which goes against the rest of Ogre, which seems to use either 4 char soft tabs or 4 char hard tabs (pretty much at random).

btmorex
Gremlin
Posts: 156
Joined: Thu May 17, 2007 10:56 pm

Post by btmorex » Wed Mar 26, 2008 5:04 pm

I can try testing this out later when I have a build environment on my new machine. Of course, I have an nvidia card and most of the linux problems seem to happen with ati cards.

Just a quick question:

Did you add any support for Xinerama/Twinview? The current window placement code puts the window on the second xinerama screen (in the middle of the big virtual screen), which in most cases isn't really desirable.

I submitted a patch that fixed this a while ago (http://sourceforge.net/tracker/index.ph ... tid=302997), but I have no idea how to add a configure check for the xinerama extension so it sort of died.

sparkprime
Ogre Magi
Posts: 1137
Joined: Mon May 07, 2007 3:43 am
Location: Ossining, New York

Post by sparkprime » Wed Mar 26, 2008 5:47 pm

Although I do use twinview, I don't bother with xinerama as it doesn't really make sense with my window manager (ion3).

So I haven't tested it with that. I believe my X install isn't even configured to provide the extension.

Shouldn't the window manager handle initial window placement though? What window manager are you using?


In case anyone cares, I recently learnt a bit about xinerama:

xinerama with twinview is only about giving hints, i.e. lies, to client applications about where the physical screens are within the x screen. They are lies because twinview goes against the normal x philosophy of one physical screen per x screen, instead opting for many physical screens per x screen. This is an ugly hack (needing metamodes, etc).

Xinerama without nvidia hacks is an extension that allows movement of windows between x screens, i.e. everything twinview does, but is not dynamic enough, I think. You have to restart X to change the screen configuration. Something like that anyway.

btmorex
Gremlin
Posts: 156
Joined: Thu May 17, 2007 10:56 pm

Post by btmorex » Wed Mar 26, 2008 6:46 pm

Out of curiosity since you're using twinview and not xinerama, how does ion3 know not to place a window half way in between physical screens?
sparkprime wrote:Shouldn't the window manager handle initial window placement though? What window manager are you using?
I'm using metacity (gnome). Ogre specifies position coords when it calls XCreateWindow. I did a quick google and apparently there are differences of opinion on whether these should always be honored by the window manager or are merely "hints". In any case, my window manager honors them and it ends up putting the window on my physical screen that I rarely turn on.

So, the current code centers the window in the virtual, twinview screen, when it really makes more sense to center the window in one of the "fake" xinerama screens. The patch I submitted does just that and just uses the xinerama screen that currently has the mouse pointer.

sparkprime
Ogre Magi
Posts: 1137
Joined: Mon May 07, 2007 3:43 am
Location: Ossining, New York

Post by sparkprime » Wed Mar 26, 2008 7:01 pm

btmorex wrote:Out of curiosity since you're using twinview and not xinerama, how does ion3 know not to place a window half way in between physical screens?
It doesn't, but the way ion works it doesn't matter: Ion places new windows into the active frame (or a specific frame if it recognises the new window from a set of rules). Thus they fill the frame and the position hint is ignored. It's up to the user to set the frames correctly, so I just put a horizontal split that separates the monitors. Ion used to interpret xinerama information by allowing you to set up each screen differently, and to flip between desktops independently on each screen, but personally I found that just over-complicated the user interface.

It sounds like your patch was very sensible, though -- if xinerama is present at compile-time, use it at run-time to pick good hints.

lf3thn4d
Orc
Posts: 478
Joined: Mon Apr 10, 2006 9:12 pm

Post by lf3thn4d » Thu Mar 27, 2008 10:24 am

sparkprime wrote:It seems whoever added the windowClosing() support forgot to reset the iterator before going into the second loop to send the windowClose() event.
Oops. My bad. Sorry. :oops:
In fact it's wrong on mac too (no windowClosing() event). I'll resubmit a patch to fix this. Sorry~!

[edit]
You seem to have some changes in OgreWindowEventUtilities.cpp as well, so I'll leave the patch here for you to merge.

Code: Select all

Index: OgreWindowEventUtilities.cpp
===================================================================
RCS file: /cvsroot/ogre/ogrenew/OgreMain/src/OgreWindowEventUtilities.cpp,v
retrieving revision 1.17
diff -u -r1.17 OgreWindowEventUtilities.cpp
--- OgreWindowEventUtilities.cpp	7 Feb 2008 13:31:56 -0000	1.17
+++ OgreWindowEventUtilities.cpp	27 Mar 2008 09:41:13 -0000
@@ -274,6 +274,7 @@
 			}
 			if (!close) return;
 
+			start = WindowEventUtilities::_msListeners.lower_bound(win);
 			for( ; start != end; ++start )
 				(start->second)->windowClosed(win);
 			win->destroy();
@@ -392,9 +393,20 @@
 				(start->second)->windowResized(curWindow);
             break;
         case kEventWindowClosed:
-            curWindow->destroy();
-            for( ; start != end; ++start )
-				(start->second)->windowClosed(curWindow);
+			{
+				//Send message first, to allow app chance to unregister things that need done before
+				//window is shutdown
+				bool close = true;
+				for( ; start != end; ++start )
+				{
+					if (!(start->second)->windowClosing(curWindow))
+						close = false;
+				}
+				if (!close) break;
+				curWindow->destroy();
+				for( start = _msListeners.lower_bound(curWindow); start != end; ++start )
+					(start->second)->windowClosed(curWindow);
+			}
             break;
         default:
             status = eventNotHandledErr;

[/edit]

btmorex
Gremlin
Posts: 156
Joined: Thu May 17, 2007 10:56 pm

Post by btmorex » Sat Mar 29, 2008 12:48 pm

The patch works fine for me. It's nice to actually have working FSAA. On that note, I did notice one thing with FSAA, but since I haven't used it in the past I don't know if it's an issue with the patch, my card, ogre, or the nvidia drivers.

The issue: when I turn on FSAAx4 I get jitter on the edges of polys. If I leave FSAA at 4 and turn on vsync, the jitter goes away. Also, if I go down to FSAAx2 or 0 it goes away.

sparkprime
Ogre Magi
Posts: 1137
Joined: Mon May 07, 2007 3:43 am
Location: Ossining, New York

Post by sparkprime » Sat Mar 29, 2008 3:41 pm

I take it you don't mean tearing artifacts, can you get a screen shot of it?

btmorex
Gremlin
Posts: 156
Joined: Thu May 17, 2007 10:56 pm

Post by btmorex » Sat Mar 29, 2008 4:41 pm

sparkprime wrote:I take it you don't mean tearing artifacts, can you get a screen shot of it?
There is jittering when nothing in the scene is moving (including the camera). It looks to me like frame to frame the antialiasing results are coming back differently (jitter only occurs on edges that are being antialiased).

btmorex
Gremlin
Posts: 156
Joined: Thu May 17, 2007 10:56 pm

Post by btmorex » Sun Mar 30, 2008 8:26 pm

btmorex wrote:
sparkprime wrote:I take it you don't mean tearing artifacts, can you get a screen shot of it?
There is jittering when nothing in the scene is moving (including the camera). It looks to me like frame to frame the antialiasing results are coming back differently (jitter only occurs on edges that are being antialiased).
It turns out that this goes away completely when I downgrade my nvidia driver. In fact, all of the problems I've been having recently with ogre are solved by downgrading my driver. Looks like the 169.x series on Linux at least is just crappy.

DWORD
OGRE Retired Moderator
Posts: 1365
Joined: Tue Sep 07, 2004 12:43 pm
Location: Aalborg, Denmark

Post by DWORD » Tue Apr 01, 2008 11:06 am

btmorex wrote:It turns out that this goes away completely when I downgrade my nvidia driver. In fact, all of the problems I've been having recently with ogre are solved by downgrading my driver. Looks like the 169.x series on Linux at least is just crappy.
<OT>Yeah, also got a lot of problems with the 169.x drivers! Why are they introducing so many new bugs? :?</OT>

manowar
Orc
Posts: 419
Joined: Thu Apr 07, 2005 2:11 pm
Location: UK

Post by manowar » Tue Apr 01, 2008 11:10 am

Don't say the 169.x Linux drivers are crap! I love the pink window shadows with compiz on my 8800GTS :o)

Felix Bellaby
Halfling
Posts: 49
Joined: Sat Jan 26, 2008 1:22 pm

Post by Felix Bellaby » Thu Apr 03, 2008 12:17 am

Thanks for making a start on testing this stuff out.

morricone's compilation problem should be resolved by the following patch:

Code: Select all

--- old/RenderSystems/GL/include/GLX/OgreGLXRenderTexture.h 2008-03-28 00:54:49.000000000 +0000
+++ new/RenderSystems/GL/include/GLX/OgreGLXRenderTexture.h 2008-04-02 18:31:13.000000000 +0100
@@ -35,9 +35,6 @@
#include "OgreGLXContext.h"
#include "OgreGLXGLSupport.h"

-#include <X11/Xlib.h>
-#include <GL/glx.h>
-
namespace Ogre
{
class _OgrePrivate GLXPBuffer : public GLPBuffer

The width/height error that sparkprime noticed with the Athena config dialog box arises from the display frequency not being set when it is absent from the existing config file. It should be fixed by the following patch:

Code: Select all

--- old/RenderSystems/GL/src/GLX/OgreGLXGLSupport.cpp 2008-03-28 01:01:31.000000000 +0000
+++ new/RenderSystems/GL/src/GLX/OgreGLXGLSupport.cpp 2008-04-02 22:39:33.000000000 +0100
@@ -239,6 +239,8 @@
mOptions[optVSync.name] = optVSync;
mOptions[optRTTMode.name] = optRTTMode;
mOptions[optFSAA.name] = optFSAA;
+
+ refreshConfig();
}

//-------------------------------------------------------------------------------------------------//
@@ -270,6 +272,11 @@
{
optDisplayFrequency->second.currentValue = optDisplayFrequency->second.possibleValues[0];
}
+ else
+ {
+ optVideoMode->second.currentValue = StringConverter::toString(mVideoModes[0].first.first,4) + " x " + StringConverter::toString(mVideoModes[0].first.second,4);
+ optDisplayFrequency->second.currentValue = StringConverter::toString(mVideoModes[0].second) + " MHz";
+ }
}
}

The Athena config dialog might be using the override redirect flag as a kludge to prevent the window manager from drawing a frame around it. It should be using the _NET_WM window property based protocols instead. I could fix this, but you seem to be using the code more, so perhaps you would like to do it.

I will patch the OgreWindowEventUtilities.cpp code to use index iterator variables instead of incrementing the start iterator [i.e. for (index = start; index != end; index++)].

I did not realise that Ogre was using 4 char tabs. I always use 8 char tabs. No wonder the Ogre indentation has looked such a mess! I will resubmit the whole lot when this testing phase is over.

I chose not to implement the "wait for vertical blank" API in GLX as it is not used on Windows. Ogre relies on limiting itself to drawing one frame between vertical blanks to implement vsync on both platforms. You can control whether a RenderWindow uses vsync by setting the appropriate value for the Vsync misc param in the original createRenderWindow call.

I have not given any special consideration to how window managers might handle Xinerama. Ogre only puts its top level windows where it does because it has to put them somewhere. The final position of the window is entirely the window manager's responsibility.

There is a wide consensus that metacity is broken in the way that it handles Xinerama (http://bugzilla.gnome.org/show_bug.cgi?id=145503). If this bug in Metacity has yet to be fixed in the latest version then I suggest that you bully them a bit more. I would advise against patching Ogre for a problem in Metacity as it might well break things for someone else.

It sounds as though the nVidia testing has been reasonably successful so far, but does anyone out there use an ATI card with Linux? The ATI drivers are pretty poor, so there are things that I do not expect to work. However, it would be nice to know that I have avoided any regressions.

sparkprime
Ogre Magi
Posts: 1137
Joined: Mon May 07, 2007 3:43 am
Location: Ossining, New York

Post by sparkprime » Thu Apr 03, 2008 12:51 pm

Thanks for putting more work into this, Felix. GLX support is key for me :)

About the vsync stuff, are you sure Windows does not use it? Does Windows D3D9 use it? Unfortunately I am at a conference and can't test this now. I think we should be aiming to satisfy the behaviour suggested by the RenderSystem rather than matching win32 behaviour, though, so it would be nice to have this feature unless there is a reason why it is not sensible for GLX.

It is my impression that there is more to "enabling vsync" than just not rendering more often than necessary. That in itself will not eliminate the tearing artifact. Correct me if I'm wrong.

edit: What is the difference between the rendersystem vsync and individual window vsync? Is it sufficient to implement the rendersystem vsync by iterating through the windows? What is the reason for this design?

Felix Bellaby
Halfling
Posts: 49
Joined: Sat Jan 26, 2008 1:22 pm

Post by Felix Bellaby » Sat Apr 05, 2008 12:15 pm

The waitForVSync parameter to RenderWindow::swapBuffers is completely ignored in the OgreGLXWindow.cpp, OgreWin32Window.cpp, OgreD3D9RenderWindow.cpp and OgreD3D10RenderWindow.cpp code. For example:

Code: Select all

void Win32Window::swapBuffers(bool waitForVSync)
{
    if (!mIsExternalGLControl) {
        SwapBuffers(mHDC);
    }
}
As a result, calls to RenderSystem::setWaitForVerticalBlank have no effect whatsoever. I presume that they are a leftover from a previous implementation of vsync that has since disappeared from the Ogre codebase. They should probably be removed from the API.

Under OpenGL, Ogre relies on limiting the system to performing one swap per vertical blank (using glXSwapIntervalSGI and wglSwapIntervalEXT). I believe that all current OpenGL drivers suspend processing of GL commands on windows that have vsync turned on until the vertical refresh is in progress. This involves stalling applications if they attempt to perform buffer swaps on vsynced windows before the vertical refresh has begun. This mechanism is sufficient to avoid any risk of tearing.
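
For reference, the GLX side of this amounts to little more than the following sketch, guarded by glxew's availability flag:

Code: Select all

#include <GL/glxew.h>

if (GLXEW_SGI_swap_control)
    glXSwapIntervalSGI(1);    // at most one buffer swap per vertical blank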

The situation seems to be more complex with the DirectX drivers. Ogre uses a combination of swap interval limitation and triple buffering under DirectX9. However, these steps appear to be insufficient under DirectX10, so it is best to leave VSync turned on in the control panel in this case.

The practical consequence of the Ogre vsync implementation and window render loop is that enabling vsync on one window prevents any possibility of any other windows being swapped more than once per vertical blank. Given that this is the case, it would make sense to render vsync windows before non-vsync windows so that all windows end up being swapped during the vertical refresh. However, I can see no real purpose in reimplementing the render queue code to allow non-vsync windows to update more often than vsync windows.

There has been talk of adding an extension to enable triple buffering control under OpenGL, but I do not think that anyone has ever implemented one. The nVidia drivers allow triple buffering control in the xorg.conf as an alternative. I think ATI has something similar.

The NV_swap_group extension can be used to ensure that swaps exactly correspond across multiple render windows. I believe that you need an nVidia Quadro card for this to have any effect, and that it is mainly of benefit when you have more than one card. I could try adding support to OpenGL, but I might have to win the lottery to test it.

I have looked again at the gamma code in OgreWGL and it is clearly based on the EXT_framebuffer_sRGB extension. This has an equivalent in GLX and I will be posting a patch to implement it.

juliusctw
Gremlin
Posts: 159
Joined: Thu Sep 28, 2006 8:15 pm

How does this patch work?

Post by juliusctw » Thu Apr 24, 2008 7:12 pm

Hello,

I'm just curious how the Ogre patch system works. Does this mean that this patch is automatically going to be inside future versions of Ogre, or is this something we'll have to just apply and recompile Ogre with?

sparkprime
Ogre Magi
Posts: 1137
Joined: Mon May 07, 2007 3:43 am
Location: Ossining, New York

Post by sparkprime » Thu Apr 24, 2008 10:17 pm

It will probably go in eventually, if enough people try it out and say it works.

sinbad
OGRE Retired Team Member
Posts: 19265
Joined: Sun Oct 06, 2002 11:19 pm
Location: Guernsey, Channel Islands

Post by sinbad » Sat Apr 26, 2008 6:37 pm

Indeed, I'm being cautious because the last major GLX upgrade had major breaking problems on some older distros & cards, so I'd like more people to test it with their distros to see.

hagish
Kobold
Posts: 30
Joined: Wed Jul 25, 2007 2:34 am

Post by hagish » Mon Apr 28, 2008 9:17 am

I tested the GLX renderer with ogre/trunk and, all in all, it worked.

system information:
01:00.0 VGA compatible controller: nVidia Corporation NV20 [GeForce3 Ti 200] (rev a3)

Linux windegg 2.6.24-16-generic #1 SMP Thu Apr 10 13:23:42 UTC 2008 i686 GNU/Linux

Ubuntu 8.04

nvidia module provided with ubuntu
ogre samples: works
(as far as my graphics card supports the features)

iris2: works
(but some materials were not displayed correctly: they should be two-sided but I only saw one side; I don't know if this is a Shoggoth or renderer problem)

sinbad
OGRE Retired Team Member
Posts: 19265
Joined: Sun Oct 06, 2002 11:19 pm
Location: Guernsey, Channel Islands

Post by sinbad » Mon Apr 28, 2008 11:19 am

Thanks, tests with older cards like this are useful. With that fairly old nVidia card, and the oldish ATI card from Felix's original test (9600), I think we're pretty well covered. I'll look at getting this in for 1.6 though, just to be sure.
