OpenGL 3+ RenderSystem

Discussion area about developing or extending OGRE, adding plugins for it or building applications on it. No newbie questions please, use the Help forum for that.
User avatar
holocronweaver
Google Summer of Code Student
Posts: 273
Joined: Mon Oct 29, 2012 8:52 pm
Location: Princeton, NJ
x 47

Re: OpenGL 3+ RenderSystem

Post by holocronweaver »

masterfalcon wrote:Another update to my tessellation work. It's rendering something at least, if you move the mouse around you'll see it. I suspect that there is something wrong with how I translated the shaders. Would anyone be able to verify?
It has taken me a while to get around to testing this, but the changes you made to the tessellation shaders in your last patch mostly agree with my own attempts to get them working. (As an aside, for the triangle domain gl_TessLevelInner[1] is not used, so setting it should not do anything.) That being said, I still cannot see any colored pixels on Ubuntu 13.04 with an AMD 7950, even when I attempt to rotate the object with the mouse.
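For reference, a minimal triangle-domain tessellation control shader looks like this (a standalone sketch to illustrate the aside, not code from the patch), embedded as a C++ source string the way a GL application would carry it:

Code: Select all

// Sketch only: for the triangle domain, gl_TessLevelOuter[0..2] and
// gl_TessLevelInner[0] are the only levels the tessellator consumes.
const char* tcsSource = R"(
#version 400 core
layout(vertices = 3) out;
void main()
{
    // Pass the control points through unchanged.
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
    if (gl_InvocationID == 0)
    {
        gl_TessLevelOuter[0] = 4.0;
        gl_TessLevelOuter[1] = 4.0;
        gl_TessLevelOuter[2] = 4.0;
        gl_TessLevelInner[0] = 4.0; // writing gl_TessLevelInner[1] is a no-op
    }
}
)";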

I copied these shaders almost verbatim to my personal OpenGL framework and was able to see the output triangle just fine. This leads me to believe that a parameter is not being passed to the shaders properly. Is it possible that there is something wrong with the VP matrix, or that it is implicitly being applied to gl_Position at some stage?
User avatar
masterfalcon
OGRE Team Member
Posts: 4270
Joined: Sun Feb 25, 2007 4:56 am
Location: Bloomington, MN
x 126
Contact:

Re: OpenGL 3+ RenderSystem

Post by masterfalcon »

It shouldn't be. However, I can't say with certainty that that is not the case.
uelkfr
Gnoblar
Posts: 12
Joined: Wed Jun 05, 2013 6:19 am

Re: OpenGL 3+ RenderSystem

Post by uelkfr »

I think this is also a good piece of information to consider:
https://github.com/p3/regal
dermont
Bugbear
Posts: 812
Joined: Thu Dec 09, 2004 2:51 am
x 42

Re: OpenGL 3+ RenderSystem

Post by dermont »

Is there something up with the GL3 RenderSystem (latest revision from hg) and the demos? For example:

a) Sample_DualQuaternion

The demo runs but is not OK with the GL Renderer; reverting to the 1.8 media resolves the GL Renderer issue.

For the GL3 Renderer:

Code: Select all

Vertex info
-----------
Skeleton: Loading spine.mesh.skeleton
WARNING: spine.mesh is an older format ([MeshSerializer_v1.40]); you should upgrade it as soon as possible using the OgreMeshUpgrade tool.
Can't assign material spine/01 - Default to SubEntity of Spine because this Material does not exist. Have you forgotten to define it in a .material script?
Texture: circuit.dds: Loading 1 faces(PF_DXT5,256x256x1) with 5 hardware generated mipmaps from Image. Internal format is PF_DXT5,256x256x1.
Can't assign material spine/01 - Default to SubEntity of SpineDQ because this Material does not exist. Have you forgotten to define it in a .material script?
Texture: spot_shadow_fade.png: Loading 1 faces(PF_R8G8B8,128x128x1) with 5 hardware generated mipmaps from Image. Internal format is PF_X8R8G8B8,128x128x1.
Vertex Program: 26c2e673-ea74-a9b4-7c64-a556be21f897_VS
Fragment Program: ea33b805-6e99-7082-ce8d-aa11ef63884b_FS
 GLSL link result : 


0(90) : error C3002: call to undefined function "void SGX_AdjointTransposeMatrix(mat3x4, mat3);"
0(90) : error C3002: call to undefined function "void FFP_Transform(mat3, vec3, vec3);"
0(90) : error C3002: call to undefined function "void FFP_Normalize(vec3);"
0(90) : error C3002: call to undefined function "void SGX_AdjointTransposeMatrix(mat3x4, mat3);"
0(90) : error C3002: call to undefined function "void FFP_Transform(mat3, vec3, vec3);"
0(90) : error C3002: call to undefined function "void FFP_Normalize(vec3);"
0(90) : error C3002: call to undefined function "void SGX_AdjointTransposeMatrix(mat3x4, mat3);"
0(90) : error C3002: call to undefined function "void FFP_Transform(mat3, vec3, vec3);"
0(90) : error C3002: call to undefined function "void FFP_Normalize(vec3);"
Vertex Program: 92ba0131-78fe-c8fb-1abb-6c0553ea8fc4_VS
Fragment Program: ea33b805-6e99-7082-ce8d-aa11ef63884b_FS
 GLSL link result : 
Vertex info
-----------
0(70) : error C3002: call to undefined function "void FFP_Transform(mat3x4, vec4, vec3);"
0(70) : error C3002: call to undefined function "void FFP_Modulate(vec4, float, vec4);"
0(70) : error C3002: call to undefined function "void FFP_Transform(mat3x4, vec4, vec3);"
0(70) : error C3002: call to undefined function "void FFP_Modulate(vec4, float, vec4);"
0(70) : error C3002: call to undefined function "void FFP_Modulate(vec3, float, vec3);"
0(70) : error C3002: call to undefined function "void FFP_Modulate(vec3, float, vec3);"
0(70) : error C3002: call to undefined function "void FFP_Modulate(vec3, float, vec3);"
0(70) : error C3002: call to undefined function "void FFP_Modulate(vec3, float, vec3);"
0(70) : error C3002: call to undefined function "void FFP_Modulate(vec3, float, vec3);"
0(70) : error C3002: call to undefined function "void FFP_Modulate(vec3, float, vec3);"

Error attaching FFPLib_Common_FS shader object to GLSL Program Object
Program binary could not be loaded. Binary is not compatible with current driver/hardware combination.
b) Sample_Terrain

The demo runs OK with the GL Renderer. For the GL3 Renderer:

Code: Select all

Error attaching OgreTerrain/15868156/sm2/vp/comp shader object to GLSL Program Object
Program binary could not be loaded. Binary is not compatible with current driver/hardware combination.
Error attaching OgreTerrain/15868156/sm2/fp/comp shader object to GLSL Program Object
Program binary could not be loaded. Binary is not compatible with current driver/hardware combination.
DefaultWorkQueueBase('Root') - PROCESS_REQUEST_END(b2b45b40): ID=7 channel=3 requestType=1 processed=1
DefaultWorkQueueBase('Root') - PROCESS_RESPONSE_START(thread:b6f4e740): ID=7 success=1 messages=[] channel=3 requestType=1
DefaultWorkQueueBase('Root') - PROCESS_RESPONSE_END(thread:b6f4e740): ID=7 success=1 messages=[] channel=3 requestType=1

Program received signal SIGSEGV, Segmentation fault.
0xb7c9a371 in mspace_free.constprop.2481 ()
   from /home/dermont/TEST/build/v1-9/lib/libOgreMain.so.1.9.0
(gdb) bt
#0  0xb7c9a371 in mspace_free.constprop.2481 ()
   from /home/dermont/TEST/build/v1-9/lib/libOgreMain.so.1.9.0
#1  0xb7caec3c in nedalloc::nedpfree(nedalloc::nedpool_t*, void*) ()
   from /home/dermont/TEST/build/v1-9/lib/libOgreMain.so.1.9.0
#2  0xb7cb04ce in Ogre::NedPoolingImpl::deallocBytes(void*) ()
   from /home/dermont/TEST/build/v1-9/lib/libOgreMain.so.1.9.0
#3  0xb0ea58aa in Ogre::Terrain::save(Ogre::StreamSerialiser&) ()
   from /home/dermont/TEST/build/v1-9/lib/libOgreTerrain.so.1.9.0
#4  0xb0ea8fd3 in Ogre::Terrain::save(std::string const&) ()
   from /home/dermont/TEST/build/v1-9/lib/libOgreTerrain.so.1.9.0
#5  0xb0ea90ff in Ogre::TerrainGroup::saveAllTerrains(bool, bool) ()
   from /home/dermont/TEST/build/v1-9/lib/libOgreTerrain.so.1.9.0
#6  0xb0a7c332 in Sample_Terrain::frameRenderingQueued(Ogre::FrameEvent const&)
    () from /home/dermont/TEST/build/v1-9/lib/Sample_Terrain.so.1.9.0
#7  0x08065c83 in OgreBites::SampleBrowser::frameRenderingQueued(Ogre::FrameEvent const&) ()
#8  0xb7dc6b0d in Ogre::Root::_fireFrameRenderingQueued(Ogre::FrameEvent&) ()
   from /home/dermont/TEST/build/v1-9/lib/libOgreMain.so.1.9.0
#9  0xb7e0e582 in Ogre::Root::_fireFrameRenderingQueued() ()
   from /home/dermont/TEST/build/v1-9/lib/libOgreMain.so.1.9.0
#10 0xb7e0e5d4 in Ogre::Root::_updateAllRenderTargets() ()
   from /home/dermont/TEST/build/v1-9/lib/libOgreMain.so.1.9.0
#11 0xb7e0e6e0 in Ogre::Root::renderOneFrame() ()
   from /home/dermont/TEST/build/v1-9/lib/libOgreMain.so.1.9.0
#12 0xb7e0e73d in Ogre::Root::startRendering() ()
   from /home/dermont/TEST/build/v1-9/lib/libOgreMain.so.1.9.0
#13 0x0805255a in OgreBites::SampleContext::go(OgreBites::Sample*) ()
#14 0x0804ff4c in main ()
User avatar
masterfalcon
OGRE Team Member
Posts: 4270
Joined: Sun Feb 25, 2007 4:56 am
Location: Bloomington, MN
x 126
Contact:

Re: OpenGL 3+ RenderSystem

Post by masterfalcon »

The first one suggests a problem with the binary program support.

I've also seen the second error but I haven't had a chance to debug the issue. It occurs in nedalloc from an OGRE_FREE call on line 382 of OgreTerrain.cpp.
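At the GL level, the robust pattern is to treat a rejected binary as a cache miss (a generic sketch with placeholder variable names, not OGRE's actual code):

Code: Select all

// Generic sketch: fall back to source compilation when the cached binary
// is rejected by the driver.
glProgramBinary(program, binaryFormat, binaryData, binaryLength);
GLint linked = GL_FALSE;
glGetProgramiv(program, GL_LINK_STATUS, &linked);
if (linked != GL_TRUE)
{
    // "Binary is not compatible with current driver/hardware combination":
    // recompile the GLSL source and re-link rather than attaching shaders
    // to the failed program object.
    recompileFromSource(program); // hypothetical fallback helper
}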
dermont
Bugbear
Posts: 812
Joined: Thu Dec 09, 2004 2:51 am
x 42

Re: OpenGL 3+ RenderSystem

Post by dermont »

masterfalcon wrote:The first one suggests a problem with the binary program support.

I've also seen the second error but I haven't had a chance to debug the issue. It occurs in nedalloc from an OGRE_FREE call on line 382 of OgreTerrain.cpp.
Yeah, I'm trying to take a look at the second error with gdb / Ogre built with Debug enabled. Unfortunately the GL3 renderer segfaults on _createRenderWindow for a Debug build. I'll have to trace this issue first.

Code: Select all

******************************
*** Starting GLX Subsystem ***
******************************
GL3PlusRenderSystem::_createRenderWindow "OGRE Sample Browser", 800x600 windowed  miscParams: FSAA=0 displayFrequency=50 Hz gamma=No vsync=No 

Program received signal SIGSEGV, Segmentation fault.
0xb5bd67b6 in ?? () from /usr/lib/libGL.so.1
(gdb) bt
#0  0xb5bd67b6 in ?? () from /usr/lib/libGL.so.1
#1  0xb5bcd639 in glXMakeCurrent () from /usr/lib/libGL.so.1
#2  0xb3a037b6 in Ogre::GLXContext::setCurrent (this=0x815af90)
    at /home/dermont/TEST/src/v1-9/RenderSystems/GL3Plus/src/GLX/OgreGLXContext.cpp:77
#3  0xb39ec279 in Ogre::GLXWindow::setVSyncEnabled (this=0xb638e680, 
    vsync=false)
    at /home/dermont/TEST/src/v1-9/RenderSystems/GL3Plus/src/GLX/OgreGLXWindow.cpp:563
#4  0xb39eb820 in Ogre::GLXWindow::create (this=0xb638e680, name=..., 
    width=800, height=600, fullScreen=false, miscParams=0xbfffee54)
    at /home/dermont/TEST/src/v1-9/RenderSystems/GL3Plus/src/GLX/OgreGLXWindow.cpp:431
#5  0xb39de6ce in Ogre::GLXGLSupport::newWindow (this=0x81011c8, name=..., 
    width=800, height=600, fullScreen=false, miscParams=0xbfffee54)
    at /home/dermont/TEST/src/v1-9/RenderSystems/GL3Plus/src/GLX/OgreGLXGLSupport.cpp:386
#6  0xb3a14a4f in Ogre::GL3PlusRenderSystem::_createRenderWindow (
    this=0xb6ad38a0, name=..., width=800, height=600, fullScreen=false, 
    miscParams=0xbfffee54)
    at /home/dermont/TEST/src/v1-9/RenderSystems/GL3Plus/src/OgreGL3PlusRenderSystem.cpp:635
#7  0xb39de444 in Ogre::GLXGLSupport::createWindow (this=0x81011c8, 
    autoCreateWindow=true, renderSystem=0xb6ad38a0, windowTitle=...)
    at /home/dermont/TEST/src/v1-9/RenderSystems/GL3Plus/src/GLX/OgreGLXGLSupport.cpp:375

#8  0xb3a1278b in Ogre::GL3PlusRenderSystem::_initialise (this=0xb6ad38a0, 
    autoCreateWindow=true, windowTitle=...)
    at /home/dermont/TEST/src/v1-9/RenderSystems/GL3Plus/src/OgreGL3PlusRenderSystem.cpp:209
#9  0xb7c067d7 in Ogre::Root::initialise (this=0xb6acb388, 
    autoCreateWindow=true, windowTitle=..., customCapabilitiesConfig=...)
    at /home/dermont/TEST/src/v1-9/OgreMain/src/OgreRoot.cpp:690
#10 0x08071043 in OgreBites::SampleBrowser::createWindow (this=0xbffff0e4)
    at /home/dermont/TEST/src/v1-9/Samples/Browser/include/SampleBrowser.h:1252
#11 0x08070695 in OgreBites::SampleBrowser::setup (this=0xbffff0e4)
    at /home/dermont/TEST/src/v1-9/Samples/Browser/include/SampleBrowser.h:1049
#12 0x0805db61 in OgreBites::SampleContext::initApp (this=0xbffff0e4, 
    initialSample=0x0)
    at /home/dermont/TEST/src/v1-9/Samples/Common/include/SampleContext.h:266
#13 0x0805dc48 in OgreBites::SampleContext::go (this=0xbffff0e4, 
    initialSample=0x0)
    at /home/dermont/TEST/src/v1-9/Samples/Common/include/SampleContext.h:323
#14 0x0805b148 in main (argc=2, argv=0xbffff294)
    at /home/dermont/TEST/src/v1-9/Samples/Browser/src/SampleBrowser.cpp:116
dermont
Bugbear
Posts: 812
Joined: Thu Dec 09, 2004 2:51 am
x 42

Re: OpenGL 3+ RenderSystem

Post by dermont »

dermont wrote:
Yeah, I'm trying to take a look at the second error with gdb / Ogre built with Debug enabled. Unfortunately the GL3 renderer segfaults on _createRenderWindow for a Debug build. I'll have to trace this issue first.

Code: Select all

diff --git a/RenderSystems/GL3Plus/src/GLX/OgreGLXGLSupport.cpp b/RenderSystems/GL3Plus/src/GLX/OgreGLXGLSupport.cpp
--- a/RenderSystems/GL3Plus/src/GLX/OgreGLXGLSupport.cpp
+++ b/RenderSystems/GL3Plus/src/GLX/OgreGLXGLSupport.cpp
@@ -819,7 +819,7 @@
 	//-------------------------------------------------------------------------------------------------//
 	::GLXContext GLXGLSupport::createNewContext(GLXFBConfig fbConfig, GLint renderType, ::GLXContext shareList, GLboolean direct) const
 	{
-		::GLXContext glxContext;
+		::GLXContext glxContext=NULL;
 		int context_attribs[] =
 		{
 			GLX_CONTEXT_MAJOR_VERSION_ARB, 5,

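Presumably what happens in the Debug build is that none of the glXCreateContextAttribsARB attempts succeed, so createNewContext returns an uninitialised pointer that glXMakeCurrent later dereferences. With the NULL initialisation, the failure at least becomes detectable (an illustrative sketch, not the actual OGRE code):

Code: Select all

// Illustrative sketch (fbConfig, shareList, direct as in createNewContext
// above; "display" stands in for the X display handle).
::GLXContext ctx = glXCreateContextAttribsARB(display, fbConfig, shareList,
                                              direct, context_attribs);
if (!ctx)
{
    // Retry with a lower GL version or report the error instead of handing
    // an uninitialised pointer to glXMakeCurrent.
}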
Edit:
Regarding the problem with Sample_DualQuaternion, I've attached my log file. It appears to contain additional info which may be useful to you.
Attachments
ogre.log.tar.gz
(13.76 KiB) Downloaded 406 times
User avatar
holocronweaver
Google Summer of Code Student
Posts: 273
Joined: Mon Oct 29, 2012 8:52 pm
Location: Princeton, NJ
x 47

Re: OpenGL 3+ RenderSystem

Post by holocronweaver »

Just want to say that I will be posting updates on my post-GSoC GL3+ RS work in this thread. My work will be in the ogre-gl3plus-beta fork. At the moment I am fixing up the remaining samples.
Ident
Gremlin
Posts: 155
Joined: Thu Sep 17, 2009 8:43 pm
Location: Austria
x 9
Contact:

Re: OpenGL 3+ RenderSystem

Post by Ident »

I am trying to get the latest CEGUI release (0.8.3) ready to support your Ogre GL3 renderer changes, based on Ogre's current default branch. While doing so, I have been improving our (CEGUI's) step-by-step guide for Ogre/CEGUI, but that is just a side note. Once this is all done I will use CEGUI and Ogre in my project, which depends on tessellation shaders.

My progress so far: everything compiles, links and runs. However, once the CEGUI sample framework starts rendering (fixed-function enabled) I get error messages from the following function: void GL3PlusRenderSystem::_render(const RenderOperation& op)
The error message is "ERROR: Failed to create separable program." (line 1773).
This message is spammed non-stop while the screen stays black. Evidently a message is sent for every geometry buffer that it attempts to draw. I assume the OGL3Renderer uses shader emulation for the fixed-function behaviour, so that it doesn't rely on any deprecated OGL calls, is that correct? From my understanding this should fully replace the original behaviour, so this problem should not occur in the first place, and at least with this option it should still run without problems.

I tried rendering with the fixed-function pipeline on and off. The CEGUI Renderer for Ogre handles these two options separately. Neither currently works, for different reasons, but I will first focus on getting the fixed-function rendering done. I intend to use the GLSL shader code that I wrote for our CEGUI OpenGL3 renderer for the programmable-pipeline option, which should work without problems. I will also have to take care that the CEGUI Ogre renderer changes applied to fix OGL3 don't break the other rendering options of Ogre or older Ogre versions (worst case, I will have to rely on macros). So let's inspect the issue:

The problem currently is that GLSLSeparableProgramManager::getSingleton().getCurrentSeparableProgram() returns 0. I have not looked into it much more deeply yet, but maybe you could point me towards something that might be missing compared to the original OGL fixed-function pipeline?
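For reference, the GL-level mechanism behind a separable program (a generic ARB_separate_shader_objects sketch with trivial stage sources, not OGRE's actual code path) looks like this:

Code: Select all

// Generic sketch: each stage links into its own program object, then a
// program pipeline object combines the stages.
const char* vsSrc = "#version 150\nvoid main(){ gl_Position = vec4(0.0); }";
const char* fsSrc = "#version 150\nout vec4 colour;\nvoid main(){ colour = vec4(1.0); }";
GLuint vs = glCreateShaderProgramv(GL_VERTEX_SHADER, 1, &vsSrc);
GLuint fs = glCreateShaderProgramv(GL_FRAGMENT_SHADER, 1, &fsSrc);
GLuint pipeline = 0;
glGenProgramPipelines(1, &pipeline);
glUseProgramStages(pipeline, GL_VERTEX_SHADER_BIT, vs);
glUseProgramStages(pipeline, GL_FRAGMENT_SHADER_BIT, fs);
glBindProgramPipeline(pipeline);
// If a material supplies no shaders for a stage (e.g. fixed-function),
// there is nothing to link, which would be consistent with
// getCurrentSeparableProgram() returning 0.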
scrawl
OGRE Expert User
Posts: 1119
Joined: Sat Jan 01, 2011 7:57 pm
x 216

Re: OpenGL 3+ RenderSystem

Post by scrawl »

The GL3 renderer itself does not provide fixed-function emulation. To do this you can use the RTSS. However, that only works when rendering through Ogre's material / renderable system, so it will not work in your case, since CEGUI bypasses Ogre's scene manager and renders manually using RenderSystem methods.
MyGUI does this too. GUI libraries circumvent Ogre's material system for a reason: setting all possible material states for each batch is far too inefficient, especially since Ogre does not have a state cache yet. If you already know the set of states you'll need (which, for rendering a GUI, is extremely small), then manually using the render system ("immediate mode") will be much more efficient.
Bottom line, you should just scrap the fixed-function path.
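For reference, the shape of that immediate-mode path in Ogre 1.x looks roughly like this (a sketch assuming a vertex buffer you have prepared yourself, not drop-in code):

Code: Select all

#include <OgreRoot.h>
#include <OgreRenderSystem.h>
#include <OgreRenderOperation.h>

// Sketch only (Ogre 1.x names): render a prepared vertex buffer straight
// through the RenderSystem, bypassing materials and the scene manager.
void drawGuiBatch(Ogre::VertexData* vertexData)
{
    Ogre::RenderSystem* rs = Ogre::Root::getSingleton().getRenderSystem();

    // Set only the handful of states a GUI actually needs.
    rs->_setCullingMode(Ogre::CULL_NONE);
    rs->setScissorTest(true, 0, 0, 800, 600);

    Ogre::RenderOperation op;
    op.operationType = Ogre::RenderOperation::OT_TRIANGLE_LIST;
    op.useIndexes = false;
    op.vertexData = vertexData; // prepared once, reused per batch
    rs->_render(op);
}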
Last edited by scrawl on Tue Dec 31, 2013 7:25 pm, edited 3 times in total.
User avatar
masterfalcon
OGRE Team Member
Posts: 4270
Joined: Sun Feb 25, 2007 4:56 am
Location: Bloomington, MN
x 126
Contact:

Re: OpenGL 3+ RenderSystem

Post by masterfalcon »

Actually Ogre does have a state cache, it just hasn't arrived in GL3+ yet. But it will, I promise.
User avatar
holocronweaver
Google Summer of Code Student
Posts: 273
Joined: Mon Oct 29, 2012 8:52 pm
Location: Princeton, NJ
x 47

Re: OpenGL 3+ RenderSystem

Post by holocronweaver »

masterfalcon wrote:Actually Ogre does have a state cache, it just hasn't arrived in GL3+ yet. But it will, I promise.
You mean like the one in GLES RS?

I would prefer to avoid implementing a catch-all state cache in the GL3+ RS as the performance boost should be negligible (or possibly negative) for most state functions and the additional code cruft considerable. The GL3+ RS is itself a wrapper of OpenGL, so adding another wrapper layer between OGRE and OpenGL just adds another level of API complexity and indirection in the code.

The drivers implement and maintain the OpenGL state machine, so it should be their responsibility to maintain it in an optimal fashion. I understand the goal is to avoid unnecessary function calls and be more cache friendly, but on a PC from the past 10 years the cache is sufficiently large and optimized that cache misses should be few and the call overhead minuscule.

If we have any state cache, it should be limited to the performance killers discovered via profiling. Currently, things like the texture, buffer and program managers could take care of unnecessary binding in those respective arenas, which I believe are the most costly state changes in OpenGL. Outside of that I am not aware of any state worth caching.
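To make that concrete, the kind of narrow cache I have in mind (a hypothetical sketch, not existing OGRE code) would only guard the expensive binds:

Code: Select all

#include <GL/glew.h>

// Hypothetical sketch: cache only texture binds, the kind of state change
// worth guarding; everything else goes straight to the driver.
class TextureBindCache
{
    static const int MAX_UNITS = 32;
    GLuint mBound[MAX_UNITS]; // 0 = nothing bound on that unit
public:
    TextureBindCache() { for (int i = 0; i < MAX_UNITS; ++i) mBound[i] = 0; }

    void bind(GLuint unit, GLuint texture)
    {
        if (mBound[unit] == texture)
            return; // redundant bind: skip the GL calls entirely
        glActiveTexture(GL_TEXTURE0 + unit);
        glBindTexture(GL_TEXTURE_2D, texture);
        mBound[unit] = texture;
    }
};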
scrawl
OGRE Expert User
Posts: 1119
Joined: Sat Jan 01, 2011 7:57 pm
x 216

Re: OpenGL 3+ RenderSystem

Post by scrawl »

You are probably right. Caching everything seems wrong. There are / were quite a few of those performance killers in the GL RS (e.g. _setPointSpritesEnabled: 34 GL calls per renderable even if point sprites are never used), but the GL3 RS already looks much tidier in this regard (and the fixed-function states are removed :) )
The drivers implement and maintain the OpenGL state machine, so it should be their responsibility to maintain it in an optimal fashion.
Unfortunately, there is no "optimal fashion". Checking for redundant state changes in all cases and preventing them may in itself be more work than just setting the state anyway. Caching, if any, should be managed by the application.

We should really do some actual profiling before we add any cache at all. Also, adding a cache will not help if the layer above the cache is doing stupid things, such as _setPointSpritesEnabled.
scrawl
OGRE Expert User
Posts: 1119
Joined: Sat Jan 01, 2011 7:57 pm
x 216

Re: OpenGL 3+ RenderSystem

Post by scrawl »

I would prefer to avoid implementing a catch-all state cache in the GL3+ RS as the performance boost should be negligible (or possibly negative) for most state functions and the additional code cruft considerable. The GL3+ RS is itself a wrapper of OpenGL, so adding another wrapper layer between OGRE and OpenGL just adds another level of API complexity and indirection in the code.
It's true that a catch-all layer will not magically give you the best performance.
If it were that simple, the drivers would implement it.

Instead of caching GL state, we should cache "meta" state.

Take the infamous GLRenderSystem::_setPointSpritesEnabled as an example:

Code: Select all

    void GLRenderSystem::_setPointSpritesEnabled(bool enabled)
    {
        if (!getCapabilities()->hasCapability(RSC_POINT_SPRITES))
            return;

        if (enabled)
        {
            mStateCacheManager->setEnabled(GL_POINT_SPRITE);
        }
        else
        {
            mStateCacheManager->setDisabled(GL_POINT_SPRITE);
        }

        // Set sprite texture coord generation
        // Don't offer this as an option since D3D links it to sprite enabled
        for (ushort i = 0; i < mFixedFunctionTextureUnits; ++i)
        {
            mStateCacheManager->activateGLTextureUnit(i);
            glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE,
                      enabled ? GL_TRUE : GL_FALSE);
        }
        mStateCacheManager->activateGLTextureUnit(0);
    }
With NullStateCacheManager:
34 GL calls / renderable

With StateCacheManager:
Worst case: 34 GL calls / renderable + 18 conditionals
Best case: 17 GL calls / renderable + 18 conditionals

Proposed solution: Cache meta state!

Code: Select all

    void GLRenderSystem::_setPointSpritesEnabled(bool enabled)
    {
        // One comparison replaces every GL call when the state matches.
        if (mActiveGLContextMetaStates->mPointSpritesEnabled == enabled)
            return;
        mActiveGLContextMetaStates->mPointSpritesEnabled = enabled;
        // ... fall through to the GL calls as before ...
    }
Worst case: 34 GL calls / renderable
Best case: 0 GL calls / renderable + 1 conditional
User avatar
masterfalcon
OGRE Team Member
Posts: 4270
Joined: Sun Feb 25, 2007 4:56 am
Location: Bloomington, MN
x 126
Contact:

Re: OpenGL 3+ RenderSystem

Post by masterfalcon »

Are you only referring to boolean GL states? We certainly could change the API and internals of the cache. I'll admit that some of that is still a bit unwieldy.
Ident
Gremlin
Posts: 155
Joined: Thu Sep 17, 2009 8:43 pm
Location: Austria
x 9
Contact:

Re: OpenGL 3+ RenderSystem

Post by Ident »

scrawl wrote: Bottom line, you should just scrap the fixed-function path.
Thanks for your response.
I thought that instead of scrapping the fixed-function path for OGL3 entirely, I could just use the same shaders there as I use for the non-fixed-function path. I think this shouldn't interfere with anything the user could do; the user could still emulate fixed-function with shaders, as you described. However, if I missed something, I appreciate your input on this.

To check for this case (OGL3 in use and fixed-function on) and still use shaders, I tried to find a good way to check whether the OGL3 renderer is currently in use. I ended up checking the first 8 characters of the render system name:

Code: Select all

d_renderSystem->getName().compare(0, 8, "OpenGL 3") == 0
This will of course only work as long as the name does not change in the future. I didn't find an enum or similar that I could check instead. By the way, I only compare against the first 8 characters because the current render system name ends with (ALPHA), so I thought this was the best way to maintain forward compatibility.
Do you think this is a viable solution, and/or is there perhaps a better one?
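An alternative that avoids string parsing, assuming Ogre's RenderSystemCapabilities exposes shader profile queries (untested on my side), would be a capability check:

Code: Select all

// Untested sketch: query a GLSL 1.50 profile via the capabilities API
// instead of parsing the render system name.
if (d_renderSystem->getCapabilities()->isShaderProfileSupported("glsl150"))
{
    // take the OpenGL 3 code path
}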

EDIT1: I pushed my changes to the cegui v0-8 branch, which is therefore ready for the current state of the OGL3Renderer from the Ogre default branch. The glsl150 shader and other relevant CEGUI changes are in this commit, for anyone who is interested. I will now proceed to use CEGUI + Ogre OGL3 in a project for university.
TheSHEEEP
OGRE Retired Team Member
Posts: 972
Joined: Mon Jun 02, 2008 6:52 pm
Location: Berlin
x 65

Re: OpenGL 3+ RenderSystem

Post by TheSHEEEP »

Do we have any statistics on whether there is a significant performance difference between the old GL render system and the GL3+ one?

I was asked if it would make sense to switch to the new render system even if no specific GL3+ features are used, so basically just switch the loaded plugin and fix the bugs that may happen with the switch. Other than suggesting they try it out, I could give no good answer. So now I pass that question on to you ;)
My site! - Have a look :)
Also on Twitter - extra fluffy
User avatar
holocronweaver
Google Summer of Code Student
Posts: 273
Joined: Mon Oct 29, 2012 8:52 pm
Location: Princeton, NJ
x 47

Re: OpenGL 3+ RenderSystem

Post by holocronweaver »

TheSHEEEP wrote:I was asked if it would make sense to switch to the new render system even if no specific GL3+ features are used, so basically just switch the loaded plugin and fix the bugs that may happen with the switch.
Unless you rely on the fixed-function pipeline, have modified the GL render system, or use its 'private public' methods that begin with an underscore, switching to the GL3+ RS should not involve any significant modifications to your existing code.
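The mechanical part of the switch is tiny; a sketch of doing it in code rather than via plugins.cfg (plugin library name as shipped with Ogre 1.9/1.10; treat as illustrative):

Code: Select all

// Load the GL3+ plugin instead of the GL one, then pick the renderer
// from the loaded list (its display name may still change, so no
// hard-coded name lookup here).
Ogre::Root* root = new Ogre::Root("", "ogre.cfg", "ogre.log"); // skip plugins.cfg
root->loadPlugin("RenderSystem_GL3Plus"); // was: "RenderSystem_GL"
root->setRenderSystem(root->getAvailableRenderers().front());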

While the use of VAOs in GL3+ might give you a performance boost on some drivers / platforms, I think the primary reason to switch is maintenance updates and future-proofing. I will be using the GL3+ RS in my own game, so it will receive constant love and attention for at least the next couple of years, while I plan to spend essentially no time on the older GL RS it replaces.

Of course, the best way to measure the benefits is to try switching and post your results here, including any differences in performance. I suspect the switch should take all of 10 to 20 minutes for most programs. If you encounter problems, provide logs and I will debug until it works.
TheSHEEEP
OGRE Retired Team Member
Posts: 972
Joined: Mon Jun 02, 2008 6:52 pm
Location: Berlin
x 65

Re: OpenGL 3+ RenderSystem

Post by TheSHEEEP »

Oh, we'll definitely use GL3+, too, once we are done with our research (for which we're currently using the old GL RS), so I will start pestering you with issues soon enough :D
But I was asked by a student who is considering the switch right now.
My site! - Have a look :)
Also on Twitter - extra fluffy
Ident
Gremlin
Posts: 155
Joined: Thu Sep 17, 2009 8:43 pm
Location: Austria
x 9
Contact:

Re: OpenGL 3+ RenderSystem

Post by Ident »

There was once a post by an OpenGL expert, I think on the OpenGL forum, which said that fixed-function can be faster than programmable shaders in some cases. This mostly applies to very simple rendering (e.g. texturing and colouring only) and is due to the fact that vendors have optimised their drivers a lot for these fixed-function cases. However, once you do more complex shading, the performance benefit will be on the side of the programmable shaders. For example, you can do a lot of things in a single pass with a programmable shader that are multi-pass in the fixed-function pipeline. You might also have more efficient solutions for calculations like fog or shadows than are possible using fixed-function.

The important thing to note is that a lot of the functionality in OGL3 core was already present in OpenGL 2.x, but was purely OPTIONAL to use. This means that switching to the GL 3.2 Core Profile won't magically make your program run faster if you already use programmable shaders, VAOs, VBOs, FBOs, etc. With the OGL 3.2+ Core Profile you are limited to the modern solutions for rendering and cannot use the deprecated stuff at all. So when you use OGL3 core you can be sure you are programming graphics the "modern" way, that it will still be well supported by GPUs in the following years, and that it is ready for additions. I also assume the deprecated stuff will at some point not be supported as well by drivers, or will just be emulated via the programmable pipeline anyway, where possible. What's also nice about the 3.2 Core Profile is that if it is supported by your GPU, certain minimum requirements are assured, for example the availability of compressed texture formats or a minimum number of simultaneously bound textures. This allows you to build your application relying on such features without being uncertain about their availability.

Bottom line: the fixed-function pipeline is great if you are doing basic rendering; for example, Torchlight 1 and 2 relied only on fixed-function. I asked a developer at gamescom, long ago and before TL2 was released, if they used shaders, and they said it was just fixed-function. It must have been a conscious decision to do it this way because they wanted a simple, non-realistic fantasy style in their game, and it worked well for them, as you can see. Overall there is no real reason to switch to OGL3 if you already have your project running, unless you want to rely on modern features like geometry shaders or tessellation. For anyone starting a new project who wants to learn shader programming well, I would heavily recommend starting with OGL3 only. There is absolutely no advantage in knowing how the fixed-function pipeline works before you start learning about programmable shaders.
User avatar
holocronweaver
Google Summer of Code Student
Posts: 273
Joined: Mon Oct 29, 2012 8:52 pm
Location: Princeton, NJ
x 47

Re: OpenGL 3+ RenderSystem

Post by holocronweaver »

Since this is a student, I agree with Ident that they should skip the fixed function pipeline and go straight to pure shaders / GL core profile. The only reason not to use the GL3+ RS is if the student wishes to use shaders that come with OGRE, such as RTSS shaders for things like shadows, since not all of them have been ported. That being said, I think students learn best by doing everything themselves at least once and porting shaders to GL 3 should not be difficult.
AusSkiller
Gremlin
Posts: 158
Joined: Wed Nov 28, 2012 1:38 am
x 13

Re: OpenGL 3+ RenderSystem

Post by AusSkiller »

Hi, I'm working on a game that uses deferred rendering, and I'm planning on using MSAA for anti-aliasing, which requires OpenGL 3+ (I need to manually resolve the multisampled g-buffer to get correct lighting, and I'm ignoring DirectX). I was wondering how much of what I require for that will be implemented in the OpenGL 3+ render system for the 1.10 release of Ogre. The main thing I need is glTexImage2DMultisample textures to render to and a way to reference them so I can feed them into a GLSL shader that does a manual resolve. Of course the size of the g-buffer gets quite crazy at 16x MSAA (500+ MB at 1920x1080), so I'm also curious if you will be implementing nVidia's CSAA extensions with support for TexImage2DMultisampleCoverageNV, which could save nearly half, or even nearly three quarters, of the data necessary for 16x MSAA while still producing similar results with 16xQ CSAA or 16x CSAA.
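To be concrete, at the GL level what I'm after is essentially this (sketched outside of Ogre):

Code: Select all

// GL-level sketch: a 16x multisampled colour target that a resolve shader
// can later read through a sampler2DMS with texelFetch().
GLuint tex = 0, fbo = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, tex);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, 16, GL_RGBA16F,
                        1920, 1080, GL_TRUE);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D_MULTISAMPLE, tex, 0);
// In the resolve pass, bind tex to a sampler2DMS uniform and fetch single
// samples with texelFetch(gbuffer, coord, sampleIndex).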

I could probably add them in myself if you don't intend to, but I suspect you would do a much better job of it, as your knowledge of how the Ogre render system works is likely MUCH better than mine. So if you plan on implementing them, I should probably just hold off and wait for your implementation ;).

BTW where is the most up to date version of your work on the OGL3+ render system, is it in the 1.10 (default?) branch yet?
User avatar
holocronweaver
Google Summer of Code Student
Posts: 273
Joined: Mon Oct 29, 2012 8:52 pm
Location: Princeton, NJ
x 47

Re: OpenGL 3+ RenderSystem

Post by holocronweaver »

The main thing I need is glTexImage2DMultisample textures to render to and a way to reference them so I can feed them into a GLSL shader that does a manual resolve.
Funny you should mention this because I just switched to implementing it. Will try to get it done this week.

Since deferred shading is all about RTT, and RTT is most easily accomplished using compositors, I suggest following the OGRE wiki guide to deferred shading, which significantly reduces the work you have to do by using shader generators and compositing.
... I'm also curious if you will be implementing nVidia's CSAA extensions with support for TexImage2DMultisampleCoverageNV ...
I admittedly have never heard of this extension. Will look into it. I prefer to implement ARB extensions, but am willing to make an exception if an extension is widely supported, useful, and likely to be standardized.
BTW where is the most up to date version of your work on the OGL3+ render system, is it in the 1.10 (default?) branch yet?
I merged a good chunk of my GSoC work into the default branch in the main repo, but I have my own bleeding edge GL3+ repo.
AusSkiller
Gremlin
Posts: 158
Joined: Wed Nov 28, 2012 1:38 am
x 13

Re: OpenGL 3+ RenderSystem

Post by AusSkiller »

holocronweaver wrote:
The main thing I need is glTexImage2DMultisample textures to render to and a way to reference them so I can feed them into a GLSL shader that does a manual resolve.
Funny you should mention this because I just switched to implementing it. Will try to get it done this week.
Awesome, I look forward to checking that out when it's done, but I've got a lot of other stuff to work on so no rush :).
holocronweaver wrote:Since deferred shading is all about RTT, and RTT is most easily accomplished using compositors, I suggest following the OGRE wiki guide to deferred shading, which significantly reduces the work you have to do by using shader generators and compositing.
That's what I originally used, but the compositors caused me a lot of trouble and made it too tricky to do what I wanted. I've written a few engines using OpenGL before, so I tend to run into a lot of issues where my knowledge of what can be done conflicts with what can be done with Ogre ;). At some point I might want to give compositors another crack, though, as I'll be wanting to use some stencil operations to determine which pixels need all samples resolved and which I can get away with resolving from just one sample (which should speed up the resolve in the lighting calculations significantly), and the stencil operations were much easier to use in compositors. Will it be easy to reference the multisample textures of one framebuffer as inputs to the shader for another using compositors? If so, that would certainly make it worth giving compositors another shot :).
holocronweaver wrote:
... I'm also curious if you will be implementing nVidia's CSAA extensions with support for TexImage2DMultisampleCoverageNV ...
I admittedly have never heard of this extension. Will look into it. I prefer to implement ARB extensions, but am willing to make an exception if an extension is widely supported, useful, and likely to be standardized.
You can read about it here: https://developer.nvidia.com/csaa-cover ... tialiasing
And I found a good simple demo of it (with source) here: http://www.dhpoware.com/demos/glMultiSa ... asing.html
I'm not sure how widespread support for it is on AMD or Intel GPUs, but it was introduced with the nVidia 8800 so it's been around a while, and it was used in Half-Life 2, so unless nVidia were jerks and made it proprietary it's probably not too bad. I have been having trouble tracking down whether it is suitable for my needs, though, in particular how to use the coverage samples in GLSL shaders, but it does offer higher-quality anti-aliasing at lower memory usage, so I'm sure it would be worth having for others even if it isn't suitable for me.
holocronweaver wrote:
BTW where is the most up to date version of your work on the OGL3+ render system, is it in the 1.10 (default?) branch yet?
I merged a good chunk of my GSoC work into the default branch in the main repo, but I have my own bleeding edge GL3+ repo.
Cool, thanks. I think I'm using an alpha build from that repo at the moment :).
User avatar
holocronweaver
Google Summer of Code Student
Posts: 273
Joined: Mon Oct 29, 2012 8:52 pm
Location: Princeton, NJ
x 47

Re: OpenGL 3+ RenderSystem

Post by holocronweaver »

AusSkiller wrote:Will it be easy to reference the multisample textures of one framebuffer as inputs to the shader for another using compositors?
Yes, using the output texture of one pass as the input texture of another is what compositors are all about! Most of my implementation time will be spent making sure multisample textures are accessible from the compositor.