Rendering large scene & floating point precision on hard
-
- Goblin
- Posts: 260
- Joined: Mon Sep 01, 2003 3:59 am
- Location: London, United Kingdom
- x 1
We need to be able to render large scenes (approx. 700 km from start to end) where you can move from the start of the scene to the end in one continuous movement.
I've been looking at options for eliminating the floating point precision problems caused by this huge scene. E.g. an animation near the origin will render perfectly, since the floating point numbers used will all be near 0. Imagine that same animation running 700 km away from the origin, where small movements in arms (say 0.1 metres) will be calculated against an offset of (700000.0, 700000.0, 0.0). If we were to use the standard 32-bit floating point representation this would cause huge animation errors.
There are a few ways around this. We are considering loading up multiple SceneManagers and changing the camera from one SceneManager to the next in such a way that the user would not be able to tell. Each of these SceneManagers would represent some sub-scene of the entire world; however, each sub-scene would be translated back to (0, 0, 0) using a per-sub-scene offset. I think this solution will work fine, but I do need to test it.
Another option is to build ogre using doubles (instead of the default 32-bit floats). I've done this and ogre seems to compile and run fine (just one compiler error in the 'PlayPen' test). It runs about 3-6% slower with doubles in my test scene, but I think we can live with that.
The problem with using ogre with doubles is that the hardware may still use 32-bit floating point numbers internally, which means that all of the accuracy is lost and we'll get the same problems.
*However* I was just looking on the nVidia web site and it states that the 8800 series of GPUs use "Full 128-bit floating point precision through the entire rendering pipeline".
Does this mean that I can use ogre with doubles and everything will be ok?
Since the hardware is using a higher precision than doubles (80-bit?) the given animation problem above won't be an issue?
I can see how this could totally fall over. It could fall over anywhere the values are converted from doubles to floats and then up to 128-bit numbers. This could happen in any of ogre's shaders (the actual shader code), the shader API might use floats (e.g. cg.dll), the nVidia driver might use floats, and DirectX or OpenGL may use floats in there somewhere.
Does anyone know if the doubles within ogre would actually get to the video card pipeline intact?
-
- OGRE Retired Team Member
- Posts: 1603
- Joined: Wed Oct 20, 2004 7:54 am
- Location: Beijing, China
- x 1
-
- Goblin
- Posts: 260
- Joined: Mon Sep 01, 2003 3:59 am
- Location: London, United Kingdom
- x 1
Thanks xavier, but I don't exactly understand what you mean.
Is this what you meant?
Do you think my idea of having multiple SceneManagers will work OK? Obviously there will be a small overlap region between SceneManagers to ensure you can't "see off the end of the SceneManager". I'll have to ensure the required resources are pre-loaded using the background loading thread before doing the swap. I'm hoping this will allow me to do a seamless change of SceneManager between frames. This is basically like doing your suggested 'variable origin' (i.e. each SceneManager's sub-scene is centred on 0, 0, 0) except with the benefit of being able to compile StaticGeometry/Instancing at load time instead of moving it dynamically.
xavier wrote: Just use a variable origin -- break up your level into cells and move the origin as you move through the level.
I don't think I can do this because the scene is likely to have hundreds of thousands of objects, many of them grouped as StaticGeometry to achieve acceptable performance. If I were to move thousands of objects in a certain frame when the origin was 'shifted', I'd have to destroy and rebuild this StaticGeometry, wouldn't I? We need to maintain a smooth 60Hz frame rate at all times (after the initial load), so I can't afford to pre-process the entire scene for a shift every 'certain distance'.
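For reference, the cell idea can be sketched in a few lines of plain C++ (hypothetical names and an assumed 10 km cell size, not Ogre API): store each position as a cell index plus a small local offset, so everything handed to the renderer stays near zero.

```cpp
#include <cmath>

// Hypothetical cell-relative coordinates: the 700 km world is split into
// fixed-size cells; a world position becomes (cell index, local offset).
// All rendered coordinates then stay within one cell of the origin.
struct CellCoord {
    int   cell;   // which cell along this axis
    float local;  // offset within the cell, always small
};

const double kCellSize = 10000.0; // 10 km cells -- an assumed value

CellCoord toCell(double worldX) {
    int cell = (int)std::floor(worldX / kCellSize);
    // Subtract in double precision, then narrow: the small local offset
    // survives the conversion to float.
    return { cell, (float)(worldX - cell * kCellSize) };
}
```

The per-cell StaticGeometry can then be compiled once at load time in local coordinates, and only the origin bookkeeping changes as the camera crosses cell boundaries.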
-
- OGRE Retired Moderator
- Posts: 9481
- Joined: Fri Feb 18, 2005 2:03 am
- Location: Dublin, CA, US
- x 22
-
- Goblin
- Posts: 260
- Joined: Mon Sep 01, 2003 3:59 am
- Location: London, United Kingdom
- x 1
StaticGeometry won't work with multiple SceneManagers?!? WTF!?
I believe I use StaticGeometry for one of my scenes, and display the other one in a separate Viewport. It seems to work for me so far, although I was planning to use StaticGeometry in both SceneManagers once I got the other scene built up. What is the problem here?
I understand what you mean about breaking up the level into multiple cells and preloading them in the background. I don't think you can build StaticGeometry in the background though?
I still don't understand what you mean about 'moving the origin as I move through the level'. Do you mean move all of the objects in the entire scene by some "offset" to "reset" the origin to some value other than zero? Do you think I'll be able to move all of the entities for the visible tiles and still render the next frame within 16 ms (i.e. the 60 FPS requirement)?
-
- OGRE Retired Moderator
- Posts: 9481
- Joined: Fri Feb 18, 2005 2:03 am
- Location: Dublin, CA, US
- x 22
The renderer doesn't care what you consider your world coordinates to be. It just renders stuff from wherever you tell it. It has a frustum, which has a depth from the near to far plane, and inside that frustum is some stuff. The "real" coordinates are meaningless to the renderer -- objects are relative to one another, as well as relative to an arbitrary origin. So you "move" the origin at your convenience. You don't need to move objects -- just transform their verts to be relative to an arbitrary origin (something that can be done in a vertex shader easily).
As for having objects in multiple scene managers simultaneously -- I was under the impression that support was not in Ogre yet. You would have to detach a renderable from one scene manager and attach it to another.
-
- Goblin
- Posts: 260
- Joined: Mon Sep 01, 2003 3:59 am
- Location: London, United Kingdom
- x 1
I see what you mean, but I'm still not sure how to implement it. If we don't move our objects and instead transform the vertices in a vertex shader, we'll still lose precision badly. The object will be passed to the hardware at a position (700000.0, 700000.0, 0.0), with some other object that is meant to lie flush against it at (700000.01, 700000.01, 0.0). When these numbers are transformed as 32-bit floats in the vertex shader they will probably end up at the same location. I was thinking there would be noticeable 'gaps' between the scenery at these ranges because of rounding. The conversion of objects from 'world space' to 'camera space' would have to be done by my application, or by ogre, to avoid the limited hardware floating point precision errors creeping in.
As for multiple scene managers, I was planning to load the geometry on the 'edge' of the sub-scenes multiple times (if it belongs in multiple scene managers). I don't think there should be clashes between multiple copies of the same scene node name in separate scene managers.
I wasn't planning on 'moving' objects from one scene manager to another.
-
- OGRE Retired Moderator
- Posts: 9481
- Joined: Fri Feb 18, 2005 2:03 am
- Location: Dublin, CA, US
- x 22
Keep your world positions as doubles and transform them down into floats relative to a nearby origin before sending them for rendering. Alternately, create your level chunks in float range and the objects' precision will be kept.
Unfortunately, this probably means a good deal of rework on your level design -- this is something that needed to be taken into account long ago in the process.
-
- Goblin
- Posts: 260
- Joined: Mon Sep 01, 2003 3:59 am
- Location: London, United Kingdom
- x 1
This particular project (with a huge 700 km scene) isn't due for some time; we usually work with scenes around 30 km. But looking forward, it seems we have a lot of work to get this going: our 3dsmax import tools need to use doubles, oFusion needs to export doubles, and our ogre wrapper has to deal strictly in doubles too.
-
- OGRE Retired Moderator
- Posts: 9481
- Joined: Fri Feb 18, 2005 2:03 am
- Location: Dublin, CA, US
- x 22
Not if you break up the level into chunks that work in float range. The levels for the title we just shipped (Afterburner on the PSP) consist of long strips (hundreds of km in "real world" length) that were authored in Max, in floats. The levels are chunked and the chunks loaded as needed (and this is on a gimped platform with only a slow-ass UMD for storage media). You just need to adapt the authoring process and toolchain to make your world work in numbers that make sense for the platform.
-
- OGRE Retired Team Member
- Posts: 19269
- Joined: Sun Oct 06, 2002 11:19 pm
- Location: Guernsey, Channel Islands
- x 66
-
- Goblin
- Posts: 260
- Joined: Mon Sep 01, 2003 3:59 am
- Location: London, United Kingdom
- x 1
I've finally got around to writing a scene manager that solves our 'very large scene' floating-point problems. It was only a small amount of work in the end, but I had to choose my changes carefully.
Our scene manager works like the octree scene manager, except that during the rendering stage two things happen:
1) the view matrix is overwritten with a position of zero (orientation left intact), and
2) the geometry transforms have the camera's position subtracted from them (to transform their position to camera space).
This turns out to be an easy way of translating the entire scene to camera space (in position only), which handles any single-precision floating point problems on the hardware side of things.
Unfortunately I had to modify two classes within ogre to get this to work.
AutoParamDataSource had to be modified to add a couple of extra methods:
-setOverrideViewMatrix (to override the view matrix)
-setGeometryOffset (to set the geometry offset used for the lights)
-getLightPosition (to get the moved light position)
-getLightAs4DVector (to get the moved light position)
GpuProgram had to be modified to call getLightPosition and getLightAs4DVector from AutoParamDataSource, because it was explicitly calling Light::getDerivedPosition and Light::getAs4DVector, which did not take into account the geometry offset.
How would you like me to post these changes to the ogre source?
These may or may not be the kind of thing you want to roll into the actual ogre codebase.
I think adding the getLightPosition and getLightAs4DVector methods to AutoParamDataSource and GpuProgram is a worthwhile change. It feels a bit 'wrong' that GpuProgram is calling methods of Light explicitly; AutoParamDataSource should be acting as the interface for that information.
Alternatively, if AutoParamDataSource were implemented as an overridable class I could have sub-classed it and added my own methods without having to update the ogre source. However, the way the reference to AutoParamDataSource is passed from the SceneManager through to the GpuProgram doesn't allow this.
Anyway, thanks for your help. Let me know how you want me to post my changes.

-
- OGRE Expert User
- Posts: 1538
- Joined: Sat Jan 14, 2006 8:00 pm
- x 1
-
- Goblin
- Posts: 260
- Joined: Mon Sep 01, 2003 3:59 am
- Location: London, United Kingdom
- x 1
I haven't heard of the term "boiling" before so I'm not sure what you are referring to. I do have to compile ogre with double-precision (and build everything else with double-precision) so that the very large offset calculations result in accurate numbers, but the scene looks fine to me.
If I load an oFusion scene at its original position it works fine with the standard OctreeSceneManager and our own scene manager.
If I offset the oFusion scene by (100000, 0, 100000) and render it using the standard OctreeSceneManager, you can see gaps between the geometry, and the geometry 'flickers'/'shifts' backwards and forwards as you move around. If I load this scene with our own scene manager it looks fine, just as it did in the first example.
Our scene contains many GPU-animated people, glow shaders on lights, etc., which is why I needed to modify the AutoParamDataSource. Also, static geometry must be turned off once you get far from the origin, because it just throws an "Out of range" exception (or something similar) for very large floats (which is a fair way to handle it).
-
- Gold Sponsor
- Posts: 26
- Joined: Sun Jan 14, 2007 11:56 pm
- Location: chicago, il
gerds wrote: I haven't heard of the term "boiling" before so I'm not sure what you are referring to.
"Boiling" geometry refers to how geometry will start to jitter as the vertex coordinates become large. This is due to the quantization inherent as floating point exponents become large (i.e. if you're using bits to describe the big part of the number, you have fewer bits left to describe the small parts).
I worked on a PC game called "Freelancer" once, and we had this problem in certain situations. It didn't become evident until a tester decided to point their ship into empty space and leave the game running for 24 hours while they went to bed. When they got back, everything was jittering like mad, it looked like the geometry was trying to shake itself apart...
-
- Goblin
- Posts: 260
- Joined: Mon Sep 01, 2003 3:59 am
- Location: London, United Kingdom
- x 1
For those interested, I've started a thread on my proposed changes to AutoParamDataSource and GpuProgram on this thread: http://www.ogre3d.org/phpBB2/viewtopic.php?t=34260
-
- Gnoblar
- Posts: 2
- Joined: Wed Jun 11, 2008 10:55 pm
- Location: Paris, France
Hi everyone,
I am currently working on a project based on very large distances and I'm experiencing the same 'boiling' problem when I get too far from the origin.
So I decided to follow the hints given here and write my own SceneManager.
The problem is that I'm not very experienced in Ogre core, or even with rendering systems in general.
I looked through the sources to find the correct place to do the geometry transforms, and found that renderSingleObject in the SceneManager is one of the lowest-level functions before the object is sent to the RenderSystem.
Is this the function I have to override in my SceneManager, by doing something like:
Code: Select all
void MySceneManager::renderSingleObject(const Renderable* rend, const Pass* pass,
    bool lightScissoringClipping, bool doLightIteration, const LightList* manualLightList)
{
    // Do geometry transforms here
    SceneManager::renderSingleObject(rend, pass, lightScissoringClipping,
        doLightIteration, manualLightList);
    // Put the object back at its original position
}
Thanks for any hints about this.
Last edited by lujeni on Tue Jun 24, 2008 10:11 am, edited 1 time in total.
-
- Goblin
- Posts: 260
- Joined: Mon Sep 01, 2003 3:59 am
- Location: London, United Kingdom
- x 1
You are very close.
You should be able to modify renderSingleObject to calculate the transforms in eye-space, instead of world-space, before passing them to the render system.
The other option would be to modify the render system itself to apply the offset to the transforms, but that's not how I have done it.
-
- Gnoblar
- Posts: 2
- Joined: Wed Jun 11, 2008 10:55 pm
- Location: Paris, France
Hi !
After reading your posts on the subject and many papers about 3D transformations, I think I understand the mechanics of world-space and eye-space better, and it's actually not as difficult as it seemed to be.
So I got my hands into renderSingleObject, and here is what I wrote. I'm currently just trying to get a simple rendering, without worrying about shaders and AutoParamDataSource, which I can handle once the basics work.
Code: Select all
Matrix4 camView = mCameraInProgress->getViewMatrix();
Vector3 trans = camView.getTrans();
camView.setTrans(Vector3::ZERO);
mDestRenderSystem->_setViewMatrix(camView); // (1) overwrite the view matrix with a position of zero
mResetIdentityView = true;

numMatrices = rend->getNumWorldTransforms();
if (numMatrices > 0)
{
    rend->getWorldTransforms(mTempXform);
    for (unsigned int i = 0; i < numMatrices; ++i)
        mTempXform[i].setTrans(mTempXform[i].getTrans() - trans); // (2) remove the camera position from the transforms

    if (numMatrices > 1)
    {
        mDestRenderSystem->_setWorldMatrices(mTempXform, numMatrices);
    }
    else
    {
        mDestRenderSystem->_setWorldMatrix(*mTempXform);
    }
}
// Rest of the function
So, after viewing the result I think I'm not far off, because I still get an image on the screen, but I must have missed something because it gets a little weird when I move my camera.
The camera does not move as expected, and as a result some geometry is not rendered even though it is in front of the camera. I think that could be explained by the camera not being where it is supposed to be, while the geometry is culled as if it were in the proper place.
Can anyone see something gross in my code? After hours of losing my brain doing step-by-step debugging, I may have made simple mistakes that I can't see anymore.
Thank you.
PS: My English may not be so good, I apologize for that, but it's quite difficult to explain this sort of thing in a language that is not yours.
-
- Goblin
- Posts: 260
- Joined: Mon Sep 01, 2003 3:59 am
- Location: London, United Kingdom
- x 1
I think what you've got there should work.
I'm not sure why you need to set mResetIdentityView to true, but apart from that it looks fine.
You need to be careful which geometry you apply the camera offset to; for example, you do not want to apply the camera offset to the overlays. But that doesn't sound like the kind of problem you are having.
I'd look carefully through the rest of the scene manager code to ensure you know how things work. What you are doing is correct, but you might need to do some tweaking elsewhere to ensure everything is OK.