Remove calls to srand
Posted: Sat Mar 26, 2005 11:55 pm
I suggest that the calls to srand in the constructors of Root and Math are removed. I don't think a well-behaved library should call srand, it should only be called by the application. Sometimes, for example in networked games or for testing, you may want everything to be deterministic.
Sure, you can call srand yourself to reseed the RNG, but note that both of these classes are singletons and are thus created as statics, which makes it a little complicated to know when the objects are actually created.
Posted: Sun Mar 27, 2005 12:11 am
I take the point, but please note that Singleton instances are not statics, only the pointers are. The instances are created at very predictable times - if you reseeded after Root::initialise, you would have guaranteed behaviour. One of the reasons our Singletons are not auto-creating is because I like to have 100% predictability of construction and destruction.
Posted: Sun Mar 27, 2005 9:18 am
Why would we remove srand?
srand can be deterministic.
ODE uses a dRandSetSeed() function; here's an interesting and easy-to-read topic on that: http://ode.petrucci.ch/viewtopic.php?p= ... 0715561558
Ogre::Root can/should have a similar method that is to be called before Root::initialise() for those who need it (or a parameter with a default value, to be more explicit to users).
If you're allowed to speak about it, I'm curious why you would need Ogre to be deterministic.
A client-server game wouldn't need that, since client-side prediction (dead reckoning) doesn't use the same algorithm as the server... so is it a p2p Ogre-based project?
Posted: Sun Mar 27, 2005 12:50 pm
I tend to agree; libraries should preferably not mess with the C library random seed. It's not as bad as changing the locale, but still. Why is it done?
Posted: Sun Mar 27, 2005 1:06 pm
Well, as Sinbad pointed out, the singletons are not created as statics, so this is not as big a problem as I first thought. But I still think that a library should not call srand, because the user may not expect it. For example, a user may call srand first in their program and think everything is well, not realizing that Ogre reseeds it. ODE, by the way, does not use the time to seed its RNG but instead sets it to a fixed value (which I think is the correct thing to do).
We are using OgreOde and want to run the GranTurismOgre demo on four clients, each connected to a screen, forming a cave around the user. The server sends input events regularly to the clients who should perform the exact same simulations.
We have had some problems with this (not necessarily related to the RNGs). When I run the clients on a single computer they are always in sync, but on multiple computers it only sometimes works (the clients either start deviating immediately or not at all). I have had some discussions on the ODE mailing list, but still haven't found the problem.
I tried to run the same simulation on one computer, and when I use OpenGL it gives the same result every time, but when I use DirectX the results may differ from run to run.
Posted: Mon Mar 28, 2005 7:54 pm
From the ODE wiki:
There's also an issue about Direct X autonomously changing the internal FPU accuracy, leading to inconsistent calculations. Direct3D by default changes the FPU to single precision mode at program startup and does not change it back to improve speed (This means that your double variables will behave like float). To disable this behaviour, pass the D3DCREATE_FPU_PRESERVE flag to the CreateDevice call. This might adversely affect D3D performance (not much though). Search the mailing list for details.
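For a raw Direct3D 9 application, passing that flag looks roughly like this (a Windows-only sketch; the window handle, present parameters and device choice are placeholders):

```cpp
// Windows / Direct3D 9 only -- fragment, not a complete program.
// D3DCREATE_FPU_PRESERVE stops Direct3D from switching the x87 FPU
// to single-precision mode at device creation, so doubles keep
// behaving like doubles in the physics code.
IDirect3DDevice9* device = nullptr;
HRESULT hr = d3d->CreateDevice(
    D3DADAPTER_DEFAULT,
    D3DDEVTYPE_HAL,
    hWnd,                                   // placeholder window handle
    D3DCREATE_HARDWARE_VERTEXPROCESSING |
        D3DCREATE_FPU_PRESERVE,             // the flag in question
    &presentParams,                         // placeholder parameters
    &device);
```

In Ogre this is what the "Floating-point mode" = "Consistent" configuration option sets for you, as mentioned below.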
Hope that helps...
Posted: Mon Mar 28, 2005 11:12 pm
Slan, thanks for the link. I use the D3DCREATE_FPU_PRESERVE flag (in Ogre it can be done by setting the configuration option "Floating-point mode" to "Consistent"), but the result is still different.
Posted: Tue Mar 29, 2005 3:16 pm
grimm wrote:Slan, thanks for the link. I use the D3DCREATE_FPU_PRESERVE flag (in Ogre it can be done by setting the configuration option "Floating-point mode" to "Consistent"), but the result is still different.
Sinbad added that option to the D3D9 driver recently because I had problems with the FPU mode. Meanwhile I managed to do what you are writing about: getting everything consistent on every client. Only one problem persists, and perhaps it also influences why it doesn't work for you: there are some graphics adapter drivers that seem to ignore that setting and switch the FPU state to whatever they want whenever you call an OpenGL/Direct3D function. I proved that by writing a small OpenGL application of my own. Now I know for sure that the drivers of an Intel Extreme Graphics 82845G have this "feature". I had to add a _control87 call after many of the OpenGL calls to keep the FPU state the way I need it to get consistent results.
Note that according to my experiments, ATI, NVidia and S3 graphics drivers don't manipulate the FPU state. I haven't had the chance to test other graphics adapters yet.
I still have a problem getting determinism with Ogre on that Intel graphics card. I guess it's because I'm not quite sure where I have to add all those _control87 calls to guarantee the correct FPU state. Note that it works consistently between multiple PCs if they only have ATI, NVidia and S3 graphics adapters, even mixed. Only Intel makes trouble.
So perhaps some of your clients have a graphics adapter whose driver manipulates the FPU mode as it pleases and screws up your simulation.
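The workaround described above can be sketched like this (_control87 is part of the Microsoft CRT, so this fragment is Windows-only; where exactly to call it in an Ogre render loop is the open question from the post):

```cpp
// Windows / MSVC CRT only -- fragment, not a complete program.
#include <float.h>  // _control87, _PC_53, _MCW_PC

// Call after GL/D3D calls on drivers that clobber the FPU state:
// forces the x87 precision-control bits back to 53-bit (double)
// so physics results match machines whose drivers behave.
inline void restoreFpuPrecision()
{
    _control87(_PC_53, _MCW_PC);
}
```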