Remove calls to srand

grimm
Gnoblar
Posts: 11
Joined: Sat Feb 12, 2005 12:39 am
Location: Sweden

Post by grimm »

I suggest that the calls to srand in the constructors of Root and Math be removed. I don't think a well-behaved library should call srand; it should only be called by the application. Sometimes, for example in networked games or for testing, you may want everything to be deterministic.

Sure, you can call srand yourself to reseed the RNG, but note that both of these classes are singletons and are thus created as statics. This makes it a little complicated to know when the objects are actually created.
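
To illustrate, a minimal sketch of the problem (the seed value is arbitrary, and the reseeding behaviour is as described above):

Code:

#include <cstdlib>
#include <Ogre.h>

int main()
{
    srand(42);          // the application picks a fixed seed for determinism
    Ogre::Root root;    // Root's constructor calls srand, discarding the seed above
    int r = rand();     // no longer reproducible across runs
    (void)r;
    return 0;
}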
sinbad
OGRE Retired Team Member
Posts: 19269
Joined: Sun Oct 06, 2002 11:19 pm
Location: Guernsey, Channel Islands

Post by sinbad »

I take the point, but please note that Singleton instances are not statics, only the pointers are. The instances are created at very predictable times - if you reseeded after Root::initialise you would have guaranteed behaviour. One of the reasons our Singletons are not auto-creating is that I like to have 100% predictability of construction and destruction.
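
A minimal sketch of that workaround (render system selection and window creation are elided; the seed value is arbitrary):

Code:

#include <cstdlib>
#include <Ogre.h>

int main()
{
    Ogre::Root* root = new Ogre::Root();
    // ... select a render system, e.g. via root->showConfigDialog() ...
    root->initialise(false);  // Root and Math are fully constructed by now
    srand(42);                // reseed afterwards: deterministic from here on
    // ... create windows and scene, run the render loop ...
    delete root;
    return 0;
}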
tuan kuranes
OGRE Retired Moderator
Posts: 2653
Joined: Wed Sep 24, 2003 8:07 am
Location: Haute Garonne, France

Post by tuan kuranes »

Why would we remove srand?

srand is deterministic if you seed it with a fixed value.

ODE uses a dRandSetSeed() function; here's an interesting and easy-to-read topic on that: http://ode.petrucci.ch/viewtopic.php?p= ... 0715561558


Ogre::Root could/should have a similar method, to be called before Root::initialise(), for those who need it (or a parameter with a default value, to be more explicit to users).
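
A hypothetical sketch of such a method; setRandomSeed() and the members below do not exist in Ogre, they are invented here purely to illustrate the proposal:

Code:

#include <cstdlib>
#include <ctime>

class Root
{
public:
    Root() : mSeed(0), mSeedSet(false) {}

    // To be called before initialise() by applications that need determinism.
    void setRandomSeed(unsigned int seed) { mSeed = seed; mSeedSet = true; }

    void initialise()
    {
        // Default behaviour stays clock-based if no explicit seed was given.
        srand(mSeedSet ? mSeed : static_cast<unsigned int>(time(0)));
        // ... rest of initialisation ...
    }

private:
    unsigned int mSeed;
    bool mSeedSet;
};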

If you're allowed to speak about it, I'm curious why you would need Ogre to be deterministic?


Client-server wouldn't need that, as client-side prediction (dead reckoning) doesn't use the same algorithm as the server... so is it a p2p Ogre-based project?
:wumpus:
OGRE Retired Team Member
Posts: 3067
Joined: Tue Feb 10, 2004 12:53 pm
Location: The Netherlands

Post by :wumpus: »

I tend to agree; libraries should preferably not mess with the C library's random seed. It's not as bad as changing the locale, but still. Why is it done?
grimm
Gnoblar
Posts: 11
Joined: Sat Feb 12, 2005 12:39 am
Location: Sweden

Post by grimm »

Well, as Sinbad pointed out, the singletons are not created as statics, so this is not as big a problem as I first thought. But I still think that a library should not call srand, because the user may not expect it. For example, a user may call srand first in their program and think everything is well, not realizing that Ogre reseeds it. ODE, by the way, does not use the time to seed its RNG but instead sets it to a fixed value (which I think is the correct thing to do).

We are using OgreOde and want to run the GranTurismOgre demo on four clients, each connected to a screen, forming a CAVE around the user. The server regularly sends input events to the clients, which should all perform exactly the same simulation.

We have had some problems with this (not necessarily related to the RNGs). When I run the clients on a single computer they are always in sync, but on multiple computers it only sometimes works (either the clients start deviating immediately, or not at all). I have had some discussions on the ODE mailing list but still haven't found the problem.

I tried to run the same simulation on one computer, and when I use OpenGL it gives the same result every time, but when I use DirectX the results may differ from run to run.
Slan
Halfling
Posts: 49
Joined: Wed Jun 09, 2004 11:13 pm

Post by Slan »

From the ODE wiki:
There's also an issue with DirectX autonomously changing the internal FPU accuracy, leading to inconsistent calculations. Direct3D by default changes the FPU to single-precision mode at program startup and does not change it back, in order to improve speed (this means that your double variables will behave like floats). To disable this behaviour, pass the D3DCREATE_FPU_PRESERVE flag to the CreateDevice call. This might adversely affect D3D performance (not much, though). Search the mailing list for details.
See http://ode.org/cgi-bin/wiki.pl?SaveAndRestoreHowTo
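
For code that creates the Direct3D 9 device directly, the fix from the quote looks roughly like this (a sketch; the window handle and present parameters are assumed to be set up elsewhere, and error handling is elided):

Code:

#include <d3d9.h>

IDirect3DDevice9* createDevice(IDirect3D9* d3d, HWND hwnd,
                               D3DPRESENT_PARAMETERS* pp)
{
    IDirect3DDevice9* device = NULL;
    d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
                      D3DCREATE_HARDWARE_VERTEXPROCESSING |
                          D3DCREATE_FPU_PRESERVE,  // keep the FPU mode intact
                      pp, &device);
    return device;
}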

Hope that helps...
grimm
Gnoblar
Posts: 11
Joined: Sat Feb 12, 2005 12:39 am
Location: Sweden

Post by grimm »

Slan, thanks for the link. I use the D3DCREATE_FPU_PRESERVE flag (in Ogre it can be done by setting the configuration option "Floating-point mode" to "Consistent"), but the result is still different.
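
For reference, a sketch of setting that option from code rather than the config dialog (the render system name string is the D3D9 plugin's; the lookup call may vary between Ogre versions):

Code:

Ogre::RenderSystem* rs =
    root->getRenderSystemByName("Direct3D9 Rendering Subsystem");
rs->setConfigOption("Floating-point mode", "Consistent");
root->setRenderSystem(rs);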
Marc
Gremlin
Posts: 182
Joined: Tue Jan 25, 2005 7:56 am
Location: Germany

Post by Marc »

grimm wrote:Slan, thanks for the link. I use the D3DCREATE_FPU_PRESERVE flag (in Ogre it can be done by setting the configuration option "Floating-point mode" to "Consistent"), but the result is still different.
Sinbad added that option to the d3d9 driver those days because I had problems with the FPU mode. Meanwhile I managed to do what you are writing about, getting everything consistent on every client. Only one problem persists and perhaps it also influences why it doens't work for you: There are some drivers of grafix adapters that seem to ignore that setting and switch the FPU state to what they want whenever you can an OpenGL/Direct3D function. I proved that by writing a small OpenGL application on my own. Now I know for sure that the drivers of an Intel Extreme Graphics 82845G have this "feature". I had to add a _control87 call after many of the OpenGL calls to keep the fpu state like I need it to get consistent results.
Note that according to my experiments, ATI, NVidia and S3 graphics drivers don't manipulate the FPU state. I didn't had the chance to test others graphics adapters yet.
I still have a problem with getting determinisim with Ogre on that Intel grafixcard. I guess it's because I'm not quite sure where I have to add all those _control87 to guarantee the correct FPU state. Note that it works consistent between multiple pcs if they only have ATI, Nvidia and S3 graphics adapters, also mixed. Only Intel makes trouble.
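
A sketch of that _control87 workaround, assuming 53-bit (double) precision is the state the simulation needs; _control87 and the _PC_53/_MCW_PC constants come from the MSVC runtime's <float.h>:

Code:

#include <float.h>

// Restore the x87 precision-control bits to 53-bit (double) precision,
// undoing whatever the graphics driver changed them to.
static void restoreFpuPrecision()
{
    _control87(_PC_53, _MCW_PC);
}

// Usage: call it after graphics API calls that may clobber the FPU state, e.g.
//   glDrawElements(...);
//   restoreFpuPrecision();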

So perhaps some of your clients have a graphics adapter whose drivers manipulate the FPU mode as they please and break your simulation.