Ogre #defines operator new? Say it isn't so.

Posted: Mon Dec 18, 2006 9:15 am
by TwoBit
In OgreMemoryMacros.h, I see that Ogre is #defining operator new to something else. That strikes me as being an untenable thing to do. The fundamental problem is that it assumes that all uses of new are standard global new, but in any serious application that's not going to be so. You have per-class new, new in user-defined namespaces, and different versions of new such as placement new. By unilaterally #defining operator new you pretty much break all these legitimate uses of operator new. The result is that any serious application will have to disable OGRE_DEBUG_MEMORY_MANAGER. IMHO, it's always a bad idea to #define C++ keywords.

Instead of stomping on new/delete, how about Ogre instead providing a macro called OGRE_NEW that users can use in place of new; if OGRE_DEBUG_MEMORY_MANAGER is disabled, then OGRE_NEW can simply be #defined to plain new. IMO, this is a lot safer and is the standard solution to this problem.

Thanks.

Posted: Mon Dec 18, 2006 10:32 am
by simply
Have a look at OgreConfig.h: scroll down to the line where you can find the define regarding memory management, and set it to 0.

Posted: Mon Dec 18, 2006 3:41 pm
by Project5
@TwoBit: the trouble is that the Fluid Studios memory manager that Ogre's using depends on the global operator new being overridden so that all allocations go through Ogre, and on it being macroed so that __FUNCTION__, __FILE__, and __LINE__ information gets collected with each call to new (and the others). If you include the file OgreNoMemoryMacros.h, new (and the others) will be undefined, but any memory leaks will then appear in the log file as ??(0) ??.

The Microsoft CRT new and delete operators use inline function expansion to get function, file, and line information in there, but it depends on inline function expansion being turned on (which may not be the case for debug builds).

Both have their issues, but that extra information needs to be acquired somehow.

--Ben

Posted: Mon Dec 18, 2006 6:25 pm
by xavier
Guys, all the Ogre debug MM does is track allocations. It does not actually manage any memory.

Posted: Mon Dec 18, 2006 8:57 pm
by TwoBit
I don't see that Ogre needs to be doing what it's doing, and I've never seen any software library #define new like that (globally in header files).

It simply breaks C++.

I propose a practical alternative above, which is what most libraries do when they need this kind of functionality.

Posted: Mon Dec 18, 2006 9:03 pm
by Kencho
What's wrong with tracking the allocated memory through macros? Tracking (that is, observing) doesn't actually interfere with the construction of a program. The only difference I've noticed when programming with Ogre's memory management is that I get feedback about which memory allocations I forgot to free. As for the behavior of the program, I haven't noticed a difference yet :?

Posted: Mon Dec 18, 2006 9:46 pm
by TwoBit
>> What's wrong with tracking the allocated memory through macros?

See the original post: it breaks all usage of C++ new that is not global default new.

The simplest case is the one where you implement class-specific new/delete. Ogre breaks this in a way that cannot be worked around.

Another case is where you simply use standard C++ placement new, which is very common indeed. Ogre breaks this as well.

Another case is where you define alternative implementations of global operator new. Ogre breaks all usage of these.

Another case is where you create a default new in a namespace other than the global namespace. Ogre breaks all usage of this as well.

Posted: Mon Dec 18, 2006 9:53 pm
by Chris Jones
If you don't need/want Ogre to do this, just set it not to use the memory macros before you compile Ogre.

Posted: Tue Dec 19, 2006 12:14 am
by Kerion
I write a lot of C++, and rarely, if ever, do I do any of the crazy new tricks you are talking about, TwoBit. If you don't want OGRE defining new, then tell it not to. I understand your argument from a purist point of view, but it's so simple to work around that I don't see the issue. You claim to never need OGRE's allocation tracking, so simply turn it off. It's not a difficult thing to do by any stretch.

Posted: Tue Dec 19, 2006 5:43 am
by Praetor
I definitely understand your points, TwoBit. I am also very much a purist, and I've always felt funny about this arrangement. Whenever I deal with the Ogre source, I usually turn off the memory manager. As has been stated many times, Ogre is meant to be a renderer, nothing else; I don't think it's Ogre's job to be a debug helper. There are tools and techniques you can employ to do that for you, if you want to. I would prefer the memory manager quietly depart from Ogre forever. However, there is resistance (though I don't know why; all I see are complaints, never a "the Ogre memory manager helped me out greatly!"). It really isn't that difficult to remove the memory manager's influence completely, and if I am part of the minority I can accept removing it manually myself. In any case, I do have to agree with TwoBit.

Posted: Tue Dec 19, 2006 5:47 am
by Game_Ender
It only bothers me because it hurts Ogre's integration with other libraries like Boost and wxWidgets. It is obviously helpful to the Ogre devs, so I think leaving it off by default in Ogre builds, even Debug ones, would probably be a good middle-of-the-road solution.

Posted: Tue Dec 19, 2006 5:51 am
by Falagard
I only see it as a problem for the compiled SDK, which maybe should turn off memory management by default. Otherwise you can simply change the OgreConfig.h file to turn off the memory manager, recompile and voila, problem solved.

I do see your point about Ogre using macros called OgreNew and OgreDelete (or whatever), but I guess that wouldn't look as clean in the code as just new and delete. I've seen other memory trackers override the global new and delete operators, so it's not a big surprise, and it can be turned off, so I don't see the big deal. If Sinbad uses it frequently during development and wants it in the code, it's his baby, so it's his choice.

Posted: Tue Dec 19, 2006 6:04 am
by Praetor
I think off in the SDK is a good choice.

Posted: Tue Dec 19, 2006 6:14 am
by johnhpus
I can see the point about the memory manager not being part of Ogre. I use it, and love it, but I wouldn't be heartbroken by having to integrate it into my project instead of it coming in with Ogre. It is, after all, a separate piece of work created by Fluid Studios.

Posted: Tue Dec 19, 2006 7:47 am
by TwoBit
First let me say thanks for the responses, everybody.

The comments above seem to indicate that I am coming from a purist point of view and that this is a theoretical problem, not a practical one.

I don't think this is true. I am a professional game programmer at EA and write core engine code for EA world-wide, some of which is in most EA games currently on the shelves (like them or not :)), including the memory allocation systems that most EA games use. We write to about 10 platforms and numerous compilers. Memory management is a very big deal on console gaming platforms and a big deal on large PC games. All of the above mentioned operator new uses are in heavy use in just about every major consumer game written in C++; they are the rule and not the exception. Indeed I discovered this issue because I am trying to integrate Ogre into a demo with our code and ran into compiler errors.

I think Ogre having that leak reporter is great. We happen to not actually need it because we have our own memory allocation systems that have that functionality and lots of additional stuff built-in. A description of the systems we have may be useful to those interested in the topic.

In game companies, we generally stay away from global new and delete, but when we need to use global new/delete, we call a macro called EA_NEW and delete the memory with EA_DELETE. I could probably post the definitions we use if it would help people see the situation.

Posted: Tue Dec 19, 2006 8:12 am
by johnhpus
TwoBit wrote: A description of the systems we have may be useful to those interested in the topic.
Yes, I think most of us would be interested in knowing more about how the "big boys" do things. Anything you could share would be great.

Posted: Tue Dec 19, 2006 11:45 am
by sinbad
As everyone else has pointed out, the memory manager is a tweaked version of a useful little piece of code created by Fluid Studios. We've had it for some years now, and originally incorporated it in the absence of a better way to trace memory leaks without a commercial tool or writing our own.

Yes, the redefinition of new/delete has issues, but those issues have been, in my view, relatively minor when compared to the additional leaks that might be missed without it. Whilst we could have used OGRE_NEW and OGRE_DELETE everywhere to trace leaks, that has its own issues - if you can forget to use 'delete' and introduce a leak, you can certainly forget to use OGRE_NEW instead of 'new' too, and that would lead to an uncaught memory leak. I suppose you could do global code searches now and then to pick this up, but it seemed messier and far too manual.

The issues only affect developers (since the leak tracker is disabled in release builds), and problems with 3rd-party code can generally be sorted by disabling the memory macros locally or globally. I do agree that it would be an idea to turn off the memory manager in future SDK builds for easier integration.

If you can give me an alternative way that will catch developer memory allocation mistakes without using an external tool and without relying on them remembering to use another allocation method (which is as easy to forget as deleting), I'd certainly be glad to hear it.

As it happens, this is all moot anyway for the next version, since Shoggoth (1.6) is going to feature more customisable memory layouts, exposing all the allocation routines of the containers and everything else so that end-users can tweak it. This is something that's been on my mind for a while (having read several articles about various approaches traditionally used in the industry), but there were always other things that needed doing first; I really need to do it now - this is obviously a must if Ogre is ever going to run on reduced / specialised memory machines like consoles. This will mean that most if not all internal new/delete calls will be replaced with allocator functors. So in fact we will have to take a similar approach to what you use anyway, although I'd be tempted to try to keep something in debug builds to catch misuse of the original new/delete.

Thanks for the insight anyway; it's made me more convinced that memory configuration has to be one of the top-priority features in Shoggoth. For now, quietly turn the leak tracker off in OgreConfig.h :)

Posted: Tue Dec 19, 2006 5:03 pm
by klauss
Anyway, whenever I run into trouble because a library introduces some nasty defines, as Microsoft does with min and max (try typing std::min in MSVC), I just use #undef.

Maybe it will help you:

Code: Select all

#ifdef new
#undef new
#endif

BTW: I've made use of placement new several times; it's a big performance helper, allowing you to implement allocator pools transparently. Cool stuff. And I also had that trouble when using placement new together with Ogre's MM - AFAIK, my solution was that #undef thingy. I don't remember exactly (that's old code), but I think so.

Posted: Wed Dec 20, 2006 2:52 am
by Game_Ender
That's why you don't magically replace new and delete, as TwoBit has mentioned. While the transparency is cool, it's also your enemy: when someone sees new and delete, they expect them to be new and delete, not a pool allocator. I think it's better to be explicit rather than magical and use an actual pool allocation function instead.

Posted: Wed Dec 20, 2006 4:52 am
by Whitebear
TwoBit wrote:
I am coming from a purist point of view
we call a macro called EA_NEW and delete the memory with EA_DELETE
Real C++ purists don't use macros at all :wink:

Posted: Wed Dec 20, 2006 6:08 am
by Praetor
Interesting discussion.

I really must say (again, probably) that I support turning off the memory manager for SDKs. At that point they should be assumed "stable", which I think means we don't need memory leak detection either. Besides, you can just grab the source if you need it.

Beyond that, there have been some issues in the past with integrating Ogre into projects that undefing, OgreNoMemoryMacros.h, and turning off the manager haven't solved. I've found that combining them all plus a little extra works well. First, build from source with the manager turned off. Then, when including Ogre, do it like so:

Code: Select all

#include <OgreNoMemoryMacros.h>
#define __MemoryManager_H__
#include <Ogre.h>

That should ensure all traces of the memory manager are wiped out.

Posted: Wed Dec 20, 2006 9:10 am
by jfaust
sinbad wrote: As it happens, this is all moot anyway for the next version, since Shoggoth (1.6) is going to feature more customisable memory layouts, exposing all the allocation routines of the containers and everything else so that end-users can tweak it. This is something that's been on my mind for a while (having read several articles about various approaches traditionally used in the industry) but there were always other things that needed doing first, but I really need to do it now - this is obviously a must if Ogre is ever going to run on reduced / specialised memory machines like consoles. This will mean that most if not all internal new/delete calls will be replaced with allocator functors.
I'm going to chime in here for the first time in a while to say that this is awesome. It is definitely required for console development, so it's great to hear it's on its way. From what I hear getting the STL to work w/custom allocators is a pain...any thought to becoming even more console-friendly and removing the STL altogether? :twisted:

Posted: Wed Dec 20, 2006 9:29 am
by TwoBit
At EA we have ditched std STL altogether nearly EA-wide: we rewrote all of it from scratch and then doubled its size with additional performance-friendly containers and algorithms. The fundamental change from std STL was that we changed the way custom allocators work to be much friendlier. This is a well-known problem in the C++ community, even amongst the more academic types. On top of this, our custom STL outperforms all commercial STL implementations, especially on console platforms but on desktop platforms as well. I am the author of EASTL, FWIW.

I could spend a day talking about this, and it happens that, after having some talks with Scott Meyers over the last couple of months, I am in the process of writing a paper for the C++ community describing our STL (EASTL), the motivations behind it, the requirements of embedded and console application development, etc. Don't expect to see this paper for a couple of months, though, as I am very busy and it's a background task. I do need to get it done before long, because it would be nice to present it to the C++ standardization group before they get too far cementing the new standard (though much is in place already). Alas, it is unlikely that we can present the full source to EASTL. But I hope I can present the gist of it so that people can do their own efforts.

Posted: Wed Dec 20, 2006 3:31 pm
by Game_Ender
Did you test against libstdc++ and STLport, as well as the Dinkumware implementation that ships with VC++?

By "rewrote", do you mean you supplied a new implementation of the STL interface? Or did you write a generic container library that has iterators and custom allocators, but with an easier-to-use interface? I can understand that a little: I have never made a custom allocator myself, but just from reading a tutorial it did seem a little nasty.

EDIT: To correct some of my horrible English.

Posted: Wed Dec 20, 2006 6:20 pm
by pjcast
TwoBit wrote:At EA we have ditched std STL altogether nearly EA-wide and we re-wrote all of it from scratch and then doubled its size with additional performance-friendly containers and algorithms. The fundamental change from std STL was that we changed the way custom allocators work to be much friendlier. This is a well-known problem in the C++ community, even amongst the more academic types. On top of this our custom STL outperforms all commercial STL implementations, especially on console platforms but on desktop platforms as well. I am the author of EASTL, FWIW.

I could spend a day talking about this, and it happens that after having some talks with Scott Meyers over the last couple months, I am in the process of writing a paper to the C++ community describing our STL (EASTL), the motivations behind it, the requirements of embedded and console application development, etc. Don't expect to see this paper for a couple months, though, as I am very busy and it's a background task. I do need to get it done before very long because it would be nice to present it before the C++ standardization group before they get too far cementing the new standard (though much is in place already). Alas it is unlikely that we can present the full source to EASTL. But I hope I can present the gist of it so that people can do their own efforts.
Interesting. I would think that if you plan to present this to the C++ standards body, you might release it into the wild for a bit so that any problems can be hammered out, or just so we can all check it out and possibly use it :D Anyway, I look forward to seeing this (I, for one, don't really like the STL allocator syntax).