Datablock lifetime

Discussion area about developing with Ogre-Next (2.1, 2.2 and beyond)


peteb
Gnoblar
Posts: 17
Joined: Thu Feb 26, 2009 9:05 pm

Datablock lifetime

Post by peteb »

What's the lifetime of a datablock? I'm currently assuming in my code that once I create one with a given name, it exists until I explicitly delete it from the hlms manager - kind of like the old materials.

This can be a bit of a pain for hlms materials that are created on the fly. If you are implementing a custom renderable that can change material (e.g. due to async texture updates from a server), then managing this gets quite annoying.
For example, if you know you are the only owner of the material and you really must destroy it (otherwise you'll rapidly run out of heap), then you have to make sure you set the default hlms material before destroying any VAOs (or it will call getRenderOperation, which for v2 objects 'should' throw), and only then destroy the hlms datablock. It gets worse if a few Renderables share the datablock.
One really simple thing that would make this slightly less painful is if you could set a NULL datablock on a Renderable. It's possible to do with an access class and a static cast, since the members are thankfully protected, but it's ugly. This does at least mean you can destroy the datablock without paying the performance penalty of setting the default datablock. It seems reasonable since a null datablock is the initial state of a Renderable anyway. A rough sketch of the hack I mean is below.
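Something along these lines - a rough sketch only, and the member name mHlmsDatablock is my assumption about Renderable's internals:

Code: Select all

#include "OgreRenderable.h"

// Rough sketch of the access-class hack. Assumes Renderable keeps its
// datablock in a protected member named mHlmsDatablock (unverified).
// Only sensible right before destroying the datablock, since it skips
// the usual unlinking bookkeeping.
class RenderableAccess : public Ogre::Renderable
{
public:
    static void setNullDatablock( Ogre::Renderable *r )
    {
        // Allowed because the member is protected rather than private,
        // and we cast the object to our own derived type first.
        static_cast<RenderableAccess *>( r )->mHlmsDatablock = 0;
    }
};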


I don't fully understand the internal implementation of the HLMS system yet, so I'm probably missing some things here. But on the face of it, it would be lovely if the hlms manager only stored hlms definitions when loaded from file, and on-the-fly hlms materials could either add a definition or just create a concrete implementation. Definitions would be turned into concrete objects on demand and passed around with shared pointers (std::shared_ptr or boost::shared_ptr plus the matching weak_ptr) rather than the current bald-pointer scheme. The actual datablock would be stored as a weak pointer internally in the hlms manager (and could be re-obtained while it is still in use elsewhere). A Renderable would retain a strong pointer when setDatablock is used, and the datablock would automatically be destroyed when no more renderables (or other strong pointers) reference it. You could also add a flag to hlms materials to have the hlms manager retain them (or not) as strong pointers in a separate vector, making this behaviour optional.
Alternatively, skipping the 'definition' concept, hlms materials loaded from file would be strong by default and could be switched to weak retention in the hlms manager. Once they are destroyed you'd need to reload the material script. A sketch of what I mean follows.
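Roughly what I have in mind for the manager side - purely hypothetical, not existing Ogre API:

Code: Select all

#include <map>
#include <memory>
#include <string>

// Purely hypothetical sketch of the proposed scheme, not existing Ogre API:
// the manager holds weak references; Renderables hold the strong ones.
struct HlmsDatablockStub {};  // stand-in for Ogre::HlmsDatablock

class HypotheticalHlmsManager
{
    std::map<std::string, std::weak_ptr<HlmsDatablockStub>> mWeakBlocks;

public:
    std::shared_ptr<HlmsDatablockStub> getDatablock( const std::string &name )
    {
        auto it = mWeakBlocks.find( name );
        if( it != mWeakBlocks.end() )
            if( auto strong = it->second.lock() )
                return strong;  // still alive because some Renderable holds it
        return nullptr;  // expired; would be recreated from its definition
    }
};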
dark_sylinc
OGRE Team Member
Posts: 5448
Joined: Sat Jul 21, 2007 4:55 pm
Location: Buenos Aires, Argentina
x 1349

Re: Datablock lifetime

Post by dark_sylinc »

peteb wrote:What's the lifetime of a datablock?
You pretty much figured it out. Datablocks get destroyed when you call hlms->destroyDatablock( datablock->getName() );
It's your responsibility to ensure no Renderables are still using that datablock (if you attempt to render them, the engine will crash).
Though you can inspect mLinkedRenderables to debug/check which objects are using it, in case you want to iterate through them and set a different datablock, or to find out which Renderable you forgot to clean up.
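For example, something along these lines (a sketch; note the copy, since reassigning a datablock unlinks the Renderable and would invalidate the iteration):

Code: Select all

#include <vector>
#include "OgreHlms.h"
#include "OgreHlmsDatablock.h"

// Sketch: detach every Renderable still using the datablock, then destroy it.
void safeDestroy( Ogre::HlmsDatablock *datablock )
{
    Ogre::Hlms *hlms = datablock->getCreator();
    // Copy the list first: setDatablock() unlinks each Renderable from
    // the datablock, which would otherwise invalidate the iteration.
    std::vector<Ogre::Renderable *> linked( datablock->getLinkedRenderables().begin(),
                                            datablock->getLinkedRenderables().end() );
    for( size_t i = 0; i < linked.size(); ++i )
        linked[i]->setDatablock( hlms->getDefaultDatablock() );
    hlms->destroyDatablock( datablock->getName() );
}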
peteb wrote:One really simple thing that would make this slightly less painful is if you could set a NULL datablock on a Renderable. It's possible to do with an access class and a static cast, since the members are thankfully protected, but it's ugly. This does at least mean you can destroy the datablock without paying the performance penalty of setting the default datablock. It seems reasonable since a null datablock is the initial state of a Renderable anyway.
That sounds reasonable. If you are destroying both the Renderable and the datablock, then it makes no sense to pay that performance penalty. It comes at your own risk, though: trying to render a Renderable with a null datablock will cause a crash.

I'll add a function like renderable->_setNullDatablock();, where the underscore prefix denotes an advanced function / for experts / internal use.

Edit: Done.
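So the fast teardown path becomes (sketch):

Code: Select all

// Fast teardown path using the new call (sketch): skip the cost of
// assigning the default datablock when both objects are going away.
renderable->_setNullDatablock();          // detach without touching the default
hlms->destroyDatablock( datablockName );  // safe, assuming this was the only user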
peteb wrote:I don't fully understand the internal implementation of the HLMS system yet, so I'm probably missing some things here. But on the face of it, it would be lovely if the hlms manager only stored hlms definitions when loaded from file, and on-the-fly hlms materials could either add a definition or just create a concrete implementation. Definitions would be turned into concrete objects on demand and passed around with shared pointers (std::shared_ptr or boost::shared_ptr plus the matching weak_ptr) rather than the current bald-pointer scheme. The actual datablock would be stored as a weak pointer internally in the hlms manager (and could be re-obtained while it is still in use elsewhere). A Renderable would retain a strong pointer when setDatablock is used, and the datablock would automatically be destroyed when no more renderables (or other strong pointers) reference it. You could also add a flag to hlms materials to have the hlms manager retain them (or not) as strong pointers in a separate vector, making this behaviour optional.
Alternatively, skipping the 'definition' concept, hlms materials loaded from file would be strong by default and could be switched to weak retention in the hlms manager. Once they are destroyed you'd need to reload the material script.
I'm afraid there will be no shared_ptr/weak_ptr scheme. The first reason is that I strongly believe shared_ptr makes for lazy design. The second reason is much more objective, though: passing or accessing a shared_ptr/weak_ptr isn't free (it has a performance cost for managing the reference counters with thread-safe accesses, plus checking whether the object needs to be freed every time the ref count is decreased), and we would have one per Renderable.

Since you can easily have 100,000 Renderables in your scene, achieving 60 FPS means the compiled code would have to perform acquire-and-release thread-safe shared_ptr/weak_ptr semantics at least 6,000,000 x N times per second, where N is the number of times we accidentally increment/decrement a reference. And that is assuming you render everything only once (which is not true with shadow mapping or multipass techniques).

Being thread-safe makes it a lot worse, because even if the std lib implements everything with lockless programming (which it often does not), it still means a lot of QPI / bus noise. The contention for us is really high.

When it comes to Renderables and Datablocks, even the tiniest detail can have a big impact during render because this small change gets multiplied literally millions of times.

So I'm afraid that, regardless of what either of us thinks of the shared_ptr/weak_ptr pattern and however much we could discuss it, the impact on performance is too high.
peteb
Gnoblar
Posts: 17
Joined: Thu Feb 26, 2009 9:05 pm

Re: Datablock lifetime

Post by peteb »

I'll add a function like renderable->_setNullDatablock();, where the underscore prefix denotes an advanced function / for experts / internal use.
Nice one - thank you.

Re: shared/weak pointers - fair enough, but I disagree that it would necessarily be a major performance hit on a per-frame basis. A Renderable would hold a strong pointer, so access would be equivalent to a bald-pointer access (accessing the pointer need not touch the ref count). It would also be perfectly safe for it to keep a bald-pointer copy in a more cache-friendly section of memory, with the main Renderable object holding the smart pointer (provided get/set of the smart pointer keeps the bald pointer updated correctly).

The pointer itself does not need to be atomic, only the ref count. I guess what I'm saying is that from a scene-management point of view, using ref-counted pointers would save a load of grief - it's not lazy, it's pragmatic ;) Only if you were resetting the material on renderables millions of times every frame would you run into the problems you describe. A sketch of the pattern is below.
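Something like this - a rough sketch of the pattern, not Ogre code:

Code: Select all

#include <memory>

// Rough sketch of the pattern, not Ogre code: the smart pointer owns the
// datablock, while a raw copy lives wherever the hot render loop reads it.
struct Datablock {};  // stand-in for Ogre::HlmsDatablock

class RenderableSketch
{
    std::shared_ptr<Datablock> mOwned;  // lifetime management (cold data)
    Datablock *mHot = nullptr;          // raw copy for the render loop (hot data)

public:
    void setDatablock( std::shared_ptr<Datablock> db )
    {
        mOwned = std::move( db );  // ref count touched only here
        mHot = mOwned.get();       // keep the bald copy in sync
    }

    Datablock *datablockForRender() const { return mHot; }  // no ref-count traffic
};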

Having to check whether hashes of names are already in use, and having to manually destroy datablocks once they are no longer used by potentially multiple Renderables, is very error-prone.
dark_sylinc
OGRE Team Member
Posts: 5448
Joined: Sat Jul 21, 2007 4:55 pm
Location: Buenos Aires, Argentina
x 1349

Re: Datablock lifetime

Post by dark_sylinc »

peteb wrote:Re: shared/weak pointers - fair enough, but I disagree that it would necessarily be a major performance hit on a per-frame basis. A Renderable would hold a strong pointer, so access would be equivalent to a bald-pointer access (accessing the pointer need not touch the ref count). It would also be perfectly safe for it to keep a bald-pointer copy in a more cache-friendly section of memory, with the main Renderable object holding the smart pointer (provided get/set of the smart pointer keeps the bald pointer updated correctly).
True-ish.
The second we need to do "someCall( renderable->datablock );", "variable = renderable->datablock;" or "return renderable->datablock;", there is potential for the ref counts to be touched (if it passes by value instead of by reference); hence the overhead and atomicity problems. Constantly auditing the generated code for something as frequently used as these datablocks, to ensure the ref counts aren't touched, is infeasible.
Not to mention shared_ptr adds a second level of indirection. Having two copies (a raw pointer and a strong one) would solve that issue... but it makes the Renderable fatter. A million Renderables x 16 bytes = 15.25 MB (64-bit build) of pure overhead, hundreds of times the size of an entire L1 data cache.
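To illustrate the audit burden (a generic illustration, not Ogre code):

Code: Select all

#include <memory>

// Generic illustration, not Ogre code: both signatures compile and look
// alike at the call site, but only one avoids atomic ref-count traffic.
struct Datablock {};

void byRef( const std::shared_ptr<Datablock> & ) {}  // no ref-count touch
void byValue( std::shared_ptr<Datablock> ) {}        // atomic inc + dec per call

void render( const std::shared_ptr<Datablock> &current )
{
    byRef( current );    // free
    byValue( current );  // pays the atomic penalty, silently
}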
peteb wrote:Having to check whether hashes of names are already in use, and having to manually destroy datablocks once they are no longer used by potentially multiple Renderables, is very error-prone.
I'm not sure I follow.
HlmsDatablock::getLinkedRenderables will return all the Renderables linked to that HlmsDatablock. Safely releasing all of them is trivial.
You could of course keep copies of your own pointers to that HlmsDatablock in your code; for those, you will have to take care yourself. If that's too important to your code and thinking about memory management is too cumbersome, you can always wrap the HlmsDatablock in a shared_ptr with a custom deleter that iterates through all linked renderables and then calls hlms->destroyDatablock (when the shared_ptr ref count reaches 0 in your own code).
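i.e. something like this (a sketch of the custom-deleter idea):

Code: Select all

#include <memory>
#include <vector>
#include "OgreHlms.h"
#include "OgreHlmsDatablock.h"

// Sketch of the custom-deleter idea: when the last shared_ptr in *your*
// code dies, detach the linked Renderables and destroy the datablock.
std::shared_ptr<Ogre::HlmsDatablock> wrapDatablock( Ogre::HlmsDatablock *datablock )
{
    return std::shared_ptr<Ogre::HlmsDatablock>(
        datablock,
        []( Ogre::HlmsDatablock *db )
        {
            Ogre::Hlms *hlms = db->getCreator();
            // Copy first: setDatablock() unlinks each Renderable from db.
            std::vector<Ogre::Renderable *> linked( db->getLinkedRenderables().begin(),
                                                    db->getLinkedRenderables().end() );
            for( size_t i = 0; i < linked.size(); ++i )
                linked[i]->setDatablock( hlms->getDefaultDatablock() );
            hlms->destroyDatablock( db->getName() );
        } );
}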
peteb
Gnoblar
Posts: 17
Joined: Thu Feb 26, 2009 9:05 pm

Re: Datablock lifetime

Post by peteb »

dark_sylinc wrote: The second we need to do "someCall( renderable->datablock );", "variable = renderable->datablock;" or "return renderable->datablock;", there is potential for the ref counts to be touched (if it passes by value instead of by reference); hence the overhead and atomicity problems.
But you could just use renderable->non_ref_counted_datablock (or a similarly snappy name), which simply accesses the bald pointer. Accessing the smart pointer would only really be needed in client code; the non-ref-counted pointer could be guaranteed to be valid during rendering. Or just use shared_ptr::get().
dark_sylinc wrote:
peteb wrote:Having to check whether hashes of names are already in use, and having to manually destroy datablocks once they are no longer used by potentially multiple Renderables, is very error-prone.
I'm not sure I follow.
HlmsDatablock::getLinkedRenderables will return all the Renderables linked to that HlmsDatablock. Safely releasing all of them is trivial.
OK - but it is still manual resource management, which is error-prone. I was just suggesting it would be lovely if this could be more automatic. I do see your point, and I guess everyone who wants this can write their own wrappers etc. to manage it. Thank you for the suggestion re: custom deleters - I was already thinking along similar lines.


I kind of muddied the waters there with hashed names, as that is really a separate issue: if I am creating e.g. a texture, then previously in Ogre I would generate names that I could reasonably guarantee to be unique. In Ogre 2.1 there is no runtime hash-collision resolution AFAICS (and I can see that it would be a performance burden). So if I generate texture names, I need to hash each name and check whether the hash is already in use. Except that you can't do that with HlmsTextureManager without a hack along the following lines:

Code: Select all

class HlmsTextureManagerAccess : public Ogre::HlmsTextureManager
{
public:
    // Exposes a lookup over the protected, sorted mEntries vector so we
    // can test whether a texture alias (hashed name) is already taken.
    bool hasTextureAlias(Ogre::IdString const& id) const {
        TextureEntry searchName(id);
        auto it = std::lower_bound(mEntries.begin(), mEntries.end(), searchName);
        return ((it != mEntries.end()) && (it->name == id));
    }
};

// Generates a name whose hash is not yet registered with the manager.
// TruncatedGuidString() is our own helper returning a short random string.
Ogre::String UniqueHlmsTextureName(Ogre::HlmsTextureManager* tex_mgr)
{
    // Ugly but works: protected members make the derived-class cast usable.
    auto tm = static_cast<HlmsTextureManagerAccess*>(tex_mgr);
    Ogre::String name;
    do {
        name = TruncatedGuidString();
    } while (tm->hasTextureAlias(name));
    return name;
}
Unless I'm overthinking this - perhaps there is a different way of dealing with resources that are created on the fly?