http://www.ogre3d.org/mantis/view.php?id=421
It seems minor to me, but there are at least two ways to resolve it. One is to break OgreXMLConverter's dependence on sequential bone IDs from 0 to N-1. The other is simply to flag IDs that don't run from 0 to N-1 and fail gracefully with an invalid-input error. The latter seems a bit easier, but the former is also workable: it would require something like a hash table to track the skeleton IDs rather than assuming they can be used directly as array indices.
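To make the two options concrete, here is a minimal sketch of the "fail gracefully" route: after parsing the bone IDs out of the XML, verify they form exactly 0..N-1 before converting. validateBoneIds is an illustrative name, not an actual OgreXMLConverter function:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Sketch of the "flag and fail gracefully" option: returns true only if
// the IDs are exactly the set {0, 1, ..., N-1} (no gaps, no duplicates).
// validateBoneIds is a hypothetical name for illustration only.
bool validateBoneIds(std::vector<unsigned> ids)
{
    std::sort(ids.begin(), ids.end());
    for (std::size_t i = 0; i < ids.size(); ++i)
        if (ids[i] != i)
            return false;   // gap or duplicate ID: reject as invalid input
    return true;
}
```

A converter taking this route would simply refuse the file with a clear error instead of silently producing a broken skeleton.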
[PaperCut] OgreXMLConverter assumes bone IDs from 0 to N-1
-
- Gnoblar
- Posts: 23
- Joined: Thu Aug 12, 2010 1:52 pm
Re: [PaperCut] OgreXMLConverter assumes bone IDs from 0 to N
Rereading the code involved, I'm convinced this is one of three failures: a failure of documentation, in that it's never stated that an exportable skeleton must use sequential index numbers from 0 to N-1; a failure of data structures, in that bones are currently stored in a vector with random access by ID, rather than being looked up by ID the way we do with names; or a failure of the skeleton serializer, which insists on retrieving items from 0 to N-1 when handling the vector of bones.
My gut feeling is that large gaps in the bone list's IDs are unlikely. Most people will have consecutive IDs, whether they start at 0, 1, or 22, so the space wasted on gaps isn't horrible. Most bone access looks up a specific bone by ID rather than cycling through every bone from 0 to N-1, so my feeling is that serialization is the real issue. Serialization is probably a pretty rare event, so it would be relatively cheap to simply scan from 0 to OGRE_MAX_NUM_BONES-1, checking each entry for NULL and storing the valid IDs until we've either read in all of the bones (success) or hit the end before finding them all (error). That list of valid IDs can then be used to export the bones and their parents.
Edit: It looks like there is, at the least, a "HACK" in Entity::_updateRenderQueue that also iterates from 0 to N-1 to display the skeleton for debugging. Skeleton::_mergeSkeletonAnimations and Skeleton::_buildMapBoneByName also assume handles run from 0 to N-1. Does that mean we're reaching the point where it makes sense to have a hash of IDs to handles in the Skeleton?
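The scan described above might look roughly like this. Bone is a stand-in for Ogre::Bone, collectValidBoneIds is an illustrative name, and OGRE_MAX_NUM_BONES is assumed to be 256 here:

```cpp
#include <cstddef>
#include <vector>

// Stand-in for Ogre::Bone, just for illustration.
struct Bone { unsigned short handle; };

enum { OGRE_MAX_NUM_BONES = 256 };  // Ogre's bone limit, assumed 256 here

// Sketch of the serializer-side scan: walk the (possibly sparse) bone
// slots, skip NULL entries, and collect valid IDs. Returns true (success)
// if all numBones bones were found, false (error) if we ran out of slots
// before finding them all.
bool collectValidBoneIds(const std::vector<Bone*>& slots,
                         std::size_t numBones,
                         std::vector<unsigned short>& validIds)
{
    validIds.clear();
    for (std::size_t id = 0; id < slots.size() && id < OGRE_MAX_NUM_BONES; ++id)
    {
        if (slots[id] == nullptr)
            continue;                       // gap in the ID space: skip it
        validIds.push_back(static_cast<unsigned short>(id));
        if (validIds.size() == numBones)
            return true;                    // all bones accounted for
    }
    return validIds.size() == numBones;
}
```

The resulting validIds list is what the exporter would then walk to write out the bones and their parents.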
-
- Gnoblar
- Posts: 23
- Joined: Thu Aug 12, 2010 1:52 pm
Re: [PaperCut] OgreXMLConverter assumes bone IDs from 0 to N
And... upon further consideration, I think Ogre's intent is indeed to store bone IDs as a sequential list in the vector, but there is no requirement that other formats follow that constraint. I therefore plan to change OgreXMLConverter to read in the bone IDs and either hash or map them to the "handle" that Ogre assigns to each. I also plan to go into the three cases listed above of directly accessing bone IDs and add a check so that if a NULL pointer is encountered (a non-extant bone), it is safely skipped over rather than causing a NULL-pointer dereference. As always, these plans depend on me actually remembering what I'm doing once I'm home, always a dicey prospect.
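A minimal sketch of that remapping idea, assuming the converter assigns handles in order of first appearance; BoneIdRemapper and handleFor are hypothetical names for illustration, not existing OgreXMLConverter code:

```cpp
#include <map>

// Sketch of the converter-side remapping: whatever bone IDs appear in the
// source file get fresh contiguous Ogre handles (0, 1, 2, ...) in order of
// first appearance, satisfying Ogre's sequential-handle expectation.
class BoneIdRemapper
{
public:
    // Returns the Ogre handle for a source-file bone ID, allocating the
    // next contiguous handle the first time an ID is seen.
    unsigned short handleFor(unsigned sourceId)
    {
        std::map<unsigned, unsigned short>::const_iterator it =
            mHandles.find(sourceId);
        if (it != mHandles.end())
            return it->second;              // already mapped
        unsigned short handle =
            static_cast<unsigned short>(mHandles.size());
        mHandles[sourceId] = handle;        // allocate next handle
        return handle;
    }

private:
    std::map<unsigned, unsigned short> mHandles;
};
```

The same map would also be used when resolving each bone's parent ID and the vertex bone assignments, so every reference ends up pointing at the remapped handle.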
-
- OGRE Moderator
- Posts: 7157
- Joined: Sun Jan 25, 2004 7:35 am
- Location: Brisbane, Australia
- x 535
Re: [PaperCut] OgreXMLConverter assumes bone IDs from 0 to N
In ogreskeleton.h it says:
You should also ensure that all bone handles are eventually contiguous (this is to simplify their compilation into an indexed array of transformation matrices).
-
- Gnoblar
- Posts: 23
- Joined: Thu Aug 12, 2010 1:52 pm
Re: [PaperCut] OgreXMLConverter assumes bone IDs from 0 to N
Kojack wrote:
In ogreskeleton.h it says: "You should also ensure that all bone handles are eventually contiguous (this is to simplify their compilation into an indexed array of transformation matrices)."

Ah. I completely missed that. It sounds more and more like the possible error is in the converter tool. Once one has both the skeleton and mesh files, the values can be rearranged, since there is no longer any need to keep the specific numbers used in the source files. I do still feel that the lack of data checking when cycling through is potentially harmful, if for no other reason than that it does not handle failure gracefully when there is a NULL record. The overhead of doing a quick check for NULL wouldn't be too painful, right?
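The guard being proposed has roughly this shape; Bone is a stand-in for Ogre::Bone, and countExtantBones is just an illustrative placeholder for the real per-bone work in loops like Entity::_updateRenderQueue:

```cpp
#include <cstddef>
#include <vector>

// Stand-in for Ogre::Bone, for illustration only.
struct Bone { int dummy; };

// Sketch of the proposed NULL guard for loops that walk handles 0..N-1:
// skip non-extant slots instead of dereferencing them. Counting extant
// bones stands in for the real per-bone work (debug rendering, merging,
// name-map building).
std::size_t countExtantBones(const std::vector<Bone*>& bones)
{
    std::size_t n = 0;
    for (std::size_t h = 0; h < bones.size(); ++h)
    {
        if (bones[h] == nullptr)
            continue;   // non-extant bone: skip gracefully
        ++n;            // ... real per-bone work would go here
    }
    return n;
}
```

Since it's one pointer comparison per slot on a list capped at OGRE_MAX_NUM_BONES, the overhead really should be negligible.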