Concerning transparency sorting, right now:
- it is written in many places in the code that the sorting is done by z;
- but in fact the sorting is done by distance.
The fact that the sorting is done by distance and not by z (in camera view) recently came up as a problem for one of our games, which has many 2D elements. The good thing is that, apparently, it hasn't been a problem for anyone until now. I would like to add the option to sort by z, but there are already many places where "sort by depth" is suggested in the names (of functions / structs) instead of "sort by distance".
The render queue is sorted by:
- OgreRenderQueueSortingGrouping.h / RadixSortFunctorDistance
==> this name is great: it sorts by distance. I would propose another functor next to this one, a RadixSortFunctorZ for example.
- DepthSortDescendingLess
==> this name is less good, because it sorts by distance too, not by z. (Although mathematically depth and distance could be synonymous, in 3D computer graphics depth often means z, at least to me.)
Then DepthSortDescendingLess calls
Code: Select all
// Different renderables, sort by depth
Real adepth = a.renderable->getSquaredViewDepth(camera);
and getSquaredViewDepth is from OgreNode :
Code: Select all
Real Node::getSquaredViewDepth(const Camera* cam) const
{
    Vector3 diff = _getDerivedPosition() - cam->getDerivedPosition();

    // NB use squared length rather than real depth to avoid square root
    return diff.squaredLength();
}
Question 1:
If I were to add the option to sort by z, what do I do with the existing names?
Should I rename getSquaredViewDepth => getSquaredViewDistance?
I feel that would cause more pain than help for almost everyone, given how widespread these functions are in Ogre. Also, I am not sure there is that much need for a z sort.
If I use "ZSortDescendingLess", for example, as the name for my sort, is it different enough from "DepthSortDescendingLess"? Or should the latter be renamed DistanceSortDescendingLess?
Question 2:
Is it OK with you if I simply propose a patch adding the z sort, without merging it to head, so that anyone who needs it from 1.9 can actually use it?
One big issue is that there is no real place to choose one strategy over the other (apart from a static variable... erk) without shaking up users too much.
Question 3:
I think transparent sorting could (not should!) be done by z anyway, not by distance. Here is an explanation.
Let's say:
- the camera is at [0;0;0], looking towards -Z;
- a renderable bigHuman1 is at (5, 0, -9);
- a renderable bigHuman2 is at (0, 0, -10);
- some fragments of bigHuman1 and bigHuman2 share the same 2D screen coordinates.
Obviously those fragments of bigHuman1 should be closer to the camera than those of bigHuman2. But with the distance calculation, bigHuman2 is sorted as closer (squared distance 100 versus 25 + 81 = 106).
There is no perfect answer for these cases (apart from shader tricks), since it highly depends on the geometry. That is why z sorting could be done anyway.
What do you think?
Best regards,
Pierre
EDIT: for those interested, transparent sorting is done this way:
1/ Your material's technique's pass tells whether it needs transparent sorting (or whether you have blending enabled).
2/ The render priority groups are created.
OgrePass::getTransparentSortingEnabled
is called by
OgreTechnique::isTransparentSortingEnabled,
which is used in RenderPriorityGroup::addRenderable to add the renderable to RenderPriorityGroup::mTransparents.
3/ Later, RenderPriorityGroup::mTransparents is sorted by
RenderPriorityGroup::sort,
which calls
RenderPriorityGroup::mTransparents.sort(camera);
mTransparents is a QueuedRenderableCollection, so the call is QueuedRenderableCollection::sort.
Inside this call, depending on the number of objects, different sorting functions are used (more or less fast, but with the same results).
In the end, it is always Renderable::getSquaredViewDepth(camera) that is called to perform the sort.
This last function calculates the squared distance between the camera and the renderable most of the time (it does not for Rectangle2D, for example).
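For completeness, step 1/ is what the transparent_sorting pass attribute controls on the script side. If I'm not mistaken, in a 1.x material script it looks roughly like this (hedged example; I believe the attribute defaults to on, and "force" sorts even without blending):

```
material Example/SortedAlpha
{
    technique
    {
        pass
        {
            scene_blend alpha_blend
            depth_write off
            // on (default) / off / force
            transparent_sorting on
        }
    }
}
```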