2D screen annotation of 3D rendered model

Ogreniz
Gnoblar
Posts: 3
Joined: Fri Sep 16, 2022 6:04 pm
x 1

2D screen annotation of 3D rendered model

Post by Ogreniz »

Ogre Version: 13.2
Operating System: Ubuntu 18.04
Render System: N/A

I am working on a use case where the 3D models rendered in the scene need to be annotated with rectangular boxes (in 2D screen coordinates).

The most intuitive way I explored first was using the models' AABBs (projecting the corners into 2D screen coordinates). This works roughly, but it ends up giving bad results when some of the bounding box corners fall outside the visible 2D screen space (for example when a model gets too close to the camera).

The second approach I used was for a non-real-time case. The model was rendered into a texture, then the texture was extracted into a bitmap. The contours of the model could be computed fairly easily by scanning the bitmap for lit-up pixels. This works well for ONE model at a time.
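
Roughly, the bitmap scan boils down to something like this (a simplified, untested sketch with OpenCV; the real code also handles extracting the texture, and the names here are just for illustration):

Code: Select all

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Sketch: given a grayscale render of a single model on a black background,
// find the tight 2D bounding rectangle of all lit-up pixels.
cv::Rect computeModelRect(const cv::Mat& renderedGray)
{
    // Binarize: anything brighter than the background counts as "model".
    cv::Mat mask;
    cv::threshold(renderedGray, mask, 0, 255, cv::THRESH_BINARY);

    // Collect the lit pixels and take their bounding rectangle.
    std::vector<cv::Point> litPixels;
    cv::findNonZero(mask, litPixels);
    if (litPixels.empty())
        return cv::Rect(); // model not visible

    return cv::boundingRect(litPixels);
}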

I am struggling to find a better solution than the AABB boxes for many models in a scene, while keeping a reasonable frame rate (let's say 25 FPS).

Thanks for your help,
Regards

rpgplayerrobin
Gnoll
Posts: 619
Joined: Wed Mar 18, 2009 3:03 am
x 353

Re: 2D screen annotation of 3D rendered model

Post by rpgplayerrobin »

Firstly, how do you project the 3D points into 2D screen coordinates? It sounds strange that the points stop working when they are outside of the camera's view (I think it should still work), so if you post your code I could try it myself and see why it does not.
And how do you render the rectangles around the meshes? Are they drawn in 2D in the UI, or are they actual 3D models? If they are actual 3D models I can understand that they act strangely when they come close to the camera, so I would use 2D here, otherwise you will always have problems with the camera's near-clip distance.

For meshes that are animated, I would calculate an AABB at runtime based on where all of their bones are (with a margin per mesh, since bones are not always a true representation of the model), because otherwise their AABB would be incorrect at runtime (it would either cover the whole animation or just the T-pose, both of which would be wrong).

For static meshes, I would try to use a similar kind of approach if you are facing problems with them.
There is something called "extremity points" that can be generated for meshes; they are mostly used for sorting transparency, but they could possibly be used the same way as in the animated-mesh example above. Just take each extremity point and handle it as if it were a bone.
That would probably just give the same result as an AABB though, but you could instead build a non-axis-aligned bounding box, which gives more detailed bounds and could easily be used with extremity points or bones.
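
Something along these lines is what I mean for the runtime AABB from bones (a rough, untested sketch; the margin is just an illustrative value you would tune per mesh, and extremity points could be merged in the same way):

Code: Select all

// Sketch: recompute a world-space AABB from the entity's bones every frame.
Ogre::AxisAlignedBox ComputeBoneAABB(Ogre::Entity* entity, Ogre::SceneNode* node, Ogre::Real margin)
{
	Ogre::AxisAlignedBox box;
	Ogre::SkeletonInstance* skeleton = entity->getSkeleton();
	for (unsigned short i = 0; i < skeleton->getNumBones(); i++)
	{
		// Bone derived positions are in entity space, so bring them into world space.
		Ogre::Bone* bone = skeleton->getBone(i);
		box.merge(node->_getFullTransform() * bone->_getDerivedPosition());
	}

	// Grow the box a bit, since bones do not reach the actual surface of the mesh.
	if (!box.isNull())
	{
		box.setMinimum(box.getMinimum() - Ogre::Vector3(margin));
		box.setMaximum(box.getMaximum() + Ogre::Vector3(margin));
	}
	return box;
}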

Ogreniz
Gnoblar
Posts: 3
Joined: Fri Sep 16, 2022 6:04 pm
x 1

Re: 2D screen annotation of 3D rendered model

Post by Ogreniz »

I project the 8 corners (of the AABB provided by Ogre) using something like:

Code: Select all

    // Projects a world-space point to 2D pixel coordinates.
    // Note: Ogre's Matrix4 * Vector3 performs the perspective (w) divide,
    // so projPt is already in NDC here (points behind the camera flip sides
    // after that divide, which is probably why off-screen corners misbehave).
    const auto getScreenPt = [=] (const Ogre::Vector3& point)
    {
        auto projPt = camera_->getProjectionMatrix() * (camera_->getViewMatrix() * point);
        return cv::Point{
            // Map NDC x/y from [-1, 1] to pixel coordinates (y flipped).
            static_cast<int>(((projPt.x / 2) + 0.5f) * static_cast<float>(screen_width)),
            static_cast<int>((1 - ((projPt.y / 2) + 0.5f)) * static_cast<float>(screen_height))
        };
    };

…from which I clamp the 2D points with respect to the screen limits and compute the bounding rectangle of all 8 points.
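
In code the clamping part is roughly this (a sketch; "corners2D" stands in for the container of the 8 projected points, and std::clamp needs <algorithm> / C++17):

Code: Select all

    // Clamp the 8 projected corners to the screen and take their bounding rectangle.
    std::vector<cv::Point> clamped;
    for (const cv::Point& p : corners2D)
    {
        clamped.emplace_back(std::clamp(p.x, 0, screen_width - 1),
                             std::clamp(p.y, 0, screen_height - 1));
    }
    const cv::Rect box = cv::boundingRect(clamped);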

In all cases, the output is headless (no screen), so I extract a bitmap of the rendered image from the texture. The 2D rectangle drawing is performed on the (2D) bitmap.

Indeed, I forgot to mention it, but the 3D models are animated humanoids.

If I understand correctly, you suggest that I should compute the AABB myself instead of using the Ogre-provided AABB? (...or compute a non-axis-aligned bounding box?)

I don't understand why the box is not tighter... Is that because of the animation?

Here attached are 2 images of the sequence showing the AABB of the model (note that the red rectangles are wider than the projected AABB, because the subsystem that adds them keeps a margin).

Also attached are 2 images that show the result of the boxes using bitmap scanning (which is the best result I wish to get, but the computation is too expensive to keep up a decent frame rate):

[Attached images: two frames with the AABB-based rectangles and two frames with the bitmap-scan rectangles]

rpgplayerrobin
Gnoll
Posts: 619
Joined: Wed Mar 18, 2009 3:03 am
x 353

Re: 2D screen annotation of 3D rendered model

Post by rpgplayerrobin »

Yes, exactly as I wrote, the animation does not automatically update the AABB, so you need to do it yourself with its bones.
But even if you do that in 3D, you will get incorrect results compared to what you want.

Getting correct results is actually rather easy:
For each bone, get its position in screen coordinates (2D) and store it in a list, e.g. "std::vector<Vector2> bone2DPositions".
Then do something like this to calculate the 2D AABB on the screen where your rectangle should be rendered:

Code: Select all

// Merge every projected bone position into a 2D AABB (z is unused).
AxisAlignedBox tmp2DAABB;
for(unsigned int i = 0; i < bone2DPositions.size(); i++)
     tmp2DAABB.merge(Vector3(bone2DPositions[i].x, bone2DPositions[i].y, 0.0f));

When you render the 2D rectangle over that AABB it will give almost correct results (not fully exact, since the bones do not cover all the triangles of course).

To do it exactly you would have to either add bones at the outer edges of the mesh, or use virtual bones for that.
By virtual bones, I mean points that you precompute relative to the mesh's real bones at the outer edges of the mesh (extremity points), and then at runtime you get each virtual bone's 3D position from the orientation and position of its real bone.
This might be a bit complex to explain, but I think you can figure it out.
Then you just use the virtual bones instead of the real bones, which would create a perfect 2D rectangle in screen coordinates around the character.
With the virtual bones, you can also add an additional margin when you precompute them if you want, so you get a bit of distance between the rendered 2D rectangle and the actual character model.
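
A rough sketch of the virtual bone idea (untested; the names are illustrative and scale is ignored for simplicity):

Code: Select all

// A "virtual bone" is just a point stored in a real bone's local space,
// converted back to world space each frame from that bone's current transform.
struct VirtualBone
{
	Ogre::Bone* parent;        // the real bone this point follows
	Ogre::Vector3 localOffset; // precomputed offset relative to that bone
};

// Precompute: store a world-space extremity point relative to a real bone.
VirtualBone MakeVirtualBone(Ogre::Bone* bone, Ogre::SceneNode* node, const Ogre::Vector3& worldPoint)
{
	// Bring the world point into entity space, then into the bone's local space.
	Ogre::Vector3 entityLocal = node->_getFullTransform().inverse() * worldPoint;
	Ogre::Vector3 boneLocal = bone->_getDerivedOrientation().Inverse() * (entityLocal - bone->_getDerivedPosition());
	return { bone, boneLocal };
}

// Runtime: get the virtual bone's current world position from its real bone.
Ogre::Vector3 GetVirtualBoneWorldPosition(const VirtualBone& vb, Ogre::SceneNode* node)
{
	Ogre::Vector3 entityLocal = vb.parent->_getDerivedPosition() + vb.parent->_getDerivedOrientation() * vb.localOffset;
	return node->_getFullTransform() * entityLocal;
}

Then you project the virtual bone world positions to 2D exactly like the real bones and merge them into the 2D AABB.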

paroj
OGRE Team Member
OGRE Team Member
Posts: 1994
Joined: Sun Mar 30, 2014 2:51 pm
x 1074

Re: 2D screen annotation of 3D rendered model

Post by paroj »

rpgplayerrobin wrote: Tue Sep 20, 2022 12:47 am

Yes, exactly as I wrote, the animation does not automatically update the AABB, so you need to do it yourself with its bones.
But even if you do that in 3D, you will get incorrect results compared to what you want.

Getting correct results is actually rather easy:

There is also https://ogrecave.github.io/ogre/api/lat ... 5acdd59909

rpgplayerrobin
Gnoll
Posts: 619
Joined: Wed Mar 18, 2009 3:03 am
x 353

Re: 2D screen annotation of 3D rendered model

Post by rpgplayerrobin »

That is good, I did not know that existed.

But either way, it will not work in this situation, as the 3D AABB will give invalid results compared to the wanted 2D AABB built from the projected bone points (since that takes the view of the camera into consideration, which the 3D AABB does not).

Example using normal 3D AABB (from bones or not, wrong either way):
(The black ugly snake is the mesh, blue lines are how it actually shows up in the camera, red is the rectangle rendered from the AABB)
[Image: example using the normal 3D AABB]

Example using my 2D AABB from camera view method instead:
(The black ugly snake is the mesh, blue lines are how it actually shows up in the camera, red points are my virtual bones and red is the AABB calculated from the virtual bones in 2D that also is the rectangle rendered)
[Image: example using the 2D AABB built from the camera view]

Ogreniz
Gnoblar
Posts: 3
Joined: Fri Sep 16, 2022 6:04 pm
x 1

Re: 2D screen annotation of 3D rendered model

Post by Ogreniz »

Thanks for the hints, I appreciate it.

"For each bone, gets its position in screen coordinates (2D)"... Could you give me some clues which API I should explore to query the bones... in world coordinates? (...and in respect with current animation state?)... because my understanding is that this must be in world coordinate to make sense when transposing the values in 2D screen space.

One side question: I thought this method would help me see the skeleton of my model walking on the screen: Entity::setDisplaySkeleton. But I only see some axis arrows floating around as the model moves (no clue about the bones). From the sources, it seems that this feature only works when the model is at the origin? Is that true?

Code: Select all

        // HACK to display bones
        // This won't work if the entity is not centered at the origin
        // TODO work out a way to allow bones to be rendered when Entity not centered
        if (mDisplaySkeleton && hasSkeleton() && mManager && mManager->getDebugDrawer())
        {
            for (Bone* bone : mSkeletonInstance->getBones())
            {
                mManager->getDebugDrawer()->drawBone(bone);
            }
        }
rpgplayerrobin
Gnoll
Posts: 619
Joined: Wed Mar 18, 2009 3:03 am
x 353

Re: 2D screen annotation of 3D rendered model

Post by rpgplayerrobin »

You can just loop through the bones from the entity:

Code: Select all

for(unsigned short i = 0; i < tmpEntity->getSkeleton()->getNumBones(); i++)
{
	Bone* tmpBone = tmpEntity->getSkeleton()->getBone(i);
	...
}

Yeah, you need to get the bones in world space and then convert them to 2D screen coordinates.
Here is the function I am using to get a bone's position in world space:

Code: Select all

// Gets the world position of a bone
Vector3 GetPosition(Bone* bone, SceneNode* node)
{
	// Get the position of the bone
	Vector3 tmpPosition = node->_getFullTransform() * bone->_getDerivedPosition();

	// Return the position of the bone
	return tmpPosition;
}

The world position will of course be where the bone actually is, even if multiple animations are running or if you have rotated/positioned the scene node.
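
And to go from that world position to the 2D screen position, something like this should do it (a sketch; it mirrors the projection code posted earlier in the thread):

Code: Select all

// Sketch: world-space position -> 2D screen coordinates in pixels.
// Note that Ogre's Matrix4 * Vector3 already performs the perspective (w) divide.
Vector2 WorldToScreen(const Vector3& worldPosition, Camera* camera, int screenWidth, int screenHeight)
{
	Vector3 ndc = camera->getProjectionMatrix() * (camera->getViewMatrix() * worldPosition);
	return Vector2((ndc.x * 0.5f + 0.5f) * (float)screenWidth,
	               (1.0f - (ndc.y * 0.5f + 0.5f)) * (float)screenHeight);
}

The result of that for every bone is what goes into the "bone2DPositions" list from my earlier post.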

I have never tried "drawBone" so I have no idea how that works.
But you can probably just create an arbitrary entity on each bone every frame if you really want to debug them, and destroy those entities again right before creating them for the next frame.
Some simple code like this will work, just place it in your per-frame update function:

Code: Select all

SceneNode* tmpCharacterSceneNode = YOUR CHARACTER SCENE NODE HERE;
Entity* tmpCharacterEntity = YOUR CHARACTER ENTITY HERE;

class CMyModel // Yes, you can define classes inside of functions like this
{
public:
	Entity* m_entity;
	SceneNode* m_sceneNode;

	CMyModel(std::string meshName, Vector3 position, float scale)
	{
		m_entity = app->m_SceneManager->createEntity(meshName);
		m_sceneNode = app->m_SceneManager->getRootSceneNode()->createChildSceneNode(CGeneric::GenerateUniqueName());
		m_sceneNode->setPosition(position);
		m_sceneNode->setScale(Vector3(scale, scale, scale));
		m_sceneNode->attachObject(m_entity);
	}

	~CMyModel()
	{
		app->m_SceneManager->destroyEntity(m_entity);
		app->m_SceneManager->destroySceneNode(m_sceneNode);
	}
};

// Destroy all models from last frame (if any); since the vector is static it will remain with the same objects here to the next frame
static std::vector<CMyModel*> tmpModels;
for (size_t i = 0; i < tmpModels.size(); i++)
	delete tmpModels[i];
tmpModels.clear();

// Loop through all bones and create models at them for debug purposes, and add them to the static vector for the next frame
for (unsigned short i = 0; i < tmpCharacterEntity->getSkeleton()->getNumBones(); i++)
{
	Bone* tmpBone = tmpCharacterEntity->getSkeleton()->getBone(i);
	Vector3 tmpBonePosition = GetPosition(tmpBone, tmpCharacterSceneNode); // The same function I posted above in this post

	CMyModel* tmpModel = new CMyModel("sphere.mesh", tmpBonePosition, 0.1f); // Make sure to use a mesh here that will be good for debugging, like a normal sphere
	tmpModels.push_back(tmpModel);
}