Picking: linking x/y screen coordinates to scene components

What it says on the tin: a place to discuss proposed new features.
guido
Gnoblar
Posts: 22
Joined: Fri Jan 18, 2008 10:44 pm

Picking: linking x/y screen coordinates to scene components

Post by guido »

The Ogre team appears to take the position that picking should be done at the physics-engine level. Arguably this is true, but it rests on the assumption that there actually is a physics engine, which is plausible for many typical games being built right now.

However, there are a large number of developers, developers who want to build an RTS, an RPG, an adventure game or a puzzle game, who do not need a full-fledged physics engine. They most definitely don't want the overhead of a physics engine tracking every movable object, and they don't want to choose a physics engine or read through its documentation just to get one very simple piece of functionality that is already implemented halfway.

Another good argument is that picking is not a physics issue; it is a GUI-tier issue. If I click on a control in a typical Java Swing application, I most definitely don't need a third-party logic-tier module to identify the control I just clicked on. Just as rendering a Swing control to the screen is the G in GUI for a Swing application, Ogre can be perceived as the G for a game (or some other 3D graphics environment). It is also very arguable that dependencies get badly mixed up when a developer decides to switch to a different physics module, which goes against the design philosophy of Ogre.

Perhaps a neat new function on RenderTarget? Just pass an x and a y and get back an iterator with accurate picking information about what lies behind that point? Maybe with some simple extra parameters to determine what type of object is considered valid?

It should be straightforward, and it would save a great deal of time for a very large part of Ogre's users.
Owen
Google Summer of Code Student
Posts: 91
Joined: Mon May 01, 2006 11:36 am
x 21

Post by Owen »

It sounds like Ogre::RaySceneQuery is what you are looking for. See Intermediate Tutorial 3.
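
From memory, the basic usage looks roughly like this (treat it as a sketch rather than copy-paste code; mCamera, mSceneMgr and the normalised mouse coordinates are assumed to already exist):

Code: Select all

// mouseX / mouseY normalised to [0, 1] relative to the viewport
Ogre::Ray ray = mCamera->getCameraToViewportRay(mouseX, mouseY);

Ogre::RaySceneQuery* query = mSceneMgr->createRayQuery(ray);
query->setSortByDistance(true);

Ogre::RaySceneQueryResult& result = query->execute();
for (Ogre::RaySceneQueryResult::iterator it = result.begin(); it != result.end(); ++it)
{
    if (it->movable)            // an Entity, BillboardSet, ...
    {
        // note: it->distance is the distance to the bounding volume
    }
    else if (it->worldFragment) // world geometry, if the scene manager provides it
    {
        // it->worldFragment->singleIntersection is the hit point
    }
}
mSceneMgr->destroyQuery(query);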
guido
Gnoblar
Posts: 22
Joined: Fri Jan 18, 2008 10:44 pm

Post by guido »

After wrestling with that class for about a day, trying various interpretations of the public functions in the class reference and searching Google for solutions, you will reach the conclusion that this specific query only finds the intersection with an axis-aligned bounding box (AABB).

Just like you, I read the tutorial fairly thoroughly, only realizing way later that it really does not find the intersection point with movable objects, and I don't know yet whether the intersection point found for world fragments is relative to some bounding box.
Klaim
Old One
Posts: 2565
Joined: Sun Sep 11, 2005 1:04 am
Location: Paris, France
x 56

Post by Klaim »

guido wrote: After wrestling with that class for about a day, trying various interpretations of the public functions in the class reference and searching Google for solutions, you will reach the conclusion that this specific query only finds the intersection with an axis-aligned bounding box (AABB).

Correct me if I'm wrong, but it is entirely dependent on the scene manager. It's the scene manager that sets up the bounding boxes, and this raycast collides against those bounding boxes. In the OctreeSceneManager, for example, it is indeed an AABB. I don't know whether other bounding volumes are used elsewhere, but that's the thing to look for.

Anyway, it is recommended to use some physics engine because the information you need is not purely visual. There are also libraries small enough to let you do just precise raycasts. I remember a wiki page, too, that explains how to do triangle-level picking (in an inefficient but working way).
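
If I remember right, the heart of that wiki approach is nothing more than a standard ray/triangle test repeated for every triangle of the mesh, something like this (a rough sketch, not the actual wiki code; Ogre::Math::intersects() can do the same test for you):

Code: Select all

// Moeller-Trumbore ray/triangle intersection.
// Returns true and the distance t along the ray if the ray
// hits the triangle (v0, v1, v2).
bool rayHitsTriangle(const Ogre::Ray& ray,
                     const Ogre::Vector3& v0,
                     const Ogre::Vector3& v1,
                     const Ogre::Vector3& v2,
                     Ogre::Real& t)
{
    const Ogre::Real EPS = 1e-6f;
    Ogre::Vector3 e1 = v1 - v0;
    Ogre::Vector3 e2 = v2 - v0;
    Ogre::Vector3 p  = ray.getDirection().crossProduct(e2);
    Ogre::Real det   = e1.dotProduct(p);
    if (det > -EPS && det < EPS)
        return false;                     // ray is parallel to the triangle
    Ogre::Real invDet = 1.0f / det;
    Ogre::Vector3 s = ray.getOrigin() - v0;
    Ogre::Real u = s.dotProduct(p) * invDet;
    if (u < 0.0f || u > 1.0f)
        return false;
    Ogre::Vector3 q = s.crossProduct(e1);
    Ogre::Real v = ray.getDirection().dotProduct(q) * invDet;
    if (v < 0.0f || u + v > 1.0f)
        return false;
    t = e2.dotProduct(q) * invDet;        // distance along the ray
    return t >= 0.0f;
}

Run that over every triangle, keep the smallest t, and you have your precise hit. The expensive part is not the test itself but getting the triangles out of the hardware buffers and repeating it for every triangle of every candidate object.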
nullsquared
Old One
Posts: 3245
Joined: Tue Apr 24, 2007 8:23 pm
Location: NY, NY, USA
x 11

Post by nullsquared »

It's not that you need to depend on a physics engine, it's that you should not depend on Ogre. Meaning - do you ask OpenGL or Direct3D about where your AI unit should go to next? Of course not. Same idea here. It's logic-based picking, you need to pick the logical object that is represented by Ogre visually. Whether you do this logical picking with a physics engine, a collision engine, or on your own, is a different story. The reason physics engines are the most popular for this task is because they already have efficient intersection routines by nature - geometry is stored in a collision-optimized format in system RAM; rendering, however, stores geometry in a rendering-optimized format on GPU RAM, and this is obviously not what you should be using for collisions.
KungFooMasta
OGRE Contributor
Posts: 2087
Joined: Thu Mar 03, 2005 7:11 am
Location: WA, USA
x 16

Post by KungFooMasta »

I disagree that a physics or other outside lib should be the solution to this problem. In fact, there was a post a while back with some code that shows how to raycast down to the polygon level. This code should be integrated into Ogre; just as there are utility classes that can parse an int to a string and vice versa, there should be a function that can return a list of movable objects given a render target and an x, y 2D pair. It doesn't involve AI, it doesn't involve physics; it's a part of space/scene management. I'm not volunteering to do this, but trying to shake it off like it doesn't belong in Ogre is wrong. :wink:
Creator of QuickGUI!
beaugard
OGRE Contributor
Posts: 265
Joined: Sun Mar 25, 2007 1:48 pm
x 2

Post by beaugard »

KungFooMasta wrote: In fact, there was a post a while back with some code that shows how to raycast down to the polygon level.
The code you refer to uses brute-force ray-vs-polygon tests for all polys in an object, which is way too inefficient for most apps. Sinbad would get a forum post per week complaining about it ;)
KungFooMasta wrote: just as there are utility classes that can parse an int to a string and vice versa
But these are trivial tasks; raycasts are very non-trivial, and a "good" implementation will vary between apps. IMHO, adding this to the Ogre code base would be madness.

Maybe adding a demo with integration of a collision engine would be a good idea, like the BSP collision demo?
guido
Gnoblar
Posts: 22
Joined: Fri Jan 18, 2008 10:44 pm

Post by guido »

I am fairly certain that I've read all the forum posts and wiki pages and the like where solutions are mentioned. They all refer to GetMeshInformation from the code snippets section.

After studying it, this code seemed to make a wrong assumption about how triangles are correlated with the set of vertices, such that the default models shipped with Ogre worked, but meshes from the Maya Ogre exporter caused a crash.

But even if this code is correct (which I'm pretty sure it isn't), that would only lower the threshold for incorporating picking into Ogre, which is what I hope for. :)

But if an Ogre expert, or someone who has already built a solution, reads this: please post the solution.

Either you just solve my problem, or you solve a huge load of people's problems. :) Keep it simple, as they say.
KungFooMasta
OGRE Contributor
Posts: 2087
Joined: Thu Mar 03, 2005 7:11 am
Location: WA, USA
x 16

Post by KungFooMasta »

beaugard wrote: The code you refer to uses brute-force ray-vs-polygon tests for all polys in an object, which is way too inefficient for most apps. Sinbad would get a forum post per week complaining about it
It should be there for use, but that doesn't mean people have to use it. (I don't think Ogre is minimalistic; if it were, well, I haven't used Compositors yet, so we should yank those out.. :P ) If it's too slow, people can always roll their own. And once the tracks are laid out, people might improve it and/or derive other methods.

If this is really dependent on the scene manager, then scene managers should implement this functionality, no? I'm not asking anybody to do this, but I think people should acknowledge that this functionality is related to scene management. With raycasts come query results. Raycasting to the polygon level is not fundamentally different; the only real difference is the accuracy and granularity of the results. :)

Here is the link in the wiki:
http://www.ogre3d.org/wiki/index.php/Ra ... ygon_level
Creator of QuickGUI!
Nauk
Gnoll
Posts: 653
Joined: Thu May 11, 2006 9:12 pm
Location: Bavaria
x 36

Post by Nauk »



I have been using that one for quite some time and so far I am totally satisfied with the performance. I use it for mouse picking in Artifex and for simple collision detection in our game project, and it does quite well in both.

I think there are many cases where using a full-blown physics library is just total overkill and makes things unnecessarily more complicated and bloated than they have to be. Irrlicht has simple collision built in, and IMHO it has its place and use in a graphics library. Where is the ideological difference between having a raycasting system down to bounding-box level and raycasting down to polygon level? I don't see it, but I can totally understand the decision not to put it into Ogre for the reasons stated above.

Since there seems to be high demand for such a thing, a friend and I decided to pack a little bit of functionality into a small Ogre collision library. We will release it sometime this week, open source under the MIT license. If anyone feels like contributing to or improving it, consider yourself happily invited; it will be hosted either on SourceForge or Google Code.
Azgur
Goblin
Posts: 264
Joined: Thu Aug 21, 2008 4:48 pm

Post by Azgur »

IMO, ray casting to the polygon level makes sense in a graphics engine for a very simple reason:
the difference in representation between a physics engine and a graphics engine.

A lot of objects are represented as cubes or spheres inside a physics world, which usually makes sense for game logic.
However, when I want, for example, to prevent a camera from going through a wall, I'm only interested in the graphics world, which might be very different from the physics world.

I do agree collision detection (and response) between objects is way beyond the scope of a graphics engine.
beaugard
OGRE Contributor
Posts: 265
Joined: Sun Mar 25, 2007 1:48 pm
x 2

Post by beaugard »

KungFooMasta wrote: If this is really dependent on the scene manager, then scene managers should implement this functionality, no?
Yep, a custom scene manager should implement polygon-level ray picking. I just think adding it to the core (the default scene manager, the OctreeSceneManager) would be a mistake.
guido
Gnoblar
Posts: 22
Joined: Fri Jan 18, 2008 10:44 pm

Post by guido »

I think we should stop talking about whether picking should be in Ogre or not. We don't decide that; the Ogre team does. What we should talk about is how to implement it.

So (also in response to Beaugard):

I never asked for raycasting. I asked for picking. Picking requires a RenderTarget, Camera, Viewport, you know, the works. Figuring out what is in front of what is most definitely a trivial task for these classes. In fact they can effectively determine precisely this for an entire screen of Rays. This is precisely what a 3D engine does when it is rendering.

A very simple solution would be to temporarily render to a screen of one pixel, using the Entity's memory address rather than its colour.
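(Very roughly: give every pickable entity a small integer ID rather than using its address directly, have a picking pass write that ID as a colour, then read the single pixel back and decode it. Render-to-texture plumbing aside, the bookkeeping is just a couple of shifts; a sketch:)

Code: Select all

// Sketch only: assumes every pickable entity has been assigned a small
// integer ID and that a picking pass renders that ID instead of a colour.
inline Ogre::ColourValue encodePickId(Ogre::uint32 id)
{
    // pack the low 24 bits of the ID into the RGB channels
    return Ogre::ColourValue(((id >> 16) & 0xFF) / 255.0f,
                             ((id >>  8) & 0xFF) / 255.0f,
                             ( id        & 0xFF) / 255.0f);
}

inline Ogre::uint32 decodePickId(const Ogre::ColourValue& c)
{
    return (Ogre::uint32(c.r * 255.0f + 0.5f) << 16) |
           (Ogre::uint32(c.g * 255.0f + 0.5f) <<  8) |
            Ogre::uint32(c.b * 255.0f + 0.5f);
}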

Figuring out how to filter objects is, I think, the real issue: which objects should be selectable? MovableObjects? WorldFragments? Particles, billboards..? What should be done with transparent objects? And should some objects sometimes be masked out on semantic criteria? How do you specify these things, i.e. how do you pass them to the function call? Which class should be responsible?

Concrete questions;

Which class should offer the function?
What arguments should be passed?
What should be returned?


To me, for now, the answers are:

The class should be RenderTarget, because RenderTarget serves as the canvas that an overlay GUI interfaces with and depends on. It is also practical because it has the relevant x/y coordinate system.


Necessary arguments:

An extensible way of specifying which classes of objects to filter on (the Ogre team might want to add more pickable classes later)

An alpha threshold or boolean


The function must return a class that supplies:

The nearest object conforming to the set criteria
The distance to that object

Perhaps also the colour of the texture at the intersection point, and an interface to change that colour or even the whole texture (I read a post by someone who wanted to do exactly this).

Though the last option isn't something that is typically part of a graphics engine, it would be really cool to play with, and it would make life a lot easier for people building Ogre graphics editors.
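
Put together, the interface I have in mind would look something like this (purely a sketch to make the proposal concrete; every name here is hypothetical):

Code: Select all

// All names hypothetical; this is just the shape of the proposal.
struct PickResult
{
    Ogre::MovableObject* nearestObject;   // nearest object matching the filter
    Ogre::Real           distance;        // distance from the camera to the hit
    // possibly: the texture colour at the intersection point
};

// Hypothetical addition to RenderTarget (or wherever it ends up living).
// typeMask selects which kinds of objects are pickable (extendable),
// alphaThreshold decides whether transparent pixels count as a hit.
PickResult pick(size_t x, size_t y,
                Ogre::uint32 typeMask,
                Ogre::Real alphaThreshold);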


So... what do you think/need? Did I miss something? Any clever ideas on how to further refine this into more concrete specs?
nullsquared
Old One
Posts: 3245
Joined: Tue Apr 24, 2007 8:23 pm
Location: NY, NY, USA
x 11

Post by nullsquared »

guido wrote: I never asked for raycasting. I asked for picking. Picking requires a RenderTarget, Camera, Viewport, you know, the works. Figuring out what is in front of what is most definitely a trivial task for these classes. In fact they can effectively determine precisely this for an entire screen of Rays. This is precisely what a 3D engine does when it is rendering.
That is precisely where you go wrong. A RenderTarget, Camera, Viewport, etc. have nothing to do with the scene. They just render things. They might have rendered a plain old triangle, or a SubEntity, or an Entity of SubEntities, or a Billboard, or it might have been a manual Direct3D/OpenGL call that drew something, or it might be something that wasn't cleared from the previous frame, etc.

Thus, the SceneManager should be in charge of this 'picking', which is indeed ray casting. 'Picking' what's at [x,y] is simply a matter of transforming [x,y] by the inverse of the view/projection matrix and then ray casting from there.
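
For example (a rough sketch, assuming the usual Ogre 1.x matrix classes; Camera::getCameraToViewportRay already does more or less exactly this for you):

Code: Select all

// screenX / screenY in [0, 1] relative to the viewport
Ogre::Matrix4 invVP =
    (cam->getProjectionMatrix() * cam->getViewMatrix()).inverse();

// points on the near and far plane in normalised device coordinates
// (assuming the -1..+1 depth range of getProjectionMatrix())
Ogre::Vector3 ndcNear(2.0f * screenX - 1.0f, 1.0f - 2.0f * screenY, -1.0f);
Ogre::Vector3 ndcFar (2.0f * screenX - 1.0f, 1.0f - 2.0f * screenY,  1.0f);

Ogre::Vector3 nearPt = invVP * ndcNear;   // Matrix4::operator* divides by w
Ogre::Vector3 farPt  = invVP * ndcFar;

Ogre::Ray pickRay(nearPt, (farPt - nearPt).normalisedCopy());
// ...then hand pickRay to whatever actually performs the ray cast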
guido wrote: This is precisely what a 3D engine does when it is rendering.
No, it's not. The 3D engine renders triangles, not entities. When an entity (SubEntity, actually) renders, it sends its triangle data to the GPU for rendering. It does not say "oh, OK, so I'll end up precisely at [345, 121] on the screen."
guido wrote: The class should be RenderTarget, because RenderTarget serves as the canvas that an overlay GUI interfaces with and depends on. It is also practical because it has the relevant x/y coordinate system.
Definitely not. The render target should know nothing about the scene. Also, a GUI should not depend on a render target, but rather on a viewport.

The problem is that you seem to think that once an object is rendered, it carries some sort of information along with it. It doesn't. It is merely a colour now. Nothing more, nothing less. Not even alpha. Once something is alpha-blended, you cannot programmatically say "oh, yes, that's a transparent object"; it is transparent to the human eye, but not to the computer, as it is merely a colour.
KungFooMasta
OGRE Contributor
Posts: 2087
Joined: Thu Mar 03, 2005 7:11 am
Location: WA, USA
x 16

Post by KungFooMasta »

beaugard wrote: I just think adding it to the core (the default scene manager, the OctreeSceneManager) would be a mistake.
I don't understand how this would be a mistake. A mistake would be creating an Ogre::TexturePtr and holding onto it while shutting down Ogre.

There are 2 scenarios:

1. Hey look, a function that returns the first movable object hit with a raycast, with high precision! I need such a feature, I'm glad Ogre has these utility functions in here to make things easy.

2. Hey look, there is an inefficient function that I don't need. I just won't use it. I could even roll my own or improve this if I needed to.

So.. where is this mistake you're talking about?
Creator of QuickGUI!
nullsquared
Old One
Posts: 3245
Joined: Tue Apr 24, 2007 8:23 pm
Location: NY, NY, USA
x 11

Post by nullsquared »

KungFooMasta wrote: 2. Hey look, there is an inefficient function that I don't need. I just won't use it. I could even roll my own or improve this if I needed to.
The problem is that you understand perfectly well why it's inefficient, but others won't. Newbies will constantly come with something like:
So I'm using the PreciseRaySceneQuery to pick my RTS units (and some of the units can programmatically 'pick' other units), and once I have about 50 or more units spawned, it gets very slow. Each unit is about 5000 triangles or so. Here's my code:

Code: Select all

Ogre::PreciseRaySceneQuery *query = sceneMgr->createPreciseRaySceneQuery(ray);
query->execute(); // <-- this is where it is very slow, and causes choppy gameplay
And then we tell him/her that this is slow because it iterates through all of the triangles manually, and that it has to download vertex/index data from the GPU, etc.
Hm. I think the Ogre team should optimize this function. Ogre is well optimized everywhere but here, and this is a very useful function, so it should be optimized just like the rest of the engine.
Catch my drift?
KungFooMasta
OGRE Contributor
Posts: 2087
Joined: Thu Mar 03, 2005 7:11 am
Location: WA, USA
x 16

Post by KungFooMasta »

That makes sense. Alternatively, you could just comment the function, saying:
NOTE: This function can traverse down to the polygon level, resulting in slower performance. For large and/or complex scenes, consider using a physics library.
Creator of QuickGUI!
Jabberwocky
OGRE Moderator
Posts: 2819
Joined: Mon Mar 05, 2007 11:17 pm
Location: Canada
x 218

Post by Jabberwocky »

I think selection/picking falls well within the range of a graphics engine's responsibility. Even if you have a physics engine, you may still want the fine-grained pixel-precision of graphics-based selection, which a physics engine can't provide.

I agree with those arguing that performance is a concern. But there are a million ways you can shoot yourself in the foot by misusing Ogre, so I don't think performance concerns are a good enough reason not to include it. I would be in favor of adding it to the OctreeSceneManager, if we could think of a good way of doing it.

I like Guido's point: it would be great to move on and talk about how it could be most efficiently implemented.
nullsquared
Old One
Posts: 3245
Joined: Tue Apr 24, 2007 8:23 pm
Location: NY, NY, USA
x 11

Post by nullsquared »

Jabberwocky wrote: it would be great to move on and talk about how it could be most efficiently implemented.
That's been my point all along: it can't be efficiently implemented. Ogre's mesh structure is optimized for rendering (data is stored on the GPU, etc.), not for intersections/collisions (for picking).
KungFooMasta
OGRE Contributor
Posts: 2087
Joined: Thu Mar 03, 2005 7:11 am
Location: WA, USA
x 16

Post by KungFooMasta »

Take a deep breath, relax, and try to open your mind. :)
Creator of QuickGUI!
Jabberwocky
OGRE Moderator
Posts: 2819
Joined: Mon Mar 05, 2007 11:17 pm
Location: Canada
x 218

Post by Jabberwocky »

nullsquared wrote:
Jabberwocky wrote: it would be great to move on and talk about how it could be most efficiently implemented.
That's been my point all along: it can't be efficiently implemented. Ogre's mesh structure is optimized for rendering (data is stored on the GPU, etc.), not for intersections/collisions (for picking).
Notice I said most efficiently implemented. That's a relative term. There are obviously faster and slower implementations of this.

Graphical applications use this kind of selection all the time, without the use of a physics engine. What is it that these applications do that Ogre doesn't?

As an analogy, compositors can be expensive. If someone ran 5 separate, 10-pass compositors, then cried when their framerate hit 0.03, we wouldn't freak out and remove compositors from the engine. Instead, we would tell the person they are misusing compositors.

Similarly for graphics-based picking. There are going to be certain situations where it is OK. Take, for example, one of those point-and-click adventure games. If the scene is simple, and we only run the picking routine when the user clicks the mouse, I somehow doubt Ogre is going to grind to a halt. Maybe some editor tools would also make good use of this mouse picking. Or an architecture CAD program, where it might be perfectly acceptable to have a short pause when you click in the scene. If graphics-based picking is less appropriate for some other applications, no worries; don't use it for those.

Having to integrate a physics engine for picking seems like overkill for many kinds of applications. Plus, as I noted before, sometimes you want exact-pixel picking, which a physics engine does not provide.
nullsquared
Old One
Posts: 3245
Joined: Tue Apr 24, 2007 8:23 pm
Location: NY, NY, USA
x 11

Post by nullsquared »

Jabberwocky wrote: Graphical applications use this kind of selection all the time, without the use of a physics engine. What is it that these applications do that Ogre doesn't?
Either they use an accelerated structure of the kind physics engines use, or they don't use video-RAM vertex buffers, or they keep system-RAM shadow copies.
Jabberwocky wrote: Having to integrate a physics engine for picking seems like overkill for many kinds of applications.
I didn't say you need a physics engine. I said you may want a physics engine. You can also use collision-based libraries, or you can provide your own accelerated structure for intersections.
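
Even something as simple as a system-RAM copy of the triangles, kept next to each pickable object, gets you most of the way for mouse picking (a rough sketch; filling it in from the mesh at load time is the tedious part):

Code: Select all

#include <limits>
#include <vector>

// Minimal CPU-side "collision mesh", filled once at load time from
// whatever source is convenient (the mesh data, the exporter, ...).
struct CollisionMesh
{
    std::vector<Ogre::Vector3> vertices;
    std::vector<Ogre::uint32>  indices;   // 3 per triangle
    Ogre::AxisAlignedBox       bounds;    // cheap early-out
};

// Brute force over one object: fine for an occasional mouse click,
// not for thousands of queries per frame.
bool raycast(const CollisionMesh& mesh, const Ogre::Ray& ray, Ogre::Real& nearestT)
{
    if (!ray.intersects(mesh.bounds).first)
        return false;

    bool hit = false;
    nearestT = std::numeric_limits<Ogre::Real>::max();
    for (size_t i = 0; i + 2 < mesh.indices.size(); i += 3)
    {
        std::pair<bool, Ogre::Real> r = Ogre::Math::intersects(
            ray,
            mesh.vertices[mesh.indices[i]],
            mesh.vertices[mesh.indices[i + 1]],
            mesh.vertices[mesh.indices[i + 2]],
            true, true);   // test both sides of the triangle
        if (r.first && r.second < nearestT)
        {
            hit = true;
            nearestT = r.second;
        }
    }
    return hit;
}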
Jabberwocky
OGRE Moderator
Posts: 2819
Joined: Mon Mar 05, 2007 11:17 pm
Location: Canada
x 218

Post by Jabberwocky »

What do you think about Guido's idea of rendering an entity's ID / memory address (rather than colour) to a temporary render target, and using that to pick? Sounds promising to me.
nullsquared
Old One
Posts: 3245
Joined: Tue Apr 24, 2007 8:23 pm
Location: NY, NY, USA
x 11

Post by nullsquared »

Jabberwocky wrote: What do you think about Guido's idea of rendering an entity's ID / memory address (rather than colour) to a temporary render target, and using that to pick? Sounds promising to me.
What about the other 90% of things that are not entities?

Besides - rendering the full scene to an RTT, changing colours per-object, then reading pixels back and attempting to decode them ... well, let's say it won't be any faster than the already slow brute-force triangle iterating method.
Jabberwocky
OGRE Moderator
Posts: 2819
Joined: Mon Mar 05, 2007 11:17 pm
Location: Canada
x 218

Post by Jabberwocky »

nullsquared wrote: What about the other 90% of things that are not entities?
Good point. Although entities would probably cover most of the things a person would care about picking. Maybe IDs could intelligently be assigned to non-entities too.
nullsquared wrote: Besides - rendering the full scene to an RTT
Maybe the user could choose an RTT size, trading some efficiency for precision.
nullsquared wrote: changing colours per-object
Wouldn't this be similar to rendering the depth buffer to a texture, with a simple shader that writes an entity identifier rather than a colour?
nullsquared wrote: then reading pixels back
Dunno if this helps, but usually you'd only need to read 1 pixel back, unless you're supporting some kind of drag-selection.
nullsquared wrote: and attempting to decode them
How exactly do you think decoding the output is going to be expensive?
nullsquared wrote: well, let's say it won't be any faster than the already slow brute-force triangle iterating method.
Not sure if I agree.

I'm not trying to prove you wrong to win an argument or anything. I'm just trying to talk over some of the very reasonable issues you identified.