nullsquared wrote:
What about the other 90% of things that are not entities?

Jabberwocky wrote:
Good point. Although entities would probably cover most of the things a person would care about picking. Maybe IDs could intelligently be assigned to non-entities too.

Lots of edge cases. Take this, for example: an entity is covered by a user-drawn primitive (in immediate mode OpenGL, for example). Obviously, you shouldn't be able to 'pick' that entity since it'd be occluded. But unless you mimic the scene perfectly in this 'picking RTT', you'll be able to pick occluded entities.
Also - storing the entity is nice and all, but what if you want the exact hit point? Then you'd need MRT (multiple render targets): one RT for the entity ID, and the other for the hit point/distance.
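To illustrate what I mean, here's a minimal sketch (not actual engine code) of what the fragment shader for such an MRT pass could look like - the names u_entityId and v_worldPos are made up for the example, and the second target would need a floating-point format to hold a world-space position:

[code]
// C++ host side just holds the shader source; the interesting part is the two outputs.
const char* pickingFragmentShader = R"(
    #version 330 core

    uniform vec4 u_entityId;     // per-entity ID packed into a colour by the application

    in vec3 v_worldPos;          // interpolated world-space position from the vertex shader

    layout(location = 0) out vec4 outId;    // RT 0: entity ID
    layout(location = 1) out vec4 outHit;   // RT 1: hit point (needs a float render target)

    void main()
    {
        outId  = u_entityId;
        outHit = vec4(v_worldPos, 1.0);
    }
)";
[/code]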
nullsquared wrote:
Besides - rendering the full scene to an RTT

Jabberwocky wrote:
Maybe the user could choose an RTT size, trading some efficiency for precision.

Then what's the point of using this method in the first place? I thought the whole point was 'pixel-perfect' picking.
nullsquared wrote:
changing colours per-object

Jabberwocky wrote:
Wouldn't this be similar to rendering the depth buffer to a texture, with a simple shader that writes an entity identifier rather than a colour?

No, because you need to change the colour per-entity. That's different from a depth render, where a single depth material can be used for everything.
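For reference, the per-entity colour would typically come from packing the entity's ID into an RGBA value and passing it to the picking shader for each object - a rough sketch, assuming 32-bit IDs and an RGBA8 render target (the function name is made up):

[code]
#include <array>
#include <cstdint>

// Pack a 32-bit entity ID into a normalised RGBA colour, one byte per channel.
// The picking shader writes this colour out unchanged; readback reverses the shifts.
std::array<float, 4> entityIdToColour(std::uint32_t id)
{
    return {
        ((id >> 0)  & 0xFF) / 255.0f,   // R
        ((id >> 8)  & 0xFF) / 255.0f,   // G
        ((id >> 16) & 0xFF) / 255.0f,   // B
        ((id >> 24) & 0xFF) / 255.0f    // A
    };
}
[/code]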
nullsquared wrote:
then reading pixels back

Jabberwocky wrote:
Dunno if this helps, but usually you'd only need to read 1 pixel back, unless you're supporting some kind of drag-selection.

AFAIK, you'd need to pull the whole texture off the GPU even for that single pixel.
nullsquared wrote:
and attempting to decode them

Jabberwocky wrote:
How exactly do you think decoding the output is going to be expensive?

Alright, that wasn't a great point.
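The decode really is just the inverse of the colour packing - a sketch in plain OpenGL, assuming a 1x1 glReadPixels like the one suggested above (my concern above is whether the driver has to sync more than that one pixel, not the decode itself):

[code]
#include <cstdint>
#include <GL/gl.h>

// Read back the picking RT at (x, y) and unpack the RGBA8 colour into the entity ID.
std::uint32_t readEntityIdAt(int x, int y)
{
    unsigned char px[4] = {0, 0, 0, 0};
    glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, px);

    return  static_cast<std::uint32_t>(px[0])
         | (static_cast<std::uint32_t>(px[1]) << 8)
         | (static_cast<std::uint32_t>(px[2]) << 16)
         | (static_cast<std::uint32_t>(px[3]) << 24);
}
[/code]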
nullsquared wrote:
well, let's say it won't be any faster than the already slow brute-force triangle iterating method.

Jabberwocky wrote:
Not sure if I agree.

See my statements above.