vLOD: High-Fidelity Walkthrough of Large Virtual Environments

Anything and everything that's related to OGRE or the wider graphics field that doesn't fit into the other forums.
boola
Silver Sponsor
Posts: 25
Joined: Mon Feb 07, 2005 12:19 am
Location: Paris, France

vLOD: High-Fidelity Walkthrough of Large Virtual Environments

Post by boola »

I found this research paper yesterday and thought it might be very interesting for any Ogre::SceneManager coder..

Warning!! this is hot stuff :twisted:

Abstract:
[[[
We present visibility computation and data organization algorithms
that enable high-fidelity walkthroughs of large 3D geometric data sets. A novel feature of our walkthrough system is that it performs work proportional only to the required detail in visible geometry at the rendering time. To accomplish this, we use a precomputation phase that efficiently generates per cell vLOD: the geometry visible from a view-region at the right level of detail. We encode changes between neighboring cells' vLODs, which are not required to be memory resident. At the rendering time, we incrementally construct the vLOD for the current view-cell and render it. We have a small CPU and memory requirement for rendering and are able to display models with tens of millions of polygons at interactive frame rates with less than one pixel screen-space deviation and accurate visibility.
]]]
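
To make the "encode changes between neighboring cells' vLODs" idea a bit more concrete, here is a minimal C++ sketch of what a per-cell delta and the incremental update could look like. This is my own guess at the structures, not the paper's actual code:

[[[
// Sketch only: hypothetical structures, not the paper's actual implementation.
#include <cstdint>
#include <set>
#include <vector>

// An object (connected component from the offline partitioning) at one LOD.
struct ObjectLOD {
    uint32_t objectId;
    uint8_t  lodLevel;   // higher = coarser
};

// Ordering so ObjectLOD can live in a std::set.
inline bool operator<(const ObjectLOD& a, const ObjectLOD& b) {
    return a.objectId != b.objectId ? a.objectId < b.objectId
                                    : a.lodLevel < b.lodLevel;
}

// Changes needed to turn one cell's vLOD into a neighbouring cell's vLOD.
struct VLODDelta {
    std::vector<ObjectLOD> add;      // becomes visible / needs finer detail
    std::vector<ObjectLOD> remove;   // becomes hidden / can be dropped
};

// The vLOD of the current view-cell, updated incrementally as the camera moves.
class CurrentVLOD {
public:
    void applyDelta(const VLODDelta& d) {
        for (const ObjectLOD& o : d.remove) visible_.erase(o);
        for (const ObjectLOD& o : d.add)    visible_.insert(o);
    }
    const std::set<ObjectLOD>& visibleSet() const { return visible_; }
private:
    std::set<ObjectLOD> visible_;    // render exactly this set each frame
};
]]]

The nice part is that the per-frame work is proportional to the size of the delta, not to the whole model, which is exactly what the abstract claims.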

hmm... interesting, isn't it?

http://citeseer.ist.psu.edu/719049.html

direct link to the pdf (4.2MB)

PS: I've nothing to do with these guys, but I could help implement or understand it if anyone is interested in creating a new scene manager plugin.

alex

sinbad
OGRE Retired Team Member
Posts: 19265
Joined: Sun Oct 06, 2002 11:19 pm
Location: Guernsey, Channel Islands

Post by sinbad »

Interesting, thanks.

boola
Silver Sponsor
Posts: 25
Joined: Mon Feb 07, 2005 12:19 am
Location: Paris, France

Post by boola »

Here are some points that deserve investigation...

1. Their disk layout algorithm...
quote:
"We have devised a disk layout
scheme that reduces the number of disk accesses by storing
together on the disk objects that tend to be fetched together."

This made me think: this is good for any data that has some spatial coherence, and could be useful for any (static) geometry storage that needs fast read access. PLSM comes to mind... I don't know if PLSM2 implements some sort of data reordering, but it has to be done offline anyway, so it could be an external tool...
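
The paper's actual layout algorithm is smarter than this, but as a rough illustration of the idea, an external offline tool could simply sort the object blocks along a Morton (Z-order) curve of their bounding-box centres before writing them out, so spatially close objects land in nearby disk sectors. The function names and the Morton-order choice here are mine, not the paper's:

[[[
// Rough illustration only: Morton-order sorting is my stand-in for the
// paper's (more sophisticated) disk layout scheme. Assumes coordinates
// lie in [0, worldSize].
#include <algorithm>
#include <cstdint>
#include <vector>

struct ObjectBlock {
    uint32_t id;
    float    cx, cy, cz;        // bounding-box centre
    std::vector<char> bytes;    // serialized geometry
};

// Interleave the bits of the quantized x/y/z coordinates into a Morton key.
static uint64_t mortonKey(float x, float y, float z, float worldSize) {
    uint32_t qx = uint32_t((x / worldSize) * 1023.0f);
    uint32_t qy = uint32_t((y / worldSize) * 1023.0f);
    uint32_t qz = uint32_t((z / worldSize) * 1023.0f);
    uint64_t key = 0;
    for (int bit = 0; bit < 10; ++bit) {
        key |= uint64_t((qx >> bit) & 1) << (3 * bit + 0);
        key |= uint64_t((qy >> bit) & 1) << (3 * bit + 1);
        key |= uint64_t((qz >> bit) & 1) << (3 * bit + 2);
    }
    return key;
}

// Offline: order blocks so that spatially close objects end up in nearby
// disk sectors, turning many small seeks into fewer, larger reads.
void reorderForDisk(std::vector<ObjectBlock>& blocks, float worldSize) {
    std::sort(blocks.begin(), blocks.end(),
              [worldSize](const ObjectBlock& a, const ObjectBlock& b) {
                  return mortonKey(a.cx, a.cy, a.cz, worldSize) <
                         mortonKey(b.cx, b.cy, b.cz, worldSize);
              });
}
]]]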

2. The mesh simplification is really interesting. They take a big mesh and partition it into smaller components (objects) using a connected components algorithm, then generate LODs on these objects.
It means you could just put all your static geometry in one big mesh and the offline tool would partition it (so you can have good culling) and generate LODs (so you can have good detail). This is GREAT 8)
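
For the partitioning step, a union-find over triangles that share vertices is enough to get the connected components. Here is a small sketch (my own code, not theirs) that an offline tool could start from:

[[[
// Minimal sketch (my own code, not the paper's): split one big indexed
// triangle mesh into connected components, where triangles are connected
// through shared vertices.
#include <cstddef>
#include <numeric>
#include <vector>

struct UnionFind {
    std::vector<int> parent;
    explicit UnionFind(std::size_t n) : parent(n) {
        std::iota(parent.begin(), parent.end(), 0);
    }
    int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
    void unite(int a, int b) { parent[find(a)] = find(b); }
};

// indices holds 3 vertex indices per triangle; returns a component id per triangle.
std::vector<int> connectedComponents(const std::vector<int>& indices,
                                     std::size_t vertexCount) {
    UnionFind uf(vertexCount);
    for (std::size_t t = 0; t + 2 < indices.size(); t += 3) {
        uf.unite(indices[t], indices[t + 1]);
        uf.unite(indices[t + 1], indices[t + 2]);
    }
    std::vector<int> componentOfTriangle(indices.size() / 3);
    for (std::size_t t = 0; t < componentOfTriangle.size(); ++t)
        componentOfTriangle[t] = uf.find(indices[3 * t]);
    return componentOfTriangle;
}
]]]

Each resulting component would then get its own LOD chain offline.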

And if you add to that an offline, per-viewpoint choice of LOD and occlusion tests... you have something really cool!!: an offline scene manager for static geometry. :twisted:
And this is logical after all: since it's static, the maximum should be done offline.

All you need to do at render time is smartly swap geometry in from disk, nicely arranged for you by the offline processing.
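
Reusing the structures from the sketch above, a frame could be as small as this (findCell, loadDeltaFromDisk and renderVisibleSet are hypothetical helpers, not anything from the paper or Ogre):

[[[
// Run-time sketch, reusing ObjectLOD/VLODDelta/CurrentVLOD from the earlier sketch.
#include <map>
#include <set>
#include <utility>

int       findCell(const float camPos[3]);                 // which view-cell are we in?
VLODDelta loadDeltaFromDisk(int fromCell, int toCell);      // small, thanks to delta coding
void      renderVisibleSet(const std::set<ObjectLOD>& v);   // submit the geometry

void frame(const float camPos[3], int& currentCell, CurrentVLOD& vlod,
           std::map<std::pair<int, int>, VLODDelta>& prefetched) {
    int cell = findCell(camPos);
    if (cell != currentCell) {
        std::pair<int, int> key(currentCell, cell);
        std::map<std::pair<int, int>, VLODDelta>::iterator it = prefetched.find(key);
        // Use the delta if a background thread already read it from disk,
        // otherwise fall back to a synchronous read.
        vlod.applyDelta(it != prefetched.end() ? it->second
                                               : loadDeltaFromDisk(currentCell, cell));
        currentCell = cell;
        // A real system would now kick off prefetches for the new neighbours.
    }
    renderVisibleSet(vlod.visibleSet());
}
]]]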

:shock:

I've only read it twice, but from what I understand, we're close to the 'long awaited' reconciliation between interior and exterior scene managers.
These guys are geniuses :) they made my day... I hope to see this in action someday!

tuan kuranes
OGRE Retired Moderator
Posts: 2653
Joined: Wed Sep 24, 2003 8:07 am
Location: Haute Garonne, France

Post by tuan kuranes »

Interesting and complex.

But the pre-processing time and storage are a little... scary!

"16 1.53GHzAMD Athlon-based Linux PCs, takes a cumulative total of
128 hours (the individual PCs taking from 8 to 10 hours). This
produces a total of 7GB storage"

That is something!

Perhaps it's different for simpler models (< 10 million polygons), and it would give decent results...

Much more DotSceneOctree-oriented than PLSM, as it works on 3D meshes, not heightfields (and PLSM2 supports deformation, so != static).

Anyway, did you try direct use of occlusion queries on the power plant model?
http://www.delphi3d.net/listfiles.php?category=5

I got results that weren't that bad.
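
For anyone wanting to try the brute-force version of that test in Ogre: render each candidate's bounding box inside a hardware occlusion query and skip the object if no fragments pass. The sketch below follows the Ogre 1.x HardwareOcclusionQuery interface as I remember it, so double-check the method names against OgreHardwareOcclusionQuery.h:

[[[
// Hedged sketch: method names from my memory of the Ogre 1.x interface.
#include <functional>
#include <Ogre.h>

// Create the query once and reuse it, e.g.:
//   Ogre::HardwareOcclusionQuery* q =
//       Ogre::Root::getSingleton().getRenderSystem()->createHardwareOcclusionQuery();
bool isLikelyVisible(Ogre::HardwareOcclusionQuery* query,
                     const std::function<void()>& drawBoundingBoxProxy)
{
    query->beginOcclusionQuery();
    drawBoundingBoxProxy();        // draw a cheap proxy (bounding box), ideally
                                   // with colour/depth writes disabled
    query->endOcclusionQuery();

    unsigned int fragmentsPassed = 0;
    query->pullOcclusionQuery(&fragmentsPassed);   // waits for the GPU result
    return fragmentsPassed > 0;
}
]]]

Since pulling the result stalls until the GPU is done, in practice you'd issue queries for many objects first and collect the results later (or a frame behind).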

boola
Silver Sponsor
Posts: 25
Joined: Mon Feb 07, 2005 12:19 am
Location: Paris, France

Post by boola »

Preprocessing is not so scary. For the 'big city' mesh, for example, preprocessing is much faster.

quote:
"Although the city model is larger (67 million polygons
including LODs), it has only vertical walls used as occluders.
In 2.5D our visibility precomputation is even more efficient
[6]. The city was preprocessed in 14 (cumulative) hours,
generating nealy 90,000 cells for a per-cell threshold of 0.5
million triangles. This results in 1.1 GB of storage for the
compressed deltas"

That 1.1 GB over roughly 90,000 cells works out to only about 12 KB of compressed delta per cell. Anyway, what matters is the result; people will wait for that.

>Much more DotSceneOctree-oriented than PLSM, as it works on 3D meshes, not heightfields (and PLSM2 supports deformation, so != static).

Yes, you're a bit right... but a heightfield produces a 3D mesh in the end, so the question is how their partitioning and LODing would behave on such a mesh.
It may not be well suited for landscapes as we see them now, but it could be adapted.

About the static-only limitation:

"We currently allow no update of detail at the rendering time; this is an nteresting area for future research. It would also be useful to allow some animation or other dynamic changes to the models. We believe small
coherent changes can be handled in our framework.
"

But anyway, their framework is good for any simplification and visibility algorithm:

"We have presented a novel 3D visibility algorithm and a new
technique to combine visibility computation and model
simplification into a common framework. In fact, as more
efficient simplification and visibility algorithms are developed,
they can be easily incorporated into our system. This
framework is particularly well-suited for models with high
occlusion complexity and large spatial extent."
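
That pluggability could boil down to something as simple as two abstract interfaces in the offline preprocessor. The names and signatures below are mine, purely to illustrate the split, not anything from the paper:

[[[
// Hypothetical interfaces (my naming, not the paper's) so the offline
// preprocessor stays agnostic of the concrete algorithms.
#include <vector>

struct Mesh      { /* vertices, indices, material ... */ };
struct ViewCell  { float boxMin[3], boxMax[3]; };             // region of view space
struct ObjectLOD { unsigned objectId; unsigned lodLevel; };   // as in the earlier sketch

class Simplifier {
public:
    virtual ~Simplifier() {}
    // Produce a chain of progressively coarser versions of one object.
    virtual std::vector<Mesh> buildLODs(const Mesh& object) = 0;
};

class VisibilitySolver {
public:
    virtual ~VisibilitySolver() {}
    // For one view-cell, return the objects (at the right LOD) visible from
    // anywhere inside it, within the chosen screen-space error bound.
    virtual std::vector<ObjectLOD> visibleFrom(const ViewCell& cell) = 0;
};
]]]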

>Anyway, did you try direct use of occlusion queries on the power plant model?
No, I'll have a look.

tuan kuranes
OGRE Retired Moderator
Posts: 2653
Joined: Wed Sep 24, 2003 8:07 am
Location: Haute Garonne, France

Post by tuan kuranes »

The paper is very good; I just want to point out the limitations. Agreed, the result is what we want, but this doesn't ease the map/level creation process (lots of build/test/rebuild cycles), and not everybody can afford to wait 100 hours of computing...

A normal game/app environment should lie somewhere between the city and the power plant model, which would mean around 4 GB of data. But that's not so scary on today's PCs (if your game/app has only a few maps).

It is adapted to landscape meshes, and could even give very good results.

But once you use a mesh instead of a heightfield, you'd lose all those optimisations. PLSM uses a heightfield format that enables many, many optimisations (special storage on GPU and disk, vertex buffers, LOD morphing, terrain colorization, special LOD, etc.), and once the new horizon occlusion culling is done, I doubt we'll see a huge fps difference between the two.
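
For example, the LOD morphing it mentions is nearly free on a heightfield because only the height changes between levels. A tiny sketch (not PLSM2's actual code):

[[[
// Sketch of heightfield geomorphing (not PLSM2's actual code): blend each
// vertex's height between its fine-LOD value and the value it collapses to
// at the next coarser LOD, so the LOD switch never pops.
float morphedHeight(float fineHeight,    // height at the current (finer) LOD
                    float coarseHeight,  // height at the next coarser LOD
                    float morphFactor)   // 0 = fully fine, 1 = fully coarse
{
    return fineHeight + morphFactor * (coarseHeight - fineHeight);
}
]]]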

I really think it could greatly enhance the DotSceneOctree manager without too much work, since Ogre implements the HW occlusion query and the paper explains the compression scheme well.
(I mean coding work; some CPUs and GPUs will certainly burn...)
