G'day Xavier,
With regards to what ....
xavier wrote:Hmm...what's the communication strategy you have in mind?
With regards to my implementation of the ROAM algorithm: when updating the terrain mesh I have to ...
A: Merge the redundant triangles that are not in view, and reduce the triangles just outside the view frustum, while letting the page each triangle belongs to know that it has been modified, and by how much (if the change is only minimal, the update waits a little, until the page has been modified enough to be worth worrying about).
B: Then for each chunk that is visible or just outside the view frustum, go through that chunk's pages and increase the detail where needed (same as above, but in reverse). If a chunk is behind the horizon, remove it from consideration for the next run of page checking; if it has just come into view, add it to the consideration queue for the next page run.
C: When that's all done, go through the chunks and see whether they need to be uploaded to the video card.
* If there is enough change to make a difference to the view, go through the pages of the chunk and, for each page that needs it, update that page's batch.
* If a page has gone past the horizon, remove it from the page merge/split queue (freeing its triangles from RAM).
* If a page has just come inside the horizon, put it into the page merge/split queue for the next run through the loop (it will be rebuilt by the split routine).
D: After all of that, the water comes into play, doing pretty much the same as above, but as a layer of the terrain page; that way all the culling is already done. We go through the relevant chunks and pages, compare the depth of the water triangle vertices against the terrain vertices, and cull the triangles that are below a certain depth, so we don't draw water under all of the terrain, just under the low coastal shoreline.
What I was thinking is that the routines for the chunks could monitor a state mutex: when the merge for a chunk is complete, that chunk can start splitting while the other chunks are still being merged; and while the chunks are being split, as soon as a page is completed it can start re-indexing for input to the vertex buffers if needed, while the others are still being split. As I re-index a page I add in the skirts on the fly to the vertex buffers, so I don't have to store a whole lot of extra data in RAM either.
It all seems to work at the moment, but in a timed, incremental, linear fashion; with the use of the work queue in the new Ogre, I reckon I could get this working like a champion. I might be going up the wrong track, but I would love to give it a go.
regards
Alex