PolyVox wrote:Hmmm, I see. I can understand why they do that - working with lattice deformations is common in many 3D modeling packages, so it was probably more natural to them than sculpting the individual voxels. However, it's not something I was really planning to implement because (as you noticed) it gets a lot more complicated.
True - but don't forget, most of the additional complication was imposed by our target platform. For instance, if you can regenerate the geometry without timeslicing, or simply retain the old geometry until the new geometry is ready, you can do your collision with the geometry rather than the raw voxel data - at which point deforming the lattices makes no difference at all.
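To make the "retain the old geometry" idea concrete, here's a rough sketch (names and structure are my own, not from any particular engine): each chunk keeps its current mesh live for collision and rendering while a replacement is accumulated over as many frames as it takes, then swaps in one step.

```python
# Hypothetical sketch: keep a chunk's old mesh active until the rebuilt
# mesh is complete, so collision and rendering never see partial geometry.
class ChunkGeometry:
    def __init__(self, mesh):
        self.active = mesh    # what collision and rendering use right now
        self.pending = None   # rebuild in progress, if any

    def begin_rebuild(self):
        self.pending = []     # triangles accumulate here over many frames

    def add_triangles(self, tris):
        self.pending.extend(tris)

    def finish_rebuild(self):
        # One-step swap: the old mesh stays valid right up to this point.
        self.active, self.pending = self.pending, None
```

Because collision always queries `active`, you can deform lattices, regenerate at leisure, and never collide against half-built data.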
PolyVox wrote:What tools were the artists using for making the voxel models and applying the deformations? Presumably something developed in house? I was instead planning to have objects modeled in a standard package, placed into the world, and then voxelized.
Yes, I wrote an in-house editor. Your plan does have some drawbacks:
1. Voxelised polygonal models simply won't look like the originals. Even if you use prohibitively high voxel densities, the lighting won't look right because you've only ever got the 90/45 angles of voxel corners to work with.
2. You can't hope to animate models converted in this fashion. Even if you store multiple 'frames' of voxel animation, you can't carry damage incurred in one frame across to the rest.
It's a lot more work, but creating a proper editor and supporting lattice deformation - including animated lattice deformation - will make it possible for your users to get the absolute most out of the engine.
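For anyone unfamiliar with lattice deformation, the core of it is tiny. Here's a toy sketch of the simplest possible case (my own illustration, not code from the editor discussed above): a single lattice cell with 8 control points, deforming a point by trilinear interpolation. Real editors use larger lattices and often smoother (Bezier/B-spline) weighting.

```python
# Hypothetical sketch of one-cell lattice deformation via trilinear
# interpolation. Moving a corner drags nearby space (and hence the voxel
# surface points inside it) along with it.
def deform(point, corners):
    """point: (x, y, z) in the unit cube; corners: dict mapping each
    (i, j, k) in {0,1}^3 to the deformed position of that corner."""
    x, y, z = point
    out = [0.0, 0.0, 0.0]
    for (i, j, k), pos in corners.items():
        # Trilinear weight: large when the point is near this corner.
        w = (x if i else 1 - x) * (y if j else 1 - y) * (z if k else 1 - z)
        for axis in range(3):
            out[axis] += w * pos[axis]
    return tuple(out)
```

With all corners left at their rest positions this is the identity map; nudge one corner and everything near it follows smoothly.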
PolyVox wrote:But it sounds like your system does have a limitation - presumably you can't have an object which is a different material inside to outside. So you can't blast through a brick wall and find there is earth behind it? Or shoot a concrete statue and discover it has a steel core?
Sure you can - just embed an indestructible 'core' lattice within the outer one.
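A rough sketch of what "embed a core lattice" amounts to (my own illustration of the idea, not the actual editor's data model): a material query walks the nested lattices from innermost to outermost, so the steel core answers before the concrete shell does.

```python
# Hypothetical sketch: nested lattices for layered materials. Each entry
# is (name, contains_fn, destructible), ordered innermost first.
def material_at(point, lattices):
    """Return the material name at `point`, or None for empty space."""
    for name, contains, _destructible in lattices:
        if contains(point):
            return name
    return None

# A statue: indestructible 1-unit steel core inside a 2-unit concrete shell.
statue = [
    ("steel",    lambda p: max(abs(c) for c in p) <= 1.0, False),
    ("concrete", lambda p: max(abs(c) for c in p) <= 2.0, True),
]
```

Blasting away the concrete just removes the outer lattice's contribution; queries inside the core still report steel.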
PolyVox wrote:Regenerating data over multiple frames is also something I have considered but haven't needed yet. I guess I have the advantage that I'm targeting a considerably more powerful machine. There's a lot more speed I can squeeze out of my marching cubes implementation, but it may turn out that the bottleneck is in uploading the data to the graphics card.
Yeah, you've got a lot more headroom. 32MB isn't much, not when it has to accommodate the whole of the rest of the game's graphics and code as well.
PolyVox wrote:Shadowing is a very interesting issue which I haven't addressed yet. But I'm basically planning to use one of the standard approaches (probably shadow maps) and I'm expecting it to 'just work' on fully dynamic scenes. Is this naive? Or were you precomputing the lighting in order to get higher quality or a certain artistic feel?
No, I'm sure your method will be fine - again, you're targeting a far higher spec. The PS2 didn't even have multitexturing, so we had to use vertex lighting (and hence vertex shadowing).
PolyVox wrote:Actually I have thought about computing the lighting by tracing rays through the voxel data and I think it does have potential. The first article in GPU Gems 3 does a voxel based landscape and uses raytracing in the volume data to compute ambient occlusion. I think it will look nice, but I'm concerned that recomputing the lighting whenever part of the world is destroyed will be too expensive for the CPU.
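For reference, that kind of volume-raycast AO boils down to something like this toy CPU sketch (my own, not the GPU Gems 3 code; `solid` stands in for whatever volume query your engine provides): march a few short rays out from a surface point and see what fraction escape.

```python
# Hypothetical sketch: ambient occlusion by ray-marching a voxel volume.
# solid(p) -> bool is assumed to test whether a world-space point is
# inside solid material.
def ambient_occlusion(origin, directions, solid, steps=8, step_len=1.0):
    unoccluded = 0
    for d in directions:
        for s in range(1, steps + 1):
            p = tuple(o + di * s * step_len for o, di in zip(origin, d))
            if solid(p):
                break          # this ray hit something nearby
        else:
            unoccluded += 1    # ray escaped: open sky in this direction
    return unoccluded / len(directions)  # 1.0 = fully open, 0.0 = buried
```

The per-point cost is small and independent of scene complexity, which is why it only needs recomputing for voxels near the destroyed region rather than the whole world.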
Don't be afraid to timeslice different jobs differently. I timesliced the geometry regeneration in big chunks, with the recalculation of shadows following at a more leisurely pace. You could easily do the same, and simply 'fade' between the old and new when the new is finished.
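The "different budgets for different jobs" idea can be sketched in a few lines (a toy scheduler of my own invention, just to illustrate the shape of it): each job type gets its own per-frame quota, so geometry rebuilds chew through a big slice per frame while shadow recomputation trickles along behind.

```python
# Hypothetical sketch: per-job-type frame budgets for timesliced work.
def run_frame(jobs, budgets):
    """jobs: dict of job-type -> queue of work items (callables);
    budgets: dict of job-type -> max items to run this frame."""
    for kind, queue in jobs.items():
        for _ in range(min(budgets.get(kind, 0), len(queue))):
            queue.pop(0)()  # run the oldest outstanding item

# Geometry is urgent (the player sees holes); shadows can lag and fade in.
frame_budgets = {"geometry": 8, "shadows": 1}
```

Fading between the old and new shadow state when a slow job finally finishes hides the latency almost entirely.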
I'm glad to have been of some assistance.
