Lightwave converter
-
- Gremlin
- Posts: 157
- Joined: Mon Nov 11, 2002 4:21 pm
- x 3
Lightwave converter
Hey there,
My LWO2MESH is now at a stage where pre-triangulated objects can be converted. Object layers are put in submeshes. I have some trouble with materials: in Lightwave every polygon can have its own surface properties, but in Ogre this does not seem possible. What would be the best way of dealing with this problem?
regards,
Dennis
-
- OGRE Retired Team Member
- Posts: 19269
- Joined: Sun Oct 06, 2002 11:19 pm
- Location: Guernsey, Channel Islands
- x 66
You should group triangles by their surface properties, and each group should be a SubMesh. In reality for realtime models it's best to keep the number of different materials per mesh to a minimum because the more you have, the more renderops it takes to render (although OGRE will group separate instances using the same material for you).
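The grouping step described above can be sketched as a simple bucketing pass. This is a minimal illustration, not the converter's actual code; `Tri` and `surfaceId` are hypothetical names standing in for whatever the LWO loader produces.

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <vector>

// One triangle: three vertex indices plus the Lightwave surface it uses.
// (Hypothetical layout; the converter's real structures will differ.)
struct Tri { int v[3]; int surfaceId; };

// Bucket triangles by surface ID -- each bucket becomes one SubMesh.
std::map<int, std::vector<Tri> > groupBySurface(const std::vector<Tri>& tris) {
    std::map<int, std::vector<Tri> > groups;
    for (std::size_t i = 0; i < tris.size(); ++i)
        groups[tris[i].surfaceId].push_back(tris[i]);
    return groups;
}
```

Each resulting bucket maps directly to one SubMesh with one material, which keeps the render-op count down as described.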
-
- Gremlin
- Posts: 157
- Joined: Mon Nov 11, 2002 4:21 pm
- x 3
Grouping tris on surface properties is not a problem. But I figured that a submesh is used to add something to a mesh that can be (sub-/pre-)transformed differently from the main mesh or other submeshes. (Like a car would be built of 5 submeshes: one for the main body and one for each wheel.) Perhaps it's time to rethink or discuss the current 'model' model.
-
- OGRE Retired Team Member
- Posts: 19269
- Joined: Sun Oct 06, 2002 11:19 pm
- Location: Guernsey, Channel Islands
- x 66
No, you've got this wrong. SubMeshes are simply there to allow a single model to have sections with different material properties. The 'car' idea you mention would be made up of different entities, which would be attached to subnodes of the scene graph, so that the wheels had a transform relative to the body of the car, but could pivot in their own local space.
If we did it your way we'd have to have a submesh per wheel, which is a waste because every wheel is the same. Better to have a single mesh representing a wheel, then have 4 entities (instances) of it. They can be moved independently (suspension, steering) but relative to the car. Also, then if you crash you can detach a wheel and send it spinning off into the distance.
For sections that don't need to move independently, they can be part of the same mesh, for example the OGRE head is made up of 4 submeshes because the skin, tusks, eyes and earring all have different material properties.
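The parent-relative transforms described for the car and wheels can be illustrated with a toy scene-node type. This is only a sketch of the idea (translation only); real scene nodes carry rotation and scale as well.

```cpp
#include <cassert>

// Minimal stand-in for a scene-graph node: a position relative to a parent.
struct Node {
    const Node* parent;
    double x, y, z;                        // translation relative to parent

    Node(const Node* p, double px, double py, double pz)
        : parent(p), x(px), y(py), z(pz) {}

    // Accumulate translations up the parent chain to get a world position.
    void worldPos(double& wx, double& wy, double& wz) const {
        wx = x; wy = y; wz = z;
        if (parent) {
            double px, py, pz;
            parent->worldPos(px, py, pz);
            wx += px; wy += py; wz += pz;
        }
    }
};
```

Moving the car node moves every attached wheel automatically, while each wheel can still be offset (or, in a full implementation, rotated) in its own local space.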
-
- Gremlin
- Posts: 157
- Joined: Mon Nov 11, 2002 4:21 pm
- x 3
sinbad wrote: (although OGRE will group separate instances using the same material for you).
Could you elaborate on this some more?
And, considering that a submesh is used to indicate a surface, why does a submesh contain separate geometry at all? (It somehow doesn't quite feel right how the mesh, submesh, geometrydata and material class are related ... And actually submesh could be called 'surface' then?)
This also requires that an option of grouping Lightwave layers into a single mesh or saving them out as separate meshes must be added...
-
- Gremlin
- Posts: 191
- Joined: Sun Dec 01, 2002 12:38 am
- x 3
-
- Gremlin
- Posts: 157
- Joined: Mon Nov 11, 2002 4:21 pm
- x 3
Animations: No. Lightwave has a strict separation of modelling and animating objects. However, the animations in a Lightwave scene (.lws) could probably be quite easily converted into an ogre-animation.
Skeletons: YES! (well, bones actually ...) Newtek is probably the inventor of bone and skeletal animation.
-
- OGRE Retired Team Member
- Posts: 19269
- Joined: Sun Oct 06, 2002 11:19 pm
- Location: Guernsey, Channel Islands
- x 66
dennis wrote: Could you elaborate on this some more? sinbad wrote: (although OGRE will group separate instances using the same material for you).
Ok, in the rendering queue, discrete renderables using the same material are rendered together (not as one renderop, just in sequence). This enables us to avoid expensive render state changes; it's a common approach in hardware-oriented 3D engines. SubMeshes become renderable when an entity requests addition to the render queue.
dennis wrote: And, considering that a submesh is used to indicate a surface, why does a submesh contain separate geometry at all? (It somehow doesn't quite feel right how the mesh, submesh, geometrydata and material class are related ... And actually submesh could be called 'surface' then?)
A SubMesh is a group of triangles. They do not have to be contiguous, only share the same material properties and the same rough locality (ie be part of a single discrete object). So 'surface' is inaccurate: the submesh may actually be lots of fragments on the same model using the same material (think the patches on Quake3 models that have animated bits).
-
- Gremlin
- Posts: 157
- Joined: Mon Nov 11, 2002 4:21 pm
- x 3
Well here's a model (An old standard lightwave model ...) I've converted to a .mesh with my converter: SpaceDestroyer
It has several submeshes, but they all have the same basic white material.
Have fun with it.
... to be continued ...
-
- Gremlin
- Posts: 157
- Joined: Mon Nov 11, 2002 4:21 pm
- x 3
Hmmm ... I thought my conversion routine was good enough to try some larger objects. So I grabbed some big .lwo files on my disk (cowsub.lwo, 1730396 bytes, 46689 points, 92832 polygons and tieinterceptor.lwo, 3836726 bytes, 91288 points, 137082 polygons) and converted them. When I loaded them in skybox_demo all I saw was either junk or nothing at all. I didn't expect this, so I went hunting for a bug ... but I couldn't find any. (Pretty frustrating, eh?) Then I decided to reduce the number of points/polygons per submesh (to 0x2000), because some smaller test files worked fine, and then it worked! Then it hit me ... probably some (or the) 64k buffer size limit! So now the point and polygon limits are set to 0x5555 per submesh. I can understand why these limits exist in hardware ... but in Ogre?!, or worse, in a file?! Is this documented somewhere or is this supposed to be common knowledge? Why doesn't the meshserializer start to yak about it?!
Last edited by dennis on Wed Apr 13, 2005 10:49 am, edited 1 time in total.
-
- OGRE Retired Team Member
- Posts: 19269
- Joined: Sun Oct 06, 2002 11:19 pm
- Location: Guernsey, Channel Islands
- x 66
Yes, there is an indexable limit of 64k on vertices: as you may have noticed, the indexes are an array of ushorts, which are 16-bit. This was previously the index limit in Direct3D, but now indexes can be 32-bit. This is one of the things which will change when we move to compiled vertex / index buffers in the new year.
It's not actually invalid to have a vertex buffer bigger than the indexes can address because you can use a shared buffer bigger than this and use an offset (each chunk of indexes uses a subset of the buffer). Meshes don't use this feature as yet (the BspSceneManager does though) but it's a possible expansion.
For this reason MeshSerializer can't really complain, because all you've done is overflow a ushort index. Your compiler should have complained about this in a warning - did you ignore any compilation warnings or turn them down?
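The 16-bit limit discussed above is easy to state in code. This sketch (hypothetical helper, not part of OGRE) shows how a converter could decide up front how many submeshes a large object must be split into so every index stays addressable.

```cpp
#include <cassert>
#include <cstddef>

// A 16-bit index can address vertices 0..0xFFFF, i.e. 65536 of them.
const std::size_t kMaxIndexableVerts = 0x10000; // 65536

// How many submeshes (each with its own geometry) are needed so that
// no submesh holds more vertices than a ushort index can reach.
std::size_t submeshesNeeded(std::size_t vertexCount) {
    return (vertexCount + kMaxIndexableVerts - 1) / kMaxIndexableVerts;
}
```

For the two models from the earlier post: the 46689-point cow fits in one submesh, while the 91288-point TIE interceptor needs at least two.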
-
- Gremlin
- Posts: 157
- Joined: Mon Nov 11, 2002 4:21 pm
- x 3
-
- OGRE Retired Team Member
- Posts: 19269
- Joined: Sun Oct 06, 2002 11:19 pm
- Location: Guernsey, Channel Islands
- x 66
Only if you use a unique set of 3 vertices per face, which is not normally the case (verts are normally reused by faces unless they disagree in some vertex component along 'seams'). In most typical closed-manifold models with smooth shading (so normals are usually reused on adjoining faces) you typically get twice the number of faces as vertices because of this reuse, so for every 100 verts you get ~200 faces rather than 33. This figure drops the more texture and normal seams you have on the model, and drops even further for open-manifold models (like a flat patch) where there are lots of 'edges' where the vertices are not reused.
Re the size of the 'data' don't get confused between index and vertex data. The referenced data (the verts) can be 0xFFFF * sizeof(Vertex) in bytes and still be indexed by a ushort index value. The sizeof(Vertex) depends on how the buffers are structured and which component you're talking about. If it's the position buffer sizeof(Vertex) == sizeof(Real) * 3 for example, but for vertex colours sizeof(Vertex) == sizeof(ulong). You can also get combined (strided) vertex buffers although Mesh doesn't use these at the moment.
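The "twice as many faces as vertices" rule of thumb above has an exact form for the best case. For a closed, genus-0, fully welded triangle mesh, Euler's formula V - E + F = 2 together with 3F = 2E (every edge borders exactly two triangles) gives F = 2V - 4:

```cpp
// Exact face count for a closed (genus-0), fully welded triangle mesh:
// V - E + F = 2 and 3F = 2E  =>  F = 2V - 4.
long facesForClosedTriMesh(long vertices) {
    return 2 * vertices - 4;
}
```

A triangulated cube (8 welded vertices) has 12 triangles, and an icosahedron (12 vertices) has 20, both matching the formula; texture and normal seams duplicate vertices and pull the real ratio below 2:1, as described above.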
-
- Gremlin
- Posts: 157
- Joined: Mon Nov 11, 2002 4:21 pm
- x 3
Hey Sinbad,
Perhaps it's just easier if you tell me the maximum number of points and polygons, given that sizeof(point)==12 and sizeof(polygon)==6.
The maximum number of polygons was experimentally determined at 0x5555. 0x5556 resulted in junk. This implies a polygon buffer of 128K.
The maximum number of points I got out of an object in combination with a maximum number of polygons (0x5555) is 13719. That's in the cow mesh and that works. This implies that the point buffer is at least more than 13719 * 12 bytes = 164628 bytes. However, putting all 46689 points in the shared pointbuffer (ogreMesh->sharedGeometry->pVertices) doesn't work. ogreSubMesh->useSharedVertices is set to true in each submesh. The number of points is still less than the 0xFFFF points you mentioned the buffer could hold. Please give me a clue on why this should work but doesn't.
-
- OGRE Retired Team Member
- Posts: 19269
- Joined: Sun Oct 06, 2002 11:19 pm
- Location: Guernsey, Channel Islands
- x 66
It's not a case of buffer size, it's a case of how much you can address with an index value. The index is multiplied by the size of the vertex data to physically address it, don't worry about this though as it's internal to the graphics API.
The maximum number of addressable vertices from the index buffer is 65536. There is (theoretically) no limit on the number of faces because you can use the same vertex for multiple faces (this is what the indexes are used for). As a guide, with a closed manifold model you can expect to get about 128k useful polys in a single submesh if you max out the vertex list to 64k. By using multiple submeshes with their own geometry there is effectively no limit to the number of polys in a single mesh.
If you are generating 3 unique vertices per face then this would explain why you're running out of addressable vertices at 0x5555 faces, because 0x5555 * 3 = 0xFFFF as you pointed out earlier.
You really shouldn't be generating unique vertices for every face; you should try to reuse them. As discussed in another thread you sometimes have to duplicate a vertex if the face disagrees with its neighbours about texture coordinates or normals at that vertex. However the norm is to be able to reuse vertices more often than not. You will likely have to do some conversion from the modeller's format to do this; modellers typically don't give a monkey's about being efficient with their polygon data because they're not designed for realtime usage. If you are simply parsing the face list and generating 3 vertices every face, you are generating more vertices than you need and should try to collate a master unique vertex list first, then reference it from your face list. You'll find yourself saving enormous numbers of vertices that way.
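The "master unique vertex list" pass described above is essentially vertex welding. A minimal sketch (hypothetical `Vert` layout with position plus one UV channel): vertices merge only when every component matches, so the legitimate seam duplicates sinbad mentions survive.

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <vector>

struct Vert {
    float x, y, z, u, v;
    bool operator<(const Vert& o) const {       // ordering for the map key
        if (x != o.x) return x < o.x;
        if (y != o.y) return y < o.y;
        if (z != o.z) return z < o.z;
        if (u != o.u) return u < o.u;
        return v < o.v;
    }
};

// Collate a master list of unique vertices from a raw stream of three
// vertices per face, and rewrite the faces as indices into that list.
void weldVertices(const std::vector<Vert>& raw,
                  std::vector<Vert>& unique,
                  std::vector<unsigned short>& indices) {
    std::map<Vert, unsigned short> seen;
    for (std::size_t i = 0; i < raw.size(); ++i) {
        std::map<Vert, unsigned short>::iterator it = seen.find(raw[i]);
        if (it == seen.end()) {
            it = seen.insert(std::make_pair(
                raw[i], (unsigned short)unique.size())).first;
            unique.push_back(raw[i]);
        }
        indices.push_back(it->second);
    }
}
```

Two triangles forming a quad drop from 6 raw vertices to 4 unique ones; on a large closed mesh the savings approach the 2:1 ratio discussed earlier.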
-
- Gnoblar
- Posts: 8
- Joined: Fri Dec 27, 2002 1:00 pm
- Location: Switzerland
Hey Dennis, some feedback here: I'm a Lightwave user too (V7.5) and some blokes from my team are using LW too. So your work is much anticipated here!
Say when do you expect to support animation in your converter?
Greetz and thanx in advance, pat le cat
"What the heck do I need to test my code for? It compiled man!" (an anonymous purist)
-
- Gremlin
- Posts: 157
- Joined: Mon Nov 11, 2002 4:21 pm
- x 3
Well I've decided to make a 'new year release' of the converter.
lwo2mesh.exe v0.7 (Win32)
You probably need an Ogre version later than 99f (currently only CVS) to get it to work. (I'm thinking of a way to make it completely stand-alone though.)
Sinbad, could you host this file and perhaps the images as well? (and rearrange the links of course ...) They will be deleted from my site when I need the space for something else.
Update: this version is no longer available
Last edited by dennis on Thu Feb 06, 2003 3:46 pm, edited 1 time in total.
-
- OGRE Retired Team Member
- Posts: 19269
- Joined: Sun Oct 06, 2002 11:19 pm
- Location: Guernsey, Channel Islands
- x 66
-
- Gnoblar
- Posts: 8
- Joined: Fri Dec 27, 2002 1:00 pm
- Location: Switzerland
Oh and Dennis please, do include a doc explaining what can be exported from Lightwave objects and what version of LW you support.
Like if one can export a complete map or only character objects. What about textures/materials, animation etc..
Thanx already, pat le blissed
"What the heck do I need to test my code for? It compiled man!" (an anonymous purist)
-
- Gremlin
- Posts: 157
- Joined: Mon Nov 11, 2002 4:21 pm
- x 3
sinbad wrote: Ok, but I can't put this in CVS without the source code, are you planning on releasing it? Please package this up with whatever dependent dlls you're using (your OgreMain.dll, stlportdll etc) and the docs in one archive and I'll stick it in the downloads area.
Sure, I will ... when I release 1.0. (At least the packaging ...)
patlecat wrote: Oh and Dennis please, do include a doc explaining what can be exported from Lightwave objects and what version of LW you support. Like if one can export a complete map or only character objects. What about textures/materials, animation etc.. Thanx already, pat le blissed
Eventually anything that can be exported from an .lwo and imported into a .mesh will be. It supports ALL versions of the .lwo format. The loading code is based on Ernie Wright's source (lwsdk). It's completely ceeplusplussified and uses stlport vectors (keeping type info intact) and streams instead of the linked-list and plain FILE* code Ernie wrote. This one even supports LWLO, which is not in Ernie's implementation. It also uses a simple triangulation routine, which is far from perfect but gets the object converted. (Actually it triangulates quite badly, so if anyone can get me a better routine ..., oh and while we're at it: I need a decent tri-stripping routine also.)
To do:
- textures
- bones
- tri-stripping
- handle feedback
As far as I can tell animation is not in a .lwo, since Lightwave enforces a strict separation of modelling and animating.
Last edited by dennis on Sat Jan 04, 2003 7:54 pm, edited 1 time in total.
-
- Gnoblar
- Posts: 8
- Joined: Fri Dec 27, 2002 1:00 pm
- Location: Switzerland
-
- Gremlin
- Posts: 157
- Joined: Mon Nov 11, 2002 4:21 pm
- x 3
I have the following problem:
Here's a cube consisting of 2 submeshes: the 'red' one {{A,B,C,D},{B,F,G,C},{F,G,H,E}} and the 'blue' one {{A,B,F,E},{A,E,H,D},{D,C,G,H}}. Now I have two textures, one on the 'red' submesh and one on the 'blue' submesh. This means that every point in the cube has 2 texture coordinates. When I use shared geometry only one of the two textures shows correctly. When I don't use shared geometry, none of the textures show correctly. How do I get 2 (or more) texture map coordinates attached to 1 point in Ogre?
Here are the actual 3D and UV coordinates.
The texture coordinates map into this picture:
1, 2 and 3 are mapped over the 'blue' submesh {{A,B,F,E},{A,E,H,D},{D,C,G,H}} and 4, 5 and 6 are mapped over the 'red' submesh {{A,B,C,D},{B,F,G,C},{F,G,H,E}}.
Code: Select all
TXUV "texture1" dim 2 nverts 8 vmad (no)
point 0 (-5, -5, -5) 0.333333 0.5
point 1 (5, -5, -5) 0.66666 0.5
point 2 (-5, 5, -5) 0.333333 1
point 3 (5, 5, -5) 0.66666 1
point 4 (-5, -5, 5) 0 0.5
point 5 (5, -5, 5) 1 0.5
point 6 (-5, 5, 5) 0 1
point 7 (5, 5, 5) 1 1
TXUV "texture2" dim 2 nverts 8 vmad (no)
point 0 (-5, -5, -5) 1 0.5
point 1 (5, -5, -5) 1 0
point 2 (-5, 5, -5) 0 0.5
point 3 (5, 5, -5) 0 0
point 4 (-5, -5, 5) 0.66666 0.5
point 5 (5, -5, 5) 0.66666 0
point 6 (-5, 5, 5) 0.33333 0.5
point 7 (5, 5, 5) 0.33333 0
Last edited by dennis on Wed Apr 13, 2005 10:52 am, edited 1 time in total.
-
- OGRE Retired Team Member
- Posts: 19269
- Joined: Sun Oct 06, 2002 11:19 pm
- Location: Guernsey, Channel Islands
- x 66
This is an example of what we talked about earlier: you need to duplicate vertices where some elements of the data at the point don't agree from face to face.
Looking at your data there, none of your coincident points share the same texture coordinate data, so you need 16 vertices to make up that model. You're fortunate in that your texture has clearly been designed to be wrapped around a cube, so some edges of the cube do share the same texture coordinates; if it wasn't you could need up to 4 unique vertices per cube face (i.e. 24 verts).
Whether you use one shared buffer with 16 vertices, and reference only 8 at a time from each SubMesh, or use a dedicated buffer of 8 vertices per Submesh is entirely up to you.
The short answer to your question "How do I get 2 (or more) texturemap coordinates attached to 1 point in Ogre?" is 'you don't'. Instead you create 2 vertices at the same location with the 2 different texture coordinates. As I've said before this is not OGRE's limitation; it's the way all realtime graphics APIs work, but not generally how modellers work (they like to store only unique data so it's easier to modify, and convert for the realtime API on the fly).
I don't want to confuse you here, but just in case you notice that geometry has an array of texture coordinate pointers, here's why: it IS possible to have multiple channels of texture coordinates for every vertex, but this is used for multitexturing where, say, you want a detail texture and a base texture repeated at different frequencies. It's not for texture coord discrepancies on a single texture layer though; it's wasteful to do it that way because you'd have 2 texture coordinates for every vertex and only ever use 1.
HTH
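The duplication described above can be sketched for the cube example: pair the 8 shared points with one submesh's UV table, once per submesh. The type names here are hypothetical, but the arithmetic matches the post: 8 + 8 = 16 vertices for the two-submesh cube.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Point3   { float x, y, z; };
struct TexUV    { float u, v; };
struct FullVert { Point3 p; TexUV t; };

// Emit one submesh's vertices by pairing the shared points with that
// submesh's own UV table. Calling this once per submesh duplicates each
// cube corner, giving every copy its own texture coordinates.
std::vector<FullVert> buildSubmeshVerts(const std::vector<Point3>& pts,
                                        const std::vector<TexUV>& uvs) {
    std::vector<FullVert> out;
    for (std::size_t i = 0; i < pts.size(); ++i) {
        FullVert v = { pts[i], uvs[i] };
        out.push_back(v);
    }
    return out;
}
```

Whether the 16 resulting vertices go into one shared buffer or two dedicated 8-vertex buffers is then just the choice sinbad describes.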
-
- Gremlin
- Posts: 157
- Joined: Mon Nov 11, 2002 4:21 pm
- x 3
sinbad wrote: You're fortunate in that your texture has clearly been designed to be wrapped around a cube, so some edges of the cube do share the same texture coordinates; if it wasn't you could need up to 4 unique vertices per cube face (i.e. 24 verts).
That was by design of course.
When not using any shared geometry I got the texturing to work. (Although I'm now thinking about marking this up as a good study project, rethinking my strategies and scratching a large piece of code.) However, it seems that the texture colours are blended with the colours already in the ambient material colour.
How do I turn that off when I only want to show the real texture colours?
A piece of code to show how I copy some surface information:
Code: Select all
ogreMat->setAmbient(surface->color.rgb[0], surface->color.rgb[1], surface->color.rgb[2]);
ogreMat->setDiffuse(surface->diffuse.val, surface->diffuse.val, surface->diffuse.val);
ogreMat->setSpecular(surface->specularity.val, surface->specularity.val, surface->specularity.val);
ogreMat->setShininess(surface->glossiness.val);
Should I multiply the rgb values with the diffuse value (0..1) and the specularity value (0..1) to get the parameters for setDiffuse and SetSpecular?
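The multiplication proposed in the question would look like the sketch below. This is only an illustration of that proposal, not a documented Lightwave-to-OGRE mapping; whether it matches Lightwave's intent for every surface is exactly what the post is asking.

```cpp
#include <cassert>
#include <cmath>

struct Rgb { float r, g, b; };

// Modulate a surface colour by a scalar amount in [0..1], e.g. scale the
// surface RGB by the diffuse value before calling setDiffuse.
// (Hedged sketch of the question's proposal, not a confirmed convention.)
Rgb modulate(const Rgb& c, float amount) {
    Rgb out = { c.r * amount, c.g * amount, c.b * amount };
    return out;
}
```

Under this scheme the diffuse colour would be `modulate(surfaceRgb, diffuseVal)` and the specular colour `modulate(surfaceRgb, specularityVal)` (or a grey built from specularity alone).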