OK, so here's the baked lighting fixed into its proper position:
And here's the baked texture re-applied to the original mesh.
The light has been turned off here, so it's only the baked texture being displayed. Notice the seams, a problem that will have to be addressed, perhaps with a blur filter applied to the image as a post-process shader on the resulting baked texture.
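As an aside, one way to attack those seams (instead of, or in addition to, the blur) would be a simple CPU dilation pass over the baked image, flooding the colors of lit texels outward into the untouched texels around each UV chart so that bilinear filtering at chart edges stops sampling the background. Here's a minimal sketch, assuming the bake leaves alpha = 0 on texels no triangle touched and texels are packed as 0xAARRGGBB - both assumptions about a specific bake setup, not a given:

```cpp
#include <cstdint>
#include <vector>

// Dilation pass over a baked lightmap: every uncovered texel (alpha == 0)
// copies the color of an adjacent covered texel. Each call grows the
// gutter by one texel; run it a few times for a wider safety margin.
void dilateLightmap(std::vector<uint32_t>& texels, int w, int h)
{
    std::vector<uint32_t> src = texels;   // read from a copy, write in place
    for (int y = 0; y < h; ++y)
    for (int x = 0; x < w; ++x)
    {
        if (src[y * w + x] >> 24)         // already covered, leave it alone
            continue;
        static const int dx[4] = { 1, -1, 0, 0 };
        static const int dy[4] = { 0, 0, 1, -1 };
        for (int i = 0; i < 4; ++i)
        {
            int nx = x + dx[i], ny = y + dy[i];
            if (nx < 0 || ny < 0 || nx >= w || ny >= h)
                continue;
            uint32_t n = src[ny * w + nx];
            if (n >> 24)                  // found a covered neighbor
            {
                texels[y * w + x] = n;    // bleed its color outward
                break;
            }
        }
    }
}
```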
Next I'm going to update the shader so it looks a bit more impressive with per pixel lighting instead of per vertex, and then add in texture shadows.
A few tricky things I need to work on:
1. Depth texture shadows - need to look at Ogre and get them working.
2. Fix the seam problems.
3. Handling multiple entities, including the same entity multiple times in a scene, and entities with multiple materials. I have a cool idea, but here's the not-so-cool idea first:
The easiest way to do it is to have a lightmap per unique entity, so if say a chair is in the scene 4 times, you generate four lightmaps. Lightmap size can be chosen based on the size of the object, so only 64x64 is needed for small objects, etc. (there's a quick sizing sketch after this list). This idea is easy, but I think it's a bad approach.
Another possible approach is to start merging multiple meshes together into groups that share a lightmap, and here's where the cool idea comes in. I want this process to be fast, so I don't want to be merging vertex buffers together or anything like that, and especially not trying to calculate new unwrapped UVs for a group of merged meshes. I'd like to use existing unwrapped UV coordinates which were precalculated, yet have a single lightmap cover multiple meshes, including duplicates of the same mesh.

Solution - each mesh keeps its own unwrapped UVs, which are precalculated (either by your 3D modelling program, or automatically using the API built into DirectX). Each mesh will have a target size - the amount of room it should be allocated in a lightmap (such as 64x64 for a small mesh, and 512x512 for a larger one). Since the precalculated UVs always fit into a square, you can actually use a packing algorithm to pack these squares into a larger lightmap such as 1024x1024.

So, for example, say there's a couple of chairs and a table and some walls that need to be lightmapped - determine where they'd fit into a 1024x1024 lightmap, and store an offset and scale value for each mesh that tells it where it goes in that lightmap. These parameters can be passed to each entity, and the shader can find the actual UV coordinates by offsetting into the larger lightmap (see the packing sketch after this list). Voila - the packing algo should be able to run very quickly - Tuan is already using something like it for his imposter system.
4. Cubemap shadow maps for point lights, which currently aren't built into Ogre. This requires rendering 6 shadow textures per point light into a cubemap and using that cubemap for the shadows in the shader (a rough setup sketch follows this list).
5. Find a very fast ambient occlusion technique that can be baked into the lighting, won't require me to merge the meshes into one big vertex buffer, and is compatible with the packing idea above. There's screen space AO, but that won't cut it in this case since it can't be baked, as far as I know. NVidia had a Dynamic Ambient Occlusion sample that I think might be useful and includes source, but it worked at a per-vertex level and would take some work to modify to be per-texel instead, if that's even possible. The NVidia example even offered an idea for how it could be re-used to perform simple but convincing radiosity quickly.
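To make idea 3's size selection concrete, here's the kind of trivial helper I have in mind for picking a lightmap size from an object's bounds - the thresholds are made-up numbers purely for illustration:

```cpp
// Pick a power-of-two lightmap size from an object's bounding radius.
// The radius thresholds here are illustrative, not tuned values.
int chooseLightmapSize(float boundingRadius)
{
    if (boundingRadius < 1.0f)  return 64;   // small props: chairs, lamps
    if (boundingRadius < 4.0f)  return 128;
    if (boundingRadius < 16.0f) return 256;
    return 512;                              // big stuff: walls, floors
}
```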
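And here's a minimal sketch of the packing idea itself. Since every mesh's unwrap fits into a square, even a dumb shelf packer (sort largest-first, fill the atlas row by row) wastes little space, though a real implementation could do better. All the names and the structure here are mine, just for illustration:

```cpp
#include <algorithm>
#include <vector>

// One mesh's allocation inside the shared lightmap atlas. The shader
// remaps the mesh's precalculated unwrap with: atlasUV = uv * scale + offset.
struct AtlasEntry
{
    int   size;             // requested square size in texels, e.g. 64, 512
    float offsetU, offsetV; // normalized position inside the atlas
    float scale;            // normalized size inside the atlas
};

// Shelf packer: largest squares first, filled left to right, row by row.
// Returns false if everything doesn't fit in an atlasSize^2 lightmap.
bool packLightmapAtlas(std::vector<AtlasEntry>& entries, int atlasSize)
{
    std::sort(entries.begin(), entries.end(),
              [](const AtlasEntry& a, const AtlasEntry& b)
              { return a.size > b.size; });

    int x = 0, y = 0, shelfHeight = 0;
    for (AtlasEntry& e : entries)
    {
        if (x + e.size > atlasSize)   // row full, start a new shelf
        {
            x = 0;
            y += shelfHeight;
            shelfHeight = 0;
        }
        if (y + e.size > atlasSize)   // out of room in this atlas
            return false;

        e.offsetU = x / float(atlasSize);
        e.offsetV = y / float(atlasSize);
        e.scale   = e.size / float(atlasSize);

        x += e.size;
        shelfHeight = std::max(shelfHeight, e.size);
    }
    return true;
}
```

Each entity then just gets its (offsetU, offsetV, scale) passed in as shader constants and remaps its own precalculated UVs before the lightmap fetch.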
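For idea 4, the Ogre-side setup would look roughly like this - a sketch against the Ogre 1.x API, with the depth-writing material and the shader-side cubemap lookup left out, and names like renderPointLightShadowCube being placeholders of mine:

```cpp
#include <Ogre.h>

// Render six shadow textures for one point light into a cubemap.
// PF_FLOAT32_R gives one float channel per texel to store depth in.
void renderPointLightShadowCube(Ogre::SceneManager* sceneMgr,
                                const Ogre::Vector3& lightPos)
{
    using namespace Ogre;

    TexturePtr cube = TextureManager::getSingleton().createManual(
        "PointShadowCube", ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME,
        TEX_TYPE_CUBE_MAP, 512, 512, 0,              // 512^2 faces, no mips
        PF_FLOAT32_R, TU_RENDERTARGET);

    Camera* cam = sceneMgr->createCamera("PointShadowCam");
    cam->setFOVy(Degree(90));        // each face covers a 90 degree frustum
    cam->setAspectRatio(1.0f);
    cam->setFixedYawAxis(false);     // otherwise the +Y/-Y faces misbehave
    cam->setNearClipDistance(0.1f);
    cam->setPosition(lightPos);

    const Vector3 dirs[6] = {
        Vector3::UNIT_X, Vector3::NEGATIVE_UNIT_X,
        Vector3::UNIT_Y, Vector3::NEGATIVE_UNIT_Y,
        Vector3::UNIT_Z, Vector3::NEGATIVE_UNIT_Z };

    for (size_t face = 0; face < 6; ++face)
    {
        RenderTarget* rt = cube->getBuffer(face)->getRenderTarget();
        cam->setDirection(dirs[face]);
        Viewport* vp = rt->addViewport(cam);
        vp->setClearEveryFrame(true);
        vp->setBackgroundColour(ColourValue::White); // "max distance" clear
        vp->setOverlaysEnabled(false);
        rt->update();                                // render this face now
    }
}
```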
So... in general I think this idea of baking lighting using realtime techniques is pretty awesome. At the very least, it could be used for previewing and tweaking the lighting in scenes before sending them off to a true lightmapping process that could take days to run. At the most, it could be used at runtime to generate baked lighting on the fly, without having to distribute lightmaps with the application at all, also allowing far more shadow casting lights on static geometry than a realtime solution would allow.
Some other things have crossed my mind - it'd be easy to generate directional lightmaps a la Half-Life 2. Also, it'd be possible to have per-light lightmaps as an alternative to directional lightmaps. Basically, you could bake up to 4 lights into what would essentially be a shadow map, with each channel of RGBA storing a light's shadow mask, and then use traditional per pixel normal maps, specular, etc. per light (a small sketch of the channel packing is below).
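Just to make that per-light idea concrete, the bake side is basically channel packing - a tiny sketch with names of my own, assuming masks in the 0-1 range and an 0xAARRGGBB texel layout:

```cpp
#include <cstdint>

// Pack four per-light shadow masks (each 0..1) into one RGBA8 texel:
// light 0 -> R, light 1 -> G, light 2 -> B, light 3 -> A (0xAARRGGBB).
// The lighting shader evaluates each light normally (normal map, specular,
// ...) and multiplies light i by the mask it reads from channel i.
uint32_t packLightMasks(float m0, float m1, float m2, float m3)
{
    auto toByte = [](float m) -> uint32_t
    {
        if (m < 0.0f) m = 0.0f;      // clamp, then quantize to 8 bits
        if (m > 1.0f) m = 1.0f;
        return uint32_t(m * 255.0f + 0.5f);
    };
    return (toByte(m3) << 24) | (toByte(m0) << 16)
         | (toByte(m1) << 8)  |  toByte(m2);
}
```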
Any help or feedback on these ideas is appreciated.