Light baking

Falagard
OGRE Retired Moderator
Posts: 2060
Joined: Thu Feb 26, 2004 12:11 am
Location: Toronto, Canada
x 3

Light baking

Post by Falagard »

I had another idea :-)

If you have a unique set of UV coordinates as a second UV set, you can bake lighting out directly to a texture using render-to-texture. The general idea is described here:

http://www.cs.washington.edu/homes/dgol ... e_Oven.htm

Basically, you have a mesh with unique UVs (flattened UVs that could be used for lightmapping, for example). Set up a render target and a camera with an orthographic projection. Then, in your vertex shader, instead of using each vertex's position you use the unique UVs as the position (output texCoord.x and texCoord.y as the position, with the z position being irrelevant because of the orthographic projection). This renders a texture that matches the unique UVs... essentially a flattened version of your mesh.

Let's assume simple vertex lighting for a moment, with the lighting calculated per vertex in the vertex shader. If you use the original position and normal of each vertex and N dot L against your lights to determine each vert's lighting, but output the position using the UVs, you'll be rendering a baked version of the lighting to a texture. Apply that baked texture to your mesh using the second set of UVs and you get baked per-vertex lighting. Pretty cool. Per-pixel lighting in a pixel shader works the same way, though you're obviously dependent on the resolution of the texture you're baking to.
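In shader terms, a minimal vertex program for this might look something like the following (just an untested sketch; the parameter names and the single directional light are placeholders):

Code: Select all

void LightmapBakeSketchVP(
  float4 position : POSITION,      // original model-space position
  float3 normal   : NORMAL,
  float2 uvUnique : TEXCOORD1,     // the unique/unwrapped UV set

  // outputs
  out float4 oPosition : POSITION,
  out float4 oColor : COLOR,

  uniform float4x4 orthoViewProj,  // ortho camera framed on the 0..1 UV square
  uniform float3 lightDir,         // direction towards the light, model space (assumed)
  uniform float4 lightDiffuse)
{
  // Lighting is computed from the *real* geometry...
  float NdotL = saturate(dot(normalize(normal), normalize(lightDir)));
  oColor = lightDiffuse * NdotL;

  // ...but the vertex is emitted at its unique UV location, so the result
  // lands in the flattened (lightmap) layout instead of on screen.
  oPosition = mul(orthoViewProj, float4(uvUnique, 0.0, 1.0));
}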

So... here's the dilemma: if you also use depth shadow mapping, will you be able to bake that out properly as well?
sinbad
OGRE Retired Team Member
Posts: 19269
Joined: Sun Oct 06, 2002 11:19 pm
Location: Guernsey, Channel Islands
x 66

Post by sinbad »

Yes, you can do this; again, it's just a space conversion. The shadow texture matrix you get will move anything from world space into the coordinate frame of the shadow texture, and the comparisons will still be valid.

Of course, your lighting will only look as good as real-time lighting would have done (unless you do something clever afterwards, like using that initial mapping as a starting point to render bounce lighting), but you do then save all the calculation at runtime.
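In shader terms the bake vertex program would do roughly this (a sketch only, with placeholder names; in Ogre the shadow matrix would come from the texture_viewproj_matrix auto param):

Code: Select all

void LightmapBakeShadowVP(
  float4 position : POSITION,
  float2 uvUnique : TEXCOORD1,

  out float4 oPosition : POSITION,
  out float4 oShadowUV : TEXCOORD0,

  uniform float4x4 world,
  uniform float4x4 orthoViewProj,
  uniform float4x4 texViewProj)    // world space -> shadow texture space
{
  // The shadow comparison is still driven by the real world-space position...
  float4 worldPos = mul(world, position);
  oShadowUV = mul(texViewProj, worldPos);

  // ...while the vertex itself is written out at its lightmap UV location.
  oPosition = mul(orthoViewProj, float4(uvUnique, 0.0, 1.0));
}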
Falagard
OGRE Retired Moderator
Posts: 2060
Joined: Thu Feb 26, 2004 12:11 am
Location: Toronto, Canada
x 3

Post by Falagard »

Cool, thanks, that's what I thought.

I'm thinking of using it for fast lightmap calculations (direct lighting + ambient occlusion looks pretty decent, no radiosity), but I'm also considering its application for runtime generation of lightmaps.

Just something I'm looking into. It might end up being the most awesome thing ever, or a total waste of time.
Kojack
OGRE Moderator
Posts: 7157
Joined: Sun Jan 25, 2004 7:35 am
Location: Brisbane, Australia
x 535

Post by Kojack »

I've got an idea for baked lighting, but I need a Shader Model 4 card to display the result (I need the geometry shader's per-triangle parameter). It would mean lightmap generation with no UV unwrapping needed for the pre-calc phase.
Of course I'd have to write my own lightmapper app to generate the data for it (it's been a while since I wrote a raytracer).
Falagard
OGRE Retired Moderator
Posts: 2060
Joined: Thu Feb 26, 2004 12:11 am
Location: Toronto, Canada
x 3

Post by Falagard »

Kojack wrote:I've got an idea for baked lighting, but I need a Shader Model 4 card to display the result (I need the geometry shader's per-triangle parameter). It would mean lightmap generation with no UV unwrapping needed for the pre-calc phase.
Of course I'd have to write my own lightmapper app to generate the data for it (it's been a while since I wrote a raytracer).
Cool. Any details?

Do you understand that what I described above is just baking dynamic lighting to a texture without a raytracer? I'm sure you do, but just checking.
SpaceDude
Bronze Sponsor
Posts: 822
Joined: Thu Feb 02, 2006 1:49 pm
Location: Nottingham, UK
x 3

Post by SpaceDude »

Falagard wrote:I'm thinking of using it for fast lightmap calculations (direct lighting + ambient occlusion looks pretty decent, no radiosity), but I'm also considering its application for runtime generation of lightmaps.

Just something I'm looking into. It might end up being the most awesome thing ever, or a total waste of time.
That's cool! I don't know if you saw my wiki article on in-game light mapping:

http://www.ogre3d.org/wiki/index.php/Light_mapping

It uses ray tracing to draw the lightmaps rather than render-to-texture, and the downside is that it's very slow for large scenes, especially when you up the resolution of the lightmap texture. I used it in my game (http://www.konggame.co.uk/). I'd definitely be interested in implementing your idea as it should be a lot faster. I'm not sure I understood your initial post entirely, but I'll have another read later when I have more time.

Please keep us posted on your progress.

PS: About shadow quality, would it be possible to apply some blurring to the RTT after it's been calculated to reduce the hard edges on the shadows?
Falagard
OGRE Retired Moderator
Posts: 2060
Joined: Thu Feb 26, 2004 12:11 am
Location: Toronto, Canada
x 3

Post by Falagard »

Thanks!

Yes, I did see your article and tried out your game, including the level builder and lightmapping. Your lightmapping is pretty fast for small scenes, so that was impressive.

It is a bit tricky to understand how it works initially, but you just have to give it some thought to figure it out.

When you have unwrapped UV coordinates, they map each vertex to a position on a 2D texture, right? A 2D texture is just like a flat plane. A vertex shader lets you modify the position of each vertex as you're rendering. If you set up an orthographic camera and, in your vertex shader, ignore the actual position of each vertex and instead use UV.x, UV.y, and 0 as your xyz position, it will render a flattened version of your mesh that matches your original unwrapped UV coordinates. When calculating the lighting you use the unmodified original position of each vertex, so the lighting is correct, just baked into a flat texture that can then be applied directly to the mesh.

Not sure if that was any more clear though :-D

About shadow quality, I'll be looking into that as I go, but the basic idea is to blur the shadow before it's baked. I'm guessing I'll also have to do some post-processing on the RTT.
SpaceDude
Bronze Sponsor
Posts: 822
Joined: Thu Feb 02, 2006 1:49 pm
Location: Nottingham, UK
x 3

Post by SpaceDude »

Ok, I think I get it. So your camera frustum would be something like
x: 0 to 1
y: 0 to 1
z: -1 to 1

And you would need one render operation per sub-entity in order to bake each of your entities.

And I guess you need to specify a culling frustum different from your camera frustum. I think I read somewhere that you can do that with Ogre. The camera frustum is faked, while the culling frustum should encompass the entity so that it is actually drawn (maybe setting it equal to the entity's bounding box would be sufficient).
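Something like this on the Ogre side, perhaps (only a sketch; the texture size, names, clip distances and the culling-frustum handling are guesses on my part):

Code: Select all

// Render target to bake into (size/name are placeholders)
Ogre::TexturePtr bakeTex = Ogre::TextureManager::getSingleton().createManual(
    "LightmapBakeRTT", Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME,
    Ogre::TEX_TYPE_2D, 512, 512, 0, Ogre::PF_R8G8B8, Ogre::TU_RENDERTARGET);

// Ortho camera framed on the 0..1 UV square
Ogre::Camera* bakeCam = sceneMgr->createCamera("LightmapBakeCam");
bakeCam->setProjectionType(Ogre::PT_ORTHOGRAPHIC);
bakeCam->setOrthoWindow(1.0f, 1.0f);   // if your Ogre build has it, otherwise a custom projection matrix
bakeCam->setPosition(0.5f, 0.5f, 1.0f);
bakeCam->lookAt(Ogre::Vector3(0.5f, 0.5f, 0.0f));
bakeCam->setNearClipDistance(0.1f);
bakeCam->setFarClipDistance(10.0f);

// Cull with a different frustum that encloses the real entity, so the
// flattened geometry isn't rejected before the vertex shader runs
Ogre::Frustum* cullFrustum = new Ogre::Frustum();
// ...position/size cullFrustum around the entity's bounding box...
bakeCam->setCullingFrustum(cullFrustum);

Ogre::Viewport* vp = bakeTex->getBuffer()->getRenderTarget()->addViewport(bakeCam);
vp->setBackgroundColour(Ogre::ColourValue::Black);
vp->setOverlaysEnabled(false);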

Or did I misunderstand?
Falagard
OGRE Retired Moderator
Posts: 2060
Joined: Thu Feb 26, 2004 12:11 am
Location: Toronto, Canada
x 3

Post by Falagard »

Yup, I think you got it.
Falagard
OGRE Retired Moderator
Posts: 2060
Joined: Thu Feb 26, 2004 12:11 am
Location: Toronto, Canada
x 3

Post by Falagard »

Need some help. I figure someone knows this off the top of their head.

I have a simple vertex shader where I want to render a flattened version of a mesh like I described above.

I want to use the UV coordinates and output them as the position of each vertex, so that it renders x = 0 y = 0 to the top left of the screen and x = 1 and y = 1 to the bottom right.

Here's what I have currently:

Code: Select all

void LightmapBakeVP(
  float4 position	: POSITION,
  float3 normal : NORMAL,
  float2 uv0 : TEXCOORD0,
  float2 uv1 : TEXCOORD1,
	
  // outputs
  out float4 oPosition : POSITION,
  out float4 oColor : COLOR,
  uniform float4x4 viewProj)
{
  float4 flattenedPos = float4(uv1.x, uv1.y, 1, 1);
  oPosition = mul(viewProj, flattenedPos);
  oColor = float4(1,1,1,1);
}
That was just quickly thrown together and I know it's not right. uv1 is the second set of unwrapped UV coordinates, so I want to use these UVs to determine vertex position in screen space, which essentially renders the mesh unwrapped directly onto the screen.

The camera isn't set to orthographic mode or anything at the moment; I figured I could get away with a normal perspective camera and just output the verts in screen space.

Sadly I'm a n00b at some things, and that includes understanding matrices and how view and projection work to output a vertex position to the screen.

Any help is appreciated, in the meantime I'll just keep trying different things.
Praetor
OGRE Retired Team Member
Posts: 3335
Joined: Tue Jun 21, 2005 8:26 pm
Location: Rochester, New York, US
x 3

Post by Praetor »

You mean unwrapped in the same sense as Max or Maya unwrapping? The view-projection matrix is intended for translating vertices into their final positions for screen rendering, but I don't think it's what you want here. You want all the verts spread out flat, correct?
Falagard
OGRE Retired Moderator
Posts: 2060
Joined: Thu Feb 26, 2004 12:11 am
Location: Toronto, Canada
x 3

Post by Falagard »

My UVs are already unwrapped via 3ds Max, stored in uv1.

I want to render the mesh to the render target and use the uv1.xy as the vertex positions instead of the original vertex positions so it renders to a flat plane, which is spread out over the entire screen, where 0,0 is top left and 1,1 is bottom right. Read the thread if it's unclear.

I'm positive it's possible to do this, just not sure how yet :-)

I may need to set up the camera as an orthographic camera, but was hoping to just be able to set up the proper matrix within the shader.

I have a feeling that I need to multiply each position - which is, in fact, float3(uv1.x, uv1.y, 1) - by the view matrix, then the result of that by the view projection matrix, or something like that.

I'm also not sure what to store in flattenedPos.w or whether it matters at all.
Praetor
OGRE Retired Team Member
Posts: 3335
Joined: Tue Jun 21, 2005 8:26 pm
Location: Rochester, New York, US
x 3

Post by Praetor »

I see now. So uv1 holds the max-unwrapped texcoords. uv0 holds what?

Did you try not multiplying by any matrix, just outputting your flattenedPos as-is? I don't think it would be a mul by view, then view-projection, since that would be view^2 * projection. The projection is what maps eye-space 3D onto the 2D plane. Now, since your uv coordinates are already in 2D, you should be able to simply ensure they are within the viewport bounds (I can't remember if it goes 0 to 1 or -1 to 1) and output them.
Falagard
OGRE Retired Moderator
Posts: 2060
Joined: Thu Feb 26, 2004 12:11 am
Location: Toronto, Canada
x 3

Post by Falagard »

uv0 holds the regular uv coordinates, such as a repeating tiled set of UV coordinates, or whatever you want. Same thing you'd do with any lightmapped mesh.

I did try outputting the flattenedPos directly but maybe I'll look into that some more, thanks.
Falagard
OGRE Retired Moderator
Posts: 2060
Joined: Thu Feb 26, 2004 12:11 am
Location: Toronto, Canada
x 3

Post by Falagard »

Nah, still isn't working. I've turned culling off to be sure it doesn't have to do with normals, and tried various z values and near and far clipping planes just to be sure it isn't getting clipped, but I'm never seeing anything render when I just output the flattened position directly. I'd be happy to see a screwy polygon render *somewhere* on the screen, but instead it's just black. It works fine if I do oPosition = mul(worldViewProj, position), so the shader is set up properly.

I'll keep trying but I'm still open to suggestions.
DWORD
OGRE Retired Moderator
Posts: 1365
Joined: Tue Sep 07, 2004 12:43 pm
Location: Aalborg, Denmark

Post by DWORD »

Falagard wrote:...and tried various z values and near and far clipping planes just to be sure it isn't getting clipped...
Just a shot in the dark, and you probably already tried it. :) Have you tried a negative z value?

Edit: Btw, sounds like a nice idea.
Falagard
OGRE Retired Moderator
Posts: 2060
Joined: Thu Feb 26, 2004 12:11 am
Location: Toronto, Canada
x 3

Post by Falagard »

No, I hadn't tried that yet actually. It didn't work, but thanks for the suggestion!

I got excited with a "why didn't I think of that?" moment too :-(
Praetor
OGRE Retired Team Member
Posts: 3335
Joined: Tue Jun 21, 2005 8:26 pm
Location: Rochester, New York, US
x 3

Post by Praetor »

Ok, how about feeding in just the projection matrix and multiplying by that?
SpaceDude
Bronze Sponsor
Posts: 822
Joined: Thu Feb 02, 2006 1:49 pm
Location: Nottingham, UK
x 3

Post by SpaceDude »

Does this not work then?

Code: Select all

void LightmapBakeVP(
  float4 position   : POSITION,
  float3 normal : NORMAL,
  float2 uv0 : TEXCOORD0,
  float2 uv1 : TEXCOORD1,
   
  // outputs
  out float4 oPosition : POSITION,
  out float4 oColor : COLOR,
  uniform float4x4 viewProj)
{
  oPosition = float4(uv1.x, uv1.y, 0, 0);
  oColor = float4(1,1,1,1);
}
I guess you already tried that but I think it's just a question of figuring out what the visible range is.
Falagard
OGRE Retired Moderator
Posts: 2060
Joined: Thu Feb 26, 2004 12:11 am
Location: Toronto, Canada
x 3

Post by Falagard »

No, that doesn't work, but thanks for another suggestion.

Yes, I'm confused why I'm not seeing anything either.

The closest I've gotten to actually seeing it is multiplying flattenedPos by the regular world-view-projection matrix. What this did was output a tiny quad, basically 1x1 unit, which is small compared to my original mesh, which is in the range of 200x200 units. The 1x1 obviously makes sense because that's the range the UVs are in, and it actually looks like it might be correct - I just need to figure out how to get it to project properly when rendering.

Here are the unwrapped UVs from 3ds Max:

Image

Here's a zoomed-in version (cropped and scaled in Photoshop) of what rendered when I used world view projection. Note I also had simple lighting enabled when I did this, and it baked the lighting into the texture... just really, really small :-)

Image

Here's the uncropped output from the render target:

Image

Here's what it looks like when I just use the regular position and not flattened position:

Image

Further suggestions are appreciated, but I may start investigating an orthographic camera.... I just don't think it's really needed.
Praetor
OGRE Retired Team Member
Posts: 3335
Joined: Tue Jun 21, 2005 8:26 pm
Location: Rochester, New York, US
x 3

Post by Praetor »

Ok, I think this is a good development. My first inclination at this point is that you need a custom projection matrix. The projection matrix should take your small output and blow it up to the full size of the render target. I admit I'm not a master at projection matrices either...

Attempt to switch to an orthographic projection just to see the change it makes.
Falagard
OGRE Retired Moderator
Posts: 2060
Joined: Thu Feb 26, 2004 12:11 am
Location: Toronto, Canada
x 3

Post by Falagard »

Turns out this works:

Code: Select all

oPosition = float4(uv1.x, uv1.y, 1, 1);
I must have changed something like culling mode or clip distance to fix it. I'm trying to track down what I did to make it work, because I know I tried that before (obviously). It could be any combination of the trial and error things I was playing with ;-)

The screen coords must actually be -1 to 1 because it's showing up in the upper right quadrant of the screen and upside down, but I can easily fix that.
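Presumably the final fix is just remapping the 0..1 UVs into that -1..1 range and flipping V, something along these lines (untested):

Code: Select all

// Map the 0..1 unwrapped UVs into -1..1 clip space and flip V so the
// layout isn't upside down (z just needs to stay inside the clip range)
oPosition = float4(uv1.x * 2.0 - 1.0, 1.0 - uv1.y * 2.0, 1.0, 1.0);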

So... thanks!

Image
Praetor
OGRE Retired Team Member
Posts: 3335
Joined: Tue Jun 21, 2005 8:26 pm
Location: Rochester, New York, US
x 3

Post by Praetor »

I believe the w coordinate of the point is used during interpolation. I can't believe I didn't remember/mention that.
Falagard
OGRE Retired Moderator
Posts: 2060
Joined: Thu Feb 26, 2004 12:11 am
Location: Toronto, Canada
x 3

Post by Falagard »

K, so here's the baked lighting fixed into proper position:

Image

And here's the baked texture re-applied to the original mesh.

Image

Light has been turned off here, so it's only the baked texture being displayed. Notice the seams, which are a problem that will have to be addressed, perhaps with a blur filter applied to the image as a post-process shader on the resulting baked texture.

Next I'm going to update the shader so it looks a bit more impressive, with per-pixel lighting instead of per-vertex, and then add in texture shadows.

A few tricky things I need to work on:

1. Depth texture shadows - need to look at Ogre and get them working.

2. Fix the seam problems.

3. Handling multiple entities, including the same entity multiple times in a scene, and entities with multiple materials. I have a cool idea, but here's the not-so-cool idea first:

The easiest way to do it is to have a lightmap per unique entity, so if, say, a chair is in the scene four times, you generate four lightmaps. Lightmap size can be chosen based on the size of the object, so only 64x64 is needed for small objects, etc. This idea is easy, but I think it's a bad approach.

Another possible approach is to start merging multiple meshes into groups that share a lightmap, and here's where the cool idea comes in. I want this process to be fast, so I don't want to be merging vertex buffers together or anything like that, and especially not trying to calculate new unwrapped UVs for a group of merged meshes. I'd like to use the existing unwrapped UV coordinates, which were precalculated, yet have a single lightmap cover multiple meshes, including duplicates of the same mesh. Solution: each mesh keeps its own unwrapped UVs, which are precalculated (either by your 3D modelling program, or automatically using the API built into DirectX). Each mesh will have a target size - the amount of room it should be allocated in a lightmap (such as 64x64 for a small mesh and 512x512 for a larger one). Since the precalculated UVs always fit into a square, you can use a packing algorithm to pack these squares into a larger lightmap such as 1024x1024. So, for example, say there's a couple of chairs, a table, and some walls that need to be lightmapped - determine where they'd fit into a 1024x1024 lightmap, and for each mesh have an offset and scale value that tells it where it goes in that lightmap. These parameters can be passed to each entity, and the shader can find its actual UV coordinates by offsetting into the larger lightmap (see the sketch after this list). Voila - the packing algo should be able to run very quickly - tuan is already using something like it for his imposter system.

4. Cubemap shadow maps for point lights, which currently aren't built into Ogre. Requires rendering 6 shadow textures per point light into a cubemap and using that cubemap for the shadows in the shader.

5. Find a very fast ambient occlusion technique that can be baked into the lighting, won't require me to merge the meshes into one big vertex buffer, and is compatible with the packing idea above. There's screen-space AO, but that won't cut it in this case since it can't be baked as far as I know. NVidia had a Dynamic Ambient Occlusion sample that I think might be useful and includes source, but it worked at the per-vertex level and would take some work to make per-texel instead, if that's even possible. The NVidia example even offered an idea of how it could be reused to perform simple but convincing radiosity quickly.
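For point 3, the per-entity placement could be as simple as something like this in the bake vertex program (a sketch; the parameter name is made up, and the offset/scale would presumably be fed in per entity, e.g. via setCustomParameter on each SubEntity):

Code: Select all

// Sketch: remap an entity's 0..1 unwrapped UVs into its allotted rectangle
// of the shared lightmap (atlasPlacement.xy = offset, atlasPlacement.zw = scale;
// both assumed to be supplied per entity)
void AtlasBakeVP(
  float4 position : POSITION,
  float2 uv1 : TEXCOORD1,
  out float4 oPosition : POSITION,
  uniform float4 atlasPlacement)
{
  float2 atlasUV = uv1 * atlasPlacement.zw + atlasPlacement.xy;
  oPosition = float4(atlasUV.x * 2.0 - 1.0, 1.0 - atlasUV.y * 2.0, 1.0, 1.0);
}
At runtime the material would then sample the big lightmap with the same atlasUV instead of the raw uv1.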

So... in general I think this idea of baking lighting using realtime techniques is pretty awesome. At the very least, it could be used for previewing and tweaking the lighting in scenes before sending it off to a true lightmapping process that could take days to run. At the most, it could be used at runtime to generate baked lighting on the fly without having to distribute lightmaps at all with an application, also allowing you to have far more shadow casting lights on static geometry than a realtime solution would allow.

Some other things have crossed my mind - it'd be easy to generate directional lightmaps a la Half-Life 2. Also, it'd be possible to have per-light lightmaps as an alternative to directional lightmaps. Basically, you could bake up to 4 lights into what would essentially be a shadow map, with each RGBA channel storing a light's shadow mask, and then use traditional per-pixel normal maps, specular, etc. per light.
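As a rough sketch of that last idea (a fragment program fed the baked RGBA mask; the names and the two lights shown are placeholders):

Code: Select all

float4 PerLightMaskFP(
  float2 uvUnique : TEXCOORD1,      // unwrapped UVs the mask was baked with
  float3 normal   : TEXCOORD2,      // normal passed down from the vertex program
  uniform sampler2D lightMask,      // baked texture: one shadow mask per channel
  uniform float3 lightDir0, uniform float4 lightDiffuse0,
  uniform float3 lightDir1, uniform float4 lightDiffuse1) : COLOR
{
  float4 mask = tex2D(lightMask, uvUnique);
  float3 N = normalize(normal);

  // Normal realtime per-pixel lighting, each light gated by its baked mask channel
  float4 colour = lightDiffuse0 * saturate(dot(N, normalize(lightDir0))) * mask.r;
  colour += lightDiffuse1 * saturate(dot(N, normalize(lightDir1))) * mask.g;
  // ...and likewise mask.b / mask.a for lights 2 and 3, plus normal/specular maps as usual
  return colour;
}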

Any help or feedback on these ideas is appreciated.
SpaceDude
Bronze Sponsor
Posts: 822
Joined: Thu Feb 02, 2006 1:49 pm
Location: Nottingham, UK
x 3

Post by SpaceDude »

That's awesome Falagard, looks really promising.

>>3. Handling multiple entities

Yes I think what you describe is basically a texture atlas ( http://www.gamasutra.com/features/20060 ... v_01.shtml ) but for lightmaps.

>>Also, it'd be possible to have per light lightmaps - this is an alternative to directional lightmaps. Basically, you could bake up to 4 lights into a what would essentially be a shadow map, with each channel of RGBA storing a light's shadow mask, and then use traditional per pixel normal maps, specular, etc per light.

That sounds like a cool idea too; you could then have lights switch on and off with a light switch, for example, or be destroyed by weapon fire.

>>2. Fix the seam problems.
I had to solve a similar problem with the ray-tracing lightmapping; you could use the same code I used to solve it. Basically, you need some way to determine whether a pixel on your lightmap corresponds to empty space (call this an invalid pixel) or is part of the lightmap (i.e. corresponds to somewhere on your entity). You could use the alpha channel for this: make the background have an alpha value of 0 and any rendered pixels an alpha value of 255. Then you need to fill each invalid pixel with the closest valid pixel.

In order to do this efficiently, I pre-calculate a search pattern. For each invalid pixel it first searches the directly neighbouring pixels, then pixels further and further away until it finds a valid one. The search pattern is constructed with the following function (note this only needs to be run once):

Code: Select all

// This code is based on STL quite heavily
// m_SearchPattern is of type "vector<pair<int, int> >"

// Used for sorting by distance (used for stl sort method)
struct SortCoordsByDistance
{
	bool operator()(const pair<int, int> &left, const pair<int, int> &right) const
	{
		return (left.first*left.first + left.second*left.second) < 
			   (right.first*right.first + right.second*right.second);
	}
};

void CLightMap::BuildSearchPattern()
{
	m_SearchPattern.clear();
	const int iSize = 5;
	int i, j;
        // Create a square region of predefined size. This size is limited so it will not search further than is really necessary
	for (i=-iSize; i<=iSize; ++i)
	{
		for (j=-iSize; j<=iSize; ++j)
		{
			if (i==0 && j==0)
				continue;
			m_SearchPattern.push_back(make_pair(i, j));
		}
	}
        // Now sort these neighbouring pixels so that closest pixels come first.
	sort(m_SearchPattern.begin(), m_SearchPattern.end(), SortCoordsByDistance());
}
And here is the code that actually looks through each invalid pixel in a texture and fills it with the nearest valid pixel:

Code: Select all

// This code uses the CImg class for accessing pixels; however, this dependency can easily be replaced with custom code or another library.
void CLightMap::FillInvalidPixels()
{
	int i, j;
	int x, y;
	vector<pair<int, int> >::iterator itSearchPattern;
	for (i=0; i<m_iTexSize; ++i)
	{
		for (j=0; j<m_iTexSize; ++j)
		{
			// Invalid pixel found, (Note: you can change this to check the alpha value of the texture)
			if ((*m_LightMap)(i, j, 0, 1) == 0)
			{
				for (itSearchPattern = m_SearchPattern.begin(); itSearchPattern != m_SearchPattern.end(); ++itSearchPattern)
				{
					x = i+itSearchPattern->first;
					y = j+itSearchPattern->second;
					if (x < 0 || x >= m_iTexSize)
						continue;
					if (y < 0 || y >= m_iTexSize)
						continue;
					// If search pixel is valid assign it to the invalid pixel and stop searching
					if ((*m_LightMap)(x, y, 0, 1) == 1)
					{
						(*m_LightMap)(i, j) = (*m_LightMap)(x, y);
						break;
					}
				}
			}
		}
	}
}
This code actually works very well. I had previously tried just blurring the texture, but that does not give good results: it just makes the seams look a bit fuzzier, and they're still clearly visible.

PS: There is a description of the problem here: http://www.flipcode.com/articles/articl ... ping.shtml where they refer to it as bleeding and describe in words how you might go about solving the problem. They have conveniently left out the code which does it, but what I have just shown you above is an implementation of what they described.