How to retrieve UV and paint blood on mesh?

A place for users of OGRE to discuss ideas and experiences of utilitising OGRE in their games / demos / applications.

Post by blackgun »

If I shoot a character, I want him to bleed.

I think it could work this way:

1. Retrieve the UV coordinates of the hit point
2. Paint some blood onto the texture at that UV position
3. Reload the texture

Any suggestions on how to do that?

Thanks

Post by SpaceDude »

The problem with this idea is that if you just draw blood onto your texture in 2D without taking the UV layout into account, you are likely to paint across the boundaries of your UV islands, if you see what I mean. For example, you may get blood appearing on the legs of your character when he gets shot in the head, if the legs happen to be unwrapped next to the head in your texture.

You may want to make a second texture with blood drawn on at pre-determined locations. You could then blend that texture with the regular texture in the regions required, based on where the player was shot. I think this was the approach taken in the game SiN.

On the other hand, Soldier of Fortune had something like what you describe, I think. You could even blow limbs off; that was fun :)

If you want to go ahead with what you describe, I think you would need to iterate over each triangle in the mesh and do an intersection test between the ray and that triangle. If a triangle is intersected, you can interpolate the UV coordinates of the intersection point from the positions and UVs of its three corners.

Note that you'll need a one-to-one mapping from UV space to the surface for this to work correctly, or the UV coordinates you obtain may not be unique.

Post by chmod »

Are decals not sufficient? I'm guessing you want procedurally generated blood or something?

Post by PolyVox »

Don't forget that if you modify the texture, it affects all models which use it. So you shoot one guy in the head and get blood on everyone else's head too!

Most likely you need a decal based approach instead.

Post by SpaceDude »

Decals won't work if your character is animated in any way, which I assume it is. You would need to get the decal to stick to the mesh and deform with it (i.e. not a good idea).

You can easily make a copy of the texture for each instance of the player, but this can be expensive in terms of memory.

Post by blackgun »

Painting across texture seams is not a problem, because I have a fully unwrapped UV map. Look at the pic:
[image: unwrapped UV map]
If the one above doesn't load, look at this:
[image: unwrapped UV map]


Of course, multi-texturing is a good way to do it: don't draw on the base texture, which stays at a high resolution such as 1024x1024; draw on a separate "blood" texture at 256x256, just like a lightmap. So if you shoot 10 characters on screen, there is only one 1024x1024 texture plus 10 small 256x256 textures. When you leave the scene, all the generated textures are freed.


I haven't used decals; maybe they would work. The main problem is retrieving the UV of the hit point: cast a ray query, get the triangle of the mesh, get the UVs of that triangle, then draw a decal or something? How do I do that? Can anybody share some code? Thanks.

Post by Jabberwocky »

Another potential approach:
Depending on how many bullet wounds you need to show, you may not need to generate any new textures at runtime at all. Just have a single texture of a bullet wound. Once you determine the UV of the bullet impact, apply a second material pass to the target. This pass would use your bullet wound texture, scrolled to the appropriate UV offset. The downside is that each wound requires an additional pass, so you couldn't have too many before you'd start suffering serious performance problems.

[edit] Actually, you should be able to put multiple bullet wounds into a single pass:

Code:

material
{
    technique
    {
        // Pass 1:  skin texture
        pass
        {
            texture_unit 
            {
                texture skin.jpg
            }
        }
        // Pass 2:  bullet wounds
        // These would need to be added dynamically as I get shot
        pass
        {
            scene_blend alpha
            // bullet wound 1			
            texture_unit 
            {
                texture bullet_wound.jpg
                scroll 0.15 0.73
            }
            // bullet wound 2
            texture_unit
            {
                texture bullet_wound.jpg
                scroll 0.95 0.21
            }
            // bullet wound 3
            texture_unit
            {
                texture bullet_wound.jpg
                scroll 0.43 0.56
            }
        }
    }
}
Anyway, I have no idea how well that would perform, but it might be worth a shot.

Post by Vectrex »

I'd also be interested in a way to get the UV of a texture so I can draw into it directly. Decals are OK, but sometimes direct drawing is better.

By the way, can shaders, for example, store a little blood splat and draw it into a texture? Is the Ogre dynamic texture demo the only way to do it? It seems pretty slow, which suggests it's doing a lot of CPU/bus transfers.

Post by SpaceDude »

blackgun wrote: Cast a ray query, get the triangle of the mesh, get the UVs of that triangle, then draw a decal or something? How do I do that? Can anybody share some code? Thanks.
Getting the triangles of a mesh involves reading the vertex buffer. This section of the Ogre manual should be useful:

http://www.ogre3d.org/docs/manual/manual_56.html#SEC260

I wrote some code which accesses mesh information (triangles, UV coordinates, etc.) for creating light maps. It was mostly copied from some other source; I can't remember which. Here is the wiki article:

http://www.ogre3d.org/wiki/index.php/Light_mapping

Have a look at the "CLightMap::CalculateLightMap()" function. Most of the code in that function is accessing mesh information.

Now, once you have the mesh information: for each triangle in your mesh you need to test for intersection. If an intersection is detected, and you have triangle corner coordinates P1, P2, P3 and intersection point P, you can calculate the barycentric coordinates. I wrote a function for that too, in the lightmap class:

Code:

Vector3 GetBarycentricCoordinates(const Vector2 &P1, const Vector2 &P2, const Vector2 &P3, const Vector2 &P)
{
	Vector3 Coordinates(0.0);
	Real denom = (-P1.x * P3.y - P2.x * P1.y + P2.x * P3.y + P1.y * P3.x + P2.y * P1.x - P2.y * P3.x);

	if (fabs(denom) >= 1e-6)
	{
		Coordinates.x = (P2.x * P3.y - P2.y * P3.x - P.x * P3.y + P3.x * P.y - P2.x * P.y + P2.y * P.x) / denom;
		Coordinates.y = -(-P1.x * P.y + P1.x * P3.y + P1.y * P.x - P.x * P3.y + P3.x * P.y - P1.y * P3.x) / denom;
//		Coordinates.z = (-P1.x * P.y + P2.y * P1.x + P2.x * P.y - P2.x * P1.y - P2.y * P.x + P1.y * P.x) / denom;
	}
	Coordinates.z = 1 - Coordinates.x - Coordinates.y;

	return Coordinates;
}
Some info on Barycentric coordinates here: http://en.wikipedia.org/wiki/Barycentri ... ematics%29

If you then want to use this to get the UV coordinates T at the hit point, given the UV coordinates T1, T2, T3 at the three corners of your triangle, do something like this:

Code:

Vector3 B = GetBarycentricCoordinates(P1, P2, P3, P);
Vector2 T = T1*B.x + T2*B.y + T3*B.z;
// Draw blood splat at UV coordinates T in your texture

Post by blackgun »

Thanks. I just read the code. I think it locks the hardware buffer and reads the animated mesh's vertices, normals and UVs from the buffer. If I have the index of the hit point, I can get vertices[idx], normals[idx] and UVs[idx], but how can I get that index? Sorry for not understanding the code. Would you show some example usage, like:

Code:

Entity *ent = mSceneMgr->createEntity("head", "ogrehead.mesh");
GetMeshInformation(ent->getMesh(), ...);
http://www.ogre3d.org/wiki/index.php/Light_mapping

Code:

	vector<Vector3> MeshVertices;
	{
		VertexData* vertex_data = submesh->useSharedVertices ? submesh->parent->sharedVertexData : submesh->vertexData;
		const VertexElement* posElem = vertex_data->vertexDeclaration->findElementBySemantic(Ogre::VES_POSITION);
		HardwareVertexBufferSharedPtr vbuf = vertex_data->vertexBufferBinding->getBuffer(posElem->getSource());
		unsigned char* vertex = static_cast<unsigned char*>(vbuf->lock(Ogre::HardwareBuffer::HBL_READ_ONLY));

		float* pReal;

		MeshVertices.resize(vertex_data->vertexCount);

		for (size_t j = 0; j < vertex_data->vertexCount; ++j, vertex += vbuf->getVertexSize())
		{
			posElem->baseVertexPointerToElement(vertex, &pReal);
			MeshVertices[j] = WorldTransform*Vector3(pReal[0],pReal[1],pReal[2]);
		}

		vbuf->unlock();
	}

...
...


Post by SpaceDude »

blackgun wrote: Thanks. I just read the code. I think it locks the hardware buffer and reads the animated mesh's vertices, normals and UVs from the buffer. If I have the index of the hit point, I can get vertices[idx], normals[idx] and UVs[idx], but how can I get that index? Sorry for not understanding the code.
Erm... I'm not quite sure what you are asking me here. Since you were asking about making the character bleed when he is hit, I assume you already have a method to determine whether your character was hit. Are you using some kind of physics library, or do you plan to write this code yourself?

If you plan to write it yourself, there is some information here on how to determine whether a ray (line) intersects a triangle:

http://local.wasp.uwa.edu.au/~pbourke/g ... planeline/
http://local.wasp.uwa.edu.au/~pbourke/g ... nsidepoly/

This will give you your intersection point. And since you are iterating over the triangles in the mesh, you also know which triangle was hit, right?

Post by blackgun »

Actually, I am asking about how to cast a ray and get the collision point with the mesh in Ogre. I am using ODE, and I can do a ray collision and get all the hit point information (position, normal), but that is against the rigid bodies (cylinders), not the mesh (more than 10,000 triangles). I know PhysX has a good convex collision detection solution, but I don't want to rebuild the 10,000 triangles in physics space just for painting. The buffer has all the mesh information, but Ogre's ray query only does collision at bounding-box level. I'm just looking for code that does ray collision at triangle level within Ogre's framework. This is very useful not only for painting but also for a real-time mesh editor.

Post by SpaceDude »

blackgun wrote: Actually, I am asking about how to cast a ray and get the collision point with the mesh in Ogre. I am using ODE, and I can do a ray collision and get all the hit point information (position, normal), but that is against the rigid bodies (cylinders), not the mesh (more than 10,000 triangles). I don't want to rebuild the 10,000 triangles in physics space just for painting.
Yes, fair point. What I would suggest is that you continue to use ODE for the ray collision detection. If ODE detects a collision, you can then do the expensive operation of figuring out which triangle the ray actually hits. That way the expensive work is only performed when a bullet hits. Actually, it's not that hard to code ray-triangle intersection manually; it's just hard to get it well optimised. But it would still be cheaper to perform this collision detection manually than to rebuild your mesh in PhysX.

Post by pilo »

This thread helped me sort out my problems; thanks to everyone who posted.

The reason I resurrected this old thread is that I had to figure out how to find barycentric coordinates for a triangle with Vector3 points instead of Vector2 points.

This is how I did it, so others don't have to reinvent the wheel:

Code:

// Determine the barycentric coordinates of the collision for this triangle
// (used this as ref: http://www.farinhansford.com/dianne/teaching/cse470/materials/BarycentricCoords.pdf)
Vector3 p1 = vertices[indices[i]];      // triangle corners
Vector3 p2 = vertices[indices[i+1]];
Vector3 p3 = vertices[indices[i+2]];
Vector3 p = ray.getPoint(hit.second);   // intersection point

Vector3 v = p2 - p1;
Vector3 w = p3 - p1;
Vector3 u = v.crossProduct(w);
Real A = u.length();        // 2x area of the whole triangle

v = p2 - p;
w = p3 - p;
u = v.crossProduct(w);
Real A1 = u.length();       // 2x area of the sub-triangle opposite p1

v = p - p1;
w = p3 - p1;
u = v.crossProduct(w);
Real A2 = u.length();       // 2x area of the sub-triangle opposite p2

//v = p2 - p1;
//w = p - p1;
//u = v.crossProduct(w);
//Real A3 = u.length();     // 2x area of the sub-triangle opposite p3

// We should really check the dot products u.u1, u.u2, u.u3 and use their signs
// as the signs of A1/A, A2/A, A3/A below, but since we know the ray intersects
// the triangle, all three barycentric coordinates are positive.
Vector3 barycentricCoords;
barycentricCoords.x = A1/A;
barycentricCoords.y = A2/A;
barycentricCoords.z = 1.0f - barycentricCoords.x - barycentricCoords.y;