
Alpha Blending for hair rendering

Posted: Tue Jun 11, 2013 5:05 pm
by Bastien Cantoche
Hello everybody.
I read a few topics about the same alpha blending problem, but I didn't find a good solution for my case. Moreover, the last one was from 2009, so I think a new discussion could help a lot of people.
I should say that I'm not sure the method I use is the best one. Let me explain what I want to do.

Note: sorry for my English, it is not perfect.

I'm focusing on rendering a character's hair. My aim is to have this:

So I import my head into Ogre. In the file leonard.mesh, all the hair is in one mesh.
Each vertex of the mesh has texture coordinates which define:
- the position in the texture
- the position in the associated transparency map

I write the file leonard.material to specify my own fragment shader. In it, I sample the alpha factor from the transparency map and attach it to the texture color.

Code:

#version 330 compatibility

// "texture" is a built-in function name in GLSL 3.30, so the diffuse sampler is renamed
uniform sampler2D diffuseMap;
uniform sampler2D transparencyMap;

void main(void)
{
	// The alpha factor is stored in the red channel of the transparency map
	vec4 alphaColor = texture2D(transparencyMap, gl_TexCoord[0].st);
	vec3 rgbColor = vec3(texture2D(diffuseMap, gl_TexCoord[0].st));
	float alphaFactor = alphaColor.r;
	gl_FragColor = vec4(rgbColor, alphaFactor);
}
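For context, the leonard.material side might look roughly like this (a sketch, not the original file: the program, material, and texture names are assumptions; scene_blend alpha_blend is what makes the shader's output alpha actually blend):

```
fragment_program hairFP glsl
{
    source hair.frag
    default_params
    {
        // Bind the samplers to texture units 0 and 1
        param_named diffuseMap int 0
        param_named transparencyMap int 1
    }
}

material leonard/Hair
{
    technique
    {
        pass
        {
            scene_blend alpha_blend

            fragment_program_ref hairFP
            {
            }

            texture_unit
            {
                texture hair_diffuse.png
            }
            texture_unit
            {
                texture hair_alpha.png
            }
        }
    }
}
```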
I think you can see the problem here. If depth testing is ON, I can only see the nearest pixels of the hair. Because I want to see multiple overlapping hair layers, I need to turn the depth test off. If I do this, all the pixels are written, but because I don't have a sorted list, some opaque pixels which are behind transparent pixels are drawn in front instead. So I get this sort of image:


Therefore, I would like to:
- 1: sort my vertices by alpha factor;
- 2: draw all the opaque pixels with the depth test active;
- 3: turn the depth test OFF;
- 4: draw the transparent pixels;
- 5: turn the depth test back ON for the following elements of the scene.
I speak about vertices because several people told me that sorting for alpha blending is a CPU operation.

Part of the problem is that I don't know how to split the vertices of an Ogre entity, and I don't know how to tell Ogre to draw these vertices in a certain order.
Moreover, I don't know if my method is the right one, but I read everywhere that there is no perfect solution to this problem.

I hope you will enjoy my topic and find some solutions with me.

Have a good day !

Re: Alpha Blending for hair rendering

Posted: Wed Jun 12, 2013 1:41 pm
by Bastien Cantoche
I think I found a solution.

I will export my model to an XML file and write a script to split the faces into two separate files.
That way I will be able to create two models: leonard.mesh.xml (which contains the opaque faces) and leonard_hairs.mesh.xml (which contains the transparent faces).

I will import these XML files using OgreXMLConverter, and apply two different materials to the two models: one using depth_write on (for the opaque faces) and the other using depth_write off (for the transparent faces).

With this method I think I will be able to draw my character properly, without a lot of complicated work.

I will show you the result if I succeed.

Have a good day !

Re: Alpha Blending for hair rendering

Posted: Fri Jun 14, 2013 11:32 am
by Bastien Cantoche
Finally, my problem is back.

I succeeded in splitting my faces (from file.mesh.xml) into two separate files (one with the opaque faces, the other with the transparent ones).
But with depth-write turned off, some of the transparent faces are still drawn in front of the others. The blending is not working very well.

I really think I have to sort the faces every frame and draw them from the farthest to the nearest.

If someone has any idea on how to do that in Ogre, it would be great !

Re: Alpha Blending for hair rendering

Posted: Fri Jun 14, 2013 1:36 pm
by Bastien Cantoche
I just noticed I'm in the wrong section. If someone can move it :) (sorry)

Re: Alpha Blending for hair rendering

Posted: Mon Jun 17, 2013 11:19 am
by Bastien Cantoche
I found a solution. It is not the best, but the image finally looks good.

The problem with this render is that the skin is drawn first, so some pixels of the avatar's hair are not drawn (because of the depth check).

The solution I found is to draw the hair in two passes:

The first pass draws the top hair layers quite well (depth-write OFF):

The second pass only draws the nearest polygons (depth-write ON):

The result is quite good, and I will keep this solution until a better one comes along.
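In material-script terms, the two passes could look roughly like this (a sketch, not the original file: the material name is an assumption, and the texture units and shader references are omitted):

```
material leonard/HairTwoPass
{
    technique
    {
        // First pass: blend every hair layer, without touching the depth buffer
        pass
        {
            scene_blend alpha_blend
            depth_write off
        }
        // Second pass: redraw with depth writes on, so the nearest polygons win
        pass
        {
            scene_blend alpha_blend
            depth_write on
        }
    }
}
```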

Thank you.

Re: Alpha Blending for hair rendering

Posted: Wed Jun 19, 2013 6:04 pm
by madmarx
Interesting solution. Thanks for sharing your idea.

Re: Alpha Blending for hair rendering

Posted: Tue Jun 25, 2013 10:23 am
by Bastien Cantoche
Finally, this solution runs into a major problem.
The second pass is supposed to hide the artifacts of the first one, but the second pass is also transparent, so depending on the camera viewpoint we can still perceive them.

The final solution I adopted is to depth-sort the transparent faces each time the camera (or its look-at point) changes. For that, we need to sort the index buffer of the submesh. Remember that the faces are triangles, so we have to sort the vertices three by three. To compute the depth of a triangle, I average the positions of its three vertices.

Here is part of the code I wrote (be careful: with a huge number of faces, it can take a lot of time).

In createScene() method :

Code:

Ogre::Entity* leonard = mSceneMgr->createEntity("Leonard", "leo.mesh");

// mListOfTransparentMaterials is filled beforehand with the names of the transparent materials

// For each submesh
Ogre::MeshPtr mesh = leonard->getMesh();
for(size_t i = 0; i < mesh->getNumSubMeshes(); ++i)
{
	Ogre::SubMesh* submesh = mesh->getSubMesh(i);
	Ogre::String submeshMaterialName = submesh->getMaterialName();
	// If the submesh uses a transparent material
	if(std::find(mListOfTransparentMaterials.begin(), mListOfTransparentMaterials.end(), submeshMaterialName) != mListOfTransparentMaterials.end())
	{
		// Read the submesh's vertex positions once and cache them
		// (mVerticesPositions is assumed to be a std::map<Ogre::SubMesh*, Ogre::Vector3*> member)
		Ogre::Vector3* verticesPositions = readVertexBuffer(submesh);
		mVerticesPositions.insert(std::make_pair(submesh, verticesPositions));
		// Sort the faces once before the first frame
		updateIndexBuffer(submesh);
	}
}

The readVertexBuffer() method (call it just once at the beginning, for each submesh):

Code:

Ogre::Vector3* OgreApp::readVertexBuffer(Ogre::SubMesh* submeshToRead)
{
	Ogre::VertexData* vertexData = submeshToRead->vertexData;
	Ogre::VertexBufferBinding* vertexBufferBinding = vertexData->vertexBufferBinding;
	if(vertexBufferBinding->getBufferCount() > 1)
	{
		std::cerr << "More than 1 vertex buffer for this submesh. Not supported. End" << std::endl;
		return 0;
	}

	Ogre::HardwareVertexBufferSharedPtr vertexBuffer = vertexBufferBinding->getBuffer(0);
	size_t vertexCount = vertexBuffer->getNumVertices();
	Ogre::Vector3* verticesCoordinates = new Ogre::Vector3[vertexCount];
	Ogre::VertexDeclaration* decl = vertexData->vertexDeclaration;

	unsigned char* pVert = static_cast<unsigned char*>(vertexBuffer->lock(Ogre::HardwareBuffer::HBL_READ_ONLY));
	Ogre::Real* pReal;
	for(size_t v = 0; v < vertexCount; ++v)
	{
		// Find the position element and copy its three floats
		Ogre::VertexDeclaration::VertexElementList elems = decl->findElementsBySource(0);
		Ogre::VertexDeclaration::VertexElementList::iterator i, iend;
		for(i = elems.begin(); i != elems.end(); ++i)
		{
			Ogre::VertexElement& elem = *i;
			if(elem.getSemantic() == Ogre::VES_POSITION)
			{
				elem.baseVertexPointerToElement(pVert, &pReal);
				verticesCoordinates[v].x = pReal[0];
				verticesCoordinates[v].y = pReal[1];
				verticesCoordinates[v].z = pReal[2];
			}
		}
		pVert += vertexBuffer->getVertexSize();
	}
	vertexBuffer->unlock(); // do not forget to unlock the buffer
	return verticesCoordinates;
}
Finally, the updateIndexBuffer() method (call this one whenever the camera changes):

Code:

void OgreApp::updateIndexBuffer(Ogre::SubMesh* submeshToSort)
{
	Ogre::IndexData* indexData = submeshToSort->indexData;
	Ogre::HardwareIndexBufferSharedPtr indexBuffer = indexData->indexBuffer;
	// Vertex positions cached by readVertexBuffer() in createScene()
	// (mVerticesPositions is assumed to be a std::map<Ogre::SubMesh*, Ogre::Vector3*> member)
	Ogre::Vector3* verticesCoordinates = mVerticesPositions[submeshToSort];

	// Work on the index buffer
	size_t indexesCount = indexBuffer->getNumIndexes();
	unsigned short* pIdx = static_cast<unsigned short*>(indexBuffer->lock(Ogre::HardwareBuffer::HBL_NORMAL));
	Ogre::Vector3 cameraCenter = mCamera->getPosition();
	// A triangle list needs an index count that is a multiple of 3
	if(indexesCount % 3 != 0)
	{
		std::cerr << "The number of indexes is not a multiple of 3. Not supported. End." << std::endl;
		indexBuffer->unlock();
		return;
	}
	Ogre::Real* distancesArray = new Ogre::Real[indexesCount / 3];

	for(size_t i = 0; i < indexesCount; i += 3)
	{
		// Calculate the triangle center by averaging its three vertices
		Ogre::Vector3 center;
		size_t indexV1 = pIdx[i];
		size_t indexV2 = pIdx[i + 1];
		size_t indexV3 = pIdx[i + 2];
		center.x = (verticesCoordinates[indexV1].x + verticesCoordinates[indexV2].x + verticesCoordinates[indexV3].x) / 3;
		center.y = (verticesCoordinates[indexV1].y + verticesCoordinates[indexV2].y + verticesCoordinates[indexV3].y) / 3;
		center.z = (verticesCoordinates[indexV1].z + verticesCoordinates[indexV2].z + verticesCoordinates[indexV3].z) / 3;
		// Calculate the distance between the center and the camera and fill the array
		Ogre::Real distance = sqrt(pow(cameraCenter.x - center.x, 2) + pow(cameraCenter.y - center.y, 2) + pow(cameraCenter.z - center.z, 2));
		distancesArray[i / 3] = distance;
	}

	// Sort the triangles from the farthest to the nearest (simple O(n^2) sort)
	for(size_t i = 0; i < indexesCount / 3; ++i)
	{
		for(size_t j = 0; j < i; ++j)
		{
			if(distancesArray[i] > distancesArray[j])
			{
				// Swap the distances i and j...
				Ogre::Real tempDistance = distancesArray[i];
				distancesArray[i] = distancesArray[j];
				distancesArray[j] = tempDistance;
				// ...and the corresponding index triples in the index buffer
				for(size_t k = 0; k < 3; ++k)
				{
					size_t tempIndex = pIdx[i * 3 + k];
					pIdx[i * 3 + k] = pIdx[j * 3 + k];
					pIdx[j * 3 + k] = tempIndex;
				}
			}
		}
	}

	indexBuffer->unlock();
	delete[] distancesArray;
}
Result :


If you have any ideas/advices I'm interested :)

Re: Alpha Blending for hair rendering

Posted: Tue Jun 25, 2013 2:47 pm
by madmarx
no need for sqrt ^^ + the camera should be in the relative space of the object.

Best regards,


Re: Alpha Blending for hair rendering

Posted: Wed Jun 26, 2013 9:25 am
by Bastien Cantoche
[s]How would you calculate the distance in this case?[/s]
[EDIT]: the square root isn't necessary because it preserves the ordering (it is an increasing function), so it is a useless operation.
To put the camera position into the object's relative space, should I take the local space as a parameter?

Re: Alpha Blending for hair rendering

Posted: Fri Jun 28, 2013 11:24 pm
by madmarx
Sorry, I didn't see your answer.
I would compute the world matrix of the camera, Wcam, and the world matrix of the object, Wobj. The inverse of the object's world matrix is WobjInv. Then Wcam * WobjInv gives you the camera in local space (maybe it's WobjInv * Wcam; I'm too lazy to think it through, although I believe it's rather Wcam * WobjInv). Maybe there is already a function for that, but I don't use it at the moment.

Re: Alpha Blending for hair rendering

Posted: Sat Jun 29, 2013 6:36 pm
by bstone
That's an interesting approach. But I would rather use a simpler hair model, either without alpha blending or with much less overlap.

Re: Alpha Blending for hair rendering

Posted: Mon Jul 01, 2013 9:55 am
by Bastien Cantoche
Thank you bstone :) Can you develop your approach a bit ?

Re: Alpha Blending for hair rendering

Posted: Sun Jan 12, 2014 8:00 pm
by bstone
Sorry, I missed your post back then and didn't have time to check in for quite a while. Here's something I'd expect from an artist working on game assets when it comes to hair:


That works magic with normal mapped low-res geometry and no alpha transparency at all.

If you absolutely need that patchy kind of hair, as in your images above, then you should solve it with alpha-to-coverage, not alpha blending (as you have probably figured out already). MSAA is not just a fancy word after all :)
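For reference, alpha-to-coverage in an Ogre material script is a pass attribute; a minimal sketch (the material and texture names are assumptions, and MSAA must be enabled for it to have any effect):

```
material leonard/HairA2C
{
    technique
    {
        pass
        {
            // Discard mostly-transparent texels; MSAA coverage smooths the edges
            alpha_rejection greater 128
            alpha_to_coverage on

            texture_unit
            {
                texture hair_diffuse.png
            }
        }
    }
}
```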