[GSoC 2012] Off-Screen Particles project - continuation

Threads related to Google Summer of Code
Assaf Raman
OGRE Team Member
Posts: 3092
Joined: Tue Apr 11, 2006 3:58 pm
Location: TLV, Israel
x 76

Re: [GSoC 2012] Off-Screen Particles project

Post by Assaf Raman »

You can take days off, but keep us updated. The best arrangement is that every day you post, at the beginning of the day, what you are going to do, and at the end of the day what you actually did and what you are going to do the next day - then we will know you are working. If you are taking a day off - write that as your plan.
Having said that - start working on the project; tutorials are nice, but we need to get to work here. :)
Watch out for my OGRE related tweets here.
Karol Badowski 1989
Google Summer of Code Student
Posts: 185
Joined: Fri Apr 06, 2012 3:04 pm
x 18


Post by Karol Badowski 1989 »

Tonight and this morning I prepared for an exam on DNA sequencing methods. I have just come back from the exam, and at the moment I am back to work on the depth acquisition implementation.
Google Summer of Code 2012 Student
Topic: "Implementation of Off-Screen Particles"
Project links: Project thread, WIKI page, Code fork for the project
Mentor: Assaf Raman

Post by Karol Badowski 1989 »

Today I am continuing the implementation of depth acquisition, and I am analysing the solutions used by ahmedismaiel.

Post by Karol Badowski 1989 »

I am still continuing the implementation of depth acquisition and the compositor.

Post by Assaf Raman »

Can you give more information on what that means?

Post by Karol Badowski 1989 »

Yesterday (30.06) I hosted guests (family); unfortunately I did not manage to progress the work during daylight.
Today I'll work night and day to make up for yesterday:
I'll attempt to pass the depth texture information between two compositors placed in one compositor chain (so that it can be further processed there) and use that compositor chain in my example.
After making that work correctly, I'll start implementing downsampling (starting by seeing how it is done in the HDR example).
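A minimal sketch of that texture passing, using the compositor script mechanics OGRE provides for this (the compositor, texture and material names here are made up for illustration): declaring a texture with `chain_scope` exposes it to compositors later in the same chain, and `texture_ref` imports it there:

```
// First compositor: renders the scene into a chain-scoped texture.
compositor OSP_DepthProducerSketch
{
    technique
    {
        // chain_scope makes depthTex visible to later compositors in this chain
        texture depthTex target_width target_height PF_FLOAT32_RGBA chain_scope

        target depthTex
        {
            input none
            pass render_scene {}
        }
    }
}

// Second compositor: references the first one's texture by name.
compositor OSP_DepthConsumerSketch
{
    technique
    {
        texture_ref depthTex OSP_DepthProducerSketch depthTex

        target_output
        {
            input none
            pass render_quad
            {
                // hypothetical material that reads depthTex (content_type compositor)
                material OSP_ProcessDepthSketch
            }
        }
    }
}
```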
jacmoe
OGRE Retired Moderator
Posts: 20570
Joined: Thu Jan 22, 2004 10:13 am
Location: Denmark
x 179


Post by jacmoe »

Do you have any screenshots you'd like to show us, Karol? :)
/* Less noise. More signal. */
Ogitor Scenebuilder - powered by Ogre, presented by Qt, fueled by Passion.
OgreAddons - the Ogre code suppository.

Post by Assaf Raman »

Assaf Raman wrote:Well, the mid-term evaluations are on July 9; by that time I will need to see real progress, else we will have a problem here.
According to your new schedule you will have less than two weeks to program - I don't like it.
What would you do if you were in my place? Tell me how this is going to work out.
I haven't seen real progress since the quoted post. I am sorry to say that I don't see myself giving you a passing grade - as no real work has been done.
I am sorry.

Post by Karol Badowski 1989 »

I could edit and show some screenshots and videos I've been taking of the current version of the application and of the bugs in the shader that I am trying to eliminate (they include the application with basic rendering, versions modified by compositors and my own materials, and the depth texture rendering from SSAO - which works well in Direct3D 9, but has a problem in OpenGL with skipping transparent materials). However, I would like to finish fixing and joining the shaders before I edit and upload these videos, together with the code, as something valuable that could change the decision on whether to pass this project in the evaluation.

During the last days I've been (and still am) continuing to build the shader chain. I shall continue until it works, because this part would be understood as 'real work'.
It is true that the other work only served as training and as a way of finding out how to make the final code work correctly. Finding out how to do something is the most time-consuming part.
I believe I am able to finish it and make it work correctly this week, before the evaluation.

Right now, downsampling is still based on the one from SSAO (the solid objects' material is replaced with the "GBuffer" material). The goal is to make it more similar to deferred shading, where the original materials are inherited, but modified.


The basic idea is:

Continue writing shaders for every step until halo effects are visible (depth texture downsampling, adding the particle-effects texture with a binary depth test) - for now, dividing solid and particle objects manually by giving them separate materials, just until it works.

When I achieve that goal, I'll come back to automatic modification of materials from the code, based on given parameters (downsample scale, a flag for whether an object is on the particle-objects list or the solid-objects list, a flag for whether "maximum-of-depth" downsampling is preferred over simple downsampling).
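To pin down what "maximum-of-depth" downsampling means, here is a CPU reference sketch (a hypothetical helper, for illustration only - the real implementation will run in a fragment shader over the depth texture):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// CPU reference for "maximum of depth" downsampling: for every
// block x block field of the full-resolution depth buffer, keep
// the largest (farthest) depth value. Width and height are assumed
// to be divisible by block.
std::vector<float> downsampleMaxDepth(const std::vector<float>& depth,
                                      std::size_t width, std::size_t height,
                                      std::size_t block)
{
    const std::size_t outW = width / block;
    const std::size_t outH = height / block;
    std::vector<float> out(outW * outH, 0.0f);
    for (std::size_t y = 0; y < outH; ++y) {
        for (std::size_t x = 0; x < outW; ++x) {
            float maxDepth = 0.0f;
            for (std::size_t dy = 0; dy < block; ++dy) {
                for (std::size_t dx = 0; dx < block; ++dx) {
                    const std::size_t srcX = x * block + dx;
                    const std::size_t srcY = y * block + dy;
                    maxDepth = std::max(maxDepth, depth[srcY * width + srcX]);
                }
            }
            out[y * outW + x] = maxDepth;
        }
    }
    return out;
}
```

Keeping the maximum per block biases mixed pixels towards the background, which keeps the low-resolution depth test conservative near the edges of solid objects.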
Last edited by Karol Badowski 1989 on Wed Jul 04, 2012 2:08 pm, edited 8 times in total.

Post by Karol Badowski 1989 »

There is an issue to solve in the future:
Handling semi-transparent non-particle materials
It is an unsolved topic I've mentioned before, about transparent objects that are not particle effects:

Layers of semi-transparent non-particle objects and layers of particles could interleave with each other like a sandwich.
The "off-screen particles" technique suggested in http://http.developer.nvidia.com/GPUGem ... _ch23.html would eliminate particles that are behind the glass...

Is it a good solution to treat semi-transparent objects like particle systems, so their output colour would include the depth order of glass/particles?

Perhaps not good enough... the texture layer coming from colourful glass objects could sometimes have relatively high contrast not only on edges, and should not be downsampled and blurred...

Maybe the most correct solution is rendering semi-transparent objects alpha-blended together with particles, but at full resolution:

------------------------------------------------------------------------------
- Generate the colour and depth texture of non-particle, non-transparent objects.
- While doing that, also check transparent non-particle objects, but do not render them to the texture.
Instead, use them to create a separate 2D texture: value 1 if the closest object is a semi-transparent object, and value 0 if a non-transparent object is in front.
Suppose it is possible to obtain that kind of data in one pass.
- Stencil render for value 0 on the mentioned texture (meaning "semi-transparent objects are invisible on that pixel"):
{
- downsample depth to a new texture with "maximum of depth"
- do every step of "off-screen particles" - only for particles, up to the stenciling.
- when you finally detect edges with the stencil buffer, add value 1 to the texture that we used earlier for stenciling
}
- Stencil render in another pass for value 1 on the mentioned texture (this texture has just been updated):
{
- do not downsample
- render non-particle semi-transparent objects (without "soft particles") and particle effects (with "soft particles" depth testing).
- For both, use alpha-blending, together - into their one shared texture.
}
------------------------------------------------------------------------------


If semi-transparent non-particle objects do not have high-frequency textures/reflections, the first stenciling part could be omitted. That way semi-transparent objects could be rendered in the downsampled pass, and the final texture could be Gaussian-blurred if needed. They still shouldn't have the "soft particles" effect, and they should still be rendered at full resolution on sharp edges - which is ensured by the stenciling that happens after the Sobel filter.
duststorm
Minaton
Posts: 921
Joined: Sat Jul 31, 2010 6:29 pm
Location: Belgium
x 80


Post by duststorm »

What I was wondering: is there a reason why you don't check in code on your Bitbucket fork?
I would recommend you commit at least once a day. Then you also show your mentor what you are doing, and your code can be tested. Right now the only thing he has is the forum messages, which are sometimes cryptic.
Developer @ MakeHuman.org

Post by Assaf Raman »

He is only at the stage of creating "test" projects and such.
I guess this can be a good idea - but create a different repo for committing tests if needed.
We are very late in the game (days from the mid-term) to be at the stage of "researching", but this is where we seem to be.
More than that - writing detailed daily reports would have been good, but it didn't happen here, even after it was asked for more than once.
A big part of the summer project is for the student to communicate with the open source community - not only doing the work.
Read this whole thread and see why this student has a really big problem now.

Post by Karol Badowski 1989 »

At this moment, this is my application used for tests of compositor chain modification.
OffScreenParticles.cpp

Code:

/*
-----------------------------------------------------------------------------
Filename:    OffScreenParticles.cpp
-----------------------------------------------------------------------------
*/
#include "OffScreenParticles.h"
#include <OgreMath.h>
#include <OgreVector4.h>
#include <OgreColourValue.h>
#include <OgreSubEntity.h>


	Ogre::Viewport* mViewport;

	Ogre::SceneNode* headNode;
	Ogre::SceneNode* barrelsNode;
	Ogre::SceneNode* launcherNode;
	Ogre::SceneNode* penguinTravelPathNode;
	Ogre::SceneNode* penguinNode;
	Ogre::SceneNode* fogNode;
	Ogre::SceneNode* houseNode;

	unsigned long period = 3000;
	float headStartingPhase = 1.5f;
	float barrelsPhaseDelay = 0.60f;
	float headAmplitude = 30.0f;
	float headAverageAngle = 2.5f;
	float barrelsAmplitude = 25.0f;
	float barrelsAverageAngle = 0.0f;

	float previousPenguinHeigth = 0.0f;
	float previousPenguinHeigthDelta = 0.0f;
	float previousTravelAngle = 0.0f;

	
	float previousPenguinAngle = 0.0f;
	float previousHeadAngle = headStartingPhase*Ogre::Math::PI;
	float previousBarrelsAngle = previousHeadAngle - barrelsPhaseDelay*Ogre::Math::PI;
	float penguinAngle;
	float headAngle;
	float barrelsAngle;

	int pushUpsBetweenTravelling = 5;
	int loopsPerTravelling = 2;

	unsigned int pushUpsCount = 0;
	bool isPenguinTravelling=false;



//-------------------------------------------------------------------------------------
OffScreenParticles::OffScreenParticles(void)
{
}
//-------------------------------------------------------------------------------------
OffScreenParticles::~OffScreenParticles(void)
{
}

Ogre::String GetShaderFileNameSuffix(){
	Ogre::String shaderFileNameSuffix;
	
    // Use GLSL ES in case of OpenGL ES 2 render system.
	if (Ogre::Root::getSingletonPtr()->getRenderSystem()->getName().find("OpenGL ES 2") != Ogre::String::npos)
	{
		shaderFileNameSuffix = "glsles";	
	}
    // Use GLSL in case of OpenGL render system.
    else if (Ogre::Root::getSingletonPtr()->getRenderSystem()->getName().find("OpenGL") != Ogre::String::npos)
	{
		shaderFileNameSuffix = "glsl";		
	}

	// Use HLSL, Multiple Render Target in case of D3D9 render system.
	else if (Ogre::Root::getSingletonPtr()->getRenderSystem()->getName().find("Direct3D9") != Ogre::String::npos)
	{
		shaderFileNameSuffix = "hlsl";			
	}
	// Use HLSL,  in case of D3D10 or D3D11 render system.
	else if (Ogre::Root::getSingletonPtr()->getRenderSystem()->getName().find("Direct3D1") != Ogre::String::npos)
	{
		shaderFileNameSuffix = "hlsl4";			
	}
	// If none of above render systems fit, use cg
	else{
		shaderFileNameSuffix = "cg";
	}

	return shaderFileNameSuffix;
}


bool OffScreenParticles::frameRenderingQueued(const Ogre::FrameEvent& evt)
{
	
	float headPhase = mRoot->getTimer()->getMilliseconds() * Ogre::Math::PI / period + headStartingPhase*Ogre::Math::TWO_PI;
	float barrelsPhase = headPhase - barrelsPhaseDelay*Ogre::Math::PI;

	headAngle = (Ogre::Math::Sin(headPhase)) * headAmplitude + headAverageAngle;
	barrelsAngle = (Ogre::Math::Sin(barrelsPhase)) * barrelsAmplitude + barrelsAverageAngle;

	headNode->pitch(Ogre::Degree(headAngle-previousHeadAngle));
	barrelsNode->pitch(Ogre::Degree(barrelsAngle-previousBarrelsAngle));

	

	float penguinHeigth = (sin(mRoot->getTimer()->getMilliseconds() / 500.0)+1.0f) * 50.0f;
	penguinNode->setPosition(0.0f, penguinHeigth, 0.0f);
	float penguinHeigthDelta = penguinHeigth - previousPenguinHeigth;

	if(!isPenguinTravelling){
		float penguinRocketOscilation = mRoot->getTimer()->getMilliseconds() / 270.0;
		// spin the head around and make it float up and down
		launcherNode->setPosition(sin(penguinRocketOscilation) * 10.0f - 50.0f, 0.0f, -100.0f);
		penguinAngle = -cos(penguinRocketOscilation) * 10.0f;
		penguinNode->roll(Ogre::Degree(penguinAngle-previousPenguinAngle));

	  //checking if penguin has made next push-up
		if(previousPenguinHeigthDelta>=0 && penguinHeigthDelta<0){
			pushUpsCount = (pushUpsCount+1) % pushUpsBetweenTravelling;
			if(pushUpsCount==0){
				/*start travelling*/
				isPenguinTravelling = true;
				launcherNode->pitch(Ogre::Degree(30));
			}
		}

		previousPenguinHeigth = penguinHeigth;
		previousPenguinHeigthDelta = penguinHeigthDelta;
		previousPenguinAngle = penguinAngle;

	}
	else{
		float angleDelta = evt.timeSinceLastFrame*50;
	  //checking if penguin has made a loop;
		if(previousTravelAngle + angleDelta > 360*loopsPerTravelling){
			isPenguinTravelling = false;
			angleDelta = 360*loopsPerTravelling - previousTravelAngle;
			launcherNode->pitch(Ogre::Degree(-30));
			previousTravelAngle = 0.0f;
		}
		else{
			previousTravelAngle += angleDelta; 
		}
		penguinTravelPathNode->yaw(Ogre::Degree(angleDelta));
	}

	previousHeadAngle = headAngle;
	previousBarrelsAngle = barrelsAngle;

	fogNode->yaw(Ogre::Degree(evt.timeSinceLastFrame * 200));

	return BaseApplication::frameRenderingQueued(evt);
}

/*void BloomCompositorListener::notifyCompositor( Ogre::CompositorInstance * instance )
        {
            CompositorListener::notifyCompositor( instance );

            // Get some RTT dimensions for later calculations
            Ogre::CompositionTechnique::TextureDefinitionIterator defIter = instance->getTechnique()->getTextureDefinitionIterator();

            while ( defIter.hasMoreElements() )
            {
                Ogre::CompositionTechnique::TextureDefinition * def = defIter.getNext();

                if ( def->name == TTEXT("renderTarget1") || def->name == TTEXT("renderTarget2") )
                {
                    def->width = mVpWidth / 2;
                    def->height = mVpHeight / 2;
                }
            }
        }*/

enum ShaderParam { SP_SHININESS = 1, SP_DIFFUSE, SP_SPECULAR };

//-------------------------------------------------------------------------------------
void OffScreenParticles::createScene(void)
{
	mViewport=mWindow->getViewport(0);
	mViewport->setBackgroundColour(Ogre::ColourValue(0.2f, 0.2f, 0.2f, 1));
	
	Ogre::CompositorManager::getSingleton().addCompositor(mViewport, "OSP_SolidMaterialsBuffer");
    Ogre::CompositorManager::getSingleton().setCompositorEnabled(mViewport, "OSP_SolidMaterialsBuffer", true);
	Ogre::CompositorManager::getSingleton().addCompositor(mViewport, "OSP_ShowDepth");//"SSAO/Volumetric");//"SSAO/ShowNormals");
    Ogre::CompositorManager::getSingleton().setCompositorEnabled(mViewport, "OSP_ShowDepth", true);//"SSAO/Volumetric", true);//"SSAO/ShowNormals", true);
	Ogre::CompositorManager::getSingleton().addCompositor(mViewport, "OSP_PostFilter");
    Ogre::CompositorManager::getSingleton().setCompositorEnabled(mViewport, "OSP_PostFilter", true);
	/*Ogre::CompositorManager::getSingleton().addCompositor(mViewport, "Bloom");
    Ogre::CompositorManager::getSingleton().setCompositorEnabled(mViewport, "Bloom", true);*/

	//Ogre::CompositorManager::getSingleton().addCompositor(mViewport, "BlackWhite2");
    //Ogre::CompositorManager::getSingleton().setCompositorEnabled(mViewport, "BlackWhite2", true);


	//Ogre::CompositorChain *chain = Ogre::CompositorManager::getSingleton().getCompositorChain(mViewport);
	//Ogre::Compositor newCompositor = Ogre::CompositorManager::getSingleton().create("MyCompositor","MyGroup",true);
	//Ogre::CompositionTechnique::TextureDefinitionIterator defIter = Ogre::CompositorManager::instance->get

	mSceneMgr->setAmbientLight(Ogre::ColourValue(0.75f, 0.75f, 0.25f));
	Ogre::Light* light = mSceneMgr->createLight("MainLight");
    light->setPosition(200.0f, 80.0f, -50.0f);

	



    // create your scene here :)

	Ogre::SceneNode* root = mSceneMgr->getRootSceneNode();

  //Ogre's head
	headNode = root->createChildSceneNode("HeadNode");
	Ogre::Entity* ogreHead = mSceneMgr->createEntity("Head", "ogrehead.mesh");
	headNode->attachObject(ogreHead);
	headNode->setPosition(50.0f, 0.0f, -100.0f);
	headNode->pitch(Ogre::Degree(previousHeadAngle));


	//ogreHead->setMaterialName("SSAO/ShowDepth");//"Examples/CelShading");
	/*Ogre::SubEntity* sub;

    sub = ogreHead->getSubEntity(0);    // eyes
    sub->setCustomParameter(SP_SHININESS, Ogre::Vector4(35, 0, 0, 0));
    sub->setCustomParameter(SP_DIFFUSE, Ogre::Vector4(1, 0.3, 0.3, 1));
    sub->setCustomParameter(SP_SPECULAR, Ogre::Vector4(1, 0.6, 0.6, 1));

    sub = ogreHead->getSubEntity(1);    // skin
    sub->setCustomParameter(SP_SHININESS, Ogre::Vector4(10, 0, 0, 0));
    sub->setCustomParameter(SP_DIFFUSE, Ogre::Vector4(0, 0.5, 0, 1));
    sub->setCustomParameter(SP_SPECULAR, Ogre::Vector4(0.3, 0.5, 0.3, 1));

    sub = ogreHead->getSubEntity(2);    // earring
    sub->setCustomParameter(SP_SHININESS, Ogre::Vector4(25, 0, 0, 0));
    sub->setCustomParameter(SP_DIFFUSE, Ogre::Vector4(1, 1, 0, 1));
    sub->setCustomParameter(SP_SPECULAR, Ogre::Vector4(1, 1, 0.7, 1));

    sub = ogreHead->getSubEntity(3);    // teeth
    sub->setCustomParameter(SP_SHININESS, Ogre::Vector4(20, 0, 0, 0));
    sub->setCustomParameter(SP_DIFFUSE, Ogre::Vector4(1, 1, 0.7, 1));
    sub->setCustomParameter(SP_SPECULAR, Ogre::Vector4(1, 1, 1, 1));*/





  //Barrels on Ogre's tusks
	barrelsNode = headNode->createChildSceneNode("BarrelsNode");
	barrelsNode->setPosition(0.0f, 5.0f, -2.0f);
	barrelsNode->pitch(Ogre::Degree(previousBarrelsAngle));
	
  //Right tusk Barrel
	Ogre::SceneNode* barrel1Node = barrelsNode->createChildSceneNode("Barrel1Node");
	Ogre::Entity* barrel1 = mSceneMgr->createEntity("Barrel1", "Barrel.mesh");
	barrel1Node->attachObject(barrel1);
	barrel1Node->setPosition(-21.0f, 2.0f, 0.0f);
	barrel1Node->scale(1.25f, 1.25f, 1.25f);
  //Right Barrel's smoke
	Ogre::SceneNode* smoke1Node = barrel1Node->createChildSceneNode("Smoke1Node");
    Ogre::ParticleSystem* smoke1 = mSceneMgr->createParticleSystem("Smoke1", "Examples/Smoke2");
	//smoke1->setMaterialName("Examples/CelShading");
	smoke1Node->attachObject(smoke1);
	
  //Left tusk Barrel
	Ogre::SceneNode* barrel2Node = barrelsNode->createChildSceneNode("Barrel2Node");
	Ogre::Entity* barrel2 = mSceneMgr->createEntity("Barrel2", "Barrel.mesh");
	barrel2Node->attachObject(barrel2);
	barrel2Node->setPosition(21.0f, 2.0f, 0.0f);
	barrel2Node->scale(1.25f, 1.25f, 1.25f);
  //Left Barrel's smoke
	Ogre::SceneNode* smoke2Node = barrel2Node->createChildSceneNode("Smoke2Node");
    Ogre::ParticleSystem* smoke2 = mSceneMgr->createParticleSystem("Smoke2", "Examples/Smoke2");
	smoke2Node->attachObject(smoke2);


  //Penguin Travel Path
	penguinTravelPathNode = root->createChildSceneNode("PenguinTravelPath");
	penguinTravelPathNode->setPosition(25.0f, 0.0f, -100.0f);

  //Penguin Launcher
	launcherNode = penguinTravelPathNode->createChildSceneNode("LauncherNode");
	launcherNode->setPosition(-75.0f, 0.0f, 0.0f);

  //Penguin Rocket
	penguinNode = launcherNode->createChildSceneNode("PenguinNode");
	Ogre::Entity* penguin = mSceneMgr->createEntity("Penguin","penguin.mesh");
	//penguin->setMaterialName("shader/orange_"+GetShaderFileNameSuffix());//"shader/smooth_add");//"Examples/CelShading");//"shader/depth_cg");//"PlainTexture");//"TextureModColor");
	penguinNode->attachObject(penguin);
	penguinNode->setPosition(0.0f, 50.0f, 0.0f);

  //Penguin Rocket's jet
	Ogre::SceneNode* penguinJetNode = penguinNode->createChildSceneNode("PenguinJetNode");
    Ogre::ParticleSystem* penguinJet = mSceneMgr->createParticleSystem("PenguinJet", "Examples/JetEngine1");
	penguinJetNode->attachObject(penguinJet);

  //Fog
	fogNode = launcherNode->createChildSceneNode("FogNode");
    Ogre::ParticleSystem* fog = mSceneMgr->createParticleSystem("Fog", "Examples/Fog");
	fogNode->attachObject(fog);
	//fog->setMaterialName("ShowDepth");//"TextureModColor");//"PlainTexture");
	//Ogre::CompositorManager::getSingleton().addCompositor(mViewport, "SSAO/Post/CrossBilateralFilter");//"DOF");//"DeferredShading/ShowDepthSpecular");//"Sharpen Edges");//"B&W");
	//Ogre::CompositorManager::getSingleton().setCompositorEnabled(mViewport, "SSAO/Post/CrossBilateralFilter", true);//"DOF", true);//"DeferredShading/ShowDepthSpecular", true);//"Sharpen Edges", true);//"B&W", true);

	
    ogreHead->setMaterialName("OSP_SolidMaterialsBuffer");
    barrel1->setMaterialName("OSP_SolidMaterialsBuffer");
    barrel2->setMaterialName("OSP_SolidMaterialsBuffer");
	penguin->setMaterialName("OSP_SolidMaterialsBuffer");

	//smoke1->setMaterialName("SSAO/GBuffer");
	//smoke2->setMaterialName("SSAO/GBuffer");
	//penguinJet->setMaterialName("SSAO/GBuffer");
	//fog->setMaterialName("SSAO/GBuffer");

}



#if OGRE_PLATFORM == OGRE_PLATFORM_WIN32
#define WIN32_LEAN_AND_MEAN
#include "windows.h"
#endif

#ifdef __cplusplus
extern "C" {
#endif

#if OGRE_PLATFORM == OGRE_PLATFORM_WIN32
    INT WINAPI WinMain( HINSTANCE hInst, HINSTANCE, LPSTR strCmdLine, INT )
#else
    int main(int argc, char *argv[])
#endif
    {
        // Create application object
        OffScreenParticles app;

        try {
            app.go();
        } catch( Ogre::Exception& e ) {
#if OGRE_PLATFORM == OGRE_PLATFORM_WIN32
            MessageBox( NULL, e.getFullDescription().c_str(), "An exception has occurred!", MB_OK | MB_ICONERROR | MB_TASKMODAL);
#else
            std::cerr << "An exception has occurred: " <<
                e.getFullDescription().c_str() << std::endl;
#endif
        }

        return 0;
    }

#ifdef __cplusplus
}
#endif
OffScreenParticles.h

Code:

/*
-----------------------------------------------------------------------------
Filename:    OffScreenParticles.h
-----------------------------------------------------------------------------
*/
#ifndef __OffScreenParticles_h_
#define __OffScreenParticles_h_

#include "BaseApplication.h"

class OffScreenParticles : public BaseApplication
{
public:
    OffScreenParticles(void);
    virtual ~OffScreenParticles(void);
    virtual bool frameRenderingQueued(const Ogre::FrameEvent& evt);

protected:
    virtual void createScene(void);

};

#endif // #ifndef __OffScreenParticles_h_
Entities that are manually given the material "OSP_SolidMaterialsBuffer" should be displayed as non-particle systems - rendered to a texture that is visible in one of the following compositors.

It seems that under Direct3D 9, all materials other than "OSP_SolidMaterialsBuffer" are excluded from rendering.
In the OpenGL render system they are visible - the compositor somehow renders them.


I should do that the way it is done in deferred shading. It is a little bit complicated there; however, I think that there is no other solution than listeners. The clearest explanation of how to use them that I have found so far is http://www.ogre3d.org/forums/viewtopic.php?f=5&t=67593 - I'll try to solve it that way.



Depth, together with other values, is passed between compositors in an MRT texture (like in the compositor "SSAO/GBuffer" from the example applications).

Code:

texture mrt target_width target_height PF_FLOAT32_RGBA PF_FLOAT32_RGBA PF_FLOAT32_RGBA chain_scope
And that is understood as the G-buffer there.


I'm passing a texture with colour in the RGB channels and depth in the alpha channel.

Code:

texture depthInAlpha target_width target_height PF_FLOAT32_RGBA chain_scope
Right now the code of the g-buffer looks like this:
OSP.compositor

Code:

compositor OSP_SolidMaterialsBuffer
{
    technique
    {
        texture depthInAlpha target_width target_height PF_FLOAT32_RGBA chain_scope
        texture occlusion target_width target_height PF_FLOAT32_RGBA chain_scope

        target depthInAlpha
        {
            input none
            shadows off
            
            pass clear 
			{
			    buffers colour depth stencil
				depth_value 1.0 
			}      

            pass render_scene {}
        }
    }
}
The aim of this compositor is to check whether depth is acquired correctly:

Code:

compositor OSP_ShowDepth
{
    technique 
    {
        texture_ref occlusion OSP_SolidMaterialsBuffer occlusion

        target occlusion
        {
            input none
            
            pass render_quad
            {
                // Renders a fullscreen quad with a material
                material OSP_ShowDepth
            }
        }
    }
}
In this one, today I'll add the maximum-of-depth downsampling of the texture.
(For every field of 2x2, 4x4 or 8x8 depth values, the maximum value will be chosen.)
Right now it just displays everything without modification.

Code:

compositor OSP_PostFilter
{
    technique 
    {
        target_output
        {
            input none
            
            pass render_quad
            {
                material OSP_PostFilter
            }
        }
    }
}
This is the file with the materials used in this compositor chain.
OSP.material

Code:

/*fragment_program OSP_ShowDepth_fp_hlsl hlsl
{
    source OSP.cg
    entry_point OSP_ShowDepth_fp
    target ps_3_0
}*/

fragment_program OSP_ShowDepth_fp_cg cg
{
    source OSP.cg
    entry_point OSP_ShowDepth_fp
    profiles ps_2_x arbfp1
}

fragment_program OSP_ShowDepth_fp unified
{
	//delegate OSP_ShowDepth_fp_hlsl 
	delegate OSP_ShowDepth_fp_cg
}

material OSP_ShowDepth
{
    technique
    {
        pass
        {
            depth_check off

			vertex_program_ref Ogre/Compositor/StdQuad_vp {}			
            fragment_program_ref OSP_ShowDepth_fp {}

            texture_unit 
            {
                content_type compositor OSP_SolidMaterialsBuffer depthInAlpha
                tex_address_mode clamp
                filtering none
            }

            texture_unit
            {
                texture gray256.png
                tex_address_mode wrap
                filtering none
            }
        }
    }
}

//---------------------------------------------------

// Gbuffer Material

/*vertex_program OSP_SolidMaterialsBuffer_vp_hlsl hlsl
{
    source OSP.cg
    entry_point OSP_SolidMaterialsBuffer_vp
    target vs_3_0
}

fragment_program OSP_SolidMaterialsBuffer_fp_hlsl hlsl
{
    source OSP.cg
    entry_point OSP_SolidMaterialsBuffer_fp
    target ps_3_0 
}*/

vertex_program OSP_SolidMaterialsBuffer_vp_cg cg
{
    source OSP.cg
    entry_point OSP_SolidMaterialsBuffer_vp
    profiles vs_2_x arbvp1
	default_params
    {
    }
}

fragment_program OSP_SolidMaterialsBuffer_fp_cg cg
{
    source OSP.cg
    entry_point OSP_SolidMaterialsBuffer_fp
    profiles ps_3_0 arbfp1
	default_params
    {
    }
}

vertex_program OSP_SolidMaterialsBuffer_vp unified
{
	//delegate OSP_SolidMaterialsBuffer_vp_hlsl 
	delegate OSP_SolidMaterialsBuffer_vp_cg
}
fragment_program OSP_SolidMaterialsBuffer_fp unified
{
	//delegate OSP_SolidMaterialsBuffer_fp_hlsl 
	delegate OSP_SolidMaterialsBuffer_fp_cg
}

material OSP_SolidMaterialsBuffer
{
    technique
    {
        pass
        {	 
            //scene_blend alpha_blend
            vertex_program_ref OSP_SolidMaterialsBuffer_vp
            {
				param_named_auto worldViewProj worldviewproj_matrix
				param_named_auto texelOffsets texel_offsets
				param_named_auto depthRange scene_depth_range
            }

            fragment_program_ref OSP_SolidMaterialsBuffer_fp
            {
                param_named_auto depthRange scene_depth_range
            }
            texture_unit
            {
                texture Ten.png 2d
            }
        }
    }
}

//------------------------------------------------

/*fragment_program OSP_PostFilter_fp_hlsl hlsl
{
    source OSP.cg
    entry_point OSP_PostFilter_fp
    target ps_3_0 
}*/

fragment_program OSP_PostFilter_fp_cg cg
{
    source OSP.cg
    entry_point OSP_PostFilter_fp
    profiles ps_3_0 ps_2_x arbfp1
}

fragment_program OSP_PostFilter_fp unified
{
//	delegate OSP_PostFilter_fp_hlsl
	delegate OSP_PostFilter_fp_cg 
}

material OSP_PostFilter
{
    technique
    {
        pass
        {
            cull_hardware none
			cull_software none
			depth_check off
			
			vertex_program_ref Ogre/Compositor/StdQuad_vp {}
            fragment_program_ref OSP_PostFilter_fp {}

            texture_unit
            {
                content_type compositor OSP_SolidMaterialsBuffer occlusion
                tex_address_mode clamp
                filtering none
            }
        }
    }
}
As you can see, right now only Cg shaders are used.
In the *.cpp you can see a commented-out part where I chose the render system by selecting the *.material file with the appropriate suffix, which would bring different shaders for different render systems.
However, it seems that there is a more pleasant way
(fragment_program (...) unified)
where OGRE chooses the supported shader itself.

Anyway, I'm writing Cg shaders for now.

Code: Select all

void OSP_SolidMaterialsBuffer_vp(
        float4 inputPosition      : POSITION,
        out float4 outputPosition : POSITION,
        out float2 outputDepth    : TEXCOORD0,

	    uniform float4x4 worldViewProj,
	    uniform float4 texelOffsets,
	    uniform float4 depthRange
        )
{
        outputPosition = mul(worldViewProj, inputPosition);
		outputPosition.xy += texelOffsets.zw * outputPosition.w; //solution presented by cyanbeck
		outputDepth.x = smoothstep(depthRange.x,depthRange.y,outputPosition.z);
		outputDepth.y = outputPosition.w;
}

void OSP_SolidMaterialsBuffer_fp(
        float2 inputDepth              : TEXCOORD0,
        float2 diffuse                 : TEXCOORD1, // needed by tex2D below
		out float4 outputColorAndDepth : COLOR,
		
		uniform sampler2D texture
        )
{
		outputColorAndDepth = tex2D(texture, diffuse);
		float finalDepth = inputDepth.x;
        outputColorAndDepth.w = finalDepth;
}


void OSP_ShowDepth_fp
(
    in float2 iTexCoord: TEXCOORD0, 
    out float4 oColor0 : COLOR0,
    uniform sampler depthInAlpha: register(s0),
    uniform sampler tex : register(s1)
)
{
	float4 colorAndDepth = tex2D(depthInAlpha, iTexCoord);
    float depth = colorAndDepth.a;
	float3 color = colorAndDepth*depth;
	
	
    //oColor0 = float4(tex2D(tex, float2((255-depth*200), 0)).rgb, 1);
    //oColor0 = float4(color.rgb, 1);
    oColor0 = float4(depth, depth, depth, 1);
}

void OSP_PostFilter_fp (
    in float2 uv : TEXCOORD0,
    out float4 oColor0 : COLOR0,

    uniform sampler sOcclusion : register(s0)
)
{
    oColor0 = float4(tex2D(sOcclusion, uv).xyz, 1);
} 
The compositor+material+shader syntax has a lot of attributes you need to know before you can reach the goal you are aiming for. (I am reading the third chapter of http://www.ogre3d.org/docs/manual/ again and again every day.)

I have a question for today:
Have you seen a really simple example where a variable is set through a compositor listener, so that it is visible on both levels: the compositor and the shader code?

I want to pass a scale parameter from the application code via a compositor listener, similar to the solution used in http://ogre3d.org/forums/viewtopic.php?f=2&t=43238 to modify the render target texture (in my case it will be the depth texture).
"target_width_scaled <value, e.g. 0.25>" would be changed there.
This parameter should also be visible at the level of the Cg shader (so it can iterate over (1/width_scale) * (1/height_scale) pixels in search of the appropriate one).
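What I have in mind is roughly the sketch below (the class name, the "scale" constant name and the compositor name are only placeholders of mine, not anything from the existing code):

```cpp
// Sketch: a compositor listener that pushes a scale value into the
// fragment program each time the compositor's quad material is rendered.
class ScaleListener : public Ogre::CompositorInstance::Listener
{
public:
    ScaleListener(float scale) : mScale(scale) {}

    virtual void notifyMaterialRender(Ogre::uint32 passId, Ogre::MaterialPtr &mat)
    {
        // Reach the fragment program parameters of the quad pass and set the
        // named constant that the shader declares (e.g. "uniform float4 scale").
        Ogre::GpuProgramParametersSharedPtr params =
            mat->getTechnique(0)->getPass(0)->getFragmentProgramParameters();
        params->setNamedConstant("scale", Ogre::Vector4(mScale, mScale, 0, 0));
    }

private:
    float mScale;
};

// Usage: attach to the compositor instance after enabling it.
// Ogre::CompositorInstance *inst =
//     Ogre::CompositorManager::getSingleton().getCompositorChain(mViewport)
//         ->getCompositor("OSP_PostFilter");
// inst->addListener(new ScaleListener(0.25f));
```

The listener fires every time the quad material is rendered, so the shader always sees the current value without the material script having to hard-code it.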

Right now there is downsampling in the HDR sample effect, which has a separate version of the material for each downsampling scale and hard-coded texture sizes. HDR takes (2^a) x (2^a) textures and turns them into (2^(a-1)) x (2^(a-1)) textures, iterating until it reaches a 1x1 texture.
It is a nice solution, but not flexible enough. My task for today is to change it into a few versions of downsampling (at least one that chooses the maximum value in a rectangle of pixels).
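Before touching the shaders, the max-in-a-rectangle rule itself can be prototyped on the CPU (illustration only, nothing here is OGRE or shader code):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Downsample a w x h depth buffer by a factor of 2, keeping the maximum
// depth of each 2x2 block (the "maximum of depths" variant of downsampling).
std::vector<float> downsampleMax(const std::vector<float> &src, int w, int h)
{
    std::vector<float> dst((w / 2) * (h / 2));
    for (int y = 0; y < h / 2; ++y)
        for (int x = 0; x < w / 2; ++x)
        {
            // The four source texels covered by this destination texel.
            float a = src[(2 * y) * w + 2 * x];
            float b = src[(2 * y) * w + 2 * x + 1];
            float c = src[(2 * y + 1) * w + 2 * x];
            float d = src[(2 * y + 1) * w + 2 * x + 1];
            dst[y * (w / 2) + x] = std::max(std::max(a, b), std::max(c, d));
        }
    return dst;
}
```

The fragment shader version is the same four fetches with a tex2D at offsets of half a source texel, writing the max.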

For the next few days:
In a separate pass, the texture of particle effects will be rendered while the solid objects are set to invisible. This process needs access to the depth texture. The scale parameters should also be passed by the compositor listener, for a few reasons:
- so xy addresses can be referenced appropriately (modulo operations when referencing an element of the depth texture) during the binary depth test;
- in this compositor, the output texture will be downsampled too;
- downsampling has to make things quicker, so fewer samples of the particle texture should be taken while rendering each particle.

I am not 100% sure yet, but at the moment I think it should be done in the vertex shader by modifying the output position to the correct scale, shouldn't it?
Pseudocode of the general idea

Code: Select all

-float2 offset = //(x,y) position of the texture's (0,0) point from the camera view
-outputPosition = multiplication of worldViewProj and inputPosition
-outputPosition.x = (outputPosition.x - offset.x)*scaleInWidth + offset.x
-outputPosition.y = (outputPosition.y - offset.y)*scaleInHeight + offset.y
Next task:

After the binary depth test:
-scale parameters,
-the downsampled particle texture,
-the colour texture of the solid objects, from the g-buffer
These three input sources will then be passed to the compositor that joins the textures.
The scale is needed while joining (for each background pixel, we need to reference the pixel at the appropriate position in the particle texture).
Alternatively, the particle texture could be scaled back to the original view size and then composited, but this is not necessary.
Last edited by Karol Badowski 1989 on Fri Jul 06, 2012 9:02 am, edited 1 time in total.
Google Summer of Code 2012 Student
Topic: "Implementation of Off-Screen Particles"
Project links: Project thread, WIKI page, Code fork for the project
Mentor: Assaf Raman
TheSHEEEP
OGRE Retired Team Member
OGRE Retired Team Member
Posts: 972
Joined: Mon Jun 02, 2008 6:52 pm
Location: Berlin
x 65

Re: [GSoC 2012] Off-Screen Particles project

Post by TheSHEEEP »

Well, now he definitely is reporting ;)
My site! - Have a look :)
Also on Twitter - extra fluffy
Karol Badowski 1989
Google Summer of Code Student
Google Summer of Code Student
Posts: 185
Joined: Fri Apr 06, 2012 3:04 pm
x 18

Re: [GSoC 2012] Off-Screen Particles project

Post by Karol Badowski 1989 »

Well, there is unfortunately not much to show yet.
There is more "figured out how to implement something" than actually implemented so far.

About the new compositor with the depth pass in the alpha channel:
I am still fixing it - it malfunctions.

To check the difference between DirectX and OpenGL that I mentioned, you can use this compositor (it is nearly the same as SSAO).

The quote is included just for the code. The questions in that message can be skipped; however, I am not erasing them from the quote, because maybe someone knows a better solution than the one mentioned in the previous message. If you already have a very clear example application of compositor/material modification at runtime through listeners (for example, used to render two groups of objects to separate render targets), it would be useful. There is a working example in the deferred shading sample, but it is not the simplest one.


Quote from the message sent yesterday to my mentor:
Karol Badowski 1989 wrote:I've just added depth acquisition with the depth passed in the alpha channel of a texture.
My material equivalent to the "g-buffer" is still attached manually to the objects that are supposed to be rendered as non-particles. There are some more differences since last time. All depth calculations are done in the first compositor's vertex shader.

Right now I am debugging it. I think I'll be finished with that task in several hours.
It does not, however, change the strategy of choosing materials for objects in the scene. That is why I can already ask a question based on the older version.


There is a difference between running the application under OpenGL and DirectX 9 that causes a strange effect (I'll explain it on the old version, nearly the same as SSAO):

Particle effects are invisible in DirectX (GOOD)
Image
Particle effects are visible, but with a very strange material, in OpenGL (WRONG).
Image

In application:

Code: Select all

(...)
Ogre::CompositorManager::getSingleton().addCompositor(mViewport, "OSP_SolidMaterialsBuffer");
    Ogre::CompositorManager::getSingleton().setCompositorEnabled(mViewport, "OSP_SolidMaterialsBuffer", true);
	Ogre::CompositorManager::getSingleton().addCompositor(mViewport, "OSP_ShowDepth");//"SSAO/Volumetric");//"SSAO/ShowNormals");
    Ogre::CompositorManager::getSingleton().setCompositorEnabled(mViewport, "OSP_ShowDepth", true);//"SSAO/Volumetric", true);//"SSAO/ShowNormals", true);
	Ogre::CompositorManager::getSingleton().addCompositor(mViewport, "OSP_PostFilter");
    Ogre::CompositorManager::getSingleton().setCompositorEnabled(mViewport, "OSP_PostFilter", true);

(...)
//material of the G-buffer equivalent is attached manually only to solid objects

    ogreHead->setMaterialName("OSP_SolidMaterialsBuffer");
    barrel1->setMaterialName("OSP_SolidMaterialsBuffer");
    barrel2->setMaterialName("OSP_SolidMaterialsBuffer");
    penguin->setMaterialName("OSP_SolidMaterialsBuffer");

//It is not attached to particle effects
	//smoke1->setMaterialName("SSAO/GBuffer");
	//smoke2->setMaterialName("SSAO/GBuffer");
	//penguinJet->setMaterialName("SSAO/GBuffer");
	//fog->setMaterialName("SSAO/GBuffer");
(...)
In DirectX only the solid objects are rendered.
In OpenGL the particle effects are rendered by the compositor too, with a really strange material. I suppose that in SSAO the occlusion relies on the fact that compositors in the chain can't work unless the object has the special buffer with the requested data. In OpenGL it seems the object is still displayed anyway.

This is old version of compositor chain, nearly same as SSAO:
OSP.cg file:

Code: Select all

void OSP_SolidMaterialsBuffer_vp(
        float4 iPosition : POSITION,
        float3 iNormal   : NORMAL,

        out float4 oPosition : POSITION,
        out float3 oViewPos : TEXCOORD0,
        out float3 oNormal : TEXCOORD1,

        uniform float4x4 cWorldViewProj,
        uniform float4x4 cWorldView
        )
{
        oPosition = mul(cWorldViewProj, iPosition);         // transform the vertex position to the projection space
        oViewPos = mul(cWorldView, iPosition).xyz;          // transform the vertex position to the view space
        oNormal = mul(cWorldView, float4(iNormal,0)).xyz;   // transform the vertex normal to view space
}

void OSP_SolidMaterialsBuffer_fp(
        float3 iViewPos : TEXCOORD0,
        float3 iNormal  : TEXCOORD1,

        out float4 oColor0 : COLOR0,        // rgb color
        out float4 oNormalDepth : COLOR1,   // normal + linear depth [0, 1]
        out float4 oViewPos : COLOR2,       // view space position
        
        uniform float cNearClipDistance,
        uniform float cFarClipDistance // !!! might be 0 for infinite view projection.
        )
{

        oColor0 = float4(1, 1, 1, 1); //...and a least little touch of the titanium white...
    	float clipDistance = cFarClipDistance - cNearClipDistance;
		oNormalDepth = float4(normalize(iNormal).xyz, (length(iViewPos) - cNearClipDistance) / clipDistance);
		oViewPos = float4(iViewPos, 0);
}


void OSP_ShowDepth_fp
(
    in float2 iTexCoord: TEXCOORD0, 
    
    out float4 oColor0 : COLOR0,

    uniform sampler mrt1: register(s0),
    uniform sampler tex : register(s1)
)
{
    float depth = tex2D(mrt1, iTexCoord).w;
    oColor0 = float4(tex2D(tex, float2((255-depth*200), 0)).rgb, 1);
}

void OSP_PostFilter_fp (
    in float2 uv : TEXCOORD0,
    out float4 oColor0 : COLOR0,

    uniform sampler sOcclusion : register(s0)
)
{
    oColor0 = float4(tex2D(sOcclusion, uv).xyz, 1);
} 
OSP.compositor

Code: Select all

compositor OSP_SolidMaterialsBuffer
{
    technique
    {
        // GBuffer enconding: --------------------------------------------------
        // mrt0: rgba --> unused in this sample (plain white, (1, 1, 1, 1))
        // mrt1: xyz --> normals, w --> normalized linear depth [0, 1]
        // mrt2: xyz --> position in view space
        // 
        // use a better packing of variables in the mrt to (possibly) increase
        // performance!
        // ---------------------------------------------------------------------
        
        texture mrt target_width target_height PF_FLOAT32_RGBA PF_FLOAT32_RGBA PF_FLOAT32_RGBA chain_scope
        texture occlusion target_width target_height PF_FLOAT32_RGBA chain_scope

        target mrt
        {
            input none
            shadows off
            
            pass clear 
			{
			    buffers colour depth stencil
				depth_value 1.0 
			}      

            pass render_scene {}
        }
    }
}


compositor OSP_ShowDepth
{
    technique 
    {
        texture_ref occlusion OSP_SolidMaterialsBuffer occlusion

        target occlusion
        {
            input none
            
            pass render_quad
            {
                // Renders a fullscreen quad with a material
                material OSP_ShowDepth
            }
        }
    }
}

compositor OSP_PostFilter
{
    technique 
    {
        target_output
        {
            input none
            
            pass render_quad
            {
                material OSP_PostFilter
            }
        }
    }
}
OSP.material

Code: Select all

/*fragment_program OSP_ShowDepth_fp_hlsl hlsl
{
    source OSP.cg
    entry_point OSP_ShowDepth_fp
    target ps_3_0
}*/

fragment_program OSP_ShowDepth_fp_cg cg
{
    source OSP.cg
    entry_point OSP_ShowDepth_fp
    profiles ps_2_x arbfp1
}

fragment_program OSP_ShowDepth_fp unified
{
	//delegate OSP_ShowDepth_fp_hlsl 
	delegate OSP_ShowDepth_fp_cg
}

material OSP_ShowDepth
{
    technique
    {
        pass
        {
            depth_check off

			vertex_program_ref Ogre/Compositor/StdQuad_vp {}			
            fragment_program_ref OSP_ShowDepth_fp {}

            texture_unit 
            {
                content_type compositor OSP_SolidMaterialsBuffer mrt 1
                tex_address_mode clamp
                filtering none
            }

            texture_unit
            {
                texture gray256.png
                tex_address_mode wrap
                filtering none
            }
        }
    }
}

//---------------------------------------------------

// Gbuffer Material

/*vertex_program OSP_SolidMaterialsBuffer_vp_hlsl hlsl
{
    source OSP.cg
    entry_point OSP_SolidMaterialsBuffer_vp
    target vs_3_0
}

fragment_program OSP_SolidMaterialsBuffer_fp_hlsl hlsl
{
    source OSP.cg
    entry_point OSP_SolidMaterialsBuffer_fp
    target ps_3_0 
}*/

vertex_program OSP_SolidMaterialsBuffer_vp_cg cg
{
    source OSP.cg
    entry_point OSP_SolidMaterialsBuffer_vp
    profiles vs_2_x arbvp1
}

fragment_program OSP_SolidMaterialsBuffer_fp_cg cg
{
    source OSP.cg
    entry_point OSP_SolidMaterialsBuffer_fp
    profiles ps_3_0 arbfp1
}

vertex_program OSP_SolidMaterialsBuffer_vp unified
{
	//delegate OSP_SolidMaterialsBuffer_vp_hlsl 
	delegate OSP_SolidMaterialsBuffer_vp_cg
}
fragment_program OSP_SolidMaterialsBuffer_fp unified
{
	//delegate OSP_SolidMaterialsBuffer_fp_hlsl 
	delegate OSP_SolidMaterialsBuffer_fp_cg
}

material OSP_SolidMaterialsBuffer
{
    technique
    {
        pass
        {	 
            vertex_program_ref OSP_SolidMaterialsBuffer_vp
            {
                param_named_auto cWorldViewProj worldviewproj_matrix
                param_named_auto cWorldView worldview_matrix
            }

            fragment_program_ref OSP_SolidMaterialsBuffer_fp
            {
                param_named_auto cNearClipDistance near_clip_distance
                param_named_auto cFarClipDistance far_clip_distance
            }
        }
    }
}

//------------------------------------------------

/*fragment_program OSP_PostFilter_fp_hlsl hlsl
{
    source OSP.cg
    entry_point OSP_PostFilter_fp
    target ps_3_0 
}*/

fragment_program OSP_PostFilter_fp_cg cg
{
    source OSP.cg
    entry_point OSP_PostFilter_fp
    profiles ps_3_0 ps_2_x arbfp1
}

fragment_program OSP_PostFilter_fp unified
{
//	delegate OSP_PostFilter_fp_hlsl
	delegate OSP_PostFilter_fp_cg 
}

material OSP_PostFilter
{
    technique
    {
        pass
        {
            cull_hardware none
			cull_software none
			depth_check off
			
			vertex_program_ref Ogre/Compositor/StdQuad_vp {}
            fragment_program_ref OSP_PostFilter_fp {}

            texture_unit
            {
                content_type compositor OSP_SolidMaterialsBuffer occlusion
                tex_address_mode clamp
                filtering none
            }
        }
    }
}
I wonder how to make particle effects be skipped by the compositor chain in OpenGL as well, in a more appropriate way. Surely there is a simple solution for this kind of exclusion?
Separate viewports? The render queue? Some way to give a command inside the compositor like "do not render object if (something...)"? It is not obvious to me yet, but I am sure the solution is easy once you have the right hint.
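One idea I am considering (just a sketch, the flag values are arbitrary choices of mine) is to tag objects with visibility flags and mask the viewport, so the particles are skipped regardless of the render system:

```cpp
// Tag each movable object with a visibility flag.
const Ogre::uint32 SOLID_MASK    = 1 << 0;
const Ogre::uint32 PARTICLE_MASK = 1 << 1;

ogreHead->setVisibilityFlags(SOLID_MASK);
barrel1->setVisibilityFlags(SOLID_MASK);
smoke1->setVisibilityFlags(PARTICLE_MASK);

// While rendering the solid-only target, let the viewport show only
// objects whose flags intersect the mask.
mViewport->setVisibilityMask(SOLID_MASK);
```

The mask would have to be switched back (e.g. from a compositor or render-target listener) before the particle pass.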

Question about downsampling:
I've seen advice to use target_width_scaled <parameter> and target_height_scaled <parameter>
to define the size of the new texture.

Is it better to use several fixed scales, <parameter> = 0.5/0.25/0.125, or is it possible to use a non-constant parameter in this syntax? (If a non-constant is allowed, I'd like to use a variable set from the application level by the setCustomParameter() function.)

PS:
Today I modified the previous compositor according to the advice from
http://www.ogre3d.org/forums/viewtopic.php?f=2&t=38354
proposed by cyanbeck, in order to create a lighter G-buffer.



The depth calculated there is a little unconventional
(I wonder whether this line is needed:
outPos.xy += texelOffsets.zw * outPos.w;
),
however it gets all the needed data in the vertex shader instead of a separate compositor (much easier than it was in deferred shading and in SSAO).

I've written saving depth to the alpha channel based on depth acquired this way.
It can easily be modified back to an MRT.
To check how the application looks right now, comment out the compositors in the *.cpp file and comment out the substitution of the solid objects' material.

Here are materials modified to achieve particle effects, especially fumes:
fog.material

Code: Select all

material Examples/Fog
{
	technique
	{
		pass
		{
			lighting off
			scene_blend alpha_blend
			depth_write off
			diffuse vertexcolour

			texture_unit
			{
				texture fog.png
				tex_address_mode clamp
			}
		}
	}
}
Modified smoke particle effect, exhaust fumes and fog:

Code: Select all

// smoke2
particle_system Examples/Smoke2
{
	material        	Examples/Smoke
	particle_width  	10
	particle_height 	10
	cull_each       	true
	quota           	400
	billboard_type  	point
	sorted				true
    
	// Area emitter
	emitter Point
	{
		position 0 5 0
		angle 35
		emission_rate 20
		time_to_live 3
		direction 0 1 0
		velocity_min 20
		velocity_max 50    	
	}

	affector ColourImage
	{
		image smokecolors2.png
	}

   	affector Rotator
   	{
		rotation_range_start 0
		rotation_range_end 360
		rotation_speed_range_start -60
		rotation_speed_range_end 200
   	}

   	affector Scaler
   	{
       	rate 20
   	}
	
}

	// smoke2
particle_system Examples/Fog
{
	material        	Examples/Fog
	particle_width  	1
	particle_height 	1
	cull_each       	true
	quota           	700
	billboard_type  	point
	sorted				true
    
	// Area emitter
	emitter Point
	{
		position 0 -50 0
		angle 10
		emission_rate 70
		time_to_live 12
		direction 1 0 0
		velocity_min 10
		velocity_max 20    	
	}

	affector ColourImage
	{
		image fogcolors.png
	}

   	affector Rotator
   	{
		rotation_range_start 360
		rotation_range_end 0
		rotation_speed_range_start -60
		rotation_speed_range_end 10
   	}

   	affector Scaler
   	{
       	rate 10
   	}
	
	affector DirectionRandomiser
	{
		randomness	20
	}
	
	
}
To achieve this effect you also need to change the size of the jet (just a very small change):

Code: Select all

// A jet engine (of sorts)
particle_system Examples/JetEngine1
{
	material 		Examples/Flare
	particle_width 	20
	particle_height	20
	cull_each		true
	quota			200
	billboard_type	point
	sorted 			true

	emitter Point
	{
		angle 5
		emission_rate 100
        time_to_live    0.75
        direction       0 -1 0
        velocity_min    175
        velocity_max    250
        colour_range_start  1 1 0.5
        colour_range_end    1 0.8 0.3
		
	}
	affector ColourFader
	{
		red -0.25
		green -1
		blue -1
	}
	
}
You also may need textures:
http://bmm.yoyo.pl/textures.zip
The other shaders / particle systems I modified or wrote are not used at the moment, so I think uploading them would NOT be useful.
The material for choosing the rendering system is not used right now either (it turned out not to be necessary, because OGRE automatically recognises the supported shaders...).
Last edited by Karol Badowski 1989 on Fri Jul 06, 2012 10:20 am, edited 2 times in total.
Google Summer of Code 2012 Student
Topic: "Implementation of Off-Screen Particles"
Project links: Project thread, WIKI page, Code fork for the project
Mentor: Assaf Raman
Karol Badowski 1989
Google Summer of Code Student
Google Summer of Code Student
Posts: 185
Joined: Fri Apr 06, 2012 3:04 pm
x 18

Re: [GSoC 2012] Off-Screen Particles project

Post by Karol Badowski 1989 »

Would it be OK if I upload to a separate folder:
- the Visual Studio project
- the folder with media files
- instructions on how to configure it to work with the compiled SDK
?

The fork of the entire Mercurial code is used for compiling/building the source to create the SDK. What I have right now is not a modification of the core...
Google Summer of Code 2012 Student
Topic: "Implementation of Off-Screen Particles"
Project links: Project thread, WIKI page, Code fork for the project
Mentor: Assaf Raman
User avatar
Assaf Raman
OGRE Team Member
OGRE Team Member
Posts: 3092
Joined: Tue Apr 11, 2006 3:58 pm
Location: TLV, Israel
x 76

Re: [GSoC 2012] Off-Screen Particles project

Post by Assaf Raman »

It is ok, we can always create a different fork.
Watch out for my OGRE related tweets here.
Karol Badowski 1989
Google Summer of Code Student
Google Summer of Code Student
Posts: 185
Joined: Fri Apr 06, 2012 3:04 pm
x 18

Re: [GSoC 2012] Off-Screen Particles project

Post by Karol Badowski 1989 »

I tried to fix this code.
The only way I know is starting with a program that works and adding line after line, fixing bugs.

I realised something:
concatenation of compositors definitely causes multiple render passes.
That is not good enough.
That is why I changed it to one compositor.

The main idea is:
the compositor gets colour and depth in one pass, without changing the objects' materials
(depth calculated either in the vertex program or in the fragment program)
(colour borrowed from the PREVIOUS compositor / the original material)
(preferably combined in one fragment program).


I thought it would be possible to obtain the texture just like it is done in the B&W sample (the texture taken as the output of [the objects' materials / the previous compositor], the compositor just adds depth - either to an MRT or to alpha).

I tried to achieve this all day, because I think it would be correct.
However, it seems that all objects have the same depth, or I have just missed something.

Here is my code, a mix of the B&W example and the first pass of SSAO.
It seems to calculate everything, but somehow I lose the depth information.

Did I forget to make some additional transformation or clearing, or does taking the texture from previous passes (compositor input: previous) just erase some data?

Can anyone see the mistake?
compositor
compositor

Code: Select all



compositor BlackWhite2
{
    technique
    {
        // Temporary textures:
		
	     // Keeps colour texture of solid objects
        texture solidObjectsTexture target_width target_height PF_A8R8G8B8
	    //texture depth target_width target_height PF_FLOAT32_RGBA
	    
        target solidObjectsTexture
        {
            input previous
        }

        target_output
        {
            // Start with clear output
            input none
            // Draw a fullscreen quad
            pass render_quad
            {
                // Renders a fullscreen quad with a material
                material BlackWhite2
                input 0 solidObjectsTexture
            }
        }
    }
}
material

Code: Select all

fragment_program BlackWhite2_Cg_FP cg
{
	source BlackWhite2.cg
	entry_point GrayScale_ps
	profiles ps_3_0 ps_2_x ps_4_0 ps_2_0 arbfp1
}
fragment_program BlackWhite2_FP unified
{
	delegate BlackWhite2_Cg_FP
}
//-------------------------------------------------
vertex_program BlackWhite2_Cg_vp cg
{
	source BlackWhite2.cg
	entry_point StdQuad_vp
    profiles vs_2_x vs_4_0 vs_2_0 vs_1_1 arbvp1

	default_params
	{
		//param_named_auto cWorldViewProj worldviewproj_matrix
        //param_named_auto cWorldView worldview_matrix
	}
}
vertex_program BlackWhite2_vp unified
{
	delegate BlackWhite2_Cg_vp
}
//-----------------------------------------------------
material BlackWhite2
{
	technique
	{

		pass
		{
			//depth_check off

			
			vertex_program_ref BlackWhite2_vp
			{
                param_named_auto cWorldViewProj worldviewproj_matrix
                param_named_auto cWorldView worldview_matrix
			}

			fragment_program_ref BlackWhite2_FP
			{
                //param_named_auto depthRange scene_depth_range
                param_named_auto near near_clip_distance
                param_named_auto far far_clip_distance
			}

			texture_unit
			{
				texture RT
				tex_coord_set 0
				//tex_coord_set 1
				tex_address_mode clamp
				filtering linear linear linear
			}
		}
	}
}
shaders:

Code: Select all

void StdQuad_vp
(
    in float4 inputPosition : POSITION,

    out float4 outputPosition : POSITION,
    out float3 outputViewPosition : TEXCOORD0,
    out float2 uv0 : TEXCOORD1,
	
    uniform float4x4 cWorldViewProj,
    uniform float4x4 cWorldView
)
{
    // Use standardise transform, so work accord with render system specific (RS depth, requires texture flipping, etc)
    outputPosition = mul(cWorldViewProj, inputPosition);
	outputViewPosition = mul(cWorldView, inputPosition).xyz; 

    // The input positions adjusted by texel offsets, so clean up inaccuracies
    //inputPosition.xy = sign(inputPosition.xy);

    // Convert to image-space
    uv0 = (float2(inputPosition.x, -inputPosition.y) + 1.0f) * 0.5f;
    /*outputDepth = uv0;*/	
}

sampler2D RT : register(s0);

float4 GrayScale_ps(
	float4 pos : POSITION,
    float3 inputViewPosition : TEXCOORD0,
	float2 iTexCoord : TEXCOORD1,
	
	uniform float near,
    uniform float far
) : COLOR
{
    float3 color = tex2D(RT, iTexCoord).rgb;

	float xPosition = inputViewPosition.x;
	float yPosition = inputViewPosition.y;
	float zPosition = length(inputViewPosition);
	float depth = (zPosition-near)/(far-near);
	float4 outputColorDepth;
	return float4(depth*255, color.gb, 1.0);
}
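The linearisation used in GrayScale_ps above can be checked in isolation (standalone C++, nothing OGRE-specific); it behaves as expected, which makes me suspect that what reaches the shader is wrong (the fullscreen quad's own view position rather than the scene depth), not the formula itself:

```cpp
#include <cassert>
#include <cmath>

// Same mapping as in the shader: depth = (z - near) / (far - near),
// which takes z in [near, far] onto [0, 1].
float linearDepth(float z, float nearClip, float farClip)
{
    return (z - nearClip) / (farClip - nearClip);
}
```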

Well, until then I just have to acquire colour and depth in a separate pass and do the downsampling post-effect (it is already evening and I planned to finish it by now), and leave this for later (until I know the solution for the problem above - I hope I am making some silly mistake in the code).

I believe I have found a solution for swapping materials at runtime, still with listeners.
Also a material that scales depth from the closest position to the furthest visible position. That material works too, however not as an element of the compositor chain - it has to be assigned to a certain object, otherwise it is unable to obtain the depth (which is why I thought there may be something in the compositor chain preventing the vertex program from executing - maybe there is a quad that just has the same depth everywhere?).

I mean the material created by cyanbeck: http://www.ogre3d.org/forums/viewtopic.php?f=2&t=38354
I like it because it is not based on far_clip_distance or near_clip_distance - that is why it is more universal.

In the same topic, on the second page, Spookey does runtime swapping of materials - I believe in a slightly easier way than in the DeferredShading sample.

Well, this is the topic I have been analysing for the last few days.

PS:
All my debugging is based on the log file and on setting output values to the r, g, b, a channels. It would be wonderful if I could just set some watches and check what OGRE does step by step in the vertex and pixel programs - what data is in each variable. I've heard there is some plugin, but the link I've seen has expired.
With such a tool things would go 100x faster.

I have to take a 4-hour break and I am coming back to the downsampling after 1:00 am Warsaw time tonight. That is a much easier shader. Around 3:00 I should commit a revision to the repository.

After the downsampling is done, I'm going to do the runtime swap of render targets with listeners - that will be a modification of the *.cpp file.

There are a lot of suggestions on the forums to render to separate viewports and then retrieve textures from them. I'd prefer the idea of doing everything in one viewport.
Google Summer of Code 2012 Student
Topic: "Implementation of Off-Screen Particles"
Project links: Project thread, WIKI page, Code fork for the project
Mentor: Assaf Raman
Karol Badowski 1989
Google Summer of Code Student
Google Summer of Code Student
Posts: 185
Joined: Fri Apr 06, 2012 3:04 pm
x 18

Re: [GSoC 2012] Off-Screen Particles project

Post by Karol Badowski 1989 »

During the last 40 minutes, I've selected the files modified/updated and created during implementation and tests.
They can be used to overwrite and/or merge with the SDK. There is also a modified configuration file that includes the new source folders.
I have found my backups of the VS2010 project from the beginning - I'm uploading them too.

All files have been committed to the repository (approximately 1:45 am).

The output *.exe file should be run from the YOUR_SDK\bin\debug folder
Google Summer of Code 2012 Student
Topic: "Implementation of Off-Screen Particles"
Project links: Project thread, WIKI page, Code fork for the project
Mentor: Assaf Raman
Karol Badowski 1989
Google Summer of Code Student
Google Summer of Code Student
Posts: 185
Joined: Fri Apr 06, 2012 3:04 pm
x 18

Re: [GSoC 2012] Off-Screen Particles project

Post by Karol Badowski 1989 »

When I woke up, I noticed that no changes were visible on Bitbucket.

It appears that the previous commit happened only in the local repository.
It turned out there was a problem with the SPACE in my user name during authorisation when I used the http protocol.
However, it went fine this time with the https protocol and a username alias.

The repository update should be visible now (11:06).
Google Summer of Code 2012 Student
Topic: "Implementation of Off-Screen Particles"
Project links: Project thread, WIKI page, Code fork for the project
Mentor: Assaf Raman
User avatar
Assaf Raman
OGRE Team Member
OGRE Team Member
Posts: 3092
Joined: Tue Apr 11, 2006 3:58 pm
Location: TLV, Israel
x 76

Re: [GSoC 2012] Off-Screen Particles project

Post by Assaf Raman »

You wrote so much here - and I want to put things in focus.
Let's go back to the NVIDIA article: http://http.developer.nvidia.com/GPUGem ... _ch23.html
The basic algorithm is this:
1. Render all solid objects in the scene as normal, with depth-writes.
2. Downsample the resulting depth buffer.
3. Render the particles to the off-screen render target, testing against the small depth buffer.
4. Composite the particle render target back onto the main frame buffer, upsampling.
So - as I see it:
1. Just RTT the scene without the particle systems into a texture the size of the window.
2. Render the scene once more, to a smaller texture - but no color write - just depth write.
3. Render the particle system to the smaller sized texture (same one as in stage 2) - the depth test will work.
4. Use the resulting textures from 1 and 2+3 - with a compositor to reach the desired effect.

The only downside is that we need to render the scene twice (in stages 1 and 2) - but this can be improved later on. BUT - we should still be able to see a performance gain here, because the fill rate is the bottleneck, not the rendering of the scene.
I think with this approach you will be able to get around the "getting depth buffer" issue you have been having and focus on the rest of the project.
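For stage 2, something along these lines should do (a sketch; the texture name and the quarter-size choice are only examples):

```cpp
// Manually create a render texture at a fraction of the window size; the
// scene is rendered into it once more and only the depth matters there.
Ogre::TexturePtr smallRT = Ogre::TextureManager::getSingleton().createManual(
    "SmallDepthRT", Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME,
    Ogre::TEX_TYPE_2D,
    mWindow->getWidth() / 4, mWindow->getHeight() / 4,  // e.g. quarter size
    0, Ogre::PF_FLOAT32_R, Ogre::TU_RENDERTARGET);

// Attach a viewport with the same camera, so the small target sees the
// same scene; overlays are not wanted in an off-screen pass.
Ogre::RenderTarget *rt = smallRT->getBuffer()->getRenderTarget();
Ogre::Viewport *vp = rt->addViewport(mCamera);
vp->setOverlaysEnabled(false);
vp->setClearEveryFrame(true);
```

Stage 3 then renders the particles into the same small target, where the depth test against the solid scene happens for free.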

Regarding stage 1 and the issue of rendering the scene without rendering the particle systems - use MaskRendering http://www.ogre3d.org/forums/viewtopic.php?f=4&t=24145

Can you work out such a sample? We are not talking about core changes to OGRE - but only a simple sample that demonstrates the article.

The sample will have a checkbox to control whether the scene is rendered the normal way or using off-screen particles, and a combo box to select the size of the off-screen texture.
Watch out for my OGRE related tweets here.
User avatar
Assaf Raman
OGRE Team Member
OGRE Team Member
Posts: 3092
Joined: Tue Apr 11, 2006 3:58 pm
Location: TLV, Israel
x 76

Re: [GSoC 2012] Off-Screen Particles project

Post by Assaf Raman »

If you find stage 2 hard - just draw the scene there a second time with colour write and everything as normal, then clear the colour but not the depth before stage 3. You will have the back buffer ready with the clear colour and the depth of the original scene.
Watch out for my OGRE related tweets here.
Karol Badowski 1989
Google Summer of Code Student
Google Summer of Code Student
Posts: 185
Joined: Fri Apr 06, 2012 3:04 pm
x 18

Re: [GSoC 2012] Off-Screen Particles project

Post by Karol Badowski 1989 »

I just found out where the strange difference between OpenGL and DirectX came from.

Right now I am saving depth to a texture that has only 32 bits per pixel (a single float).
When I swapped it from the second to the first channel, DirectX showed the same effect as OpenGL.

That means: in DirectX the particles were invisible because they did not have a multiple render target. OpenGL did not see that as anything unusual, and when there were no multiple render targets it simply referenced the first one.

The strange artifact was just the alpha channel of the particle texture.
I found this out because now in DirectX their r channel is taken (there is one texture, and the texture is referenced by its index in the shader "OSP_ShowDepth_fp").

It seems that one mystery is solved now.
Google Summer of Code 2012 Student
Topic: "Implementation of Off-Screen Particles"
Project links: Project thread, WIKI page, Code fork for the project
Mentor: Assaf Raman
User avatar
Assaf Raman
OGRE Team Member
OGRE Team Member
Posts: 3092
Joined: Tue Apr 11, 2006 3:58 pm
Location: TLV, Israel
x 76

Re: [GSoC 2012] Off-Screen Particles project

Post by Assaf Raman »

What about what I wrote?
Watch out for my OGRE related tweets here.
User avatar
Assaf Raman
OGRE Team Member
OGRE Team Member
Posts: 3092
Joined: Tue Apr 11, 2006 3:58 pm
Location: TLV, Israel
x 76

Re: [GSoC 2012] Off-Screen Particles project

Post by Assaf Raman »

Can you reply to what I wrote?
BTW: We talked about a daily post - at the beginning of the day, what is planned for the day, and a summary at the end of the day. So what is your plan for today?
Watch out for my OGRE related tweets here.
Post Reply