[GSoC 2012] Off-Screen Particles project - continuation
-
- Google Summer of Code Student
- Posts: 185
- Joined: Fri Apr 06, 2012 3:04 pm
- x 18
Re: [GSoC 2012] Off-Screen Particles project
There is downsampling too. For now we see colour, but I'll access the colour_write field of the pass and set it OFF in order to discard it.
Screenshot5 "downsampling just before erasing colour_write"
Next step:
Rendering particles to the same texture with depth_check set ON (but before that I have to go to the post office).
Google Summer of Code 2012 Student
Topic: "Implementation of Off-Screen Particles"
Project links: Project thread, WIKI page, Code fork for the project
Mentor: Assaf Raman
-
- OGRE Team Member
- Posts: 3092
- Joined: Tue Apr 11, 2006 3:58 pm
- Location: TLV, Israel
- x 76
Re: [GSoC 2012] Off-Screen Particles project
Ok, that sounds better.
Watch out for my OGRE related tweets here.
-
- Google Summer of Code Student
- Posts: 185
- Joined: Fri Apr 06, 2012 3:04 pm
- x 18
Re: [GSoC 2012] Off-Screen Particles project
I'm back from the post office.
Now I'm trying to access the pass from the level of the render target listener (to set the mentioned depth_write and colour_write flags).
EDIT: it is accessible this way: renderMaterial2->getTechnique(0)->getPass(0)->setColourWriteEnabled(false);
I am also trying to write a second scenario of what happens inside preRenderTargetUpdate(const Ogre::RenderTargetEvent& evt) and postRenderTargetUpdate(const Ogre::RenderTargetEvent& evt), based on which render target is calling it.
I guess I can check it from evt, because:
Code: Select all
struct RenderTargetEvent
{
    /// The source of the event being raised
    RenderTarget* source;
};
so I should be able to read the name of the render target, right?
Last edited by Karol Badowski 1989 on Mon Jul 09, 2012 2:22 pm, edited 1 time in total.
Google Summer of Code 2012 Student
Topic: "Implementation of Off-Screen Particles"
Project links: Project thread, WIKI page, Code fork for the project
Mentor: Assaf Raman
-
- OGRE Team Member
- Posts: 3092
- Joined: Tue Apr 11, 2006 3:58 pm
- Location: TLV, Israel
- x 76
Re: [GSoC 2012] Off-Screen Particles project
I wrote it before, but I guess you missed it, so, once more:
You don't need to touch the depth_write colour_write flags.
Just render the scene as normal to texture - without the particle systems.
Then clear the back buffer - clear only the color - not the depth.
Then render the scene as normal to the same texture - only the particle systems.
This should give you what you need.
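Those three steps could be sketched as an Ogre compositor technique roughly like this. This is only a sketch: the texture name `sceneRt` is invented, and selecting "everything except particles" vs "only particles" per pass still has to be done with visibility masks or render queues set from a listener:

```
compositor OffscreenParticlesSketch
{
    technique
    {
        texture sceneRt target_width target_height PF_R8G8B8A8

        target sceneRt
        {
            input none
            // 1) render everything except the particle systems
            pass render_scene { }
            // 2) clear only the colour buffer, keeping the depth buffer
            pass clear { buffers colour }
            // 3) render only the particle systems, depth-tested against
            //    the depth left over from step 1
            pass render_scene { }
        }
    }
}
```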
Watch out for my OGRE related tweets here.
-
- Google Summer of Code Student
- Posts: 185
- Joined: Fri Apr 06, 2012 3:04 pm
- x 18
Re: [GSoC 2012] Off-Screen Particles project
I have the two previous viewports at the same time - the screenshot was taken about 5.5 hours ago.

^Screenshot6 "two viewports - one downsampled"
There was a slowdown swapping from the render target listener to a compositor listener.
However, a few minutes ago I finally swapped from the RenderTargetListener and/or compositorListeners + renderQueues idea to VisibilityFlags, thanks to Assaf Raman's advice.
VisibilityMasks on viewports seem to work just fine for me for filtering sets of objects.
Screenshot7 "all solid objects mask on viewport"
Screenshot8 "test... yes it works, finally"
Thanks Assaf Raman for the advice! That is a MUCH simpler solution.
Now I can finally move to the depth-testing part.
Google Summer of Code 2012 Student
Topic: "Implementation of Off-Screen Particles"
Project links: Project thread, WIKI page, Code fork for the project
Mentor: Assaf Raman
-
- OGRE Team Member
- Posts: 3092
- Joined: Tue Apr 11, 2006 3:58 pm
- Location: TLV, Israel
- x 76
Re: [GSoC 2012] Off-Screen Particles project
Good, this is the progress I want to see.
Watch out for my OGRE related tweets here.
-
- OGRE Team Member
- Posts: 4304
- Joined: Mon Feb 04, 2008 2:02 pm
- Location: Germany
- x 136
Re: [GSoC 2012] Off-Screen Particles project
Assaf Raman wrote: Good, this is the progress I want to see.
+1 Good to see that you are finally on track.

Ogre Admin [Admin, Dev, PR, Finance, Wiki, etc.] | BasicOgreFramework | AdvancedOgreFramework
Don't know what to do in your spare time? Help the Ogre wiki grow! Or squash a bug...
-
- Google Summer of Code Student
- Posts: 185
- Joined: Fri Apr 06, 2012 3:04 pm
- x 18
Re: [GSoC 2012] Off-Screen Particles project
I haven't uploaded some older solutions from the *.cpp files that have been changed, but I just commented most of them out - so they should not be lost.
There will be a repository revision of the VS project files tonight.
Masks seem to combine well with the previous solution (here listeners change the visibility of particles, but they leave the penguin alone with his mask on the changed viewport).
Screenshot9 "mask + listener"
Listeners will be left for visibility update of mini-viewport debug screens only.
Screenshot10 "listeners left only for autoswap of visibility for mini viewports"
Screenshot11 "no mask"
Screenshot12 "mask 3 (binary 11) = 2|1"

^mask 1 on full resolution miniView and mask 2 on downsampled miniView.
This is how the particle effect looks before depth testing, but already downsampled. Like above, but a closer view.

After the depth test, the ogre head should hide part of the fume particles.
The picture is 0.25 x 0.25 of the entire window, but is downsampled 0.125 x 0.125. It seems that there is anti-aliasing (it was also noticeable on the downsampled solid objects - much sharper, but still anti-aliased). I suspect it could slightly reduce the halo effect. I will try one of these then: full-screen display, or setting anti-aliasing/blur off.
Last edited by Karol Badowski 1989 on Tue Jul 10, 2012 1:09 pm, edited 1 time in total.
Google Summer of Code 2012 Student
Topic: "Implementation of Off-Screen Particles"
Project links: Project thread, WIKI page, Code fork for the project
Mentor: Assaf Raman
-
- OGRE Team Member
- Posts: 3092
- Joined: Tue Apr 11, 2006 3:58 pm
- Location: TLV, Israel
- x 76
Re: [GSoC 2012] Off-Screen Particles project
So just to be clear - do you use a compositor here to merge the two small images into the image in the big window?
Watch out for my OGRE related tweets here.
-
- Google Summer of Code Student
- Posts: 185
- Joined: Fri Apr 06, 2012 3:04 pm
- x 18
Re: [GSoC 2012] Off-Screen Particles project
Not yet, I am still working on the depth test from the level of code.
I tried doing two passes for a test (without clearing the colour) - just swapping flags and executing the update() command.
It seems that it did not work - the data was overwritten, or maybe it was only set the first time, by the mask that was defined last.
I got really sleepy just now; for the last hour and a half I have been thinking slowly... I am writing this message very slowly and falling asleep.
I am trying to do the downsampling in the frameRenderingQueued loop section. Perhaps I should do that just with a compositor/material file and not from code.
First I have to render the scene to a texture.
I know how to render instantly to a texture file, but I am not sure if rendering to another texture is the "update" command.
Thinking logically, I should:
1) create the next pass for the material.
2) find how to set this command for the second pass - set the current texture as input and output:
Code: Select all
pass clear
{
    buffers colour
}
Code: Select all
texture_unit
{
    texture RttTex2
}
Code: Select all
texture_ref RttTex2(...)
target RttTex2
{
}
That means I will have to select the material I've been writing the texture to, and check how I can execute the rendering from the level of code.
I thought it was the update() function, but it seems that my debug texture is still in motion even when "update" is disabled.
The rendering there is done anyway, perhaps after the frameRenderingQueued() section...
I'd like to execute a pass to a texture instantly, so I could swap values on the viewport mask and render once again to the same texture.
Should I do these operations in that section, in a different one, or just attach a compositor?
I think I'll need 3 passes, though I've been testing with 2 passes right now.
I'll execute the change of flags between these passes.
I took 40 minutes to write this message, which means I'm very sleepy - sorry if it makes no sense.
I made a revision to the repository, and I'll continue in the morning with a fresh mind.
Last edited by Karol Badowski 1989 on Mon Jul 09, 2012 11:14 pm, edited 4 times in total.
Google Summer of Code 2012 Student
Topic: "Implementation of Off-Screen Particles"
Project links: Project thread, WIKI page, Code fork for the project
Mentor: Assaf Raman
-
- OGRE Team Member
- Posts: 3092
- Joined: Tue Apr 11, 2006 3:58 pm
- Location: TLV, Israel
- x 76
Re: [GSoC 2012] Off-Screen Particles project
Go to sleep. 

Watch out for my OGRE related tweets here.
-
- Minaton
- Posts: 921
- Joined: Sat Jul 31, 2010 6:29 pm
- Location: Belgium
- x 80
Re: [GSoC 2012] Off-Screen Particles project
Nice work, now you're really making progress!
Keep this up and I think you'll do fine for the mid-term

Developer @ MakeHuman.org
-
- Google Summer of Code Student
- Posts: 185
- Joined: Fri Apr 06, 2012 3:04 pm
- x 18
Re: [GSoC 2012] Off-Screen Particles project
I'm back.
I am still trying to set up rendering to one texture in such a way that the depth backbuffer is kept between calls to texture->update().
Right now it is being overwritten.
Here is my question: how, from the level of the code in a *.cpp file, can I execute only one pass of a rendering technique?
The change of visibility has to happen not between full renderings (because the backbuffer is lost), but between passes of one technique.
Would that mean I have to come back to using listeners?
If that is the only way, I'd like to achieve something like the following (advice on whether this is the right way would be helpful). My interpretation of the tutorials is that "first_render_queue" and "last_render_queue" give some ID number of a listener that would execute some code (the change of flags in this case). It is not clear to me whether that interpretation is right.
Code: Select all
compositor Downsample_DepthTest_Render
{
    technique
    {
        texture smallRenderTexture global_scope // can be accessed from the application, for example to set attributes like "target_width_scaled 0.128 target_height_scaled 0.128 PF_A8R8G8B8"

        target smallRenderTexture
        {
            input previous
            // before this pass, set the viewport mask to solid objects
            pass render_scene // scene is rendered based on the "previous" material
            {
                first_render_queue 1
            }
            pass clear // colours cleared, only depth remains in the backbuffer
            {
                buffers colour
            }
            // before this pass, set the viewport mask to transparent objects
            pass render_scene
            {
                first_render_queue 2
            }
        }
    }
}
Google Summer of Code 2012 Student
Topic: "Implementation of Off-Screen Particles"
Project links: Project thread, WIKI page, Code fork for the project
Mentor: Assaf Raman
-
- Google Summer of Code Student
- Posts: 185
- Joined: Fri Apr 06, 2012 3:04 pm
- x 18
Re: [GSoC 2012] Off-Screen Particles project
I've noticed a change of behaviour when I set the following. Now I'll clear and update the render targets manually, so I'll control what is cleared between updates. I hope that will work - it will turn out soon.
Code: Select all
renderTexture->getViewport(0)->setClearEveryFrame(false);
renderTexture->setAutoUpdated(false);
Google Summer of Code 2012 Student
Topic: "Implementation of Off-Screen Particles"
Project links: Project thread, WIKI page, Code fork for the project
Mentor: Assaf Raman
-
- Google Summer of Code Student
- Posts: 185
- Joined: Fri Apr 06, 2012 3:04 pm
- x 18
Re: [GSoC 2012] Off-Screen Particles project
Everything seems to be working.
Even the anti-aliasing in the small display went off, so that is what I needed to show the desired effect.

^Screenshot15 "autoclear and autoupdate set off"
THE NEXT PICTURES ARE WIDE, so it is better to open them in a new window:
Screenshot16 "rendering of two images to one texture with no clearing and no auto-update"
Screenshot17 "no unneeded anti-aliasing + two renderings to one texture with use of the backbuffer"
Now I am setting the 3U buffer value to do selective clearing between updates.
Google Summer of Code 2012 Student
Topic: "Implementation of Off-Screen Particles"
Project links: Project thread, WIKI page, Code fork for the project
Mentor: Assaf Raman
-
- Google Summer of Code Student
- Posts: 185
- Joined: Fri Apr 06, 2012 3:04 pm
- x 18
Re: [GSoC 2012] Off-Screen Particles project
Here is a sneak peek of the effect: saving a full-resolution texture of the solid objects, and another texture displaying those particle effects that are NOT hidden (tested against a DOWNSAMPLED depth of the solid objects, obtained from the back buffer).

^Screenshot18 "Render of particles over downsampled depth map of solid objects"

^Screenshot19 "Another screenshot of previous effect, from different angle and moment of animation"
There is only one step remaining - combining these render targets with a compositor applied to a third miniscreen.
Google Summer of Code 2012 Student
Topic: "Implementation of Off-Screen Particles"
Project links: Project thread, WIKI page, Code fork for the project
Mentor: Assaf Raman
-
- Google Summer of Code Student
- Posts: 185
- Joined: Fri Apr 06, 2012 3:04 pm
- x 18
Re: [GSoC 2012] Off-Screen Particles project
I did a little tidying up in the code and now I am committing a revision to the repository.
Today at 19:25 I have a train to my hometown - I'll continue the work when I arrive.
About joining the pictures in the compositor:
I'll do it with operations similar to those I used for implementing my previous compositors.
Here, the difference will be setting the input textures from the level of code.
Also, when applying the second texture, it is important to know whether the transparency level is saved or not (what we see here is black in the background). There should be no big problem in recognising it. I'll save the output texture to a *.PNG file for tests, to be 100% sure everything is ok and transparent where it should be.
That is why, before I apply my compositor, I'll try to use another compositor first, to confirm that I am setting the input texture, setting the compositor and everything else the right way, in order.
Google Summer of Code 2012 Student
Topic: "Implementation of Off-Screen Particles"
Project links: Project thread, WIKI page, Code fork for the project
Mentor: Assaf Raman
-
- Google Summer of Code Student
- Posts: 185
- Joined: Fri Apr 06, 2012 3:04 pm
- x 18
Re: [GSoC 2012] Off-Screen Particles project
The third output can have autoupdate and autoclear set on.
Like for the other mini viewports, the activity will be triggered in the render target listener. The third render target will be recognised this way in the listener:
(I also show the triggering of the compositor here, so only the concatenation acts the "different way".)
Update: Oh wait, I can already set which viewport I should apply the compositor to.
However, I'll trigger it on and off anyway when I change the project to display everything on the main viewport with manual clearing and updating.
Adding the compositor in the createScene(void) section:
Setting it on/off just a moment before and a moment after the update at the end of the frameRenderingQueued(const Ogre::FrameEvent& evt) section:
Code: Select all
void OffScreenParticles::preRenderTargetUpdate(const Ogre::RenderTargetEvent& evt)
{
    Ogre::String targetName = evt.source->getName();
    if (targetName.find("Concatenation") != Ogre::String::npos)
    {
        // HERE SETTING COMPOSITOR ON
        // HERE UPDATING INPUT TEXTURES FOR COMPOSITOR (textures from renderTarget1 and renderTarget2)
    }
    mMiniScreen->setVisible(false);
    mMiniScreen2->setVisible(false);
    mMiniScreen3->setVisible(false);
}

void OffScreenParticles::postRenderTargetUpdate(const Ogre::RenderTargetEvent& evt)
{
    Ogre::String targetName = evt.source->getName();
    if (targetName.find("Concatenation") != Ogre::String::npos)
    {
        // HERE SETTING COMPOSITOR OFF
    }
    mMiniScreen->setVisible(true);
    mMiniScreen2->setVisible(true);
    mMiniScreen3->setVisible(true);
}

Code: Select all
Ogre::CompositorManager::getSingleton().addCompositor(mViewport, "Concatenation");//"SSAO/Post/CrossBilateralFilter");//"DOF");//"DeferredShading/ShowDepthSpecular");//"Sharpen Edges");//"B&W");
Code: Select all
Ogre::CompositorManager::getSingleton().setCompositorEnabled(mViewport, "Concatenation", true);//"SSAO/Post/CrossBilateralFilter", true);//"DOF", true);//"DeferredShading/ShowDepthSpecular", true);//"Sharpen Edges", true);//"B&W", true)
Google Summer of Code 2012 Student
Topic: "Implementation of Off-Screen Particles"
Project links: Project thread, WIKI page, Code fork for the project
Mentor: Assaf Raman
-
- OGRE Team Member
- Posts: 3092
- Joined: Tue Apr 11, 2006 3:58 pm
- Location: TLV, Israel
- x 76
Re: [GSoC 2012] Off-Screen Particles project
Have a look here: [SOLVED] Q RTT: How to set transparent color?
Watch out for my OGRE related tweets here.
-
- Google Summer of Code Student
- Posts: 185
- Joined: Fri Apr 06, 2012 3:04 pm
- x 18
Re: [GSoC 2012] Off-Screen Particles project
I missed the train because I was looking for something I lost somewhere (I do not know where, because that is the definition of losing something), so I am back earlier.
Now I'll test the transparency issue.
Google Summer of Code 2012 Student
Topic: "Implementation of Off-Screen Particles"
Project links: Project thread, WIKI page, Code fork for the project
Mentor: Assaf Raman
-
- Google Summer of Code Student
- Posts: 185
- Joined: Fri Apr 06, 2012 3:04 pm
- x 18
Re: [GSoC 2012] Off-Screen Particles project
The method proposed in that topic works fine only at first glance.
It sets alpha based on the output colour's brightness. It applies:
Code: Select all
renderMaterial2->setSceneBlending(Ogre::SceneBlendType::SBT_TRANSPARENT_COLOUR);
That gives too much transparency to darker particle effects (even if they are less transparent);
black particles would not be visible at all.
According to the specification of that type (SBT_TRANSPARENT_COLOUR) and the tests that I ran, none of the SceneBlendTypes is perfect.
SBT_TRANSPARENT_ALPHA does not accumulate alpha - it takes the alpha of the last rendered particle.
If many particles (each one very transparent) are in the same xy position, their sum should be less transparent. Also, the background colour is inherited.
Close, but still not perfect, seems to be Ogre::SceneBlendType::SBT_ADD, so I use that one for the proof of concept.
Google Summer of Code 2012 Student
Topic: "Implementation of Off-Screen Particles"
Project links: Project thread, WIKI page, Code fork for the project
Mentor: Assaf Raman
-
- Google Summer of Code Student
- Posts: 185
- Joined: Fri Apr 06, 2012 3:04 pm
- x 18
Re: [GSoC 2012] Off-Screen Particles project
This is my compositor for joining the images at this point.
Is the syntax all ok?
It is supposed to use two textures that are registered under the names "RttTex" and "RttTex2" from the level of code, and render the output to whichever render target is set as output in the application.
What I wanted to achieve is a compositor that does not try to calculate any geometry: it just takes two textures and outputs them to the render target. Is that ok?
As soon as I find out whether there is anything to change, I'll attempt to apply it to the application.
OSP.compositor
Code: Select all
(...)
compositor OSP_JoinSolidAndTransparent
{
    compositor_logic JoinSolidAndTransparent // taken from http://www.ogre3d.org/docs/manual/manual_30.html#SEC154 - I want to be able to use listeners

    technique
    {
        texture RttTex target_width target_height PF_R8G8B8A8 global_scope
        texture RttTex2 target_width_scaled 0.125 target_height_scaled 0.125 PF_R8G8B8A8 global_scope

        target_output
        {
            input none
            pass render_quad // render_quad (not render_scene) is the pass type that takes a material and input textures
            {
                material OSP_JoinSolidAndTransparent
                input 0 RttTex
                input 1 RttTex2
            }
        }
    }
}
(...)
OSP.material
Code: Select all
(...)
material OSP_JoinSolidAndTransparent
{
technique
{
pass
{
vertex_program_ref Ogre/Compositor/StdQuad_vp{} //used to achieve texture coordinates in TEXCOORD0 passed to fragment program
fragment_program_ref OSP_JoinSolidAndTransparent_fp {} //used to execute the concatenation of textures
texture_unit //will be referenced from shader as uniform sampler2D RttTex register(s0)
{
texture RttTex; //if I understand tutorials right - this sets as texture unit, a texture that has unique name RttTex and is accessed from application (intermediate tutorial 7)
}
texture_unit //will be referenced from shader as uniform sampler2D RttTex2 register (s1)
{
texture RttTex2; //(like above
}
}
}
}
(...)
However, according to intermediate tutorial 7, the texture's unique ID should be passed in the texture units of the material. Maybe just using a material instead of a compositor is good enough then?
OSP.program
Code: Select all
(...)
fragment_program OSP_JoinSolidAndTransparent_fp_cg cg
{
    source OSP.cg
    entry_point OSP_JoinSolidAndTransparent_fp
    profiles ps_3_0 ps_2_x arbfp1
}

fragment_program OSP_JoinSolidAndTransparent_fp unified
{
    delegate OSP_JoinSolidAndTransparent_fp_cg
}
(...)
And this is the shader, OSP.cg:
Code: Select all
void OSP_JoinSolidAndTransparent_fp(
    in float2 textureCoordinate : TEXCOORD0,
    out float4 output : COLOR0,
    uniform sampler2D RttTex : register(s0),  // full-size solid objects texture
    uniform sampler2D RttTex2 : register(s1)  // small particle effects texture
)
{
    float4 BaseLayerColour = tex2D(RttTex, textureCoordinate);
    float4 TopLayerColour = tex2D(RttTex2, textureCoordinate);
    // alpha-blend join
    output = float4(
        BaseLayerColour.rgb * (1.0 - TopLayerColour.a) +
        TopLayerColour.rgb * TopLayerColour.a,  // red, green and blue channels
        1.0                                     // alpha channel
    );
}
Last edited by Karol Badowski 1989 on Wed Jul 11, 2012 4:39 pm, edited 1 time in total.
Google Summer of Code 2012 Student
Topic: "Implementation of Off-Screen Particles"
Project links: Project thread, WIKI page, Code fork for the project
Mentor: Assaf Raman
-
- OGRE Team Member
- Posts: 3092
- Joined: Tue Apr 11, 2006 3:58 pm
- Location: TLV, Israel
- x 76
Re: [GSoC 2012] Off-Screen Particles project
Hard to tell if there are issues just by reading this.
Try to work it out.
Watch out for my OGRE related tweets here.
-
- Google Summer of Code Student
- Posts: 185
- Joined: Fri Apr 06, 2012 3:04 pm
- x 18
Re: [GSoC 2012] Off-Screen Particles project
Will do.
MY PLAN:
The best way for me is to start with a working example application that takes any texture, sets it as one of the source textures (textures defined/referenced in the compositor), and has the compositor rewrite it to the output without modification (just that, nothing else).
The next step is to keep transforming it into my structure, checking after each change that it still works (that it still gives the desired effect).
The difference between the simplest working example I am seeking and the compositors used in the "compositor example" is that it takes "input none" instead of "input previous". It is closer to post-effect compositors.
However, there is also a slight difference between this one and the post effects from the "SSAO example": the textures are set not by previous compositors in the chain, but from the level of the application.
I'll figure it out.
---------
PS:
Today from 5:00 am to 3:00 pm I was on a train and in some offices. I did some tests in every moment I found in between. At the moment I am examining post-effect shader examples and preparing a third output for tests.
While doing tests, I had an issue with the example compositor applied to the application - switched on and off just before and after one of the render target updates (it had disabled the listeners that were set for the viewport) - I'll find out how to use the "compositor_logic" attribute to fix that issue.
Google Summer of Code 2012 Student
Topic: "Implementation of Off-Screen Particles"
Project links: Project thread, WIKI page, Code fork for the project
Mentor: Assaf Raman
-
- Google Summer of Code Student
- Posts: 185
- Joined: Fri Apr 06, 2012 3:04 pm
- x 18
Re: [GSoC 2012] Off-Screen Particles project
Plan for today is to continue yesterday's task from the previous post. Afterwards I am going to come back to creating the render target for particles, but with my own compositor.
Assaf Raman wrote: Have a look here: [SOLVED] Q RTT: How to set transparent color?
I am sure that the solution from that topic would be 100% correct only if the particle colour was always white.
I still thank you for that link.
I found it very useful, especially for tests, and it got me thinking about a universal solution.
My solution does universal alpha blending without the background (but saves and passes a parameter of accumulated alpha), so that it can be used with any background and any possible colour and alpha of particles (even black particles).
As I see it, in the current solution at the level of the application, what goes wrong is:
Code:
For every texture coordinate: {
    initial colour RGBA is [0, 0, 0, 1];
    For each rendered semi-transparent object (with colour delta.rgba), from the most distant to the closest, the previous colour is taken and replaced with a very simple alpha-blending output: {
        if (depth test passed) {
            new.a = 1;
            new.rgb = old.rgb*(1-delta.a) + delta.rgb*delta.a;
        }
        old = new;
    }
}
Then there is a trick that tries to guess the output alpha.
From the level of my own shader, it could be done more properly, with initial colour RGBA [whatever, whatever, whatever, 0].
Also, the new alpha of the accumulated layer of particles should be universal, so after concatenation:
The desired weight of solidObjectsColour in finalColour should be:
(1-delta[1].a)*(1-delta[2].a)*...*(1-delta[N].a)
[we can deduce it from the equation for the final colour in the NVIDIA article http://http.developer.nvidia.com/GPUGem ... _ch23.html]
(In the article's notation, for three layers: p3 = d*(1-a1)*(1-a2)*(1-a3) + s1*a1*(1-a2)*(1-a3) + s2*a2*(1-a3) + s3*a3.) It is obvious that this weight is equal to (1 - accumulatedParticles[N].a).
That means the accumulatedParticles[N].a we are looking for (achieved after accumulating N particles that passed the depth test) equals:
1 - (1-delta[1].a)*(1-delta[2].a)...(1-delta[N].a)
Let's do something similar to mathematical induction to check how to calculate the alpha iteratively (I could pass the multiplication to another output target, but I think it can be done in situ, modifying accumulatedParticles[N].a from the previous iteration's accumulatedParticles[N-1].a). My hypothesis is that it is calculated just like the colours. (EDIT: it was a close estimation; read on for the correct answer.)
For N-1 iterations, we have: accumulatedParticles[N-1].a = 1 - (1-delta[1].a)*(1-delta[2].a)*...*(1-delta[N-1].a)
[*] (1-delta[1].a)*(1-delta[2].a)*...*(1-delta[N-1].a) = 1 - accumulatedParticles[N-1].a
Also:
(1-delta[1].a)*(1-delta[2].a)*...*(1-delta[N-1].a)*(1-delta[N].a) = 1 - accumulatedParticles[N].a
That is why:
[**] (1-delta[1].a)*(1-delta[2].a)*...*(1-delta[N-1].a) = (1 - accumulatedParticles[N].a)/(1-delta[N].a)
From [*] and [**] we have:
1 - accumulatedParticles[N-1].a = (1 - accumulatedParticles[N].a)/(1-delta[N].a)
(1 - accumulatedParticles[N-1].a) * (1 - delta[N].a) = 1 - accumulatedParticles[N].a
So the recursive equation we are looking for is:
accumulatedParticles[N].a =
1 - (1 - accumulatedParticles[N-1].a)*(1 - delta[N].a) =
1 - (1 - delta[N].a - accumulatedParticles[N-1].a + accumulatedParticles[N-1].a*delta[N].a) =
[***] delta[N].a + accumulatedParticles[N-1].a - accumulatedParticles[N-1].a*delta[N].a
My hypothesis was that it is calculated like the colours:
accumulatedParticles[N].a = accumulatedParticles[N-1].a * (1 - delta[N].a) + delta[N].a * delta[N].a
It turns out to be slightly different, because if we transform [***], we get:
[****] accumulatedParticles[N].a = accumulatedParticles[N-1].a * (1 - delta[N].a) + delta[N].a
The difference is:
(equation like for colour channels) - delta[N].a * delta[N].a + delta[N].a.
Ok... so the hypothesis was wrong, but yay - we have a recursive equation now, so we do not need another parameter - it can be passed in the alpha channel in every iteration.
Let's check whether [****] is correct...
For example, if we add a solid particle (delta[N].a = 1), we should get a new alpha equal to 1:
accumulatedParticles[N].a = accumulatedParticles[N-1].a * 0 + 1 = 1
When we have something.a in the background and we add a totally transparent particle, there should be no difference; the output should be something.a again:
accumulatedParticles[N].a = accumulatedParticles[N-1].a * 1 + 0 = accumulatedParticles[N-1].a
Well, the equation
"accumulatedParticles[N].a = accumulatedParticles[N-1].a * (1 - delta[N].a) + delta[N].a * delta[N].a"
gives the same result here... so "What is the difference?" one may ask.
This example shows the difference:
We have a transparent background (accumulatedParticles[N-1].a = 0) and we add a particle with alpha 0.5.
The output alpha should be 0.5:
accumulatedParticles[N].a = 0*0.5 + 0.5 = 0.5 .... it is OK.
The equation from the hypothesis would have given accumulatedParticles[N].a = 0*0.5 + 0.25 = 0.25, so it seems the new equation is fine and the one from the hypothesis was wrong.
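The recursive formula [****] can also be checked against the closed form numerically. Here is a small C++ sketch; the function names are mine, for illustration only:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Iterative accumulation from equation [****]:
// acc[N] = acc[N-1] * (1 - delta[N].a) + delta[N].a
float alphaIterative(const std::vector<float>& deltas) {
    float acc = 0.0f; // start fully transparent
    for (float d : deltas)
        acc = acc * (1.0f - d) + d;
    return acc;
}

// Closed form: 1 - (1-delta[1].a)*(1-delta[2].a)*...*(1-delta[N].a)
float alphaClosedForm(const std::vector<float>& deltas) {
    float product = 1.0f;
    for (float d : deltas)
        product *= (1.0f - d);
    return 1.0f - product;
}
```

Both functions agree for any sequence of particle alphas, including the solid (a = 1) and fully transparent (a = 0) edge cases discussed above.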
Youpiaiey!!! It works like I wanted.
This is how the right scenario of my own shader will look:
Code:
For every texture coordinate: {
    initial colour RGBA is [whatever, whatever, whatever, 0];
    For each rendered semi-transparent object (with colour delta.rgba), from the most distant to the closest, the previous colour is taken and replaced with an alpha-blending output that calculates only the new layer, without the background: {
        if (depth test passed) {
            new.rgb = old.rgb*(1-delta.a) + delta.rgb*delta.a;
            new.a   = old.a*(1-delta.a) + delta.a;
        }
        old = new;
    }
}
In the end we do not need any suspicious tricks :).
PS:
This is not a very important thought, but let's mention it anyway.
We calculate everything on the graphics card, so vector operations are parallelised; it may be a few nanoseconds faster to calculate the same equation for every channel - RGB, but also A (it takes the same amount of time, since they are computed in parallel) - and then add the correction to the alpha channel. So I'll slightly modify my proposition:
Code:
(...)
if(depth test passed){
new.rgba = old.rgba*(1-delta.a) + delta.rgba*delta.a;
new.a += (1-delta.a)*delta.a;
}
(...)
The difference is: -delta[N].a * delta[N].a + delta[N].a = (1-delta[N].a)*delta[N].a.
Like I said, it is not that important - if it does not accelerate things a little, it won't slow them down much either.
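The two-step variant should give exactly the same alpha as [****]; a quick C++ check (hypothetical helper names, alpha channel only):

```cpp
#include <cassert>
#include <cmath>

// Two-phase step from the modified proposition: blend all four channels
// with the colour-style equation first, then correct the alpha channel.
float alphaTwoPhase(float oldA, float deltaA) {
    float a = oldA * (1.0f - deltaA) + deltaA * deltaA; // colour-style blend
    a += (1.0f - deltaA) * deltaA;                      // correction term
    return a;
}

// Direct step from equation [****].
float alphaDirect(float oldA, float deltaA) {
    return oldA * (1.0f - deltaA) + deltaA;
}
```

Since deltaA*deltaA + (1-deltaA)*deltaA = deltaA, the two functions are algebraically identical; the only question is whether the shared four-channel blend is actually cheaper on the GPU.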
Google Summer of Code 2012 Student
Topic: "Implementation of Off-Screen Particles"
Project links: Project thread, WIKI page, Code fork for the project
Mentor: Assaf Raman