[2.2] Using screen-space refractions

Discussion area about developing with Ogre-Next (2.1, 2.2 and beyond)


rujialiu
Goblin
Posts: 296
Joined: Mon May 09, 2016 8:21 am
x 35

[2.2] Using screen-space refractions

Post by rujialiu »

In order to make better use of screen-space refractions, we decided to explicitly place opaque and transparent objects into separate render queues... but I found I can't do that, because render queues contain MovableObjects, not Renderables, and it's common that within the same Item some SubItems are opaque and some are transparent. It's even worse in our software because we can dynamically change the transparency of SubItems.

What I'm planning to do now is to have three dedicated render queues (we don't have v1 objects, so I'm omitting render queue IDs here):
A Pure opaque objects
B Partially transparent objects
C Pure transparent objects

If only render queue A affects refractions, opaque objects in B would be missing from the refractions, which might be VERY obvious; if both A and B affect refractions, transparent objects in B wouldn't have refractions, and many of our transparent objects are "hybrid", so only very few transparent objects would actually have refractions :(
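
For clarity, here's a minimal sketch of how Items could be classified into these three queues (the queue IDs are arbitrary, and the check assumes transparency comes from each SubItem's blendblock — both are assumptions, not values from the sample):

Code: Select all

#include <OgreItem.h>
#include <OgreSubItem.h>
#include <OgreHlmsDatablock.h>

const Ogre::uint8 kRqOpaque      = 10u; // A: purely opaque Items
const Ogre::uint8 kRqMixed       = 11u; // B: partially transparent Items
const Ogre::uint8 kRqTransparent = 12u; // C: purely transparent Items

void assignRenderQueue( Ogre::Item *item )
{
    const size_t numSubItems = item->getNumSubItems();
    size_t numTransparent = 0u;
    for( size_t i = 0u; i < numSubItems; ++i )
    {
        // Assumes transparency is expressed via the datablock's blendblock.
        const Ogre::HlmsDatablock *datablock = item->getSubItem( i )->getDatablock();
        if( datablock->getBlendblock()->mIsTransparent != 0u )
            ++numTransparent;
    }

    if( numTransparent == 0u )
        item->setRenderQueueGroup( kRqOpaque );
    else if( numTransparent == numSubItems )
        item->setRenderQueueGroup( kRqTransparent );
    else
        item->setRenderQueueGroup( kRqMixed );
}
Since we change transparency dynamically, we'd have to re-run this whenever a SubItem's datablock changes.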

Another question: according to the sample, it's nice to duplicate the objects and render them twice, but it looks like cloning objects would make our code hard to maintain due to its dynamic nature?
TaaTT4
OGRE Contributor
Posts: 267
Joined: Wed Apr 23, 2014 3:49 pm
Location: Bologna, Italy
x 75

Re: [2.2] Using screen-space refractions

Post by TaaTT4 »

I've made the same choice (although for a different reason) to separate opaque objects from transparent objects and keep them in two different RQs. And, like rujialiu, I also had meshes composed of transparent and non-transparent submeshes. In this particular case, we've chosen to have the artists split the mesh in two: one mesh that contains the opaque geometry and one that contains the transparent geometry. This probably isn't the best solution because you can't rely too much on artists; it's an error-prone approach, and it's a bit tedious to check that the transparent RQ doesn't contain opaque objects. A more automated approach would be better by far.

Senior programmer at 505 Games; former senior engine programmer at Sandbox Games
Worked on: Racecraft Esport, Racecraft Coin-Op, Victory: The Age of Racing

al2950
OGRE Expert User
Posts: 1227
Joined: Thu Dec 11, 2008 7:56 pm
Location: Bristol, UK
x 157

Re: [2.2] Using screen-space refractions

Post by al2950 »

I have also been through the same thought process!

I ended up splitting them up during import. This was relatively easy for me, as I abandoned built-in exporters like OgreMax and created an FBX importer. TBH, I don't really see the benefit of submeshes anymore, so I might end up having a 1:1 ratio of mesh to submesh. Sharing a skeleton between multiple meshes used to be useful, but I now do that differently as well.
dark_sylinc
OGRE Team Member
Posts: 5296
Joined: Sat Jul 21, 2007 4:55 pm
Location: Buenos Aires, Argentina
x 1278

Re: [2.2] Using screen-space refractions

Post by dark_sylinc »

Hi!
rujialiu wrote: Another question: according to the sample, it's nice to duplicate the objects and render them twice, but it looks like cloning objects would make our code hard to maintain due to its dynamic nature?
I understand the frustration as well.

The Ogre samples offer a solution for this limitation (this is one of the reasons the samples were designed the way they are): GameEntity.

You can see that we have:
  • GameEntity::mMovableObject (note it's a MovableObject, not an Item)
  • GameEntity::mMoDefinition
That last one is no coincidence. The samples provide MoTypeItem and MoTypeEntity, but you can code more. For example, in my custom engine I've added one called MoTypeMultiItem (it could just as well be a MoTypeRefractiveMultiItem, specifically tailored for dealing with submeshes that have refractive materials and require multiple instances).

When mMoDefinition == MoTypeMultiItem, I programmatically create multiple Items based on mMoDefinition, and mMoDefinition->submeshMaterials specifies which materials go on each SubItem.

Hence the central place to manage everything is GameEntity.

Moving, rotating and scaling the object is done either via GameEntity::mSceneNode directly or, better, if your engine's design allows it, via mTransform[mTransformBufferIdx] (to allow a fixed tick count for logic and a variable framerate for graphics, including running logic and graphics in separate threads), while mMoDefinition indicates how the Item(s) are prepared visually.
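
To make that concrete, a rough sketch of what the MultiItem expansion could look like (MoDefinitionMultiItem and its exact fields are illustrative; the samples' real MovableObjectDefinition is similar but not identical):

Code: Select all

#include <vector>
#include <OgreSceneManager.h>
#include <OgreItem.h>

// Hypothetical definition struct describing the Items to create.
struct MoDefinitionMultiItem
{
    std::vector<Ogre::String> meshNames;  // one entry per Item to create
    std::vector<std::vector<Ogre::String> > submeshMaterials; // [item][subItem]
};

void createMultiItem( const MoDefinitionMultiItem &def,
                      Ogre::SceneManager *sceneManager,
                      Ogre::SceneNode *sceneNode ) // e.g. GameEntity::mSceneNode
{
    for( size_t i = 0u; i < def.meshNames.size(); ++i )
    {
        Ogre::Item *item = sceneManager->createItem( def.meshNames[i] );
        // Assign the datablock each SubItem should use.
        for( size_t j = 0u; j < item->getNumSubItems(); ++j )
            item->getSubItem( j )->setDatablock( def.submeshMaterials[i][j] );
        sceneNode->attachObject( item );
    }
}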

What Ogre is lacking and should provide out of the box but doesn't, is a simple way to split a Mesh into multiple Items, e.g.

Code: Select all

MeshUtils::splitMesh( "meshName.mesh" );
thus creating meshName0.mesh, meshName1.mesh and meshName2.mesh in memory.
It doesn't even have to clone GPU memory, since the Vertex and Index buffers could be shared.

Or optionally:

Code: Select all

MeshUtils::splitMesh( "meshName.mesh", { { 0, 1, 3 }, { 2 } } );
thus meshName0.mesh gets created containing submeshes 0, 1 and 3; and meshName1.mesh containing submesh 2.
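
Nothing of this exists yet, but a rough prototype could share the VAOs directly (a sketch against the v2 mesh API; error handling and buffer lifetime management omitted):

Code: Select all

#include <OgreMesh2.h>
#include <OgreSubMesh2.h>
#include <OgreMeshManager2.h>

// Rough sketch: build "meshName0.mesh" in memory with submeshes 0, 1 and 3,
// sharing the source's VertexArrayObjects (no GPU memory is cloned).
// Caveat: real code must make sure the shared VAOs aren't destroyed twice
// when both meshes get unloaded.
Ogre::MeshPtr splitMeshSketch()
{
    Ogre::MeshPtr src = Ogre::MeshManager::getSingleton().load(
        "meshName.mesh",
        Ogre::ResourceGroupManager::AUTODETECT_RESOURCE_GROUP_NAME );

    Ogre::MeshPtr dst = Ogre::MeshManager::getSingleton().createManual(
        "meshName0.mesh",
        Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME );

    const unsigned submeshesToKeep[] = { 0u, 1u, 3u };
    for( unsigned idx : submeshesToKeep )
    {
        Ogre::SubMesh *srcSub = src->getSubMesh( idx );
        Ogre::SubMesh *dstSub = dst->createSubMesh();
        dstSub->mVao[Ogre::VpNormal] = srcSub->mVao[Ogre::VpNormal]; // share
        dstSub->mVao[Ogre::VpShadow] = srcSub->mVao[Ogre::VpShadow];
        dstSub->setMaterialName( srcSub->getMaterialName() );
    }
    dst->_setBounds( src->getAabb(), false );
    return dst;
}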
al2950 wrote: Thu Jan 16, 2020 2:42 pm TBH I dont really see the benefit of sub meshes anymore.
I admit that's true. Submeshes do have their uses when you need multiple materials, but you have limited options if these materials are transparent, as only SubQueues are available (and the main reason to use multiple materials is often that some of them have different "hard" properties that can't be altered with a texture, such as transparency or culling mode).
rujialiu wrote: and many of our transparent objects are "hybrid", so only very few transparent objects would actually have refractions
Rasterization isn't good for this type of material combination. Even ray/pathtracing has issues with these materials (as they require lots of bounces or samples).
dark_sylinc
OGRE Team Member
Posts: 5296
Joined: Sat Jul 21, 2007 4:55 pm
Location: Buenos Aires, Argentina
x 1278

Re: [2.2] Using screen-space refractions

Post by dark_sylinc »

dark_sylinc wrote: Thu Jan 16, 2020 9:10 pm
rujialiu wrote: and many of our transparent objects are "hybrid", so only very few transparent objects would actually have refractions
Rasterization isn't good for this type of material combination. Even ray/pathtracing has issues with these materials (as they require lots of bounces or samples).
Juan (Godot) just reminded me there is a very obvious solution to this problem: dithered alpha testing (aka dithered opacity, aka hashed alpha testing; not exactly the same, but 99% similar).
The base technique is very simple to implement and may have superb results.
The theory is very simple: instead of using alpha blending, alpha testing is used, where x% opacity means x% of the pixels are kept while the rest are discarded:
[Image: dithered transparency pattern at varying opacities]
Anyone who has used Windows 3.11 in their life has seen this technique in action.

Because it's not really alpha blending, it has the nice property of being order-independent by nature; hence it behaves like opaque objects.
Variants use blue noise, but ultimately the quality is limited because alpha testing is either on or off.
It works well for certain percentages (close to 0%, around 50% and close to 100%).
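
To make the idea concrete, here's a CPU-side illustration of the per-pixel test; the same comparison would normally live in the pixel shader, and the 4x4 Bayer matrix below is the classic ordered-dither pattern:

Code: Select all

#include <cstdint>

// Classic 4x4 Bayer (ordered dither) matrix, values 0..15.
static const uint8_t kBayer4x4[4][4] = {
    {  0u,  8u,  2u, 10u },
    { 12u,  4u, 14u,  6u },
    {  3u, 11u,  1u,  9u },
    { 15u,  7u, 13u,  5u },
};

// Returns true if the pixel is kept for the given opacity in [0, 1].
// An opacity of x keeps roughly x% of the pixels, as described above.
bool ditheredAlphaTest( uint32_t pixelX, uint32_t pixelY, float opacity )
{
    const float threshold =
        ( kBayer4x4[pixelY & 3u][pixelX & 3u] + 0.5f ) / 16.0f;
    return opacity > threshold;
}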
rujialiu
Goblin
Posts: 296
Joined: Mon May 09, 2016 8:21 am
x 35

Re: [2.2] Using screen-space refractions

Post by rujialiu »

dark_sylinc wrote: Fri Jan 17, 2020 2:51 am Juan (Godot) just reminded me there is a very obvious solution to this problem: dithered alpha testing (aka dithered opacity, aka hashed alpha testing; not exactly the same, but 99% similar).
I believe TaaTT4 had hashed alpha testing in his wishlist? :)
dark_sylinc wrote: Fri Jan 17, 2020 2:51 am Anyone who has used Windows 3.11 in their life has seen this technique in action.
...
It works well for certain percentages (close to 0%, around 50% and close to 100%).
I've even used Windows 3.1 8-)
Clear glasses have an opacity close to 0%, and we have a lot of them, so it may be worth trying!