> I have been eyeing 2.2 a little bit after the porting manual was added. Do you think it's ready enough for some preliminary porting for testing (we are on OpenGL only)? If I understand correctly, I will need the 2.2 branch to be able to make custom resolve passes in OpenGL, for example to implement the reversible tonemapping you've mentioned (we need better HDR AA).

For preliminary porting, yes. You could even try using OGRE_VERSION_MINOR to switch between the 2.1 and 2.2 code paths.

I wouldn't recommend it for production, because right now what we need is lots of testing. For example, I don't think what happens when you unload a texture is well tested.
As for anything related to MSAA, Ogre 2.2 is superior to 2.1 in every way: 2.1 just treated MSAA like a magic algorithm, while 2.2 gives you a lot of explicit access to it.
> When it comes to texture streaming and those things, does 2.2 interoperate well with the HLMS texture arrays? For example, will there be any wins in texture usage or loading performance?

Yes. Ogre 2.2 got rid of anything that used the "old" textures; that includes the HlmsTextureManager.
The new TextureGpuManager, which replaces the old TextureManager, also replaces the HlmsTextureManager. I just updated the documentation (Docs/src/manual/Ogre2.2.Changes.md; build it with Doxygen) to explain this in detail.
As for loading performance: the code has been rewritten, and the old code had many inefficiencies, so the new code is generally expected to perform better. But the big gain comes from background streaming. The whole point of 2.2's textures is that loading them does not block the main rendering thread (unless you explicitly want that). The Progress Report from December 2017 has a video showing it.
The main problem right now regarding background streaming performance is that it will very likely cause a shader recompile (and that is bad). This is because the TextureGpuManager does not know the metadata (such as resolution and pixel format) in advance; without it, it just presents a 4x4 dummy texture while the real texture is uploaded in the background.
To see why this can cause a shader recompile, suppose you have two textures, both still being loaded. The produced shader may end up looking like any of these:
```glsl
//Shader variant A
uniform sampler2DArray textures[2];
textures[0] //contains the 4x4 dummy for the diffuse map
textures[1] //contains the 1024x1024 loaded normal map
```

```glsl
//Shader variant A
uniform sampler2DArray textures[2];
textures[0] //contains the 1024x1024 loaded diffuse map
textures[1] //contains the 4x4 dummy for the normal map
```

```glsl
//Shader variant B
uniform sampler2DArray textures[1];
textures[0] //contains the 4x4 dummy for both the diffuse & normal maps
```

```glsl
//Shader variant B
uniform sampler2DArray textures[1];
textures[0] //contains the 1024x1024 loaded diffuse & normal maps (same pool)
```

```glsl
//Shader variant A
uniform sampler2DArray textures[2];
textures[0] //contains the 1024x1024 loaded diffuse map (it's in pool M)
textures[1] //contains the 1024x1024 loaded normal map (it's in pool N)
```
The solution to that will be a metadata cache that saves the metadata to disk, where subsequent runs can load it. If we know the metadata in advance, this problem won't happen, because we already know which pool and slot to reserve for the batched textures.
If the metadata becomes out of date (e.g. you replaced a texture with a new one of higher resolution), it's not a big problem: when the texture finishes loading, the TextureGpuManager will see that the resolution doesn't match and relocate it to the right pool. However, this can still cause a shader recompile.
Unlike the shader cache, the metadata cache is easy to generate offline: just run a command line tool that recursively searches the given folder for textures, loads their headers, and saves their data. The cache also doesn't need to be updated when the texture contents change, unless the resolution or pixel format changed.
Because that metadata cache code hasn't yet been written, beware of this issue.
So TL;DR: I encourage you to try it out and report back the problems you find. We need testing. Thanks!
Just... do not devote a disproportionate amount of resources on the assumption that it's ready to be deployed to your final users.
Cheers