But as time goes on and users push it further, the current Texture system is showing its limits:
- Poor MSAA support.
- No async streaming.
- Takes forever to upload, often causing stalls.
- Wastes a lot of memory.
- Hlms forces textures to be loaded, consuming a lot of GPU memory.
- No residency support, thus we hit out-of-memory conditions too early (especially bad for editors and big projects).
- Clunky interface (HardwarePixelBuffer, RenderTarget, Texture... they're interrelated yet distinct, and if you need one of them but don't hold the pointer, it gets difficult to grab it).
- Hard to prepare for Vulkan/DX12.
- Can't render to 2D array slices or 3D slices.
Fortunately, most of you (myself included) only use textures superficially, i.e. ask Ogre to load a texture, then set it on the material.
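That superficial path usually amounts to nothing more than referencing the texture from a material script. A minimal illustrative example (the material name and texture file are made up; Ogre resolves and loads the texture from the registered resource locations when the material is first used):

```
material Demo/SimpleTextured
{
    technique
    {
        pass
        {
            texture_unit
            {
                // Ogre loads diffuse.png automatically when this
                // material is first used for rendering.
                texture diffuse.png
            }
        }
    }
}
```

Code that stays at this level should need little to no porting work.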
Creating RenderTextures via code will be affected as well, but I'm crossing my fingers that this should be relatively easy to port, given that the RTT code should become simpler.
The ones I suspect will be affected the most are those who lock textures for writing/reading, as these interfaces and their behavior could change drastically. Those who relied on miscellaneous functionality like blitFramebuffer to scale will also be hit hard (if you want to "blit" and not just copy, use a pass_quad!).
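As a sketch of what the pass_quad replacement for blitFramebuffer-style scaling could look like in a 2.x compositor node (the node name, texture names, and copy material are illustrative assumptions, not a prescribed setup):

```
compositor_node Demo/ScaleNode
{
    in 0 rt_src
    in 1 rt_dst

    target rt_dst
    {
        // Draws a fullscreen quad that samples rt_src; the GPU's
        // texture filtering performs the scaling, instead of a
        // framebuffer blit.
        pass render_quad
        {
            material Demo/CopyMaterial  // any material sampling input 0
            input    0 rt_src
        }
    }
}
```

The quad pass gives you full control over filtering and the material used, which a plain blit never did.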
Do you think it will be too groundbreaking? Perhaps someone can figure out a clean way to phase out the old system in favour of the new one without deleting it?