Confused about shader array parameter parsing

A place for users of OGRE to discuss ideas and experiences of utilising OGRE in their games / demos / applications.
TdZBdaO
Gnoblar
Posts: 21
Joined: Sat Mar 22, 2008 3:20 pm
x 1

Confused about shader array parameter parsing

Post by TdZBdaO » Sun Sep 20, 2015 6:18 pm

EDIT: Sorry, I think i posted on the wrong board, should probably be posted into the "Help" section of the forums.

Hi folks,

I ran into some headache-inducing confusion while trying to pass an array of, let's say, integer values to a GLSL shader. I'd really appreciate it if someone could explain the proper way of doing this to me. Here is what I have tried so far:

First I write a neat little shader (I'll skip the vertex shader since it's not relevant):

Code: Select all

varying vec2 UV;
uniform int MYARRAY[64];

void main(void)
{
    gl_FragColor = texture2D(MYARRAY[4], UV);
}
Next I define it in a material script:

Code: Select all

material Test
{
    technique
    {
        pass
        {
            lighting on
            ambient 0 0 0 1
            specular 0 0 0 1
            diffuse 0 0 0 1
            emissive 1 1 1            
            
            fragment_program_ref LightSourceCompositePreviewShaderF
            {
                param_named MYARRAY int64 0 # Tried, didn't work
                param_named MYARRAY int[] 0 # Again, no luck with this
                param_named MYARRAY int[64] 0 # Then I ran out of ideas...
            }
            
            vertex_program_ref LightSourceCompositePreviewShaderV
            {
                
            }
            
            
        }
Finally I populate the named constant in my program:

Code: Select all

int uMyArrayValues[64];
.... // Fill the array
GpuProgramParametersSharedPtr params = pass->getFragmentProgramParameters();
params->setNamedConstant("MYARRAY", uMyArrayValues, 1, 64); // Didn't work
params->setNamedConstant("MYARRAY[0]", uMyArrayValues, 1, 64); // I also tried this...

// and even this:
for(int i=0;i<64;++i) {
     params->setNamedConstant("MYARRAY[i]", uMyArrayValues[i]); // I know I need to do string formatting here; I just wrote it this way to make clear what I'm doing.
}
Then I finally ran out of ideas. I mixed and matched most of the attempts above without any success (I still get a zero-filled array), and I spent a whole lot of time reading through old posts and wiki articles, but surprisingly never found a real answer to this.

If someone could enlighten me here a bit I would be really, really grateful :)

Thank you very much in advance!

User avatar
dark_sylinc
OGRE Team Member
Posts: 4066
Joined: Sat Jul 21, 2007 4:55 pm
Location: Buenos Aires, Argentina
x 228
Contact:

Re: Confused about shader array parameter parsing

Post by dark_sylinc » Sun Sep 20, 2015 8:15 pm

It's not working because you're declaring:

Code: Select all

int MYARRAY[64]
Yet you're later using it as a texture sampler:

Code: Select all

texture2D(MYARRAY[4], UV);
This is not valid GLSL code.

You should declare MYARRAY as a sampler2D. Once you fix that, you'll hit the next problem: you can only bind up to 16 textures per shader stage in OpenGL, so 64 is not going to work:

Code: Select all

uniform sampler2D MYARRAY[16];
Once you fix both bugs, the following should work:

Code: Select all

param_named MYARRAY[0] int 0
param_named MYARRAY[1] int 1
param_named MYARRAY[2] int 2
param_named MYARRAY[3] int 3
...
C++

Code: Select all

params->setNamedConstant("MYARRAY", uMyArrayValues, 1, 16); // Should work (update all at once)
// ...or...
params->setNamedConstant("MYARRAY[0]", uMyArrayValues[0]); // Should also work (update one by one, via the scalar overload)
params->setNamedConstant("MYARRAY[1]", uMyArrayValues[1]); // ...
You must be getting GLSL compilation exceptions in the Ogre.log, but you've probably missed them. That's why your attempts at setting the constants are failing.


Re: Confused about shader array parameter parsing

Post by TdZBdaO » Sun Sep 20, 2015 8:46 pm

Thank you very much for helping me out here, dark_sylinc!

When copying the example code from my actual sources, I made a mistake in the line saying "uniform int...". I'm sorry about that; it was meant to be a sampler2D array, of course. Otherwise you are totally right with that suggestion.

The next thing you mention is the 16-texture limit, which is also correct. I just discovered that this limitation renders my shader design useless at this point. Before you start wondering why anyone would want that many textures in a single pass: I'm not writing a game, this is a scientific application, and I wanted to gain a speedup by doing the image-space work on the GPU. The downside is, besides your hint about the texture unit limitations, that apparently Ogre, or I think GL itself, behaves very differently on different machines when it comes to blending. I quickly changed my source code to use multiple passes instead of a single one and blend them together, but I see the blending to be much more aggressive on my notebook compared to my desktop pc. That's why I wanted to implement everything inside a single pass and have full control over the blending process. Too bad it didn't work out - seems like I need to go back to CPU processing.

However, I guess this is a completely different topic, and you've been very precise in answering what I asked. Thanks again; now I'm sure I'll know how to do this the next time I try something like it in Ogre. :)


Re: Confused about shader array parameter parsing

Post by dark_sylinc » Sun Sep 20, 2015 9:07 pm

You're welcome!

About the 16-texture limitation:
OpenGL ties texture descriptors (metadata of the texture itself) to sampler descriptors (how to sample the texture, e.g. bilinear filtering, clamp to edge vs. wrap, etc.).

Modern hardware has a scarce number of samplers, but it can handle lots of textures. D3D11 separated textures from samplers; hence in that API it's possible to have up to 128 textures bound at the same time while only having up to 16 samplers (normally you only need a couple of them, so 16 is more than enough).

As a workaround, you can use texture arrays, which allow you to sample lots and lots of textures from the shader even on old GPUs, going all the way back to the GeForce 8.
The only problem is that all textures in the array must use the same resolution and the same pixel format.
Due to their compatibility across APIs and hardware, we opted to use texture arrays in Ogre 2.1, and we usually bind around 4 texture arrays (we need more than one due to the 'must use same resolution and format' restriction). It is an awkward workaround, but it works very well on a broad range of hardware.
Using texture arrays means we are now limited to "up to 16 texture arrays", which is usually plenty for our needs, but it has a higher CPU-side overhead: we needed to code a system that tracks textures, packs them together into a texture array as they get loaded and unloaded, and creates more arrays when an array is full. This system is called the HlmsTextureManager.

DX12 Tier 2 hardware and OpenGL with the bindless extension can use a nearly unlimited number of textures at the same time in a very flexible manner (limited only by available RAM), but hardware support is also very limited, since only Tier 2 hardware and beyond can support it. Ogre does not currently expose bindless texture extensions.


Re: Confused about shader array parameter parsing

Post by dark_sylinc » Sun Sep 20, 2015 9:11 pm

TdZBdaO wrote:I quickly changed my source code to use multiple passes instead of a single one and blend them together, but I see the blending to be much more aggressive on my notebook compared to my desktop pc. That's why I wanted to implement everything inside a single pass and have full control over the blending process. Too bad it didn't work out - seems like I need to go back to CPU processing.
Forgot to say: you can do both! Blend those 16 textures together, and use 4 passes to blend the total of 64 you wanted.

Though blending 64 textures can be taxing on memory bandwidth, depending on resolution and format; bandwidth is especially limited on integrated GPUs.


Re: Confused about shader array parameter parsing

Post by TdZBdaO » Sun Sep 20, 2015 10:56 pm

That is indeed a good idea, but unfortunately I cannot make use of it in this particular case. You see, I'm doing calculations based on the radiosity and luminosity of each individual light, with each light represented by a texture. To do proper blending I would need every single one of them in each pass, which again seems impossible with 16 units and no tricks. The scene_blend I'm doing at the moment doesn't really fit my needs, and combining blending with multiple passes would require lots of complicated map/reduce strategies that would make the workflow more complicated than I was hoping for. I've decided to think this one over, and if I can't come up with something by tomorrow, I'll just drop the GPU acceleration at this point. No worries, Ogre is still in the game, just not for this specific task :)
