basic question on Sample_InstancedStereo

I can see that the compositor for the instanced stereo sample specifies two viewports, but I only see one camera in the C++ code. Without two different camera positions, how can it be stereo? I'm also not really clear on what "instanced stereo" even means, though I did Google it.
-
- Goblin
- Posts: 242
- Joined: Thu Aug 12, 2021 10:06 pm
- Location: San Diego, CA, USA
- x 18
-
- OGRE Team Member
- Posts: 5419
- Joined: Sat Jul 21, 2007 4:55 pm
- Location: Buenos Aires, Argentina
- x 1330
Re: basic question on Sample_InstancedStereo
Hi!
InstancedStereo relies on HW support and certain tricks (e.g. for frustum culling). It basically boils down to "the left and right eye are separated by an X offset; everything else stays the same". Hence it must use one camera (two cameras would be too flexible and would cause problems if they were not restricted).
It might be possible to do some wilder stuff (i.e. more than just an X offset), but it can get tricky.
The InstancedStereo sample is a very basic showcase and test of HW support. A much better sample to look at is the OpenVR one, which sets up the culling camera and uses instanced stereo.
When using instanced stereo, two things are important:

- Camera::setVrData. You use this function to provide the left and right eye separation (see NullCompositorListener.cpp and OpenVRCompositorListener.cpp).
- The culling camera. The idea is to create an auxiliary camera that encloses both eyes' frusta, which will be used for frustum culling. This Stack Exchange thread explains it well. The sample uses Tutorial_OpenVRWorkspace.compositor, which asks for a camera named "VrCullCamera" to do the culling; the C++ code creates it and makes sure the culling camera is positioned properly to enclose what both the left and right eyes can see.
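One way to see the culling-camera geometry (a sketch of the math only, not the sample's actual code): if the culling camera shares the eyes' horizontal FOV, pulling it back along its local -Z widens its frustum by a fixed margin at every depth, and the pull-back needed to cover both eye frusta offset by +/- ipd/2 follows directly from the frustum slope:

```cpp
#include <cmath>

// Distance to pull a culling camera back along its local -Z so that a
// frustum with horizontal half-FOV `halfFovX` (radians), centred between
// the eyes, encloses both eye frusta offset by +/- ipd/2 on X.
// Derivation: at depth z the frustum half-width is z * tan(halfFovX);
// moving the apex back by b adds b * tan(halfFovX) at every depth, so
// covering an extra ipd/2 requires b = (ipd/2) / tan(halfFovX).
double cullCameraPullBack( double ipd, double halfFovX )
{
    return ( ipd * 0.5 ) / std::tan( halfFovX );
}
```

For example, with a 90-degree horizontal FOV (half-FOV of 45 degrees, tan = 1) and a 64 mm IPD, the culling camera sits 32 mm behind the midpoint of the eyes.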
Re: basic question on Sample_InstancedStereo
@dark_sylinc Thanks for the explanation. It sounds like that approach might be more complicated than I want to deal with for the code I'm porting, which allows the two cameras to be converged rather than parallel.
I'm thinking of using two render scene passes. I hope that I can use a passEarlyPreExecute listener override to adjust the camera position and direction for each pass.
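For the converged (toe-in) case described above, the per-pass adjustment reduces to a half-IPD translation plus a small inward yaw toward the convergence point. A sketch of just that math, assuming a convergence plane a fixed distance ahead (the listener wiring itself is left out; the comment names where it would run):

```cpp
#include <cmath>

// Toe-in ("converged") stereo: each eye sits +/- ipd/2 on X and yaws
// inward so both view axes meet at a convergence plane
// `convergenceDistance` metres ahead. Returns the inward yaw in radians
// for one eye; negate it for the other eye.
// In the two-pass scheme, this value would be applied to the single
// camera from the listener before each render scene pass executes.
double convergenceYaw( double ipd, double convergenceDistance )
{
    return std::atan( ( ipd * 0.5 ) / convergenceDistance );
}
```

As the convergence distance grows, the yaw tends to zero and the setup degenerates to the parallel case that instanced stereo assumes.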