I notice Ogre uses singletons everywhere. On the surface it looks like there would be no way to do fullscreen mode across multiple different devices. I could do borderless windowed mode, which might suffice. I just want to make sure I'm not missing something obvious.
Currently our program allows the user to choose any number of available devices. Each device is (inefficiently) instantiated as a completely separate Direct3D9 device, and all assets (3D objects and textures) exist on only one device. Typical usage is to have one device display an experiment to a test subject while a secondary device shows statistics or other data about the experiment to the experimenter. Nothing prevents the user from creating 8 devices and mirroring the assets across each one, or whatever else they want.
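Roughly, the current per-device setup looks like this (a simplified, hypothetical sketch of what's described above, not our actual code; window creation and error handling are omitted and the HWNDs are assumed to already exist):

```cpp
// Sketch: one completely independent D3D9 device per adapter.
// Assets must be uploaded separately to each device created this way.
#include <d3d9.h>
#include <vector>

std::vector<IDirect3DDevice9*> createDevicePerAdapter(
        IDirect3D9* d3d, const std::vector<HWND>& windows) {
    std::vector<IDirect3DDevice9*> devices;
    UINT count = d3d->GetAdapterCount();
    for (UINT adapter = 0; adapter < count && adapter < windows.size(); ++adapter) {
        D3DPRESENT_PARAMETERS pp = {};
        pp.Windowed      = TRUE;                    // borderless-windowed fallback
        pp.SwapEffect    = D3DSWAPEFFECT_DISCARD;
        pp.hDeviceWindow = windows[adapter];
        IDirect3DDevice9* dev = nullptr;
        if (SUCCEEDED(d3d->CreateDevice(adapter, D3DDEVTYPE_HAL,
                windows[adapter], D3DCREATE_HARDWARE_VERTEXPROCESSING,
                &pp, &dev)))
            devices.push_back(dev);
    }
    return devices;
}
```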
We need to preserve the current behavior in our transition to Ogre3D, and it looks like we'll have to fake fullscreen with borderless windowed mode. That should be fine for the typical use case, but if users have two separate video cards and expect to be able to control output to each one independently, it seems like Ogre won't be able to do that. Is this correct? The only case where I can see this being an issue is the unlikely scenario where the user has 2x 1GB (cheap) video cards and wants to max out the texture memory on each card with two completely different scenes. I know I'm talking about edge cases, but as a software engineer I'm trained to think about all contingencies.
How is the multi-monitor support in Ogre 2.1 D3D11?
Multiple Devices in v2.1

dark_sylinc (OGRE Team Member)

Re: Multiple Devices in v2.1
> cowtung wrote: I notice Ogre uses singletons everywhere.

Yes, this is as old as Ogre's codebase. Unfortunately it's too deeply rooted to remove without a lot of pain. However, we are not adding more Singletons to the mix.
> cowtung wrote: Each device is (inefficiently) instantiated as a completely separate Direct3D9 device and all assets (3D objects and textures) exist on only one device. Typical usage is to have one device display an experiment to a test subject while a secondary device shows statistics or other data about the experiment to the experimenter. Nothing is preventing the user from creating 8 devices and mirroring the assets across each one or whatever they want.

This is a limitation of D3D9, and Ogre 1.x running D3D9 has the same issue. I think there are ways to tell Ogre what to mirror and what not to mirror, but you would have to ask Assaf, who is more experienced with that.
Note, however, that Ogre 2.1 does not support D3D9.
> cowtung wrote: How is the multi-monitor support in Ogre 2.1 D3D11?

There is multi-monitor support, and there is multi-adapter support; you asked about both. The possible combinations are:
- Multiple monitors, single GPU.
- Multiple monitors, multiple GPU.
- Single monitor, single GPU.
- Single monitor, multiple GPU.
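For the common single-GPU, multi-monitor cases, a render window can be targeted at a specific monitor via the `monitorIndex` misc parameter. A hedged sketch follows (assuming an initialised `Ogre::Root` named `root`, and assuming your Ogre 2.1 build's D3D11 RenderSystem honours `monitorIndex` and `border` as the 1.x render systems do; verify against your version):

```cpp
// Sketch: one borderless window per monitor on a single GPU.
#include <OgreRoot.h>
#include <OgreRenderWindow.h>
#include <OgreStringConverter.h>

Ogre::RenderWindow* createWindowOnMonitor(Ogre::Root* root,
                                          const Ogre::String& name,
                                          unsigned int width,
                                          unsigned int height,
                                          int monitor) {
    Ogre::NameValuePairList params;
    params["monitorIndex"] = Ogre::StringConverter::toString(monitor);
    params["border"]       = "none";   // borderless-windowed "fake fullscreen"
    return root->createRenderWindow(name, width, height,
                                    /*fullScreen=*/false, &params);
}
```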
> cowtung wrote: if they have 2 separate video cards and are expecting to be able to control output to each one independently, it seems like Ogre won't be able to do that. Is this correct? The only case where I can see this being an issue is in the unlikely scenario where the user has 2x 1GB (cheap) video cards and wants to max out the texture memory on each card with two completely different scenes.

Now you're asking about a multi-adapter, multi-monitor scenario. I honestly haven't tried it. I know Assaf got this to work in Ogre 1.x; I haven't experimented with it in 2.1 and I don't know if it works there (unless you create two different processes, one for each GPU, and communicate via TCP/UDP or OS-level shared memory, which will obviously work).
> cowtung wrote: the user has 2x 1GB (cheap) video cards and wants to max out the texture memory on each card with two completely different scenes

"Edge case" is an understatement. Typically, if you want to use two cards, it's because you want card A to render to monitors 1 & 2 and card B to render to monitors 3 & 4, because A can't keep up with rendering to all four monitors alone.
This is common with simulation software (car racing simulators, airplane flight simulators, etc.). However, this means that memory availability is not the issue; rendering performance is. Both cards would be rendering a different section of the camera's view, which means they need to mirror all or most resources. Two 1GB cards will count as 1GB, not 2GB.
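To make that arithmetic concrete, here is a minimal sketch (a hypothetical `effectiveVram` helper for illustration, MB units; not part of Ogre):

```cpp
#include <algorithm>
#include <cstdint>
#include <numeric>
#include <vector>

// Effective texture budget across GPUs, in MB.
// Mirrored resources (one scene split across monitors) are bounded by the
// smallest card; fully independent scenes can use each card's full memory.
std::uint64_t effectiveVram(const std::vector<std::uint64_t>& cardsMb,
                            bool mirrored) {
    if (cardsMb.empty()) return 0;
    if (mirrored)
        return *std::min_element(cardsMb.begin(), cardsMb.end());
    return std::accumulate(cardsMb.begin(), cardsMb.end(), std::uint64_t{0});
}
```

With two 1GB cards, mirroring yields an effective 1024 MB budget, while two fully independent scenes yield 2048 MB.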
D3D12 improved in this area by supporting cross-vendor, cross-adapter communication, allowing any card to fetch data from the other card over the PCI-e bus (or via the faster SLI/Crossfire data link if available). However, we don't have a D3D12 RenderSystem yet. And even if we did, cross-vendor, cross-adapter communication is tricky and hard to get right (and there are still a lot of driver bugs to iron out).
If you want to render two completely different things (effectively making two 1GB cards add up to 2GB), I don't see any difference between running two processes and running one process to get the job done.
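The two-process route can be sketched as follows. This is illustration only, not Ogre API: the hypothetical `spawnRenderers` helper stands in for "one renderer process per GPU", the children merely echo the scene they would render, and POSIX `fork`/`pipe` is used for brevity (on Windows you'd use `CreateProcess` plus sockets or named shared memory):

```cpp
// Sketch: a parent hands each renderer process its own scene description
// over a pipe; in a real setup each child would own one GPU/device.
#include <string>
#include <sys/wait.h>
#include <unistd.h>
#include <vector>

// Spawns one child per scene; each child replies with what it would render.
// Returns the children's replies, index-aligned with `scenes`.
std::vector<std::string> spawnRenderers(const std::vector<std::string>& scenes) {
    std::vector<std::string> replies;
    for (const std::string& scene : scenes) {
        int toChild[2], toParent[2];
        if (pipe(toChild) != 0 || pipe(toParent) != 0) return replies;
        pid_t pid = fork();
        if (pid == 0) {                            // child: one "renderer"
            close(toChild[1]); close(toParent[0]);
            char buf[256] = {};
            ssize_t n = read(toChild[0], buf, sizeof(buf) - 1);
            std::string out = "rendering: " + std::string(buf, n > 0 ? n : 0);
            write(toParent[1], out.c_str(), out.size());
            _exit(0);
        }
        close(toChild[0]); close(toParent[1]);     // parent: send scene state
        write(toChild[1], scene.c_str(), scene.size());
        close(toChild[1]);
        char buf[256] = {};
        ssize_t n = read(toParent[0], buf, sizeof(buf) - 1);
        replies.emplace_back(buf, n > 0 ? n : 0);
        close(toParent[0]);
        waitpid(pid, nullptr, 0);
    }
    return replies;
}
```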