Raytrace renderer using Ogre
-
- Gnoblar
- Posts: 6
- Joined: Thu Jul 13, 2006 5:48 am
- Location: China, ShenZhen
Raytrace renderer using Ogre
I wrote a raytrace renderer with Ogre,
but the rendering efficiency is not very good (too slow).
It currently supports these features:
1. Specular reflection
2. Refraction
3. Texture mapping
4. Multiple light sources
5. Diffuse lighting
6. Specular lighting
7. Ambient lighting
8. Soft shadows
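Features 5 through 7 in that list are the terms of the classic Phong lighting model. A minimal Python sketch of how one shading sample could be computed (the helper names and coefficients here are hypothetical illustrations, not the demo's actual code):

```python
import math

def normalize(v):
    # Scale a nonzero 3-vector to unit length.
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_shade(normal, light_dir, view_dir, light_color,
                ambient=0.1, kd=0.7, ks=0.3, shininess=32):
    """Ambient + diffuse + specular (Phong) for a single light.

    Direction vectors point away from the surface; returns a scalar
    intensity (grayscale for brevity).
    """
    n = normalize(normal)
    l = normalize(light_dir)
    v = normalize(view_dir)
    # Diffuse: Lambert's cosine law, clamped to zero for back-facing light.
    diffuse = kd * max(dot(n, l), 0.0)
    # Specular: reflect the light direction about the normal, then raise
    # the alignment with the view direction to the shininess power.
    r = tuple(2 * dot(n, l) * nc - lc for nc, lc in zip(n, l))
    specular = ks * max(dot(r, v), 0.0) ** shininess if dot(n, l) > 0 else 0.0
    return light_color * (ambient + diffuse + specular)

# Light directly above a horizontal surface, viewer also above:
print(phong_shade((0, 1, 0), (0, 1, 0), (0, 1, 0), 1.0))  # ≈ 1.1 (0.1 + 0.7 + 0.3)
```

Feature 4 (multiple light sources) then amounts to summing the diffuse and specular terms over all lights before clamping to the displayable range.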
Here are some screenshots.
Demo download here:
http://www.azure.com.cn/article.asp?id=229
Source code download here:
http://www.azure.com.cn/article.asp?id=295
Welcome to my site:
http://www.azure.com.cn/
- Chris Jones
- Lich
- Posts: 1742
- Joined: Tue Apr 05, 2005 1:11 pm
- Location: Gosport, South England
- x 1
- sinbad
- OGRE Retired Team Member
- Posts: 19269
- Joined: Sun Oct 06, 2002 11:19 pm
- Location: Guernsey, Channel Islands
- x 66
- Contact:
- xavier
- OGRE Retired Moderator
- Posts: 9481
- Joined: Fri Feb 18, 2005 2:03 am
- Location: Dublin, CA, US
- x 22
- skullfire
- Gremlin
- Posts: 150
- Joined: Sat Mar 19, 2005 7:51 pm
- Location: San Jose, Costa Rica
- Contact:
- Levia
- Halfling
- Posts: 45
- Joined: Fri Feb 03, 2006 9:56 pm
- Location: The Netherlands
-
- Halfling
- Posts: 53
- Joined: Mon Oct 17, 2005 2:11 pm
- Location: Thessaloniki - Greece
- Contact:
Neither do I ...
and even the direct link to the image (http://www.azure.com.cn/uploads/200702/ ... esult7.jpg) gives me a "server taking too long.." error message.
Tested under XP with both Firefox 2.0.01 and .. what's the other one .. the old one .. ah yes, Internet Explorer.
Guyb.
- xavier
- OGRE Retired Moderator
- Posts: 9481
- Joined: Fri Feb 18, 2005 2:03 am
- Location: Dublin, CA, US
- x 22
Re: Raytrace renderer use ogre
azureyes wrote: "but the rendering efficiency is not very good (too slow)"
To address a point sort of missed here...
What did you expect? Raytracing is still not really applicable to real-time rendering, even with high-powered hardware. It's still basically an offline process, and the fact that you get any framerate at all with it is an accomplishment in and of itself.
Most of the time when I hear that someone wants to adapt a traditionally "offline" process to Ogre, I recommend they just treat Ogre as a really fast offline renderer. It (Ogre) certainly is not set up for the normal offline rendering pipeline -- it's designed to render using hardware acceleration, and the hardware currently available usually is not really suitable for "offline" processing data flow. For example, the shaders you define in something like RenderMan do not map well to GPU shader processing.
- Game_Ender
- Ogre Magi
- Posts: 1269
- Joined: Wed May 25, 2005 2:31 am
- Location: Rockville, MD, USA
Outside of the game world there is growing use of real-time ray-traced rendering, especially in high-end CAD presentation in CAVE environments. I believe there are also small startup/university groups designing hardware accelerators for ray tracing.
Robotics @ Maryland AUV Team - Software Lead
- xavier
- OGRE Retired Moderator
- Posts: 9481
- Joined: Fri Feb 18, 2005 2:03 am
- Location: Dublin, CA, US
- x 22
Game_Ender wrote: "Outside of the game world there is growing use of real time raytrace rendering. Especially in high end CAD presentation in Cave environments. I believe there is a small startup/university group designing hardware accelerators for ray tracing as well."
That's what it would take, IMO -- hardware designed for the purpose. Commodity graphics hardware is still designed and optimized for gaming purposes. It would be nice to see hardware optimized for general-purpose global illumination.
-
- Gnoblar
- Posts: 11
- Joined: Thu Jun 08, 2006 12:47 pm
- Location: Hungary, Debrecen, Invictus HQ
A teacher and some of his students at the Computer Graphics Research Group, BME (BUTE, Hungary) wrote a "Raytrace Effects Car Driving Game" demo using the Ogre3D engine. They used raytrace-like effects.
http://www.cg.tuwien.ac.at/events/EG06/ ... tition.php
The 4th demo.
I don't know whether it showed up in the showcase forum.
Xir.
-
- Gnoblar
- Posts: 20
- Joined: Thu May 11, 2006 8:18 pm
- Location: Greensboro, NC, USA
xavier wrote: "That's what it would take, IMO -- hardware designed for the purpose. Commodity graphics hardware still is designed and optimized for gaming purposes. It would be nice to see hardware optimized for general-purpose global illumination."
That's both true and untrue, if by special hardware you mean expansion cards. There have been many projects (I am way too lazy to link them all right now, so I will namedrop: OpenRT, the MIT Cell processor competition, the University of Saarland, the '1995' demo at pouet.net) suggesting that people have been finding smarter methods for realtime raytracing, and we can expect it to become more commonplace as the quality of hardware and software get closer. Multithreading is apparently (I don't have specific knowledge, so correct me if I'm wrong) well suited to raytracing, so multicore processors can already give decent-considering-this-is-new framerates.
I foresee the gaming industry eventually ditching raster graphics entirely for realtime raytracing once this happens. Of course, it wouldn't be sudden; I'd expect it to be similar to the adoption of 3D acceleration and accelerated graphics cards in the late 90s. Raytracing gives you more eye candy with less effort, because you are simulating how light actually acts rather than trying to think of sneaky ways of creating illusions.
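The multithreading point above comes from raytracing's structure: every pixel's primary ray is independent, so an image splits naturally into scanlines traced concurrently. A sketch using Python's standard library (the per-pixel function is a toy stand-in, not a real tracer):

```python
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT = 64, 48

def trace_pixel(x, y):
    # Stand-in for a real per-pixel ray trace: any pure function of
    # (x, y) works, since pixels share no mutable state.
    return (x * 31 + y * 17) % 256

def render_row(y):
    # One scanline is a natural work unit: big enough to amortize
    # scheduling overhead, independent of every other scanline.
    return [trace_pixel(x, y) for x in range(WIDTH)]

def render_serial():
    return [render_row(y) for y in range(HEIGHT)]

def render_parallel(workers=4):
    # map() preserves row order, so the image assembles correctly
    # no matter which worker finishes first.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(render_row, range(HEIGHT)))

# Parallel and serial renders agree pixel-for-pixel.
assert render_parallel() == render_serial()
```

(CPython threads won't actually speed up pure-Python math because of the GIL; a real renderer would use native threads or multiple processes, where this decomposition scales nearly linearly with core count.)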
- xavier
- OGRE Retired Moderator
- Posts: 9481
- Joined: Fri Feb 18, 2005 2:03 am
- Location: Dublin, CA, US
- x 22
Multicore or multithreading has zero impact on how graphics are rendered on a hardware accelerator, because the GPU is still a single resource and all rendering commands and data go through a single channel to get there -- even with SLI. Effective hardware acceleration of offline processes will in fact require a fundamental change in the way GPUs operate -- raytracing needs more than just the vertex-stream processing that GPUs currently use, and fragments currently cannot reference other fragments or lighting data other than from actual light sources or texture data. While you certainly can approximate raytracing using those data sources, it's still a hack to get around the design limitations (intentions, really) of current commodity hardware.
-
- Goblin
- Posts: 297
- Joined: Thu May 12, 2005 2:30 pm
@nihilocrat
I agree with you; the way I see it, 3ds Max, for example, is about twice as fast using the Brazil ray tracer on multiple cores than on a single core. So I do not know why multicore would not make a ray tracer faster.
Also, with the new generation of GPUs, the days of generic raster APIs like Direct3D and OpenGL are numbered. These new cards come with general programming languages like CUDA and CTM, and they are very much generic multicore CPUs;
in the case of the nv8800 you can see it as a set of 16 independent SIMD CPUs, each one heavily hyper-threaded.
It is quite possible to write a full ray tracer using CUDA and one, two or even three nv8800s, totally bypassing OpenGL and Direct3D.
So maybe in the near future we may start seeing hardware-accelerated ray-tracer drivers for these cards that are on the order of 100 times faster than software ray tracers.
- PolyVox
- OGRE Contributor
- Posts: 1316
- Joined: Tue Nov 21, 2006 11:28 am
- Location: Groningen, The Netherlands
- x 18
- Contact:
For those who are interested, there's a (very long) discussion about this taking place at GameDev...
http://www.gamedev.net/community/forums ... _id=429607
Last edited by PolyVox on Sat Feb 17, 2007 1:11 am, edited 1 time in total.
-
- Silver Sponsor
- Posts: 597
- Joined: Sun Jan 07, 2007 11:55 pm
- Location: Cologne, Germany
- Contact:
Jovani wrote: "These new cards are coming with general programming languages like CUDA and CTM, and they are very much generic multicore CPUs"
Able to execute more or less the same code, yes. But they are absolutely not generic CPUs.
I don't get this never-ending hype about ray tracing. It is not as if we would suddenly get photorealism and uber-graphics by magic. Rasterization also has some advantages besides performance, e.g. creating visuals in a unique, non-realistic style is easier. By the way, all Pixar movies except Cars (which uses ray tracing partly, for reflections and ambient occlusion) are done with no ray tracing at all. Well, enough. That thread at GameDev shows you can have pointless wars about ray tracing vs. rasterization all day long.
Enough is never enough.
-
- Goblin
- Posts: 297
- Joined: Thu May 12, 2005 2:30 pm
One of the big differences from the older GPU model is that the set of units is symmetrical (unified model) and can do different things simultaneously, very much like the Cell or the Xenon, except that they have a lot more of them.
So yes, definitely the internal SPUs (as NVIDIA calls them) can be programmed to run different things in parallel. When you are using DX or OGL, the distribution of what does what is controlled by the drivers, but the beauty of the new languages is that now the programmer can use the units for anything.
http://arstechnica.com/news.ars/post/20061108-8182.html
Ray tracing, physics, AI, particle effects, animations, neural nets are just some of the CPU-intensive tasks that the new languages allow programmers to do with these cards.
They are truly consumer math co-processors, so the world will be quite different once these cards become mainstream.
They are quite expensive now at over $400, but I speculate that if the promised potential of the new interface to these cards is even remotely as powerful as it promises to be, there will be hundreds of applications adding support for them, and that will drive up demand, which in turn will drive the cost down.
About ray tracing, all I can say is that if the latest special effects we see in movies are a sign of what ray-traced scenes can and cannot do, it seems to me that ray tracing is the superior technique quality-wise; otherwise, why would packages like RenderMan, Brazil, and Mental Ray all use it for photorealism despite running in software, when they could use shaders to do the same job much faster in hardware?
- Praetor
- OGRE Retired Team Member
- Posts: 3335
- Joined: Tue Jun 21, 2005 8:26 pm
- Location: Rochester, New York, US
- x 3
- Contact:
-
- Silver Sponsor
- Posts: 597
- Joined: Sun Jan 07, 2007 11:55 pm
- Location: Cologne, Germany
- Contact:
Jovani wrote: "Ray tracing, physics, AI, particle effects, animations, neural nets, are just some of the cpu intensive task that the new language allow the programmers to do with these cards."
With a GPU you have a "CPU" with a quite slow connection to main memory. The local memory is obviously faster to access, but neither has good latency. You have many execution units, but also with bad latency, poor performance for jumps in code, and naturally only single-precision calculation. And it is only programmable through a driver.
For some tasks you can use such a card for CPU-like calculations. However, it makes no sense to say a GPU is more of a "general-purpose CPU".
Enough is never enough.
-
- Orc
- Posts: 468
- Joined: Sat Jan 27, 2007 12:06 pm
- Contact:
- JamesKilton
- Halfling
- Posts: 87
- Joined: Tue Jun 14, 2005 8:21 pm
- x 1
You last two posters DO realise that this thread is over a year and a half old?
Stop necro-posting, please.
Ogre.rb Project Lead