NVidia acquires AGEIA !!

Anything and everything that's related to OGRE or the wider graphics field that doesn't fit into the other forums.
MartinBean
Gnome
Posts: 331
Joined: Thu Oct 25, 2007 12:21 pm
Location: The Netherlands

NVidia acquires AGEIA !!

Post by MartinBean »

http://www.nvidia.com/object/io_1202161567170.html

Well, that's it... AGEIA has been sold. The big question is: will the products remain? Or will they be integrated into NVidia video cards? And what will happen to the PhysX engine, which could be used both non-commercially and commercially, even without AGEIA hardware?

Too bad this great physics engine falls into the hands of a single GPU brand. I would have liked it more if they had remained independent.
I have not failed... I've just found many ways that won't work
User avatar
eugen
OGRE Expert User
OGRE Expert User
Posts: 1422
Joined: Sat May 22, 2004 5:28 am
Location: Bucharest
x 8
Contact:

Post by eugen »

Very informative. I don't think it's bad that Ageia physics will be integrated into the Nvidia graphics architecture; there is only something to gain from this in terms of what the future brings for physics simulations.
I wonder what will happen with the Ageia API and their physics engine!?
User avatar
syedhs
Silver Sponsor
Silver Sponsor
Posts: 2703
Joined: Mon Aug 29, 2005 3:24 pm
Location: Kuala Lumpur, Malaysia
x 51

Post by syedhs »

That is good news, considering I have been investing a lot of my time in Ageia PhysX. It would be even better if NVIDIA decides to port some of Ageia's intensive physics calculations to NVIDIA CUDA, or maybe simply to the GPU.
MartinBean
Gnome
Posts: 331
Joined: Thu Oct 25, 2007 12:21 pm
Location: The Netherlands

Post by MartinBean »

Why is it good news? Now your physics efforts are bound to a single GPU brand: NVidia. This is not good for a game developer who is using PhysX intensively. How are you going to give the ATI game player physics?
I have not failed... I've just found many ways that won't work
beaugard
OGRE Contributor
OGRE Contributor
Posts: 265
Joined: Sun Mar 25, 2007 1:48 pm
x 2

Post by beaugard »

So does this mean they are giving up PhysX card development in the long run? That part seems to overlap heavily with Nvidia's current direction (unified architecture -> CUDA -> supercomputing such as their Tesla systems). I assumed the long-term plan for PhysX was also to use the mass consumer market as a springboard into pricier areas.

Maybe the CUDA stuff scared Ageia enough to make a good deal?
eLunir
Gremlin
Posts: 181
Joined: Thu Jan 17, 2008 10:04 pm

Post by eLunir »

This is actually VERY bad news, IMO (similar to something like MS acquiring Yahoo).
This means gaming will shift greatly in favour of Nvidia (well, for those titles that support the PhysX engine), which in turn means a new monopoly forming. The physics solution itself would surely be quite the eye candy (based on all the previous Nvidia physics we've seen, and considering the pretty nice potential of fluids/cloth in PhysX). But it would also mean that anyone not going for the Nvidia/PhysX solution would automatically be classified in a lower-tech sector (even custom-made engines, such as Crytek's, are no match for the upcoming solution). So, ATI, tough luck yet again.
Another funny (logical?) thing is that Nvidia would most likely attract developers to the new solution at first (like they've been doing all these years, providing all those marvellous developer tools for free). But after a certain time, no doubt the prices would rise again, and it wouldn't be surprising if they end up double (triple?) what PhysX costs now (and surely there would be no software-only version any more).
Heh, and as far as the current API goes, it is about to change too (since they would be rewriting it from Ageia's abstract driver to their own (Nvidia's) drivers):
"now bring GeForce-accelerated PhysX"
(just a quote about their plans for changes to the API).

So, the bottom line is: unless we get similar solutions from AMD/ATI or Intel, and provided Nvidia's marketing department doesn't go crazy, a large portion of the market is going to land on Nvidia's dish, and developers might very well find themselves more and more at their mercy. :twisted:
Horizon
Greenskin
Posts: 122
Joined: Thu Jun 02, 2005 3:24 pm

Post by Horizon »

You have to keep in mind that Ageia is actually a fabless chip manufacturer and not a physics library developer. This, in my mind, makes this a very good change for PhysX, as Nvidia is clearly interested in PhysX as well as the chips, so the library might get a little more attention.

AFAIK Ageia has actually outsourced all the work on the PhysX library to some Asian country, since all developer posts on the support forum are now made by people with Asian-sounding names. Not that there's anything wrong with Asian programmers, but I feel the library doesn't really get the developer attention it needs at the moment.
User avatar
betajaen
OGRE Moderator
OGRE Moderator
Posts: 3447
Joined: Mon Jul 18, 2005 4:15 pm
Location: Wales, UK
x 58
Contact:

Post by betajaen »

We've been talking about this on the NxOgre forum.

Apart from the "OH GOD! ALL MY CODE IS GOING TO BE USELESS NOW!" moments, I'd still prefer Ageia to be independent. But as businesses go, Nvidia is probably the best one, and I'm at least thankful that it isn't AMD (look what they did with ATI). At the very least we should be able to use Nvidia GPUs now, and if you don't have one you can use the CPU, as it has been since forever.

Anyway, good news for us independent developers: they are keeping the SDK free.
User avatar
syedhs
Silver Sponsor
Silver Sponsor
Posts: 2703
Joined: Mon Aug 29, 2005 3:24 pm
Location: Kuala Lumpur, Malaysia
x 51

Post by syedhs »

First of all, as long as the SDK is free, I think it should be fine. And I don't think NVIDIA would foolishly change the Ageia design so that it depends on an NVIDIA card; that would just be a plain stupid move. But there could be an option to link it to NVIDIA CUDA, so it is good news for those who are using NVIDIA anyway.

For me, it is better for Ageia - okay, maybe just fairly good news for us all :wink:
User avatar
betajaen
OGRE Moderator
OGRE Moderator
Posts: 3447
Joined: Mon Jul 18, 2005 4:15 pm
Location: Wales, UK
x 58
Contact:

Post by betajaen »

When the PPU came around, the SDK gradually implemented support for it over several months. By default the SDK uses the processor to simulate the physics, and I expect it will be the same with the GPU. From a C++ PhysX developer's point of view, it could be just a matter of a few more flags and classes in the SDK.
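
To give an idea of what that looks like already: in the 2.x SDK, choosing hardware or software simulation is basically just a flag on the scene descriptor. A rough sketch from memory (so treat the exact names as approximate rather than gospel), with a software fallback when no PPU is present:

Code: Select all

#include <NxPhysics.h>

NxPhysicsSDK* sdk = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);

NxSceneDesc sceneDesc;
sceneDesc.gravity = NxVec3(0.0f, -9.81f, 0.0f);
sceneDesc.simType = NX_SIMULATION_HW;        // ask for hardware (PPU) simulation
NxScene* scene = sdk->createScene(sceneDesc);
if (!scene)
{
    // No hardware available - fall back to the CPU path.
    sceneDesc.simType = NX_SIMULATION_SW;
    scene = sdk->createScene(sceneDesc);
}

If a GPU path really does turn out to be "a few more flags", it would presumably slot in at the same place.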

There will be a ton of money to be made with this, and I doubt Ageia and Nvidia will mess it up by alienating their developers and rushing in.
User avatar
SpaceDude
Bronze Sponsor
Bronze Sponsor
Posts: 822
Joined: Thu Feb 02, 2006 1:49 pm
Location: Nottingham, UK
x 3
Contact:

Post by SpaceDude »

We will have to see what NVidia do with this, but seeing as NVidia already have a bunch of free developer tools (PerfHUD, FX Composer, texture atlas tools, Cg, CUDA, etc.), as a PhysX user I'm not too worried. Their business seems to be based on selling hardware and not software, which is good for developers.

I think it would be a bad move for NVidia to only allow the PhysX library to work in conjunction with their graphics cards. That would put developers off using PhysX, because they would have to come up with an alternative for gamers using other graphics cards, and it's not realistic to expect a games developer to implement physics using two separate libraries. What's more likely is that the PhysX library will remain as it is but simply work faster on NVidia cards compared to other graphics cards. That way developers can still stick to one physics library, and NVidia win out because games will run faster on their hardware, resulting in gamers buying their GPU/PPU cards.

They could also continue to sell physics-only cards to be used in conjunction with other graphics cards, although that may not be practical or profitable.
User avatar
Kojack
OGRE Moderator
OGRE Moderator
Posts: 7157
Joined: Sun Jan 25, 2004 7:35 am
Location: Brisbane, Australia
x 534

Post by Kojack »

As long as the PhysX sdk continues to be free for all, I'll be happy.
Of course nvidia's buyout of 3dfx didn't exactly leave things alone (not that I was ever a Glide fan; my first real 3d card (I don't count the Virge) was an nvidia TNT).

Hopefully they'll get Ageia to fix the system driver stuff so we can distribute a small dll instead of the larger installer (2.8 was going to do something about that apparently).

I guess nvidia felt they should grab Ageia while they still can. Since Intel bought Havok, I'm sure AMD was at least considering Ageia.
jjp
Silver Sponsor
Silver Sponsor
Posts: 597
Joined: Sun Jan 07, 2007 11:55 pm
Location: Cologne, Germany
Contact:

Post by jjp »

betajaen wrote:that it isn't AMD (look what they did with ATI)
OT: but what have they done to them? The current ATI cards had been in development for quite some time before ATI got bought up, so whatever this change means for ATI, we can't see anything of it at this point.

Actually, I don't think it is bad that NVidia now owns Ageia. It is certainly better than Intel getting them (like they did with Havok, to make sure that engine won't make use of GPUs). Dedicated physics hardware doesn't make too much sense anyway. I don't fear them excluding ATI hardware, because that would certainly make very few developers use the PhysX engine. Plus, history shows that NVidia mostly acts very reasonably in this regard; lots of their free tools and libraries are useful whether you use NVidia hardware or not.
Enough is never enough.
User avatar
syedhs
Silver Sponsor
Silver Sponsor
Posts: 2703
Joined: Mon Aug 29, 2005 3:24 pm
Location: Kuala Lumpur, Malaysia
x 51

Post by syedhs »

Kojack wrote:
Hopefully they'll get Ageia to fix the system driver stuff so we can distribute a small dll instead of the larger installer (2.8 was going to do something about that apparently).
Actually, they have already released a 'mini driver' for 2.7.3, 2.7.2 and 2.6.4, but you have to specifically request it - and lately they don't seem to be entertaining the requests. FYI, my 2.7.3 is about 3 MB :)
User avatar
Jabberwocky
OGRE Moderator
OGRE Moderator
Posts: 2819
Joined: Mon Mar 05, 2007 11:17 pm
Location: Canada
x 218
Contact:

Post by Jabberwocky »

Change is always a bit scary, but I'm going to throw my hat in with the "this is a good thing" crowd. The reason I like PhysX is 100% for the powerful and free physics SDK, and 0% for the separate PPU. If the PPU goes away in favor of a GPU-based solution, I'd be completely happy with that. Plus, as other folks have mentioned, I think the SDK will probably get a little more love with NVIDIA behind it.
jjp
Silver Sponsor
Silver Sponsor
Posts: 597
Joined: Sun Jan 07, 2007 11:55 pm
Location: Cologne, Germany
Contact:

Post by jjp »

Big question mark for me though: how well does using a GPU to accelerate physics actually work? I imagine latencies are a considerable problem. Dedicated physics hardware is designed with this in mind; GPUs definitely are not.
Enough is never enough.
User avatar
Falagard
OGRE Retired Moderator
OGRE Retired Moderator
Posts: 2060
Joined: Thu Feb 26, 2004 12:11 am
Location: Toronto, Canada
x 3
Contact:

Post by Falagard »

I'd have been happier if Ageia had stayed independent and both Nvidia and ATI had been able to write physics drivers for their GPUs.
User avatar
JohnJ
OGRE Expert User
OGRE Expert User
Posts: 975
Joined: Thu Aug 04, 2005 4:14 am
Location: Santa Clara, California
x 4

Post by JohnJ »

Am I the only one who thinks multi-core CPUs should be used for (macro-)physics? All the latest games are constantly pushing GPUs to their limits, and it seems CPU-based physics with today's multi-core processors makes a lot more sense than using up your precious GPU power. Although I do agree that micro-physics (particles, etc.) can benefit from GPU acceleration where sheer CPU-GPU bandwidth could be a problem without it.
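
Just to illustrate what I mean by macro-physics on multiple cores: broad-phase and solver details aside, the per-body integration step is embarrassingly parallel and splits across cores trivially. A toy sketch (names like Body are invented for the example, it uses std::thread purely for brevity, and it ignores collisions, which are the part that doesn't partition so neatly):

Code: Select all

#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

struct Body { float pos[3]; float vel[3]; };
std::vector<Body> bodies;

// Integrate a contiguous range of bodies with a simple explicit Euler step.
void integrateRange(std::size_t begin, std::size_t end, float dt)
{
    for (std::size_t i = begin; i < end; ++i)
    {
        bodies[i].vel[1] -= 9.81f * dt;              // gravity on the y axis
        for (int a = 0; a < 3; ++a)
            bodies[i].pos[a] += bodies[i].vel[a] * dt;
    }
}

// Split the body list into one chunk per hardware core and integrate in parallel.
void integrateAll(float dt)
{
    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = (bodies.size() + cores - 1) / cores;
    std::vector<std::thread> workers;
    for (unsigned c = 0; c < cores; ++c)
    {
        const std::size_t begin = c * chunk;
        const std::size_t end = std::min(bodies.size(), begin + chunk);
        if (begin < end)
            workers.emplace_back(integrateRange, begin, end, dt);
    }
    for (std::thread& t : workers)
        t.join();
}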
beaugard
OGRE Contributor
OGRE Contributor
Posts: 265
Joined: Sun Mar 25, 2007 1:48 pm
x 2

Post by beaugard »

Am I the only one ...
No, I agree completely. What are we going to use all those CPU cores for otherwise? ;)
User avatar
SpaceDude
Bronze Sponsor
Bronze Sponsor
Posts: 822
Joined: Thu Feb 02, 2006 1:49 pm
Location: Nottingham, UK
x 3
Contact:

Post by SpaceDude »

JohnJ wrote:and it seems CPU-based physics with today's multi-core processors makes a lot more sense than using up your precious GPU power.
Isn't that pretty much what you get when using the PhysX library in software mode? AFAIK, PhysX runs in a separate thread, making it suitable for multi-core processors. I may be missing something though.
User avatar
xavier
OGRE Retired Moderator
OGRE Retired Moderator
Posts: 9481
Joined: Fri Feb 18, 2005 2:03 am
Location: Dublin, CA, US
x 22

Post by xavier »

I don't think this is as big a deal as many think. For one, Ageia was a dying concern; those using PhysX ought to be happy that someone decided to keep them alive. Ageia must have had some tech that nVidia found valuable enough to buy up the company (the same was true of 3dfx).

In terms of PhysX being around and free, I think nVidia's track record of high-quality free tools and SDKs speaks for itself in this regard.

And at the end of the day, it's entirely possible that the annoying (to me) PhysX Runtime gets rolled into the nVidia drivers and we no longer need to install the Runtime to use PhysX drivers...
Do you need help? What have you tried?


Angels can fly because they take themselves lightly.
jjp
Silver Sponsor
Silver Sponsor
Posts: 597
Joined: Sun Jan 07, 2007 11:55 pm
Location: Cologne, Germany
Contact:

Post by jjp »

JohnJ wrote:Am I the only one who thinks multi-core CPUs should be used for (macro-)physics?
I think the point is that x86 CPUs are not an architecture that is efficient at things like physics calculations. You could add quite a few more CPU cores and fluid simulation still wouldn't run at real-time speed.
Enough is never enough.
User avatar
syedhs
Silver Sponsor
Silver Sponsor
Posts: 2703
Joined: Mon Aug 29, 2005 3:24 pm
Location: Kuala Lumpur, Malaysia
x 51

Post by syedhs »

SpaceDude wrote: Isn't that pretty much what you get when using the PhysX library in software mode? AFAIK, PhysX runs in a separate thread thus making it suitable for multi-core processors. I may be missing something though.
A separate thread and a separate core are not always the same thing.
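
The OS scheduler decides where a thread actually runs; it can bounce it around or even park it on the same core as your render thread. If you really want the physics thread on its own core, you have to pin it yourself. A minimal Win32 illustration (which core is actually free is an assumption here):

Code: Select all

#include <windows.h>

// Stand-in for the physics update loop running on its own thread.
DWORD WINAPI physicsThread(LPVOID)
{
    // ... step the simulation here ...
    return 0;
}

int main()
{
    HANDLE h = CreateThread(NULL, 0, physicsThread, NULL, 0, NULL);
    // Pin the physics thread to core 1 (bitmask 0x2), assuming the
    // main/render thread stays on core 0.
    SetThreadAffinityMask(h, 0x2);
    WaitForSingleObject(h, INFINITE);
    CloseHandle(h);
    return 0;
}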
User avatar
SpaceDude
Bronze Sponsor
Bronze Sponsor
Posts: 822
Joined: Thu Feb 02, 2006 1:49 pm
Location: Nottingham, UK
x 3
Contact:

Post by SpaceDude »

A little more information in this interview:

http://www.firingsquad.com/features/age ... efault.asp
User avatar
Kojack
OGRE Moderator
OGRE Moderator
Posts: 7157
Joined: Sun Jan 25, 2004 7:35 am
Location: Brisbane, Australia
x 534

Post by Kojack »

So which do you think will be bought out next, and by whom?
The Killer NIC network card (400 MHz Network Processing Unit), or the Intia AI processor (which can do pathfinding 200 times faster than A* on a CPU)?
:)