Having a discrete physics card was always a bad idea except for those who were flush with cash; integrating it into current GPUs is the more feasible approach, which is what I suspect Nvidia is going for here.
Buy Ageia, integrate the tech into Nvidia graphics cards. Everybody wins, except maybe DAAMIT.
Ageia's physics cards on their own were pretty much a load of bullshit on par with the Killer NIC, weren't they?
Having this tech integrated into next gen GPUs would only help, but I wonder what the price vs. performance increases will be?
I think there was a point to them, but I never really saw the value. It doesn't matter now: both physics developers have been swallowed up by big companies, and Intel bought Ageia's competitor, whose name I can't remember off the top of my head, but yeah. PhysX cards never seemed worth the cash.
Rumor is that Intel will have some 8-core or even 16-core beast, with 4 cores acting as a GPU and 3 cores acting as a physics processor. nVidia must be afraid of this.
Not if all 4 cores are Intel integrated graphics.
Intel bought Havok.
I think in a few years we're going to have some sort of computer world civil war..
Unlike now, when there are no companies competing.
Anyways, I saw this and I thought "GPU-PPU single card" and then I thought "Hey what if it's still priced somewhat like a GPU" and then I thought "Jesus fuck that would rule"
I mean, I'm thinking the 9800XT2 or whatever is just two 8800 Ultras on one card, right? So what if they threw a PhysX's guts onto an 8800GT for maybe $50 more, since they can share some materials and the board, and really all that's needed on top of the vanilla card is the chip itself? If they could do that, you'd have faster communication, similar to how a dual-core architecture beats two separate processors, right?
Am I the only one who thinks that would be pretty cool?
<sarcasm>No, I hate this idea. I think we should just go back to having CPU software rendering and no physics.</sarcasm>
Nvidia were already experimenting with physics 'shaders' so this is a perfect fit.
Maybe this was the only way that their technology would ever be supported by anyone.
Actually, it's a great business idea: create a decent technology and pray some big company comes along and swallows you up so you can escape with a golden parachute. And they won't even make you do infomercials to hock it like that Oxyclean guy who doesn't know how to say no to Grecian 5.
So the problem with having separate cards for this is that parallel programming is hard, and anything that can be put on a separate card is a prime candidate for farming off to a different core without too much difficulty. Now that multi-core CPUs are the norm, it will be very hard to deliver value for money with cards that do just physics or AI or whatever else you care to name.
Eventually even the GPU might become unnecessary if we're talking about hundreds of cores in the future.
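To make that concrete, here's a rough sketch of what "farming it off to a different core" looks like in practice. The names (Body, updatePhysics, renderFrame) and the double-buffering are made-up stand-ins, not any real engine's API: the physics step for the next frame runs on a spare core while the current frame renders, which is exactly the work a dedicated physics card would otherwise be sold to do.

```cpp
// Hypothetical sketch: run the physics step on a spare CPU core instead of a
// dedicated add-in card. Names (Body, updatePhysics, renderFrame) are made up.
#include <cstdio>
#include <future>
#include <vector>

struct Body { float x, y, vx, vy; };

// Integrate every body forward by dt (toy physics: gravity only).
void updatePhysics(std::vector<Body>& bodies, float dt) {
    for (Body& b : bodies) {
        b.vy -= 9.81f * dt;
        b.x  += b.vx * dt;
        b.y  += b.vy * dt;
    }
}

// Stand-in for the renderer: just reads the current state.
void renderFrame(const std::vector<Body>& bodies) {
    std::printf("rendering %zu bodies, first at y=%.2f\n",
                bodies.size(), bodies.empty() ? 0.f : bodies[0].y);
}

int main() {
    std::vector<Body> current(1000, Body{0.f, 100.f, 1.f, 0.f});
    std::vector<Body> next;

    for (int frame = 0; frame < 3; ++frame) {
        next = current;  // copy, then integrate the copy on another core
        auto physicsJob = std::async(std::launch::async, updatePhysics,
                                     std::ref(next), 1.f / 60.f);
        renderFrame(current);   // main thread renders the finished state
        physicsJob.wait();      // sync, then the new state becomes current
        std::swap(current, next);
    }
    return 0;
}
```

Once every machine ships with two or four cores anyway, that one std::async call is hard to compete with at add-in-card prices.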
I remember hearing that the AMD/ATI solution to this was going to be writing an instruction set that would let an AMD (ATI) card calculate the physics side of things. The example I remember was someone buying a top-of-the-line card and, instead of throwing the old one out, just putting it in the secondary video card slot.
At the time they were claiming it would be more useful than SLI and way less gay.
Am I hallucinating?
Has anyone thought that maybe this move isn't to put this tech on the GPU, but maybe as part of the motherboard chipset?
Would it be possible for a physics processor to be part of north or southbridge? I don't know much of how these chips work with the cpu so ???
But Nvidia does a lot more than just GPUs these days.
I'm fairly sure, though, that the PPU works really closely with the GPU, and they're essentially working on very similar stuff. So I think combining the two makes more sense than integrating it into the motherboard chipset.
About time, I hated installing that stupid Ageia driver even though I didn't have the card. I imagine getting swallowed up is exactly what Ageia wanted anyway; they must have known no one was going to drop $X on a card that isn't even necessary.
Does it speed up Crysis? If it can do it by even 5-10 frames for the people running quad-core, SLI 8800GTX monsters, then the technology will eventually be worth it.
So does this effectively mean the end of pointless stand-alone physics cards?
Gawd, I hope so. From what I had seen the physics applications didn't really do much for the few games that even supported it. With this tech firmly integrated into graphics cards, maybe now we can actually see it supported and put to good use.
this just in: Physics card prices plummet, because of physics!
In other news, graphics card prices soar, because of physics!
I'm just kidding, unless the market follows suit.
Anyway, does this make future games faster, or is this just another feature? I haven't played any physics-accelerated games, except maybe Wii Sports (assuming it uses it).
Right now it's purely a PC thing, the consoles don't use it.
From what I can tell the games that use it look ever-so-slightly better... not nearly better enough to justify the cost of the card. And I've heard that some games take a framerate hit with it on.
Not to mention there was that oh-so hilarious physics tech demo from one of the card makers that supposedly showed off how swank your games would look now that you had the card. (It looked okay without physics, but much better with the physics.) Some folks managed to make a minuscule tweak to the program to trick the demo into thinking the card was installed -- and presto, the demo acted like it had the physics card, even without a physics card installed!
Right, but I think the problem with the framerate is that you have objects being sent to the PPU and then sent on to the GPU. If it's just CPU -> GPU/PPU, then we're talking much faster communication, and ergo I would assume better framerates.
Like I said, integrating physics chip stuff onto the GPU will eventually make the technology worthwhile, and I'm glad things are going this direction. But trying to justify an entire separate card for physics was just plain silly from cost and technical perspectives.
Hold it!
Ageia debuted the world's first dedicated physics processor in 2006. The chip is designed to handle real-time physics calculations that allow for more impressive and diverse visual effects. In addition to PC titles, Ageia's PhysX software is employed in the PS3, Xbox 360, and Wii.
from: http://www.gamespot.com/news/6185534.html
Just the software?
there's more:
The PhysX chip can be found in all three of the modern gaming consoles--Playstation 3, Xbox 360, and the Wii--as well as in add-in cards for PC gaming. Developers have to write their games with the processor in mind to unlock the performance, and over 140 titles are available for consoles and PCs that support the PhysX technology.
from: http://crave.cnet.com/8301-1_105-9864532-1.html?part=rss&tag=feed&subj=Crave
Take that!
So, I guess AMD/ATI will have to invent their own from now on.
So what is going on with AMD's supposed all-in-one CPU-GPU-motherboard idea anyway?
I think that article was just written by some idiot who doesn't understand that Ageia PhysX is a software middleware solution, rather than there being a secret physics processor chip in every single home console.
AMD's Fusion? I thought that isn't scheduled to show up until 2009, in which case Intel is going to haul ass on getting Larrabee finished well before then.
Apparently graphics chips are designed to do a ton of floating point calculations which make them much better at physics simulation than even the beastiest hojillion core CPU that Intel can churn out.
From a website using GPUs for simulating molecular dynamics:
The high degree of parallelism and floating point arithmetic capability of GPUs can attain performance levels twenty times that of a single CPU core
I have a feeling that a physics card isn't too different architecturally from a GPU. nVidia probably wanted the experience the Ageia folks have in hardware based physics simulation, since it seems like CUDA could open up a whole new market for nVidia cards.
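Out of curiosity, here's a hedged, toy sketch of what that looks like in CUDA. The Particle struct and kernel are made up for illustration and have nothing to do with Ageia's or nVidia's actual code: every particle gets its own GPU thread, so the whole integration step becomes a huge pile of independent floating point operations, which is exactly the shape of work a GPU is built for.

```cpp
// Toy CUDA sketch: one GPU thread per particle. Purely illustrative; this is
// not PhysX or any real middleware, just the shape of the workload.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

struct Particle { float x, y, z, vx, vy, vz; };

__global__ void integrate(Particle* p, int n, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    p[i].vy -= 9.81f * dt;   // gravity
    p[i].x  += p[i].vx * dt;
    p[i].y  += p[i].vy * dt;
    p[i].z  += p[i].vz * dt;
}

int main() {
    const int n = 1 << 20;   // about a million particles
    std::vector<Particle> host(n, Particle{0.f, 100.f, 0.f, 1.f, 0.f, 0.f});

    Particle* dev = nullptr;
    cudaMalloc(reinterpret_cast<void**>(&dev), n * sizeof(Particle));
    cudaMemcpy(dev, host.data(), n * sizeof(Particle), cudaMemcpyHostToDevice);

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    for (int step = 0; step < 60; ++step)      // one simulated second at 60 Hz
        integrate<<<blocks, threads>>>(dev, n, 1.f / 60.f);

    cudaMemcpy(host.data(), dev, n * sizeof(Particle), cudaMemcpyDeviceToHost);
    cudaFree(dev);
    std::printf("particle 0 fell to y = %.2f\n", host[0].y);
    return 0;
}
```

A PPU was presumably doing something in this spirit on its own silicon; once the GPU can run the same kind of kernels, a separate card mostly just adds an extra trip across the bus.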