At Hot Chips today, Microsoft's Xbox team unveiled details of the system-on-a-chip (SoC) that powers the newer, slimmer Xbox 360 250GB model. Produced on the IBM/GlobalFoundries 45nm process, it's fair to say the new SoC is the first mass-market, desktop-class processor to combine a CPU, GPU, memory, and I/O logic on a single piece of silicon. The goal of the consolidation was, of course, to lower the cost of making the console: fewer distinct chips, a smaller motherboard, and fewer expensive fans and heatsinks.
So do you think this is going to be a thing, or do you think it will be limited to special applications like this?
GnomeTank (What the what?) - Portland, Oregon - Registered User, regular
edited August 2010
For now, it's a limited application thing, I think. Intel killing off Larrabee was really the first sign that completely integrated GPU/CPU systems were not coming to the mainstream desktop for a while.
Right now discrete graphics are still very much "the thing" for mainstream desktops. Obviously consoles are a completely different ball of wax, as both the PS3 and 360 use some form of SoC (if you want to call Cell an SoC; it's kind of the same concept, I guess).
I think just the fact that it involves a GPU would limit it to special applications. The other place I could see this being really useful is in laptops, and there it'd only really matter for gaming models or portable workstations. GPU computing (to my very limited understanding, anyway) doesn't seem to have a whole lot of uses at this point in time (mainly video encoding or stuff like Folding@Home), and those seem to be tasks you'd usually use a desktop for.
This being said, I'm all for more powerful gaming laptops with longer battery life/less heat issues/lower cost, if any of these can be made possible by this.
Ayulin on
GnomeTank (What the what?) - Portland, Oregon - Registered User, regular
edited August 2010
GPU computing doesn't have a lot of use in mainstream applications, perhaps, but it's gaining a foothold incredibly quickly in the sciences and in research, because you can build a supercomputer's equivalent in FLOPS for 1/100th the price. Several big physics research labs have recently announced large GPU farms.
The real issue, and why this continues to be such a hard nut to crack, is that GPUs and general-register CPUs are really nothing alike, at all, besides both being made of silicon and using transistors.
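To make that contrast concrete, here's a minimal, purely illustrative CUDA sketch (not from the thread; the names and sizes are made up): the same floating-point update written once as a serial loop for a general-purpose core and once as a data-parallel kernel that the GPU spreads across thousands of lightweight threads.

```cpp
// Illustrative only: one serial CPU loop vs. a CUDA kernel doing the same work.
#include <cstdio>
#include <vector>

// CPU version: one general-purpose core walks the array element by element.
void saxpy_cpu(int n, float a, const float* x, float* y) {
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

// GPU version: thousands of lightweight threads each handle one element.
__global__ void saxpy_gpu(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

    float *dx, *dy;
    cudaMalloc(&dx, n * sizeof(float));
    cudaMalloc(&dy, n * sizeof(float));
    cudaMemcpy(dx, hx.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    saxpy_gpu<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
    cudaMemcpy(hy.data(), dy, n * sizeof(float), cudaMemcpyDeviceToHost);

    printf("y[0] = %f\n", hy[0]);  // expect 4.0
    cudaFree(dx);
    cudaFree(dy);
    return 0;
}
```

The kernel body is almost identical to the loop body; the difference is that the GPU hands each index to its own thread, which is exactly the kind of workload general-register CPUs of this era struggle to match.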
It will become the norm (if it isn't already) on mobile hardware and probably on general-purpose machines (cheap laptops, POS systems, etc.). It's hard to say whether it will take off in the desktop/workstation market, and I don't really see a real use in the server market.
While a cool idea, the system itself has a self-induced lag built into it, so it is not faster than prior consoles. It's pretty obvious why they did this, but in the end all the nifty new hardware is limited to acting like a legacy system.
I think there's a future for this. There will always be a demand for smaller, more compact devices -- and consolidating CPU and GPU allows them to cool both at the same time, and by integrating the north and south bridge, as well as I/O and memory, they can also facilitate even quicker communication between the separate parts. It'll lead to greater speed for less cooling and cheaper components.
Dehumanized on
GnomeTank (What the what?) - Portland, Oregon - Registered User, regular
edited August 2010
You wouldn't want your GPU on your northbridge. GPUs tend to use direct access to memory across a bus, without a true memory controller getting in the way. It's incredibly fast, and it lets them build big, wide pipes to push those gobs and gobs of textures through.
Again, GPUs and CPUs don't work even remotely the same way, which is one of the primary issues in making this work well.
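For a rough sense of why those wide pipes matter: peak memory bandwidth is approximately (bus width in bytes) x (memory clock) x (transfers per clock). A tiny back-of-the-envelope sketch, with numbers picked purely for illustration rather than taken from any particular console or card:

```cpp
// Back-of-the-envelope bandwidth math; the example figures are illustrative.
#include <cstdio>

// GB/s = (bus width in bits / 8) * memory clock in Hz * transfers per clock / 1e9
double peak_bandwidth_gb(double bus_bits, double clock_mhz, double rate) {
    return (bus_bits / 8.0) * (clock_mhz * 1e6) * rate / 1e9;
}

int main() {
    printf("128-bit @ 700 MHz, DDR : %.1f GB/s\n", peak_bandwidth_gb(128, 700, 2));
    printf("256-bit @ 900 MHz, DDR : %.1f GB/s\n", peak_bandwidth_gb(256, 900, 2));
    return 0;
}
```

Double the bus width or the effective data rate and the peak doubles with it, which is why graphics boards run such wide memory buses right next to the GPU.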
I think there is an average-consumer application for it. Practically every desktop and laptop motherboard ships with integrated graphics. Could this turn the usual barely-functional integrated GPUs into something just above passable?
While a cool idea, the system itself has a self-induced lag built into it, so it is not faster than prior consoles.
Source?
It's pretty obvious why they did this, but in the end all the nifty new hardware is limited to acting like a legacy system.
Not really. If anything it should have a faster connection to the CPU and main memory, and aside from providing better integrated graphics, it opens up the option of having the CPU's on-die shaders (or whatever the equivalent ends up being; I imagine it'll expose a similar programming model) used to accelerate rendering alongside a normal graphics card. Sort of like SLI/Crossfire, but without the need for a second video card. It might even be able to function as simply additional shaders available to the card, instead of working as a second GPU in such a system.
I imagine operations that need to work on tons of memory (large textures) will be handled by the normal graphics cards gamers already own, but it's a neat idea for getting some more cost-effective rendering power in games that won't see much improvement from more/faster cores.
Barrakketh on
Rollers are red, chargers are blue....omae wa mou shindeiru
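To illustrate what "extra shaders alongside the card" could look like from a programmer's point of view, here's a hypothetical CUDA sketch (my own illustration, not anything Microsoft, AMD, or Nvidia actually ships). It enumerates every CUDA device in the system, integrated or discrete, and naively splits one job across them:

```cpp
// Hypothetical: split one workload across every CUDA device present
// (e.g. an on-die GPU plus a discrete card). Not a real SLI/Crossfire
// scheme, just an illustration of the "extra shaders" idea.
#include <cstdio>
#include <vector>

__global__ void scale(float* data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);

    const int n = 1 << 20;
    std::vector<float> host(n, 1.0f);
    int chunk = n / (deviceCount > 0 ? deviceCount : 1);

    for (int dev = 0; dev < deviceCount; ++dev) {
        cudaSetDevice(dev);

        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("Device %d: %s (%d multiprocessors)\n",
               dev, prop.name, prop.multiProcessorCount);

        int offset = dev * chunk;
        int count  = (dev == deviceCount - 1) ? n - offset : chunk;

        float* d;
        cudaMalloc(&d, count * sizeof(float));
        cudaMemcpy(d, host.data() + offset, count * sizeof(float),
                   cudaMemcpyHostToDevice);
        scale<<<(count + 255) / 256, 256>>>(d, count, 2.0f);
        cudaMemcpy(host.data() + offset, d, count * sizeof(float),
                   cudaMemcpyDeviceToHost);
        cudaFree(d);
    }
    printf("host[0] = %f\n", host[0]);  // expect 2.0 if any device was found
    return 0;
}
```

A real implementation would weight the split by each device's actual throughput and hide the copies behind the rendering work, which is exactly the coordination problem that makes SLI/Crossfire-style schemes hard.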
Then you missed the point. It's not supposed to be faster, it's supposed to consume less power and generate less heat. They basically made it perform exactly like the other 360s, which is exactly the idea behind a console.
Barrakketh on
Then you missed the point. It's not supposed to be faster, it's supposed to consume less power and generate less heat. They basically made it perform exactly like the other 360s, which is exactly the idea behind a console.
Nah, I didn't miss the point. There's a concrete reason why they would do it: it's a console, so they can have different hardware but not different performance. I would imagine they're saving that extra power for Kinect, since they have the "Kinect ready" statement on the boxes and so on.
I would imagine they're saving that extra power for Kinect, since they have the "Kinect ready" statement on the boxes and so on.
Nope. There's a special connector for Kinect on the new 360. If you're not using one of the new 360s it'll require an external power source via a special cable and wall wart. That's all it means.
Barrakketh on
proyebat (GARY WAS HERE - ASH IS A LOSER) - Registered User, regular
Nah, I didn't miss the point. There's a concrete reason why they would do it: it's a console, so they can have different hardware but not different performance. I would imagine they're saving that extra power for Kinect, since they have the "Kinect ready" statement on the boxes and so on.
If Microsoft did dish out faster 360s, it couldn't tell game designers to build games around the new hardware; that would be unfair to previous console owners. The only benefit would be better framerates.
I can imagine a universe where Microsoft would make the faster console, tell developers to build their games on that new tech, and tell all previous console owners to buy a new console or else new games play like shit. Actually, that sounds very much like Microsoft.
Judging by the image in the Ars post, it's a bit of a stretch to call it a true SoC, because it looks like the I/O logic is still handled by a separate southbridge that is not part of the actual chip. Intel already put a GPU (a crappy 3D one) on the same package with its Clarkdale/Arrandale chips, and on-die with its recent Pineview Atoms. If we're talking about actually integrating the GPU into the same die, then Intel is already moving in that direction with its upcoming Sandy Bridge chips, which also consolidate the I/O functionality into the new Cougar Point PCH.
Like GnomeTank stated above, it's kind of limited application-wise, because doing this for a game console and doing it for a mainstream desktop are pretty different problems.
As far as mobile computing goes, we're already seeing mobile SoCs with fairly impressive GPU capabilities like Nvidia's Tegra platform and the A4 chip in Apple's offerings.
krapst78 on
Hello! My name is Inigo Montoya! You killed my father prepare to die!
Looking for a Hardcore Fantasy Extraction Shooter? - Dark and Darker
Yes, Snapdragon (Qualcomm's ARM-based SoC, whose graphics tech came from ATI/AMD) just reached 1.5 GHz, and Intel just released a dual-core Atom CPU. Even VIA has some good mobile/small CPUs.
It's great for portable devices and lower-end desktops. I look forward to seeing what can be done with Mini-ITX systems a year from now.
GPU computing doesn't have a lot of use in mainstream applications, perhaps, but it's gaining a foothold incredibly quickly in the sciences and in research, because you can build a supercomputer's equivalent in FLOPS for 1/100th the price. Several big physics research labs have recently announced large GPU farms.
Yeah, it also has a lot of application in the energy industry for finite element analysis. Because GPUs natively allow for pretty complex vertex calculations, you can use something like CUDA to turn the GPU into a supercharged FPU. Future on-die CPU/GPU combos should make those applications go even faster, because you won't have any interim chips, or the speed of light (no shit), slowing you down when the GPU and CPU need to talk. Jobs that used to take a week or two to process are being completed within a few hours, and once the GPUs get onto the die, they think it may push that down to less than an hour. And when you need to get answers to someone who is trying to use your information to make a decision worth millions of dollars, time really is money.
GungHo on
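As a toy illustration of the kind of number crunching being described (a sketch of my own, not taken from any actual FEA package), here's a CUDA kernel doing repeated Jacobi relaxation sweeps over a 1-D grid, the sort of dense, branch-free floating-point inner loop that maps well onto a GPU:

```cpp
// Toy example: Jacobi relaxation sweeps on a 1-D grid. Real FEA codes are far
// more involved; this just shows the shape of the work offloaded via CUDA.
#include <cstdio>
#include <utility>
#include <vector>

__global__ void jacobi_sweep(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i > 0 && i < n - 1)
        out[i] = 0.5f * (in[i - 1] + in[i + 1]);  // average of the two neighbours
}

int main() {
    const int n = 1 << 16;
    std::vector<float> grid(n, 0.0f);
    grid[0] = 100.0f;  // fixed boundary condition

    float *d_in, *d_out;
    cudaMalloc(&d_in,  n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemcpy(d_in,  grid.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_out, grid.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    for (int iter = 0; iter < 1000; ++iter) {
        jacobi_sweep<<<(n + 255) / 256, 256>>>(d_in, d_out, n);
        std::swap(d_in, d_out);  // ping-pong the buffers between sweeps
    }

    cudaMemcpy(grid.data(), d_in, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("grid[1] after 1000 sweeps: %f\n", grid[1]);
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```

Note that the host-to-device copies sit outside the iteration loop; shrinking that CPU-GPU round trip is precisely what on-die integration is expected to help with.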
GnomeTank (What the what?) - Portland, Oregon - Registered User, regular
edited August 2010
I think what you will eventually see is actually the death of the GPU, but we are several years, at a minimum, from that. CPUs like the Cell in the PS3 are eventually going to become the norm: hundreds of cores, hyper-parallel, with the ability to change the "focus" of the processing unit at run time (so, for instance, when playing a game, 64 of your 128 cores go into "FPU, fast memory mode" to run purely floating-point calculations without the overhead of general-register processing).
The only reason we require GPUs today is that general-register CPUs are not very good at hyper-parallel floating-point operations. Things like shaders are really just a band-aid to make those hyper-parallel stream processors programmable by the end user.
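For flavor, here's a rough, purely hypothetical sketch (plain C++ threads, nothing Cell-specific) of what "point a pile of general cores at pure floating-point work" might look like: the same kind of data-parallel loop a GPU runs, hand-split across however many hardware threads the machine reports.

```cpp
// Hypothetical: the GPU-style data-parallel loop run on many general-purpose
// CPU cores instead, using plain C++ threads.
#include <cstdio>
#include <thread>
#include <vector>

void scale_range(float* data, int begin, int end, float factor) {
    for (int i = begin; i < end; ++i)
        data[i] *= factor;  // pure floating-point work, no branching
}

int main() {
    const int n = 1 << 20;
    std::vector<float> data(n, 1.0f);

    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 4;  // fallback if the runtime can't tell us

    std::vector<std::thread> workers;
    int chunk = n / cores;
    for (unsigned c = 0; c < cores; ++c) {
        int begin = c * chunk;
        int end   = (c == cores - 1) ? n : begin + chunk;
        workers.emplace_back(scale_range, data.data(), begin, end, 2.0f);
    }
    for (auto& w : workers) w.join();

    printf("%u cores, data[0] = %f\n", cores, data[0]);  // expect 2.0
    return 0;
}
```

The gap, as the thread keeps pointing out, is that a GPU of this era runs the same pattern across hundreds of stream processors with far more memory bandwidth, which is why general-purpose cores haven't caught up yet.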
The CPU is IBM's, the GPU is AMD/ATI's. GlobalFoundries is what was created when AMD divested itself of its fabrication plants.
alternate response: xbox -> xbox 360 rite