We are not playing the 'older than you' nostalgia game because no one wins.
........ and I turned 45 two weeks ago
+7
syndalis · Getting Classy · On the Wall · Registered User, Loves Apple Products · regular
But seriously the backplate of the rumored 4090ti/titan is a sight to behold.
That large hole is for fan exhaust.
SW-4158-3990-6116
Let's play Mario Kart or something...
+5
jungleroomx · It's never too many graves, it's always not enough shovels · Registered User · regular
edited February 7
That is absurd. That's the Homer Simpson mobile of GPUs.
If there's anything that will bring God's wrath, it's mankind's hubris for making that monstrosity a consumer product and not an R&D lab curiosity/experiment.
jungleroomx on
+6
jungleroomx · It's never too many graves, it's always not enough shovels · Registered User · regular
When it comes to single-player games, I don't think I've ever had as much fun playing a game as I'm having right now with Monster Hunter Rise, using bow.
When it comes to multiplayer the last real blast I had was... playing Jackbox with the family.
Honestly, I'd rather have an I/O plate that is honest enough to actually take 4 slots and make use of the space instead of the current trend of two slots on the I/O plate but the actual card is over 3 slots internally, which means it is 4 slots anyway.
And if it really is a Titan, I'll say again that no one should be buying this for gaming, as it'll be a GPU compute card first.
To combine the being old and video card exhaust fans conversation.
Back when I played through Max Payne, it would crash every 5 or 10 minutes. Tried changing settings and futzing with drivers. I finally upgraded to XP in desperation to figure out the issue, ending a long love affair with 98SE. Can't remember exactly how, but I finally discovered the fan on my video card had failed. "Since when do video cards have fans?"
Now we have video cards where 75% of their size is fans.
To combine the being old and video card exhaust fans conversation.
Back when I played through Max Payne, it would crash every 5 or 10 minutes. Tried changing settings and futzing with drivers. I finally upgraded to XP in desperation to figure out the issue, ending a long love affair with 98SE. Can't remember exactly how, but I finally discovered the fan on my video card had failed. "Since when do video cards have fans?"
Now we have video cards where 75% of their size is fans.
I once helped someone "de-furring" their GeForce MX something-something because it was crashing.
I'm not sure if it's the bigger backplate, or the cooler design in general, but with my 3090 FE I don't have any issues with GPU sag and the 3090fe is one hell of a big card.
Wunderbar is right; if you're going to have a card that is an x.5 slot card, it should just be an x+1 card because it's going to make the same number of PCI slots worthless without adding any stability.
+3
jungleroomx · It's never too many graves, it's always not enough shovels · Registered User · regular
I'm not sure if it's the bigger backplate, or the cooler design in general, but with my 3090 FE I don't have any issues with GPU sag and the 3090fe is one hell of a big card.
Wunderbar is right; if you're going to have a card that is an x.5 slot card, it should just be an x+1 card because it's going to make the same number of PCI slots worthless without adding any stability.
You can still fit plenty of things with a GPU that takes up half a slot. There's a lot of hardware that is literally a card still, like M.2 adapters, capture cards, legacy ports, etc.
I'm not sure if it's the bigger backplate, or the cooler design in general, but with my 3090 FE I don't have any issues with GPU sag and the 3090fe is one hell of a big card.
Wunderbar is right; if you're going to have a card that is an x.5 slot card, it should just be an x+1 card because it's going to make the same number of PCI slots worthless without adding any stability.
You can still fit plenty of things with a GPU that takes up half a slot. There's a lot of hardware that is literally a card still, like M.2 adapters, capture cards, legacy ports, etc.
I don't think this is true? the "slot" starts at the PCI-e connector and goes down (assuming a traditionally mounted setup). So a GPU taking a half slot blocks the PCI-e connector for that spot. Maybe you could fit in an I/O device that plugs into USB headers and not the PCI-e connector, but anything that needs the PCI-e connector is out of the question.
I'm not sure if it's the bigger backplate, or the cooler design in general, but with my 3090 FE I don't have any issues with GPU sag and the 3090fe is one hell of a big card.
Wunderbar is right; if you're going to have a card that is an x.5 slot card, it should just be an x+1 card because it's going to make the same number of PCI slots worthless without adding any stability.
You can still fit plenty of things with a GPU that takes up half a slot. There's a lot of hardware that is literally a card still, like M.2 adapters, capture cards, legacy ports, etc.
I don't think this is true? the "slot" starts at the PCI-e connector and goes down (assuming a traditionally mounted setup). So a GPU taking a half slot blocks the PCI-e connector for that spot. Maybe you could fit in an I/O device that plugs into USB headers and not the PCI-e connector, but anything that needs the PCI-e connector is out of the question.
Technically speaking, I can still plug something in to the slot below.
It's not pretty and would block the fan, but it is possible.
I'm just curious if people are going to be welcoming to the eventual 5 slot or 6 slot GPUs, or what the cutoff is for "design a better card" instead of upcharging their customers $1000 for $50 worth of extra material.
I'm not sure if it's the bigger backplate, or the cooler design in general, but with my 3090 FE I don't have any issues with GPU sag and the 3090fe is one hell of a big card.
Wunderbar is right; if you're going to have a card that is an x.5 slot card, it should just be an x+1 card because it's going to make the same number of PCI slots worthless without adding any stability.
You can still fit plenty of things with a GPU that takes up half a slot. There's a lot of hardware that is literally a card still, like M.2 adapters, capture cards, legacy ports, etc.
I don't think this is true? the "slot" starts at the PCI-e connector and goes down (assuming a traditionally mounted setup). So a GPU taking a half slot blocks the PCI-e connector for that spot. Maybe you could fit in an I/O device that plugs into USB headers and not the PCI-e connector, but anything that needs the PCI-e connector is out of the question.
Technically speaking, I can still plug something in to the slot below.
It's not pretty and would block the fan, but it is possible.
I'm just curious if people are going to be welcoming to the eventual 5 slot or 6 slot GPUs, or what the cutoff is for "design a better card" instead of upcharging their customers $1000 for $50 worth of extra material.
I'll continue to disagree with your first point. but we can move on to the second.
Personally, I think where we are at is the top of the curve in companies pushing more power to get more performance, and at some point the focus will have to be on efficiency. Over the last 20+ years there have been several cycles of "we make things faster by using more power" followed by a cycle of "holy crap, we're using too much power, make it more efficient." 10 years ago people were commonly recommending 1200W power supplies for high-end builds, we got that back down to 850 or so for a number of years, and now we're back to that 1200 number if you have, say, a 13900K and 4090.
This has happened with both CPUs and GPUs. The trend over time has still been toward higher wattage, but there's usually a big bump, then a scale-back for a bit. The most famous cases are on the CPU side, where Intel had to throw out the entire Pentium 4 design because they couldn't just keep pumping power through it to make it faster anymore, and that's how we got the Core design (derived from the Pentium M) that is still the basis for modern CPU cores. On AMD's side, the Bulldozer era was "well, let's just put as much power through this as possible and hope it's faster" until Zen was ready.
There are fewer examples on the GPU side, but there has definitely been the same kind of cadence: make things faster with more power, then make things more efficient and cut power draw.
The reality is in North America a circuit can only pull about 1800W total, so getting much past 1200W in a power supply seems like a big problem, as you really do need almost a full dedicated circuit just for a computer after you also account for a monitor.
And there's also the fact that, as consumers, if we're not happy with a GPU pulling 600W of power from the wall and a CPU pulling 250W of power from the wall... we can decide not to buy those parts.
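To put some rough numbers on that 13900K + 4090 scenario, here's a quick back-of-envelope sketch. The wattage figures are the ballpark numbers tossed around in this thread (plus an assumed 100W for the rest of the system), not measurements, and the 1.3× headroom multiplier is just a common rule of thumb:

```python
# Ballpark PSU sizing for the high-end build discussed above.
cpu_peak_w = 250        # high-end CPU under full load (thread estimate)
gpu_peak_w = 600        # worst-case GPU draw (thread estimate)
rest_of_system_w = 100  # board, RAM, drives, fans (assumed)

total_peak_w = cpu_peak_w + gpu_peak_w + rest_of_system_w

# Rule of thumb: size the PSU ~30% above peak draw so it runs in its
# efficient band and can absorb transient spikes.
recommended_psu_w = total_peak_w * 1.3

print(total_peak_w)              # 950
print(round(recommended_psu_w))  # 1235 -- right around that 1200W figure
```

Which lands almost exactly on the 1200W recommendation people are back to making.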
jungleroomx · It's never too many graves, it's always not enough shovels · Registered User · regular
edited February 7
I mean, you can argue my point all you want; I'm at home looking at my mobo, and there's 3mm between the edge of my card and the PCIe slot below it that's touching the CMOS battery. I've got a picture ready to go, but I would hope you'd take me at my word here. Maybe the 3070 here is a 2.2- or 2.3-slot card or whatever.
My question here comes back to Nvidia, which has no reason to redesign anything because people are still buying their stuff and the company is still profitable. The 7900 XTX is a full card slot smaller than the 4080 and runs comparably (outside of RTX), so it's clear that it's possible for the cards to be slimmed down, but as long as people shrug their shoulders and buy the cards, why would they bother spending money on R&D? Intel's Core reinvention was 15 years ago, when companies would still research things and AMD needed to adapt or die.
It's not impossible Nvidia might do some investment into architecture rebuilding, but I expect the next gen of cards will be just like this one: More stuff jammed in with a bigger heat sink slapped on and sent out the door.
One of the big reasons I've gone with a 4070 for my new build is the power efficiency. I don't want to be pulling 600+ watts on the regular for my PC to run.
Also probably good to consider leaving space for airflow into the GPU, if you've got another PCI card within millimeters of the fans that can't be good for the thermals
I mean you can argue my point all you want, I'm at home looking at my mobo and there's 3mm between the edge of my card and PCI-e slot below it that's touching the CMOS battery. I've got a picture raring and ready to go but I would hope you'd take me at my word here. Maybe the 3070 here is a 2.2 or 2.3 or whatever.
My question here comes back to Nvidia, which has no reason to redesign anything because people are still buying their stuff and the company is still profitable. The 7900 XTX is a full card slot smaller than the 4080 and runs comparably (outside of RTX), so it's clear that it's possible for the cards to be slimmed down, but as long as people shrug their shoulders and buy the cards, why would they bother spending money on R&D? Intel's Core reinvention was 15 years ago, when companies would still research things and AMD needed to adapt or die.
It's not impossible Nvidia might do some investment into architecture rebuilding, but I expect the next gen of cards will be just like this one: More stuff jammed in with a bigger heat sink slapped on and sent out the door.
In the case of the RTX 4080 vs the 7900 XTX, there actually is pretty clear reasoning. The RTX 4080 uses the same cooling system as the 4090, and the result is a card that tests at 60C under full load, which is quite good for a modern card. The 7900 XTX, on the other hand, runs 20C+ hotter under full load. Nvidia made the decision to put a beefier cooling system on the 4080 to make it run cooler, while AMD went for a smaller design at the cost of thermals.
Both are valid ideas, and it largely does come down to personal preference if you want a cooler running GPU.
jungleroomx · It's never too many graves, it's always not enough shovels · Registered User · regular
I don't know if it was an absolute decision on Nvidia's part to release a 4080 that ran that fast. Wasn't the scuttlebutt that after crypto imploded they cut down the power draw?
I dunno. The decision to make a card run at 60C seems weird, since 60-80C is the sweet spot you generally want to be in. Materials aren't getting any cheaper. I'm sure there's a lot of inside baseball here, but the temp scaling on the 4080 doesn't even match up with any of their other cards.
I don't know if it was an absolute decision on Nvidia's part to release a 4080 that ran that fast. Wasn't the scuttlebutt that after crypto imploded they cut down the power draw?
I dunno. The decision to make a card run at 60C seems weird, since 60-80C is the sweet spot you generally want to be in. Materials aren't getting any cheaper. I'm sure there's a lot of inside baseball here, but the temp scaling on the 4080 doesn't even match up with any of their other cards.
They might make up the material costs by only having one production line for a cooler that's shared between multiple cards.
I mean you can argue my point all you want, I'm at home looking at my mobo and there's 3mm between the edge of my card and PCI-e slot below it that's touching the CMOS battery. I've got a picture raring and ready to go but I would hope you'd take me at my word here. Maybe the 3070 here is a 2.2 or 2.3 or whatever.
My question here comes back to Nvidia, which has no reason to redesign anything because people are still buying their stuff and the company is still profitable. The 7900 XTX is a full card slot smaller than the 4080 and runs comparably (outside of RTX), so it's clear that it's possible for the cards to be slimmed down, but as long as people shrug their shoulders and buy the cards, why would they bother spending money on R&D? Intel's Core reinvention was 15 years ago, when companies would still research things and AMD needed to adapt or die.
It's not impossible Nvidia might do some investment into architecture rebuilding, but I expect the next gen of cards will be just like this one: More stuff jammed in with a bigger heat sink slapped on and sent out the door.
In the case of the RTX 4080 vs the 7900 XTX, there actually is pretty clear reasoning. The RTX 4080 uses the same cooling system as the 4090, and the result is a card that tests at 60C under full load, which is quite good for a modern card. The 7900 XTX, on the other hand, runs 20C+ hotter under full load. Nvidia made the decision to put a beefier cooling system on the 4080 to make it run cooler, while AMD went for a smaller design at the cost of thermals.
Both are valid ideas, and it largely does come down to personal preference if you want a cooler running GPU.
It uses the same coolers as the 4090s, which are already ridiculously overdesigned to cool a 600W card when these are more like 400W cards. And even then, it's dumb, because you only lose like 5-10% by reducing the power draw down to 300W, which could easily have been handled by a two-slot cooler. Nvidia actually has a much more power-efficient design now, blowing both Ampere and RDNA 3 out of the water on perf/power, but they keep setting their default power targets well past the point of diminishing returns.
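A quick sanity check on that diminishing-returns point. Suppose capping a 4090-class card from 450W to 300W costs ~8% performance (the 5-10% figure above is the poster's estimate, not a benchmark result):

```python
# Performance-per-watt at stock power vs. a lower power cap.
stock_power_w, stock_perf = 450, 1.00    # stock draw, normalized performance
capped_power_w, capped_perf = 300, 0.92  # assumed ~8% perf loss at the cap

stock_eff = stock_perf / stock_power_w
capped_eff = capped_perf / capped_power_w

gain = capped_eff / stock_eff - 1
print(f"{gain:.0%}")  # 38% better performance per watt at the lower cap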
0
Orca · Also known as EspressosaurusWrex · Registered User · regular
The reality is in North America a circuit can only pull about 1800W total, so getting much past 1200W in a power supply seems like a big problem, as you really do need almost a full dedicated circuit just for a computer after you also account for a monitor.
And note, the recommended maximum continuous power you should pull from your breaker is 80% of its rated maximum. If you have a 15 amp circuit (common) and dedicate the entire circuit to your computer, you'll still want to stay at or under ~1440 watts at-the-wall for everything on that circuit.
There just isn't much headroom above 1200 watts while still allowing *literally anything else* on the circuit.
A common box fan will pull around 75 watts, for example. Your monitor is probably in the 20-50 watt range. A 60" TV is ~100 watts. A 100 watt-equivalent LED bulb is ~20 watts. At full tilt the PS5 is rated at 340 watts so you can imagine plugging that into the same circuit as your 1200 watt computer would be a bad idea if they might both be running all out at the same time.
Any full build taking more than ~1000 watts gets some serious side-eye from me due to how it limits what else can be on the same circuit. Maybe you have bedrooms with more than one 15 amp circuit per room, but I don't.
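The breaker math above, written out (assuming a standard US 120V, 15A residential circuit):

```python
# The 80% continuous-load rule of thumb from above, in numbers.
volts, breaker_amps = 120, 15

circuit_max_w = volts * breaker_amps     # 1800W absolute circuit limit
continuous_max_w = circuit_max_w * 0.80  # ~1440W recommended continuous

# What's left for everything else on the circuit after a 1200W computer:
headroom_w = continuous_max_w - 1200
print(circuit_max_w, continuous_max_w, headroom_w)  # 1800 1440.0 240.0
```

240 watts of leftover headroom barely covers a monitor, a box fan, and a couple of lights, which is exactly the squeeze being described.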
Orca on
+3
jungleroomx · It's never too many graves, it's always not enough shovels · Registered User · regular
Yeah, we're kind of at the power limit for the US without getting 240V circuits.
I don't know if it was an absolute decision on Nvidia's part to release a 4080 that ran that fast. Wasn't the scuttlebutt that after crypto imploded they cut down the power draw?
I dunno. The decision to make a card run at 60C seems weird, since 60-80C is the sweet spot you generally want to be in. Materials aren't getting any cheaper. I'm sure there's a lot of inside baseball here, but the temp scaling on the 4080 doesn't even match up with any of their other cards.
They might make up the material costs by only having one production line for a cooler that's shared between multiple cards.
Also: Charging their customers ridiculous prices to cover their 70% profit margin
The reality is in North America a circuit can only pull about 1800W total, so getting much past 1200W in a power supply seems like a big problem, as you really do need almost a full dedicated circuit just for a computer after you also account for a monitor.
And note, the recommended maximum continuous power you should pull from your breaker is 80% of its rated maximum. If you have a 15 amp circuit (common) and dedicate the entire circuit to your computer, you'll still want to stay at or under ~1440 watts at-the-wall for everything on that circuit.
There just isn't much headroom above 1200 watts while still allowing *literally anything else* on the circuit.
A common box fan will pull around 75 watts, for example. Your monitor is probably in the 20-50 watt range. A 60" TV is ~100 watts. A 100 watt-equivalent LED bulb is ~20 watts. At full tilt the PS5 is rated at 340 watts so you can imagine plugging that into the same circuit as your 1200 watt computer would be a bad idea if they might both be running all out at the same time.
Any full build taking more than ~1000 watts gets some serious side-eye from me due to how it limits what else can be on the same circuit. Maybe you have bedrooms with more than one 15 amp circuit per room, but I don't.
For all that we complain about efficiency of today's parts, it was a lot more common to hit 1000W 5 to 10 years ago. That was pretty much the golden age of HEDT and the last days of SLI. Corsair's old monster 1500W+ PSUs were for X99/X299/Threadripper boxes with many-cored CPUs and multiple GPUs.
My old 12-core X299 box with my 3090 has pulled more than 800W in CPU/GPU stress tests, so imagine the real-world draw of an overclocked 18-core CPU with SLIed 2080 Tis or Titans. It would definitely be near the limit of the circuit.
Today's power-thirsty GPUs are annoying because they're obsoleting several generations of 450-650W PSUs that used to be perfectly adequate for non-HEDT gaming builds. There are no current-gen HEDT platforms, though, so I'm not sure you could really put together a current-gen system that pulled more than 1000W without aggressive overclocking.
htm on
+1
jungleroomx · It's never too many graves, it's always not enough shovels · Registered User · regular
I think I'm just bummed that we had almost put 1,000-watt power supplies out to pasture for gaming rigs and things were trending downward in cost, and then suddenly 2017 comes along and we're back in the mid-2000s, where everything runs way too hot, uses way too much power, and costs way too damn much.
I dunno if it's the blockchain or other modern tech, AI, or us running out of untapped potential by using silicon as a base material, but it's not been a good transformation.
+3
syndalis · Getting Classy · On the Wall · Registered User, Loves Apple Products · regular
I think I'm just bummed that we almost put 1,000 watt power supplies out to pasture for gaming rigs and things were trending downward in cost, and then suddenly 2017 comes in and we're back in the mid 2000's where everything runs way too hot and uses way too much power and costs way too damn much.
I dunno if it's the blockchain or other modern tech, AI, or us running out of untapped potential by using silicon as a base material, but it's not been a good transformation.
I dunno, you can have a stellar gaming rig on a 750w PSU, including a 4080/7900xtx/4090 (undervolt), an 8 core CPU, lightning fast pcie4 nvme storage, etc. etc.
It's just that the extreme overclocking high end and the top of the CPU side of things is really fucking thirsty right now.
Won't disagree on the cost thing, that has gotten real bad.
SW-4158-3990-6116
Let's play Mario Kart or something...
Guess I should get going with the NAS plans, I'm itching to play around with TrueNAS a bit, though I'll probably end up using Unraid. It seems more amenable to adding one drive at a time to the same logical volume, from what I've gathered TrueNAS (or ZFS, rather) seems to need each addition to be its own new volume.
No idea when the case gets here, but worst case (...) I have an old case (that's enough cases for this sentence) I can slap things into temporarily.
I think I'm just bummed that we almost put 1,000 watt power supplies out to pasture for gaming rigs and things were trending downward in cost, and then suddenly 2017 comes in and we're back in the mid 2000's where everything runs way too hot and uses way too much power and costs way too damn much.
I dunno if it's the blockchain or other modern tech, AI, or us running out of untapped potential by using silicon as a base material, but it's not been a good transformation.
We're not putting 1000W PSUs out to pasture, we're bringing in 1200W and 1600W ones.
Honestly, for a 4-5 slot monster, include a fucking PSU that connects to the front slot (at this size it doesn't matter if it's internal to the card or an Xbox 360/One-style power brick). Or make the whole thing external with a custom PCIe riser card you can route outward to an external enclosure. I am already miffed at my "blocks two additional PCIe slots" GeForce 3080 Ti. Yes, there are people like me who like to use expansion cards. And because of GPU size, I have to use PCIe risers.
I possibly fried my former GTX in a Fractal Define R4, plus had a PSU that was too weak. When I had to swap the PSU out, I thought I shouldn't skimp on cooling. The case is big-tower sized; it might appear large because it does not have drive cages.
Caught up on the thread and seeing that there are monitors with built-in KVMs and daisy chainable DisplayPorts is news to me and will make the next monitor refit less difficult. No wonder when I got my standalone KVM they were so few and far between. Between that and USB C sorcery, being able to hook up an arbitrary computer without needing a custom dock is pretty neat.
I've used the local Steam streaming capability some with my laptop streaming from my desktop and if power requirements continue to go up I could see enthusiasts moving to a home render farm and a thin client that connects to it. Then you'd be able to plug whatever GPU monstrosity will require a full ATX case in right next to the power panel and remote to it.
0
syndalis · Getting Classy · On the Wall · Registered User, Loves Apple Products · regular
Caught up on the thread and seeing that there are monitors with built-in KVMs and daisy chainable DisplayPorts is news to me and will make the next monitor refit less difficult. No wonder when I got my standalone KVM they were so few and far between. Between that and USB C sorcery, being able to hook up an arbitrary computer without needing a custom dock is pretty neat.
I've used the local Steam streaming capability some with my laptop streaming from my desktop and if power requirements continue to go up I could see enthusiasts moving to a home render farm and a thin client that connects to it. Then you'd be able to plug whatever GPU monstrosity will require a full ATX case in right next to the power panel and remote to it.
My guess is that HDMI/USB-C ports on the wall will be a part of modern buildout at some point for the enthusiast types, where there is a rack tucked into a HVAC-accessible closet somewhere that houses the noisy shit and has a couple dedicated circuits.
That way, with app-managed switches, you can point your rig to whatever screen you happen to be using at the moment and get the benefits of VRR, HDR, ALLM, and other stuff that would potentially be lost on a full streaming solution.
edit: Note the word enthusiast up there; the vast majority of folks will use consoles or mid-range gaming PCs and not these insane monstrosities, and therefore won't need to build around the strange limitations imposed by a computer drawing more than 1000w at load.
syndalis on
SW-4158-3990-6116
Let's play Mario Kart or something...
Putting USB into your walls seems iffy given how frequently they modify the standard.
+8
syndalis · Getting Classy · On the Wall · Registered User, Loves Apple Products · regular
edited February 8
But in truth, the 4090 has done it. You have a card that can convincingly hit 120fps in most titles at 4K, and it will probably do so for a while, considering the state of the console market and where developers are pointing their resources.
So I would be super happy with the next couple of generations working on bringing 4090 performance down-market. The theoretical 6070/6060 should trade blows with the 4090, just as the 5080 should easily match the 4090 with the thermals of the 4080. They could even drop the 90 line from their consumer-oriented products in favor of aiming for 4080 thermals plus whatever performance that affords for their 5080, and every couple of generations make a Titan for the compute market.
syndalis on
SW-4158-3990-6116
Let's play Mario Kart or something...
+1
Orca · Also known as EspressosaurusWrex · Registered User · regular
Putting USB into your walls seems iffy for how frequently they modify the standard
I think USB4/Thunderbolt is probably as good a choice as you can make at this point for what it does and what it can do. And the main point of using it would be multi-signal transport: Ethernet, DisplayPort, USB, and if needed, the PCIe lanes for docks or whatever.
If you really wanted to spend some cash, optical/fiber channel would be just as good if not better. But much more expensive.
SW-4158-3990-6116
Let's play Mario Kart or something...
0
syndalis · Getting Classy · On the Wall · Registered User, Loves Apple Products · regular
Putting USB into your walls seems iffy for how frequently they modify the standard
Cable lengths/signal integrity seem like they would be a serious concern as well.
If you needed a serious run, that's where optical comes in; it can terminate on a USB-C connector for Thunderbolt at lengths up to 100m, though it would lack power delivery, which, honestly, is fine.
SW-4158-3990-6116
Let's play Mario Kart or something...
Posts
........ and I turned 45 two weeks ago
But seriously the backplate of the rumored 4090ti/titan is a sight to behold.
That large hole is for fan exhaust.
Let's play Mario Kart or something...
If there's anything that will bring God's wrath it's mankinds hubris for making that monstrosity a consumer product and not an R&D lab curiosity/experiment.
When it comes to multiplayer the last real blast I had was... playing Jackbox with the family.
And if it really is a Titan, I'll say again that no one should be buying this for gaming, as it'll be a GPU compute card first.
Back when I played through Max Payne, it would crash every 5 or 10 minutes. Tried changing settings, and futzing with drivers. I finally upgraded to XP in desperation to figure out the issue, ending a long love affair with 98SE. Can't remember exactly how, but i finally discovered the fan on my video card had failed. "Since when do video cards have fans?"
Now we have video cards where 75% of their size is fans.
Steam ID: Good Life
I once helped someone "de-furring" their GeForce MX something-something because it was crashing.
Wunderbar is right; if you're going to have a card that is an x.5 slot card, it should just be an x+1 card because it's going to make the same number of PCI slots worthless without adding any stability.
You can still fit plenty of things with a GPU that takes up half a slot. There's a lot of hardware that is literally a card still, like M.2 adapters, capture cards, legacy ports, etc.
I don't think this is true? the "slot" starts at the PCI-e connector and goes down (assuming a traditionally mounted setup). So a GPU taking a half slot blocks the PCI-e connector for that spot. Maybe you could fit in an I/O device that plugs into USB headers and not the PCI-e connector, but anything that needs the PCI-e connector is out of the question.
Technically speaking, I can still plug something in to the slot below.
It's not pretty and would block the fan, but it is possible.
I'm just curious if people are going to be welcoming to the eventual 5 slot or 6 slot GPUs, or what the cutoff is for "design a better card" instead of upcharging their customers $1000 for $50 worth of extra material.
I'll continue to disagree with your first point. but we can move on to the second.
Personally, I think where we are at is the top of the curve in companies pushing more power to get more performance, and at some point the focus will have to be on efficiency. Over the last 20+ years there have been several cycles of "we make things faster by using mower power" followed by a cycle of "holy crap we're using too much power make it more efficient." 10 years ago people were commonly recommending 1200W power supplies for high end builds, we got that back down to 850 or so for a number of years, and now we're back to that 1200 number if you have say a 13900k and 4090.
This has happened with both CPU and GPU. The trend over time has still been higher wattage, but there's usually a big bump, then a scale back for a bit. The most famous cases are in CPU where intel with the Pentium 4 had to throw out that entire design becuase they couldn't just keep pumping power through it to make it faster anymore, and that's how we got the Core design (that intel bought) that is still the basis for modern CPU cores. on AMD, the bulldozer era was "well let's just put as much power through this as possible and hope it is faster" until Zen was ready.
There are fewer examples on the GPU side, but there have definitely been that same kind of cadence of make things faster with more power, then make things more efficient and cut power draw.
The reality is in North America a circuit can only pull about 1800W total, so getting much past 1200W in a power supply seems like a big problem, as you really do need almost a full dedicated circuit just for a computer after you also account for a monitor.
And there's also the fact that we can, as consumers, make a decision that if we're not happy with a GPU pulling 600W of power from the wall, and a CPU pulling 250W of power from the wall.... to not buy those parts.
My question here comes back to Nvidia, which has no reason to redesign anything because people are still buying their stuff and the company is still profitable. The 7900XTX is a full card slot smaller than the 4080 and runs comparably (outside of RTX), so it's clear that it's possible for the cards to be slimmed down, but as long as people shrug their shoulders and buy the cards why would they bother spending money on R&D? Intel's i Core reinvention was 15 years ago when companies would still research things, and AMD needed to adapt or die.
It's not impossible that Nvidia will invest in rebuilding its architecture, but I expect the next generation of cards will be just like this one: more stuff jammed in, a bigger heat sink slapped on, and out the door.
In the case of the RTX 4080 vs the 7900 XTX there actually is a pretty clear reasoning behind them. The RTX 4080 uses the same cooling system as the 4090, and the result is a card that tests at 60C under full load, which is quite good for a modern card. The 7900 XTX, on the other hand, runs 20C+ hotter under full load. Nvidia made the decision to give the 4080 a beefier cooling system so it runs cooler, while AMD went for a smaller design at the cost of thermals.
Both are valid ideas, and it largely does come down to personal preference if you want a cooler running GPU.
I dunno, the decision to design a card to run at 60C seems weird, since 60-80C is the sweet spot you generally want to be in anyway. Materials aren't getting any cheaper. I'm sure there's a lot of inside baseball here, but the temperature scaling on the 4080 doesn't even match up with any of their other cards.
They might make up the material costs by only having one production line for a cooler that's shared between multiple cards.
It uses the same coolers as the 4090s, which are already ridiculously overdesigned to cool a 600W card, when they're more like 400W cards. And even then, it's dumb, because you only lose like 5-10% performance by reducing the power draw down to 300W, which could easily be cooled by a two-slot cooler. Nvidia actually has a much more power-efficient design now, blowing both Ampere and RDNA 3 out of the water on perf/watt, but they keep setting their default power targets well past the point of diminishing returns.
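To put rough numbers on that diminishing-returns point, here's a tiny sketch. The perf-vs-power samples are purely illustrative (loosely matching the 5-10% figure above), not benchmark data:

```python
# Hypothetical perf-vs-power samples for a 4090-class card.
# Illustrative numbers only -- not measurements.
samples = {600: 1.00, 450: 0.98, 400: 0.97, 300: 0.91}

def perf_loss(from_w, to_w):
    """Fractional performance lost when dropping the power limit."""
    return 1 - samples[to_w] / samples[from_w]

# Halving the power limit costs single-digit performance here.
print(f"600W -> 300W loses about {perf_loss(600, 300):.0%}")  # → about 9%
```

The shape of the curve is the point: past a certain wattage, each extra 100W buys only a percent or two of performance.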
And note, the recommended maximum continuous power you should pull from a breaker is 80% of its rated maximum. If you have a 15 amp circuit (common) and dedicate the entire circuit to your computer, you'll still want to stay at or under ~1440 watts at the wall for everything on that circuit.
There just isn't much headroom above 1200 watts while still allowing *literally anything else* on the circuit.
A common box fan will pull around 75 watts, for example. Your monitor is probably in the 20-50 watt range. A 60" TV is ~100 watts. A 100 watt-equivalent LED bulb is ~20 watts. At full tilt the PS5 is rated at 340 watts so you can imagine plugging that into the same circuit as your 1200 watt computer would be a bad idea if they might both be running all out at the same time.
Any full build taking more than ~1000 watts gets some serious side-eye from me due to how it limits what else can be on the same circuit. Maybe you have bedrooms with more than one 15 amp circuit per room, but I don't.
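Putting those numbers into a quick back-of-the-envelope calculation (the 80% continuous-load derating is the standard NEC rule; the device wattages are the rough figures from the post above):

```python
# Continuous-load budget for a North American 15 A / 120 V circuit.
VOLTS = 120
AMPS = 15
DERATING = 0.80  # continuous loads should stay at/below 80% of breaker rating

circuit_max = VOLTS * AMPS               # 1800 W absolute limit
continuous_max = circuit_max * DERATING  # 1440 W sustained

# Everything else sharing the circuit with a 1200 W gaming PC:
other_loads = {"monitor": 40, "box fan": 75, "LED bulb": 20}

headroom = continuous_max - 1200 - sum(other_loads.values())
print(f"continuous budget: {continuous_max:.0f} W, headroom left: {headroom:.0f} W")
```

With just a monitor, a fan, and a light bulb on the same circuit, a 1200W build leaves about 105W of sustained headroom, which is why anything much above that number gets the side-eye.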
Also: Charging their customers ridiculous prices to cover their 70% profit margin
For all that we complain about efficiency of today's parts, it was a lot more common to hit 1000W 5 to 10 years ago. That was pretty much the golden age of HEDT and the last days of SLI. Corsair's old monster 1500W+ PSUs were for X99/X299/Threadripper boxes with many-cored CPUs and multiple GPUs.
My old 12-core X299 box with my 3090 has pulled more than 800W in CPU/GPU stress tests, so imagine the real-world draw of an overclocked 18-core CPU with SLIed 2080 Tis or Titans. It would definitely be near the limit of the circuit.
Today's power-thirsty GPUs are annoying because they're obsoleting several generations of 450-650W PSUs that used to be perfectly adequate for non-HEDT gaming builds. There are no current-gen HEDT platforms, though, so I'm not sure you could really put together a current-gen system that pulled more than 1000W without aggressive overclocking.
I dunno if it's blockchain, AI, other modern tech, or us running out of untapped potential in silicon as a base material, but it hasn't been a good transformation.
I dunno, you can have a stellar gaming rig on a 750W PSU, including a 4080/7900 XTX/4090 (undervolted), an 8-core CPU, lightning-fast PCIe 4 NVMe storage, etc. etc.
It's just that the extreme overclocking high end and the top of the CPU side of things is really fucking thirsty right now.
Won't disagree on the cost thing, that has gotten real bad.
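For what it's worth, the 750W claim checks out on paper. Here's a rough sizing sketch; the component draws are ballpark assumptions, not measurements:

```python
# Rough PSU sizing for a high-end-but-sane build on a 750 W supply.
# Wattages below are ballpark worst-case assumptions, not measured values.
components = {
    "GPU (4080-class, stock or undervolted 4090)": 320,
    "CPU (8-core)": 150,
    "motherboard + RAM": 60,
    "NVMe drives, fans, USB devices": 40,
}

total = sum(components.values())  # estimated worst-case load
psu = 750
margin = (psu - total) / psu      # fraction of the PSU left as headroom

print(f"estimated load: {total} W on a {psu} W PSU ({margin:.0%} headroom)")
```

Roughly 570W of worst-case load on a 750W unit leaves about a quarter of the PSU as headroom for transients, which is a comfortable place to be.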
Let's play Mario Kart or something...
No idea when the case gets here, but worst case (...) I have an old case (that's enough cases for this sentence) I can slap things into temporarily.
We're not putting 1000W PSUs out to pasture, we're bringing in 1200W and 1600W ones.
Honestly, for a 4-5 slot monster, include a fucking PSU that connects to the front slot (at this size it doesn't matter if it's internal to the card or an Xbox 360/One-style power brick). Or make the whole thing external with a custom PCIe riser card you can route out to an external enclosure. I'm already miffed at my "blocks two additional PCIe slots" GeForce 3080 Ti. Yes, there are people like me who like to use expansion cards. And because of GPU size, I have to use PCIe risers in this fashion:
hyuuuuuuge
Steam, Warframe: Megajoule
I use it as a "2 players, one PC" kind of setup.
I've used the local Steam streaming capability some with my laptop streaming from my desktop and if power requirements continue to go up I could see enthusiasts moving to a home render farm and a thin client that connects to it. Then you'd be able to plug whatever GPU monstrosity will require a full ATX case in right next to the power panel and remote to it.
My guess is that HDMI/USB-C ports on the wall will become part of modern buildouts at some point for the enthusiast types, where there's a rack tucked into an HVAC-accessible closet somewhere that houses the noisy shit and has a couple of dedicated circuits.
That way, with app-managed switches, you can point your rig to whatever screen you happen to be using at the moment and get the benefits of VRR, HDR, ALLM, and other stuff that would potentially be lost on a full streaming solution.
edit: Note the word enthusiast up there; the vast majority of folks will use consoles or mid-range gaming PCs and not these insane monstrosities, and therefore won't need to build around the strange limitations imposed by a computer drawing more than 1000w at load.
So, I would be super happy with the next couple of generations working on bringing 4090 performance down-market. A theoretical 6070/6060 should trade blows with the 4090, just as the 5080 should easily match the 4090 at the thermals of the 4080. They could actually drop the 90 line from their consumer-oriented products in favor of aiming for 4080 thermals plus whatever performance that affords for their 5080, and every couple of generations make a Titan for the compute market.
Cable lengths/signal integrity seem like they would be a serious concern as well.
I think USB4/Thunderbolt is probably as good a choice as you can make at this point for what it does and what it can do. And the main point of using it would be multi-signal transport: Ethernet, DisplayPort, USB, and, if needed, PCIe lanes for docks or whatever.
If you really wanted to spend some cash, optical/fiber channel would be just as good if not better. But much more expensive.
If you really needed a long run, that's where optical comes in; it can terminate on a USB-C connector for Thunderbolt at lengths up to 100m, though it lacks power delivery, which honestly is fine.