I know the future is the hardest thing to predict, but I'd like to hear some thoughts on the subject anyway 8-)
I'm thinking about replacing my beloved 7970 with a Fury X, but I can't stop wondering if the 4 GB of memory on it is going to be a major bottleneck sometime soon.
I'm gaming in 4K, and I've found that my 7970 actually runs Far Cry 4 surprisingly playably in 4K (3840x2160), so I'm sure current games will be more than fine on the Fury X, and the review sites say the same. But what about when DX12 titles start to appear? The Fury X is expensive enough that I'd rather it not be due for replacement when it's barely a year old.
Thoughts?
The rig the card is to live in has these other specs:
Core i7-870
Gigabyte GA-P55 UD6(Rev.1.0) - Intel® P55 Express Chipset , PCI Express x16 slot at x16...
8 GB ram
Windows 8.1 (will likely be Windows 10 once I get round to it).
Primary monitor is 40" 3840x2160
Secondary monitor is a 27" 2560x1440
i.e. not the newest hardware except for the 40", but unless I'm mistaken it's fast enough for the GPU to be the bottleneck in 4K regardless of which GPU it is. Do say if I've got that wrong.
Bones heal, glory is forever.
Source: I had a computer with 4 GB of RAM and it was a big reason why I built a new one, with 8 GB.
The question is about video memory, not system memory.
I won't go so far as to say Far Cry 4 is running well in 4K on the 7970, but it is playable, which is much better than the slide show I expected to get. Unfortunately I don't know how to get data on the framerate, so I can't put numbers on that. It might be that, having gamed since before graphics cards had 3D hardware functions, I'm more forgiving of lower frame rates.
Now, on the possible issue with 4 GB of VRAM for gaming, I have also been thinking about the 980 Ti and its 6 GB as a Fury X alternative. There are three main reasons why I'm still considering the Fury X and haven't simply gone that way: one is that I like how quiet the Fury X is, another is that I want to support the underdog, i.e. give my money to AMD in the hope that they'll stay in business (as consumers we need competition to keep seeing progress in GPUs). And then there is the final reason, which is DX12, where I'm thinking the Fury X could turn out to be the one to have - unless the 4 GB thing ruins it.
Decisions, decisions... :?
And just to be clear, the difference from the "slower" 0.5 GB of memory on the 970 is nearly imperceptible in anything but a synthetic benchmark.
Also, 4K gaming is acceptable with current high-end cards, but I really think it's going to be a generation or two yet before we see really good 4K performance from high-end video cards. Remember, you're trying to push four times as many pixels as 1080p, and most current video card architectures were designed before anyone was really trying to do that. I don't think *any* high-end card today will be well future-proofed for 4K gaming three or so years from now.
On a personal level, I think the 980 Ti is a better card than the Fury X. If I were spending that much money on a video card today, that's what I'd get.
4K is 3840 X 2160.
4K is roughly 4X the total pixels of 1080p.
That's a rhetorical question; I understand why, it's just annoying and confusing.
well, 2k *is* 1080p.
I think the naming convention changed for one main reason: it's hard-ish to explain to a "normal person" that 2160p is actually 4x as big as 1080p. The number 2160 is 2x as much, not 4x. You'd have to explain the actual vertical and horizontal resolutions and how they're both doubled, which is why it's 4x as much - but we just don't say the whole resolution, so trust us, it really is 4x as much.
4K has the benefit of being (relatively) close to the number of horizontal pixels, as well as sounding 4x bigger than 1080p.
tl;dr; it's mostly marketing speak to the layman because people don't want to do math while buying a tv at best buy.
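For anyone who wants to check the "4x the pixels" claim from the posts above, a quick sketch (the resolution names used here are just the common marketing labels):

```python
# Sanity check of the "4K is 4x 1080p" claim by counting pixels.
resolutions = {
    "1080p (Full HD)": (1920, 1080),
    "1440p (QHD)":     (2560, 1440),
    "2160p (4K UHD)":  (3840, 2160),
}

base = 1920 * 1080  # 1080p pixel count

for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels:,} pixels ({pixels / base:.2f}x 1080p)")
# 2160p works out to exactly 4.00x 1080p, even though 2160 is only 2x 1080.
```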
Come back with me to when it was CGA/SVGA/WXGA/whatever and tell me what's confusing!
which leads us to the fact that right now there are qHD phones and QHD phones in the world, with very different resolutions.
qHD is quarter HD, or 960x540, and QHD stands for Quad HD, which is 1440p.
And they somehow think that it makes sense.
no thank you
No, I should be the one saying thank you for hijacking the thread.
Now let's get back to the question of whether or not 4 GB of graphics memory is going to become a bottleneck real soon - like, say, once we see DX12 games coming out.
I have come to the realization that since so few graphics cards sold today have more than 4 GB, that ought to hold back the need for 4+ GB of memory. On the other hand, much of the memory used goes to textures, and it is rather easy for developers to include different resolution textures in their game, so some will surely still be pushing the memory.
Plus there is the 4K thing, where it may or may not matter that you're gaming at that resolution as opposed to more widely used resolutions like 1920x1080, or even those below it. The question here is whether the screen resolution matters that much in the greater scheme of things when talking graphics memory. I mean, we have come a very long way since the days when your amount of graphics memory decided whether you could have a screen resolution of 1024x768 in 256 colours or 16-bit colour. A 4K 32-bit image is roughly 32 MB, so the frame buffer can't really make that much of a difference in the matter whatever the resolution used.
I think your logic is sound that 4 GB+ of VRAM isn't going to be necessary for all gaming going forward; however, in 4K, it absolutely will be. VRAM is critical for the huge textures that need to be rendered, and that ABSOLUTELY does scale with resolution.
Well, I am not so sure about that. The scaling is exactly the question, because I would think the textures are the same regardless of resolution, and the frame buffer even in 4K isn't going to surpass roughly 32 MB even at 32 bits (and I would think it is really 24 bits of colour). In other words, the difference in frame buffer size may look big in relative terms (400%) when comparing, say, 4K and 1080p, but 32 MB out of 4096 MB is not a lot, and in that context the frame buffer should only affect the memory requirements a little.
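The frame-buffer arithmetic is easy to sketch. This assumes a plain 32-bit (4 bytes per pixel) colour buffer and ignores depth/stencil buffers and MSAA, so it's a rough lower bound, not what a real game allocates:

```python
# Back-of-the-envelope frame buffer sizes at 4 bytes per pixel (32-bit colour).
BYTES_PER_PIXEL = 4

def framebuffer_mib(width, height, buffers=1):
    """Size in MiB of `buffers` full-screen colour buffers."""
    return width * height * BYTES_PER_PIXEL * buffers / (1024 ** 2)

print(f"1080p single buffer: {framebuffer_mib(1920, 1080):.1f} MiB")
print(f"4K    single buffer: {framebuffer_mib(3840, 2160):.1f} MiB")
# Even a triple-buffered 4K setup stays under 100 MiB, which is small
# next to 4096 MiB of VRAM:
print(f"4K    triple buffer: {framebuffer_mib(3840, 2160, buffers=3):.1f} MiB")
```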
Maybe something else is happening that explains the drop in rendering speed seen in the article here, which looks at the 4GB question to try to see if it is a real issue:
techreport.com/blog/28800/how-much-video-memory-is-enough
The article is a good read. The main thing in its examples is that the drop comes at resolutions above 4K; on the other hand, there may be other issues in some games with all the AMD cards, so there is that. It could simply be that AMD has optimized for 4K at most, while Nvidia has done things differently.
4K will not only require bigger and more detailed textures, it'll require more of them, because with more resolution you're physically rendering more information at any time, which means having more actively loaded into video memory. And if a 400x400 cube needs a million pixels to render just its external surfaces, you're going to need a lot of memory to render a 4K or 8K screen.
And I don't know if you can really look at a lot of current games as good measuring sticks. I doubt many developers are targeting 4K as a common resolution right now, so any current game probably won't be the best example of how quickly you can fill up video memory. What we need is something akin to the first Crysis, something that looked past what the high end was at the time, to really see the differences.
Games would want to keep all surfaces of all objects, especially those close to the player, in the buffer. Otherwise you'd get constant redraws of objects even when you're standing right beside them in game.
ALL surfaces may not be loaded into the buffer at all times, but the sizes needed for 4K are going to be huge. On top of that, the smaller the amount of VRAM installed, the fewer textures can be loaded simultaneously. This means the PC will have to swap textures in and out of VRAM from the hard drive, which is PAINFULLY slow compared to just having them resident in VRAM in the first place.
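To see why textures, not the frame buffer, are what fill VRAM, here's a rough sketch. It assumes uncompressed RGBA (4 bytes per texel) and a full mipmap chain (about 1/3 extra on top of the base level) - a worst case, since real games mostly ship compressed formats (DXT/BC) that shrink these numbers several times:

```python
# Rough per-texture memory cost, uncompressed RGBA with mipmaps.
def texture_mib(size, mipmaps=True):
    """MiB used by a square `size` x `size` RGBA texture."""
    base = size * size * 4            # 4 bytes per texel
    total = base * 4 / 3 if mipmaps else base  # mip chain adds ~1/3
    return total / (1024 ** 2)

for size in (1024, 2048, 4096):
    print(f"{size}x{size} RGBA texture: ~{texture_mib(size):.0f} MiB with mipmaps")
# At ~85 MiB per uncompressed 4096x4096 texture, a few dozen such
# textures would already fill a 4 GB card.
```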
I think this was pretty well spot on a week ago, so I'll re-iterate.