I posted this elsewhere, but I think it's actually more likely that the issue is hardware-related and can't be fixed. Here's an illustration of GM204 (the chip inside the 970 and the 980):
Three of those sixteen SMMs are cut/disabled to make a 970, whereas the 980 gets all sixteen fully enabled. It looks like each of the four 64-bit memory controllers is paired with one of the four raster engines. The 970's effective pixel fillrate has been demonstrated to be lower than the 980's even though SMM cutting leaves the ROPs fully intact (http://techreport.com/blog/27143/here-another-reason-the-geforce-gtx-970-is-slower-than-the-gtx-980), and the same situation may apply to memory bandwidth on Maxwell. However, the issue may be completely independent of which SMMs are cut and may simply come down to how many.
GM206's block diagram demonstrates the same raster engine to memory controller ratio/physical proximity:
I expect a cut-down GM206 part, and even a cut-down GM200 part, will exhibit the same issue as a result; it might be intrinsically tied to how Maxwell operates as an architecture. Cutting down SMMs effectively messes up ROP and memory controller behavior as well as shaders and TMUs. I also don't think there's a chance in hell Nvidia were unaware of this, but I could be wrong.
So what can we realistically expect for the false advertising of 4GB? A refund? An exchange for 980s?
You might not be playing with Ultra textures. Just turning on the setting does nothing unless you also download the HD texture pack, which has to be done manually.
Holy crap, crazy timing for me, I was this close to pulling the trigger on a 970 today. Holding off for a bit to see what happens with this...
I don't know about that Nai benchmark. I can't see the source code, so I don't trust it.
On the other hand, frame times and frame rate on AC: Unity are pretty stable at 8x MSAA with 3.8 GB of VRAM usage.
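For what it's worth, a chunked VRAM bandwidth test is simple enough to sketch yourself. Below is a rough guess at what a tool like that presumably does (an assumption on my part, not Nai's actual code, which isn't public): grab device memory in fixed-size chunks, time a copy out of each one, and report GB/s per chunk. Chunks that land in a slow segment, or that spill into system RAM, would show up as a bandwidth drop.

```cpp
// Rough sketch of a chunked VRAM bandwidth test (my guess at the approach --
// not Nai's actual code, which isn't public).
// Build: nvcc -O2 vram_chunks.cu -o vram_chunks
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

int main() {
    const size_t chunkBytes = 128ull << 20;  // 128 MiB per chunk

    // Destination buffer for the timed copies.
    void* dst = nullptr;
    if (cudaMalloc(&dst, chunkBytes) != cudaSuccess) {
        printf("couldn't even allocate one chunk\n");
        return 1;
    }

    // Grab as much device memory as cudaMalloc will hand out, chunk by chunk.
    std::vector<void*> chunks;
    for (;;) {
        void* p = nullptr;
        if (cudaMalloc(&p, chunkBytes) != cudaSuccess) break;
        chunks.push_back(p);
    }
    printf("Allocated %zu chunks (~%zu MiB)\n",
           chunks.size(), chunks.size() * (chunkBytes >> 20));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    // Time a device-to-device copy out of each chunk and report GB/s.
    // Chunks sitting in a slow memory segment (or spilled to system RAM)
    // should show a clear drop.
    for (size_t i = 0; i < chunks.size(); ++i) {
        cudaEventRecord(start);
        cudaMemcpy(dst, chunks[i], chunkBytes, cudaMemcpyDeviceToDevice);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        // The copy reads and writes chunkBytes, so count the bytes twice.
        double gbps = 2.0 * chunkBytes / (ms / 1000.0) / 1e9;
        printf("Chunk %3zu: %6.1f GB/s\n", i, gbps);
    }

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    for (void* p : chunks) cudaFree(p);
    cudaFree(dst);
    return 0;
}
```

Keep in mind the desktop and other apps also hold some VRAM, so the last chunk or two may look slow on any card, not just a 970.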
Interesting.
A refund at the very least. Just reassess the value of the card based on what it can actually do and refund the difference.
Well, they still haven't fixed the SLI voltage issues that have been apparent since launch, so I'm not holding my breath. When I first heard about this, it was only a handful of users actually being affected. Seems like it may be those who have Hynix memory on their cards.
I found out today that my card is one of those.
Shadow of Mordor with ultra textures (irrespective of resolution) uses a max of ~3600 MB for me. The most I've seen my card use is 3750 MB, playing Titanfall at 4K.
If it is a hardware issue then I imagine Nvidia will allow 970 owners to upgrade to a 980 free of charge. The only other alternative would be some sort of refund program and that'd leave Nvidia even more out of pocket.
You might want to add a note in the first post that this may only be affecting 970 users with Hynix memory, along with instructions for people to check who their VRAM manufacturer is.
Lots of manufacturers started going with Hynix after the initial batch of cards. Pretty shady shit. Samsung memory overclocks much better too.
To check what kind of memory you have, install Nvidia Inspector v1.9.7.3.
Not if the hardware problem is related to the Hynix memory rather than a fault in the design itself. At that point it's not Nvidia's fault; it's Hynix's, and that of the vendors who chose to use Hynix.
True, but I would think that if the issue could be traced back to a specific brand of memory modules, the problem would manifest at random points, not specifically at ~3.5GB and higher.
I have Samsung memory on mine so I guess I'm in the clear. But I hope this gets sorted out for those who are having problems.
Also, if a bad batch was made by Hynix, it would likely be repeated across every card in that batch.
I have a Gigabyte G1 970 (rev 1.0), don't know which memory yet, I'll check when I come home from work.
Really happy with the card and haven't experienced any bottlenecks.
Then again, I'm gaming on 1080p (TV) and 1200p (Dell 24" monitor) so I'm not sure if I can even push it to 4GB usage.
The only issue I've experienced is with AC:U (the occasional 4-5 second freeze, etc.), but that game itself is fucked so I don't think it's the GPU. :/
Really interested how this plays out.
Mine has Hynix memory. I haven't had the issue where the card goes past 3.5GB of VRAM and performance slows to a crawl, but it definitely doesn't like to use more than 3.5GB; it only goes past that if I force AA or supersampling up a lot.
How is your performance with Titanfall at 4K, and what is your VRAM manufacturer?
Just did a quick test. Everything maxed but ambient occlusion disabled. 4k and I get 45-60 frames throughout the game including heavy battles (some frame drops to 30-40).
Maximum VRAM usage spiked to 3718MB for a few seconds but stabilized throughout the game at 3695MB.
Enabling AO drops the frame rate to 10-20 FPS with no change in VRAM usage.
Oh no! I bought one for my new PC. Will this be an issue that can be fixed by an update or are we screwed?
See my post above and report your stats.
Run the RAM benchmark seen in the OP (you may also need this).
What the heck is 1.#J GB? My skepticism of that RAM benchmark increases. Bad bad programming.
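If I had to guess, that's the old MSVC C runtime's way of printing a float that went to infinity: it spells infinity as "1.#INF", and rounding that string to a couple of decimals bumps the 'I' to 'J', leaving "1.#J". So it's probably a divide-by-zero (or similar) in whatever the benchmark is printing rather than random garbage. A tiny host-side snippet that should reproduce the formatting quirk on an older MSVC runtime (an assumption on my part; I haven't seen the benchmark's source either):

```cpp
// Reproduces the "1.#J" output seen from older MSVC C runtimes when an
// infinite float is printed with limited precision. Newer CRTs print "inf",
// so the exact text depends on the runtime the benchmark was built against.
#include <cstdio>
#include <limits>

int main() {
    // e.g. a size or bandwidth figure computed as bytes / elapsed, with elapsed == 0
    float gb = std::numeric_limits<float>::infinity();

    // Old MSVC formats infinity as "1.#INF"; rounding it to two decimal
    // places turns the 'I' into 'J', giving "1.#J". Modern CRTs print "inf".
    printf("%.2f GB\n", gb);
    return 0;
}
```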
It'd be of more help if people could do this:
...and report back with a screenshot of their stats window, along with the make/model of their GPU and the brand of memory it uses (use GPU-Z for the latter).
Edit: It's telling me that the last ~400MB of whichever one of the 670s it's testing is 4 GB/s, which I find rather odd.
Maybe you should provide instructions for what we're supposed to do with Nvidia Inspector?