
AMD Radeon RX 7900 XT Flagship RDNA 3 Graphics Card Could Reach Almost 100 TFLOPs, Navi 31 GPU Frequency Hitting Over 3 GHz

OK fair enough, I thought there might be some MS games you have a slight interest in.

When it comes to power though, you can always get more power on PC compared to Mac at the same price point. Are video editing programs that much better on the Mac side of things? I always hear Macs are great for that type of work, but I've always wondered what the difference is.

I just feel like every Apple product is grossly overpriced!
I don't think the basic M1 Macs are that overpriced for what they offer; they just cater to a very specific crowd and certainly are not focused on games at all. Comparable laptops will be very expensive as well, so it's a matter of preference now. It actually sucks that they don't care that much about PC gaming, because their hardware is really interesting.
 

Bo_Hazem

Banned
OK fair enough, I thought there might be some MS games you have a slight interest in.

When it comes to power though, you can always get more power on PC compared to Mac at the same price point. Are video editing programs that much better on the Mac side of things? I always hear Macs are great for that type of work, but I've always wondered what the difference is.

I just feel like every Apple product is grossly overpriced!

I don't like either Apple or MS, and before the M1, Apple was overhyped as fuck, mind you. It's only now that the Mac is really "cheap," has so much value, and gives much more power per buck vs Windows PCs. The closest thing to a Mac is having an Intel APU, but man, the Mac Studio is so powerful that their own Mac Pro at around $15,000 is way inferior due to x86 vs ARM architecture. Still, for 3D graphics a desktop PC is better, but not massively so, especially if videos/photos are your main thing.

Overall, for video editing most M1 Macs can embarrass a 3090 PC; even the Mac Mini M1 from 2020 is more powerful than most PCs at a $700-900 price tag (8-16GB of RAM). Also, Nvidia/AMD lack 4:2:2 H.265 hardware encoding; only Intel has that. So it's either a fully Intel PC or a Mac.
 

Bo_Hazem

Banned
I don't think the basic M1 Macs are that overpriced for what they offer; they just cater to a very specific crowd and certainly are not focused on games at all. Comparable laptops will be very expensive as well, so it's a matter of preference now. It actually sucks that they don't care that much about PC gaming, because their hardware is really interesting.

Yeah, if they ever take gaming seriously it's gonna be really, really beneficial for them. Not sure if that's an arrogant stance from them or what, but their M1s can run games pretty well according to native 3D tests. If I were Sony, MS, Steam, etc. I would FIGHT to get my store there and let them enjoy the 30% cut.

Note: If you're into 3D graphics you should be careful as there is no Epic Games/Unreal Engine on Mac now, which actually hurts Apple way more than it hurts Epic Games.
 

JohnnyFootball

GerAlt-Right. Ciriously.
Always wondered if 4K is worth it on PC right now or with RTX 4xxx. Seems that 1440p 144Hz would be better? I don't know if there's much difference between 1440p and 4K on a 27" monitor.
I have both a 4K/120 LG C1 OLED and a 1440p/170Hz monitor.

So far, I have had a better gaming experience on my C1 at 120Hz than my IPS 1440p at 170Hz. The smoothness is comparable despite the IPS having a 42% framerate advantage. And yes, I am running older games that can far exceed those framerates. That's just in framerate. In image quality the C1 is drastically superior due to near-perfect contrast. (I say near-perfect since OLED struggles with near blacks as opposed to perfect blacks.)

Still, my experience with the IPS has been a good one. For $330 it was a bargain.
 

JohnnyFootball

GerAlt-Right. Ciriously.
Intel probably won't compete much in the gaming space anyway, at least at first. But they might do wonders for availability if lower-level production users snatch them up.
The thing that I am worried about with Intel is drivers and support. Their GPUs on paper look to have decent horsepower, but if the drivers are terrible and cause issues, then they're gonna have a bad time.

Intel also has to worry about the Raja Koduri curse and sadly that is looking more and more like a reality.
 

RoadHazard

Gold Member
It's probably true but we are past the point where it matters.

TF is the new bits

I don't think that's exactly true. The number of floating point operations you can do per second is directly related to how many pixels you can shade per second, i.e. resolution and framerate. Sure, I guess once you can comfortably do 4K120 throwing even more flops at it might not be meaningful, but there are always more expensive rendering techniques waiting to happen. And there's 8K (4x the pixels of 4K), 144fps (or why not 200+), etc. So I don't think it's like bits at all.
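A rough back-of-envelope sketch of that point: how many FP32 operations a given TFLOPs figure buys per pixel per frame at a target resolution and framerate. The 92 TF figure is just the rumored Navi 31 number from the thread, and the calculation ignores bandwidth, occupancy, and fixed-function work; it's purely illustrative.

```python
# Back-of-envelope: FP32 ops available per pixel per frame for a given TFLOPs
# figure. Illustrative only; real GPUs are also limited by bandwidth, occupancy,
# and fixed-function work.

def flops_per_pixel(tflops: float, width: int, height: int, fps: int) -> float:
    pixels_per_second = width * height * fps
    return (tflops * 1e12) / pixels_per_second

# ~92 TF at 4K120 vs 8K120: the per-pixel budget quarters as the pixel count quadruples.
print(f"4K120: {flops_per_pixel(92, 3840, 2160, 120):,.0f} FLOPs/pixel/frame")
print(f"8K120: {flops_per_pixel(92, 7680, 4320, 120):,.0f} FLOPs/pixel/frame")
```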
 
Yeah, if they ever take gaming seriously it's gonna be really, really beneficial for them. Not sure if that's an arrogant stance from them or what, but their M1s can run games pretty well according to native 3D tests. If I were Sony, MS, Steam, etc. I would FIGHT to get my store there and let them enjoy the 30% cut.

Note: If you're into 3D graphics you should be careful as there is no Epic Games/Unreal Engine on Mac now, which actually hurts Apple way more than it hurts Epic Games.
I don't think Steam needs to give Apple 30% to be on macOS. The problem is that there are very few ports, especially now with the change to the M1.
 
Not sure; if they push Apple hard enough, I think by law it shouldn't be able to charge them 30% outside its own store, something like Windows. But it seems like regulators aren't that tough when it comes to Apple?
There is already Steam for macOS; it's only a matter of support. Apple won't do anything and won't take any cut. Apple doesn't even get a cut if you use Steam on iOS, since the games you buy are for another system.
 
I don't like either Apple or MS, and before the M1, Apple was overhyped as fuck, mind you. It's only now that the Mac is really "cheap," has so much value, and gives much more power per buck vs Windows PCs. The closest thing to a Mac is having an Intel APU, but man, the Mac Studio is so powerful that their own Mac Pro at around $15,000 is way inferior due to x86 vs ARM architecture. Still, for 3D graphics a desktop PC is better, but not massively so, especially if videos/photos are your main thing.

Overall, for video editing most M1 Macs can embarrass a 3090 PC; even the Mac Mini M1 from 2020 is more powerful than most PCs at a $700-900 price tag (8-16GB of RAM). Also, Nvidia/AMD lack 4:2:2 H.265 hardware encoding; only Intel has that. So it's either a fully Intel PC or a Mac.

Reminder that the M1 computers aren't so good because Apple's "CPU" is "so much better than the competition". The reason they're so good is that the M1 is an SoC, with a lot of dedicated hardware that's actually being used by the OS.
But that's the natural direction the market is progressing in. Now that Xilinx has joined with AMD, Intel will be forced to integrate more dedicated hardware, turning their CPUs into SoCs just like the M1. Not meaning to discredit Apple here, but if some people think this advantage will last long they'll be surprised.
 

dave_d

Member
I have both a 4K/120 LG C1 OLED and a 1440p/170Hz monitor.

So far, I have had a better gaming experience on my C1 at 120Hz than my IPS 1440p at 170Hz. The smoothness is comparable despite the IPS having a 42% framerate advantage. And yes, I am running older games that can far exceed those framerates. That's just in framerate. In image quality the C1 is drastically superior due to near-perfect contrast. (I say near-perfect since OLED struggles with near blacks as opposed to perfect blacks.)

Still, my experience with the IPS has been a good one. For $330 it was a bargain.
I'm guessing you've had a much better experience with the 1440p than the curved screen. BTW, you're reminding me I really should try out my PC with my OLED in the other room. (I've got a 55" C9. When I got that 1440p monitor the 42" C2 wasn't out yet. Who knows, I might have bought one.)
 

JohnnyFootball

GerAlt-Right. Ciriously.
I'm guessing you've had a much better experience with the 1440p than the curved screen. BTW, you're reminding me I really should try out my PC with my OLED in the other room. (I've got a 55" C9. When I got that 1440p monitor the 42" C2 wasn't out yet. Who knows, I might have bought one.)
The curved G9 screen was returned. There were too many quality control issues and I lost all patience to deal with the bullshit. Windows simply isn't ready for 32:9 and it's frustrating how many obvious features are missing.
 

rodrigolfp

Haptic Gamepads 4 Life
The 3000 series did not wipe AMD's 6000 series. Without DLSS the 6900XT would be the best price/performance card
You sure?
 
The thing that I am worried about with Intel is drivers and support. Their GPUs on paper look to have decent horsepower, but if the drivers are terrible and cause issues, then they're gonna have a bad time.

Yeah, getting good, game-specific driver updates probably won't be as consistent as with Nvidia/AMD, at least at the beginning. That's why I'm figuring it will take them a couple of years to mature. I guess they could always surprise everyone.
 

JohnnyFootball

GerAlt-Right. Ciriously.
You sure?
So you're arguing that a $2000+ card offers better price/performance than a $900 6800XT or $1000 6900XT?



In order for that to be true, you'd have to get over DOUBLE the framerates in most games.

That didn't happen.
[4K benchmark chart]
 

rodrigolfp

Haptic Gamepads 4 Life
So you're arguing that a $2000+ card offers better price/performance than a $900 6800XT or $1000 6900XT?



In order for that to be true, you'd have to get over DOUBLE the framerates in most games.

That didn't happen.
[4K benchmark chart]
So you are only looking at the performance of the 3090 Ti? The 6900 XT performs the same as a 3060 Ti at 1440p; its RT performance sucks.
 

Klik

Member
I have both a 4K/120 LG C1 OLED and a 1440p/170Hz monitor.

So far, I have had a better gaming experience on my C1 at 120Hz than my IPS 1440p at 170Hz. The smoothness is comparable despite the IPS having a 42% framerate advantage. And yes, I am running older games that can far exceed those framerates. That's just in framerate. In image quality the C1 is drastically superior due to near-perfect contrast. (I say near-perfect since OLED struggles with near blacks as opposed to perfect blacks.)

Still, my experience with the IPS has been a good one. For $330 it was a bargain.
If you had to choose between 4K/60Hz and 1440p/144Hz?
 

rodrigolfp

Haptic Gamepads 4 Life
This discussion is confusing to me because the 3090 Ti and 6900 XT are the worst value parts in their product families. The best value part at RRP for RT would probably be a 3060 Ti.
No one said the 3090Ti is the best value. My link was to show the performance for various cards.
 
If a massive leap like this happens I think the PS5 Pro becomes inevitable down the line (if it already isn't). It became clear with the shortages that a lot of people are willing to pay more than $500. If it makes sense for MS to have the Series S and Series X, Sony could have the PS5 Slim and the PS5 Pro two years from now.
 

Hezekiah

Banned
Nobody buying a $1500 100TF GPU gives a flying fuck about power efficiency. Same goes for anyone buying a desktop or dGPU at any price. Literally no one's going to opt for an objectively slower part to save 100W under load. Unlimited power is the whole point of having a 3 cubic foot box sitting on your desk. You guys who continue to push this narrative for AMD are why they hold less than 5% dGPU marketshare. Based on the OP chart the 4090 will have a 30%+ advantage over a 7900XT when factoring in its RT/DLSS gains. Even if AMD gave them away, people would just sell them and buy 4090s with the proceeds.
 

skneogaf

Member
Unless AMD forces the situation, I don't see Nvidia doing more than a 50 percent increase in performance.

So I believe if I get 90 fps with my 3090 Ti card, I'll hit 120 fps with a 4090 Ti.

We always think we'll get more, but it's best to be conservative!
 

YeulEmeralda

Linux User
It is only a matter of time before the EU and California write some legislation to rein in these power-hungry GPUs.

(Personally I'm an atheist without kids and so don't rightly give a flying fuck what happens to this planet).
 

winjer

Member
Still shitton of transistors.

DRAM libraries like these are much denser in transistors than GPU or CPU logic libraries, so it doesn't use as much die space.
But still, you are right that it will use up a good amount of space on wafers.
 
That chart's slightly messed up; how do you get 32 GB from a 256-bit bus? Are 4 GB-capacity GDDR6X modules on the market? I've heard about 4 GB modules being possible, but seen nothing of them yet.

Also thank goodness for Infinity Cache; good luck feeding 92 TF with 576 GB/s bandwidth without it. Dude I can't wait until GDDR is left behind for HBM; even HBM2E would be a blessing, let alone HBM3. We're probably two more generations out from HBM being affordable enough for a consumer-grade GPU but the M100 shows promise, and I think MLID is right about that being a clue about RDNA 4 and definitely RDNA 5 architectures.
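For context, here's the quick arithmetic behind that 576 GB/s figure and why a big cache matters. A minimal sketch, assuming 18 Gbps GDDR6 on a 256-bit bus (which is what 576 GB/s implies); the 92 TF number is the rumored figure from this thread, not a confirmed spec.

```python
# Quick arithmetic behind the "576 GB/s" figure and the bytes-per-FLOP squeeze
# that makes Infinity Cache necessary. Illustrative numbers from the thread.

def gddr_bandwidth_gbps(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s = bus width (bits) x per-pin rate (Gbps) / 8."""
    return bus_width_bits * pin_rate_gbps / 8

bw = gddr_bandwidth_gbps(256, 18)          # 576 GB/s
bytes_per_flop = (bw * 1e9) / (92 * 1e12)  # bytes you can move per FP32 op at 92 TF
print(f"{bw:.0f} GB/s -> {bytes_per_flop:.4f} bytes/FLOP at 92 TF")
```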
 

Xyphie

Member
That chart's slightly messed up; how do you get 32 GB from a 256-bit bus? Are 4 GB-capacity GDDR6X modules on the market? I've heard about 4 GB modules being possible, but seen nothing of them yet.

You get to 32GB by clamshelling 2x16Gbit modules to each 32-bit GDDR6 controller. 32GB is likely a Pro/"Special Edition" card thing only, the regular 7800XT is probably a 16GB card.
 
You get to 32GB by clamshelling 2x16Gbit modules to each 32-bit GDDR6 controller. 32GB is likely a Pro/"Special Edition" card thing only, the regular 7800XT is probably a 16GB card.

Oh yeah, forgot all about that! PS4 did that early on too, didn't it? Also, modules in clamshell mode operate at half rate, right? They give half the bandwidth they'd normally give, IIRC.
 

tusharngf

Member
AD102 144 SM RTX 4090 Ti
2.5 GHz = 92 TF
3 GHz = 110.5 TF

AD103 84 SM RTX 4080
2.5 GHz = 53.7 TF
3 GHz = 64.5 TF

AD104 60 SM RTX 4070
2.5 GHz = 38.4 TF
3 GHz = 46 TF

AD106 36 SM RTX 4060
2.5 GHz = 23 TF
3 GHz = 27.6 TF

AD107 24 SM RTX 4050 Ti
2.5 GHz = 15.4 TF
3 GHz = 18.4 TF




If Nvidia has fixed FP32 utilisation in the Ada GPUs, then the FP32 numbers will be fake no more and AMD will be done.
Ampere is FP32 + FP32/INT32 (half of the FP32 units are shared with INT32; this is where those "fake" teraflops come from).
If Ada Lovelace is FP32 + FP32 + INT32, then AMD is cooked.



A 4070 in that 38-40 TF range means it's a 3090-level GPU!! We will have to see how its performance compares to the old-gen cards.
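A quick sanity check of the TF figures in the list above. This is a sketch assuming the rumored Ada SMs keep Ampere's 128 FP32 lanes per SM and counting an FMA as two ops (the usual marketing-TFLOPs math); the SM counts and clocks come from the leak, not from Nvidia.

```python
# Sanity check of the leaked TF figures: SMs x 128 FP32 lanes x 2 ops (FMA) x clock.
# SM counts/clocks are from the leak; 128 lanes/SM is an assumption carried over
# from Ampere.

def tflops(sm_count: int, clock_ghz: float, fp32_per_sm: int = 128) -> float:
    return sm_count * fp32_per_sm * 2 * clock_ghz / 1000

for name, sms in [("AD102", 144), ("AD103", 84), ("AD104", 60), ("AD106", 36), ("AD107", 24)]:
    print(f"{name}: {tflops(sms, 2.5):.1f} TF @ 2.5 GHz, {tflops(sms, 3.0):.1f} TF @ 3 GHz")
```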
 

Xyphie

Member
Oh yeah, forgot all about that! PS4 did that early on too, didn't it? Also, modules in clamshell mode operate at half rate, right? They give half the bandwidth they'd normally give, IIRC.

Yes. With clamshelling each individual chip gets addressed at 16-bit instead of 32-bit (well, technically 2x16-bit for GDDR6). Total bandwidth doesn't drop because you just spread data over more chips.
 
That chart's slightly messed up; how do you get 32 GB from a 256-bit bus? Are 4 GB-capacity GDDR6X modules on the market? I've heard about 4 GB modules being possible, but seen nothing of them yet.

Also thank goodness for Infinity Cache; good luck feeding 92 TF with 576 GB/s bandwidth without it. Dude I can't wait until GDDR is left behind for HBM; even HBM2E would be a blessing, let alone HBM3. We're probably two more generations out from HBM being affordable enough for a consumer-grade GPU but the M100 shows promise, and I think MLID is right about that being a clue about RDNA 4 and definitely RDNA 5 architectures.

What about GDDR7?
 
Yes. With clamshelling each individual chip gets addressed at 16-bit instead of 32-bit (well, technically 2x16-bit for GDDR6). Total bandwidth doesn't drop because you just spread data over more chips.

Ok, I think I understand. When I mentioned the bandwidth, I just meant per module. You essentially get the same total bandwidth you'd expect from that bus width at a given I/O pin rate; it's just that you're using double the physical capacity and half the bandwidth per module to do it.

At least that's how I believe it works. For example, if a single 14 Gbps 2 GB module on a 256-bit bus would give you 56 GB/s of bandwidth in 32-bit mode, it drops to 28 GB/s in 16-bit mode. The tradeoff is that you can double physical capacity using the same module densities (32 GB vs 16 GB, as the leak in the OP is hinting at).

I only really remember this because Sony were forced to use it with launch-period PS4s, but then 8 Gbit modules became available, so they revised the setup for later PS4 systems.
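Here's a small sketch of that clamshell trade-off, using the same 14 Gbps / 2 GB module numbers discussed above: each 32-bit controller drives two chips at 16 bits each, so capacity doubles while total bandwidth stays tied to the bus width. The numbers are illustrative, not a claim about any specific card.

```python
# Clamshell trade-off: two chips per 32-bit controller, each addressed at 16 bits.
# Capacity doubles, per-module bandwidth halves, total bandwidth is unchanged.

def module_bandwidth(pin_rate_gbps: float, io_width_bits: int) -> float:
    return pin_rate_gbps * io_width_bits / 8  # GB/s per module

def config(bus_bits: int, pin_rate: float, module_gb: int, clamshell: bool) -> dict:
    width = 16 if clamshell else 32
    modules = bus_bits // 32 * (2 if clamshell else 1)
    return {
        "capacity_GB": modules * module_gb,
        "per_module_GBps": module_bandwidth(pin_rate, width),
        "total_GBps": modules * module_bandwidth(pin_rate, width),
    }

print(config(256, 14, 2, clamshell=False))  # 16 GB, 56 GB/s per module, 448 GB/s total
print(config(256, 14, 2, clamshell=True))   # 32 GB, 28 GB/s per module, 448 GB/s total
```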

And people were fighting over 2TF difference (PS5 vs XSX) LOL :messenger_beaming:. With 100TF I bet even UE5 games will run like a dream.

Hopefully. It depends on how much those games use mesh shading (TF performance is actually more useful for that vs. the fixed-function graphics pipeline), and probably also on how much of a rasterization bump we get.

What about GDDR7?

It'll probably do for another generation, maybe even two, of GPUs but you're always going to be limited by module I/O density (32-bit), size of GDDR memory controllers (which unless AMD, Nvidia and Intel move to chipletizing the memory controllers, will factor into die sizes), and real-estate costs on the PCB (you can't stack GDDR modules like you can HBM). Also on average GDDR consumes more power than HBM, and has less room for design innovations (there are already Samsung HBM2E modules with PIM functionality built in and partially handled by the interposer. You can't really do that with GDDR).

Then there are bandwidth limitation issues, too. We've never seen a GPU with a bus wider than 512-bit for GDDR, and even with some theoretical 48 Gbps GDDR7X or whatever module, your bandwidth is limited to about 3 TB/s. Maybe 4 GB-capacity modules become possible, so you can fit 64 GB on a card, but power consumption on that is going to be heavy. At that point why not just switch over to HBM, which also brings way lower latency, power consumption, and real-estate costs on the PCB?
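Working out that ceiling explicitly, under the same speculative assumptions as the post (a hypothetical 48 Gbps "GDDR7X" part, a 512-bit bus, and 4 GB modules):

```python
# The GDDR bandwidth/capacity ceiling described above, worked out.
# Pin rate and module size are speculative values from the post.

def peak_bandwidth_tbps(bus_bits: int, pin_rate_gbps: float) -> float:
    return bus_bits * pin_rate_gbps / 8 / 1000  # TB/s

def max_capacity_gb(bus_bits: int, module_gb: int, clamshell: bool = False) -> int:
    return bus_bits // 32 * module_gb * (2 if clamshell else 1)

print(f"{peak_bandwidth_tbps(512, 48):.2f} TB/s")  # ~3.07 TB/s on a 512-bit bus
print(f"{max_capacity_gb(512, 4)} GB")             # 64 GB with 4 GB modules, no clamshell
```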
 

Haint

Member
A 4070 in that 38-40 TF range means it's a 3090-level GPU!! We will have to see how its performance compares to the old-gen cards.

The 4070 will be a $700+ card and the 3090 will be over 2 years old at that point, so nothing unexpected there. The 4090/Titan/Ti will all but assuredly be a full fat $2000 though with a real world performance advantage of 80%+ over a 4080. I'd expect the fake "FE pricing" to be $599, $799, and $1799 for the 4070, 4080, and 4090, with 99% of units being AIBs priced a minimum of $100 higher, reaching $500+ higher for the exotic 4090s. No 4090 AIBs will exist below $2000, and no 4080s below $900. There may be a blower style 4070 for $689.
 