TheChumpNation
Member
For those saying who cares, DLSS is better - sure, but competition will keep prices down and ensure Nvidia can't rest on their laurels. There's no downside to PSSR on PC, from a PC gamer's perspective.
If a game already has PSSR optimized for a PC port, it can make a lot of sense.
> They aren't low resolution, but unclean, high-noise games on any platform, at native, at any resolution. Using them as proof is disingenuous when DLSS produces better but ultimately still poor stability results in the same games, where the highest-resolution native image is the least offensive.

If the results were reversed, you'd be using that as proof that PSSR is better. Don't be a hypocrite.
> If the results were reversed, you'd be using that as proof that PSSR is better. Don't be a hypocrite.

Complete nonsense. With interests in signal processing and graphics programming, I want the solutions devs choose to produce low-noise images, like PlayStation first-party or Kojima games, on all platforms and in all games.
> Will Sony ever bring PSSR to their PC ports, so they can optimize and update it the way Nvidia does DLSS, given that both Nvidia and AMD support AI features on their GPUs?

As far as I remember, PSSR uses new hardware that is currently only available in the PS5 Pro GPU, which will apparently be added to future AMD PC GPUs. So it would at least be possible on those future AMD GPUs, but I'd say it's not likely, since Sony may want to keep it as a PS5 Pro selling point.
In my view, there are many PC users who play games at 1080p, which could help Sony make PSSR scale better at lower resolutions.
> I don't remember where I saw this, but Silent Hill 2 on PC still had the PSSR calls in the files, and it's a 100% exact match to the XeSS calls, which I found curious. Of course it's not the same SDK, so it doesn't mean much; it's just called the same way in the engine.

If you can find this I'd love to read about it.
If Sony wants to give PSSR / FSR / XeSS / DLSS / DLAA options in their games, why the hell not. Don't try to force just PSSR though.
I suspect the question will be answered if we see PSSR on the OG PS5, say in a game like Returnal, which was already 60fps at launch and could take a hit to native resolution to free up compute for PSSR, on top of all the SDK improvements that have freed up resources since it was last patched.
If PSSR comes to the OG PS5, then I'd expect it to come to PC ports of PS5 games too.
PC has no obligation to help Sony support PSSR here, especially when it already has several superior choices.
If PSSR is their upscaling tech of choice moving forward onto new hardware, like the PS6, then Sony's games will require it. So, they'll be forced to port it to PC hardware eventually.
> Not really. All these reconstruction methods use the exact same data from the engine. This is why you can replace FSR2 with DLSS, or the other way around, in games that don't support both, simply by replacing a handful of files or modifying some files slightly. It's easy enough that there have been DLSS mods for games on the literal day they released.

How many games have been developed that cannot function without DLSS? As in, games that do not offer the option for native rendering of any kind?
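For what it's worth, here's a minimal sketch of why that swap can work under the poster's claim: all of these SDKs consume roughly the same per-frame engine data, so a thin adapter can hand the same buffers to whichever one is loaded. Every name below is illustrative and hypothetical, not any vendor's real API.

```python
# Illustrative sketch (hypothetical names, not any vendor's real API) of the
# post's point: FSR2/DLSS/XeSS all consume the same per-frame engine data,
# which is why a wrapper DLL or file swap can redirect one to another.
from dataclasses import dataclass
from typing import Any

@dataclass
class UpscalerInputs:
    color: Any             # jittered low-resolution color buffer
    depth: Any             # depth buffer
    motion_vectors: Any    # per-pixel motion vectors from the engine
    jitter: tuple          # this frame's sub-pixel jitter offset
    exposure: float        # scene exposure value

class Backend:
    """Stand-in for whichever SDK (FSR2/DLSS/XeSS) the wrapper loads."""
    def evaluate(self, inputs: UpscalerInputs) -> Any:
        return inputs.color  # placeholder for the real upscale dispatch

def upscale(backend: Backend, inputs: UpscalerInputs) -> Any:
    # The adapter only reroutes the same buffers, which is why file swaps
    # can work without engine changes.
    return backend.evaluate(inputs)
```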
> It won't come to base PS5, simply because it would muddy the PR strategy of the Pro alone. But besides that, the base PS5 doesn't have the necessary RDNA2 hardware to accelerate ML code, which means it would run extremely slowly and would probably result in severely lower performance even at significantly reduced resolutions. Even on RDNA2 PC cards, when using XeSS for example, the performance you gain is very minor and only really helps if you are off by maybe 5-10fps from your target framerate. And that's with DP4a acceleration, which the base PS5 lacks. So, yeah... I don't believe it's possible, even if there weren't a PR incentive not to bring it to the base console.

The RDNA info is flat-out wrong, and I've lost count of the number of times I've referenced the ML AI solution and optimisation in the Ragnarok paper running on the OG PS5, showing that PSSR could run on the OG PS5. The only question is at what native resolution and frame-rate, and to what output resolution, you could free up enough resources on the OG PS5 to do it, because it is just matrix maths at the end of the day; any of these solutions could run on any CPU of the last 20 years, provided it had enough time and access to enough RAM and storage.
> But the fur is next-gen cinematic on PSSR in comparison to basic rasterised fur on DLSS, so it isn't one-way.

Next-gen cinematic vs basic rasterised? It looks better in stills, but in actual motion the differences are minor, especially from a regular viewing distance. PSSR is indeed better there, but there's no need for needlessly hyperbolic statements like that.
I suspect PSSR has less to literally 'learn' in training to fix those issues you mentioned in R&C, compared to the depth of training DLSS needs to catch up on specific effects like fur.
> The RDNA info is flat-out wrong, and I've lost count of the number of times I've referenced the ML AI solution and optimisation in the Ragnarok paper running on the OG PS5, showing that PSSR could run on the OG PS5. The only question is at what native resolution and frame-rate, and to what output resolution, because it is just matrix maths at the end of the day; any of these solutions could run on any CPU of the last 20 years given enough time, RAM, and storage. As for the PR strategy, it is only important at the beginning, for Pro sales. What will matter more to devs is a unified workflow, so that testing lower-native 60fps PSSR modes on the OG PS5 works the same as on the Pro, just with a few bumped native settings, rather than the current situation of something like six different modes across two hardware SKUs to develop and test.

Highly doubt PSSR is ever seeing the light of day on the OG PS5. It can already cost 2ms to upscale from 1080p to 4K on the Pro, with much better ML hardware. How long would it take on the PS5? It likely isn't viable.
> Next-gen cinematic vs basic rasterised? It looks better in stills, but in actual motion the differences are minor, especially from a regular viewing distance. PSSR is indeed better there, but there's no need for needlessly hyperbolic statements like that.

Quoting myself here, but I just want to clarify that I'm not shitting on PSSR. In Ratchet and Clank, DLSS and PSSR are extremely close (Spider-Man and The Last of Us as well). All that needs to be done is to eliminate that stability issue PSSR has, and I'd personally like the PSSR image more. Hopefully it will also sort out the issues that third-party or low-resolution games are facing.
> Next-gen cinematic vs basic rasterised? It looks better in stills, but in actual motion the differences are minor, especially from a regular viewing distance. PSSR is indeed better there, but there's no need for needlessly hyperbolic statements like that.

What next-gen fur is this referring to? I wasn't aware there was a difference in fur rendering between DLSS and PSSR.
> What next-gen fur is this referring to? I wasn't aware there was a difference in fur rendering between DLSS and PSSR.

Zoomed-in still images of Ratchet's fur, if I recall correctly. PSSR does indeed look more fur-like, because by the look of it it is better at handling very fine dithering. Zoomed out, it is extremely difficult to tell. Hence my criticism of the stability: impossible to notice in stills, but it becomes apparent when looking at the entire image in motion.
> Zoomed-in still images of Ratchet's fur, if I recall correctly. PSSR does indeed look more fur-like, because by the look of it it is better at handling very fine dithering. Zoomed out, it is extremely difficult to tell. Hence my criticism of the stability: impossible to notice in stills, but it becomes apparent when looking at the entire image in motion.

Do you have screenshots or timestamps? I don't remember that.
> Do you have screenshots or timestamps? I don't remember that.

It's actually in a post you responded to; check the screenshots here:
> Next-gen cinematic vs basic rasterised? It looks better in stills, but in actual motion the differences are minor, especially from a regular viewing distance. PSSR is indeed better there, but there's no need for needlessly hyperbolic statements like that.

I'm guessing you've never tried rendering hair/fur in a 3D package, to appreciate how long it takes at that kind of particle count (vs rasterised fur) to make the difference visible in stills, and to appreciate the difference between what PSSR is training against compared to DLSS, no?
> I'm guessing you've never tried rendering hair/fur in a 3D package, to appreciate how long it takes at that kind of particle count (vs rasterised fur) to make the difference visible in stills, and to appreciate the difference between what PSSR is training against compared to DLSS, no?

It's handling the edge dithering better via an ML algorithm, not exactly rendering each hair follicle. It does this better with some foliage shots in other games as well. It is better than DLSS in that regard (as I said), but compare footage between the two and it's not exactly the difference between basic and next-gen.
Put it this way: it is beyond what a 4090 can do in real-time at 30fps, so there is nothing hyperbolic about the statement.
> Highly doubt PSSR is ever seeing the light of day on the OG PS5. It can already cost 2ms to upscale from 1080p to 4K on the Pro, with much better ML hardware. How long would it take on the PS5? It likely isn't viable.

2ms to infer 6M pixels of an 8M-pixel output from 2M native pixels, on hardware that is at best 4x more powerful than the OG PS5 at ML AI, would mean that in the worst case 1.5M pixels (6/4) could be inferred on the OG PS5 within a similar 2ms of rendering headroom. That probably means 720p native on the PS5 (about 0.9M pixels) plus 1.5M inferred pixels, giving roughly 2.4M pixels, which is about 1200p, but with higher-quality inferred IQ.
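As a sanity check, the post's arithmetic can be reproduced directly; every input here is the post's own assumption (the 2ms window and the 4x Pro-vs-PS5 ML gap), not a measured spec:

```python
# Back-of-the-envelope check of the post's pixel budget. All inputs are the
# post's assumptions, not specifications.
pro_native = 1920 * 1080                    # ~2.07M pixels rendered natively
pro_output = 3840 * 2160                    # ~8.29M-pixel 4K output
inferred_on_pro = pro_output - pro_native   # ~6.2M pixels inferred in ~2ms

ml_gap = 4                                   # assumed Pro : OG PS5 ML ratio
inferred_on_ps5 = inferred_on_pro / ml_gap   # ~1.55M pixels in the same 2ms

ps5_native = 1280 * 720                      # 720p, ~0.92M pixels
total = ps5_native + inferred_on_ps5         # ~2.48M pixels
height = (total * 9 / 16) ** 0.5             # height of a 16:9 frame
print(f"{total / 1e6:.2f} Mpx, roughly {height:.0f}p")  # ~2.48 Mpx, ~1180p
```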
> It's handling the edge dithering better via an ML algorithm, not exactly rendering each hair follicle. It does this better with some foliage shots in other games as well. It is better than DLSS in that regard (as I said), but compare footage between the two and it's not exactly the difference between basic and next-gen.

It is more than the edges IMO, and it is inferring toward offline fur rendering, rather than the cheap real-time raster DLSS is seemingly targeting.
> 2ms to infer 6M pixels of an 8M-pixel output from 2M native pixels, on hardware that is at best 4x more powerful than the OG PS5 at ML AI, would mean that in the worst case 1.5M pixels could be inferred on the OG PS5 within a similar 2ms of rendering headroom: 720p native (about 0.9M pixels) plus 1.5M inferred pixels gives roughly 2.4M, about 1200p, with higher-quality inferred IQ.

The PS5 does not support INT8, so it would need to fall back on FP16 capability, which is 20.56 TFLOPS, versus the Pro's INT8 figure of 300 TOPS (that figure is from the leak; the 16.7 TFLOPS number in the manual would give 267 TOPS assuming WMMA).
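To put numbers on that gap, taking the post's figures at face value: the implicit assumption is that WMMA INT8 runs at 16x the FP32 rate, which is what makes the manual's 16.7 TFLOPS line up with 267 TOPS.

```python
# The post's figures, taken at face value; the 16x multiplier is the
# assumption that makes 16.7 TFLOPS line up with 267 TOPS.
ps5_fp16 = 20.56                     # TFLOPS, base PS5 FP16 fallback
pro_fp32 = 16.7                      # TFLOPS, Pro figure from the manual
pro_int8_wmma = pro_fp32 * 16        # 267.2 TOPS assuming 16x WMMA INT8
pro_int8_leak = 300                  # TOPS, leaked figure

print(pro_int8_wmma)                 # 267.2
print(pro_int8_leak / ps5_fp16)      # ~14.6x raw gap vs base PS5 FP16
print(pro_int8_wmma / ps5_fp16)      # ~13.0x with the manual's number
```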
> It is more than the edges IMO, and it is inferring toward offline fur rendering, rather than the cheap real-time raster DLSS is seemingly targeting.

They are both doing the exact same thing: altering the image based on the underlying ML model and the temporal data fed into it. It's just that in this case PSSR is doing it better.
> If PSSR comes to the OG PS5, then I'd expect it to come to PC ports of PS5 games too.

How would the PS5 deal with PSSR without the Pro's custom ML hardware? Its rendering cost is already significant even on a system tailored to it; I am not seeing how it would be practical. Besides, PSSR is nearly the main selling point of the PS5 Pro, so there is also the business side to consider.
If it delivers quality close to or better than DLSS, or whatever FSR4 turns out to be, then why not? I think they should.
> How many games have been developed that cannot function without DLSS? As in, games that do not offer the option for native rendering of any kind?
As I said: if PlayStation moves ahead with PSSR in their hardware, their games will be built with it. They could swap it out for DLSS, but if it's a requirement, then they've locked out AMD owners, which they won't do. Instead, they'll most likely develop PSSR such that it can run on AMD or Nvidia GPUs of that generation (PS6+) and include it in their games.
> I hope that if PSSR finishes beating DLSS quality in the future, you are not one of those port beggars.

Bubba, those ports are ours. Sony got a taste of that PC gamer dough, and they don't want to go back to a customer base that must ask mom for permission before spending money.
> The PS5 does not support INT8, so it would need to fall back on FP16 capability, which is 20.56 TFLOPS, versus the Pro's INT8 figure of 300 TOPS (from the leak; the 16.7 TFLOPS number in the manual would give 267 TOPS assuming WMMA). That is a massive difference to overcome. Even XeSS, which has a DP4a fallback path for RDNA2 and Nvidia GPUs and so can utilise INT8 on shaders, has a noticeable performance overhead vs DLSS and FSR. I'm personally not saying it won't happen with 100% certainty, just that I highly doubt it will.

Is there a document that says what hardware support PSSR actually needs? Last time I asked Copilot, it was honestly clueless. Graphics accelerators at their core are 8-bit accelerators providing accelerated workarounds, and the Ragnarok paper's ML AI solution is a 4/8-bit quantizer: BC5, which it used for quantization (descended from the original S3TC, the predecessor to DXT and now to BC), is 4/8-bit and is accelerated on the OG PS5 using that FP16 hardware. So all bets are off until we see something technical about PSSR.
> How would the PS5 deal with PSSR without the Pro's custom ML hardware? Its rendering cost is already significant even on a system tailored to it; I am not seeing how it would be practical. Besides, PSSR is nearly the main selling point of the PS5 Pro, so there is also the business side to consider.

It depends on where the custom aspect sits technologically, IMO. If it is custom relative to something prior to RDNA3/4, which I believe it is, then dual-issue would count as custom. If it is more than that, then I agree it presents a performance barrier that the OG PS5 probably can't bridge with lower native and lower output resolutions alone.
> Is there a document that says what hardware support PSSR actually needs? Last time I asked Copilot, it was honestly clueless. Graphics accelerators at their core are 8-bit accelerators, and the Ragnarok paper's ML AI solution is a 4/8-bit quantizer accelerated on the OG PS5 using its FP16 hardware, so all bets are off until we see something technical about PSSR.

DLSS and XeSS both use INT8. I see no reason why PSSR would use the less performant FP16. Maybe it could use INT4, but that would further widen the gap between the base PS5 and the Pro.
> As for the fur with PSSR, you don't have the necessary background if you think they are both trying to infer toward the same result and PSSR is just doing it better. DLSS is trying to infer toward a high-resolution rasterisation of the fur effect. PSSR is aiming higher than that: it is either aiming for an offline render of the fur, with render times measured in minutes, or inferring toward real animal fur, like Sony's X1 TV chip does.

PSSR/XeSS/DLSS all do the same thing. They are ML models running on a temporal upscaler. That ML model is designed to eliminate ghosting, reduce shimmer, provide anti-aliasing, remove stair-stepping, increase texture detail, remove disocclusion artifacts, and enhance fine details (wires, lines, mesh, fur, etc.). All of this is run against the temporal data (motion vectors and a few other things) fed into the algorithm by the upscaler. A higher resolution gives it more information to work with and thus produces better results.
> DLSS and XeSS both use INT8. I see no reason why PSSR would use the less performant FP16. Maybe it could use INT4, but that would further widen the gap between the base PS5 and the Pro.

Your response is all guesswork, and tells me you don't fully understand how ML AI works.
PSSR/XeSS/DLSS all do the same thing. They are ML models running on a temporal upscaler. That ML model is designed to eliminate ghosting, reduce shimmer, provide anti-aliasing, remove stair-stepping, increase texture detail, remove disocclusion artifacts, and enhance fine details (wires, lines, mesh, fur, etc.). All of this is run against the temporal data (motion vectors and a few other things) fed into the algorithm by the upscaler. A higher resolution gives it more information to work with and thus produces better results.
Based on the evidence so far, PSSR is really good at eliminating ghosting, removing disocclusion artifacts, enhancing fine detail, and smoothing out stair-stepping. It appears to be average at texture detail enhancement, and bad at reducing shimmer and keeping the overall image stable. It will inevitably improve as Sony works on and fine-tunes the ML model. Based on the results across a wide range of games, nothing appears different from any other modern ML upscaler.
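To make the claimed commonality concrete, here is a minimal per-frame skeleton of a temporal ML upscaler of the kind described above. All names are illustrative stubs, not PSSR's, DLSS's, or XeSS's actual internals.

```python
# Minimal skeleton of the shared structure the post describes. All names
# are illustrative stubs, not any real SDK's internals.

def reproject(history, motion_vectors):
    """Warp last frame's high-res output toward the current frame (stub);
    a real upscaler samples the history along the motion vectors."""
    return history

class UpscaleModel:
    def infer(self, color_lowres, warped_history, depth):
        """Learned blend (stub): reject disoccluded or stale history
        samples, suppress shimmer and ghosting, reconstruct fine detail."""
        return color_lowres

def upscale_frame(model, color_lowres, motion_vectors, depth, history):
    warped = reproject(history, motion_vectors)
    output = model.infer(color_lowres, warped, depth)
    return output, output   # the output doubles as next frame's history
```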
> Your response is all guesswork, and tells me you don't fully understand how ML AI works.

Educated guesses based on how other ML upscalers function and the results from over a dozen games. I'm not pulling the information from nowhere; it is literally how DLSS and XeSS function.
> The results of the fur in PSSR on R&C refute what you are saying, because it is inferring detail beyond DLSS's inference from training at higher-quality native, and dev comments have alluded to as much in other games using PSSR.

I've already said it's doing a better job than DLSS. Can DLSS catch up? Probably. Just like DLSS is doing a better job with image stability; PSSR can probably catch up there as well. Any statement on what is technically more impressive, or what is easier to catch up on, is pure guesswork at this point, as you stated.
> Educated guesses based on how other ML upscalers function and the results from over a dozen games. I'm not pulling the information from nowhere; it is literally how DLSS and XeSS function. But I'm noticing a dire lack of meaningful sources on your end as well, so everything on your end is guesswork too. Just brushing off anything I say with a "you obviously don't understand" is utterly meaningless without anything backing up such a statement. So go on then, provide some factual sources that explain exactly what I don't understand. I've already said it's doing a better job than DLSS. Can DLSS catch up? Probably. Just like DLSS is doing a better job with image stability; PSSR can probably catch up there as well. Any statement on what is technically more impressive, or what is easier to catch up on, is pure guesswork at this point, as you stated.

You described how DLSS/XeSS do things, which doesn't include inferencing toward something that isn't an upscale of the existing source. The fur in R&C goes beyond just multiplying the line segments of a higher-quality rasterised hair/fur render with depth of field. On the Pro, the fur is more akin to making a stick man dance, training a model on stormtroopers, and then inferencing a deepfake of the stormtroopers doing the stick man's dance. That isn't sharpening edge detail on the sticks; it's replacing them with a higher-order representation, and that's what PSSR is effectively doing.
> You described how DLSS/XeSS do things, which doesn't include inferencing toward something that isn't an upscale of the existing source. The fur in R&C goes beyond just multiplying the line segments of a higher-quality rasterised hair/fur render with depth of field. On the Pro, the fur is more akin to making a stick man dance, training a model on stormtroopers, and then inferencing a deepfake of the stormtroopers doing the stick man's dance. That isn't sharpening edge detail on the sticks; it's replacing them with a higher-order representation, and that's what PSSR is effectively doing.

Is it? Have you compared it to the native image without TAA (IGTI) present, to see exactly what is being changed? Well, I can test that, since you can completely disable TAA in the PC version if I'm recalling correctly, so I'll test it and get back to you.
> I am 99% certain it is confirmed that the base PS5 doesn't support INT8, and you would absolutely need that to run PSSR. Furthermore, even high-end RDNA2 GPUs on PC only reach around 90 TOPS, maybe 100 TOPS, using INT8, and that's when using ALL the CUs, which of course isn't viable. For comparison, an RTX 2060 has 52 TOPS that are usable in full at all times and do not interfere with the shaders.

But that 52 TOPS figure is completely misrepresented, because the tensor hardware sits behind a slower interconnect and has to work isochronously within just the time slice between the shaders finishing a native frame and the v-sync deadline to display it. So if a frame takes 14ms to render natively, the RTX 2060's tensor cores sit idle for 14ms of a 16.67ms frame, i.e. they are effectively doing only 8-9 TOPS of work per second, as they physically can't start inferencing a frame that hasn't been rasterised yet.
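Whether or not the serialisation premise holds, the arithmetic behind the 8-9 TOPS claim is easy to reproduce:

```python
# Reproduces the post's arithmetic. The premise that the tensor cores can
# only run in the post-raster time slice is the poster's claim, not a fact
# checked here.
tops_peak = 52.0        # RTX 2060 tensor throughput, per the post
frame_ms = 16.67        # 60fps frame budget
raster_ms = 14.0        # assumed native rendering time per frame

active_ms = frame_ms - raster_ms                 # ~2.67ms left for inference
effective_tops = tops_peak * active_ms / frame_ms
print(f"{effective_tops:.1f} effective TOPS")    # ~8.3, the post's 8-9 range
```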
> But that 52 TOPS figure is completely misrepresented: the tensor hardware sits behind a slower interconnect and only gets the time slice between the shaders finishing a native frame and the v-sync deadline, so if a frame takes 14ms to render natively, the tensor cores idle for 14ms of 16.67ms and effectively deliver only 8-9 TOPS.

According to leaks, the PS5 Pro AI accelerator is 300 INT8 TOPS. There's no way you can get the current PSSR model running on the PS5.
edit:
If we then factored in my earlier estimate that the Pro, which is on par with an RTX DLSS solution for resources, is 4x more powerful and infers 4x more pixels than the OG PS5, then that 8-9 TOPS becomes 2-2.25 TOPS. Ignoring the difference in units (TOPS vs half-precision FLOPS), doing it in FP16 instead would take 2.25 TFLOPS out of the OG PS5's 20.56 FP16 TFLOPS, which is just over 10%. Even if that TOPS-to-FP16 conversion were off by 100% and we said 20% was needed, 20% of a 16.67ms frame time allocated to 720p -> 1200p PSSR is only about 3.3ms, still leaving 13ms to render the 720p native frame. So it still sounds possible IMO.
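Again taking every figure as the post's own assumption (the 4x gap, the hand-waved TOPS-to-FP16 equivalence, the doubled safety margin), the arithmetic works out as claimed:

```python
# Every figure below is the post's assumption, including the loose
# TOPS-to-FP16-TFLOPS equivalence it explicitly hand-waves.
effective_tops = 8.3                    # from the RTX 2060 estimate above
ps5_share = effective_tops / 4          # ~2.1 "TOPS" after the 4x gap
ps5_fp16 = 20.56                        # TFLOPS, base PS5 FP16
fraction = ps5_share / ps5_fp16         # ~10% of FP16 throughput
fraction_padded = 2 * fraction          # doubled as a safety margin, ~20%

frame_ms = 16.67
cost_ms = fraction_padded * frame_ms    # ~3.4ms for 720p -> ~1200p PSSR
print(f"{fraction:.0%} raw, {cost_ms:.1f}ms spent, "
      f"{frame_ms - cost_ms:.1f}ms left for native rendering")
```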
> ... you can replace PSSR with simple TAA and it would look just fine.

You should DM Mark Cerny and let him know he wasted his time and Sony's R&D money.