I heavily doubt the rumors of the 480 overclocking to stock 1070 performance levels; judging by the early Firestrike benches, it would have to gain roughly 45% in performance from its OC to match the 1070. On top of that, there's no telling how this card is going to behave in normal gaming situations yet. It's a wait-and-see type of thing.
The thing is, there's no technological advancement with Maxwell 3.0. People who care a little more wouldn't feel good spending that much money on a graphics card that can't support current technologies well and is already outdated by default. What's the point of a 2016 graphics card that's designed for a 2008 API?
Then you can add all the dirty anti-consumer tactics: forcefully locking you into their brand by refusing to use whatever open standard exists while offering an alternative that only works with their hardware, plus the fact that newer GameWorks games will magically run worse on your hardware once the GTX 1100 series is released. Add it all up and you have a bunch of reasons not to buy Nvidia, and that's probably why most gamers with deeper hardware knowledge tend to prefer AMD.
No, I'm not. I can recommend Nvidia cards when they offer better perf/price, like the GTX 950 or the OC'd 980 Tis. I can also see things from a logical PoV and offer arguments.
What I can't do is choose Nvidia over anyone else for myself; I don't like how they do business, and I won't support a company that would make games "Nvidia-exclusive" if they could. PC has always been about choice.
Ohhhh come on buddy, Nvidia's business practices might be a little shady, but it works: people don't seem to care and consumers are still buying their cards. I bet if Nvidia took a shit (in a literal sense, of course) on a PCB and called it the 1080 Ti, people would still buy it.
You know the damn thing would still outsell AMD's competing card, lol... therein lies the rub.
Last edited by Bigvizz; 2016-06-19 at 01:57 PM.
Never said that, I was calling Artorius an Nvidia hater, not everyone else. Unlike most people, Artorius seems to have plenty of tech knowledge to make informed decisions, whereas most people decide based on what is available first or fall for the marketing hype, which Nvidia is a lot better at than AMD.
DX12 is better supported on Pascal. While it doesn't show gains from async compute the way AMD does, it's no longer losing performance, or at least not as much as it was under Maxwell. And seeing as developers have to implement async compute when they create a game, it's hardly surprising it doesn't run well on Nvidia: at the time of those games' creation, these cards and this architecture weren't available to optimize for.
There was one website with a theory that AMD's larger gains are partially due to their lackluster DX11 performance: the card underperforms there, and the GPU gets put to better use in DX12 via async compute. The gains would be more in line with Nvidia's if AMD's DX11 performance were better optimized. But that's untestable, and as such it will remain a theory.
Not to mention that AMD did launch Mantle, which was also proprietary to their hardware, IIRC.
This^. Smoke and mirrors, buddy, smoke and mirrors. Seriously though, this is what you're referring to.
Basically, terrible DX11 performance led to better-looking DX12 gains. Link
Last edited by Bigvizz; 2016-06-19 at 02:36 PM.
GCN was originally designed with consoles in mind, and on consoles you need to extract every last bit of performance out of not-so-powerful hardware. Since you have far better control of the hardware there, that's possible with extremely well-optimized games. Then AMD tried to bring this low-overhead control of the GPU to the PC with Mantle (which Intel had access to, BTW; Nvidia just chose not to because it wouldn't look good for them to support something from AMD), which showed a ridiculously large performance increase across the board and forced MS to do the same thing with DX12.
Mantle then matured into Vulkan, and since it's maintained by the Khronos Group now, Nvidia supports it. It offers the same benefits as DX12 while being multi-platform, which might make Linux gaming viable. AMD's plan went awfully well, and Nvidia couldn't do anything about it because Maxwell was already under development before async compute was a thing.
@Bigvizz posting DX11 benches of AotS with AMD cards doesn't make any sense; neither AMD nor Oxide ever tried to optimize the game for that scenario, simply because there's no point. For a valid comparison of what you're trying to talk about, you'd need a game that was originally DX11 for a while and then got proper DX12 support. Which means you won't ever be able to compare this properly.
The 390X and the 980 Ti trade blows in Ashes DX12. If you honestly think a 438 mm² GPU trading blows with a 601 mm² one comes down to "terrible dx11 performance led to better looking DX 12 gains", then I'm literally speechless.
We're talking about 37% more space to put transistors.
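For what it's worth, the arithmetic checks out; a quick sketch using the die sizes quoted above:

```python
# Die areas as quoted above, in mm^2
area_390x = 438    # R9 390X-class GPU
area_980ti = 601   # GTX 980 Ti-class GPU

# Extra area available for transistors, as a percentage
extra_pct = (area_980ti / area_390x - 1) * 100
print(f"{extra_pct:.0f}% more die area")  # -> 37% more die area
```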
The FX cards have a lot in common with the 1080, minus the heat output. Both weren't ready for the new DirectX. Both overclock poorly, relatively speaking, because the 1080 doesn't overclock as well as the Maxwell cards did. You lose performance in DX12. The FX cards lost a lot more performance in DX9, but it's still a similar situation.
I would keep the Fury X. I believe it to be faster. Though I hope AMD does do some big price drops across their lineup.
If you want to judge which card performs better at an API, you look at it relative to the competition, not relative to itself. Your linking a 3840x2160 bench that shows the 390X hugging the 980 Ti kind of shows that.
Take two cards that perform equally in API 1 (the baseline) in the majority of situations; if in API 2 (a newer one) either outperforms the other, that in itself is the better gain. Say the 390 and the 970: in DX11 they perform relatively the same, but in DX12 the 390 takes the edge.
Random numbers, but say you look at the 390 getting 30% extra performance vs the 970 getting 10%. Sure, you can say "oh, that's because DX11 on AMD sucks," but that's dishonest at best, because you don't compare a card against itself when you want to show which card gains more.
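A minimal sketch of that comparison, using the invented 30%/10% numbers (the 50 fps baseline is equally made up):

```python
# Hypothetical baseline: both cards roughly equal in DX11
dx11_390 = dx11_970 = 50.0   # fps, invented for illustration

# Apply the hypothetical DX12 gains from above
dx12_390 = dx11_390 * 1.30   # 390 gains 30%
dx12_970 = dx11_970 * 1.10   # 970 gains 10%

# Since the DX11 baselines are equal, "AMD's DX11 sucks" can't explain
# the DX12 result: the cross-card comparison still favors the 390.
print(f"{dx12_390:.0f} vs {dx12_970:.0f}")  # -> 65 vs 55
```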
What the hell are you on about? The Geforce 1080 has made great strides forward in VR support and is fully DX12 compliant. It's also made strides forward with Async compute if you had bothered to read up about it. What leap in technology has the new Radeon architecture made in comparison?
The 1080 is a poor overclocker? Since when? The cards all get roughly a 20% overclock, yet everyone in this thread has been praising the new Radeon for doing the exact same thing.
Its DX12 performance has been great as well. The card just came out, drivers and games aren't matured for it, and it's already getting roughly 30-35% better performance than a Fury X. In Ashes, the Geforce 1080 is roughly 40-45% faster at stock than the Geforce 980 Ti with async enabled.
Async compute is still broken. Any game that uses it will decrease frame rates on the 1080. Also, Radeon GCN cards were already able to use DX12 like 3 years ago. What technology leap is needed? Also, HBM memory.
Compared to Maxwell cards, it doesn't overclock by as large a percentage. And nobody knows anything about AMD's Polaris cards yet.
We know the 1080 is the fastest, but when you turn on DX12 it runs slower than in DX11, just like Maxwell. It's Maxwell 2.0.
If the rumors are true and the RX 480 is faster than the GTX 1070 in any way, I'll be so hurt.
Must be why they used a DX11 demo for async compute even though DX11 doesn't support it. They shot themselves in the foot when they said "as you can see the performance difference from FRAPS", which doesn't support DX12; hell, it hasn't been updated in 3 years. "Pascal"/Maxwell 3.0 didn't really do much architecture-wise. Maxwell had terrible VR latency and consistency to begin with, famously quoted as "potentially catastrophic". One pass for the two lenses may not yield much other than bringing the latency down to more acceptable levels relative to the competition, really.
Architecture-wise, the more interesting things I've personally seen are the ALU-level boost/power-down and 'multi-threaded' ALU patents, and the hardware primitive discard from Polaris. Not sure what's going to happen with the former two, we'll see, but the latter is going to make some scenes extremely lopsided in Polaris's favor because of the nature of it.
The layout for "Pascal" (not P100) is 99% identical to Maxwell. The only difference is that it has 5 SMs vs 4 SMs per GPC, which means more shared resources between the SMs in each GPC. The only other update is the PolyMorph engine, so we can tessellate the living hell out of everything even more, but since Nvidia doesn't publish their stuff like Intel and AMD do, that's all the information we have. Hell, that's one of the reasons asynchronous compute became a bigger thing than it probably would've been: Nvidia falsely advertised that they could support it, and once people started messing with it, they realized that wasn't true. Nothing better than asking the driver to do something the hardware can't do (which is what happened). It wouldn't have been a big issue if they publicized their architecture like Intel and AMD, so cutting-edge devs could just look at it and notice "oh", instead of this year-long mess.
The gain from Maxwell to "Pascal" is negative on the architecture side; the gain from the node shrink, however, is positive, which is where the brute-forcing came into play. Starting ~50% higher on base and boost clocks is bound to bring a performance increase. FinFET made that happen, not so much the architecture.
Noted it before but there's no problem with it, it's just... boring.
It's because you guys are saying DX11 is crap on AMD and that makes the DX12 jump look better. The only reason DX11 looks bad in AotS is that GCN isn't suited for heavy draw-call loads in DX11 and thus hits a CPU bottleneck. If DX12 only removed the CPU bottleneck, the performance delta between the 970 and the 390, for example, wouldn't be that big. But for GCN that's not the whole story: in DX11 it's extremely underutilized because of DX11's serial nature, whereas GCN is parallel in nature. DX12 allows better parallel processing of graphics and compute, which is where GCN gets utilized better.
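A toy model of the draw-call bottleneck argument; every number here is invented purely for illustration, not a benchmark:

```python
# Toy model: frame rate is capped by whichever of GPU or CPU is slower.
def effective_fps(gpu_fps, draws_per_frame, draws_per_sec_per_thread, threads=1):
    # Serial DX11-style submission effectively uses one thread; a DX12-style
    # renderer can record command lists on several threads in parallel.
    cpu_fps = draws_per_sec_per_thread * threads / draws_per_frame
    return min(gpu_fps, cpu_fps)

# Invented figures: a GPU good for 90 fps, 20k draw calls per frame,
# one CPU thread able to submit 600k draw calls per second.
serial = effective_fps(90.0, 20_000, 600_000, threads=1)    # CPU-bound
parallel = effective_fps(90.0, 20_000, 600_000, threads=4)  # bottleneck lifted
print(serial, parallel)  # -> 30.0 90.0
```

Removing the submission bottleneck only raises the cap; the extra delta GCN shows on top of that is down to the parallel hardware finally being fed.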