  1. #821
    Deleted
    Quote Originally Posted by Butthurt Beluga
    Although I have a Fury X, the RX 480 is really tempting.
    There have been leaks (although I hardly believe them) of aftermarket RX 480s with 8+6-pin power connectors reaching 1500 MHz on air, and if they scale as well as previous iterations of GCN, that would be a ridiculous overclock from the supposed 1080 MHz stock clock.

    I can't wait to see the 1060 vs. RX 480 go head to head.
    Hmmm, going from a Fury X to a 480 would be more of a side-grade than a real upgrade, I think.

    The 1500 MHz would put it in the range of a stock 1070, according to those rumors. We will see, only 10 more days.

  2. #822
    Quote Originally Posted by Zeara
    Hmmm, going from a Fury X to a 480 would be more of a side-grade than a real upgrade, I think.

    The 1500 MHz would put it in the range of a stock 1070, according to those rumors. We will see, only 10 more days.
    I heavily doubt the rumors of the 480 overclocking to stock 1070 performance levels; judging by the early Firestrike benches, it would have to gain 45% in performance from its OC to match the 1070. On top of that, there's no telling how this card is going to react in normal gaming situations yet. It's a wait-and-see type of thing.
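
    For what it's worth, here's a quick back-of-envelope sketch of why that 45% looks out of reach (Python; all the numbers are this thread's rumors, not confirmed specs, and it assumes perfectly linear clock-to-performance scaling, which real cards never achieve):

    [CODE]
    # Rumored clocks from this thread, not confirmed specs.
    stock_mhz = 1080    # supposed RX 480 stock clock
    oc_mhz = 1500       # rumored aftermarket OC on air
    needed_gain = 0.45  # gain needed to match a stock 1070, per early Firestrike benches

    clock_gain = oc_mhz / stock_mhz - 1
    print(f"Clock gain over stock: {clock_gain:.1%}")   # ~38.9%
    print(f"Gain needed:           {needed_gain:.0%}")  # 45%
    print(f"Enough even if scaling were linear? {clock_gain >= needed_gain}")  # False
    [/CODE]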

  3. #823
    The Lightbringer Artorius (10+ Year Old Account) | Join Date: Dec 2012 | Location: Natal, Brazil | Posts: 3,781
    Quote Originally Posted by Zenny
    I'm not really sure how you can equate the two; the GeForce FX was late, underperformed, ran hot, and overclocked really poorly. None of that describes the GeForce 1080.
    The thing is, there's no technological advancement with Maxwell 3.0. Some people who care a little more wouldn't feel good spending that much money on a graphics card that can't support current technologies well and is already outdated by default. What's the point of a 2016 graphics card designed for a 2008 API?

    Then you can add all the dirty anti-consumer tactics: forcefully locking you into their brand by refusing to support whatever open standard exists while offering an alternative that only works with their hardware, plus the fact that newer GameWorks games will magically run worse on your hardware once the GTX 1100 series is released. When you add it all up, you have a bunch of reasons not to buy Nvidia, and that's probably why most gamers with deeper knowledge about hardware tend to prefer AMD.

  4. #824
    Quote Originally Posted by Artorius
    The thing is, there's no technological advancement with Maxwell 3.0. Some people who care a little more wouldn't feel good spending that much money on a graphics card that can't support current technologies well and is already outdated by default. What's the point of a 2016 graphics card designed for a 2008 API?

    Then you can add all the dirty anti-consumer tactics: forcefully locking you into their brand by refusing to support whatever open standard exists while offering an alternative that only works with their hardware, plus the fact that newer GameWorks games will magically run worse on your hardware once the GTX 1100 series is released. When you add it all up, you have a bunch of reasons not to buy Nvidia, and that's probably why most gamers with deeper knowledge about hardware tend to prefer AMD.
    tldr - Basically he hates Nvidia

    You're like the AMD version of Life Binder. /chuckle
    God I love these forums.

  5. #825
    The Lightbringer Artorius (10+ Year Old Account) | Join Date: Dec 2012 | Location: Natal, Brazil | Posts: 3,781
    Quote Originally Posted by Bigvizz
    tldr - Basically he hates Nvidia

    You're like the AMD version of Life Binder. /chuckle
    God I love these forums.
    No I'm not. I can recommend Nvidia cards when they offer better perf/price, like the GTX 950 or the OC'd 980 Tis. I can also see things from a logical PoV and offer arguments.

    What I can't do is choose Nvidia over anyone else for myself; I don't like how they do business, and I won't support a company that would make games "Nvidia-exclusive" if they could. PC was always about choice.

  6. #826
    Quote Originally Posted by Artorius
    No I'm not. I can recommend Nvidia cards when they offer better perf/price, like the GTX 950 or the OC'd 980 Tis. I can also see things from a logical PoV and offer arguments.

    What I can't do is choose Nvidia over anyone else for myself; I don't like how they do business, and I won't support a company that would make games "Nvidia-exclusive" if they could. PC was always about choice.
    Ohhhh come on buddy, Nvidia's business practices might be a little shady, but it works: people don't seem to care, and consumers are still buying their cards. I bet if Nvidia took a shit (in the literal sense, of course) on a PCB and called it the 1080 Ti, people would still buy it.
    You know the damn thing would still outsell AMD's competing card, lol... there's the rub.
    Last edited by Bigvizz; 2016-06-19 at 01:57 PM.

  7. #827
    Deleted
    Quote Originally Posted by Bigvizz
    Ohhhh come on buddy, Nvidia's business practices might be a little shady, but it works: people don't seem to care, and consumers are still buying their cards. I bet if Nvidia took a shit (in the literal sense, of course) on a PCB and called it the 1080 Ti, people would still buy it.
    You know the damn thing would still outsell AMD's competing card, lol... there's the rub.
    What's your point? Anyone who doesn't buy a literal shit on a PCB is a fanboy of the competition?

  8. #828
    Quote Originally Posted by Him of Many Faces
    What's your point? Anyone who doesn't buy a literal shit on a PCB is a fanboy of the competition?
    That is my point: consumers will buy whatever Nvidia puts out in front of them, regardless of what it is. Simple truths are simple.

  9. #829
    Deleted
    Quote Originally Posted by Bigvizz
    That is my point: consumers will buy whatever Nvidia puts out in front of them, regardless of what it is. Simple truths are simple.
    Yes, but why does that make the people who don't "Nvidia haters"?

  10. #830
    Quote Originally Posted by Him of Many Faces
    Yes, but why does that make the people who don't "Nvidia haters"?
    Never said that; I was calling Artorius an Nvidia hater, not everyone else. Unlike most people, Artorius seems to have plenty of tech knowledge to make informed decisions, whereas most people make their decisions based on what is available first or fall for the marketing hype, which Nvidia is a lot better at than AMD.

  11. #831
    Quote Originally Posted by Artorius
    The thing is, there's no technological advancement with Maxwell 3.0. Some people who care a little more wouldn't feel good spending that much money on a graphics card that can't support current technologies well and is already outdated by default. What's the point of a 2016 graphics card designed for a 2008 API?

    Then you can add all the dirty anti-consumer tactics: forcefully locking you into their brand by refusing to support whatever open standard exists while offering an alternative that only works with their hardware, plus the fact that newer GameWorks games will magically run worse on your hardware once the GTX 1100 series is released. When you add it all up, you have a bunch of reasons not to buy Nvidia, and that's probably why most gamers with deeper knowledge about hardware tend to prefer AMD.
    DX12 is better supported on Pascal: while it doesn't show gains from async compute like AMD does, it is no longer losing performance, or at least not as much as it was under Maxwell. And seeing as developers have to support async compute when they create a game, it's hardly surprising it isn't running well on Nvidia, as at the time of the game's creation those cards and that architecture were not available to optimise for.

    There was one website with a theory that AMD's larger gains are partially due to their lackluster DX11 performance: the card underperforms there, the GPU is put to better use in DX12 through async compute, and the gains would be more in line with Nvidia's if their DX11 performance were better optimized. But that is untestable, and as such it will remain a theory.

    Not to mention that AMD did launch Mantle, which was also proprietary to their hardware, IIRC.

  12. #832
    Quote Originally Posted by Denpepe
    There was one website with a theory that AMD's larger gains are partially due to their lackluster DX11 performance: the card underperforms there, the GPU is put to better use in DX12 through async compute, and the gains would be more in line with Nvidia's if their DX11 performance were better optimized. But that is untestable, and as such it will remain a theory.

    Not to mention that AMD did launch Mantle, which was also proprietary to their hardware, IIRC.
    This^ Smoke and mirrors, buddy, smoke and mirrors. Seriously though, this is what you're referring to:
    Basically, terrible DX11 performance led to better-looking DX12 gains. Link

    [IMG]http://media.gamersnexus.net/images/media/2016/gpu/gtx-1080/gtx-1080-ashes-bench-1080p.png[/IMG]

    [IMG]http://media.gamersnexus.net/images/media/2016/gpu/gtx-1080/gtx-1080-ashes-bench-4k-h.png[/IMG]
    Last edited by Bigvizz; 2016-06-19 at 02:36 PM.

  13. #833
    The Lightbringer Artorius (10+ Year Old Account) | Join Date: Dec 2012 | Location: Natal, Brazil | Posts: 3,781
    Quote Originally Posted by Denpepe
    DX12 is better supported on Pascal: while it doesn't show gains from async compute like AMD does, it is no longer losing performance, or at least not as much as it was under Maxwell. And seeing as developers have to support async compute when they create a game, it's hardly surprising it isn't running well on Nvidia, as at the time of the game's creation those cards and that architecture were not available to optimise for.

    There was one website with a theory that AMD's larger gains are partially due to their lackluster DX11 performance: the card underperforms there, the GPU is put to better use in DX12 through async compute, and the gains would be more in line with Nvidia's if their DX11 performance were better optimized. But that is untestable, and as such it will remain a theory.

    Not to mention that AMD did launch Mantle, which was also proprietary to their hardware, IIRC.
    GCN was originally designed with consoles in mind, and on consoles you need to extract every bit of performance out of not-so-powerful hardware. Since you have much better control of the hardware there, that's possible with extremely well optimized games. Then AMD tried to bring this low-overhead control of the GPU to the PC with Mantle (which Intel had access to, BTW; Nvidia just chose not to, because it wouldn't look good for them to support something from AMD), which showed ridiculously large performance increases across the board and forced MS to do the same thing with DX12.

    Mantle then matured into Vulkan, and since it's maintained by the Khronos Group now, Nvidia supports it. It offers the same benefits as DX12 while being multi-platform, which might make Linux gaming viable. AMD's plan went awfully well, and Nvidia couldn't do anything about it, because Maxwell was already under development before async compute was a thing.

    @Bigvizz, posting DX11 benches of AotS with AMD cards doesn't make any sense; neither AMD nor Oxide ever tried to optimize the game for that scenario, simply because there's no point. For a valid comparison of what you're trying to talk about, you would need a game that was originally DX11 for a while and then got proper DX12 support. Which means you won't ever be able to compare this properly.

    The 390X and the 980 Ti trade blows in Ashes DX12. If you honestly think a 438 mm² GPU should be able to trade blows with a 601 mm² one because "terrible DX11 performance led to better-looking DX12 gains", then I'm literally speechless.

    We're talking about 37% more space to put transistors in.
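
    Just to spell that number out (a quick sketch in Python using the die areas quoted above, nothing more):

    [CODE]
    # Die areas as quoted in this post (mm^2).
    hawaii_mm2 = 438   # R9 390X (Hawaii)
    gm200_mm2 = 601    # GTX 980 Ti (GM200)

    extra_area = gm200_mm2 / hawaii_mm2 - 1
    print(f"GM200 has {extra_area:.0%} more die area than Hawaii")  # ~37%
    [/CODE]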
    Last edited by Artorius; 2016-06-19 at 02:44 PM.

  14. #834
    Old God Vash The Stampede (10+ Year Old Account) | Join Date: Sep 2010 | Location: Better part of NJ | Posts: 10,939
    Quote Originally Posted by Zenny
    I'm not really sure how you can equate the two; the GeForce FX was late, underperformed, ran hot, and overclocked really poorly. None of that describes the GeForce 1080.
    The FX cards have a lot in common with the 1080, minus the heat output. Both weren't ready for the DirectX of their day. Both overclock poorly, relatively speaking, because the 1080 doesn't overclock as well as the Maxwell cards did. You are losing performance in DX12. The FX cards lost a lot more performance in DX9, but it's still a similar situation.

    Quote Originally Posted by Zeara
    Hmmm, going from a Fury X to a 480 would be more of a side-grade than a real upgrade, I think.

    The 1500 MHz would put it in the range of a stock 1070, according to those rumors. We will see, only 10 more days.
    I would keep the Fury X; I believe it to be faster. Though I hope AMD does some big price drops across their lineup.

  15. #835
    Fluffy Kitten Remilia (10+ Year Old Account, Avatar: Momoco) | Join Date: Apr 2011 | Posts: 15,160
    Quote Originally Posted by Bigvizz
    This^ Smoke and mirrors, buddy, smoke and mirrors. Seriously though, this is what you're referring to:
    Basically, terrible DX11 performance led to better-looking DX12 gains. Link

    [IMG]http://media.gamersnexus.net/images/media/2016/gpu/gtx-1080/gtx-1080-ashes-bench-1080p.png[/IMG]

    [IMG]http://media.gamersnexus.net/images/media/2016/gpu/gtx-1080/gtx-1080-ashes-bench-4k-h.png[/IMG]
    If you want to see which card performs better in an API, you look at it relative to the competition, not relative to itself. You linking a 3840x2160 bench showing the 390X hugging the 980 Ti kind of shows that.

    Take two cards that perform equally in API 1 (the baseline) in the majority of situations; if either performs better than the other in API 2 (a newer one), that in itself is the better gain. Say the 390 and the 970: in DX11 they perform relatively the same, but in DX12 the 390 takes the edge.

    Random numbers, but say the 390 gets 30% extra performance vs. the 970 getting 10%. Sure, you can say "oh, 'cause DX11 on AMD sucks", but that's dishonest at best, because you don't compare a card relative to itself when you want to show which card gains more.
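
    To make that concrete, here's a tiny sketch with the same made-up numbers (Python; every FPS value is invented purely to illustrate the comparison):

    [CODE]
    # Hypothetical FPS values: 390 and 970 at parity in DX11 (the baseline),
    # with the 390 gaining ~30% in DX12 vs. ~10% for the 970.
    fps = {
        "390": {"dx11": 50.0, "dx12": 65.0},
        "970": {"dx11": 50.0, "dx12": 55.0},
    }

    for api in ("dx11", "dx12"):
        ratio = fps["390"][api] / fps["970"][api]
        print(f"{api}: 390 delivers {ratio:.2f}x the 970")
    # dx11: 1.00x, dx12: 1.18x -- the cross-card ratio is the meaningful
    # comparison; each card's uplift over itself can mislead.
    [/CODE]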

  16. #836
    Quote Originally Posted by Remilia
    If you want to see which card performs better in an API, you look at it relative to the competition, not relative to itself. You linking a 3840x2160 bench showing the 390X hugging the 980 Ti kind of shows that.

    Take two cards that perform equally in API 1 (the baseline) in the majority of situations; if either performs better than the other in API 2 (a newer one), that in itself is the better gain. Say the 390 and the 970: in DX11 they perform relatively the same, but in DX12 the 390 takes the edge.

    Random numbers, but say the 390 gets 30% extra performance vs. the 970 getting 10%. Sure, you can say "oh, 'cause DX11 on AMD sucks", but that's dishonest at best, because you don't compare a card relative to itself when you want to show which card gains more.
    OK, let me find those benchmarks that don't exist yet. Sadly, AMD has yet to release a card comparable to the 1070 or 1080. So we use what we have, but DX11 performance is pretty poor for the 390X and Fury X.

  17. #837
    Warchief Zenny (10+ Year Old Account) | Join Date: Oct 2011 | Location: South Africa | Posts: 2,171
    Quote Originally Posted by Artorius
    The thing is, there's no technological advancement with Maxwell 3.0. Some people who care a little more wouldn't feel good spending that much money on a graphics card that can't support current technologies well and is already outdated by default. What's the point of a 2016 graphics card designed for a 2008 API?
    What the hell are you on about? The GeForce 1080 has made great strides forward in VR support and is fully DX12 compliant. It's also made strides forward with async compute, if you had bothered to read up about it. What leap in technology has the new Radeon architecture made in comparison?

    Quote Originally Posted by Dukenukemx
    The FX cards have a lot in common with the 1080, minus the heat output. Both weren't ready for the DirectX of their day. Both overclock poorly, relatively speaking, because the 1080 doesn't overclock as well as the Maxwell cards did. You are losing performance in DX12. The FX cards lost a lot more performance in DX9, but it's still a similar situation.
    The 1080 is a poor overclocker? Since when? The cards all get roughly a 20% overclock, and everyone in this thread has been praising the new Radeon for doing the exact same thing.

    Its DX12 performance has been great as well: the card just came out, drivers and games haven't matured for it, and it's already getting roughly 30%-35% better performance than a Fury X. In Ashes, the GeForce 1080 is roughly 40%-45% faster at stock than the GeForce 980 Ti with async enabled.

  18. #838
    Old God Vash The Stampede (10+ Year Old Account) | Join Date: Sep 2010 | Location: Better part of NJ | Posts: 10,939
    Quote Originally Posted by Zenny
    What the hell are you on about? The GeForce 1080 has made great strides forward in VR support and is fully DX12 compliant. It's also made strides forward with async compute, if you had bothered to read up about it. What leap in technology has the new Radeon architecture made in comparison?
    Async compute is still broken: any game that uses it will decrease frame rates on the 1080. Also, Radeon GCN cards were already able to use DX12 like 3 years ago; what technology leap is needed? Also, HBM memory.

    The 1080 is a poor overclocker? Since when? The cards all get roughly a 20% overclock, and everyone in this thread has been praising the new Radeon for doing the exact same thing.
    Compared to Maxwell cards, it doesn't overclock as much, percentage-wise. And nobody knows anything about AMD's Polaris card yet.
    Its DX12 performance has been great as well: the card just came out, drivers and games haven't matured for it, and it's already getting roughly 30%-35% better performance than a Fury X. In Ashes, the GeForce 1080 is roughly 40%-45% faster at stock than the GeForce 980 Ti with async enabled.
    We know the 1080 is the fastest, but when you turn on DX12 it runs slower compared to DX11, just like Maxwell. It's Maxwell 2.0.

  19. #839
    Brewmaster (7+ Year Old Account) | Join Date: Mar 2015 | Location: Birmingham, Alabama | Posts: 1,297
    If the rumors are true and the RX 480 is faster than the GTX 1070 in any way, I'll be so hurt.

  20. #840
    Fluffy Kitten Remilia (10+ Year Old Account, Avatar: Momoco) | Join Date: Apr 2011 | Posts: 15,160
    Quote Originally Posted by Zenny
    What the hell are you on about? The GeForce 1080 has made great strides forward in VR support and is fully DX12 compliant. It's also made strides forward with async compute, if you had bothered to read up about it. What leap in technology has the new Radeon architecture made in comparison?
    Must be why they used a DX11 demo for async compute despite DX11 not supporting it. They shot themselves in the foot when they said "as you can see the performance difference from FRAPS", which does not support DX12; hell, it hasn't been updated in 3 years. "Pascal"/Maxwell 3.0 didn't really do much architecture-wise. Maxwell had terrible VR latency and consistency to begin with, especially being quoted as "potentially catastrophic". One pass for the two lenses may not yield much other than bringing the latency down to more acceptable levels relative to the competition, really.

    Architecture-wise, the more interesting things I've personally seen are the ALU-level boost/power-down and 'multi-threaded' ALU patents, and the hardware primitive discard from Polaris. Not sure what's going to happen with the former two, we'll see on that, but the latter is going to make some scenes extremely lopsided toward Polaris because of the nature of it.
    Its DX12 performance has been great as well: the card just came out, drivers and games haven't matured for it, and it's already getting roughly 30%-35% better performance than a Fury X. In Ashes, the GeForce 1080 is roughly 40%-45% faster at stock than the GeForce 980 Ti with async enabled.
    The layout of "Pascal" (not P100) is 99% identical to Maxwell. The only difference is that it has 5 SMs vs. 4 SMs per GPC, which means more shared resources between the SMs in each GPC. The only other update is the PolyMorph engine, so we can tessellate the living hell out of everything even more; but since Nvidia doesn't publish their stuff like Intel and AMD do, that's all the information we have. Hell, that's one of the reasons asynchronous compute became a bigger thing than it probably would've been: Nvidia falsely advertised that they could support it, and once people started messing with it, they realized that wasn't true. Nothing better than asking the driver to do something the hardware couldn't do (which is what happened). It wouldn't be a big issue if they publicized their architecture like Intel and AMD, so cutting-edge devs could just look at it and go "oh", instead of this year-long mess.

    The gain from Maxwell to "Pascal" is negative on the architecture side; the gain from the node shrink, however, is positive, which is where the brute forcing came into play. Starting 50% higher on base and boost clock rates is bound to bring a performance increase. FinFET let that happen, not so much the architecture.
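
    A rough sanity check of that argument (Python; the reference specs are public numbers I'm supplying, not from this thread, and cores x clock is a naive throughput proxy that deliberately ignores IPC, memory bandwidth, etc.):

    [CODE]
    # Publicly listed reference boost clocks and shader counts.
    cards = {
        "GTX 980":  {"cores": 2048, "boost_mhz": 1216},
        "GTX 1080": {"cores": 2560, "boost_mhz": 1733},
    }

    def naive_throughput(card):
        # cores x clock: crude proxy, no architecture effects included.
        return card["cores"] * card["boost_mhz"]

    total = naive_throughput(cards["GTX 1080"]) / naive_throughput(cards["GTX 980"])
    clocks_only = cards["GTX 1080"]["boost_mhz"] / cards["GTX 980"]["boost_mhz"]
    print(f"cores x clock ratio: {total:.2f}x")       # ~1.78x
    print(f"clock ratio alone:   {clocks_only:.2f}x") # ~1.43x
    [/CODE]

    Most of the uplift falls out of the clocks and the extra units the shrink paid for, which is the point being made.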

    Noted it before, but there's no problem with it; it's just... boring.
    Quote Originally Posted by Bigvizz
    OK, let me find those benchmarks that don't exist yet. Sadly, AMD has yet to release a card comparable to the 1070 or 1080. So we use what we have, but DX11 performance is pretty poor for the 390X and Fury X.
    It's 'cause you guys are saying DX11 is crap on AMD and that this makes the DX12 jump look better. The only reason DX11 looks bad in AotS is that GCN isn't suited to heavy draw-call loads in DX11 and thus hits a CPU bottleneck. If DX12 only removed a CPU bottleneck, the performance delta between the 970 and the 390, for example, wouldn't be that big; but for GCN that's not the whole story, because in DX11 it's extremely underutilized by the serial nature of the API, whereas GCN is parallel in nature. DX12 allows better parallel processing of graphics and compute, which is where GCN gets utilized better.
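
    Here's a toy model of that CPU-bottleneck effect (Python; every number is invented, it just shows how removing a serial submission bottleneck can masquerade as a huge "DX12 gain" on the same GPU):

    [CODE]
    # Frame time is limited by the slower of CPU submission and GPU work.
    draw_calls = 20_000
    dx11_us_per_call = 5.0   # hypothetical single-threaded driver cost per call
    dx12_us_per_call = 1.0   # hypothetical lower per-call overhead
    threads = 4              # DX12-style parallel command-list recording
    gpu_frame_ms = 16.0      # hypothetical GPU-side cost per frame

    dx11_cpu_ms = draw_calls * dx11_us_per_call / 1000            # 100 ms, one thread
    dx12_cpu_ms = draw_calls * dx12_us_per_call / 1000 / threads  # 5 ms, spread out

    for api, cpu_ms in (("DX11", dx11_cpu_ms), ("DX12", dx12_cpu_ms)):
        frame_ms = max(cpu_ms, gpu_frame_ms)
        print(f"{api}: {1000 / frame_ms:.0f} fps (CPU {cpu_ms:.1f} ms, GPU {gpu_frame_ms:.0f} ms)")
    # DX11: 10 fps, CPU-bound; DX12: 62 fps, GPU-bound -- same GPU throughout.
    [/CODE]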
    Last edited by Remilia; 2016-06-19 at 11:33 PM.
