  1. #21
    Remilia
    Quote Originally Posted by Fascinate View Post
    1. We don't know for a fact that the current AMD cards are going to have a large advantage in DX12 at this point in time. No game even uses it yet, and who knows how the API is going to change when developers finally start to.
    Hardware, period. AMD's hardware is better suited to DX12: better compute throughput and a hardware scheduler. It has lower latency in DX12 and Vulkan, and in turn also in virtual reality (if you care about that).

    If you haven't kept up: it was found recently (well, over six months ago) that Nvidia's hardware can NOT support asynchronous compute despite being advertised as such. It has no proper hardware scheduler, so it relies heavily on software to do the scheduling, which hurts performance and frame latency. Ashes of the Singularity's DX12 implementation shows a performance increase with async compute on AMD hardware and a performance decrease on Nvidia's purely because of these differences, while with async compute off DX12 shows a performance increase (small or big) on every card, so don't say it's a bad implementation. Hitman's DX12 shows the same thing.
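
    At the API level, "asynchronous compute" just means the game creates a compute queue next to its graphics queue and submits work to both; whether the two streams actually overlap on the GPU is up to the scheduler. A minimal D3D12 sketch of that setup, assuming a valid ID3D12Device* already exists (the names here are illustrative, not from any particular engine):

    Code:
    #include <d3d12.h>
    #include <wrl/client.h>

    using Microsoft::WRL::ComPtr;

    // Create one DIRECT (graphics) queue and one COMPUTE queue on the same device.
    // Work recorded for the two queues is submitted independently and synchronized
    // explicitly with ID3D12Fence; a hardware scheduler can run them concurrently,
    // while a software scheduler has to interleave them.
    bool CreateGraphicsAndComputeQueues(ID3D12Device* device,
                                        ComPtr<ID3D12CommandQueue>& graphicsQueue,
                                        ComPtr<ID3D12CommandQueue>& computeQueue)
    {
        D3D12_COMMAND_QUEUE_DESC desc = {};
        desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics (can also run compute/copy)
        if (FAILED(device->CreateCommandQueue(&desc, IID_PPV_ARGS(&graphicsQueue))))
            return false;

        desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute-only queue
        if (FAILED(device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue))))
            return false;

        return true;
    }

    Whether the compute work fills otherwise idle shader cycles or just gets interleaved in software is exactly the hardware difference being argued here.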

    To illustrate (linked images, not reproduced here): VR frame-time plots for the 980 Ti, the 960, the 380, and the Fury.
    One of Nvidia's strongest cards has roughly the same (if not worse) frame times as a mid-to-low-end AMD card, and the Fury just stomps it.

    The same goes for frame-time consistency (linked plots comparing AMD's and Nvidia's).
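
    "Consistency" here means the spread of frame times, not the average. A rough sketch of that kind of metric, using made-up sample values rather than data from the linked plots:

    Code:
    #include <algorithm>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    int main()
    {
        // Hypothetical frame times in milliseconds; the occasional spike is what
        // shows up as stutter in VR even when the average looks fine.
        std::vector<double> frameTimesMs = {11.1, 11.3, 11.0, 11.2, 24.8, 11.1,
                                            11.4, 11.2, 25.3, 11.0, 11.3, 11.1};

        double sum = 0.0;
        for (double t : frameTimesMs) sum += t;
        const double average = sum / frameTimesMs.size();

        // Near-worst-case frame time (nearest-rank 99th percentile).
        std::vector<double> sorted = frameTimesMs;
        std::sort(sorted.begin(), sorted.end());
        const std::size_t idx = static_cast<std::size_t>(0.99 * (sorted.size() - 1));
        const double p99 = sorted[idx];

        std::printf("average frame time: %.1f ms\n", average);
        std::printf("99th percentile:    %.1f ms\n", p99);
        return 0;
    }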

    3. Coil whine. Go look at reviews of all the current AMD cards; almost none of them are 5 stars across the board. The reason is a smattering of negative reviews on almost all of them from people complaining about coil whine... this is unheard of on the Nvidia side.
    *looks at the 970, 980, 980 Ti, and Titan X coil whine threads on Google*. Okay.
    Last edited by Remilia; 2016-03-20 at 03:10 AM.

  2. #22
    Quote Originally Posted by Artorius View Post
    The new cards aren't supposed to be die shrinks; if they are, Nvidia is going to be in a horrible position.
    Uhh.. What? They are most definitely shrinking. Pascal is 16nm FinFET and Polaris is 14nm FinFET; both of the "current" generations (R9/GTX 900) are 28nm. Both are ALSO completely new architectures. That's why we're seeing such large gains this upcoming generation (25%+) in addition to sipping power (AMD showed a Polaris GPU at CES that was roughly equivalent to the GTX 950 but used only a 65 W TDP).

  3. #23
    Remilia
    Quote Originally Posted by Kagthul View Post
    Uhh.. What? They are most definitely shrinking. Pascal is 16nm FinFET and Polaris is 14nm FinFET; both of the "current" generations (R9/GTX 900) are 28nm. Both are ALSO completely new architectures. That's why we're seeing such large gains this upcoming generation (25%+) in addition to sipping power (AMD showed a Polaris GPU at CES that was roughly equivalent to the GTX 950 but used only a 65 W TDP).
    I think the Polaris 11 shown at CES drew about 30-40 W. Can't remember the exact number, but it was a bit absurd. I don't know the actual performance, though, since they were all capped.

  4. #24
    I am actually aware of the asynchronous compute issues these sites have been showing, but how are we sure this will affect the actual DX12 API once it is widely rolled out? I have a feeling my GTX 760 will work just fine in DX12 games simply because they sold a metric truckload of them; they aren't going to leave those people hanging.

  5. #25
    Quote Originally Posted by Remilia View Post
    I think the Polaris 11 shown at CES drew about 30-40 W. Can't remember the exact number, but it was a bit absurd. I don't know the actual performance, though, since they were all capped.
    Yeah, I don't remember what the card was actually pulling (it was less than the TDP the guy announced). As to performance, the presenter was the one who equated it to a GTX 950. Even then, the TDP he quoted is still roughly half that of the 950. That's a lot of power saving.

  6. #26
    Remilia
    Quote Originally Posted by Fascinate View Post
    I am actually aware of the asynchronous compute issues these sites have been showing, but how are we sure this will affect the actual DX12 API once it is widely rolled out? I have a feeling my GTX 760 will work just fine in DX12 games simply because they sold a metric truckload of them; they aren't going to leave those people hanging.
    Well, tell that to the benchmarks where Kepler is dead, especially in Nvidia-sponsored games. The Witcher 3, Batman (though that one is laughable), Tomb Raider, Project Cars, etc. all show these characteristics.

  7. #27
    Artorius
    Quote Originally Posted by Kagthul View Post
    Uhh.. What? They are most definitely shrinking. Pascal is 16nm FinFET and Polaris is 14nm FinFET; both of the "current" generations (R9/GTX 900) are 28nm. Both are ALSO completely new architectures. That's why we're seeing such large gains this upcoming generation (25%+) in addition to sipping power (AMD showed a Polaris GPU at CES that was roughly equivalent to the GTX 950 but used only a 65 W TDP).
    A die shrink is when you take an architecture originally built on manufacturing process A and produce the same thing on a smaller process B. Of course they're shrinking, but they aren't supposed to be die shrinks.

    Broadwell is a die shrink of Haswell, while Skylake is a new architecture, for example.

    - - - Updated - - -

    Quote Originally Posted by Fascinate View Post
    Why is that?
    Because Maxwell is bad at everything that is coming to the PC scene right now; I think Remilia's post answered that.

  8. #28
    Quote Originally Posted by Remilia View Post
    Well, tell that to the benchmarks where Kepler is dead, especially in Nvidia-sponsored games. The Witcher 3, Batman (though that one is laughable), Tomb Raider, Project Cars, etc. all show these characteristics.
    Simply because it's early days for DX12. Nvidia has a ridiculous amount of resources at hand, far more than AMD does. I don't disagree that current tests show AMD is better set up for DX12 than Nvidia's current cards are, but could you imagine a scenario where, say, a GTX 760 was performing at 50% of an R9 270X? That would last for a week before the devs tweaked something so the 760 was on par.

    It's these midrange cards that really matter; in the grand scheme of things, no one owns a 980 Ti or Fury X, they all have 760s/960s, etc. This is what will decide where DX12 is going, meaning what the hardware already in these cards is capable of.

  9. #29
    Artorius
    Quote Originally Posted by Fascinate View Post
    Simply because it's early days for DX12. Nvidia has a ridiculous amount of resources at hand, far more than AMD does. I don't disagree that current tests show AMD is better set up for DX12 than Nvidia's current cards are, but could you imagine a scenario where, say, a GTX 760 was performing at 50% of an R9 270X? That would last for a week before the devs tweaked something so the 760 was on par.

    It's these midrange cards that really matter; in the grand scheme of things, no one owns a 980 Ti or Fury X, they all have 760s/960s, etc. This is what will decide where DX12 is going, meaning what the hardware already in these cards is capable of.
    Nvidia sabotages its own old cards to make customers upgrade sooner; people are forced to use old drivers, otherwise they get a lot of weird problems. I just don't like how they act.

  10. #30
    Remilia
    Quote Originally Posted by Fascinate View Post
    Simply because it's early days for DX12. Nvidia has a ridiculous amount of resources at hand, far more than AMD does. I don't disagree that current tests show AMD is better set up for DX12 than Nvidia's current cards are, but could you imagine a scenario where, say, a GTX 760 was performing at 50% of an R9 270X? That would last for a week before the devs tweaked something so the 760 was on par.

    It's these midrange cards that really matter; in the grand scheme of things, no one owns a 980 Ti or Fury X, they all have 760s/960s, etc. This is what will decide where DX12 is going, meaning what the hardware already in these cards is capable of.
    Those aren't DX12 games, they're DX11...

  11. #31
    Quote Originally Posted by Remilia View Post
    Those aren't DX12 games, they're DX11...
    Huh? I didn't even mention a game, lol. Just saying, hypothetically.

    Different technologies are built into different generations; they are going to have to make sacrifices with DX12 to make sure the majority of PC gamers can actually take advantage of it, because not everyone buys $650 GPUs.

  12. #32
    Remilia
    Quote Originally Posted by Fascinate View Post
    Huh? I didn't even mention a game, lol. Just saying, hypothetically.

    Different technologies are built into different generations; they are going to have to make sacrifices with DX12 to make sure the majority of PC gamers can actually take advantage of it, because not everyone buys $650 GPUs.
    My point was about the 760 not being left behind; I probably should've been clearer about that. But it's obvious that Kepler has already been left behind.

  13. #33
    It's all speculation at this point, tbh. I can probably be called an Nvidia fanboy, if only because I've had no problems with their cards but have had problems with ATI/AMD. This goes both ways; surely there are people who have had the opposite experience.

    I love competition, as it drives prices down, but right now there is no competition. Why have GTX 970 prices not moved an inch since the 390's release? The 390 is clearly faster in most games and appears to be the more future-proof card, with more memory and better DX12 compatibility. It's because Nvidia isn't scared; they know they have the superior product. I'm not saying a 970 is a no-brainer versus a 390, as they compete pretty much neck and neck; it's just that Nvidia is so far ahead of AMD in the pipeline that they have no reason to release a new product that would stomp AMD in the same price category.

    Don't believe me? Look back at the GTX 680 release. Nvidia was so BAFFLED that the best AMD could come up with was the 7970 that they held BACK the GTX 780, which was already developed and ready for market. It's all about resources, people; Nvidia has them and AMD doesn't, and this is why I will always buy an Nvidia graphics card for my personal rig.

  14. #34
    Remilia
    This has nothing to do with resources or the company in question. AMD's current hardware is better both now and later.

    We don't even know if Pascal will rectify Maxwell's deficits.

    And if we're going to go on feels, then I won't touch Nvidia's products with a ten-foot clown pole, purely due to their business practices.

  15. #35
    Quote Originally Posted by Fascinate View Post
    I am actually aware of the asynchronous compute issues these sites have been showing, but how are we sure this will affect the actual DX12 API once it is widely rolled out? I have a feeling my GTX 760 will work just fine in DX12 games simply because they sold a metric truckload of them; they aren't going to leave those people hanging.
    They absolutely can, and have to, leave those people hanging, as you put it (just like they did with DX11 features not being supported on older cards during the DX9 > DX11 changeover), because the hardware required for some DX12 features to work, particularly asynchronous compute, which is THE killer feature of DX12, is not physically present.

    There's quite literally nothing they can do. They can't add a feature that the hardware can't do. Period.

    Quote Originally Posted by Fascinate View Post
    Simply because it's early days for DX12. Nvidia has a ridiculous amount of resources at hand, far more than AMD does. I don't disagree that current tests show AMD is better set up for DX12 than Nvidia's current cards are, but could you imagine a scenario where, say, a GTX 760 was performing at 50% of an R9 270X? That would last for a week before the devs tweaked something so the 760 was on par.
    There's nothing to tweak. It's not a driver or software issue. Do you not comprehend that? The required hardware does not exist on the card. There's nothing that can be done.

    Quote Originally Posted by Fascinate View Post
    It's these midrange cards that really matter; in the grand scheme of things, no one owns a 980 Ti or Fury X, they all have 760s/960s, etc. This is what will decide where DX12 is going, meaning what the hardware already in these cards is capable of.
    ... that is not how an API works. Microsoft makes the API; it's up to the vendors to create hardware that supports it, not the other way around.

    ALL of AMD's R9-series chips support DX12 (async compute included), not just the Fury X. Do you not get that? That's the point here: AMD's current lineup is fully DX12 compliant, Nvidia's is not.

    However, both Polaris and Pascal should be 100% DX12 compliant.
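
    For what it's worth, per-card DX12 feature support is reported through ID3D12Device::CheckFeatureSupport, and as far as the public API goes there is no capability bit for asynchronous compute: every D3D12 device has to accept a compute queue, and whether compute and graphics actually run concurrently is left to the hardware. A minimal sketch, assuming a valid ID3D12Device* (the function name is illustrative):

    Code:
    #include <d3d12.h>
    #include <cstdio>

    // Query a few of the optional DX12 features that do have capability bits,
    // to show how per-device support is reported. Async compute is not among
    // them; its concurrency behaviour is implementation-defined.
    void PrintDx12Options(ID3D12Device* device)
    {
        D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
        if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                                  &options, sizeof(options))))
        {
            std::printf("Resource binding tier:           %d\n",
                        static_cast<int>(options.ResourceBindingTier));
            std::printf("Conservative rasterization tier: %d\n",
                        static_cast<int>(options.ConservativeRasterizationTier));
            std::printf("Rasterizer-ordered views:        %s\n",
                        options.ROVsSupported ? "yes" : "no");
        }
    }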

  16. #36
    Quote Originally Posted by Kagthul View Post
    They absolutely can, and have to, leave those people hanging, as you put it (just like they did with DX11 features not being supported on older cards during the DX9 > DX11 changeover), because the hardware required for some DX12 features to work, particularly asynchronous compute, which is THE killer feature of DX12, is not physically present.
    Also, outside of reputation (which Nvidia already lacks), there is no reason to go the extra mile to develop for the cards they have already sold. They want to sell the new cards, and if the new cards are just as fast as the current ones but with better support for async compute and such, that's their biggest selling point over the old generation.

    Nvidia has been doing this for a long time, so it would be really shocking if they did something to make their older generation run more smoothly in DX12.

    For now, though, the 900 series ain't too bad, because Nvidia still wants to sell it, and the GameWorks titles with DX12 support show it; DX12 just runs fairly poorly in them, almost always being a performance decrease compared to DX11 for both AMD and Nvidia.

  17. #37
    Quote Originally Posted by Fascinate View Post
    I am actually aware of the asynchronous compute issues these sites have been showing, but how are we sure this will affect the actual DX12 API once it is widely rolled out? I have a feeling my GTX 760 will work just fine in DX12 games simply because they sold a metric truckload of them; they aren't going to leave those people hanging.
    So you think that hardware is magically going to appear on those older cards, or do you believe some software sorcery will let them emulate it faster than real hardware can do it? Either way, you are believing in magic and in something that is not going to happen. The hardware is simply not there.

    The API is there to interact with the hardware; the API cannot make the hardware be there. In reality, what is going to happen is that Nvidia's drivers will emulate that hardware, and emulation is slower than hardware. So while they will fully support DX12, a lot of it will happen through emulation, which will be slower. Period.

    Edit to add: you seem to think that just because the 760 sold so many units, it will support DX12. If I am not mistaken, that's a DX11 card. You can see just how popular it is here:
    http://store.steampowered.com/hwsurvey/videocard/

    Which is to say, not even on the list, which would mean it falls under "Other." So even if every single "Other" card were a 760, still only 1.38% of people would use them. Obviously, it's going to be far less than that, so yeah, we're talking less than 1% of people. So they won't "leave people hanging" because "it's such a popular card"? BS. They will leave people hanging, and you will have to play games in DX11 mode or deal with software emulation in DX12 mode and take a performance hit; and it's not really that popular a card. Sorry.

    - - - Updated - - -

    Quote Originally Posted by Fascinate View Post
    3. Coil whine. Go look at reviews of all the current AMD cards; almost none of them are 5 stars across the board. The reason is a smattering of negative reviews on almost all of them from people complaining about coil whine... this is unheard of on the Nvidia side.
    The 970, and really the entire 9xx series, is notorious for coil whine, so I'm not sure what you are on about here. It was so bad that there are multiple articles/posts about it:
    http://wccftech.com/nvidia-gtx-970-coil-whine/
    http://www.techpowerup.com/forums/th...-whine.212975/
    https://www.reddit.com/r/buildapc/co..._and_your_psu/
    http://www.tomshardware.com/answers/...oil-whine.html
    http://www.tomshardware.com/answers/...e-batches.html
    http://forums.evga.com/Plagued-with-...-m2304663.aspx

    So much for "unheard of." It's actually the opposite: it was very common across the whole 9xx series and especially prevalent on the 970.

  18. #38
    Deleted
    Quote Originally Posted by Lathais View Post
    So you think that hardware is magically going to appear on those older cards, or do you believe some software sorcery will let them emulate it faster than real hardware can do it? Either way, you are believing in magic and in something that is not going to happen. The hardware is simply not there.

    The API is there to interact with the hardware; the API cannot make the hardware be there. In reality, what is going to happen is that Nvidia's drivers will emulate that hardware, and emulation is slower than hardware. So while they will fully support DX12, a lot of it will happen through emulation, which will be slower. Period.
    I think he meant that developers won't use the parts of DX12 that would have to be emulated, which, although unlikely, is possible if Pascal is lacking in hardware support and Nvidia throws enough money at devs (which they have been known to do before). Still unlikely, though.


    Quote Originally Posted by Lathais View Post
    Edit to add: you seem to think that just because the 760 sold so many units, it will support DX12. If I am not mistaken, that's a DX11 card. You can see just how popular it is here:
    http://store.steampowered.com/hwsurvey/videocard/

    Which is to say, not even on the list, which would mean it falls under "Other." So even if every single "Other" card were a 760, still only 1.38% of people would use them. Obviously, it's going to be far less than that, so yeah, we're talking less than 1% of people. So they won't "leave people hanging" because "it's such a popular card"? BS. They will leave people hanging, and you will have to play games in DX11 mode or deal with software emulation in DX12 mode and take a performance hit; and it's not really that popular a card. Sorry.
    That card is at 2.17% in the "all" section, meaning it's the 6th most popular card at the moment. Though there is something wonky with that list: in the "DX11" section the top 3 cards are Intel and all the others are AMD, with zero Nvidia representatives?

    Other than that, I agree: there is little reason to expect Nvidia to go out of their way to force support for older hardware, and even less reason to expect devs to ignore one of the biggest selling points of DX12 for its sake.

  19. #39
    Quote Originally Posted by larix View Post
    That card is at 2.17% in the "all" section, meaning it's the 6th most popular card at the moment. Though there is something wonky with that list: in the "DX11" section the top 3 cards are Intel and all the others are AMD, with zero Nvidia representatives?

    Other than that, I agree: there is little reason to expect Nvidia to go out of their way to force support for older hardware, and even less reason to expect devs to ignore one of the biggest selling points of DX12 for its sake.
    That list is odd; yesterday the 760 was not in the "all" section. I had checked, even with Ctrl+F, and the 760 was nowhere. Looking at the DX11 list more closely now, yeah, that is odd. However, it is still true that only 2.17% of people use that card, so it's not all that common.

  20. #40
    Quote Originally Posted by Lathais View Post
    That list is odd; yesterday the 760 was not in the "all" section. I had checked, even with Ctrl+F, and the 760 was nowhere. Looking at the DX11 list more closely now, yeah, that is odd. However, it is still true that only 2.17% of people use that card, so it's not all that common.
    Yeah, not sure what is going on with that list, but it definitely changed since yesterday, as there was no modern card on it, be it AMD or Nvidia, and Nvidia is still missing from the DX11 part. I would take those numbers with some caution.
