Thread: Gtx 1080

  1. #1901
    Old God Vash The Stampede's Avatar
    10+ Year Old Account
    Join Date
    Sep 2010
    Location
    Better part of NJ
    Posts
    10,939
    Quote Originally Posted by Zenny View Post
1. Time Spy gains roughly the same from enabling Async Compute as all other DX12/Vulkan implementations on the market, which is to say around the 10% mark, plus or minus a couple of percent.
This is hard to tell, since nothing but Time Spy has an Async Compute button to enable or disable it. Considering how little is gained without working Async Compute, we can say that DX12/Vulkan is all about Async Compute. Maxwell cards without Async will still neither gain nor lose fps. So essentially Async Compute is synonymous with DX12/Vulkan.
    2. Maxwell gains nothing as it is explicitly disabled in drivers.
    And never will get it in future driver updates. Better chance of Trump winning than seeing AC on Maxwell.
    3. AMD had input along the entire development process and can veto things it feels does not work.
    Just like AMD has with GameWorks titles?
    6. Mahigan has zero, let me repeat that ZERO evidence that Time Spy does anything incorrectly. He blatantly says things that are incorrect and has nothing to back him up on that front.
So what? Who cares? Doom Vulkan shows that DX12 is a waste of time anyway. DX12 either favors AMD a lot or favors them slightly. DX12 for Nvidia shows nearly nothing, or barely anything. Doom Vulkan, though, is a good boost for both AMD and Nvidia.

3DMark should have lost credibility a long time ago. Nvidia cheated on 3DMark scores in the past. Intel cheated on 3DMark as well. The whole point of 3DMark was to represent real-world gaming, but it fails at that. Doom Vulkan shows that nobody should be using 3DMark for benchmarking, ever.

You guys debating this just gives 3DMark credibility. If Time Spy were so important to debate over, why isn't there a Vulkan test option? Some emulators already have Vulkan and DX12, so why doesn't a stupid synthetic test have it? Even Quake 1 gets Vulkan.
    Last edited by Vash The Stampede; 2016-07-25 at 08:21 PM.

  2. #1902
    Warchief Zenny's Avatar
    10+ Year Old Account
    Join Date
    Oct 2011
    Location
    South Africa
    Posts
    2,171
    Quote Originally Posted by Remilia View Post
... The devs admitted that their implementation is extremely rudimentary for the sake of compatibility. It's been noted that there are different ways to approach AC, to which you don't seem to give an alternative. In Ashes, for example, Pascal's performance remains unchanged when AC is on. And honestly these synthetics are pointless...
    That's not what they admitted at all:

    Benchmark design and principles
    In all Futuremark benchmarks we aim for neutrality by ensuring that all hardware is treated equally. Every device runs the same workload using the same code path. This is the only way to produce results that are fair and comparable.
    In the past, we have discussed the option of vendor-specific code paths with our development partners, but they are invariably against it. In many cases, an aggressive optimization path would also require altering the work being done, which means the test would no longer provide a common reference point. And with separate paths for each architecture, not only would the outputs not be comparable, but the paths would be obsolete with every new architecture launch.
    3DMark benchmarks use a path that is heavily optimized for all hardware. This path is developed by working with all vendors to ensure that our engine runs as efficiently as possible on all available hardware. Without vendor support and participation this would not be possible, but we are lucky in having active and dedicated development partners.
    Ultimately, 3DMark aims to predict the performance of games in general. To accomplish this, it needs to be able to predict games that are heavily optimized for one vendor, both vendors, and games that are fairly agnostic. 3DMark is not intended to be a measure of the absolute theoretical maximum performance of hardware.
They don't have a rudimentary implementation; they have a vendor-neutral one, which is something different. They have worked with AMD, Nvidia and Intel extensively on it, and it shows: a Fury X gets a better performance boost from enabling Async than it does in either Doom or Ashes.

    Directly from Futuremark themselves:

    AMD, NVIDIA and Intel propose generic optimizations to 3DMark code during development. They offer ideas how to improve the performance and as long as their improvement benefits someone and doesn't degrade anything for anyone, so it fits to a generic code path (since that is all we have), it is highly likely we'd accept it.
I'm still puzzled as to what the problem with the benchmark's Async Compute is; a Fury X gets over a 12% performance increase with it enabled. That is a better result than both Doom and Ashes. Even better than Rise of the Tomb Raider.
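For what it's worth, a gain figure like that is just the ratio of FPS with and without Async Compute. A quick sketch of the arithmetic (the FPS numbers below are hypothetical, for illustration only, not actual Fury X results):

```python
def async_gain(fps_off: float, fps_on: float) -> float:
    """Percent performance gain from enabling Async Compute."""
    return (fps_on / fps_off - 1.0) * 100.0

# Hypothetical numbers for illustration only, not measured Fury X results.
print(f"{async_gain(60.0, 67.5):.1f}%")  # prints 12.5%
```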

    - - - Updated - - -

    Quote Originally Posted by Dukenukemx View Post
This is hard to tell, since nothing but Time Spy has an Async Compute button to enable or disable it. Considering how little is gained without working Async Compute, we can say that DX12/Vulkan is all about Async Compute. Maxwell cards without Async will still neither gain nor lose fps. So essentially Async Compute is synonymous with DX12/Vulkan.
Er, yes you can. You can enable/disable Async Compute in both Doom and Ashes. I even linked charts and a YouTube video demonstrating this.
    And never will get it in future driver updates. Better chance of Trump winning than seeing AC on Maxwell.
    I don't disagree.
    Just like AMD has with GameWorks titles?
Sigh, no, nothing like that at all:

    http://www.futuremark.com/business/b...opment-program

    Quote Originally Posted by Futuremark
    3DMark Time Spy has been in development for nearly two years, and BDP members have been involved from the start. BDP members receive regular builds throughout development and conduct their own review and testing at each stage. They have access to the source code and can suggest improvements and changes to ensure that the implementation is correct. All development takes place in a single source tree, which means anything suggested by a vendor can be immediately reviewed and commented on by the other vendors. Ultimately, each member approves the final benchmark for release to the press and public.
So what? Who cares? Doom Vulkan shows that DX12 is a waste of time anyway. DX12 either favors AMD a lot or favors them slightly. DX12 for Nvidia shows nearly nothing, or barely anything. Doom Vulkan, though, is a good boost for both AMD and Nvidia.
Why should we take Doom as gospel here? It's obviously not a vendor-neutral benchmark; just look at the OpenGL/Vulkan version fiasco. What makes you assume that Doom is running great on Nvidia hardware, when the developers themselves have outright admitted it's still a work in progress on Nvidia's hardware?

3DMark should have lost credibility a long time ago. Nvidia cheated on 3DMark scores in the past. Intel cheated on 3DMark as well. The whole point of 3DMark was to represent real-world gaming, but it fails at that. Doom Vulkan shows that nobody should be using 3DMark for benchmarking, ever.
Wow, an issue from 13 years ago? Really? An issue that was detected by Futuremark themselves, led to the Benchmark Development Program, and made them refuse any vendor-specific optimizations? So something that has zero relevance today?

You guys debating this just gives 3DMark credibility. If Time Spy were so important to debate over, why isn't there a Vulkan test option? Some emulators already have Vulkan and DX12, so why doesn't a stupid synthetic test have it? Even Quake 1 gets Vulkan.
It took two years to put Time Spy together; since they actually work fairly with all benchmark partners, developing something like this takes time. A Vulkan benchmark is on its way though:

    http://steamcommunity.com/app/223850...29894210580505
    Last edited by Zenny; 2016-07-25 at 08:47 PM.

  3. #1903
    Quote Originally Posted by Zenny View Post
    No we won't:

    http://steamcommunity.com/app/223850...43951719980204



    I've bolded the important bit, but please feel free to keep telling me I'm wrong.



He is wrong. The developer contradicted him on that point. He is free to offer up some proof to support his claim, though. He even makes the claim that Futuremark chose Nvidia's method over AMD's, which is once again completely wrong.

    - - - Updated - - -



I bought the Geforce 1080 because I needed a high-end GPU a month ago, not next year. If AMD had bothered to have something with better performance than the Geforce 1080 I would have bought that, just like I have in the past.
Well, next year your so-called "high end" won't be high end anymore. Also, I still have no idea why you brought AMD up...
    Last edited by ABEEnr2; 2016-07-25 at 08:56 PM.

  4. #1904
    Scarab Lord Master Guns's Avatar
    15+ Year Old Account
    Join Date
    Oct 2008
    Location
    California
    Posts
    4,586
Sorry, but after reading the last few pages of this thread, y'all need to pick your game up. This forum is full of hypocrites. Every time anyone says something, you demand they provide proof and evidence to back it up or they're a "worthless idiot" who doesn't know anything. Yet Zenny has provided links, videos, graphs, everything you can think of to prove his claims, and you all just say "lol nope you're wrong" and that's it.

    Seriously, as a debate, you all are awful, except Zenny.

    Check out the directors cut of my project SCHISM, a festival winning short film
    https://www.youtube.com/watch?v=DiHNTS-vyHE

  5. #1905
    Old God Vash The Stampede's Avatar
    10+ Year Old Account
    Join Date
    Sep 2010
    Location
    Better part of NJ
    Posts
    10,939
    Quote Originally Posted by Zenny View Post
    Er, yes you can. You can enable/disable Async compute in both Doom and Ashes. I even linked charts and a youtube video demonstrating this.
Now compare DX12/Vulkan without AC against DX11. DX12/Vulkan doesn't add much without AC, if anything.

Why should we take Doom as gospel here? It's obviously not a vendor-neutral benchmark; just look at the OpenGL/Vulkan version fiasco. What makes you assume that Doom is running great on Nvidia hardware, when the developers themselves have outright admitted it's still a work in progress on Nvidia's hardware?
Because the gains for Nvidia were relatively higher than in other titles. The work in progress for Nvidia, I assume, is for Maxwell, which is never going to happen. People overlook this because the gains for AMD were much higher. Time Spy is like 6% gains for Nvidia, while for Doom it's like 10%. 0% for Maxwell cards.
Wow, an issue from 13 years ago? Really? An issue that was detected by Futuremark themselves, led to the Benchmark Development Program, and made them refuse any vendor-specific optimizations? So something that has zero relevance today?
The relevance for today is that synthetic tests make it easy to skew results. Every graphics card runs the same scene, which makes it easier for driver developers to "optimize" for that one scene. That's why some websites build their own tests with scripts rather than using the same generic test, just to avoid special driver optimizations. Even running a game's built-in benchmark is asking for special optimizations, hence why you make prerecorded scripts.
    Last edited by Vash The Stampede; 2016-07-25 at 09:15 PM.

  6. #1906
    Fluffy Kitten Remilia's Avatar
    10+ Year Old Account
    Join Date
    Apr 2011
    Location
    Avatar: Momoco
    Posts
    15,160
    Quote Originally Posted by Zenny View Post
    That's not what they admitted at all:

They don't have a rudimentary implementation; they have a vendor-neutral one, which is something different. They have worked with AMD, Nvidia and Intel extensively on it, and it shows: a Fury X gets a better performance boost from enabling Async than it does in either Doom or Ashes.

    Directly from Futuremark themselves:
    From a Futuremark representative.
    http://forums.anandtech.com/showpost...6&postcount=39
    It is actually somewhat similar to how 3DMark 11 vs. 3DMark Fire Strike tested DX11. You could say that Time Spy is the "3DMark 11" or "Sky Diver" for DX12. Doesn't try to use every possible feature, aims to measure the common use case.
    It is very basic for an API that's meant to be more complex.

  7. #1907
    Warchief Zenny's Avatar
    10+ Year Old Account
    Join Date
    Oct 2011
    Location
    South Africa
    Posts
    2,171
    Quote Originally Posted by ABEEnr2 View Post
Well, next year your so-called "high end" won't be high end anymore. Also, I still have no idea why you brought AMD up...
Well of course my high end won't be high end anymore; that's just how it works. I brought up AMD as they are the only competitor to Nvidia, and they had nothing to offer me and I couldn't wait.

    - - - Updated - - -

    Quote Originally Posted by Remilia View Post
    From a Futuremark representative.
    http://forums.anandtech.com/showpost...6&postcount=39

    It is very basic for an API that's meant to be more complex.
He is referring to the DX Feature Levels there. If they decide to go with the highest feature level for the next DX12 release, then AMD might have a problem with that.

    - - - Updated - - -

    Quote Originally Posted by Dukenukemx View Post
Now compare DX12/Vulkan without AC against DX11. DX12/Vulkan doesn't add much without AC, if anything.


Because the gains for Nvidia were relatively higher than in other titles. The work in progress for Nvidia, I assume, is for Maxwell, which is never going to happen. People overlook this because the gains for AMD were much higher. Time Spy is like 6% gains for Nvidia, while for Doom it's like 10%. 0% for Maxwell cards.

The relevance for today is that synthetic tests make it easy to skew results. Every graphics card runs the same scene, which makes it easier for driver developers to "optimize" for that one scene. That's why some websites build their own tests with scripts rather than using the same generic test, just to avoid special driver optimizations. Even running a game's built-in benchmark is asking for special optimizations, hence why you make prerecorded scripts.
AMD gained so much from Vulkan due to their terrible OpenGL performance. Which is great for them, but it still doesn't mean that both vendors are now equal under Vulkan. The developers themselves have stated they are still working on Pascal optimizations; the work in progress, I'd assume, is for Pascal, not Maxwell. Pascal in Doom still has Async disabled (not by the drivers, as is the case for Maxwell/Time Spy, but by the developers themselves).

    https://community.bethesda.net/thread/54585?tstart=0

    Quote Originally Posted by Doom Dudes
    We are working with NVIDIA to enable asynchronous compute in Vulkan on NVIDIA GPUs. We hope to have an update soon.
With that quote, as well as the fact that Nvidia is running on an older version of Vulkan than AMD and that the developers have had plenty of GCN experience due to the consoles, it seems very likely that Doom is not running to its full potential on Nvidia hardware.

  8. #1908
    Fluffy Kitten Remilia's Avatar
    10+ Year Old Account
    Join Date
    Apr 2011
    Location
    Avatar: Momoco
    Posts
    15,160
    Quote Originally Posted by Zenny View Post
    He is referring to the DX Feature Levels there. If they decide to go with the highest feature version for the next DX12 release then AMD might have a problem with that.
And? If you want a DX12 benchmark, then use the feature sets; that means both AMD and Nvidia have to deal with the faults in their hardware one way or another. At least up to 12_0, if you want. It's a basic benchmark, and admitted as such.

  9. #1909
    Warchief Zenny's Avatar
    10+ Year Old Account
    Join Date
    Oct 2011
    Location
    South Africa
    Posts
    2,171
    Quote Originally Posted by Remilia View Post
And? If you want a DX12 benchmark, then use the feature sets; that means both AMD and Nvidia have to deal with the faults in their hardware one way or another. At least up to 12_0, if you want. It's a basic benchmark, and admitted as such.
    Feature Sets don't have anything to do with Async Compute though.

  10. #1910
    Fluffy Kitten Remilia's Avatar
    10+ Year Old Account
    Join Date
    Apr 2011
    Location
    Avatar: Momoco
    Posts
    15,160
    Quote Originally Posted by Zenny View Post
    Feature Sets don't have anything to do with Async Compute though.
To put it short: if you want a DX12 benchmark, use what you can and let the vendors deal with it.
If you want to use Asynchronous Compute to its fullest, use it in the multi-engine fashion it was designed for, allowing different 'parts'/engines of a GPU to be used while other tasks are still being done. This means compute + graphics in parallel too. Asynchronous Compute is very broad, but to use it to the fullest you're putting a lot more things in parallel than just compute tasks.
    https://msdn.microsoft.com/en-us/lib...=vs.85%29.aspx
Pascal deals with asynchronous compute via preemption, but it still can't execute graphics and compute in parallel, which is why each GPC in Pascal is dedicated to one or the other. Pascal can 'dynamically load-balance' the work it gets, but the GPCs themselves cannot execute both graphics and compute at once. That's why Pascal loses or gains no performance in those DX12 games. https://www.youtube.com/watch?v=RqK4xGimR7A
    http://international.download.nvidia...aper_FINAL.pdf
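The serial-vs-parallel point being argued here can be sketched with a toy timing model (all numbers are made up for illustration; real GPU scheduling is far more complex than this): if part of the compute pass can hide under the graphics pass, frame time shrinks below the plain sum of the two, which is where the single-digit-to-~10% gains come from.

```python
# Toy model of serial vs. overlapped graphics + compute per frame.
# All timings are invented milliseconds for illustration only.

def frame_time_serial(graphics_ms: float, compute_ms: float) -> float:
    """No async compute: the compute pass runs after graphics finishes."""
    return graphics_ms + compute_ms

def frame_time_overlapped(graphics_ms: float, compute_ms: float, overlap: float) -> float:
    """Async compute: a fraction `overlap` of the compute work hides under graphics."""
    hidden = min(compute_ms * overlap, compute_ms)
    return graphics_ms + compute_ms - hidden

graphics, compute = 14.0, 2.0  # hypothetical per-frame costs
serial = frame_time_serial(graphics, compute)
parallel = frame_time_overlapped(graphics, compute, overlap=0.9)
print(f"serial: {serial:.1f} ms, overlapped: {parallel:.1f} ms, "
      f"fps gain: {(serial / parallel - 1) * 100:.1f}%")
```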
    Last edited by Remilia; 2016-07-25 at 10:10 PM.

  11. #1911
    Old God Vash The Stampede's Avatar
    10+ Year Old Account
    Join Date
    Sep 2010
    Location
    Better part of NJ
    Posts
    10,939
    Quote Originally Posted by Zenny View Post

    AMD gained so much from Vulkan due to the terrible OpenGL performance. Which is great for them, but it still doesn't mean that both vendors are now equal under Vulkan. The developers themselves have stated they are still working on Pascal optimizations. The work in progress I'd assume is for Pascal not Maxwell. Pascal in Doom still has Async disabled. (Not by the drivers as is the case for Maxwell/Time Spy but by the developers themselves)

    https://community.bethesda.net/thread/54585?tstart=0
Nothing they wrote is specific to Maxwell or Pascal. I assume Maxwell because that's the same story I keep hearing with every DX12 game tested so far.

    Currently asynchronous compute is only supported on AMD GPUs and requires DOOM Vulkan supported drivers to run. We are working with NVIDIA to enable asynchronous compute in Vulkan on NVIDIA GPUs. We hope to have an update soon.

They didn't specify Maxwell or Pascal, but if Pascal is indeed included, I can't see how this makes Nvidia look good. Digging through Google shows me nothing about Pascal cards specifically being mentioned as having Async Compute disabled, but again, if they are, what makes you think there'll be a driver patch to fix this in the near future? It's been over a year for Maxwell cards, and still nothing.

The sad truth here is that it'll never get a fix; that's as fast as Pascal will get. By the time Deus Ex: Mankind Divided is released, we'll be over Doom and bitching about how Deus Ex favors AMD/Nvidia. Most likely AMD. Why? Because Deus Ex: Mankind Divided is a collaboration between Eidos Montreal and AMD. Then we'll be told how Eidos has disabled Async Compute for Nvidia hardware and is working with Nvidia on a fix.

With so many games utilizing Async Compute, should we care about synthetic benchmarks like Time Spy? No. No no, NO no no, NO NO, no no no no no no no, ERR, NO! Don't ever, don't do it, NO! Because as you can already tell, it doesn't represent DX12/Vulkan games well at all. Deus Ex will most certainly favor AMD by a lot compared to Nvidia, just like Doom has. With the exception of Rise of the Tomb Raider, all other DX12/Vulkan games have favored AMD.
    Last edited by Vash The Stampede; 2016-07-26 at 03:59 AM.

  12. #1912
    Warchief Zenny's Avatar
    10+ Year Old Account
    Join Date
    Oct 2011
    Location
    South Africa
    Posts
    2,171
    Quote Originally Posted by Dukenukemx View Post
Nothing they wrote is specific to Maxwell or Pascal. I assume Maxwell because that's the same story I keep hearing with every DX12 game tested so far.

    Currently asynchronous compute is only supported on AMD GPUs and requires DOOM Vulkan supported drivers to run. We are working with NVIDIA to enable asynchronous compute in Vulkan on NVIDIA GPUs. We hope to have an update soon.

They didn't specify Maxwell or Pascal, but if Pascal is indeed included, I can't see how this makes Nvidia look good. Digging through Google shows me nothing about Pascal cards specifically being mentioned as having Async Compute disabled, but again, if they are, what makes you think there'll be a driver patch to fix this in the near future? It's been over a year for Maxwell cards, and still nothing.
It makes sense that it's Pascal, because two other vendors have gotten Async to work on Pascal but nothing for Maxwell.
The sad truth here is that it'll never get a fix; that's as fast as Pascal will get. By the time Deus Ex: Mankind Divided is released, we'll be over Doom and bitching about how Deus Ex favors AMD/Nvidia. Most likely AMD. Why? Because Deus Ex: Mankind Divided is a collaboration between Eidos Montreal and AMD. Then we'll be told how Eidos has disabled Async Compute for Nvidia hardware and is working with Nvidia on a fix.
Based on what? What makes you think Doom is optimized for Nvidia at all? You don't take an outlier and declare it the only valid result. Several months ago, if I had claimed that Vulkan runs better on Nvidia hardware, I would have been right as well, because the only game with Vulkan support (Talos Principle) ran better on Nvidia.

With so many games utilizing Async Compute, should we care about synthetic benchmarks like Time Spy? No. No no, NO no no, NO NO, no no no no no no no, ERR, NO! Don't ever, don't do it, NO! Because as you can already tell, it doesn't represent DX12/Vulkan games well at all. Deus Ex will most certainly favor AMD by a lot compared to Nvidia, just like Doom has. With the exception of Rise of the Tomb Raider, all other DX12/Vulkan games have favored AMD.
Excuse me? It represents DX12 pretty damn well, thank you; the only other "from the ground up" DX12 implementation is very similar to it (Ashes), and several other titles are very close in performance between AMD and Nvidia (Forza and Warhammer).

    - - - Updated - - -

    Quote Originally Posted by Remilia View Post
To put it short: if you want a DX12 benchmark, use what you can and let the vendors deal with it.
If you want to use Asynchronous Compute to its fullest, use it in the multi-engine fashion it was designed for, allowing different 'parts'/engines of a GPU to be used while other tasks are still being done. This means compute + graphics in parallel too. Asynchronous Compute is very broad, but to use it to the fullest you're putting a lot more things in parallel than just compute tasks.
    https://msdn.microsoft.com/en-us/lib...=vs.85%29.aspx
    Compute + graphics in parallel? Sooooo, just like Time Spy then?

    http://www.futuremark.com/pressrelea...dmark-time-spy

    It's literally spelled out there.

Pascal deals with asynchronous compute via preemption, but it still can't execute graphics and compute in parallel, which is why each GPC in Pascal is dedicated to one or the other. Pascal can 'dynamically load-balance' the work it gets, but the GPCs themselves cannot execute both graphics and compute at once. That's why Pascal loses or gains no performance in those DX12 games. https://www.youtube.com/watch?v=RqK4xGimR7A
    http://international.download.nvidia...aper_FINAL.pdf
1. Pascal doesn't just deal with compute via preemption; dynamic load balancing is used as well.

    2. Graphics/Compute can be changed on a per SM level as needed: http://www.anandtech.com/show/10325/...ition-review/9

  13. #1913
    Old God Vash The Stampede's Avatar
    10+ Year Old Account
    Join Date
    Sep 2010
    Location
    Better part of NJ
    Posts
    10,939
    Quote Originally Posted by Zenny View Post
It makes sense that it's Pascal, because two other vendors have gotten Async to work on Pascal but nothing for Maxwell.
Pascal's Async Compute is unorthodox, from what I hear. If a game wasn't built around Pascal's needs, then enabling AC would probably hurt performance. Nvidia probably needs game developers to alter some game code to see any benefits. Hence the controversy over Time Spy.

But that's entirely speculation on my part. Without access to either game code or driver code, nobody can know for certain. Based on trends so far, though, I don't see anything coming for Doom. BTW, which vendors got AC working right for Nvidia? Besides Time Spy, nothing else seems to gain benefits from DX12/Vulkan.
Based on what? What makes you think Doom is optimized for Nvidia at all? You don't take an outlier and declare it the only valid result. Several months ago, if I had claimed that Vulkan runs better on Nvidia hardware, I would have been right as well, because the only game with Vulkan support (Talos Principle) ran better on Nvidia.
Doom was the first big AAA game to support Vulkan; Talos Principle had Vulkan added to it overnight. Anyway, here's a bunch of useful info comparing how the 1060 behaves in DX11/DX12, so we can see how much these cards benefit from each game.

    You can thank @coprax for this.
    https://docs.google.com/spreadsheets...WVo/edit#gid=0

    How much does the 1060 benefit from these DX12/Vulkan games?

    Doom OpenGL -> Vulkan = negative
    RotTR DX11 -> DX12 = neutral
If you compare like-for-like results from Ars Technica, Hardware Canucks, and TechSpot, you can see that nobody benefits from DX12 in this game.
    Hitman DX11 -> DX12 = neutral
    Ashes of the Singularity DX11 -> DX12 = neutral
    Total War: Warhammer DX11 -> DX12 = neutral
Talos Principle DX11 -> Vulkan = positive

Except for one game, the Pascal architecture doesn't seem to gain anything from DX12/Vulkan. How accurate is Time Spy when, so far, Pascal doesn't seem to show any gains at all? Mind you, this is with the 1060, so the story could be different for the 1070/1080, but not by much. Considering how many more people buy mainstream-priced cards, the 1060 is probably the most important graphics card from Nvidia.

Also worth pointing out is how well the 1060 does in DX11 titles. It would be in Nvidia's best interest to keep games as DX11-like as possible. That's part of the controversy over Time Spy.
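The positive/neutral/negative bucketing used in the list above is easy to make explicit. A sketch of that bookkeeping (the FPS numbers below are placeholders for illustration, not coprax's actual data), with a tolerance band so run-to-run benchmark noise counts as neutral:

```python
# Classify a card's API-to-API FPS change as positive / neutral / negative,
# treating changes inside a small tolerance band as run-to-run noise.

def classify(fps_old_api: float, fps_new_api: float, tolerance_pct: float = 3.0) -> str:
    change = (fps_new_api / fps_old_api - 1.0) * 100.0
    if change > tolerance_pct:
        return "positive"
    if change < -tolerance_pct:
        return "negative"
    return "neutral"

# Placeholder GTX 1060 numbers for illustration only.
results = {
    "Doom OpenGL -> Vulkan": classify(75.0, 70.0),
    "RotTR DX11 -> DX12":    classify(60.0, 60.5),
    "Talos DX11 -> Vulkan":  classify(55.0, 61.0),
}
print(results)
```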
    Excuse me? It represents DX12 pretty damn well thank you, the only other "from the ground up" DX12 implementation is very similar to it (Ashes) and several other titles are very close in performance between AMD and Nvidia (Forza and Warhammer).
Another thank you to @coprax for that link, because whoever made it is still updating it. According to that chart, Forza definitely favors AMD, by 13 fps. Same goes for Warhammer, though not by much.

Like I said, Deus Ex is coming out soon and will likely favor AMD more than Nvidia. If anything, Nvidia cards will likely gain 0 fps. Unlike Forza and Warhammer, it's a game that'll likely be built for DX12 from the ground up.
    Last edited by Vash The Stampede; 2016-07-26 at 06:48 AM.

  14. #1914
    Deleted
About the whole Pascal thing: someone on another forum pointed this out.
Volta was supposed to be the successor to Maxwell; Pascal did not exist. Just a few years ago, Pascal popped up.





Take that any way you want it. The first roadmap is from 2013, the second is from 2015... It is impossible to make something completely new in just 1-2 years, so no wonder Pascal is basically just Maxwell 2.0. Like others have said, a jump like SB --> IB. Volta will probably be completely new with proper Async support, or so I hope.

    Also, Volta is coming sooner.

    Following the surprise TITAN X Pascal launch slated for 2nd August, it looks like NVIDIA product development cycle is running on steroids, with reports emerging of the company accelerating its next-generation "Volta" architecture debut to May 2017, along the sidelines of next year's GTC. The architecture was originally scheduled to make its debut in 2018.

    Much like "Pascal," the "Volta" architecture could first debut with HPC products, before moving on to the consumer graphics segment. NVIDIA could also retain the 16 nm FinFET+ process at TSMC for Volta. Stacked on-package memory such as HBM2 could be more readily available by 2017, and could hit sizable volumes towards the end of the year, making it ripe for implementation in high-volume consumer products.

  15. #1915
    Warchief Zenny's Avatar
    10+ Year Old Account
    Join Date
    Oct 2011
    Location
    South Africa
    Posts
    2,171
    Quote Originally Posted by Dukenukemx View Post
Pascal's Async Compute is unorthodox, from what I hear. If a game wasn't built around Pascal's needs, then enabling AC would probably hurt performance. Nvidia probably needs game developers to alter some game code to see any benefits. Hence the controversy over Time Spy.

But that's entirely speculation on my part. Without access to either game code or driver code, nobody can know for certain. Based on trends so far, though, I don't see anything coming for Doom. BTW, which vendors got AC working right for Nvidia? Besides Time Spy, nothing else seems to gain benefits from DX12/Vulkan.
Pascal does not gain nearly as much as AMD cards, but it is possible to get gains on both while using a vendor-neutral optimization path; Time Spy proves that. Also, you seem to be confusing DX12 with Async. Time Spy doesn't have a DX11 code path, so for all we know, the performance difference between a hypothetical DX11 version of Time Spy and the current one on Nvidia hardware may have been negligible.

Rise of the Tomb Raider now supports Async on AMD and Pascal (not Maxwell). It increases FPS quite a bit on AMD, but on Pascal it only seems to affect the minimum frames (fewer drops).
Doom was the first big AAA game to support Vulkan; Talos Principle had Vulkan added to it overnight. Anyway, here's a bunch of useful info comparing how the 1060 behaves in DX11/DX12, so we can see how much these cards benefit from each game.

    You can thank @coprax for this.
    https://docs.google.com/spreadsheets...WVo/edit#gid=0

    How much does the 1060 benefit from these DX12/Vulkan games?

    Doom OpenGL -> Vulkan = negative
    RotTR DX11 -> DX12 = neutral
If you compare like-for-like results from Ars Technica, Hardware Canucks, and TechSpot, you can see that nobody benefits from DX12 in this game.
    Hitman DX11 -> DX12 = neutral
    Ashes of the Singularity DX11 -> DX12 = neutral
    Total War: Warhammer DX11 -> DX12 = neutral
Talos Principle DX11 -> Vulkan = positive

Except for one game, the Pascal architecture doesn't seem to gain anything from DX12/Vulkan. How accurate is Time Spy when, so far, Pascal doesn't seem to show any gains at all? Mind you, this is with the 1060, so the story could be different for the 1070/1080, but not by much. Considering how many more people buy mainstream-priced cards, the 1060 is probably the most important graphics card from Nvidia.

    Also worth pointing out how well the 1060 does in DX11 titles. It would be in Nvidia's best interest to keep games as DX11-like as possible, which is part of the controversy over Time Spy.

    Another thank you to @coprax for that link, because whoever made it is still updating it. According to that chart, Forza definitely favors AMD, by 13 fps. Same goes for Warhammer, though not by as much.

    Like I said, Deus Ex is coming out soon and will likely favor AMD more than Nvidia. If anything, Nvidia cards will likely gain 0 fps. Unlike Forza and Warhammer, it's a game that will likely be built for DX12 from the ground up.
    Talos Principle gains FPS on Vulkan for Nvidia; Doom does not. The developers themselves have said they are still working with Nvidia on Doom's Vulkan implementation. Doom on Nvidia cards also runs on a Vulkan version several releases older than the one AMD runs.

    I think that is all we need to say on the matter regarding how well Doom is optimized for Nvidia.

    Warhammer and Ashes are roughly the same on the GeForce GTX 1060 and Radeon RX 480. Ashes gains on the RX 480 on some sites and loses on others. Warhammer swings both ways by literally a handful of frames.

    Forza can be faster on the GeForce GTX 1060 as well; I'm not sure why that chart doesn't include it:


  16. #1916
    Fluffy Kitten Remilia's Avatar
    10+ Year Old Account
    Join Date
    Apr 2011
    Location
    Avatar: Momoco
    Posts
    15,160
    Quote Originally Posted by Zenny View Post
    Compute + graphics in parallel? Sooooo, just like Time Spy then?

    http://www.futuremark.com/pressrelea...dmark-time-spy

    It's literally spelled out there.
    Dude, all it keeps saying is COMPUTE in parallel for their implementation, while regurgitating MSDN. The examples I'm talking about are in DOOM and Ashes. DOOM uses AC for the TSSAA so as to use the ROPs (iirc) while everything else is also doing its own work, instead of that hardware being left idle while everything else is being done.
    1. Pascal doesn't just deal with compute via preemption, dynamic load balancing is used as well.

    2. Graphics/Compute can be changed on a per SM level as needed: http://www.anandtech.com/show/10325/...ition-review/9
    I read the same thing you quoted; I've noted it. And the last time I read the whitepaper, it made no statement about it going down to a per-SM level. If it did, they'd make sure it was very well stated.

  17. #1917
    Warchief Zenny's Avatar
    10+ Year Old Account
    Join Date
    Oct 2011
    Location
    South Africa
    Posts
    2,171
    Quote Originally Posted by Remilia View Post
    Dude, all it keeps saying is COMPUTE in parallel for their implementation, while regurgitating MSDN. The examples I'm talking about are in DOOM and Ashes. DOOM uses AC for the TSSAA so as to use the ROPs (iirc) while everything else is also doing its own work, instead of that hardware being left idle while everything else is being done.
    I'm not even sure what you are trying to say here. Can you be more specific? Are you claiming they use Async for TSSAA? Why does async work with TSSAA disabled then?

    https://twitter.com/idSoftwareTiago/...90016988082180

    I read the same thing you quoted; I've noted it. And the last time I read the whitepaper, it made no statement about it going down to a per-SM level. If it did, they'd make sure it was very well stated.
    So Anandtech is lying about the claim it's making? They literally link two images directly from Nvidia in that review. I've read the whitepaper as well and all the claims are lifted directly from it:

    http://international.download.nvidia...aper_FINAL.pdf

    Quote Originally Posted by Geforce 1080 whitepaper
    For overlapping workloads, Pascal introduces support for “dynamic load balancing.” In Maxwell generation GPUs, overlapping workloads were implemented with static partitioning of the GPU into a subset that runs graphics, and a subset that runs compute. This is efficient provided that the balance of work between the two loads roughly matches the partitioning ratio. However, if the compute workload takes longer than the graphics workload, and both need to complete before new work can be done, then the portion of the GPU configured to run graphics will go idle. This can cause reduced performance that may exceed any performance benefit that would have been provided from running the workloads overlapped. Hardware dynamic load balancing addresses this issue by allowing either workload to fill the rest of the machine if idle resources are available.
    Pascal's answer to Async has nothing to do with preemption, despite the flawed claims made on this forum and others.
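    The static-vs-dynamic distinction in that whitepaper quote is easy to see with a toy model. This is just a sketch with made-up numbers (abstract "work" amounts and execution units, nothing resembling real SM scheduling):

    ```python
    # Toy model of the scheduling behaviour described in the whitepaper quote
    # above. All numbers are made up for illustration; real GPU scheduling is
    # nothing like this simple.

    def static_partition(gfx_work, comp_work, gfx_units, comp_units):
        """Maxwell-style static split: each partition only runs its own queue,
        so whichever side finishes first sits idle until both are done."""
        gfx_time = gfx_work / gfx_units
        comp_time = comp_work / comp_units
        total = max(gfx_time, comp_time)
        # unit-ticks wasted by the partition that drained early
        idle = (total - gfx_time) * gfx_units + (total - comp_time) * comp_units
        return total, idle

    def dynamic_balance(gfx_work, comp_work, total_units):
        """Pascal-style dynamic load balancing (per the quote): units that go
        idle are handed to whichever workload still has work left, so in the
        ideal case nothing sits unused."""
        return (gfx_work + comp_work) / total_units, 0.0

    # 16 units; under a 12/4 static split the graphics side finishes at t=10
    # while compute grinds on until t=20, leaving 120 unit-ticks idle.
    static_time, static_idle = static_partition(120, 80, gfx_units=12, comp_units=4)
    dynamic_time, _ = dynamic_balance(120, 80, total_units=16)
    print(static_time, static_idle)  # 20.0 120.0
    print(dynamic_time)              # 12.5
    ```

    The point of the sketch is only the shape of the problem: with a static split, a mismatch between the partition ratio and the workload ratio shows up as idle hardware, which is exactly the situation the quote says dynamic load balancing was added to fix.
    
    
    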

  18. #1918
    Old God Vash The Stampede's Avatar
    10+ Year Old Account
    Join Date
    Sep 2010
    Location
    Better part of NJ
    Posts
    10,939
    Quote Originally Posted by Zenny View Post
    Pascal doesn't gain nearly as much as AMD cards, but it is possible to get gains on both while using a vendor-neutral optimization path; Time Spy proves that.
    Time Spy proves that vendors need to include multiple code paths to properly support DX12 on both AMD and Nvidia. How hard would it be to include Feature Level 12? Battlefield 4 supports NVAPI and Mantle as well as DX11, but a benchmark tool can't use FL 12 or 12_1?

    The point of 3DMark is to show what future games will perform like. How accurate is it if it can't make full use of the DX12 API? This is similar to the situation Nvidia had with DX9 and the GeForce FX cards: "Way It's Meant To Be Played" games were nominally DX9, but a lot of the code was DX8.1. Same thing with DX10 and DX10.1, where suddenly games had 10.1 support removed. Full DX9 and DX10.1 would crap on Nvidia cards. Sounds familiar.
    Also, you seem to be confusing DX12 with Async. Time Spy doesn't have a DX11 code path, so for all we know the performance difference between a hypothetical DX11 version of Time Spy and the current one on Nvidia hardware may have been negligible.
    I consider feature level 11_0 very near to DX11. The funny thing is that Maxwell and Pascal are FL 12_0 and 12_1, while most AMD cards are FL 11_1 and 12_0. Wouldn't it benefit Nvidia to have an FL 12 path?

    Talos Principle gains FPS on Vulkan for Nvidia; Doom does not. The developers themselves have said they are still working with Nvidia on Doom's Vulkan implementation. Doom on Nvidia cards also runs on a Vulkan version several releases older than the one AMD runs.

    I think that is all we need to say on the matter regarding how well Doom is optimized for Nvidia.
    The place to leave this is that Doom's Vulkan will never see AC enabled on Nvidia. It would have been done by now.

    Forza can be faster on the GeForce GTX 1060 as well; I'm not sure why that chart doesn't include it:

    Whoever made the chart explains why certain benchmarks weren't included: if the site didn't explain the settings for the game, the benchmark wasn't included.

  19. #1919
    Warchief Zenny's Avatar
    10+ Year Old Account
    Join Date
    Oct 2011
    Location
    South Africa
    Posts
    2,171
    Quote Originally Posted by Dukenukemx View Post
    Time Spy proves that vendors need to include multiple code paths to properly support DX12 on both AMD and Nvidia. How hard would it be to include Feature Level 12? Battlefield 4 supports NVAPI and Mantle as well as DX11, but a benchmark tool can't use FL 12 or 12_1?

    The point of 3DMark is to show what future games will perform like. How accurate is it if it can't make full use of the DX12 API? This is similar to the situation Nvidia had with DX9 and the GeForce FX cards: "Way It's Meant To Be Played" games were nominally DX9, but a lot of the code was DX8.1. Same thing with DX10 and DX10.1, where suddenly games had 10.1 support removed. Full DX9 and DX10.1 would crap on Nvidia cards. Sounds familiar.
    DirectX 10.1 was a useless feature set that almost nobody needed, and it was soon eclipsed by DX11; even Microsoft admitted as much. Hell, the entire DX10 generation was a bust.

    It's funny you bring up DX9, though: the ATI X800/X850 series couldn't support DX9.0c and HDR, and ATI claimed it wasn't really needed, until Oblivion came along and you couldn't enable HDR on those cards.

    The reason Time Spy utilizes DX12 feature level 11_0 is that it is the base DX12 feature level that most cards support, and the one that all current DX12 games actually utilize. I'm sure they could enable 12_1, but then most AMD cards wouldn't be able to run the benchmark. Should I claim the benchmark is now biased toward AMD because of that?
    I consider Feature level 11 very near to Dx11. Funny thing is Maxwell and Pascal are FL 12 and 12_1. Most AMD cards are FL 11_1 and 12. Wouldn't it benefit Nvidia to have FL 12?
    You are free to consider DX12 feature level 11_0 to be very near to DX11, but you would be wrong. The higher feature levels just include additional features on top of the DX12 base; feature level 11_0 brings all the advantages that DX12 as a whole has over DX11.
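    The logic of targeting the base level is simple to sketch. The support table below is simplified and partly from memory (check vendor documentation for real per-GPU feature levels); the point is just that a single shared code path has to target the highest level that every supported card has:

    ```python
    # Hypothetical sketch of why a single-code-path benchmark targets the base
    # feature level. The support table is illustrative, not authoritative.

    FEATURE_LEVELS = ["12_1", "12_0", "11_1", "11_0"]  # highest first

    def highest_common_level(cards):
        """Highest feature level that every card in `cards` supports. A shared
        code path (like Time Spy's) has to target this, not the highest level
        any one card advertises."""
        for level in FEATURE_LEVELS:
            if all(level in supported for supported in cards.values()):
                return level
        return None

    cards = {
        "Pascal":  {"12_1", "12_0", "11_1", "11_0"},
        "Maxwell": {"12_1", "12_0", "11_1", "11_0"},
        "GCN 1.1": {"12_0", "11_1", "11_0"},
        "GCN 1.0": {"11_1", "11_0"},
        "Kepler":  {"11_0"},
    }
    print(highest_common_level(cards))  # 11_0 -- drop the oldest cards and it rises
    ```

    Raising the target to 12_1 simply shrinks the set of cards that can run the benchmark at all, which is the trade-off being argued about here.
    
    
    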

    The place to leave this is that Doom's Vulkan will never see AC enabled on Nvidia. It would have been done by now.
    I... what? How on earth did you come to this conclusion at all? You literally just made shit up! Your opinion =/= fact.

    "Dur, it's been 1 month since Doom came out and Vulkan is not out yet, Doom will never have Vulkan. It would have been done by now."

    It took over 2 months for the Doom Vulkan patch to arrive, and it's clearly a work in progress: no async on Nvidia cards, an older Vulkan version running on Nvidia cards, and async not working under certain AA modes on AMD cards.

    All of these are issues the developers have stated they will fix in a future patch. How on this green Earth did that lead you to your conclusion? Christ, you are not even trying to hide your bias now.

    Who ever made the chart explains why certain benchmarks weren't included. If they didn't explain the settings for the game, the benchmark wasn't included.
    DX12 mode, maxed in game settings?

  20. #1920
    Fluffy Kitten Remilia's Avatar
    10+ Year Old Account
    Join Date
    Apr 2011
    Location
    Avatar: Momoco
    Posts
    15,160
    Quote Originally Posted by Zenny View Post
    I'm not even sure what you are trying to say here. Can you be more specific? Are you claiming they use Async for TSSAA? Why does async work with TSSAA disabled then?

    https://twitter.com/idSoftwareTiago/...90016988082180
    Anti-aliasing is one of many things that can be submitted asynchronously. It's also why TSSAA and no AA are the only selectable options.
    So Anandtech is lying about the claim it's making? They literally link two images directly from Nvidia in that review. I've read the whitepaper as well and all the claims are lifted directly from it:
    I doubt they're lying, but they've been wrong before on asynchronous compute and hardware queues, like on Maxwell, and they never bothered to amend it even after the whole Ashes debacle came out.
    http://international.download.nvidia...aper_FINAL.pdf

    Pascal's answer to Async has got nothing to do with preemption despite the flawed claims made on this forum and others.
    The whitepaper says nothing about it going down to a per-SM level. And since you used Anandtech, go look at the next page.
    Preemption is used for AC as well. Yes, it's extremely broad, but it can be and is being used for AC as a band-aid solution, something AMD itself noted a year ago.
    https://youtu.be/v3dUhep0rBs?t=96
    Before we start, in writing this article I spent some time mulling over how to best approach the subject of fine-grained preemption, and ultimately I’m choosing to pursue it on its own page, and not on the same page as concurrency. Why? Well although it is an async compute feature – and it’s a good way to get time-critical independent tasks started right away – its purpose isn’t to improve concurrency.
