Thread: GTX 1080

  1. #1881
    Fluffy Kitten Remilia's Avatar
    10+ Year Old Account
    Join Date
    Apr 2011
    Location
    Avatar: Momoco
    Posts
    15,160
    Quote Originally Posted by Master Guns View Post
    Again: you're saying that a 1070 cannot support 1440? When lower tier cards have been doing it since before the inception of the 1070?
    It's the 144Hz part, not the resolution.

  2. #1882
    Quote Originally Posted by Master Guns View Post
    So you're trying to say 8GB of GDDR5 isn't enough for 1440 lol? The 1070 will smash anything at 1440.
    "Smash" is awfully optimistic. http://www.anandtech.com/bench/product/1731 45-70 fps from one title to the next isn't "smashing" by any stretch of the imagination! And you're definitely not going to get the 144hz benefit with a single GTX 1070 that way.

  3. #1883
    The Lightbringer Artorius's Avatar
    10+ Year Old Account
    Join Date
    Dec 2012
    Location
    Natal, Brazil
    Posts
    3,781
    Quote Originally Posted by Zenny View Post
    Also "Pascal in the form of GP104 isn't really Pascal"? Give me a break.
    GP102 and GP106 aren't either.

  4. #1884
    Warchief Zenny's Avatar
    10+ Year Old Account
    Join Date
    Oct 2011
    Location
    South Africa
    Posts
    2,171
    Quote Originally Posted by Artorius View Post
    GP102 and GP106 aren't either.
    Look, this utter nonsense has got to stop. The GeForce 1080, 1070, 1060 and the new Titan X are all Pascal chips, period.

    http://www.anandtech.com/show/10325/...ition-review/2
    http://www.anandtech.com/show/10325/...ition-review/4
    http://www.anandtech.com/show/10325/...ition-review/9
    http://www.anandtech.com/show/10325/...tion-review/10
    http://www.anandtech.com/show/10325/...tion-review/11
    http://www.anandtech.com/show/10325/...tion-review/12

    Just looking at an architecture diagram and saying it's the same chip as Maxwell is not going to give you the full picture. CUDA cores, texture units, PolyMorph Engines, Raster Engines, and ROPs are all identical to Maxwell; however, 4th-gen delta compression, dynamic scheduling (e.g. async compute), massively improved preemption capabilities, Simultaneous Multi-Projection, GDDR5X, a revamped display controller and an updated video encode/decode block are all new.
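
    As a rough aside (a minimal sketch of my own, not anything from those links): NVIDIA's own CUDA toolchain distinguishes the generations by compute capability, with Maxwell parts reporting 5.x (e.g. GM204 = 5.2) and consumer Pascal parts reporting 6.x (GP102/GP104/GP106 = 6.1):

        #include <cstdio>
        #include <cuda_runtime.h>

        // Print the compute capability of every CUDA device present.
        // Maxwell reports major version 5; Pascal reports major version 6.
        int main() {
            int count = 0;
            cudaGetDeviceCount(&count);
            for (int i = 0; i < count; ++i) {
                cudaDeviceProp prop;
                cudaGetDeviceProperties(&prop, i);
                printf("%s: compute capability %d.%d\n",
                       prop.name, prop.major, prop.minor);
            }
            return 0;
        }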

    I think Anandtech themselves said it best:

    Quote Originally Posted by Anandtech
    Looking at an architecture diagram for GP104, Pascal ends up looking a lot like Maxwell, and this is not by chance. After making more radical changes to their architecture with Maxwell, for Pascal NVIDIA is taking a bit of a breather. Which is not to say that Pascal is Maxwell on 16nm – this is very much a major feature update – but when it comes to discussing the core SM architecture itself, there is significant common ground with Maxwell.
    Yes, Pascal is broadly similar to Maxwell, but it has numerous key features that do set it apart.

    If you are referring to how GP102/106/104 differ from GP100, well, that basically comes down to them serving two very different markets.

    http://www.anandtech.com/show/10325/...ition-review/2

    The only thing left out that could benefit the consumer level chip in gaming is HBM2. But having a different memory controller doesn't make it a different chip entirely.

    Of course, I doubt I'll get any sort of response to this; I'm still waiting for you to back up your claim that Time Spy is DX12 feature level 11_0 because of Kepler, of all things. Or that pre-Polaris cards don't even support DX12 feature level 12_1.

  5. #1885
    Fluffy Kitten Remilia's Avatar
    10+ Year Old Account
    Join Date
    Apr 2011
    Location
    Avatar: Momoco
    Posts
    15,160
    Tell me, how are two different architectures the same architecture, P100 and GP102/104/106? Even AT and 4Gamer note that the main difference between Maxwell and GP102/104/106 is the PolyMorph Engine, for even better tessellation fun. The core structure is pretty much the same. Sandy Bridge to Ivy Bridge is still Sandy Bridge, die-shrunk with some features added on top; even Intel noted that it's essentially the same thing with some other stuff tacked on. Just because it adds some things doesn't make it something completely new and different.

  6. #1886
    The Lightbringer Artorius's Avatar
    10+ Year Old Account
    Join Date
    Dec 2012
    Location
    Natal, Brazil
    Posts
    3,781
    Quote Originally Posted by Zenny View Post
    Of course, I doubt I'll get any sort of response to this; I'm still waiting for you to back up your claim that Time Spy is DX12 feature level 11_0 because of Kepler, of all things. Or that pre-Polaris cards don't even support DX12 feature level 12_1.
    All the things before this part are pretty much pointless. It doesn't matter how many new "features" you bake in; it doesn't change the hardware layout and doesn't change the uarch. We're not saying GP102/104/106 are simply die-shrunk GM200/204/206. We're saying that they're way closer to Maxwell than to Pascal, and they are, whether you want it or not.

    About the part that I left in the quote, it doesn't matter. The problem with Time Spy doesn't have anything to do with what you're talking about to begin with.

    The problem with Time Spy is that it strives to be a "common ground DX12 benchmark", but it hardly has any compute in it whatsoever. It tries its best to be as little DX12 as possible, so as not to make it obvious that Nvidia's current uarch doesn't benefit much from it. Having multi-engine support doesn't mean anything when there isn't any compute workload for it to do its job.
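
    For context, "multi-engine" at the API level just means the application can create a COMPUTE queue next to its DIRECT queue and feed work to both. A minimal sketch (assuming a valid ID3D12Device* already exists; the helper is made up for illustration):

        #include <d3d12.h>
        #include <wrl/client.h>
        using Microsoft::WRL::ComPtr;

        // Create one graphics (DIRECT) queue and one COMPUTE queue.
        // Having the second queue is all "multi-engine support" means at
        // the API level; it only pays off if real compute work is submitted.
        void CreateQueues(ID3D12Device* device,
                          ComPtr<ID3D12CommandQueue>& direct,
                          ComPtr<ID3D12CommandQueue>& compute) {
            D3D12_COMMAND_QUEUE_DESC desc = {};
            desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
            device->CreateCommandQueue(&desc, IID_PPV_ARGS(&direct));
            desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
            device->CreateCommandQueue(&desc, IID_PPV_ARGS(&compute));
        }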

    They could've done something that exploits one of the strongest performance increasing features of DX12, but they didn't.

    And even then, on top of it, their implementation is simply weird; everything about the benchmark is done in a way that benefits Nvidia.

  7. #1887
    Warchief Zenny's Avatar
    10+ Year Old Account
    Join Date
    Oct 2011
    Location
    South Africa
    Posts
    2,171
    Quote Originally Posted by Artorius View Post
    All the things before this part are pretty much pointless. It doesn't matter how many new "features" you bake in; it doesn't change the hardware layout and doesn't change the uarch. We're not saying GP102/104/106 are simply die-shrunk GM200/204/206. We're saying that they're way closer to Maxwell than to Pascal, and they are, whether you want it or not.
    No, they are Pascal; nothing about the chips is Maxwell. You can make up bullshit all you want, it doesn't change any of the facts. The stuff left out of the HPC part has got bugger all to do with gaming. Yes, Pascal is similar to Maxwell, I even put a quote there to indicate as much, but that doesn't make the consumer-level chips not a Pascal part. If you were instead claiming that Pascal is very similar to Maxwell I would not disagree with you at all, because that is correct, but you're basically stating that the GeForce 1080 is not Pascal, which is just silly.

    I mean, I'm the one actually linking stuff here to prove my point, all you're doing is basically the equivalent of waving your arms about and going "nu-uh".


    Quote Originally Posted by Artorius View Post
    About the part that I left in the quote, it doesn't matter. The problem with Time Spy doesn't have anything to do with what you're talking about to begin with.

    The problem with Time Spy is that it strives to be a "common ground DX12 benchmark", but it hardly has any compute in it whatsoever. It tries its best to be as little DX12 as possible, so as not to make it obvious that Nvidia's current uarch doesn't benefit much from it. Having multi-engine support doesn't mean anything when there isn't any compute workload for it to do its job.

    They could've done something that exploits one of the strongest performance-increasing features of DX12, but they didn't.

    And even then, on top of it, their implementation is simply weird; everything about the benchmark is done in a way that benefits Nvidia.
    So AMD doesn't gain almost 12% in performance when async is enabled? And does that mean Doom and Ashes don't have async either?

    Let's see, Fury X gains roughly 10% in Ashes:

    [image removed: benchmark chart of async compute gains in Ashes]

    But wait, isn't that exactly what Time Spy does?

    [image removed: benchmark chart of async compute gains in Time Spy]

    Damn, guess Ashes is fake DX12 then, with fake async. Surely Doom shows bigger gains, being the only true benchmark that is 100% accurate for both Nvidia and AMD (/sarcasm).

    [image removed: benchmark chart of async compute gains in Doom]

    Aww crap, Doom gets even less than a 10% performance boost from async.

    So I'm supposed to believe Time Spy does not do async compute correctly even though it gets a bigger performance boost for AMD cards than either Ashes or Doom?

    Why did AMD not veto it? They have every right to do so:

    http://www.futuremark.com/business/b...opment-program

    Quote Originally Posted by Futuremark
    3DMark Time Spy has been in development for nearly two years, and BDP members have been involved from the start. BDP members receive regular builds throughout development and conduct their own review and testing at each stage. They have access to the source code and can suggest improvements and changes to ensure that the implementation is correct. All development takes place in a single source tree, which means anything suggested by a vendor can be immediately reviewed and commented on by the other vendors. Ultimately, each member approves the final benchmark for release to the press and public.
    Of all the 3DMark tests we have created over the years, Time Spy has probably seen the most scrutiny from our partners. Each vendor had staff spend weeks in our office working with our engineers. The daily source code drops we offer have been downloaded hundreds of times. We have around one thousand emails of vendor communication related to Time Spy. Every detail has been debated and discussed at length.
    Why does AMD approve of it then? They do it right here:

    http://radeon.com/radeon-wins-3dmark-dx12/

    But let's look at some of the claims this forum post you linked makes (not sure why I should bother, it's a freaking forum post of all things):

    Quote Originally Posted by Dubious Forum Post
    As for 3D Mark Time Fly... See concurrent vs parallel execution. All of the current games supporting Asynchronous Compute make use of parallel execution of compute and graphics tasks. 3D Mark Time Fly support concurrent. It is not the same Asynchronous Compute.
    Damn, I guess it's true then, Time Spy is full of shit! Oh wait, it's totally not:

    http://www.futuremark.com/pressrelea...dmark-time-spy

    Quote Originally Posted by Futuremark
    Once initiated, multiple queues can execute in parallel. But it is entirely up to the driver and the hardware to decide how to execute the command lists - the game or application cannot affect this decision with the DirectX 12 API.
    This parallelism is commonly known as ‘asynchronous compute’ when work is done on the COMPUTE queue at the same time as work is being done on the DIRECT queue.
    Just read through the document; it mentions the word "parallel" quite a number of times and even provides images from GPUView to show how parallel tasks are being executed with async on and off. They even note how Maxwell is unable to utilize async compute at all. The word "concurrent" is used a total of... 0 times.

    Quote Originally Posted by Dubious Forum Post
    Notice the context switch involved?
    Yes I do, and? This image has got nothing to do with Time Spy. The phrase "context switch" is not used once by Futuremark.

    Quote Originally Posted by Dubious Forum Post
    If parallelism was used then we would see Maxwell taking a performance hit
    This is noted as true by Futuremark themselves; they also indicated that async compute has been disabled by Nvidia themselves in the driver, thus no performance loss (or gain either).

    Quote Originally Posted by Dubious Forum Post
    Where are the tech journalists these days?
    Apparently not in this forum post.

    So let's present the facts:

    1. With async on, AMD cards gain about 12% in Time Spy, in line with the gains seen in Doom and Ashes.
    2. Pascal can do async, just not nearly as well as AMD can, so its performance gains are much smaller.
    3. Maxwell cannot do async at all and has it disabled in the driver, hence zero performance difference between on and off.

    Try to present some actual evidence next time, k?
    Last edited by Zenny; 2016-07-25 at 05:54 PM.

  8. #1888
    Fluffy Kitten Remilia's Avatar
    10+ Year Old Account
    Join Date
    Apr 2011
    Location
    Avatar: Momoco
    Posts
    15,160
    Just FYI on calling it a dubious forum post: he (Mahigan) and a few others are the ones who brought the Maxwell asynchronous compute issue to light. He has worked in the semiconductor/GPU field before and thus has a lot of experience with it.

  9. #1889
    Warchief Zenny's Avatar
    10+ Year Old Account
    Join Date
    Oct 2011
    Location
    South Africa
    Posts
    2,171
    Quote Originally Posted by Remilia View Post
    Just FYI on calling it a dubious forum post: he (Mahigan) and a few others are the ones who brought the Maxwell asynchronous compute issue to light. He has worked in the semiconductor/GPU field before and thus has a lot of experience with it.
    That might very well be, but he presents zero evidence to back up his claims in this case. He is also blatantly going against what the developers themselves have said. Let's be real here: if this was really an issue, why does AMD promote Time Spy in its marketing material and not veto it (despite having full rights to do so)?

    Why is AMD not raising all hell if this benchmark is Nvidia-favored?

  10. #1890
    Quote Originally Posted by Zenny View Post
    No, they are Pascal; nothing about the chips is Maxwell. You can make up bullshit all you want, it doesn't change any of the facts. The stuff left out of the HPC part has got bugger all to do with gaming. Yes, Pascal is similar to Maxwell, I even put a quote there to indicate as much, but that doesn't make the consumer-level chips not a Pascal part. If you were instead claiming that Pascal is very similar to Maxwell I would not disagree with you at all, because that is correct, but you're basically stating that the GeForce 1080 is not Pascal, which is just silly.

    I mean, I'm the one actually linking stuff here to prove my point, all you're doing is basically the equivalent of waving your arms about and going "nu-uh".
    I don't think anyone can argue the fact that nVidia is calling it Pascal; therefore, it is Pascal, no matter what anyone else says. They are the ones who get to decide what the names of their own stuff are, and they call it Pascal, so it's Pascal.

    However, I think the point being made is that they were originally calling something else Pascal, then decided to delay its release and called this Pascal instead. It's not what they were originally calling Pascal, because that had things that current Pascal does not have. So when someone says it's not Pascal, they are not saying that it's not really Pascal, because again, nVidia calls it Pascal so that's what it is; they are saying that it's not the Pascal we were originally told we were going to get. It's Maxwell with some small changes that they started calling Pascal, and they pushed back the release of what they were originally calling Pascal.

  11. #1891
    Fluffy Kitten Remilia's Avatar
    10+ Year Old Account
    Join Date
    Apr 2011
    Location
    Avatar: Momoco
    Posts
    15,160
    Quote Originally Posted by Zenny View Post
    That might very well be, but he presents zero evidence to back up his claims in this case. He is also blatantly going against what the developers themselves have said. Let's be real here: if this was really an issue, why does AMD promote Time Spy in its marketing material and not veto it (despite having full rights to do so)?

    Why is AMD not raising all hell if this benchmark is Nvidia-favored?
    He's using Nvidia's own Pascal white paper to show how the GPU works and how software can work with it.

  12. #1892
    Frankly, Zenny ought to read the many posts by Mahigan. In very plain terms, half the shit Zenny brings up here is already addressed in one form or another by Mahigan.

    Most obviously, he explains that Maxwell should always incur a performance penalty with async, because enabling async will still queue up the tasks and cause the pauses that were the source of the stalling to begin with. The code doesn't care whether the feature is on or not; it's still the same damn code. The driver is irrelevant; only the path should matter in whether those "fences" stall the card.
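
    A minimal sketch of the fence pattern in question (D3D12; the device, queues and recorded command lists are assumed to exist already, and the names are illustrative):

        // A fence gates later graphics work on the results of compute work.
        // On hardware that cannot execute the two queues concurrently, the
        // work on either side of the fence simply serializes -- which is
        // exactly the stall being described.
        ComPtr<ID3D12Fence> fence;
        device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

        computeQueue->ExecuteCommandLists(1, computeLists);
        computeQueue->Signal(fence.Get(), 1);  // mark compute as finished
        directQueue->Wait(fence.Get(), 1);     // graphics waits on compute
        directQueue->ExecuteCommandLists(1, graphicsLists);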

  13. #1893
    Warchief Zenny's Avatar
    10+ Year Old Account
    Join Date
    Oct 2011
    Location
    South Africa
    Posts
    2,171
    Quote Originally Posted by Drunkenvalley View Post
    Frankly, Zenny ought to read the many posts by Mahigan. In very plain terms, half the shit Zenny brings up here is already addressed in one form or another by Mahigan.

    Most obviously, he explains that Maxwell should always incur a performance penalty with async, because enabling async will still queue up the tasks and cause the pauses that were the source of the stalling to begin with. The code doesn't care whether the feature is on or not; it's still the same damn code. The driver is irrelevant; only the path should matter in whether those "fences" stall the card.
    It's as if you don't read my posts at all. Maxwell does incur a performance penalty with async, hence it is disabled in the drivers by Nvidia themselves. Thus there is zero performance penalty, because no matter what Time Spy tries to do, the driver automatically disables it. But I'll reiterate: if Time Spy was done "incorrectly", why does the following happen:

    1. Time Spy gains roughly the same from enabling async compute as all other DX12/Vulkan implementations on the market, which is to say around the 10% mark, plus or minus a couple of percent.

    2. Maxwell gains nothing, as async is explicitly disabled in the drivers.

    3. AMD had input throughout the entire development process and could veto anything it felt did not work.

    4. AMD is using this in their own marketing materials.

    5. AMD has said nothing on the matter (other than approving it!).

    6. Mahigan has zero, let me repeat that, ZERO evidence that Time Spy does anything incorrectly. He blatantly says things that are incorrect and has nothing to back him up on that front.

    Gah, I've posted actual graphs showing that the gain from async in Time Spy is in line with the gains in Ashes and Doom. God, at least my posts actually have facts in them.

    - - - Updated - - -

    Quote Originally Posted by Remilia View Post
    He's using Nvidia's own Pascal white paper to show how the GPU works and how software can work with it.
    Which has got nothing to do with how Time Spy is doing Async Compute.

  14. #1894
    Quote Originally Posted by Zenny View Post
    Etc.
    I appreciate the fact that you accuse me of not reading, then you go ahead and don't read the part where I say we'd still see the performance loss regardless. Thanks for the obvious hypocrisy there.

  15. #1895
    Well, to all who are buying any Pascal GPU, read this:

    http://www.tweaktown.com/news/53121/...017/index.html

  16. #1896
    Fluffy Kitten Remilia's Avatar
    10+ Year Old Account
    Join Date
    Apr 2011
    Location
    Avatar: Momoco
    Posts
    15,160
    Quote Originally Posted by Zenny View Post
    Which has got nothing to do with how Time Spy is doing Async Compute.
    Maybe you should re-read that post and thread, then. He's explaining the method by which it's being used, and in turn not used.

  17. #1897
    Warchief Zenny's Avatar
    10+ Year Old Account
    Join Date
    Oct 2011
    Location
    South Africa
    Posts
    2,171
    Quote Originally Posted by Drunkenvalley View Post
    I appreciate the fact that you accuse me of not reading, then you go ahead and don't read the part where I say we'd still see the performance loss regardless. Thanks for the obvious hypocrisy there.
    No we won't:

    http://steamcommunity.com/app/223850...43951719980204

    Quote Originally Posted by Futuremark
    DX12 code cannot be written in a way that would specify something so low level. It literally is telling the driver;

    - Here is a bunch of work. Most of it is graphics. You know graphics!
    - This bit of work here is compute and has been sorted so you can run it asynchronously (with these caveats, observe this fence here, etc.) - just in case you happen to have spare execution units idling at some point. I mean, I doubt you do - there is a LOT of graphics to render - but squeeze this stuff in somewhere. We still need it.
    - Go ahead, do your best!

    Nothing in it prevents usage of async shaders, but this is purely the realm of the driver as to how it sorts the work out.

    Nothing forces it to do even one bit asynchronously if the driver can't do it. It is just a request.
    I've bolded the important bit, but please feel free to keep telling me I'm wrong.
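
    The same point in code terms, as a minimal sketch (variable names made up; this mirrors the pattern Futuremark describes, not their actual source):

        // The application only chooses which queue receives the compute
        // work: with "async" off it can route the same command lists
        // through the DIRECT queue, and with it on, through the COMPUTE
        // queue. Whether anything actually overlaps is the driver's call.
        ID3D12CommandQueue* target =
            asyncEnabled ? computeQueue : directQueue;
        target->ExecuteCommandLists(1, computeLists);
        // Either way it's a request, never a guarantee of concurrency.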

    Quote Originally Posted by Remilia View Post
    Maybe you should re-read that post and thread, then. He's explaining the method by which it's being used, and in turn not used.
    He is wrong. The developer contradicted him on that point. He is free to offer up some proof of his claim, though. He even claims that Futuremark chose Nvidia's method over AMD's, which is once again completely wrong.

    - - - Updated - - -

    Quote Originally Posted by ABEEnr2 View Post
    Well, to all who are buying any Pascal GPU, read this:

    http://www.tweaktown.com/news/53121/...017/index.html
    I bought the GeForce 1080 because I needed a high-end GPU a month ago, not next year. If AMD had bothered to have something with better performance than the GeForce 1080, I would have bought that, just like I have in the past.

  18. #1898
    Fluffy Kitten Remilia's Avatar
    10+ Year Old Account
    Join Date
    Apr 2011
    Location
    Avatar: Momoco
    Posts
    15,160
    ... The devs admitted that their implementation is extremely rudimentary for the sake of compatibility. It's been noted that there are different ways to approach AC, and you don't seem to offer an alternative. In Ashes, for example, Pascal's performance remains unchanged when AC is on. And honestly, these synthetics are pointless...

  19. #1899
    Quote Originally Posted by Zenny View Post
    Okay, let's just ignore Kollock from Oxide, devs of Ashes of the Singularity, saying the following (in regards to DX12 fences):

    AFAIK the GPU is required to flush before continuing. You can also see why MS made this setup - because now dozens of applications could actually be giving work to the GPU, and theoretically the OS could schedule them all. Windows is more than a game OS, after all.
    Overall, he links a few statements from Kollock, said developer, such as the above, from this post: http://www.overclock.net/t/1592431/a...#post_24970191

    And also he's got fun stuff here. http://www.overclock.net/t/1592431/a...#post_24969132

  20. #1900
    Warchief Zenny's Avatar
    10+ Year Old Account
    Join Date
    Oct 2011
    Location
    South Africa
    Posts
    2,171
    Quote Originally Posted by Drunkenvalley View Post
    Okay, let's just ignore Kollock from Oxide, devs of Ashes of the Singularity, saying the following (in regards to DX12 fences):



    Overall, he links a few statements from Kollock, said developer. Such as the above from this post. http://www.overclock.net/t/1592431/a...#post_24970191

    And also he's got fun stuff here. http://www.overclock.net/t/1592431/a...#post_24969132
    What has that got to do with async being off or on in the driver? If Time Spy has async set to on, it will attempt to use it unless the driver tells it not to, in which case it sets it to off, just as if you had run Time Spy with it off.
