Thread: Gtx 1080

  1. #1381
    Yeah, people tell me "omg why do you care about async" and I'm like, bitch, are you for real? Async compute is awesome. It's telling that it's less an "AMD fanboy" issue and more of an "nVidia fanboy" issue, when such an awesome and core feature gets glossed over by people stuck on "nVidia doesn't have it so it doesn't matter" bullshit.

  2. #1382
    Scarab Lord Wries's Avatar
    10+ Year Old Account
    Join Date
    Jul 2009
    Location
    Stockholm, Sweden
    Posts
    4,127
    Quote Originally Posted by Artorius View Post
    GP104 looks a lot more like GM200 than GP100. In fact it's almost the same thing apart from one single difference pointed out by @Remilia.

    We're calling it Maxwell 16nm edition because it looks much more like Maxwell than it does like the other Pascal chip, the GP100. But well, whatever I guess.
    How have you determined that GP104 is GM200? Because you've looked at the silicon in a picture? I'm more inclined to believe it's much different, given the notable IPC difference. But let's wait and see what Ryan Smith at Anandtech has to say about it in depth. He's said he's taking his time with this article, so unfortunately it's not out yet.

    And GP100 is full of transistors dedicated to double-precision performance. They tried having that in a "consumer" card with the first Titan, and it was hard for a home consumer like me to really find much use for it. Instead it was used by industries to get by without buying Tesla cards, so I get how, from a business standpoint, it makes sense to split the two and not include a part of the silicon that's not useful in a gaming card.

  3. #1383
    Quote Originally Posted by Wries View Post
    How have you determined that GP104 is GM200? Because you've looked at the silicon in a picture? I'm more inclined to believe it's much different, given the notable IPC difference. But let's wait and see what Ryan Smith at Anandtech has to say about it in depth. He's said he's taking his time with this article, so unfortunately it's not out yet.

    And GP100 is full of transistors dedicated to double-precision performance. They tried having that in a "consumer" card with the first Titan, and it was hard for a home consumer like me to really find much use for it. Instead it was used by industries to get by without buying Tesla cards, so I get how, from a business standpoint, it makes sense to split the two and not include a part of the silicon that's not useful in a gaming card.
    The notable IPC difference is because of the move from 28nm to 16nm. So other than one minor change, it's the same thing, but shrunk down. Being shrunk down allows for higher clock speeds which increases the IPC. Pretty simple really.

  4. #1384
    Scarab Lord Wries's Avatar
    10+ Year Old Account
    Join Date
    Jul 2009
    Location
    Stockholm, Sweden
    Posts
    4,127
    Quote Originally Posted by Lathais View Post
    The notable IPC difference is because of the move from 28nm to 16nm. So other than one minor change, it's the same thing, but shrunk down. Being shrunk down allows for higher clock speeds which increases the IPC. Pretty simple really.
    You've completely misunderstood what IPC means: instructions (carried out) per clock (cycle). IPC is obviously lower here, and it seems to be the largest drop since Fermi to Kepler. Lower IPC isn't bad per se; as we can see, they've increased the clock speed more than enough to compensate. But in your answer you've also missed what a die shrink means and what it actually affects. It's honestly a lot to cover and I wouldn't be the best person to do it*, so we're not going to go there.

    This is why I'd rather wait for Ryan's analysis than the hobbyist theories in forum threads.

    *As I'm also, in a sense, a hobbyist; it's not my profession to design chips or anything close to it.
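    To put the distinction in concrete terms: sustained throughput is the product of IPC and clock speed, so a chip can lose IPC and still come out ahead on raw performance. A minimal sketch, using made-up numbers rather than measured values for either chip:

```python
# Throughput = instructions per clock * clocks per second.
# The IPC and clock figures below are invented for illustration only;
# they are not measured values for Maxwell or Pascal.
def throughput(ipc, clock_ghz):
    """Billions of instructions retired per second."""
    return ipc * clock_ghz

maxwell_like = throughput(ipc=1.00, clock_ghz=1.2)  # baseline: higher IPC, lower clock
pascal_like = throughput(ipc=0.90, clock_ghz=1.7)   # lower IPC, much higher clock

# The clock gain more than compensates for the IPC drop.
print(pascal_like > maxwell_like)  # True
```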

  5. #1385
    Quote Originally Posted by Shinzai View Post
    Feel free to explain what real gains there are aside from pure speed in the GP104 architecture. All the benchmarks show are flat gains. There's no benefits to running DX12 on the hardware, there are no real changes in performance for DX12 games at all shown.
    We rip the Satellite Shot 2 data from Ashes, which shoves large batches down the pipe and chokes components. This is somewhat of a worst case scenario for the GPU. The above chart represents raw FPS output (averaged) for Dx11 vs. Dx12 on each card, the below chart shows the millisecond latency (frametimes) on each API, and the next one shows the percent change from Dx11 to Dx12 when it comes to frametimes.

    At 1080p/high, the GTX 1080 crushes the other cards underfoot. NVidia's asynchronous compute advancements have clearly worked out (at least, in Ashes) and are producing gap-widening gains compared against the previous architecture. AMD's Fury X and R9 390X are still the most impressive when it comes to gains, though. These two cards are choking on some sort of Dx11 optimization issue – something that nVidia's good at, when it comes to circumventing bottlenecks with drivers – and are limited by Dx11 in the Satellite Shot 2 heavy benchmark. With Dx12, the cards can unleash their full potential and nearly double framerates – but they're still behind the GTX 1080.

    In contrast to this, the GTX 980 Maxwell card ranked high among Dx11 performers, but falls to the bottom of the chart for Dx12.

    The GTX 1080 has made obvious improvements to Dx12 optimization and framerate.

  6. #1386
    Scarab Lord Master Guns's Avatar
    15+ Year Old Account
    Join Date
    Oct 2008
    Location
    California
    Posts
    4,587
    Quote Originally Posted by Noctifer616 View Post
    I would say the same unless he plans to game at 4k or 1440p at higher FPS anytime soon. The 1080 and 1070 price will probably drop when Vega and GP102 come out, it's just too expensive right now, and the FE isn't worth the price considering the thermal issues and board power limit.
    Is that a joke? A 980 will annihilate any 1440 game lol?

    Check out the directors cut of my project SCHISM, a festival winning short film
    https://www.youtube.com/watch?v=DiHNTS-vyHE

  7. #1387
    Quote Originally Posted by Master Guns View Post
    Is that a joke? A 980 will annihilate any 1440 game lol?
    I meant he shouldn't upgrade unless he plans to play at 1440p or higher, which would require one.

  8. #1388
    Fluffy Kitten Remilia's Avatar
    10+ Year Old Account
    Join Date
    Apr 2011
    Location
    Avatar: Momoco
    Posts
    15,160
    Quote Originally Posted by Wries View Post
    How have you determined that GP104 is GM200? Because you've looked at the silicon in a picture? I'm more inclined to believe it's much different, given the notable IPC difference. But let's wait and see what Ryan Smith at Anandtech has to say about it in depth. He's said he's taking his time with this article, so unfortunately it's not out yet.

    And GP100 is full of transistors dedicated to double-precision performance. They tried having that in a "consumer" card with the first Titan, and it was hard for a home consumer like me to really find much use for it. Instead it was used by industries to get by without buying Tesla cards, so I get how, from a business standpoint, it makes sense to split the two and not include a part of the silicon that's not useful in a gaming card.
    Each SM in Maxwell and GP104 is the same. The difference is that each GPC (a cluster of SMs) went from 4 SMs to 5, which means the shared resources are split further, so there's less resource per SM, and that can very well decrease performance. Going from 28nm planar to 16nm FinFET should have increased performance a little, but IPC decreased.
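    The arithmetic behind that resource split can be sketched like this; the pool size is hypothetical, since Nvidia doesn't publish exact figures for every shared structure:

```python
# Hypothetical sketch: assume a GPC has a fixed pool of shared resources
# divided among its SMs. The pool size below is invented for illustration;
# only the 4-vs-5 SMs-per-GPC counts come from the discussion above.
SHARED_UNITS_PER_GPC = 20

per_sm_maxwell = SHARED_UNITS_PER_GPC / 4  # Maxwell: 4 SMs per GPC
per_sm_gp104 = SHARED_UNITS_PER_GPC / 5    # GP104: 5 SMs per GPC

# Each GP104 SM gets a smaller slice of the same shared pool.
print(per_sm_maxwell, per_sm_gp104)  # 5.0 4.0
```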

  9. #1389
    Quote Originally Posted by Master Guns View Post
    Is that a joke? A 980 will annihilate any 1440 game lol?
    http://www.techpowerup.com/reviews/N...X_1080/22.html

    I wouldn't call 43.2 FPS "annihilating" anything. Heck, even the 980 Ti can't average above 60 FPS at 1440p in some games. Unless you don't include The Witcher 3 in "any 1440 game." In which case:

    http://www.techpowerup.com/reviews/N...TX_1080/9.html
    http://www.techpowerup.com/reviews/N...X_1080/13.html
    http://www.techpowerup.com/reviews/N...X_1080/14.html
    http://www.techpowerup.com/reviews/N...X_1080/16.html
    http://www.techpowerup.com/reviews/N...X_1080/18.html
    http://www.techpowerup.com/reviews/N...X_1080/21.html

    There are 6 more games below 50 FPS at 1440p. Not exactly annihilating "any 1440p game" now, are we?

  10. #1390
    Fluffy Kitten Remilia's Avatar
    10+ Year Old Account
    Join Date
    Apr 2011
    Location
    Avatar: Momoco
    Posts
    15,160
    Quote Originally Posted by Life-Binder View Post
    . .
    Nvidia cards have async compute off by default in Ashes, and their drivers also have it off. The fake async demo they showed at their presentation doesn't help their case either. The comparison between DX11 and 12 is not what matters. What matters is AC on vs. off within DX12.

  11. #1391
    The comparison between DX11 and 12 is not what matters
    uh-huh, sure, if you say so

  12. #1392
    Quote Originally Posted by Life-Binder View Post
    uh-huh, sure, if you say so
    Your point being? If you're trying to prove a point, maybe you should pick one that isn't very obviously in AMD's favor, and we know you can't have that.

  13. #1393
    Fluffy Kitten Remilia's Avatar
    10+ Year Old Account
    Join Date
    Apr 2011
    Location
    Avatar: Momoco
    Posts
    15,160
    Quote Originally Posted by Eroginous View Post
    No, I think he's referring to the highest-end productivity GPU, like Quadro or FirePro. Like, they can't call it Pascal because it's using a different GPU than they will end up putting into the Pascal productivity GPU. He's somewhat right in that they're not the same (identical), but they are the same architecture, which makes him incorrect.

    So yeah.
    No, they're completely different. Pascal P100 has a different SM layout and different resources: 64 CUDA cores and 32 DP cores per SM, and 10 SMs per GPC. That's just the most basic surface-level look. Nvidia doesn't document their stuff for the public all that well compared to AMD and Intel, which is sad, but even a surface look can tell you P100 and GP104 are different despite both being named Pascal.

  14. #1394
    Point is, saying "The comparison between DX11 and 12 is not what matters" is ridiculous.


    But of course AMD fans don't see anything beyond how good the Fury X does with async in AotS.

    That's what all of DX12 amounts to for them.

  15. #1395
    Quote Originally Posted by Remilia View Post
    No, they're completely different. Pascal P100 has a different SM layout and different resources: 64 CUDA cores and 32 DP cores per SM, and 10 SMs per GPC. That's just the most basic surface-level look. Nvidia doesn't document their stuff for the public all that well compared to AMD and Intel, which is sad, but even a surface look can tell you P100 and GP104 are different despite both being named Pascal.
    I really think what his argument comes down to is: nVidia owns it and nVidia called it Pascal, so it's called Pascal. So technically he is correct. I think we all get what you're saying, but technically, even if in the real world it's 16nm Maxwell, it's still Pascal because nVidia said so. Funny how different it looks from P100; they really shouldn't be called the same thing, but nVidia says it's the same thing, so it must be!

  16. #1396
    Scarab Lord Wries's Avatar
    10+ Year Old Account
    Join Date
    Jul 2009
    Location
    Stockholm, Sweden
    Posts
    4,127
    Quote Originally Posted by Life-Binder View Post
    . "We rip the Satellite Shot 2 data from Ashes" .
    Curious as to where you found this review? The question apparently arose whether the reviewers checked if async was on or not. But googling phrases from this quoted review only shows this very thread as a source.
    Last edited by Wries; 2016-05-26 at 08:40 PM. Reason: actually misread quote in quote. other guy was pretty much arguing the naming

  17. #1397
    Quote Originally Posted by Life-Binder View Post
    ...
    Ah, you're just cherry-picking random words so you can make a point that isn't their point. You're strawmanning them and then pretending to have something to roll with.

    But you don't. The DX11-to-DX12 benchmarks don't matter because they only show a direct performance increase. It doesn't really utilize DX12; it only makes a faux pass at it. Choosing to explicitly misunderstand him is your own problem.

  18. #1398
    Dreadlord GoKs's Avatar
    10+ Year Old Account
    Join Date
    May 2013
    Location
    South Africa
    Posts
    869
    Quote Originally Posted by Lathais View Post
    I really think what his argument comes down to is: nVidia owns it and nVidia called it Pascal, so it's called Pascal. So technically he is correct. I think we all get what you're saying, but technically, even if in the real world it's 16nm Maxwell, it's still Pascal because nVidia said so. Funny how different it looks from P100; they really shouldn't be called the same thing, but nVidia says it's the same thing, so it must be!
    Precisely! Thank you =P
    Last edited by GoKs; 2016-05-26 at 09:11 PM. Reason: oops

  19. #1399
    Old God Vash The Stampede's Avatar
    10+ Year Old Account
    Join Date
    Sep 2010
    Location
    Better part of NJ
    Posts
    10,939
    Quote Originally Posted by Life-Binder View Post
    Point is, saying "The comparison between DX11 and 12 is not what matters" is ridiculous.


    But of course AMD fans don't see anything beyond how good the Fury X does with async in AotS.

    That's what all of DX12 amounts to for them.
    Async compute is part of the DX12 spec. Nvidia didn't implement that feature properly, and they still haven't with the 1080, and likely not with the 1070 either. Async compute serves only one purpose, and that's to increase performance. Developers can choose not to implement this feature, but it's one big feature to omit, regardless of what some developers say. The problem is: without async compute, what is DX12? It's essentially DX11, and we know DX12 HLSL shader code can reuse the same code from DX11. Vulkan can also use HLSL as well as GLSL.

    No async compute might as well be no DX12.
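    As a loose CPU-side analogy (this is not GPU code; it only illustrates why overlapping independent work, which is what async compute enables on the GPU, increases throughput):

```python
# Two independent jobs run serially vs. overlapped. On a GPU, async
# compute similarly lets a compute queue fill gaps left by graphics work.
import time
from concurrent.futures import ThreadPoolExecutor

def graphics_pass():
    time.sleep(0.2)  # stand-in for rendering work

def compute_pass():
    time.sleep(0.2)  # stand-in for a compute job (e.g. post-processing)

# Serial: one pass after the other (~0.4 s total).
start = time.perf_counter()
graphics_pass()
compute_pass()
serial = time.perf_counter() - start

# Overlapped: both passes in flight at once (~0.2 s total).
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=2) as pool:
    pool.submit(graphics_pass)
    pool.submit(compute_pass)
overlapped = time.perf_counter() - start

print(overlapped < serial)  # True: overlapping hides latency
```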

  20. #1400
    The 480X is less than Fury X performance; there's a reviewer who leaked it, just google it. It also shows that 480Xs in Crossfire still don't beat the 1080. Either way, I'm waiting for a 1080 Ti, or if by some miracle Vega is better, I'll get that.
