Thread: GTX 1080

  1. #1061
    Deleted
    Quote Originally Posted by Bigvizz View Post
    Random question: is the single 8-pin connector limiting the OC ability of the Founders Edition cards? One 8-pin connector is what, 150 watts (I might be wrong on this)? I have a feeling partner boards are going to have 2x 6-pin, 8-pin + 6-pin, or 2x 8-pin connectors for more power headroom.
    It is 150 W, plus 75 W from the PCIe slot, making 225 W. But it could be limiting, yeah.

    I think it was both temps and voltage limiting the boost clocks and OCs on the FE cards.

  2. #1062
    Quote Originally Posted by Bigvizz View Post
    Random question: is the single 8-pin connector limiting the OC ability of the Founders Edition cards? One 8-pin connector is what, 150 watts (I might be wrong on this)? I have a feeling partner boards are going to have 2x 6-pin, 8-pin + 6-pin, or 2x 8-pin connectors for more power headroom.
    Yes, it is limiting; the card is hitting 220 W. The cooling is a limit as well, unless the fan runs at 100% speed, which is loud. I don't recommend the stock cooler for any high-end GPU (with a few exceptions like the Fury X); just wait for proper partner cards.

  3. #1063
    Quote Originally Posted by Zeara View Post
    It is 150 W, plus 75 W from the PCIe slot, making 225 W. But it could be limiting, yeah.

    I think it was both temps and voltage limiting the boost clocks and OCs on the FE cards.
    Yeah, I figured. The 1080 has a TDP of 180 watts, which doesn't leave a lot of headroom.
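    For anyone who wants to sanity-check those numbers, here's a quick back-of-the-envelope calculation (a rough sketch assuming the spec limits quoted above: 150 W per 8-pin, 75 W from the slot, 180 W TDP):

    ```python
    # Back-of-the-envelope power headroom for a reference GTX 1080.
    # Values are the spec limits quoted in this thread; real board limits may differ.
    PCIE_SLOT_W = 75    # PCIe x16 slot limit
    EIGHT_PIN_W = 150   # single 8-pin PCIe connector limit
    TDP_W = 180         # GTX 1080 rated TDP

    available = PCIE_SLOT_W + EIGHT_PIN_W  # 225 W total deliverable
    headroom = available - TDP_W           # 45 W above stock TDP
    print(f"Available: {available} W, headroom: {headroom} W "
          f"({headroom / TDP_W:.0%} over stock TDP)")
    # -> Available: 225 W, headroom: 45 W (25% over stock TDP)
    ```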

  4. #1064
    Quote Originally Posted by Bigvizz View Post
    Random question: is the single 8-pin connector limiting the OC ability of the Founders Edition cards? One 8-pin connector is what, 150 watts (I might be wrong on this)? I have a feeling partner boards are going to have 2x 6-pin, 8-pin + 6-pin, or 2x 8-pin connectors for more power headroom.
    Partner boards are superior, like with every other release.

  5. #1065
    Quote Originally Posted by Life-Binder View Post
    Benchmarks won't change anything about G-Sync/FreeSync, for example.

    - - - Updated - - -

    But Lathais claimed superiority right now, so he must have been basing that on existing AMD cards.
    No, I claimed that more devs are using them because of the superiority they had in the last generation. With no major improvements to Pascal's architecture, while we know AMD DID make architectural improvements with Polaris, it stands to reason that Polaris will be better at async compute than Pascal. I wish I could find that article again; if I have some time today I'll look for it.

  6. #1066
    Sheevah
    Quote Originally Posted by Lathais View Post

    I wish I could find the article where I read it again, but I swear I read that more devs are signing up with AMD due to their superior handling of DX12, async compute and VR.
    The only thing I can find even close to that is a press release AMD put out back in March. It talks about DX12, async compute, and touches on VR while talking about connections made in the industry, so it ticks virtually all the boxes from your description except the part where it specifically says devs prefer them over Nvidia.

    Also... it is a press release, so it's all rainbows and butterflies from a company talking about itself.

    http://www.techpowerup.com/221160/am...r-partnerships

  7. #1067
    Here, this article explains pretty well how nVidia is handling async compute.
    http://wccftech.com/nvidia-gtx-1080-...pute-detailed/

    They still do not really have async compute, but they are able to dynamically adjust things so devs don't have to guess. So it's still really just software emulation, just better than Maxwell could manage because Pascal actually has a scheduler. It's still not the same thing as actually doing two things at once, the way AMD can and already does. AMD added a scheduler in the 7xxx series and then made further improvements in the following generations. nVidia is only now getting a scheduler in and will have to improve it in future generations. AMD just has a leg up here.
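    To put the difference in toy-model terms (purely illustrative numbers and a deliberately simplified model, not how either driver actually schedules work): software scheduling pays serialization plus context-switch overhead, while hardware async compute can overlap the two queues.

    ```python
    # Toy model: serializing graphics + compute with a context switch
    # vs. overlapping them on hardware queues. Numbers are invented.
    def serialized_ms(gfx_ms: float, compute_ms: float, switch_ms: float) -> float:
        """Software-scheduled: run one queue, context-switch, run the other."""
        return gfx_ms + compute_ms + switch_ms

    def overlapped_ms(gfx_ms: float, compute_ms: float) -> float:
        """Hardware async compute, ideal case: both queues run concurrently."""
        return max(gfx_ms, compute_ms)

    gfx, comp = 10.0, 4.0  # hypothetical per-frame workloads in milliseconds
    print(f"serialized: {serialized_ms(gfx, comp, switch_ms=0.5):.1f} ms")  # 14.5 ms
    print(f"overlapped: {overlapped_ms(gfx, comp):.1f} ms")                 # 10.0 ms
    ```

    In practice the overlap is never perfect, since both queues share shader resources, but the direction of the gap is the point.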

    - - - Updated - - -

    Quote Originally Posted by Sheevah View Post
    The only thing I can find even close to that is a press release AMD put out back in March. It talks about DX12, async compute, and touches on VR while talking about connections made in the industry, so it ticks virtually all the boxes from your description except the part where it specifically says devs prefer them over Nvidia.

    Also... it is a press release, so it's all rainbows and butterflies from a company talking about itself.

    http://www.techpowerup.com/221160/am...r-partnerships
    That's not the same article I read, though it does sound like it's based on the same or similar information.

  8. #1068
    Deleted
    Quote Originally Posted by Gaidax View Post
    The logic is simple: I am going for the solution that gives the best support there is. The only thing Nvidia has failed at so far is a single technology, and it is only reflected in one game so far.

    But the question is whether this issue remains with the 1080, because if not, then why should I ever bother with AMD besides price?

    To support the cause? Are you joking? I'm not some rebel here. I want to enjoy games, and if one bloody game does not play well on Nvidia, against tons of games that have issues on AMD because of whatever-Works, then I will buy Nvidia.
    I agree. But I have to ask: is this still the case? AMD definitely left a sour taste in my mouth, which resulted in my avoiding the brand as a whole. Nvidia, on the other hand, never failed to deliver what was promised.

    My preferences are obvious, but I will not lose objectivity. If AMD delivers qualitatively superior cards (hardware and software), I'll jump ship without blinking.

  9. #1069
    Hardware alone doesn't cut it, though:

    https://twitter.com/id_aa_carmack/st...81449671495680


    And for me it's so much harder to decide, because I'm also agonizing over monitors and G-Sync/FreeSync at the same time.

  10. #1070
    Quote Originally Posted by Life-Binder View Post
    Hardware alone doesn't cut it, though:

    https://twitter.com/id_aa_carmack/st...81449671495680
    Hence why we have low-level APIs now. Game devs have much more control (and responsibility) over how games work on different cards and platforms. Or, in some cases, how they work poorly on some platforms because of GameWorks.

  11. #1071
    I bought a G-Sync monitor a couple of months ago, so now I only have to puzzle over which Nvidia card to buy. I don't really care which brand is fastest, only which card does what I want and need, as it should always be.

  12. #1072
    Vash The Stampede
    Quote Originally Posted by Lathais View Post
    Two, maybe three, but he said five. Also, this is kind of a unique situation, with DX12 coming out and being adopted so quickly. When DX11 came out they were ready for it; they seem to have just ignored DX12 and async compute until it was too late, so this situation is a bit different.
    Plenty of people are still using GTX 680s, and that's a four-year-old card. You can easily stretch it to five, but it won't be playing games with AA and ultra settings. Then again, we are playing games ported from the PS4, so why emphasize superior graphics performance when our games revolve around console hardware?

    Quote Originally Posted by Gaidax View Post
    If anything I'd be more worried for AMD; with the amount of "Works" bullshit Nvidia is pushing everywhere, buying AMD is a far greater long-term risk.
    The GameWorks program is arguably a failure. Games like Batman and Assassin's Creed have had poor sales due to their heavy use of GameWorks. It has also gotten to the point where benchmarks are disabling these features because of their controversial favouritism towards Nvidia. Developers are going to limit how much GameWorks can affect their games.

    Nvidia is not the first to try and do things like this. Look at Creative with EAX. Whatever happened to EAX?

    Quote Originally Posted by Life-Binder View Post
    What superiority does AMD have over Pascal in DX12 and VR?
    VR? Nothing we know of. DX12? Async Compute, obviously.

    Quote Originally Posted by Lathais View Post
    5 years? With DX12 and ASync Compute being such a big thing over the next 5 years? I don't personally think the 1080 would be a good choice for that either. If you want to "future proof," which we all know is not really possible anyway, something that does not fully support current technologies is not a great idea.
    As big as DX12 is made out to be, it is just a performance feature. DX11 and OpenGL aren't going anywhere soon. Developers will release games that use both DX11 and DX12 for years to come. Even then, DX12 is just a performance increase, or in Nvidia's case, a performance decrease.

    BTW, this isn't the first time Nvidia screwed up an API with hardware. Look at the fiasco with DX9 and the GeForce FX cards: they can still play DX9 games, just at horrible frame rates. Nvidia's "The Way It's Meant to Be Played" program was like GameWorks, in that developers got help from Nvidia. Most DX9 games were actually DX8.1 games with a few DX9 features sprinkled in that didn't negatively affect the FX cards. The difference with the 1080 is that it can still outperform the Fury X, even with broken async compute.

  13. #1073
    Game devs have much more control (and responsibility) over how games work on different cards and platforms.
    Devs are becoming pretty bad/lazy lately, though, so this doesn't inspire me with much confidence either.

    Or maybe it's getting harder to code new games all the time.

    - - - Updated - - -

    DX12? Async Compute, obviously.
    He mentioned async separately, so obviously it isn't included here.

    That's why I asked; I don't see what AMD has better for DX12 besides async.

  14. #1074
    Quote Originally Posted by Life-Binder View Post
    Devs are becoming pretty bad/lazy lately, though, so this doesn't inspire me with much confidence either.

    Or maybe it's getting harder to code new games all the time.
    It's not up to game devs but engine developers. There are a ton of games but only a few engines in comparison. If Unity has full support for all DX12/Vulkan features, then a dev using the engine doesn't have to add them; they're already there in the engine.
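    That's the idea in a nutshell: games code against the engine's renderer interface, and the engine supplies the API backends. A minimal sketch (hypothetical class names, not Unity's or any real engine's API):

    ```python
    # Sketch: the engine owns the rendering backend, so game code is
    # untouched when a DX12 path is added. All names here are hypothetical.
    class Renderer:
        def draw(self, scene: str) -> None:
            raise NotImplementedError

    class DX11Renderer(Renderer):
        def draw(self, scene: str) -> None:
            print(f"DX11 path: drawing {scene}")

    class DX12Renderer(Renderer):
        def draw(self, scene: str) -> None:
            print(f"DX12 path (async compute capable): drawing {scene}")

    def make_renderer(api: str) -> Renderer:
        return {"dx11": DX11Renderer, "dx12": DX12Renderer}[api]()

    # The game loop is identical regardless of backend:
    for api in ("dx11", "dx12"):
        make_renderer(api).draw("level_01")
    ```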

  15. #1075
    Even then, DX12 is just a performance increase, or in Nvidia's case, a performance decrease.
    Not on Pascal, though, as far as I can see.

    In no test did the 1080 regress on DX12.

  16. #1076
    Quote Originally Posted by Life-Binder View Post
    In no test did the 1080 regress on DX12.
    But it didn't increase either. It performed slightly worse on DX12 than DX11 in every benchmark I saw, and by slightly I mean 0.1-1%.
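    To put that in fps terms (invented numbers, just to show how small the swing is):

    ```python
    # A "0.1-1%" DX12 regression in concrete fps terms (illustrative only).
    dx11_fps, dx12_fps = 100.0, 99.3
    delta = (dx12_fps - dx11_fps) / dx11_fps
    print(f"DX12 vs DX11: {delta:+.1%}")  # -> DX12 vs DX11: -0.7%
    ```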

  17. #1077
    Evildeffy
    AMD has a far superior VR API to nVidia's, owing to the very hardware design AMD employs.

    If you look at VR performance you'll also see AMD has almost no frame time variance (the thing that makes you nauseous).
    nVidia, on the other hand, just throws FPS at it and hopes it works.

    Which it doesn't: you want a fixed frame rate for VR, no more and no less, as both too many and too few frames can make you nauseous.
    The low-level access AMD's graphics cards offer lets VR developers adhere to set standards with fantastic and consistent performance.

    This is something nVidia's hardware is simply incapable of, at least Maxwell.
    Pascal MAY have improved on this, but until it's fully investigated and tested for frame time variance (FTV) we won't know.
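    To make the frame time variance point concrete, here's a quick illustration (the frame time traces are invented): two cards can post nearly the same average fps while one of them blows the 90 Hz deadline on its worst frames.

    ```python
    # Why frame time variance matters more than average fps for VR.
    # Both invented traces average ~11.1 ms (~90 fps); only one holds the
    # 11.1 ms deadline of a 90 Hz headset on every frame.
    from statistics import mean, pstdev

    steady = [11.1, 11.2, 11.1, 11.0, 11.2, 11.1]  # consistent pacing
    spiky = [8.0, 15.0, 9.0, 14.5, 8.5, 11.7]      # same average, big swings

    for name, trace in (("steady", steady), ("spiky", spiky)):
        print(f"{name}: avg {mean(trace):.1f} ms, "
              f"stddev {pstdev(trace):.2f} ms, worst {max(trace):.1f} ms")
    # The spiky trace misses the deadline on its worst frames, which is
    # what produces judder and nausea even though the averages match.
    ```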

    And let me put this out there again as a sidenote:
    no matter how much you tune a driver for async compute, it WILL always be worse than a full hardware implementation.
    It adds something that is "deadly" for VR: increased lag from context switching, on top of simply tanking performance.

    But you also seem to forget that DX12 doesn't throw away DX11 subroutines; it still uses previously established subroutines even if the game isn't developed for DX11.
    AMD has traditionally had higher (and lower) functions available through DX11 Tier 3 resources, which NONE of nVidia's cards had and still don't.
    AMD still has these resource tiers available and can do more with them than nVidia.

    This isn't to say nVidia is shit, far from it, but there are aspects where each does things differently.
    In this case nVidia is simply brute-forcing everything right now, whereas AMD is actually doing things as they are supposed to be handled.

    So we'll see if Pascal's added hardware, which looks a good deal like GCN 1.0 (HD 7900 series), can improve on its deficits in async compute and VR.
    From every review site and benchmark out there so far the answer is "no", because nVidia is still just brute-forcing things. We'll see if there's a difference IF and WHEN nVidia finally ships the "async compute driver" it was supposedly working on and about to release nine bloody months ago.

  18. #1078
    Zenny
    So just under 2.1 GHz is possible at 64°C:

    http://www.hardocp.com/article/2016/...king_preview/1

    You should easily be able to hit a 2 GHz core clock with around 11 GHz (effective) memory.
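    For what the memory OC is worth (a rough sketch assuming the 1080's 256-bit GDDR5X bus, reading "11 GHz" as the effective per-pin data rate; stock is 10 Gbps):

    ```python
    # Memory bandwidth gained by overclocking GDDR5X on a 256-bit bus.
    BUS_WIDTH_BITS = 256  # GTX 1080 memory bus width

    def bandwidth_gb_s(gbps_per_pin: float) -> float:
        """Effective data rate per pin (Gbps) -> total bandwidth (GB/s)."""
        return gbps_per_pin * BUS_WIDTH_BITS / 8

    stock, oc = 10.0, 11.0
    print(f"stock: {bandwidth_gb_s(stock):.0f} GB/s")  # 320 GB/s
    print(f"OC:    {bandwidth_gb_s(oc):.0f} GB/s")     # 352 GB/s
    print(f"gain:  {(oc - stock) / stock:.0%}")        # 10%
    ```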

  19. #1079
    Quote Originally Posted by Dukenukemx View Post
    BTW, this isn't the first time Nvidia screwed up an API with hardware. Look at the fiasco with DX9 and the GeForce FX cards: they can still play DX9 games, just at horrible frame rates. Nvidia's "The Way It's Meant to Be Played" program was like GameWorks, in that developers got help from Nvidia. Most DX9 games were actually DX8.1 games with a few DX9 features sprinkled in that didn't negatively affect the FX cards. The difference with the 1080 is that it can still outperform the Fury X, even with broken async compute.
    You actually did a pretty good job of explaining my point. If those cards are still being used in four years, which they likely will be, and a game comes out that uses DX12 and async compute, also with a DX11 mode, which card will run the game better overall? Sure, the card that doesn't handle it well will still be able to play it, but the card that does support it will play it better.

    Also, who cares that the 1080 can outperform the Fury X? The Fury X is not the 1080's competition.

  20. #1080
    Yeah, because there is no competition for the 1080 right now, and there might not be for some time.


    If those cards are still being used in four years, which they likely will be
    Not by a gamer.

    If I buy a 1070 now, I expect to upgrade in late 2018 or early 2019 at the latest.

    Same for any AMD equivalent.
