  1. #21
    The Lightbringer Evildeffy's Avatar
    15+ Year Old Account
    Join Date
    Jan 2009
    Location
    Nieuwegein, Netherlands
    Posts
    3,772
    Quote Originally Posted by ZenX View Post
    I am aware, and as I work on Neural Networks utilizing (basically) tensor calculus, it would be of great use to me. Now the question is whether the 100 TFPs is for the tensor cores only, or did nVidia go ahead and basically derive the result as an aggregate of their FP16, FP32 and FP64 compute units with all of them operating at FP16, etc. (Not sensible, but marketing people love to pump the exaggeration.) Although, considering nvidia-docker, it might actually be a possibility as well.
    Was just making sure you weren't confusing it with TeraFlops.

    As far as nVidia's rating... till you get to play with one or someone you know and benches it for you... you'll just have to guess.
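
    For reference, here's a rough back-of-the-envelope sketch of where the headline number comes from, using nVidia's published Titan V specs (5120 CUDA cores, 640 tensor cores); the sustained boost clock is an assumption, but it suggests the ~110 TFLOPS figure falls out of the tensor cores alone rather than an aggregate of the FP16/FP32/FP64 units:

    # Rough Titan V throughput estimate (core counts as published by nVidia;
    # the sustained boost clock is an assumption and varies in practice).
    boost_clock_hz = 1455e6      # ~1455 MHz advertised boost
    cuda_cores = 5120
    tensor_cores = 640

    # Each tensor core does a 4x4x4 FP16 matrix multiply-accumulate per clock:
    # 64 MACs = 128 floating-point ops per clock.
    tensor_flops = tensor_cores * 128 * boost_clock_hz   # ~119 TFLOPS
    fp32_flops = cuda_cores * 2 * boost_clock_hz         # ~14.9 TFLOPS (FMA = 2 ops)
    fp16_flops = 2 * fp32_flops                          # ~29.8 TFLOPS
    fp64_flops = fp32_flops / 2                          # ~7.4 TFLOPS (1:2 rate on GV100)

    print(f"Tensor: {tensor_flops/1e12:.0f} TFLOPS, FP32: {fp32_flops/1e12:.1f}, "
          f"FP16: {fp16_flops/1e12:.1f}, FP64: {fp64_flops/1e12:.1f}")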
    "A quantum supercomputer calculating for a thousand years could not even approach the number of fucks I do not give."
    - Kirito, Sword Art Online Abridged by Something Witty Entertainment

  2. #22

  3. #23
    Warchief Zenny's Avatar
    10+ Year Old Account
    Join Date
    Oct 2011
    Location
    South Africa
    Posts
    2,171
    Quote Originally Posted by Shinzai View Post
    Some benchmarks floating around now.

    https://videocardz.com/74382/overclo...chmarks-emerge
    Some odd results, improvements vary from 0%->50%.

  4. #24
    It's still using Pascal GTX drivers, so it's not utilizing independent thread scheduling and potentially not Volta's new cache properly either.

    Also, someone has mentioned the Titan V is unbalanced and has too many shaders to properly feed in games; IIRC it has 96 ROPs (cut down) when it should by all accounts have 128 for gaming to feed 5120 shaders.

    Which makes sense for a compute/AI card that's not meant for gaming.


    I think the alleged "Ampere" GV102/GA102 (2080Ti) is going to be much better in this regard and give us the kinds of gaming gains we are used to seeing from a flagship Ti Nvidia card

    Everyone is used to a new Nvidia gaming flagship raising the bar by a certain % (anywhere between 35-65%) each time, and they don't want to disappoint their potential buyers; gaming revenue is still big for them and the big Ti is a factor in that.



    I think from now on the professional/workstation/compute/AI/tensor cards and gaming segments of Nvidia cards are going to differ even more than before

  5. #25
    Thanks to BitsBeTripping, we now know just how good the new TITAN V is at mining cryptocurrency and it's truly a beast. They put the new TITAN V through its paces, with a huge 77MH/s mining Ethereum when overclocked and just 213W of power consumed.

    https://www.techpowerup.com/239599/n...ve-performance
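
    Quick efficiency math on those numbers; the 1080 Ti figures below are just a ballpark for a tuned card (an assumption for comparison), not a measured result:

    # Ethereum mining efficiency from the reported figures.
    titan_v_mhs, titan_v_watts = 77, 213        # from the TechPowerUp report above
    gtx_1080ti_mhs, gtx_1080ti_watts = 35, 200  # rough ballpark for a tuned 1080 Ti (assumption)

    print(f"Titan V: {titan_v_mhs / titan_v_watts:.2f} MH/s per watt")        # ~0.36
    print(f"1080 Ti: {gtx_1080ti_mhs / gtx_1080ti_watts:.2f} MH/s per watt")
    # -> roughly double the efficiency of the 1080 Ti ballpark above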

  6. #26

  7. #27
    so max OC vs max OC the graphics score is 38% higher than a 1080Ti

    potentially to be improved further on a gaming-oriented Ampere 2080Ti / gaming Titan Xv (DP/tensor stripped) with new arch drivers


    not bad for a 16nm -> 12nm jump
    Last edited by Life-Binder; 2017-12-12 at 12:48 PM.

  8. #28
    The Lightbringer Evildeffy's Avatar
    15+ Year Old Account
    Join Date
    Jan 2009
    Location
    Nieuwegein, Netherlands
    Posts
    3,772
    Quote Originally Posted by Life-Binder View Post
    so max OC vs max OC the graphics score is 38% higher than a 1080Ti

    potentially to be improved further on a gaming-oriented Ampere 2080Ti / gaming Titan Xv (DP/tensor stripped) with new arch drivers

    not bad for a 16nm -> 12nm jump
    You're not going to get a gaming oriented Titan V .. it's pretty clear they want to diversify it, otherwise they wouldn't have called it a Titan but Quadro.

    As far as "not bad for a 16nm -> 12nm jump" .... not really.
    It's a node reduction combined with a CONSIDERABLY larger and more powerful die.

    You are looking at almost a factor of 2 in transistors and OVER a factor of 2 in die size going from GP102 to GV100 in the Titan-line of cards with a new uArch (unsure of how much it differs from Pascal) ... technically, if anything, this GPU is underpowered for gaming purposes in relation to its build.

    The gaming oriented 1180/2080Ti (or perhaps soon the 1180/2080 Titan instead of Ti) is likely going to beat the Titan V down like a bitch in gaming but be utterly destroyed when it comes to computational tasks.

    GV102 will be the same uArch base but different design entirely.

  9. #29
    You're not going to get a gaming oriented Titan V
    with Pascal we got 2 "gaming" Titans

    so, it's technically possible


    but if Volta is indeed pure compute/AI and they jump immediately to Ampere for the next GTX GF line, then yeah, we won't



    It's a node reduction combined with a CONSIDERABLY larger and more powerful die.

    You are looking at almost a factor of 2 in transistors and OVER a factor of 2 in die size going from GP102 to GV100 in the Titan-line of cards with a new uArch (unsure of how much it differs from Pascal) ...
    most of that goes into the double-precision and tensor cores, all irrelevant for graphics and 3DMark

    the actual CUDA increase is only 33% compared to Titan Xp
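
    (for reference, the core-count math, taking Titan Xp's published 3840 CUDA cores and the ~38% graphics score delta from the max-OC comparison above as given:)

    # Core-count scaling vs. the observed graphics score delta.
    titan_xp_cores, titan_v_cores = 3840, 5120
    core_increase = titan_v_cores / titan_xp_cores - 1     # ~0.33 -> 33% more CUDA cores

    observed_graphics_gain = 0.38                          # ~38% higher graphics score, max OC vs max OC
    per_core_gain = (1 + observed_graphics_gain) / (1 + core_increase) - 1

    print(f"CUDA cores: +{core_increase:.0%}, per-core throughput: +{per_core_gain:.1%}")
    # -> only ~3-4% more per core, despite the slightly lower clocks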


    in pure compute/workstation benchmarks (and even mining) the increases are already significantly bigger than 38% compared to 1080Ti/Xp/P6000



    The gaming oriented 1180/2080Ti (or perhaps soon the 1180/2080 Titan instead of Ti) is likely going to beat the Titan V down like a bitch in gaming but be utterly destroyed when it comes to computational tasks.

    GV102 will be the same uArch base but different design entirely.
    agreed


    I hope GV102 (or GA102) will see the kinds of gaming gains compared to Pascal as the Titan V has now in pure compute (compared to Pascal also)

    - - - Updated - - -

    I hope that 2080 Ti will come in 2H 2018 and not in 2019

    and that it will have HDMI 2.1

    .. and that its HDMI 2.1 will support 2.1 VRR, but that remains to be seen

  10. #30
    The Lightbringer Evildeffy's Avatar
    15+ Year Old Account
    Join Date
    Jan 2009
    Location
    Nieuwegein, Netherlands
    Posts
    3,772
    Quote Originally Posted by Life-Binder View Post
    with Pascal we got 2 "gaming" Titans

    so, it's technically possible

    but if Volta is indeed pure compute/AI and they jump immediately to Ampere for the next GTX GF line, then yeah, we won't
    We've gotten multiple Titans from the same generation in the past as well; that doesn't change the fact that the ones you're talking about are 2 entirely different die designs, and you cannot blindly convert Tensor cores and FP64 cores into FP16 cores, f.ex.
    And all the Titans we had in the past started with a cut-down version first, with the full-fat die version introduced later.
    And even though nVidia's marketing people are douchebags, they have stated several times that Titans aren't MEANT FOR gaming but CAN be used for it.
    The GTX moniker (designated to indicate gaming cards by nVidia) was removed after the first Titan launch.

    So no, with Pascal you didn't get 2 "gaming" Titans; the fact that they were the same die as the 1080Ti makes it even clearer why you won't get one, considering nVidia's stance on Titan cards vs. gaming cards.

    They used to share die designs; current-gen Titans now use HPC dies... 2 entirely different classes.

    Quote Originally Posted by Life-Binder View Post
    most of that goes into the double-precision and tensor cores, all irrelevant for graphics and 3DMark

    the actual CUDA increase is only 33%, yet the above result is 38% higher in graphics score, on slightly lower clocks .. on a GPU that's explicitly not for gaming

    so yeah, I think that's not bad

    in pure compute/workstation benchmarks (and even mining) the increases are already significantly bigger than 38% compared to 1080Ti/Xp/P6000
    Actually, if you look at the estimated die-shot breakdown it's still less than "most", as you put it, and on top of that you're again dealing with a new uArch.

    In terms of gaming performance it is underpowered because of its dead weight and exorbitant price.
    Especially considering its die size.
    If you were to cut away the FP64 and Tensor cores and keep the rest, you would still have a considerably larger die than the Titan Xp it'd be replacing, and it would be comparatively underpowered: the die would be ~60% larger yet still "only" 38% faster on some scores (not all; that's not an average but an upper bound).
    Any way you look at it this GPU (GV100) is not fit for gaming; it can do it, but its smaller and cheaper siblings will be better at it, and that's why it's underpowered.
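
    Putting rough numbers on that argument (the ~60% die-size and 38% performance figures are the estimates from the paragraph above, not measurements):

    # Performance per unit die area, using the rough estimates above.
    die_size_increase = 0.60   # hypothetical FP64/tensor-stripped GV100 vs. Titan Xp (estimate)
    perf_increase = 0.38       # best-case graphics score gain observed so far

    perf_per_area_ratio = (1 + perf_increase) / (1 + die_size_increase)
    print(f"Relative performance per mm^2: {perf_per_area_ratio:.2f}x")  # ~0.86x, i.e. ~14% worse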

    Quote Originally Posted by Life-Binder View Post
    agreed

    I hope GV102 (or GA102) will see the kinds of gaming gains compared to Pascal as the Titan V has now in pure compute (compared to Pascal also)
    It's possible but I think you're putting the bar a little bit too high on that.
    Whilst the GV100 is not meant for gaming... it can do it, but replacing the Tensor and FP64 cores is not going to work out as well as you think; replacing them with more FP16 cores, f.ex., is still going to take the same (estimated) die space, and a die that large is stupidly expensive to make.

    As much as I'd like to be optimistic... I am remaining realistic.

    - - - Updated - - -

    Quote Originally Posted by Life-Binder View Post
    I hope that 2080 Ti will come in 2H 2018 and not in 2019

    and that it will have HDMI 2.1

    .. and that its HDMI 2.1 will support 2.1 VRR, but that remains to be seen
    Like I told you before, HDMI 2.1's Variable Refresh Rate technology is optional to implement, not mandatory.

    Meaning nVidia will not touch it; may suck to hear, but that's how it is.

  11. #31
    we won't know until they release an HDMI 2.1 card

    - - - Updated - - -

    Whilst the GV100 is not meant for gaming... it can do it, but replacing the Tensor and FP64 cores is not going to work out as well as you think; replacing them with more FP16 cores, f.ex., is still going to take the same (estimated) die space, and a die that large is stupidly expensive to make.
    didn't say anything about more cores

    5120-5376 (hopefully 5376) should be fine, just:
    - "feed" them better for pure graphics rendering (compared to the Titan V)
    - gaming-oriented Ampere arch changes
    - actual mature Volta/Ampere drivers instead of the Pascal driver the Titan V is using


    and we may see consistent 50%+ gaming gains compared to 1080Ti

  12. #32
    The Lightbringer Evildeffy's Avatar
    15+ Year Old Account
    Join Date
    Jan 2009
    Location
    Nieuwegein, Netherlands
    Posts
    3,772
    Quote Originally Posted by Life-Binder View Post
    we won't know until they release an HDMI 2.1 card
    Ok let me rephrase for you:

    Do you think nVidia will implement a FREE OPTIONAL spec into their HDMI ports when they have their own proprietary G-SYNC system they get paid a crapton of money for from monitor manufacturers?
    Including the fact that they already COULD have implemented a Variable Refresh Rate technology that is also FREE from VESA, known as Adaptive Sync?

    That's a pretty huge wishful thinking scenario you're having there.
    Don't get me wrong, it'd be good if they did but realistically... nope, not happening when that specific part is optional.

    - - - Updated - - -

    Quote Originally Posted by Life-Binder View Post
    didn't say anything about more cores

    5120-5376 (hopefully 5376) should be fine, just:
    - "feed" them better for pure graphics rendering (compared to the Titan V)
    - gaming-oriented Ampere arch changes
    - actual mature Volta/Ampere drivers instead of the Pascal driver the Titan V is using

    and we may see consistent 50%+ gaming gains compared to 1080Ti
    That's considerably different from the ~75 - 80% you implied with the compute comparison statement.
    Like I said, it's unlikely we'll get anywhere near that die size, so staying realistic with expectations...

    I'd guesstimate performance to hover between 45 - 50% top to top difference from Pascal to Volta (I doubt it's going to be named Ampere ... but who knows).

    Though your specs are highly oversimplified ... the basics are the point.

  13. #33
    Do you think nVidia will implement a FREE OPTIONAL spec into their HDMI ports when they have their own proprietary G-SYNC system they get paid a crapton of money for from monitor manufacturers?
    Including the fact that they already COULD have implemented a Variable Refresh Rate technology that is also FREE from VESA, known as Adaptive Sync?
    in order to be able to claim that they are "fully" supporting HDMI 2.1 and the budding HDMI 2.1 displays ..

    maybe ?


    VESA Adaptive Sync is whatever, who cares (especially when their proprietary solution still has its advantages, consistency and QC), but HDMI 2.1 is the future of HDMI after all

    - - - Updated - - -

    I am a die-hard Nvidia fan, but not supporting HDMI 2.1 VRR is big enough that it might even sway me to AMD, 'cause I won't go back to non-VRR and HDMI 2.1 will be on both my next TV and my next monitor, no doubt

    might (tough choice, given that I expect the Nvidia gaming flagships to continue beating AMD gaming flagships .. and potentially in the mid-range too if AMD can't keep up there as it sort of has managed to till now) .. it's basically VRR vs. fps

  14. #34
    The Lightbringer Evildeffy's Avatar
    15+ Year Old Account
    Join Date
    Jan 2009
    Location
    Nieuwegein, Netherlands
    Posts
    3,772
    Quote Originally Posted by Life-Binder View Post
    in order to be able to claim that they are "fully" supporting HDMI 2.1 and the budding HDMI 2.1 displays ..

    maybe ?
    Doesn't work that way, optional specs are exactly that ... optional.
    So "fully" supporting HDMI 2.1 can (and has in the past for other connectors, such as VESA's Adaptive Sync) be stated for only complying with the mandatory specs.

    Barring the fact that nVidia isn't present in any of the major TV-driven consoles, nVidia is only used in gaming hardware and applications... you're looking at an EXTREMELY tiny market here.

    Quote Originally Posted by Life-Binder View Post
    VESA Adaptive Sync is whatever, who cares (especially when their proprietary solution still has its advantages, consistency and QC), but HDMI 2.1 is the future of HDMI after all
    Both have advantages and disadvantages ... Adaptive Sync isn't nearly as bad as you think it is; having said that... monitors need to implement the standard as well.
    Which, funnily enough, already exists... in the form of FreeSync, which is Adaptive Sync for HDMI, which this HDMI VRR is based upon.

    Until the point where the VRR is made mandatory nVidia will not comply with that particular optional spec.
    Especially if HDMI 2.0(x) can still serve sufficient bandwidth, such as for 4K@60Hz, which is still a long way off from being mainstream.
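
    For context, a rough bandwidth check on that 4K@60Hz point, assuming the standard CTA 4K timing (4400x2250 total pixels including blanking) and HDMI 2.0's 18 Gbps link with 8b/10b encoding; a sketch, not a spec quote:

    # Rough check: does 4K@60Hz 8-bit RGB fit within HDMI 2.0?
    total_h, total_v = 4400, 2250        # 3840x2160 active + blanking (standard CTA timing)
    refresh_hz, bits_per_pixel = 60, 24  # 8 bits per channel, RGB

    required_gbps = total_h * total_v * refresh_hz * bits_per_pixel / 1e9  # ~14.3 Gbps
    hdmi20_payload_gbps = 18 * 8 / 10                                      # 18 Gbps raw, 8b/10b -> 14.4 Gbps

    print(f"Needed: {required_gbps:.1f} Gbps, HDMI 2.0 payload: {hdmi20_payload_gbps:.1f} Gbps")
    # -> it just fits, which is why plain 4K60 doesn't require HDMI 2.1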

    Like I said... there's being optimistic and there's wishful thinking of "not going to happen soon" scenarios.

    - - - Updated - - -

    Quote Originally Posted by Life-Binder View Post
    I am a die-hard Nvidia fan, but not supporting HDMI 2.1 VRR is big enough that it might even sway me to AMD, 'cause I won't go back to non-VRR and HDMI 2.1 will be on both my next TV and my next monitor, no doubt

    might (tough choice, given that I expect the Nvidia gaming flagships to continue beating AMD gaming flagships .. and potentially in the mid-range too if AMD can't keep up there as it sort of has managed to till now) .. it's basically VRR vs. fps
    Trust me.. we've noticed the first statement quite often.

    Look at nVidia's history on this (and I fully understand you wanting this, of course) and tell me if you really expect nVidia to comply.
    In all honesty, going by prior optional specs that were free.

    It sucks but that's the current reality you'll face.

  15. #35
    https://www.gamersnexus.net/guides/3...r-volta/page-2

    - - - Updated - - -

    tl;dr ~40% gains over Titan Xp in Doom Vulkan (async enabled) and Sniper Elite DX12 (async enabled)

    - - - Updated - - -

    Conclusion: What the Titan V Teaches Us

    We’re entering territory of informed speculation. Please be aware that, from this point forward, we’re using our data to fuel conjecture on possible outcomes for Volta.

    Purely observationally, based on the data we have presently collected, it would appear that the Titan V has two primary behaviors: (1) Applications which are built atop low-level APIs and asynchronous computational pipelines appear to process more efficiently on the Titan V; (2) the Titan V appears to host more cores than some of these applications (namely D3D11 titles) can meaningfully use, and that is demonstrated fully upon overclocking.

    Given that overclocks in D3D11 applications produce performance uplift of ~20% (in some instances), it would appear that the high core count becomes more of a burden than a benefit. The GPU needs the faster clocks, and can’t access or leverage its high core count in a meaningful way. The result is that the Titan V begins to tie with the Titan Xp, and that the 1080 Ti closes-in on the Titan V. In lower-level API games, however, the Titan V pulls away by large margins – 27% to 40%, in some cases. The gains are big enough that we retested numerous times on numerous cards, but they remained. Our present analysis is that these applications are better able to spin-off multiple, simultaneous, in-flight render jobs across the high core count, whereas the tested Dx11 titles may function more synchronously.

    As for the Titan V specifically, it can certainly be used for games -- but only in the context of, "I bought this thing for work, and sometimes I play games." If you're just gaming, clearly, this isn't the right purchase. Even for those users who have non-scientific uses for their scientific cards, the Titan V does appear to have some frame pacing problems that need to be worked out. We are not yet informed enough on the Volta architecture to root-cause these behaviors, and would suggest that it's either drivers or related specifically to the Titan V.

    That’s what we think right now, anyway, and that may change. This is still early in Volta.

    More soon.

  16. #36
    HDMI 2.1 support will be 100% dependent on monitor manufacturers setting release dates for HDMI 2.1 compatible monitors. Nvidia reps and techs will obviously be in contact with the relevant companies and vice versa, as new graphics card releases are pretty much the largest driver of new monitor releases and therefore sales.

    If companies are ready to utilize the tech, then Nvidia will work with them to make sure it happens.

  17. #37
    The Lightbringer Evildeffy's Avatar
    15+ Year Old Account
    Join Date
    Jan 2009
    Location
    Nieuwegein, Netherlands
    Posts
    3,772
    Quote Originally Posted by Life-Binder View Post
    https://www.gamersnexus.net/guides/3...r-volta/page-2

    - - - Updated - - -

    tl;dr ~40% gains over Titan Xp in Doom Vulkan (async enabled) and Sniper Elite DX12 (async enabled)
    Yeah, which is exactly what I've said.
    Did you expect Async Compute to turn out any differently?

    The Titan V wasn't built to game, but it sure as hell will do so if you ask it to.

    Just likely not as efficiently, as powerfully, or as cheaply as a card designed for it.

    - - - Updated - - -

    Quote Originally Posted by Shinzai View Post
    HDMI 2.1 support will be 100% dependent on monitor manufacturers setting release dates for HDMI 2.1 compatible monitors. Nvidia reps and techs will obviously be in contact with the relevant companies and vice versa, as new graphics card releases are pretty much the largest driver of new monitor releases and therefore sales.

    If companies are ready to utilize the tech, then Nvidia will work with them to make sure it happens.
    Except that by doing so they would be sinking their own G-Sync technology, which they earn money off of, in order to implement a free piece of technology they can no longer make any money off of.

    As long as the VRR part isn't made mandatory nVidia is very likely not going to use it.
    VRR is useless in pretty much any market but gaming, and they have no reason to bother considering they aren't in modern consoles barring the Switch.
    And that thing is less played on a big TV than it is mobile.

    Don't get me wrong, nVidia will use HDMI 2.1 ... they simply won't implement the optional VRR technology.

  18. #38
    Quote Originally Posted by Evildeffy View Post
    Except that by doing so they would be sinking their own G-Sync technology, which they earn money off of, in order to implement a free piece of technology they can no longer make any money off of.

    As long as the VRR part isn't made mandatory nVidia is very likely not going to use it.
    VRR is useless in pretty much any market but gaming, and they have no reason to bother considering they aren't in modern consoles barring the Switch.
    And that thing is less played on a big TV than it is mobile.

    Don't get me wrong, nVidia will use HDMI 2.1 ... they simply won't implement the optional VRR technology.
    Well, they will still be flaunting things like "G-Sync HDR" (4K, 144Hz with HDR-10, I believe) monitors, so that won't be a huge issue for them. The HDMI 2.1 specs were only released last month, so we're looking at the end of next year (late Q3 or Q4) at the absolute earliest for HDMI 2.1 monitors to start being produced in any kind of quantity. AUO not having panels ready for the G-Sync HDR monitors that were due out this year may push the potential release date back even further.

    Regardless though, G-Sync HDR monitor launch dates now pretty much line up with the 2080's (or whatever) release date, so I'm sure Nvidia will have a new gimmick ready for the release of HDMI 2.1 specific monitors. I wouldn't be surprised if they had full VRR compatibility, but they'll even more likely have a "better" proprietary standard, maybe G-Sync 2 or some such?

  19. #39
    The Lightbringer Evildeffy's Avatar
    15+ Year Old Account
    Join Date
    Jan 2009
    Location
    Nieuwegein, Netherlands
    Posts
    3,772
    Quote Originally Posted by Shinzai View Post
    Well, they will still be flaunting things like "G-Sync HDR" (4K, 144Hz with HDR-10, I believe) monitors, so that won't be a huge issue for them. The HDMI 2.1 specs were only released last month, so we're looking at the end of next year (late Q3 or Q4) at the absolute earliest for HDMI 2.1 monitors to start being produced in any kind of quantity. AUO not having panels ready for the G-Sync HDR monitors that were due out this year may push the potential release date back even further.

    Regardless though, G-Sync HDR monitor launch dates now pretty much line up with the 2080's (or whatever) release date, so I'm sure Nvidia will have a new gimmick ready for the release of HDMI 2.1 specific monitors. I wouldn't be surprised if they had full VRR compatibility, but they'll even more likely have a "better" proprietary standard, maybe G-Sync 2 or some such?
    Barring the time difference of delays... HDMI specs are still part mandatory/part optional.
    G-Sync's current conditional implementation is that no other Variable Refresh Rate technology is to be allowed.

    Regardless, even if the monitors get VRR, which will be fully supported by AMD because in reality it already is (FreeSync over HDMI), the graphics cards need to support it as well, and considering nVidia's history with optional specs they already have proprietary alternatives for... it's not going to happen.
    Because it's not necessary to be HDMI 2.1 certified either.

    Hence my statement that unless it's made mandatory it simply will not be implemented by nVidia, because they don't really have to care.
    There's no mass-market device like a console they have to adhere to; it's PC graphics only, where outliers are far more common than standard connectors.

    Unless of course all monitor builders decide to completely abandon VESA's DisplayPort and switch to HDMI only... which I don't see happening.

  20. #40
    https://wccftech.com/nvidia-geforce-...line-confirms/

    NVIDIA GeForce GTX 1180 & 1170 Projected to Land in July & Feature 16Gbps GDDR6 Memory in 8GB & 16GB Capacities


    probably there won't be a gaming Volta .. supposedly it will be called either Turing or Ampere for the mid-2018 12nm GDDR6 GeForce line launch


    7nm GeForce could follow in 2019, but no one can say that for sure (it would also depend, I think, on whether it will be just a 7nm die-shrink of Turing/Ampere or 7nm + a new uArch .. if the latter, then that's probably 2020)
