  1. #101
    they will likely go with GDDR6 for the top gaming versions of Volta and leave HBM2 Voltas for the $5000+ Tesla line

    seems like the smartest thing to do right now, HBM2 isn't ready (or really necessary) for the gaming segment


    and GDDR6 can give something like 750+ GB/s on a 384-bit bus IIRC
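The "750+ GB/s" figure above is just bus width times per-pin data rate; a minimal sketch of that arithmetic, assuming a 16 Gbps per-pin rate for GDDR6 (an assumption, since final speeds were not confirmed at the time):

```python
# Rough memory-bandwidth arithmetic behind the "750+ GB/s on a 384-bit bus" claim.
# The 16 Gbps per-pin rate for GDDR6 is an assumption for illustration.

def bandwidth_gb_s(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Peak bandwidth = bus width (bits) * per-pin rate (Gbps) / 8 bits per byte."""
    return bus_width_bits * gbps_per_pin / 8

print(bandwidth_gb_s(384, 16))  # 768.0 GB/s, matching the "750+" ballpark
print(bandwidth_gb_s(384, 8))   # 384.0 GB/s, a GDDR5-class bus for comparison
```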

  2. #102
    The Unstoppable Force Gaidax's Avatar
    10+ Year Old Account
    Join Date
    Sep 2013
    Location
    Israel
    Posts
    20,879
    If that's true: http://wccftech.com/amd-radeon-rx-ve...tities-launch/

    Then lol at the "less than 1080Ti price" delusions.

  3. #103
    Quote Originally Posted by Evildeffy View Post
    Just a small correction here:
    HBM2 uses less power than GDDR5(X), so it's less of a power-efficiency concern there.
    Memory power consumption is irrelevant in comparison to GPU power consumption though. Also, GDDR5X is about 10-15% more efficient than GDDR5 due to lower operating voltage.

    Quote Originally Posted by Evildeffy View Post
    Also nVidia's actual design choices tend to be pretty damn cheap in their reference designs not to mention that AMD's higher power use comes from having "multi-threading" hardware that nVidia simply does not possess ... so stating they put less strain on components is both true and not true.
    For Pascal, all their reference cards (at least their PCB portions) are completely sufficient to push the GPU to the limit; the same cannot be said for Polaris cards. Also, I'm pretty sure that they know the disadvantages of the turbine cooler they (both Nvidia and AMD, actually, at least until Vega) use in their reference cards, they just can't drop it because it would cut them off from their main market - OEM solutions.

    AMD doesn't have any "multi-threaded" hardware, it's just their design: use more silicon that has higher "IPC" but runs at lower clocks. More silicon means using more power and running hotter.

    Quote Originally Posted by Evildeffy View Post
    True in the sense of letting software handle things and therefore less "strain" is put on the hardware but software is lacking (Low Level APIs).
    Not true in the sense of actual engineering on the hardware itself as AMD's cards in general are pretty damn well engineered comparatively.
    Their overall approach is quite similar. CUDA cores and stream processors are two solutions to the same problem. AMD doesn't have specific hardware that handles low level APIs better; they have the whole GPU designed to handle a specific low level API (Vulkan) better. And in this case it's a clear miss, since Vulkan is mostly irrelevant, and that's not about to change.

    Quote Originally Posted by Evildeffy View Post
    AMD simply chose to invest in a future that simply wasn't there yet, that is now starting to roll.
    Well, if you mean Vulkan, then unfortunately that future is dead. If you mean low level APIs in general, then it's the same story, as Nvidia found a way to optimize their cards without compromises on the hardware side.

    Quote Originally Posted by Evildeffy View Post
    Also lower clocks have very little to do with bus width on their GPUs, memory yes because you don't need to push it as hard to achieve the same bandwidth but not on their core clocks.
    A GPU that processes more data per clock is going to need a wider memory bus, so it has everything to do with lower clocks (assuming relatively close performance).
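The "more data per clock needs a wider bus" point can be sketched numerically: two hypothetical GPUs with roughly equal throughput (units times clock) need comparable memory bandwidth, so the lower-clocked, wider design naturally pairs with a wider bus. The unit counts and clocks below are made-up illustrative values, not real card specs:

```python
# Illustrative only: equal compute throughput from two opposite design styles.
gpu_a = {"units": 2560, "clock_mhz": 1700}   # fewer units, higher clock
gpu_b = {"units": 3584, "clock_mhz": 1250}   # more units, lower clock

for name, g in (("A (narrow, fast)", gpu_a), ("B (wide, slow)", gpu_b)):
    # relative throughput in arbitrary units: shader units * clock
    throughput = g["units"] * g["clock_mhz"] / 1e6
    print(f"GPU {name}: relative throughput ~{throughput:.2f}")
```

Both land around the same throughput, so both need about the same feeding bandwidth, which the wide design typically gets from a wider bus at lower memory clocks.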
    Last edited by Thunderball; 2017-05-08 at 09:43 PM.
    R5 5600X | Thermalright Silver Arrow IB-E Extreme | MSI MAG B550 Tomahawk | 16GB Crucial Ballistix DDR4-3600/CL16 | MSI GTX 1070 Gaming X | Corsair RM650x | Cooler Master HAF X | Logitech G400s | DREVO Excalibur 84 | Kingston HyperX Cloud II | BenQ XL2411T + LG 24MK430H-B

  4. #104
    The Lightbringer Evildeffy's Avatar
    15+ Year Old Account
    Join Date
    Jan 2009
    Location
    Nieuwegein, Netherlands
    Posts
    3,772
    Quote Originally Posted by Thunderball View Post
    Memory power consumption is irrelevant in comparison to GPU power consumption though. Also, GDDR5X is about 10-15% more efficient than GDDR5 due to lower operating voltage.
    A 512-bit bus at 7Gbps consumes 50W of power.
    An 8Gbps 384-bit bus consumes ~45 - 50W of power.
    GDDR5X goes faster but consumes around the same, because power rises with speed and those efficiency figures assume equal speed.
    That's a rather considerable power consumption; you can calculate the rest from this point.
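"Calculating the rest from this point" amounts to comparing bandwidth against the quoted power draws. A minimal sketch, taking the ~50W figures from the post above as rough estimates rather than measurements:

```python
# Bandwidth vs. quoted VRAM power draw for the two configurations above.
# Power numbers are the post's rough estimates, not measurements.

def bandwidth_gb_s(bus_bits: int, gbps: float) -> float:
    return bus_bits * gbps / 8

configs = {
    "512-bit @ 7 Gbps (GDDR5)":  (bandwidth_gb_s(512, 7), 50.0),
    "384-bit @ 8 Gbps (GDDR5X)": (bandwidth_gb_s(384, 8), 47.5),
}
for name, (bw, watts) in configs.items():
    print(f"{name}: {bw:.0f} GB/s, ~{watts:.1f} W, {watts / bw * 1000:.1f} mW per GB/s")
```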

    Quote Originally Posted by Thunderball View Post
    For Pascal, all their reference cards (at least their PCB portions) are completely sufficient to push the GPU to the limit; the same cannot be said for Polaris cards. Also, I'm pretty sure that they know the disadvantages of the turbine cooler they (both Nvidia and AMD, actually, at least until Vega) use in their reference cards, they just can't drop it because it would cut them off from their main market - OEM solutions.
    Both are cooler-constrained, they always are, but the 3rd-party designs eliminate that bottleneck; wider buses and higher IPC do not immediately equal a bottleneck.

    Quote Originally Posted by Thunderball View Post
    AMD doesn't have any "multi-threaded" hardware, it's just their design: use more silicon that has higher "IPC" but runs at lower clocks. More silicon means using more power and running hotter.
    Incorrect; AMD employs multiple Hardware Schedulers and pipelines that nVidia does not, and if you'd studied the architectures you'd know this is why nVidia cards are "weak" in low level APIs.
    This is a CONSIDERABLE power drain and heat source; if AMD designed their cards like nVidia, it'd be the same, see the GTX 400/500 vs. HD 5000/6000 series.

    Quote Originally Posted by Thunderball View Post
    Their overall approach is quite similar. CUDA cores and stream processors are two solutions to the same problem. AMD doesn't have specific hardware that handles low level APIs better; they have the whole GPU designed to handle a specific low level API (Vulkan) better. And in this case it's a clear miss, since Vulkan is mostly irrelevant, and that's not about to change.
    Incorrect, again .. study the architectures of each.
    You would not make this statement if you knew, also if you think Vulkan is dead... well ... that's interesting, we should notify the Khronos Group members of this.

    Quote Originally Posted by Thunderball View Post
    Well, if you mean Vulkan, then unfortunately that future is dead. If you mean low level APIs in general, then it's the same story, as Nvidia found a way to optimize their cards without compromises on the hardware side.
    Incorrect, again ... study the architectures.
    If you think nVidia doesn't sacrifice anything in their "Oh so wise" way then you are clearly unaware of architectures.

    Quote Originally Posted by Thunderball View Post
    Memory power consumption is irrelevant in comparison to GPU power consumption though.
    Incorrect again, because you very likely think it's 10W or so in total; this is not even remotely true.
    The R9 290X's 7Gbps 512-bit GDDR5 consumed 50W alone, and that's the same VRAM that nVidia and their partners used, no difference there. That's an example.
    VRAM power consumption is not tied to the GPU's.

    Quote Originally Posted by Thunderball View Post
    A GPU that processes more data per clock is going to need a wider memory bus, so it has everything to do with lower clocks (assuming relatively close performance).
    This is again wholly incorrect; it is completely dependent upon architecture design.

  5. #105
    Warchief Zenny's Avatar
    10+ Year Old Account
    Join Date
    Oct 2011
    Location
    South Africa
    Posts
    2,171
    Nvidia is not weak in low level APIs. The 1060, despite having a smaller die size, lower power consumption and a way lower TFLOP rating, is comparable to or just slightly behind the RX 480 in many DX12 titles.

  6. #106
    The Lightbringer Evildeffy's Avatar
    15+ Year Old Account
    Join Date
    Jan 2009
    Location
    Nieuwegein, Netherlands
    Posts
    3,772
    Quote Originally Posted by Zenny View Post
    Nvidia is not weak in low level APIs. The 1060, despite having a smaller die size, lower power consumption and a way lower TFLOP rating, is comparable to or just slightly behind the RX 480 in many DX12 titles.
    That is fully dependent upon the optimization of each title; why is it, you think, that f.ex. the RX 480 has a commanding lead over the 1060 in DOOM when Vulkan is enabled?

    Or if you want to go back a little bit in generations why an R9 390X is capable of matching and beating a 980 Ti in DOOM or Ashes of the Benchmark f.ex.?

    nVidia's low level API performance is comparatively very weak.

    Edit:
    To give you a more valid comparison regarding this ... it's exactly like Intel's Kaby Lake vs. AMD's Ryzen right now.
    Overall, Ryzen outclasses Intel by a long shot when it comes to multi-threading, whereas in single-threading it is slightly slower.
    The same principle applies here due to the way AMD's architecture is designed; they COULDN'T mimic nVidia if they wanted to BECAUSE of the multi-threading hardware on their GPUs, which in turn excels in low level APIs.
    Last edited by Evildeffy; 2017-05-08 at 10:27 PM.

  7. #107
    Quote Originally Posted by Zenny View Post
    Nvidia is not weak in low level APIs. The 1060, despite having a smaller die size, lower power consumption and a way lower TFLOP rating, is comparable to or just slightly behind the RX 480 in many DX12 titles.
    indeed

    there has also been a driver recently that improved Pascal in a bunch of DX12 titles, especially Hitman


    I think both Maxwell and Pascal hit just the right balance as far as APIs go, with Volta dipping heavier into low level APIs (note: that doesn't, and shouldn't, mean it will definitely have hardware async shaders like GCN), because the time for that will be more right than it was in 2015 & 2016

  8. #108
    The Lightbringer Evildeffy's Avatar
    15+ Year Old Account
    Join Date
    Jan 2009
    Location
    Nieuwegein, Netherlands
    Posts
    3,772
    Quote Originally Posted by Life-Binder View Post
    indeed

    there has also been a driver recently that improved Pascal in a bunch of DX12 titles, especially Hitman
    Do note I never stated the GTX 1060 to be a bad card, far from it.
    But simply looking at physics and architecture design this is a plain fact.

    nVidia's DX12 improvement with the driver you are referring to is largely a misnomer, though; I believe computerbase.de (as well as others) did a follow-up on it, and the "bonus" was almost completely negligible unless it was @ 4K.

    Quote Originally Posted by Life-Binder View Post
    I think both Maxwell and Pascal hit just the right balance as far as APIs go, with Volta dipping heavier into low level APIs (note: that doesn't, and shouldn't, mean it will definitely have hardware async shaders like GCN), because the time for that will be more right than it was in 2015 & 2016
    nVidia saw that DX11 and its limitations would remain as such and designed their cards around it, which was technically a very smart move.
    AMD hoped low level APIs would catch on quicker, like on consoles when their hardware powered the Xbox 360/Wii ... that didn't work out.

    That does not mean their designs are bad however, technically it's "more advanced" than nVidia's... just wrongly timed.

  9. #109
    The Unstoppable Force Gaidax's Avatar
    10+ Year Old Account
    Join Date
    Sep 2013
    Location
    Israel
    Posts
    20,879
    Nvidia seems to handle DX12 just fine and with Volta around the corner they may as well go ahead and address rough edges they still have.

    Despite all the chest thumping from the Red side (honestly, the whole "poor Volta" thing is almost embarrassingly infantile, especially considering they get slaughtered by Pascal), we have yet to see any solid product that can truly compete with Nvidia. The whole current RX line is simply inferior; I can't fathom how it can be claimed otherwise when you can see from Polaris how insanely inferior it is to Pascal in just about everything besides the edge cases people keep grasping at like straws. When you need a whole ton more silicon and power to match your competitor in the mainstream range, and have a bleeding gap at the high end, that is definitely not the sign of the smart, efficient and forward-looking design some here try to portray.

    Oh but it is better in Vulcan! In all the 3 titles that people somewhat give a damn about that run it.

    Vulcan is basically Mantle 2.0 - it has some hype to it, but it's all up to designers to make use of it, and they just don't, really, aside from a couple of hipster cases. Reality is simple here: EVEN if Vulcan actually gains traction, which is something that will take YEARS, by that time this whole argument will be irrelevant because both companies will support it very well with whatever products are relevant then.

    - - - Updated - - -

    I'm not even saying the obvious: when 70% of the market is people with Nvidia GPUs in their rigs/laptops, you can be sure as hell everyone will make sure to be compatible with those solutions first.

    And if that is not bad enough already, Nvidia seems to have some success with pushing their unique solutions onto developers, which makes AMD cough blood. Gameworks is very effective at what it does - i.e. easy to utilize for instant eyecandy/hype - and it runs well on Nvidia GPUs and like shit on everything else, while AMD simply has nothing of that caliber to offer.
    Last edited by Gaidax; 2017-05-08 at 11:02 PM.

  10. #110
    The Lightbringer Evildeffy's Avatar
    15+ Year Old Account
    Join Date
    Jan 2009
    Location
    Nieuwegein, Netherlands
    Posts
    3,772
    Quote Originally Posted by Gaidax View Post
    Nvidia seems to handle DX12 just fine and with Volta around the corner they may as well go ahead and address rough edges they still have.
    It doesn't handle DX12's low level API, it emulates it via software.
    It's not really a rough edge as it is quite a big difference, however yet again I did not state nVidia being bad at all. I am simply stating architectural facts.

    Quote Originally Posted by Gaidax View Post
    Despite all the chest thumping from the Red side (honestly, the whole "poor Volta" thing is almost embarrassingly infantile, especially considering they get slaughtered by Pascal), we have yet to see any solid product that can truly compete with Nvidia. The whole current RX line is simply inferior; I can't fathom how it can be claimed otherwise when you can see from Polaris how insanely inferior it is to Pascal in just about everything besides the edge cases people keep grasping at like straws. When you need a whole ton more silicon and power to match your competitor in the mainstream range, and have a bleeding gap at the high end, that is definitely not the sign of the smart, efficient and forward-looking design some here try to portray.
    nVidia has used the same and even worse "infantile behaviour" to screw with AMD ... keep that well in mind.
    The RX line is not inferior; it has different pros and cons. They are solid cards in general and they do compete.
    Just because you view them as inferior does not make them factually inferior.
    Even though I do not like this reviewer, here's a video for you:



    Keep in mind that graphics cards go beyond "just gaming".
    Architectural differences exist for different purposes; you can't do a 1-on-1 comparison.

    Or should we consider nVidia "potentially catastrophic" because VR devs really loathe working with nVidia tech?

    Quote Originally Posted by Gaidax View Post
    Oh but it is better in Vulcan! In all the 3 titles that people somewhat give a damn about that run it.

    Vulcan is basically Mantle 2.0 - it has some hype to it, but it's all up to designers to make use of it, and they just don't, really, aside from a couple of hipster cases. Reality is simple here: EVEN if Vulcan actually gains traction, which is something that will take YEARS, by that time this whole argument will be irrelevant because both companies will support it very well with whatever products are relevant then.
    First off: It's Vulkan.
    Second off: it is a future standard to be used across all platforms, or do you know of a 100% Windows 10 adoption rate that the world does not?
    Vulkan has major developers in it working with it and whilst it will take some time these cards will still be relevant.
    The average person (the VAST majority) keeps their hardware, including graphics cards, for 4 - 6 years.
    Of course newer cards are out by then but is the vast majority then irrelevant? Developers and game studios seem to think not as it is their primary source of income.
    World of WarCraft is your prime example.. ancient and "irrelevant" and still a monstrosity in the MMO world.

    Quote Originally Posted by Gaidax View Post
    I'm not even saying the obvious: when 70% of the market is people with Nvidia GPUs in their rigs/laptops, you can be sure as hell everyone will make sure to be compatible with those solutions first.
    Actually you'd be incorrect, since Intel holds more than 80% of that market you refer to, and you don't see developers making optimizations for them.
    That said, which vendor a game is optimized for is something developers are actually tied into; the difference here is customer viewpoint.
    AMD could bring out the most powerful graphics cannon in the world by a factor of 10, and the vast majority would still pick nVidia, because "nVidia! That's why!".

    Quote Originally Posted by Gaidax View Post
    And if that is not bad enough already, Nvidia seems to have some success with pushing their unique solutions onto developers, which makes AMD cough blood. Gameworks is very effective at what it does - i.e. easy to utilize for instant eyecandy/hype - and it runs well on Nvidia GPUs and like shit on everything else, while AMD simply has nothing of that caliber to offer.
    GameWorks has mostly been made open source due to the fact that every developer (barring the nVidia-horny ones) hated their guts for it, and AMD's open-source variants are gaining more and more traction, because theirs was free vs. a paid black box that literally sabotaged the competition (this is illegal, btw).
    GameWorks is a blight upon the gaming community; luckily most developers realized this, and this forced nVidia into action, making most of it open source.

    Simply stated, you may think of AMD as garbage, but without them you'd be paying 500 USD for a GTX 1060; keep that well in mind.

  11. #111
    The Unstoppable Force Gaidax's Avatar
    10+ Year Old Account
    Join Date
    Sep 2013
    Location
    Israel
    Posts
    20,879
    Thanks?

    That review basically confirmed what I am saying - Polaris needs a whole ton more silicon and power to match its competitor in the mainstream range. Thanks for confirming this, as it is basically what the reviewer says.

    When a GPU with a 30% higher transistor count and 50% greater power consumption ends up being 2% faster on average than its direct competitor, that's all that needs to be said. It's shit, really, and it manages to keep up not because of a superior architecture, but simply because it brute-forces the problem.

    You literally demolished your own argument with that review. I can't fathom how you can claim Polaris to be a superior architecture when its advantage over its direct competitor is, in reality, a plain 30% transistor count. If anything, put in that perspective, it only shows how much more impressive and efficient the Nvidia solution is.
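The efficiency claim above reduces to two ratios; a quick sketch using the post's own numbers (30% more transistors, 50% more power, 2% more performance, none of them from an actual benchmark):

```python
# Relative efficiency implied by the post's figures (illustrative, not measured).
transistors_rel = 1.30   # 30% more transistors than the competitor
power_rel = 1.50         # 50% greater power consumption
perf_rel = 1.02          # 2% faster on average

print(f"perf per watt:       {perf_rel / power_rel:.2f}x")        # 0.68x
print(f"perf per transistor: {perf_rel / transistors_rel:.2f}x")  # 0.78x
```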
    Last edited by Gaidax; 2017-05-09 at 12:11 AM.

  12. #112
    580 vs 1060 is somewhat misleading

    the 580 is nothing more than a brute-forced, overvolted, overclocked 480

    a ~580 Nitro has literally just 5-6% max-OC headroom left, and that max OC gives only ~4% of actual fps gains (compared to factory OC)


    a 1060 gets 12-15% from stock to max OC, or 8-10% from factory OC to max OC


    so the 580 slightly beating a non-OCed 1060 is expected, but at max OC vs max OC they tie or the 1060 edges out ahead again (and this is not counting the power consumption/heat, and thus noise, under load)

    and this despite the 580 being technically a year later, so a 2017 vs 2016 GPU
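The OC-headroom argument above can be put into arithmetic. The OC percentages are the post's own estimates, and the ~3% factory-OC lead for the 580 is an assumed illustrative starting point, not a benchmark result:

```python
# OC headroom math for the 580 vs 1060 comparison above.
# All numbers are the post's estimates or stated assumptions, not measurements.

rx580_factory   = 1.03   # assumed: factory-OC 580 slightly ahead of a factory-OC 1060
gtx1060_factory = 1.00

rx580_max_gain   = 1.04  # ~4% fps from factory OC to max OC (per the post)
gtx1060_max_gain = 1.09  # 8-10% from factory OC to max OC (midpoint)

print(f"580 @ max OC:  {rx580_factory * rx580_max_gain:.3f}")
print(f"1060 @ max OC: {gtx1060_factory * gtx1060_max_gain:.3f}")
# the small factory-OC lead closes or flips at max OC, which is the post's point
```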



    also, midrange doesn't matter if we are talking about flagships (which is what Vega is); personally I find the 1080Ti a more impressive (relative to its segment) GPU than the 1060

    I think the 1060 segment is due for an update; the 9 Gbps version is ok, but nothing great


    which is why I think that Nvidia will let the 1080Ti hold the crown for some more and start the Volta release with a ~2060 (maybe a 2070 too), especially if the gaming Volta line release indeed begins in Fall 2017 .. the 1060 is the one that could use the biggest uplift since the 2016 era; a GV106 GPU would destroy the 580 & the smaller 1070-level Vega

    and hold the GTX 2080 off till 2018

  13. #113
    The Lightbringer Evildeffy's Avatar
    15+ Year Old Account
    Join Date
    Jan 2009
    Location
    Nieuwegein, Netherlands
    Posts
    3,772
    Quote Originally Posted by Gaidax View Post
    Thanks?

    That review basically confirmed what I am saying - Polaris needs a whole ton more silicon and power to match its competitor in the mainstream range. Thanks for confirming this, as it is basically what the reviewer says.

    When a GPU with a 30% higher transistor count and 50% greater power consumption ends up being 2% faster on average than its direct competitor, that's all that needs to be said. It's shit, really, and it manages to keep up not because of a superior architecture, but simply because it brute-forces the problem.

    You literally demolished your own argument with that review. I can't fathom how you can claim Polaris to be a superior architecture when its advantage over its direct competitor is, in reality, a plain 30% transistor count. If anything, put in that perspective, it only shows how much more impressive and efficient the Nvidia solution is.
    And yet you fail to see the simple point I am making.

    I also did not EVER state Polaris to be the "superior architecture" ... I stated that when it comes to actual engineering it is more advanced, and this is a pure fact.

    Let me ask you this, since you fail to grasp the problem: do you believe DX12/Vulkan to be the future type of APIs, or do you believe serial processing is still the future?

    Seeing as I'm assuming you will agree it is the way forward, I'll ask you the following:
    How do you think nVidia will progress from their entirely serial-processing-based line-up to a parallel-based one?
    Do you believe they will remain as power efficient as they are now, and remain as "small" in die size as they are?
    Keep in mind these 2 designs are mutually exclusive; you can't have the best of both worlds (yet).

    The answer is that nVidia will face the same limitations and more since AMD is very well experienced in these architecture designs.
    nVidia is not; technically, their Kepler architecture would be a better low level API card than Maxwell or Pascal.

    I'll give you another hint: The roles have been reversed in the past with the GTX 400/500 series and HD5000/HD6000 series.
    AMD adopted the same design style and was successful with it, but saw its limitations and developed toward future capabilities.
    nVidia actually went away from this scheme because of the "flame thrower" cards that damaged their reputation.

    And certainly, back then it wasn't the time for it; too early and too fast, for either company.

    But now that low level APIs are in full development, things will rush forward, and sticking with the old will start being a bad idea.
    It may indeed take a few more years, but it would be comparable to bringing a Nokia 3310 (the old one) up against a Samsung Galaxy S8+ in a smartphone battle.

    It will not end gracefully.

  14. #114
    Vulkan isn't the future of anything; it still has literally only 1 hit AAA game released and 0 announced

    MS & Windows will throttle it, if they haven't already


    DX12 is inevitable; at some point they will have to stop making DX11 games and only make true DX12 games (not like the current DX12 ones), but IMO that is 2019+, if not 2020+ territory, if not even later .. current discussions will be entirely irrelevant by then, 2-3+ years is an eternity in the GPU world, especially nowadays

  15. #115
    The Lightbringer Evildeffy's Avatar
    15+ Year Old Account
    Join Date
    Jan 2009
    Location
    Nieuwegein, Netherlands
    Posts
    3,772
    Quote Originally Posted by Life-Binder View Post
    580 vs 1060 is somewhat misleading

    the 580 is nothing more than a brute-forced, overvolted, overclocked 480

    a ~580 Nitro has literally just 5-6% max-OC headroom left, and that max OC gives only ~4% of actual fps gains (compared to factory OC)

    a 1060 gets 12-15% from stock to max OC, or 8-10% from factory OC to max OC


    so the 580 slightly beating a non-OCed 1060 is expected, but at max OC vs max OC they tie or the 1060 edges out ahead again (and this is not counting the power consumption/heat, and thus noise, under load)

    and this despite the 580 being technically a year later, so a 2017 vs 2016 GPU
    The discussion was about how Polaris was a failure, which it really is not.
    Also, you are comparing a 1450-clocked Polaris to a 1380-clocked Polaris; if you look at the ones presented, they have almost the same headroom.
    You can't really play with the numbers like that.

    However, we're talking OC vs. OC in the article, so ... that argument doesn't fly.
    Also, it is compared to the Gaming X+ variant ... so technically also a "2017" card.

    Quote Originally Posted by Life-Binder View Post
    also, midrange doesn't matter if we are talking about flagships (which is what Vega is); personally I find the 1080Ti a more impressive (relative to its segment) GPU than the 1060

    I think the 1060 segment is due for an update; the 9 Gbps version is ok, but nothing great

    which is why I think that Nvidia will let the 1080Ti hold the crown for some more and start the Volta release with a ~2060 (maybe a 2070 too), especially if the gaming Volta line release indeed begins in Fall 2017 .. the 1060 is the one that could use the biggest uplift since the 2016 era; a GV106 GPU would destroy the 580 & the smaller 1070-level Vega

    and hold the GTX 2080 off till 2018
    We know nothing of Vega yet, nor its spin-offs.
    Assuming would be the mother of all fuck-ups; it could be atrocious and it could be fantastic... who knows?

    Just like you shouldn't make assumptions with Volta.

    Just be aware (general statement) that if nVidia follows the same parallelization route, it WILL lose all the power efficiency it's touting now.

    - - - Updated - - -

    Quote Originally Posted by Life-Binder View Post
    Vulkan isn't the future of anything; it still has literally only 1 hit AAA game released and 0 announced

    MS & Windows will throttle it, if they haven't already
    Vulkan is actually growing and will be incorporated into quite a few game engines; MS cannot throttle it.
    id Software's engine, Unreal Engine, CryEngine, etc.
    Not to mention Vulkan WILL be on every single Android device out there; MS cannot throttle this, as much as they want to.

    Quote Originally Posted by Life-Binder View Post
    DX12 is inevitable; at some point they will have to stop making DX11 games and only make true DX12 games (not like the current DX12 ones), but IMO that is 2019+, if not 2020+ territory, if not even later .. current discussions will be entirely irrelevant by then, 2-3+ years is an eternity in the GPU world, especially nowadays
    And considering people use their PCs, including graphics cards, for 4 - 6 years, and that DX12 and Vulkan will deploy faster than 2019 ... does that make AMD worthless?

  16. #116
    The Unstoppable Force Gaidax's Avatar
    10+ Year Old Account
    Join Date
    Sep 2013
    Location
    Israel
    Posts
    20,879
    Quote Originally Posted by Life-Binder View Post
    Vulkan isn't the future of anything; it still has literally only 1 hit AAA game released and 0 announced

    MS & Windows will throttle it, if they haven't already


    DX12 is inevitable; at some point they will have to stop making DX11 games and only make true DX12 games (not like the current DX12 ones), but IMO that is 2019+, if not 2020+ territory, if not even later .. current discussions will be entirely irrelevant by then, 2-3+ years is an eternity in the GPU world, especially nowadays
    This, basically; it's the thing I have been trying to say from the beginning, and it does not even seem to sink in.

    Again, that dude can write whole essays, but the fact is clear - AMD does not have it - all it does is what AMD always does: brute force with an inferior solution. The edge it has in DX12 and Vulcan is often simply a case of pumping out bigger and hungrier chips and not much else. That's why you have this whole situation where Nvidia TFLOPS are not equal to AMD ones; the technological edge is actually on Nvidia's side, simply because they achieve the same result with less computational power.

    And from the looks of it, Vega will be another such failure, where AMD will try to win by making really big guns instead of actually smart guns.

  17. #117
    I will make as many assumptions as I like until official info is released, thank you very much

    doesn't take a genius to see that Volta will have a pretty substantial uplift in IPC over Pascal (which is very similar to Maxwell, which is 2013-2014 era arch) .. also, final overall performance may be helped by the "12 nm" process

    the Vega 10 gen can maybe match Pascal (vs the 1080Ti, TBD), but it won't match Volta .. that's what Vega 20 and Navi will be for


    Just be aware (general statement) that if nVidia follows the same parallelization route, it WILL lose all the power efficiency it's touting now.
    who's the one assuming now?




    However, we're talking OC vs. OC in the article, so ... that argument doesn't fly.
    custom 1060s have more manual OC headroom left in them than custom 580s

  18. #118
    The Lightbringer Evildeffy's Avatar
    15+ Year Old Account
    Join Date
    Jan 2009
    Location
    Nieuwegein, Netherlands
    Posts
    3,772
    Quote Originally Posted by Life-Binder View Post
    I will make as many assumptions as I like until official info is released, thank you very much

    doesn't take a genius to see that Volta will have a pretty substantial uplift in IPC over Pascal (which is very similar to Maxwell, which is 2013-2014 era arch) .. also, final overall performance may be helped by the "12 nm" process

    the Vega 10 gen can maybe match Pascal (vs the 1080Ti, TBD), but it won't match Volta .. that's what Vega 20 and Navi will be for

    who's the one assuming now?
    Did I state, ANYWHERE, that Vega would match Volta? I did not.
    I stated that Volta (an architecture which is already 2 years "too late"), if it follows the same parallelization route, which at some point it must, will NOT have the same power efficiency; this is not an assumption, this is fact.
    Adding HWSes and more graphics/compute pipelines in a parallel fashion WILL cost you efficiency; this is an architectural fact.
    You may "rolleyes" emote all you want, but every hardware engineer will tell you that you have to make trade-offs for every choice in hardware.

    There is no "Be-all, end-all" solution to this area of technology.

    Quote Originally Posted by Life-Binder View Post
    custom 1060s have more manual OC headroom left in them than custom 580s
    In that very article, and even without the Sapphire Nitro, it shows them being pretty much equal.
    Gamers Nexus equated a 1360MHz RX 480 to an 1800MHz GTX 1060.
    From this point on you can do the math when it comes to ceilings.
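"Doing the math" on that clock-for-clock comparison looks like this; a crude linear projection from the single data point above (the 1919 MHz figure is extrapolated, not measured):

```python
# If a 1360 MHz RX 480 matches an 1800 MHz GTX 1060, the implied per-clock
# ratio lets you project rough parity points for other clocks.
# This is a crude linear model built on one data point, for illustration only.

ipc_ratio = 1800 / 1360   # ~1.32: the 1060 needs ~32% more clock to match
print(f"implied per-clock advantage of the 480: {ipc_ratio:.2f}x")

# e.g. a 1450 MHz RX 580 would need roughly this 1060 clock to be matched:
print(f"1060 clock to match a 1450 MHz 580: ~{1450 * ipc_ratio:.0f} MHz")
```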

  19. #119
    I don't think the midrange is going to be refreshed for a LONG time. A 1060 is an absolute monster compared to a 960; it's basically a GTX 980. That's why I was confident buying it a year after its release.

  20. #120
    Quote Originally Posted by Barnabas View Post
    They have 2 models to compete against both.
    Ah, I see.
    "Every country has the government it deserves."
    Joseph de Maistre (1753 – 1821)

