  1. #501
AMD has been ahead of Nvidia in pure TFLOPS for a long time now.
Not really true anymore, because Nvidia's official TFLOPS figures are based only on the low rated boost clock.

In reality even the FE cards boost above that, and aftermarket cards boost way above it.

Meanwhile Vega FE's 12.5-13 TFLOPS figure is based on the 1600 MHz boost clock, which the stock air-cooled FE can never reach and hold, and even the liquid-cooled FE needs the non-default 350W TDP and a raised power target to sustain it.

Before this generation the claim might have been truer, but even then, real-world aftermarket 980 Ti clocks, for example, are 200-250 MHz higher than the official rated boost from which the TFLOPS are calculated.
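For reference, the peak TFLOPS figures everyone quotes are just shader count x 2 FMA ops per clock x clock speed. A quick sketch with illustrative shader counts and clocks (treat the exact clocks as examples, not measurements):

# Peak FP32 TFLOPS = shaders * 2 ops per clock (FMA) * clock.
def tflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz / 1e6

print(round(tflops(2560, 1733), 1))  # GTX 1080 at its rated boost clock: ~8.9
print(round(tflops(2560, 1911), 1))  # the same card at a typical real-world boost: ~9.8
print(round(tflops(4096, 1600), 1))  # Vega FE (4096 shaders) at the quoted 1600 MHz: ~13.1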

  2. #502
    Quote Originally Posted by Evildeffy View Post
    Smaller physical size, reduced memory controller size on silicon, dual channel architecture for increased bandwidth with lower bus sizes, CONSIDERABLY less voltage/wattage requirements... just to name a few.
    There are more benefits to it than pure speed.
The package for GDDR6 is actually larger. Voltage requirements are exactly the same: it's specifically designed to use the exact power circuitry GDDR5X uses right now. Wattage we have no idea about yet (voltages are the same, it's a smaller fabrication process, but it uses a dual-channel architecture). Bandwidth is roughly the same as GDDR5X for now and is expected to go up as the manufacturing process matures. The dual-channel architecture doesn't offer much right now but will allow for a sharper increase in bandwidth in the future. So yeah, currently the benefits of GDDR6 are unclear: the architecture is there, but the technology needs time to develop. Also, there is no way a two-channel memory controller is going to be smaller than a single-channel one.
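To put rough numbers on the bandwidth point: per-card bandwidth is just per-pin data rate times bus width, and GDDR6's change is splitting each chip's 32-bit interface into two 16-bit channels rather than widening it, so at equal data rates the totals come out the same. A small sketch with illustrative data rates:

# Card memory bandwidth (GB/s) = per-pin data rate (Gb/s) * bus width (bits) / 8.
def bandwidth_gbs(data_rate_gbps, bus_width_bits):
    return data_rate_gbps * bus_width_bits / 8

print(bandwidth_gbs(10, 256))  # GTX 1080-style 10 Gb/s GDDR5X on 256-bit: 320 GB/s
print(bandwidth_gbs(12, 256))  # faster GDDR5X bin on 256-bit: 384 GB/s
print(bandwidth_gbs(14, 256))  # early 14 Gb/s GDDR6 on the same 256-bit bus: 448 GB/s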

    Quote Originally Posted by Evildeffy View Post
It depends; the way things look, it's not good for AMD's graphics division... that said, we are assuming things before we see them.
I'm not expecting much, but I'll await the reveal and reviews before making judgement.
Power consumption is less of an issue than you think, unless RX Vega gobbles down 400W, but we'll see if it's worth a damn in performance.

Like you said, don't expect too much from RX Vega, but also don't expect nVidia to break the laws of time and physics and ship Volta this year when its memory subsystem won't begin mass production until 2018.
And just saying this once again... you cannot use a GDDR5(X) memory controller to run GDDR6 memory, so the silicon either needs 2 separate IMCs or nVidia spends double the amount on R&D to create 2 different dies for each tier of graphics cards, which I do not see happening.
I actually think that 400W is not a limit; partner water-cooled cards (on the top silicon) are going to break it easily. For most people, who currently use 500-600W PSUs, it's going to mean buying a new PSU along with the card.

I'm pretty sure that no matter how successful Micron is with GDDR6 production, there is no way they can make enough of it to supply all Volta cards, so at least the "2060" and below will still use GDDR5/5X (GDDR5 is also made by Samsung and Hynix). So Nvidia will develop Volta silicon with GDDR5/5X memory controllers either way.

  3. #503
Evildeffy (The Lightbringer) | Join Date: Jan 2009 | Location: Nieuwegein, Netherlands | Posts: 3,772
    Quote Originally Posted by Zenny View Post
As per your own links, the GeForce 1080 launched over a month earlier than when Micron indicated mass production would start; they only actually confirmed May after Nvidia announced the 1080.
How is being revealed on the 6th of May 2016 "a month earlier" than a post from Micron themselves on the 10th of May, the start of the following week, stating that GDDR5X was already in mass production?
The only other estimate was "Summer" and nothing else concrete, meaning the prototyping phase was already well underway and samples would have been sent out to the board partners that were going to use it.

That said, yes, they only confirmed it after the event so the two would coincide, but they had confirmed before that it would start "very soon".
You still can't bend the laws of time and space: you can't produce something for commercial availability if its components aren't in production.
(Official Micron statement here btw: Linketylink)

    Quote Originally Posted by Zenny View Post
Based on that it's perfectly possible for Volta to launch in January with GDDR6. That being said, Volta is not dependent on GDDR6, as a hypothetical GeForce 2080 could launch with GDDR5X memory and still have 60% more bandwidth than the 1080.
If mass production of GDDR6 has started by January, then by the end of January you could start to see Volta cards, yes; you won't, however, see them "in 3 or 4 months" from now if GDDR6 isn't finalized.
And if Volta launches with GDDR5(X), then the entire Volta range will use GDDR5(X) and GDDR6 will not be used, which would also mean SK Hynix's announcement was fraudulent and GDDR6 would go unused for roughly 1-1.5 years... doesn't seem good for business, does it?

Also, I never stated it wouldn't launch in 2018; I specifically stated it would. I simply said it's not going to in 2017 because of the aforementioned reasons.

Just repeating this once more: GDDR6 != GDDR5(X). It needs a different controller, and you won't have both controllers on one graphics card, nor will nVidia do two different die designs and validations.
This is why AMD's Fury (X) and Vega cannot support GDDR5X either: HBM memory controller.

    Quote Originally Posted by Zenny View Post
Consumer Volta won't actually be as monstrous as the Tesla variant; all those Tensor and FP64 cores will get removed, drastically shrinking the die size. As for TFLOPS, the difference is slight at the high end. Most 1080 Ti cards can boost to over 14 TFLOPS with an easy overclock. It takes a Vega card clocked at almost 1750 MHz to equal that.
1,709 MHz, to be exact.
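(That's just the peak-FLOPS formula solved for clock, assuming Vega's 4096 shaders: 14 TFLOPS / (2 x 4096) ≈ 1,709 MHz.)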

    - - - Updated - - -

    Quote Originally Posted by Thunderball View Post
The package for GDDR6 is actually larger. Voltage requirements are exactly the same: it's specifically designed to use the exact power circuitry GDDR5X uses right now. Wattage we have no idea about yet (voltages are the same, it's a smaller fabrication process, but it uses a dual-channel architecture). Bandwidth is roughly the same as GDDR5X for now and is expected to go up as the manufacturing process matures. The dual-channel architecture doesn't offer much right now but will allow for a sharper increase in bandwidth in the future. So yeah, currently the benefits of GDDR6 are unclear: the architecture is there, but the technology needs time to develop. Also, there is no way a two-channel memory controller is going to be smaller than a single-channel one.
I think you're reading the GDDR5 to GDDR5X specs here... GDDR6 by all standards is smaller, with lower power consumption (1.35V instead of 1.8V for GDDR5(X), for example). Of course it'll use the same power circuitry, so does any GDDR and HBM, which is a bit of a weird thing to say, but no, the pinout is different and so are the pathways, along with the rest of the bunch.
Not sure where you got your information from, but these are the GDDR6 specs; a little bit weird.

    Quote Originally Posted by Thunderball View Post
I actually think that 400W is not a limit; partner water-cooled cards (on the top silicon) are going to break it easily. For most people, who currently use 500-600W PSUs, it's going to mean buying a new PSU along with the card.
It's still not going to be as big a problem as you may think. Most builders who use this class of hardware have more than enough PSU anyway (OEM systems not included), and the power consumption isn't going to change the electricity bill much; that was my point.
If anything, cooling it is a bigger concern than anyone giving a hoot about power consumption.

    Quote Originally Posted by Thunderball View Post
I'm pretty sure that no matter how successful Micron is with GDDR6 production, there is no way they can make enough of it to supply all Volta cards, so at least the "2060" and below will still use GDDR5/5X (GDDR5 is also made by Samsung and Hynix). So Nvidia will develop Volta silicon with GDDR5/5X memory controllers either way.
If so, that would be a first: a new chip (not a rebrand or shrink of an existing die) not using the new memory architecture, for either nVidia or AMD.
But it's possible... it still doesn't change the fact it won't launch this year, as nVidia always launches the big chips first because that attracts headlines.
Headlines = PR = more money... you get the drill.

  5. #505
Zenny (Warchief) | Join Date: Oct 2011 | Location: South Africa | Posts: 2,171
    Quote Originally Posted by Evildeffy View Post
How is being revealed on the 6th of May 2016 "a month earlier" than a post from Micron themselves on the 10th of May, the start of the following week, stating that GDDR5X was already in mass production?
The only other estimate was "Summer" and nothing else concrete, meaning the prototyping phase was already well underway and samples would have been sent out to the board partners that were going to use it.
They estimated Summer as when mass production would start; May does not fall in the summer. Even the article you linked notes:

    that GDDR5X video card memory has already entered mass production. The product wasn’t expected to become available until this summer
Micron themselves confirmed after the 1080 launch event that they had started mass production of GDDR5X, over a month ahead of schedule. I'm guessing that for them mass production = Nvidia launching a GPU that uses the memory technology.

Quote Originally Posted by Evildeffy View Post
That said, yes, they only confirmed it after the event so the two would coincide, but they had confirmed before that it would start "very soon".
You still can't bend the laws of time and space: you can't produce something for commercial availability if its components aren't in production.
(Official Micron statement here btw: Linketylink)

If mass production of GDDR6 has started by January, then by the end of January you could start to see Volta cards, yes; you won't, however, see them "in 3 or 4 months" from now if GDDR6 isn't finalized.
And if Volta launches with GDDR5(X), then the entire Volta range will use GDDR5(X) and GDDR6 will not be used, which would also mean SK Hynix's announcement was fraudulent and GDDR6 would go unused for roughly 1-1.5 years... doesn't seem good for business, does it?

Also, I never stated it wouldn't launch in 2018; I specifically stated it would. I simply said it's not going to in 2017 because of the aforementioned reasons.
I don't think it will launch this year myself (although the possibility is always there); I've pegged it as releasing in Jan/Feb 2018, as that would fit nicely with both Micron's statements and Nvidia's own release schedule.

  6. #506
Evildeffy (The Lightbringer) | Join Date: Jan 2009 | Location: Nieuwegein, Netherlands | Posts: 3,772
    Quote Originally Posted by Zenny View Post
They estimated Summer as when mass production would start; May does not fall in the summer. Even the article you linked notes:
My apologies, that was a misreading on my part.
I somehow thought you meant that nVidia's GTX 1080 was revealed a month before Micron mentioned already being in mass production.
Hence my reply to that.

    Quote Originally Posted by Zenny View Post
Micron themselves confirmed after the 1080 launch event that they had started mass production of GDDR5X, over a month ahead of schedule. I'm guessing that for them mass production = Nvidia launching a GPU that uses the memory technology.
Considering that nVidia is LITERALLY their only client for GDDR5X... yeah, it would mean that. They simply made the two coincide, as Micron was very likely more successful with their prototype sampling than they had expected.

    Quote Originally Posted by Zenny View Post
I don't think it will launch this year myself (although the possibility is always there); I've pegged it as releasing in Jan/Feb 2018, as that would fit nicely with both Micron's statements and Nvidia's own release schedule.
Try telling that to the person who insulted me for even daring to say that nVidia will not release Volta in 2017.
My post was literally just stating that, with all the known information.

  7. #507
Unless AMD has a major performance surprise for us, we won't see Volta in 2017; there's no need. Nvidia is very reactive to the market.

  8. #508
    Quote Originally Posted by Evildeffy View Post
I think you're reading the GDDR5 to GDDR5X specs here... GDDR6 by all standards is smaller, with lower power consumption (1.35V instead of 1.8V for GDDR5(X), for example). Of course it'll use the same power circuitry, so does any GDDR and HBM, which is a bit of a weird thing to say, but no, the pinout is different and so are the pathways, along with the rest of the bunch.
Not sure where you got your information from, but these are the GDDR6 specs; a little bit weird.
14x12 mm package for GDDR6 with a 0.75 mm ball pitch (confirmed for both Micron and SK Hynix); 14x10 mm with a 0.65 mm pitch for GDDR5X. Also, GDDR5X operates at 1.35V, you should know that.

    https://www.extremetech.com/wp-conte...X-vs-GDDR6.png
    http://www.tweaktown.com/image.php?i...-bandwidth.jpg

    Those are from Micron and Hynix presentations, obviously.

    Quote Originally Posted by Evildeffy View Post
It's still not going to be as big a problem as you may think. Most builders who use this class of hardware have more than enough PSU anyway (OEM systems not included), and the power consumption isn't going to change the electricity bill much; that was my point.
If anything, cooling it is a bigger concern than anyone giving a hoot about power consumption.
It's going to be a huge problem even if thermals are fine. Those power figures would require 3 8-pin power connectors, which most PSUs simply don't have. Air cooling is out of the question, and water-cooled cards are going to require a full-cover design.

  9. #509
Evildeffy (The Lightbringer) | Join Date: Jan 2009 | Location: Nieuwegein, Netherlands | Posts: 3,772
    Quote Originally Posted by Thunderball View Post
14x12 mm package for GDDR6 with a 0.75 mm ball pitch (confirmed for both Micron and SK Hynix); 14x10 mm with a 0.65 mm pitch for GDDR5X. Also, GDDR5X operates at 1.35V, you should know that.

    https://www.extremetech.com/wp-conte...X-vs-GDDR6.png
    http://www.tweaktown.com/image.php?i...-bandwidth.jpg

    Those are from Micron and Hynix presentations, obviously.
Mweh... I read entirely different things; perhaps I was on drugs or something at that point.
Though package size is a step down from GDDR5 but apparently not from GDDR5X.

I'm going to look for that damned article and the sizes, because I'm sure I read it had bigger advantages over 5X... but until I find it I'll concede on the savings.

However, that doesn't change the fact that Volta will be either A or B, not both.
Different memory subsystem, different memory controllers.

    Quote Originally Posted by Thunderball View Post
It's going to be a huge problem even if thermals are fine. Those power figures would require 3 8-pin power connectors, which most PSUs simply don't have. Air cooling is out of the question, and water-cooled cards are going to require a full-cover design.
2 x 8-pin plus PCIe slot power is enough for 375W; both can easily supply way more, but those are the "official" specs.
So you don't really need 3 x 8-pin for 400W consumption.
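(The arithmetic behind that figure: 75W from the PCIe slot + 2 x 150W from the 8-pin connectors = 375W of "official" budget, going by the spec limits rather than what the connectors can physically deliver.)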

Any PSU above 600W, possibly 550W, has (or should have) 3-4 PCIe connectors, which most builders with this class of hardware will easily have.
Unless of course said builder used a pile-of-shit PSU that can't even pull 300W from the wall without being dangerously close to going boom.

  10. #510
    Quote Originally Posted by Evildeffy View Post
Mweh... I read entirely different things; perhaps I was on drugs or something at that point.
Though package size is a step down from GDDR5 but apparently not from GDDR5X.

I'm going to look for that damned article and the sizes, because I'm sure I read it had bigger advantages over 5X... but until I find it I'll concede on the savings.

However, that doesn't change the fact that Volta will be either A or B, not both.
Different memory subsystem, different memory controllers.
Samsung had presentations about GDDR6 with that kind of notion, although they have yet to announce their GDDR6 chips despite being the first manufacturer to start talking about it. Micron recently said that some of their best GDDR6 chips can achieve 14-15 Gb/s throughput, but also added that their best GDDR5X chips are capable of the same thing.

Pascal is on both GDDR5 and 5X (HBM2 also, if you count GP100); I don't see why Volta cannot use both 5X and 6.

    Quote Originally Posted by Evildeffy View Post
2 x 8-pin plus PCIe slot power is enough for 375W; both can easily supply way more, but those are the "official" specs.
So you don't really need 3 x 8-pin for 400W consumption.

Any PSU above 600W, possibly 550W, has (or should have) 3-4 PCIe connectors, which most builders with this class of hardware will easily have.
Unless of course said builder used a pile-of-shit PSU that can't even pull 300W from the wall without being dangerously close to going boom.
    375W on air is not enough for Vega FE to maintain 1600 MHz, and RX Vega will have to push a lot higher to be competitive.

Also, it's recommended to have a ">850W" PSU for Vega FE, go figure.

  11. #511
    Quote Originally Posted by Evildeffy View Post
Mweh... I read entirely different things; perhaps I was on drugs or something at that point.
Though package size is a step down from GDDR5 but apparently not from GDDR5X.

I'm going to look for that damned article and the sizes, because I'm sure I read it had bigger advantages over 5X... but until I find it I'll concede on the savings.

However, that doesn't change the fact that Volta will be either A or B, not both.
Different memory subsystem, different memory controllers.


2 x 8-pin plus PCIe slot power is enough for 375W; both can easily supply way more, but those are the "official" specs.
So you don't really need 3 x 8-pin for 400W consumption.

Any PSU above 600W, possibly 550W, has (or should have) 3-4 PCIe connectors, which most builders with this class of hardware will easily have.
Unless of course said builder used a pile-of-shit PSU that can't even pull 300W from the wall without being dangerously close to going boom.
I mean, the recommended minimum PSU spec for the 1080 (180W TDP) is 500W, and the recommended minimum for the 1080 Ti (250W TDP) is 600W. If RX Vega has a 400W TDP and the PSU requirements scale linearly, chances are the recommended minimum PSU is going to be 800 or 850W. How many people actually purchase an 850+W PSU for a single-GPU system? I suspect the number is extremely small, because it's nearly a $100 price premium over a decent 500W PSU and a complete waste of money. If AMD is going to require a $100 premium in PSU requirements over the card it's competing with from a performance perspective, they damn well better take that $100 off the MSRP of the card. Otherwise, why bother? That still doesn't take into account that the delta in power usage is going to cost a typical user something like $50 more on their power bill a year.

And then you have the issues with heat, airflow and case sizing. Even a 1080 Ti blower card works 100% fine in a microATX case or in an OEM case (as long as you have the PSU for it). Even if they can make a 400W card work on air, it's not going to work in a lot of the cases people actually have. Unless you have a $150+ enthusiast-level case, it probably won't have the airflow. Equally, OEM/mATX/budget cases probably also don't have the room to throw in a watercooling setup either.

The whole thing has disaster written all over it. It's one thing to have ridiculous power and cooling requirements on a card hitting the highest-end enthusiast range of the market, like the 295X2 did, because people at that price/performance range have the high-end setup to make it work, and the card itself has the performance to back those beefy requirements up. If this card is really going to be a 1080 in terms of performance, that's more the high end of the mainstream, where you just can't make those assumptions.
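Roughly what that linear scaling looks like, sketched out (the 500W and 600W figures are Nvidia's published recommendations; the straight-line extrapolation to a 400W card is my own assumption, not an official number):

# Linear fit of recommended PSU wattage vs. card TDP, anchored on Nvidia's published
# recommendations (1080: 180W TDP -> 500W PSU; 1080 Ti: 250W TDP -> 600W PSU),
# then extrapolated to a hypothetical 400W TDP card.
def recommended_psu(tdp_w):
    slope = (600 - 500) / (250 - 180)  # roughly 1.43W of recommended PSU per W of TDP
    return 500 + slope * (tdp_w - 180)

print(round(recommended_psu(400)))  # ~814 -> an "800-850W" class recommendation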

  12. #512
Evildeffy (The Lightbringer) | Join Date: Jan 2009 | Location: Nieuwegein, Netherlands | Posts: 3,772
    Quote Originally Posted by Thunderball View Post
Samsung had presentations about GDDR6 with that kind of notion, although they have yet to announce their GDDR6 chips despite being the first manufacturer to start talking about it. Micron recently said that some of their best GDDR6 chips can achieve 14-15 Gb/s throughput, but also added that their best GDDR5X chips are capable of the same thing.

Pascal is on both GDDR5 and 5X (HBM2 also, if you count GP100); I don't see why Volta cannot use both 5X and 6.
From a design standpoint it is not really a cheap thing to do.
GDDR5(X) is an exception, since it was specifically designed to work with GDDR5 controllers; the X is for Extended.

    GDDR6 will not be so lucky.

    Quote Originally Posted by Thunderball View Post
    375W on air is not enough for Vega FE to maintain 1600 MHz, and RX Vega will have to push a lot higher to be competitive.

Also, it's recommended to have a ">850W" PSU for Vega FE, go figure.
Actually 375W is plenty; the problem is that the GPU is HEAVILY overvolted.
Gamers Nexus has a video up on this HERE!
It is highly likely that adjustments will be made in drivers to lower this. GN managed just 268W @ 1600 MHz stable, with temps under stress testing in the 50s max, granted at 3.5K RPM fan speed, but this can be achieved with less as well.

As far as recommended PSUs go... not that hard to understand, as both AMD and nVidia build in a lot of safety margin; AMD's is just bigger.
You can easily run a 7700K OCed to 5GHz with all power settings cranked to insanity alongside the FE on a 600W PSU.
That's an added 150W right there on top of the FE's power drain.

Most people who aren't builders have horrendously bad PSUs; that's why the recommendation is higher, as the 12V rail(s) will be loaded pretty heavily.

    - - - Updated - - -

    Quote Originally Posted by Tiberria View Post
I mean, the recommended minimum PSU spec for the 1080 (180W TDP) is 500W, and the recommended minimum for the 1080 Ti (250W TDP) is 600W. If RX Vega has a 400W TDP and the PSU requirements scale linearly, chances are the recommended minimum PSU is going to be 800 or 850W. How many people actually purchase an 850+W PSU for a single-GPU system? I suspect the number is extremely small, because it's nearly a $100 price premium over a decent 500W PSU and a complete waste of money. If AMD is going to require a $100 premium in PSU requirements over the card it's competing with from a performance perspective, they damn well better take that $100 off the MSRP of the card. Otherwise, why bother? That still doesn't take into account that the delta in power usage is going to cost a typical user something like $50 more on their power bill a year.
The power cost between a GTX 1080 and an R9 290X @ 300W was calculated multiple times on this forum, and the difference over a year ended up being 15-20 USD; that's 150W vs. 300W for 8 hours a day @ 100% load, 365 days a year.
Also, an 850W+ PSU is not going to command a 500 USD premium... where the hell did you get that from?

Most people with this class of hardware definitely have 750W or higher from a proper brand; they will have plenty of power with that.

Though yes, it is a con in comparison, no question; I'm simply saying it really isn't as big as people make it out to be.

    Quote Originally Posted by Tiberria View Post
And then you have the issues with heat, airflow and case sizing. Even a 1080 Ti blower card works 100% fine in a microATX case or in an OEM case (as long as you have the PSU for it). Even if they can make a 400W card work on air, it's not going to work in a lot of the cases people actually have. Unless you have a $150+ enthusiast-level case, it probably won't have the airflow. Equally, OEM/mATX/budget cases probably also don't have the room to throw in a watercooling setup either.

The whole thing has disaster written all over it. It's one thing to have ridiculous power and cooling requirements on a card hitting the highest-end enthusiast range of the market, like the 295X2 did, because people at that price/performance range have the high-end setup to make it work, and the card itself has the performance to back those beefy requirements up. If this card is really going to be a 1080 in terms of performance, that's more the high end of the mainstream, where you just can't make those assumptions.
I can actually link you budget cases that have plenty of space; for example, in an mITX form factor the Cooler Master Elite 130 will do.
That's a 30-40 USD case. Regardless, the blower-style card will work in more cases than you give it credit for, precisely BECAUSE it's a blower design.
Had it been a standard open-air cooler you might've had a point, but then we come back to the following:

    What person will put this class of card in a budget environment with budget gear?

You really can't base assumptions on low-end machines when we're talking about people who will drop in a 1K+ card that isn't meant for the low end.

Now, when it comes to RX Vega... we simply don't know yet; of course it's related, but changes may have been made.
Just wait it out and see at launch and in the reviews.

  13. #513
    Quote Originally Posted by Evildeffy View Post
From a design standpoint it is not really a cheap thing to do.
GDDR5(X) is an exception, since it was specifically designed to work with GDDR5 controllers; the X is for Extended.

    GDDR6 will not be so lucky.


Actually 375W is plenty; the problem is that the GPU is HEAVILY overvolted.
Gamers Nexus has a video up on this HERE!
It is highly likely that adjustments will be made in drivers to lower this. GN managed just 268W @ 1600 MHz stable, with temps under stress testing in the 50s max, granted at 3.5K RPM fan speed, but this can be achieved with less as well.

As far as recommended PSUs go... not that hard to understand, as both AMD and nVidia build in a lot of safety margin; AMD's is just bigger.
You can easily run a 7700K OCed to 5GHz with all power settings cranked to insanity alongside the FE on a 600W PSU.
That's an added 150W right there on top of the FE's power drain.

Most people who aren't builders have horrendously bad PSUs; that's why the recommendation is higher, as the 12V rail(s) will be loaded pretty heavily.

    - - - Updated - - -


The power cost between a GTX 1080 and an R9 290X @ 300W was calculated multiple times on this forum, and the difference over a year ended up being 15-20 USD; that's 150W vs. 300W for 8 hours a day @ 100% load, 365 days a year.
Also, an 850W+ PSU is not going to command a 500 USD premium... where the hell did you get that from?

Most people with this class of hardware definitely have 750W or higher from a proper brand; they will have plenty of power with that.

Though yes, it is a con in comparison, no question; I'm simply saying it really isn't as big as people make it out to be.


I can actually link you budget cases that have plenty of space; for example, in an mITX form factor the Cooler Master Elite 130 will do.
That's a 30-40 USD case. Regardless, the blower-style card will work in more cases than you give it credit for, precisely BECAUSE it's a blower design.
Had it been a standard open-air cooler you might've had a point, but then we come back to the following:

    What person will put this class of card in a budget environment with budget gear?

You really can't base assumptions on low-end machines when we're talking about people who will drop in a 1K+ card that isn't meant for the low end.

Now, when it comes to RX Vega... we simply don't know yet; of course it's related, but changes may have been made.
Just wait it out and see at launch and in the reviews.
    The $500 was a typo; I later corrected it to $100.

The 290X has a TDP of 290W; the 1080 has a TDP of 180W, a 110W difference. It appears that Vega has a ~400W TDP, which is a 220W difference from a 1080 and a 150W difference from a 1080 Ti, so it's a lot more of an electrical cost differential than the example you gave. At 6 hrs a day of usage and 12c/kWh, you're looking at ~$50 per year in added power costs. The performance has to warrant that.
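(Checking that arithmetic: 220W x 6 hours x 365 days is about 482 kWh a year, and 482 kWh x $0.12 is roughly $58, so ~$50 is in the right ballpark.)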

As far as PSUs go, I'd agree that most people buying a 250W TDP card like a 1080 Ti, or a 980 Ti before the 1000 series was out, probably went with a 750W PSU. However, I don't think the majority of people with lower-TDP, lower-price-point cards like 970s, 1070s and 1080s (which is where this card will be positioned) have more than a 600W; there was no reason to.

RX Vega is going to have to be in the 300-400+W TDP range if they want to attain 1080-level performance or better; it's the same die as the Vega FE after all. If they have to cut it down to a more reasonable 200-250W to avoid all of the PSU requirement issues, there's a pretty good chance it will be more of a 1070-class performance card.

  14. #514
Evildeffy (The Lightbringer) | Join Date: Jan 2009 | Location: Nieuwegein, Netherlands | Posts: 3,772
    Quote Originally Posted by Tiberria View Post
    The $500 was a typo; I later corrected it to $100.

The 290X has a TDP of 290W; the 1080 has a TDP of 180W, a 110W difference. It appears that Vega has a ~400W TDP, which is a 220W difference from a 1080 and a 150W difference from a 1080 Ti, so it's a lot more of an electrical cost differential than the example you gave. At 6 hrs a day of usage and 12c/kWh, you're looking at ~$50 per year in added power costs. The performance has to warrant that.

As far as PSUs go, I'd agree that most people buying a 250W TDP card like a 1080 Ti, or a 980 Ti before the 1000 series was out, probably went with a 750W PSU. However, I don't think the majority of people with lower-TDP, lower-price-point cards like 970s, 1070s and 1080s (which is where this card will be positioned) have more than a 600W; there was no reason to.

RX Vega is going to have to be in the 300-400+W TDP range if they want to attain 1080-level performance or better; it's the same die as the Vega FE after all. If they have to cut it down to a more reasonable 200-250W to avoid all of the PSU requirement issues, there's a pretty good chance it will be more of a 1070-class performance card.
AMD Vega FE air-cooled has a TDP of 300W at stock.
AMD Vega FE water-cooled has a TDP of 375W at stock.

AMD RX Vega is supposed to be less than 300W; the comparison will be valid if so.
The comparison is still valid with AMD Vega FE, especially when undervolting and overclocking (again, check the GN video I linked).

600W (if the brand doesn't suck) is still enough to power an overclocked Vega FE along with a 7700K (not the HEDT line).

That said, in the current RX Vega tour AMD is doing they are stating that it competes with the 1080 (non-Ti), but there's no real spec data yet.
They are, however, stating that the rig is entirely identical except for the graphics card and monitor, and that there's a 300 USD difference.
Now, 200 USD is the known price premium for G-Sync monitors (they put the same monitor up, FreeSync vs. G-Sync... thank you for that, nVidia), so the estimate is that RX Vega may be 100 USD cheaper.

  15. #515
    Quote Originally Posted by Evildeffy View Post
From a design standpoint it is not really a cheap thing to do.
GDDR5(X) is an exception, since it was specifically designed to work with GDDR5 controllers; the X is for Extended.

    GDDR6 will not be so lucky.
GDDR5X still required a different memory controller as well as different power components (VPP is now external on GDDR5X, 6 and HBM, as opposed to earlier graphics memory types; the memory controller VRMs are also more powerful). Currently GDDR6 is not much different from GDDR5X; it just splits the available IO into two channels, which gives very similar results right now.

    Quote Originally Posted by Evildeffy View Post
Actually 375W is plenty; the problem is that the GPU is HEAVILY overvolted.
Gamers Nexus has a video up on this HERE!
It is highly likely that adjustments will be made in drivers to lower this. GN managed just 268W @ 1600 MHz stable, with temps under stress testing in the 50s max, granted at 3.5K RPM fan speed, but this can be achieved with less as well.

As far as recommended PSUs go... not that hard to understand, as both AMD and nVidia build in a lot of safety margin; AMD's is just bigger.
You can easily run a 7700K OCed to 5GHz with all power settings cranked to insanity alongside the FE on a 600W PSU.
That's an added 150W right there on top of the FE's power drain.

Most people who aren't builders have horrendously bad PSUs; that's why the recommendation is higher, as the 12V rail(s) will be loaded pretty heavily.
I've seen the video, and they state that undervolting doesn't work for everything; that's probably why AMD went with that high a voltage. They also still went with a +50% power limit (which puts an air-cooled card at a 450W power limit). The 268W figure is measured on the 12V PCIe power, and we have no idea how much more is going in through the PCIe slot (they mention that they might have a way to measure it in the future). Drivers won't fix this; they might fix issues with some applications requiring higher voltage than others, but that's about it.

Vega FE can definitely be used normally in most gaming systems if the current issues are fixed; the problem is that the performance is not adequate. AMD will have to push RX Vega to higher frequencies to make the card competitive. People are speculating that it has to run ~1850 MHz stable to compete with the 1080 Ti in most use cases. I don't even want to think about thermals and power draw in that case.

  16. #516
Evildeffy (The Lightbringer) | Join Date: Jan 2009 | Location: Nieuwegein, Netherlands | Posts: 3,772
    Quote Originally Posted by Thunderball View Post
GDDR5X still required a different memory controller as well as different power components (VPP is now external on GDDR5X, 6 and HBM, as opposed to earlier graphics memory types; the memory controller VRMs are also more powerful). Currently GDDR6 is not much different from GDDR5X; it just splits the available IO into two channels, which gives very similar results right now.
I just checked again to be certain, and you are incorrect: it required a modified GDDR5 memory controller with some minor changes.
That makes it possible to use both, as long as the firmware is adjusted, since the signaling is identical.
    (Little bit of info here: Linketylink)

That cannot be done with GDDR6, which brings me back to my original point: the cost of different IMCs is massive and requires validation for both GDDR5(X) and GDDR6 instead of just one type of memory.

It's possible of course, but highly unlikely given the cost... we'll see, very likely in early 2018.

    Quote Originally Posted by Thunderball View Post
I've seen the video, and they state that undervolting doesn't work for everything; that's probably why AMD went with that high a voltage. They also still went with a +50% power limit (which puts an air-cooled card at a 450W power limit). The 268W figure is measured on the 12V PCIe power, and we have no idea how much more is going in through the PCIe slot (they mention that they might have a way to measure it in the future). Drivers won't fix this; they might fix issues with some applications requiring higher voltage than others, but that's about it.
No, they hypothesize several possible reasons; they don't know why it actually is like that.
Undervolting and adjustments can lower power draw without issue; like almost everyone has stated, it very much feels like Vega FE was rushed.
    You'd be surprised how much can be fixed via drivers.

    Quote Originally Posted by Thunderball View Post
Vega FE can definitely be used normally in most gaming systems if the current issues are fixed; the problem is that the performance is not adequate. AMD will have to push RX Vega to higher frequencies to make the card competitive. People are speculating that it has to run ~1850 MHz stable to compete with the 1080 Ti in most use cases. I don't even want to think about thermals and power draw in that case.
That is a different matter in itself, and like I said... two weeks and we'll know for sure, but it'll very likely be a GTX 1080-equivalent card, going by the AMD tour stuff.

  17. #517
    Quote Originally Posted by Thunderball View Post
Vega FE can definitely be used normally in most gaming systems if the current issues are fixed; the problem is that the performance is not adequate. AMD will have to push RX Vega to higher frequencies to make the card competitive. People are speculating that it has to run ~1850 MHz stable to compete with the 1080 Ti in most use cases. I don't even want to think about thermals and power draw in that case.
I'm not gonna comment on the other stuff because that's beyond me, but on this part: why does it need to compete with a 1080 Ti? We don't know the price of this thing yet AFAIK, so competing with a 1080 could be fine. Granted, that means they have nothing competing at the top end, but that's fine. That's such a small segment of the market that I'm not sure it's worth the time to try to compete at that level. When the vast majority of people still run 1080p @ 60Hz, who needs something even as powerful as a 1080?

  18. #518
    Quote Originally Posted by Lathais View Post
I'm not gonna comment on the other stuff because that's beyond me, but on this part: why does it need to compete with a 1080 Ti? We don't know the price of this thing yet AFAIK, so competing with a 1080 could be fine. Granted, that means they have nothing competing at the top end, but that's fine. That's such a small segment of the market that I'm not sure it's worth the time to try to compete at that level. When the vast majority of people still run 1080p @ 60Hz, who needs something even as powerful as a 1080?
Then why release it at all?

They've got the mid-range covered with the RX 580 (which competes favorably with, or beats, the 1060).

If they're not going to try to move a massive number of these things, then there's no point in even making it.

Cut it down to something that actually competes with a 1070, then slash the price and call it a day.

And get working on something that will stand a chance in hell against Volta, because nothing coming out of AMD suggests they're anywhere near where they need to be on their next architecture, barring a damn miracle.

  19. #519
    Deleted
    Quote Originally Posted by Kagthul View Post
because nothing coming out of AMD suggests they're anywhere near where they need to be on their next architecture, barring a damn miracle.
Yes, because we know SO MUCH about Navi that we can just call how it will turn out now /s

From what we do know, chances are Navi will be like Ryzen: small, cheap chips connected with some kind of "fabric" into bigger units. That could very well be a game changer. But since we know nothing, and it is AMD we are talking about, it's best to wait until they actually release it.

  20. #520
    Quote Originally Posted by Kagthul View Post
Then why release it at all?
To compete with the 1080, not the 1080 Ti. I'm not saying the 1080 Ti has no place in the market at all, just that as AMD is trying to take market share they SHOULD be focused on the largest market, which they are. Historically they have not been able to compete well in the xx80 Ti realm, and even the Fury X, which after the Crimson drivers actually DID beat the 980 Ti, people still considered worse. So why try to compete in that market when, even if you ARE the better card, people don't care? We also don't know that once they get this released and bin the chips they won't cut some down and make something that competes with the 1070. Then again, maybe this isn't designed to play against anything in the 10xx series at all and IS designed to compete against the xx70 from the next generation.

My main point was simply: who cares if it beats or equals a 1080 Ti when we don't even know the price of the thing? Yeah, if it's priced close to a 1080 Ti it should compare to it, but as it sits it compares to a 1080; with some driver tweaks and whatnot it may beat it, and even if it doesn't, if it's cheaper it has a solid place in the market. What if it is priced similarly to a 1070 and has 1080-like performance? Would it matter that it doesn't beat a 1080 Ti if it's priced like a 1070?
