Thread: AMD Navi

  1. #41
    Quote Originally Posted by Vegas82 View Post
    Sounds like they're still doing their best to catch up instead of compete.
Yeah, I know, but at the same time they've only recently been able to put a lot of money (still way less than Nvidia) into graphics R&D again, with the money they've gotten from Zen/Zen+. So we can't really expect miracles either, unless they're doing something totally different.

  2. #42
    Quote Originally Posted by dadev View Post
    Are they really coming out with that naming scheme?
    Seems that way, specifically aimed at Nvidia.

    Quote Originally Posted by mrgreenthump View Post
The leaks are supposedly from a Singaporean source, and it may very well be 499 Singapore Dollars, which would still hold true to the original price points leaked before. But we shall see soon.
    Hopefully so.

    On the ray-tracing thing: These ones won't be official ray-tracing cards as far as I'm aware, they're saving that for the big Navi next year.

  3. #43
    Quote Originally Posted by Shinzai View Post
    Seems that way, specifically aimed at Nvidia.
I don't know if it's smart, to be honest. If AMD fires the opening shot on non-subtle naming schemes, what stops other companies from doing the same?

  4. #44
    https://wccftech.com/amd-radeon-navi...m-256-bit-bus/

    More new details, looks like more plausible final stats for the cards.

  5. #45
    Quote Originally Posted by Shinzai View Post
    https://wccftech.com/amd-radeon-navi...m-256-bit-bus/

    More new details, looks like more plausible final stats for the cards.
    The point again here is the pricing. If they go with that, they're not gonna make people swap from team green to team red.
Non ti fidar di me se il cuor ti manca. ("Don't trust me if your heart fails you.")

  6. #46
Another generation of "here's the NVIDIA killer cpu/gpu," and then they fall short for whatever reason (pricing, performance, etc.).
I wonder when they will realize they have to bite the bullet for one or two generations and offer a really much better product for much cheaper, even if it means they don't really make profits on them. People would then think twice about whether to buy NVIDIA or AMD stuff.

  7. #47
    Quote Originally Posted by dadev View Post
    Are they really coming out with that naming scheme?
The most unimaginative marketing I've seen. AMD Prime does the same thing with their motherboards, and the CPUs aren't any better.

  8. #48
Old God Vash The Stampede, 10+ Year Old Account. Join Date: Sep 2010. Location: Better part of NJ. Posts: 10,939
    Quote Originally Posted by Shinzai View Post
    On the ray-tracing thing: These ones won't be official ray-tracing cards as far as I'm aware, they're saving that for the big Navi next year.
Ray tracing so far is too unusable to be much of a concern. I'm actually more excited for projects like Minecraft using ray tracing than for Battlefield V, and the Minecraft version doesn't require RTX cards, though it does require Nvidia, since its OpenGL implementation doesn't stick to the OpenGL standard but to what Nvidia considers standard. With Crytek showing that they also don't need RTX cards, it raises the question of whether Nvidia jumped into this ray-tracing business too quickly. Is it worth using half the GPU die for Tensor cores, or would more CUDA cores or Stream cores be better?

So Navi not having dedicated ray-tracing hardware shouldn't be an issue for anyone, especially when there are only about three AAA games that currently support it.

  9. #49
    Quote Originally Posted by Gourmandise View Post
The most unimaginative marketing I've seen. AMD Prime does the same thing with their motherboards, and the CPUs aren't any better.
It's called familiarity. In the case of the Ryzen CPUs, the naming scheme is extremely clear and, more importantly, gives the less informed customer a clear indication of what "tier of equipment" they're looking at. It's the 9 > 7 > 5 > 3 scheme Intel has been using for years, and AMD puts their CPUs in direct competition.

On the GPUs, though, they're all over the place. Now with the 30x0 they've just made more confusion than anything, as those seem better than the 20x0 but are actually mid-tier material. It does more damage than anything, as I fully expect some people to go and buy AMD over Nvidia just because 3000 > 2000 (it happens a lot).
Non ti fidar di me se il cuor ti manca. ("Don't trust me if your heart fails you.")

  10. #50
    Quote Originally Posted by Coldkil View Post
It's called familiarity. In the case of the Ryzen CPUs, the naming scheme is extremely clear and, more importantly, gives the less informed customer a clear indication of what "tier of equipment" they're looking at. It's the 9 > 7 > 5 > 3 scheme Intel has been using for years, and AMD puts their CPUs in direct competition.

On the GPUs, though, they're all over the place. Now with the 30x0 they've just made more confusion than anything, as those seem better than the 20x0 but are actually mid-tier material. It does more damage than anything, as I fully expect some people to go and buy AMD over Nvidia just because 3000 > 2000 (it happens a lot).
I know what it's for; it was more a comment on motherboards and GPUs. "Hey, let's increase the first digit by 1 so it's larger, therefore better." CPU naming isn't the worst, in my opinion, and now just suffers from core counts changing per generation, but I guess the R and i prefixes are slightly better category markers than Core.
However, your average person doesn't really care whether core count is consistent with the naming scheme, so my criticism is moot.

  11. #51
    Quote Originally Posted by Gourmandise View Post
I know what it's for; it was more a comment on motherboards and GPUs. "Hey, let's increase the first digit by 1 so it's larger, therefore better." CPU naming isn't the worst, in my opinion, and now just suffers from core counts changing per generation, but I guess the R and i prefixes are slightly better category markers than Core.
However, your average person doesn't really care whether core count is consistent with the naming scheme, so my criticism is moot.
Agreed. It's all marketing and no substance. The worst kind of marketing - tricking people into buying the stuff instead of actually making it better. I'm fine with the card rivalling the 2070, for example, but that price tag simply doesn't make it competitive at all.
Non ti fidar di me se il cuor ti manca. ("Don't trust me if your heart fails you.")

  12. #52
    Quote Originally Posted by Vash The Stampede View Post
Ray tracing so far is too unusable to be much of a concern. I'm actually more excited for projects like Minecraft using ray tracing than for Battlefield V, and the Minecraft version doesn't require RTX cards, though it does require Nvidia, since its OpenGL implementation doesn't stick to the OpenGL standard but to what Nvidia considers standard. With Crytek showing that they also don't need RTX cards, it raises the question of whether Nvidia jumped into this ray-tracing business too quickly. Is it worth using half the GPU die for Tensor cores, or would more CUDA cores or Stream cores be better?

So Navi not having dedicated ray-tracing hardware shouldn't be an issue for anyone, especially when there are only about three AAA games that currently support it.
It's hard to say, tbh. If nothing else, I applaud Nvidia's effort in pushing ray tracing, but it was too early and too expensive to be truly viable. 60fps at 1080p is dire, but at the same time, they've made it viable to even get those results in the first place. We probably wouldn't be talking about ray tracing for another 4 years if it wasn't for them pushing developers and graphics engines so hard and investing so much.

I still think the RTX series cards are rip-offs for the price, though. So in the end, it's innovation at the customer's cost.

    And yeah, Navi not having Ray tracing is a non-issue.

    - - - Updated - - -

    Well. I guess that's one way to stop the name branding.

    https://www.tomshardware.com/news/nv...rks,39435.html

  13. #53
    Quote Originally Posted by Puzzony View Post
Another generation of "here's the NVIDIA killer cpu/gpu," and then they fall short for whatever reason (pricing, performance, etc.).
I wonder when they will realize they have to bite the bullet for one or two generations and offer a really much better product for much cheaper, even if it means they don't really make profits on them. People would then think twice about whether to buy NVIDIA or AMD stuff.
It took Nvidia a decade to build some of their current tech, and it will fare very well in the face of "the end of silicon improvements," so we may be talking about more than just a couple of generations. There will always be a market for something priced correctly; hell, that's where AMD lives.
    The whole problem with the world is that fools and fanatics are always so certain of themselves, but wiser people so full of doubts.

  14. #54
Literally the only thing I need out of Navi in 2019 is a full 48 Gbps HDMI 2.1 port,

so that rtings can finally do proper 4K 120Hz PC tests of the first HDMI 2.1 4K TVs, and we can know which TVs are actually HDMI 2.1 (48 Gbps) with proper 4K 120Hz support and which aren't.
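For a rough sense of why the full 48 Gbps matters, here's a back-of-the-envelope bandwidth estimate. It's only a sketch: the resolution and refresh rate come from the post, but the 10 bits per channel and the ~10% blanking overhead are my assumptions, and real HDMI 2.1 FRL signalling adds its own encoding overhead on top.

```python
def video_gbps(h=3840, v=2160, hz=120, bits_per_channel=10,
               blanking_overhead=1.10):
    """Approximate uncompressed video bandwidth in Gbps.

    Assumes 3 channels (RGB / 4:4:4) and ~10% blanking overhead
    (roughly reduced-blanking timings); both figures are assumptions.
    """
    bits_per_pixel = 3 * bits_per_channel
    return h * v * hz * bits_per_pixel * blanking_overhead / 1e9

# 4K 120Hz 10-bit lands in the low 30s of Gbps: far beyond HDMI 2.0's
# 18 Gbps, but comfortably inside an HDMI 2.1 48 Gbps link.
print(round(video_gbps(), 1))
```

Under these assumptions even 8-bit 4K 120Hz is well past what HDMI 2.0 can carry, which is why a bandwidth-limited "HDMI 2.1" port can't do a proper 4K 120Hz test.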

  15. #55
    I've been using a gaming laptop for like, five years, and I think I might just build a pc with new AMD stuff. The price points look really attractive when you see the performance you're getting. I've swapped between ATI and Nvidia throughout the years, so it may be time to swap back over.

  16. #56
I'm confused about Navi.

All we've heard the last few weeks is that it's a bit disappointing, and the double 8-pin on the leaked PCB confirmed that. Considering it's still pitted against the RTX 2070, I don't see how it can have 50% better power efficiency than Vega and require a double 8-pin, yet still only be at RTX 2070 level. And the "concept" cards from ASRock (videocardz link) have beefy 3-fan heatsinks. Then again, those may not be the 5700 at all, but ASRock isn't making Nvidia cards... so it has to be AMD.

For reference, the Radeon VII is a 300 Watt card and is more powerful than a 2070 by quite a margin. If we take the 50% power efficiency as gospel, it should put Navi well under 150W, and it wouldn't need those heatsinks for the performance it has, and you don't need a double 8-pin for that?!

Is AMD lying about the performance and pulling a fast one on us/NGreedia, or is the 50% power efficiency a lie? Are they just being deceptive and comparing to 14nm Vega, with all the power efficiency coming from the node change? Who knows, but it certainly seems like an odd situation to me.

  17. #57
The Lightbringer Shakadam, 10+ Year Old Account. Join Date: Oct 2009. Location: Finland. Posts: 3,300
I think the 50% power efficiency figure is compared to Vega on 14nm; considering the Radeon VII is already on 7nm, I don't see any way they'd be able to pull an additional 50% power efficiency over that.

The total board power (not TDP) of the Vega 64 is 295W. I don't know what the TDP of the Vega chip itself is; let's say 280W (HBM2 is very power efficient).
50% of that is 140W. Add maybe a slightly higher clock speed plus GDDR6 memory (which uses around ~4 times more power than HBM2) and you end up at around ~200-225W, which justifies the 2x 8-pin power connectors.
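Laid out explicitly, the estimate looks like this. Every input here is an assumption taken from the post, not a measured figure, and "50% power efficiency" is read the way the post reads it: half the power for the same performance.

```python
VEGA64_CHIP_POWER_W = 280   # assumed Vega chip power (295W board power minus HBM2)
EFFICIENCY_GAIN = 0.50      # AMD's claimed perf-per-watt improvement, at face value

# Half the chip power for the same performance.
navi_chip_w = VEGA64_CHIP_POWER_W * (1 - EFFICIENCY_GAIN)

# Assumed budget for higher clocks plus GDDR6, which the post pegs at
# roughly 4x the power draw of HBM2.
CLOCKS_AND_GDDR6_W = 60

board_estimate_w = navi_chip_w + CLOCKS_AND_GDDR6_W
print(navi_chip_w, board_estimate_w)  # 140.0 200.0 -> within the ~200-225W range
```

Strictly speaking, 50% better performance per watt at equal performance would mean 280/1.5 ≈ 187W rather than 140W, so the estimate above is the optimistic reading; either way, the total lands in a range where a 2x 8-pin layout is plausible.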

  18. #58
    Quote Originally Posted by mrgreenthump View Post
I'm confused about Navi.

All we've heard the last few weeks is that it's a bit disappointing, and the double 8-pin on the leaked PCB confirmed that. Considering it's still pitted against the RTX 2070, I don't see how it can have 50% better power efficiency than Vega and require a double 8-pin, yet still only be at RTX 2070 level. And the "concept" cards from ASRock (videocardz link) have beefy 3-fan heatsinks. Then again, those may not be the 5700 at all, but ASRock isn't making Nvidia cards... so it has to be AMD.

For reference, the Radeon VII is a 300 Watt card and is more powerful than a 2070 by quite a margin. If we take the 50% power efficiency as gospel, it should put Navi well under 150W, and it wouldn't need those heatsinks for the performance it has, and you don't need a double 8-pin for that?!

Is AMD lying about the performance and pulling a fast one on us/NGreedia, or is the 50% power efficiency a lie? Are they just being deceptive and comparing to 14nm Vega, with all the power efficiency coming from the node change? Who knows, but it certainly seems like an odd situation to me.
Well, Navi is meant to replace Vega 56/64, not the Radeon VII, so I believe the comparison is Navi to 14nm Vega, but as you say, the statement Lisa made wasn't very definitive either way.

  19. #59
    Quote Originally Posted by mrgreenthump View Post
I'm confused about Navi.

All we've heard the last few weeks is that it's a bit disappointing, and the double 8-pin on the leaked PCB confirmed that. Considering it's still pitted against the RTX 2070, I don't see how it can have 50% better power efficiency than Vega and require a double 8-pin, yet still only be at RTX 2070 level. And the "concept" cards from ASRock (videocardz link) have beefy 3-fan heatsinks. Then again, those may not be the 5700 at all, but ASRock isn't making Nvidia cards... so it has to be AMD.

For reference, the Radeon VII is a 300 Watt card and is more powerful than a 2070 by quite a margin. If we take the 50% power efficiency as gospel, it should put Navi well under 150W, and it wouldn't need those heatsinks for the performance it has, and you don't need a double 8-pin for that?!

Is AMD lying about the performance and pulling a fast one on us/NGreedia, or is the 50% power efficiency a lie? Are they just being deceptive and comparing to 14nm Vega, with all the power efficiency coming from the node change? Who knows, but it certainly seems like an odd situation to me.
I find the 5700 quite interesting, as that's normally a low-to-mid-range card, naming-wise. x800 models tend to be the main card, and x900 would be the high end. That's assuming they've gone back to that naming pattern, at least.

The 5700 is shown as being 10% faster than a 2070 in the Strange Brigade demo, which puts it directly on par with the Vega 64, if not slightly above it. This would make sense if they're aiming for this to be their new baseline card. The question becomes where a 5800 will land, as you almost have to assume they'll be aiming for +10% on a 2080/Radeon VII.

If so, then the 5800 or equivalent might be rather impressive, if the price isn't stratospheric.

Furthermore, if it maintains a 10% (or close to it) advantage at 1440p and above (or at least at 1440p), then the 5800 will be faster than the VII in a lot of games.

Also, going by their 1.5x performance-per-watt reference, I'd assume these cards will run around the 200 watt mark, which is also pretty impressive and really all they need to be on par with the competition. The fact that it's 7nm makes this pretty much expected, and the overall increase in performance/decrease in power usage less impressive, but it is what it is.
    Last edited by Shinzai; 2019-05-28 at 09:48 PM. Reason: clarification, also power

  20. #60
The Lightbringer Shakadam, 10+ Year Old Account. Join Date: Oct 2009. Location: Finland. Posts: 3,300
    Quote Originally Posted by Shinzai View Post
I find the 5700 quite interesting, as that's normally a low-to-mid-range card, naming-wise. x800 models tend to be the main card, and x900 would be the high end. That's assuming they've gone back to that naming pattern, at least.

The 5700 is shown as being 10% faster than a 2070 in the Strange Brigade demo, which puts it directly on par with the Vega 64, if not slightly above it. This would make sense if they're aiming for this to be their new baseline card. The question becomes where a 5800 will land, as you almost have to assume they'll be aiming for +10% on a 2080/Radeon VII.

If so, then the 5800 or equivalent might be rather impressive, if the price isn't stratospheric.

Furthermore, if it maintains a 10% (or close to it) advantage at 1440p and above (or at least at 1440p), then the 5800 will be faster than the VII in a lot of games.

Also, going by their 1.5x performance-per-watt reference, I'd assume these cards will run around the 200 watt mark, which is also pretty impressive and really all they need to be on par with the competition. The fact that it's 7nm makes this pretty much expected, and the overall increase in performance/decrease in power usage less impressive, but it is what it is.
The big question is whether AMD's presentation and the claim of ~10% above the RTX 2070 used the Vulkan or the DX12 API in Strange Brigade.

If DX12, the RX 5700 would be a few % faster than the Vega 64.

If Vulkan, the RX 5700 would be ~20-40% faster than the Vega 64.
