  1. #1

    So what do you guys think about the Radeon VII?

    Will it have RTX 2080 performance for $699?

    And what do you think the sales of the Radeon VII will be?
    "Every country has the government it deserves."
    Joseph de Maistre (1753 – 1821)


  2. #2
    moremana
    No, and very low. There is really no need to buy it.

    Nvidia is on a mission to cater to FreeSync, and once they get support for more monitors it will be hard to justify an AMD card.

  3. #3
    Yunru
    I had to do some research to figure out wtf they are selling:

    7nm process technology
    https://en.wikipedia.org/wiki/7_nanometer

    3840 stream processors
    https://en.wikipedia.org/wiki/Stream_processing
    (pretty much GPU power)
    For comparison:
    https://graphicscardhub.com/wp-conte...processors.jpg

    16 GB of memory -- wait, no RAM needed?

    1 TB/s memory bandwidth ... well, that's already old technology

    No info on core speed or memory speed (which I find more important).
    AMD FreeSync 2 HDR technology -- also nothing special
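
    Quick sanity check on that 1 TB/s number, by the way. Assuming the widely reported HBM2 setup (four 1024-bit stacks at an effective 2 Gbps per pin -- my assumption, the slide only quoted the total), it's just bus width times data rate. A quick Python sketch:

    Code:
    # Back-of-envelope check of the quoted 1 TB/s memory bandwidth.
    # Bus width and per-pin rate are the widely reported HBM2 figures,
    # not something AMD spelled out on the slide.
    bus_width_bits = 4096      # four HBM2 stacks x 1024 bits each
    data_rate_gbps = 2.0       # effective transfer rate per pin

    bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8  # bits -> bytes
    print(f"{bandwidth_gb_s:.0f} GB/s")  # 1024 GB/s, i.e. the quoted ~1 TB/s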

  4. #4
    Temp name
    Quote Originally Posted by Yunru View Post
    I had to do some research to figure out wtf they are selling: ...
    Look up the Instinct MI50. It's the exact same chip and (probably) board, just with some display outputs on it.

    As for the OP: AMD's only real "major" advantage so far has been their lower price. 2080 performance for 2080 prices? Nah, not going to happen. If they halved the memory, or used GDDR5/6, it'd be a much better value proposition.

  5. #5
    Shakadam
    Quote Originally Posted by Yunru View Post
    I had to do some research to figure out wtf they are selling: ...
    What are you talking about?

    Anyway, performance will be same-ish as the RTX 2080: a little ahead in some games, a little behind in others. Pretty safe prediction.

    It'll be better than the RTX cards for workstation use; for gaming I see no compelling reason to buy a Radeon VII.

  6. #6
    Saphyron
    Quote Originally Posted by Shakadam View Post
    ... It'll be better than the RTX cards for workstation use; for gaming I see no compelling reason to buy a Radeon VII.
    Yeah, they should have marketed it as a workstation GPU. In that role, it is extremely good.

  7. #7
    Temp name
    Quote Originally Posted by Saphyron View Post
    Yeah, they should have marketed it as a workstation GPU. In that, it is extremely good.
    Yeah, it'll be good for things that need a lot of fast video memory.

  8. #8
    So far it looks like an option against the RTX 2080: performs around the same under DX11 and a bit better in Vulkan, but as we know, Vulkan isn't getting support from game devs because it's "too hard" (it isn't).

    Anywho, it's too early to say anything. All we have are AMD's figures, and they promise somewhat decent numbers, but at the same time nothing new for gaming. For content creation/workstation use it will be nice, though, but its 1:8 FP64 rate does limit its appeal quite a bit in scientific workloads that aren't currently memory-restricted.
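
    To put that 1:8 figure in rough numbers, a quick Python sketch (the 1.8 GHz boost clock is my guess, since clocks weren't in the announcement):

    Code:
    # Rough peak-throughput estimate. The stream processor count is from
    # AMD's announcement; the boost clock is an assumed placeholder.
    stream_processors = 3840
    boost_clock_ghz = 1.8          # assumption, not a confirmed spec
    fp64_ratio = 1 / 8             # the 1:8 rate mentioned above

    fp32_tflops = stream_processors * 2 * boost_clock_ghz / 1000  # 2 FLOPs/cycle (FMA)
    fp64_tflops = fp32_tflops * fp64_ratio
    print(f"FP32 ~{fp32_tflops:.1f} TFLOPS, FP64 ~{fp64_tflops:.1f} TFLOPS")
    # ~13.8 and ~1.7 TFLOPS -- weak FP64 next to dedicated compute cards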

    But as with the RTX 2080 and GTX 1080 Ti before it, I still don't see the value in a GPU that expensive for the performance it gives. They don't really get you all the way to 4K, and for 1440p they are already overkill. Then again, I like bargains and price/performance.

  9. #9
    grexly75
    One thing to look out for: rumor has it that AMD is working on its own version of a ray-tracing video card. The other rumor is that AIBs won't be making any Radeon VII cards, as AMD is supposedly only making 5,000 of them, and you'll have to buy directly from AMD.
    Last edited by grexly75; 2019-01-18 at 04:14 PM.

  10. #10
    Quote Originally Posted by Shakadam View Post
    ... It'll be better than the RTX cards for workstation use; for gaming I see no compelling reason to buy a Radeon VII.
    What would you call workstation use?
    I don't necessarily agree that it competes with the 2080 in the workstation segment, though that's mostly because I don't think people considering a workstation GPU will buy a 2080. Seems to me that in the workstation segment it's more likely trying to compete with the Titan V by significantly undercutting it while offering a significantly slimmer feature set, a kind of "cheap Asian knock-off".

    So far the edge I can identify (for the price) is parallel computation that needs a lot of RAM, provided you don't mind slumming it with half-working frameworks. As with all things in life, you get what you pay for...
    This might find some use in academia; I'm pretty skeptical otherwise. But even there I think it depends on the use case and the budget available.
    Maybe crypto too, though didn't that end?
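
    To make the "a lot of RAM" point concrete, a quick sketch (the matrix size is arbitrary, just for scale):

    Code:
    # How far 16 GB of on-card memory goes with large dense data.
    import math

    bytes_per_float32 = 4
    card_memory_bytes = 16 * 1024**3

    n = 40_000  # arbitrary square matrix dimension, purely illustrative
    matrix_gb = n * n * bytes_per_float32 / 1024**3
    print(f"{n}x{n} float32 matrix: {matrix_gb:.1f} GB")  # ~6.0 GB

    # Largest square float32 matrix that fits, ignoring all overhead:
    n_max = int(math.sqrt(card_memory_bytes / bytes_per_float32))
    print(f"Fits up to ~{n_max}x{n_max}")  # ~65,536 x 65,536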

    Quote Originally Posted by mrgreenthump View Post
    So far it looks like an option against the RTX 2080: performs around the same under DX11 and a bit better in Vulkan ...
    Vulkan is not "too hard", it's just not very usable right now unless you build only for it, or you're doing a hello-world app (which is a special case of building only for it).
    About workstation use, same question as to the other poster: what do you consider workstation use, and what advantages do you think it offers there over the competition? Also, the same question about content creation.

  11. #11
    Just nooo. And they just want money. They still can't make a GPU... you just need Nvidia.

  12. #12
    Sounds like an attempt to make some additional money on a GPU that was designed for professional applications, without any additional investment.
    R5 5600X | Thermalright Silver Arrow IB-E Extreme | MSI MAG B550 Tomahawk | 16GB Crucial Ballistix DDR4-3600/CL16 | MSI GTX 1070 Gaming X | Corsair RM650x | Cooler Master HAF X | Logitech G400s | DREVO Excalibur 84 | Kingston HyperX Cloud II | BenQ XL2411T + LG 24MK430H-B

  13. #13
    Vash The Stampede
    Quote Originally Posted by grexly75 View Post
    One thing to look out for: rumor has it that AMD is working on its own version of a ray-tracing video card. ...
    I think AMD's implementation of ray tracing is going to be a combination of GPU+CPU, like OpenCL but for ray tracing. At least I think Lisa Su hinted at this with her emphasis on general compute performance.

    As for the Radeon VII... yeah, AMD released this because they have nothing else to show right now. I personally think AMD is going to focus on two types of GPUs: one for the mid to low end, like Navi, and one for the high end that uses HBM, like Vega. Navi won't be a high-end GPU, but it will perform somewhere between a GTX 1070 and a GTX 1080 Ti, because I have a feeling they're going to use a chiplet design for GPUs as well.

    Also, for Radeon VII vs RTX 2080, you can go either way. DLSS and ray tracing are not worth getting an RTX card for, especially ray tracing, where it seems to barely work. I'm just disappointed at the price of this new GPU, as nobody is going to pay $700 for something slower than an RTX 2080 Ti.
    Last edited by Vash The Stampede; 2019-01-20 at 05:33 PM.

  14. #14
    DeltrusDisc
    Can someone explain the naming convention? O_o
    "A flower.
    Yes. Upon your return, I will gift you a beautiful flower."

    "Remember. Remember... that we once lived..."

    Quote Originally Posted by mmocd061d7bab8 View Post
    yeh but lava is just very hot water

  15. #15
    DeltrusDisc
    Quote Originally Posted by CryotriX View Post
    There isn't any? VII as in Vega II but pronounced seven. Do we even care, since it's just one card in one family.
    I miss the days of consistent naming that made sense and flowed...

    Nvidia certainly isn't as bad as AMD, but they're both guilty.

    Maybe it should be V-II... It isn't the "7th" or "#7" of anything.
    "A flower.
    Yes. Upon your return, I will gift you a beautiful flower."

    "Remember. Remember... that we once lived..."

    Quote Originally Posted by mmocd061d7bab8 View Post
    yeh but lava is just very hot water

  16. #16
    Vash The Stampede
    It seems DX12 has a feature, Microsoft's DirectML API, that allows AI-like functionality on a GPU without needing a dedicated AI core. Basically, this gives AMD AI and deep learning without an AI core like the ones in Nvidia's RTX cards, which means AMD could implement their own version of DLSS. But who cares about DLSS; more interestingly, this could let AMD use AI for denoising ray tracing, assuming something like the CPU does the ray casting.

    My idea that AMD could use the CPU+GPU to do ray tracing isn't all that far off.

    https://www.guru3d.com/news-story/am...ectml-api.html

  17. #17
    Zenny
    The Radeon VII being as fast as an RTX 2080 is a bit optimistic, as the numbers came from AMD themselves. They would of course cherry-pick the best examples to "prove" the card's performance, for example using Strange Brigade instead of Wolfenstein 2 for the Vulkan numbers.

    I don't doubt it will be competitive, but I'd wager that over a large game spread it will end up several percent slower on average. Likely a Vega 64 vs GTX 1080 situation all over again.
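
    For what it's worth, reviewers usually summarize a game spread with a geometric mean of per-game results; a sketch with invented numbers shows how a couple of cherry-picked wins can sit alongside a slower overall average:

    Code:
    # Geometric mean of hypothetical per-game performance relative to an
    # RTX 2080 (1.00 = parity). All numbers are invented for illustration.
    import math

    relative_perf = {
        "Strange Brigade": 1.08,   # the kind of title you'd cherry-pick
        "Wolfenstein 2":   0.90,
        "Game C":          0.97,
        "Game D":          0.95,
        "Game E":          1.02,
    }
    geomean = math.prod(relative_perf.values()) ** (1 / len(relative_perf))
    print(f"geomean: {geomean:.3f}")  # ~0.98, i.e. a few percent slower overall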

  18. #18
    Quote Originally Posted by Vash The Stampede View Post
    It seems DX12 has a feature, Microsoft's DirectML API, that allows AI-like functionality on a GPU without needing a dedicated AI core. ...
    No, the idea is that they're going to pump it all through their GPUs. The CPU still has the same load it always has to deal with in games; that's not going to change. Adding ray tracing doesn't somehow make things easier to process, it adds a full new, vastly harder-to-manage workload on top of everything else.

    Anyway.

    Radeon VII is a year late and priced like it's a year early.

    I can easily see it comparing directly to the 2080, because if it doesn't, it's a waste of time.

    To me it looks like a stop-gap card, given the reportedly low production figures.

    Given that it's basically a 7nm refresh, it wouldn't be surprising if Nvidia came out with a 2780, 2770, and 2780 Ti, just because.

    Switching to 7nm is a huge and (essentially) free performance boost once the initial costs are recovered.
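
    You can get a rough feel for what the shrink bought AMD from the commonly listed Vega 10 vs Vega 20 figures (public spec listings, so treat them as approximate):

    Code:
    # Vega 10 (Vega 64, 14nm) vs Vega 20 (Radeon VII, 7nm), using the
    # commonly listed die sizes and boost clocks; sources vary slightly.
    vega10 = {"die_mm2": 495, "boost_mhz": 1546, "sps": 4096}
    vega20 = {"die_mm2": 331, "boost_mhz": 1750, "sps": 3840}

    area_shrink = 1 - vega20["die_mm2"] / vega10["die_mm2"]
    clock_gain = vega20["boost_mhz"] / vega10["boost_mhz"] - 1
    print(f"die area {area_shrink:.0%} smaller, boost clock {clock_gain:.0%} higher")
    # ~33% smaller die and ~13% higher clocks, with slightly fewer SPs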

  19. #19
    I'll mosey along with my 1060ti, man, lol.

    That block and that tiny first resistor jumped, and there's your shunt mod.

    Still around 10-11k in Passmark.

  20. #20
    Quote Originally Posted by Vash The Stampede View Post
    It seems DX12 has a feature, Microsoft's DirectML API, that allows AI-like functionality on a GPU without needing a dedicated AI core. ...
    The MS API is for inference of neural networks. It'll work regardless of whether you have an ASIC that can do convolutions (or matrix multiplications) or just the normal general compute units. It's unlikely that AMD are happy about this, because NV has the ASIC while AMD does not. Not writing it off completely, but the chances that a general compute unit can compete with an ASIC are slim (rough numbers below).
    You also need to differentiate between the parts of DLSS. AMD cannot implement DLSS training, because it's very likely NV's trade secret. They might be able to reverse-engineer the inference and the weights, then implement that on their hardware; however:
    1. They'd be tied to training done by NV, which means they cannot add games of their own to the list.
    2. They'd have to use the exact same NN; there's exactly zero room for changes. It's likely NV designed the network to be inferred efficiently on their tensor-core ASIC, which AMD doesn't have.
    So overall, I don't believe AMD will do it. Alternatively, AMD could do their own "DLSS", but they probably lack the know-how. They're well behind the curve on anything to do with deep learning; if they suddenly close the gap it'll be a small miracle.
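
    Rough numbers on that ASIC gap, using the commonly cited RTX 2080 peak rates (peaks only; real inference hits neither figure, but the ratio is the point):

    Code:
    # Commonly cited peak rates for an RTX 2080 (Turing); approximate.
    fp32_shader_tflops = 10.1   # general-purpose CUDA cores
    fp16_tensor_tflops = 80.5   # tensor cores, FP16

    ratio = fp16_tensor_tflops / fp32_shader_tflops
    print(f"tensor ASIC vs general compute: ~{ratio:.0f}x")  # ~8x
    # Roughly the gap AMD's general compute units would need to close
    # on NN inference, before software maturity even enters the picture.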

    About ray tracing on the CPU, you just have to let that go.
    1. NV's ray-tracing ASICs are disproportionately faster than general compute units, whether CPU or GPU. AMD would need 5x to 10x the number of cores to match current performance (at the same power and price). That's simply infeasible and not going to happen.
    2. There's too much transfer latency between CPU and GPU for it to be practical in games. Casting rays on the CPU and then transferring the hit data to the GPU in the same frame is already impractical, and what if you need recursive ray tracing? Then download to CPU memory, raycast, upload to GPU, repeat? Not even close to realistic (see the back-of-envelope numbers below).
    If AMD wants to compete on ray tracing they can do only one thing: make an ASIC. There's literally no other way.
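
    Back-of-envelope on point 2, assuming a 1080p ray buffer over PCIe 3.0 x16 (the hit-record size and bounce count are my assumptions):

    Code:
    # Cost of shipping per-pixel hit data between CPU and GPU each bounce.
    # 32 bytes per hit and 3 bounces are assumptions for illustration;
    # ~13 GB/s is a typical measured PCIe 3.0 x16 throughput.
    width, height = 1920, 1080
    bytes_per_hit = 32
    bounces = 3
    pcie_gb_s = 13.0

    buffer_gb = width * height * bytes_per_hit / 1024**3
    one_way_ms = buffer_gb / pcie_gb_s * 1000
    total_ms = one_way_ms * 2 * bounces   # download + upload per bounce
    print(f"one-way copy ~{one_way_ms:.1f} ms, {bounces} bounces ~{total_ms:.1f} ms")
    # ~4.8 ms per copy, ~28.5 ms per frame in pure transfer --
    # against a 16.7 ms frame budget at 60 fps, before any sync stalls.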

    Quote Originally Posted by Shinzai View Post
    No, the idea is that they're going to pump it all through their GPUs. ...
    Actually, this is not correct (but not in the direction you think!). If the rendering pipeline were entirely ray-tracing driven, CPU requirements would be about half of what they are now, because no draw calls and no visibility culling would be necessary; those two usually eat up at least half of the CPU time in a frame. So if there were a game today that didn't use rasterization at all, we could likely go back six years on CPUs and not even feel it, or find a new use for all that CPU time.
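
    A crude sketch of the CPU-side loops that would disappear (the scene size and the "draw call" are made up; the point is that both scale with scene complexity and run every frame):

    Code:
    # Per-frame CPU work a rasterizer needs that a pure ray tracer doesn't:
    # a visibility-culling pass plus one draw-call submission per visible
    # object. Everything here is a stand-in for illustration.
    import random, time

    random.seed(0)
    objects = [(random.uniform(-100, 100), random.uniform(-100, 100))
               for _ in range(100_000)]

    def in_frustum(pos):
        x, y = pos            # stand-in for a real frustum test
        return abs(x) < 90 and abs(y) < 90

    t0 = time.perf_counter()
    visible = [o for o in objects if in_frustum(o)]  # culling pass
    draw_calls = len(visible)                        # one "submit" each
    ms = (time.perf_counter() - t0) * 1000
    print(f"{draw_calls} draw calls after culling, {ms:.1f} ms of CPU time")
    # A fully ray-traced pipeline walks a GPU-side acceleration structure
    # instead, so neither loop runs on the CPU each frame.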
