  1. #781
    The Lightbringer Evildeffy's Avatar
    15+ Year Old Account
    Join Date
    Jan 2009
    Location
    Nieuwegein, Netherlands
    Posts
    3,772
    Quote Originally Posted by Zenny View Post
    The idea that Nvidia has a pure software scheduler or that thousands of CUDA cores on a 1080ti all operate in serial (when every single documentation indicates otherwise) is laughable. Instruction scheduling is indeed done by the compiler (so software), but all warp scheduling (the thing that actually assigns work to the various shader clusters) is still done in hardware.

    https://devblogs.nvidia.com/parallel...-architecture/
    http://www.anandtech.com/show/5699/n...x-680-review/3
    I didn't say that.
    I said, and you can verify this easily by simply looking at the text you quoted, that all the original scheduling of work is done serially and executed in parallel; that very statement is in there.
    Command given to Drivers -> Drivers Request Data for CUDA -> Driver sends data to CUDA -> Driver tells GPU what to do -> GPU does the rest. (nVidia)
    The only difference is that AMD doesn't need the driver middle-man if low level access is exposed, Vulkan does this more than DX12.

    Quote Originally Posted by Zenny View Post
    Talking about serial vs parallel in GPUs is a whole other matter, as both are used. One of the things that is serial is work submission to the GPU (via the command queue); this happens even in DX12 and goes for both AMD and Nvidia. From MS themselves:

    https://blogs.msdn.microsoft.com/dir...20/directx-12/

    The command lists record from multiple threads, but the final submission to the GPU is serial.
    True, but it doesn't have to be; you can optimize software to ignore such commands if the hardware is capable of it.
    This is the normal optimization route for GPU manufacturers... if you can expose it and use it, why limit it?
    That said, it's quite a bit more difficult to get it working than the official way, and that's where nVidia's strength in graphics cards lies.

    Again, nVidia right now cannot do it any other way because their architecture literally CANNOT do it any other way... that doesn't mean it sucks, but it is one limitation with its own list of Pros and Cons.
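The record-in-parallel, submit-serially pattern being argued about here can be sketched in plain Python. This is purely illustrative: `record_command_list` and the queue stand in for real D3D12 command lists and the serial `ExecuteCommandLists` step; none of these names are an actual graphics API.

```python
import queue
import threading

# Toy model of the DX12 pattern under discussion: command lists are
# recorded on multiple threads in parallel, but the final submission
# to the GPU's command queue is a single serial step.

def record_command_list(thread_id, n_draws):
    # Each worker thread builds its own command list independently.
    return [f"draw(thread={thread_id}, call={i})" for i in range(n_draws)]

def submit_serially(command_lists):
    # One thread drains every list into a single ordered queue,
    # mirroring the serial submission step both vendors go through.
    gpu_queue = queue.Queue()
    for cl in command_lists:
        for cmd in cl:
            gpu_queue.put(cmd)
    return gpu_queue

results = [None] * 4
threads = [threading.Thread(target=lambda i=i: results.__setitem__(i, record_command_list(i, 3)))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

gpu_queue = submit_serially(results)
print(gpu_queue.qsize())  # 12 commands, submitted in one serial pass
```

The recording step scales across CPU cores; only the final hand-off is serialized, which is the point both posters agree on.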

    Quote Originally Posted by Zenny View Post
    Actual compute work done by a Nvidia GPU is highly parallel:

    http://www.nvidia.com/object/cuda_home_new.html
    https://www.udacity.com/course/intro...ramming--cs344
    http://www.nvidia.com/object/what-is-gpu-computing.html
    http://www.nvidia.in/docs/IO/124986/...rief-chart.png
    https://www.custompcreview.com/wp-co...entation-1.jpg

    TL;DR:

    1. Nvidia uses a software compiler to schedule instructions to the GPU.
    2. Splitting work to SM clusters is still done in hardware by warp schedulers.
    3. Task submission is serial in DX11 and more parallel in DX12, but it still needs a final serial step.
    4. Compute work on Nvidia GPUs is parallel.
    I never suggested nVidia cannot execute in parallel, just that it cannot process in parallel due to its architecture design; read the last paragraph of what you quoted, it's in there specifically.

    You're on the same page as me, but you're either skimming the posts or misreading them (after you figured out it is indeed serial processing).

  2. #782
    I agree there's no way Vega will stay at MSRP,

    due to both AMD and partners wanting more margins, as well as the mining craze


    it will land where 480/580/1070 have landed already

  3. #783
    So by the time the next iteration of AMD cards releases, they will be 3 years behind, plus highly inefficient power usage (assuming the 1080 is about 1.5 years old). Right now they are a minimum of 30% behind, being generous.

    I believe we'll get an 1180 ti long before these cards are upgraded, so 70% behind before next AMD gen?

    Hard sell at half a grand.

    Who has both the resources and the motivation to spend 500 bucks on a video card, but ignores the 1080 Ti? At or above that price range, people usually want the best within reason.
    Last edited by Zenfoldor; 2017-08-15 at 01:49 PM.

  4. #784
    Warchief Zenny's Avatar
    10+ Year Old Account
    Join Date
    Oct 2011
    Location
    South Africa
    Posts
    2,171
    Quote Originally Posted by Evildeffy View Post
    I never suggested nVidia cannot execute in parallel, just that it cannot process in parallel due to its architecture design; read the last paragraph of what you quoted, it's in there specifically.

    You're on the same page as me, but you're either skimming the posts or misreading them (after you figured out it is indeed serial processing).
    What you call serial processing is a limitation on both AMD and Nvidia due to the way modern APIs work. The final command list submission to GPUs is a serial process. Nvidia does this through a compiler while AMD has hardware scheduling. Neither is inherently more "parallel" than the other; Nvidia is perfectly capable of submitting and executing multiple command lists with its driver in DX12.

    Nvidia's driver-based approach appears to have such tiny overhead that it easily keeps pace with AMD's hardware-based solution, while still having the benefit of saving die space and being more flexible in more limiting APIs such as DX11. It certainly seems to be working for them: AMD GPUs don't outperform Nvidia GPUs in Vulkan or DX12 when comparing similar chips; a 1060 and a 580 are very close in pure DX12 games despite the latter having a 30% TFLOP advantage. Another example: while Vega 64 and the 1080 Ti are very similar in pure TFLOP numbers, the latter destroys the former in basically any game, no matter if it's DX11, DX12 or Vulkan.

    Instruction hardware schedulers do not make an AMD GPU more parallel than a corresponding Nvidia one. It's also a misnomer to label Nvidia's as a pure software scheduler when it still has warp schedulers in the hardware. It's more accurate to label it as a hybrid approach.
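For reference, the TFLOP figures being traded here come from simple arithmetic: peak FP32 throughput is shader count × 2 ops per clock (one fused multiply-add) × clock speed. A quick sketch with nominal boost clocks; the exact gap shifts depending on which clocks you pick, so the "30%" above should be read as approximate.

```python
# Back-of-envelope peak FP32 throughput: shaders * 2 (FMA) * clock.
# Clock values below are nominal boost clocks and only approximate.
def peak_tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000.0

rx_580 = peak_tflops(2304, 1.340)    # roughly 6.2 TFLOPS
gtx_1060 = peak_tflops(1280, 1.708)  # roughly 4.4 TFLOPS
print(rx_580 / gtx_1060)  # the 580's paper advantage, ~1.4x here
```

Paper TFLOPS say nothing about how well a game can keep the shaders fed, which is exactly why the two cards land so close in practice.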

  5. #785
    The Lightbringer Shakadam's Avatar
    10+ Year Old Account
    Join Date
    Oct 2009
    Location
    Finland
    Posts
    3,300
    Quote Originally Posted by Tiberria View Post
    I also find the GSync vs FreeSync thing to be kind of exaggerated. For one thing, a big chunk of buyers don't understand/care about it. For another, there are options for getting GSync monitors like the Dell S2716DG that don't really have a GSync price premium attached to it - especially if you can get it on one of its many sales.
    I'd say there's very much a G-sync premium on a 27" 1440p 144Hz TN monitor which costs ~630€. A comparable Freesync monitor is less than 500€.

  6. #786
    yeah, I think someone already prepared to spend $500 is much more likely to go up to $700+ for a 1080 Ti than someone whose max budget is $300-400


    I still think anything near ~$700 (or maybe $750 if we stretch it) for an AIB card with a good cooler is a fair-ish price for 1080 Ti-level performance... regardless of the competition's line-up

    Titans are of course overpriced and the original pre-pricedrop 1080 @ $650-700 back in June 2016 also was, but the 1080Ti is alright in my book



    if all goes to plan (aka we get HDMI 2.1 4K @ true 120/144 Hz OLED TVs in 2018), then if the 2080 Ti gives at least a 40% (ideally closer to 50%) perf increase @ 4K over the 1080 Ti, has HDMI 2.1 ports, and supports VRR game mode over HDMI 2.1, Huang will get my money for sure

    but if Volta won't support VRR over HDMI 2.1, then I will actually be forced to wait for Navi


    and then I won't upgrade anything for at least 5 or 6 years after this

  7. #787
    The Lightbringer Shakadam's Avatar
    10+ Year Old Account
    Join Date
    Oct 2009
    Location
    Finland
    Posts
    3,300
    Man, not even 10 years ago I bought a couple of brand new Radeon 4850 (which was the 2nd fastest card on the market at the time) for 180€ each. Those were the days.

    It's kinda ridiculous how expensive graphics cards have become in the last few years. I wonder at what point we started looking at a $700 price tag and thinking "Yep, that's reasonable!".

  8. #788
    The Lightbringer Evildeffy's Avatar
    15+ Year Old Account
    Join Date
    Jan 2009
    Location
    Nieuwegein, Netherlands
    Posts
    3,772
    Quote Originally Posted by Zenny View Post
    What you call serial processing is a limitation on both AMD and Nvidia due to the way modern APIs work. The final command list submission to GPUs is a serial process. Nvidia does this through a compiler while AMD has hardware scheduling. Neither is inherently more "parallel" than the other; Nvidia is perfectly capable of submitting and executing multiple command lists with its driver in DX12.
    The difference is in process execution; I specifically mentioned latency rather than throughput.
    AMD has more parallel hardware and, like I said before (which you seem to be confusing with gaming performance), in compute terms AMD is, and almost always has been, stronger, if not for the lack of an ecosystem.

    And like I said before... optimizations don't always stick fully to API rules & regs on PC. Consoles I'm unsure of, but PC is more flexible in that regard.
    Otherwise games wouldn't be able to schedule CPU tasks differently from how Windows does it, for example.

    Quote Originally Posted by Thunderball View Post
    Nvidia's driver-based approach appears to have such tiny overhead that it easily keeps pace with AMD's hardware-based solution, while still having the benefit of saving die space and being more flexible in more limiting APIs such as DX11. It certainly seems to be working for them: AMD GPUs don't outperform Nvidia GPUs in Vulkan or DX12 when comparing similar chips; a 1060 and a 580 are very close in pure DX12 games despite the latter having a 30% TFLOP advantage. Another example: while Vega 64 and the 1080 Ti are very similar in pure TFLOP numbers, the latter destroys the former in basically any game, no matter if it's DX11, DX12 or Vulkan.
    Again compute != gaming performance, even though in Vulkan AMD cards tend to outclass nVidia quite handily ... the only problem with this is we only have 1 Vulkan game out so it's hard to "compare".
    That said, I never stated nVidia didn't optimize their software scheduling; they have, to a very good degree in fact. I've only mentioned the differences along with Pros and Cons... not "ZOMG IT'S SUPERIOR!!111oneoneone".

    Quote Originally Posted by Thunderball View Post
    Instruction hardware schedulers do not make an AMD GPU more parallel than a corresponding Nvidia one. It's also a misnomer to label Nvidia's as a pure software scheduler when it still has warp schedulers in the hardware. It's more accurate to label it as a hybrid approach.
    First part:
    It does when the entire GPU uArch is designed and built around this fact and the competition's is not.

    Second part:
    Not really, as the Warp Schedulers still have to receive commands from the driver in order to work; calling it a hybrid approach is... a little too friendly.

  9. #789
    The Unstoppable Force Gaidax's Avatar
    10+ Year Old Account
    Join Date
    Sep 2013
    Location
    Israel
    Posts
    20,865
    Quote Originally Posted by Shakadam View Post
    Man, not even 10 years ago I bought a couple of brand new Radeon 4850 (which was the 2nd fastest card on the market at the time) for 180€ each. Those were the days.

    It's kinda ridiculous how expensive graphics cards have become in the last few years. I wonder at what point we started looking at a $700 price tag and thinking "Yep, that's reasonable!".
    10 years ago ATI/Nvidia weren't tossing billions into fire to progress, things are a lot more complex and thus expensive to begin with now, compared to back when 1080p screens were considered frikkin' enthusiast shit.

  10. #790
    The Lightbringer Evildeffy's Avatar
    15+ Year Old Account
    Join Date
    Jan 2009
    Location
    Nieuwegein, Netherlands
    Posts
    3,772
    http://wccftech.com/amds-rx-vega-64s...ductory-offer/

    Ugh... looks like they aren't rumours... that sucks ass.

  11. #791
    Deleted
    Quote Originally Posted by Zenny View Post
    Well despite all the doom and gloom, the Vega 56 and 64 are viable competitors to the 1070 and 1080 respectively. That is only true if you can get them at MSRP of course and don't really care about the power draw.

    I still think AMD should have offered the 56 at the same MSRP as the 1070 ($349) and the 64 slightly lower than the 1080, at $449. It would have made both cards a better price/performance prospect. But I guess AMD can't make much profit at those prices with HBM2 and the huge die.
    They aren't competitors at all. Vega costs too much currently. Vega 64 currently costs 150€ more than the cheapest 1080 and 200€ more than the special offers 3 months ago.

    No reason to move.

  12. #792
    The Unstoppable Force Gaidax's Avatar
    10+ Year Old Account
    Join Date
    Sep 2013
    Location
    Israel
    Posts
    20,865
    Quote Originally Posted by Evildeffy View Post
    http://wccftech.com/amds-rx-vega-64s...ductory-offer/

    Ugh... looks like they aren't rumours... that sucks ass.
    Seems like AMD really pulls no punches with Vega. They seem to use just about any underhanded trick to make Vega look better than it really is, even if it means effectively lying about the price to generate value hype.

    I was never as negative about AMD as last year, when this whole bullshit train started with them trashtalking and deceiving everyone for a year plus. Vega is not only a technological but also a moral failure in my eyes.

  13. #793
    Are these new prices true?

    I was planning to buy a Vega 64 + a FreeSync 144Hz 1440p monitor, but I guess I am forced to buy a 1070 with a G-Sync 1080p 144Hz monitor...
    "It is always darkest just before the dawn " ~Thomas Fuller

  14. #794
    Quote Originally Posted by a C e View Post
    Are these new prices true?

    I was planning to buy a Vega 64 + a FreeSync 144Hz 1440p monitor, but I guess I am forced to buy a 1070 with a G-Sync 1080p 144Hz monitor...
    It looks like it; at least one EU shop that had stock yesterday for, I believe, €549 now lists the black version for €649.

  15. #795
    there were already rumors that the "true" price of the Vega 64 air would end up being $600 after all, but if this was deliberately and consciously done by AMD, smh

  16. #796
    So this is what it's like to launch a DoA product...

  17. #797
    Deleted
    Quote Originally Posted by Hextor View Post
    So this is what it's like to launch a DoA product...
    It's sold out; not sure you can call that DOA.

  18. #798
    The Lightbringer Artorius's Avatar
    10+ Year Old Account
    Join Date
    Dec 2012
    Location
    Natal, Brazil
    Posts
    3,781
    So, Vega launch summary.

    Before I start, just some terminology. When I say "GPU" I'm talking about the GPUs themselves; Vega 10 (or V10, for short) and GP102 are good examples. When I say VGA, SKU or graphics card, I'm talking about the actual cards, the products we can buy as consumers (oh, really?).

    So, first a brief analysis as an engineer.

    GCN5 (Vega) is not exactly the impressive overhaul we were expecting.

    I'm not going to say their efforts are in vain, because they did address some of their problems; taking into consideration their ridiculously limited budget compared to the competition, I guess that's to be expected. Also, they might want you to believe Vega 10 is something new, but in reality it's basically an improved Fiji XT. Both Vega 10 and Fiji XT have 4096 SPs, 64 CUs and ROPs, 256 TUs, and both have HBM memory controllers. Is it a coincidence? No, not really. The difference between them is basically HBCC, tile-based rasterization, 6 CU arrays per compute engine instead of 4, 4MB of L2 cache instead of 2MB, HBM2 (even if it didn't reach the intended spec) instead of HBM, higher-performance libraries to keep latency sane at high clocks (which also decrease perf/clock, but Nvidia did the same thing going from Maxwell to Pascal and it worked well enough), and the Infinity Fabric to connect things internally.

    Wait, HBM2 didn't reach its intended spec? No. The JEDEC spec was 2 GT/sec, but AMD is having to overvolt and overclock their HBM2 stacks in order to reach 1.875 GT/sec. But really, overclocking it further doesn't really increase performance that much, Vega's problems aren't that tightly related to this. It's more related to the fact that Vega 10 has to be their top GPU for gamers, professionals and servers. AMD has to make it able to do everything well (which also comes at a power consumption cost) and at the same time they don't exactly have the manpower to optimize their gaming drivers, and in the end it doesn't really do that well in anything.

    Wait, isn't the cut-down Vega 10 (Vega 56 SKU) beating cut-down GP104 (GTX1070 SKU)? Yes it is. But let's not forget the die is 174mm² larger and the power consumption 90W higher.

    Vega 10 was never supposed to be going against GP104 to begin with, the die is large, it's power hungry, the PCBs for it are all incredibly well designed too. This is the GPU that had to be going against GP102, but somehow it didn't exactly turn out well enough to do so and they had to put it against 104, where it still performed better despite the huge discrepancy in power consumption.

    AMD's full hardware scheduling is much harder to effectively optimize than Nvidia's. Vega can still only do 4 triangles/clock, and only has 64 ROPs while GP102 has 96. It's relatively clear that they don't really care about gaming as much as they care about the other markets, Vega 10 isn't a GPU designed with gaming in mind, it's a GPU that can do gaming while it also does other things.

    We could say that performance is going to get better with time as drivers mature, but even if this is true, Nvidia will just release Volta soon enough to counter whatever performance improvement AMD manages to get out of Vega with drivers. But don't misunderstand: Vega 10 is a monster at 13.7 TFLOPS in FP32 (compared to 12 TFLOPS from the full-die GP102 in the Titan Xp); Vega is decent when it comes to compute tasks.
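Those FP32 numbers fall straight out of the usual shaders × 2 ops per clock (FMA) × clock arithmetic. A quick check, with the caveat that the boost clocks used are nominal values, so these are paper figures only:

```python
# Peak FP32 TFLOPS = shaders * 2 ops per clock (FMA) * clock in GHz.
def peak_tflops(shaders, clock_ghz):
    return round(shaders * 2 * clock_ghz / 1000.0, 1)

vega_64 = peak_tflops(4096, 1.677)   # full Vega 10 at ~1677 MHz boost
titan_xp = peak_tflops(3840, 1.582)  # full GP102 at ~1582 MHz boost
print(vega_64, titan_xp)  # 13.7 12.1
```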

    Now as a consumer

    The RX Vega 56 SKU is a really good value if power consumption isn't important to you and you manage to buy it close to its MSRP. It beats the 1070 in basically everything, and by a nice margin. But I wouldn't buy the reference card: the BIOS has a hard power limit at 300W, which makes you unable to overclock the GPU without hitting a power wall, so you can't really extract all the performance of the GPU. It's a shame, because maybe it could reach the 1080 in games, although still consuming much more power. In this case it all comes down to how much it costs compared to the 1070.

    The RX Vega 64 SKU, though, is a hard sell. It's expensive, and it isn't beating the 1080 consistently; it trades blows with it. You can expect it to beat the 1080 in any game sponsored by AMD or any game that is good with Vulkan/D3D12, but you can also expect it to lose to the 1080 in any game sponsored by Nvidia or simply not well optimized for AMD. And while on average they're basically the same in performance, title-to-title variance isn't exactly small. The 1080 is relatively consistent and consumes less power; there's not much of a reason to buy the Vega 64 unless you really want to do something else with it.
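"Same on average, large title-to-title variance" is easy to make concrete. The ratios below are hypothetical Vega 64 ÷ GTX 1080 fps numbers, invented purely to illustrate the point, not taken from any review:

```python
import statistics

# Hypothetical per-game Vega 64 / GTX 1080 fps ratios, illustration only.
ratios = [1.10, 0.92, 1.05, 0.88, 1.12, 0.95]

print(round(statistics.mean(ratios), 2))   # close to 1.0: a wash on average
print(round(statistics.stdev(ratios), 2))  # ~0.10: big swings per title
```

A near-1.0 mean with a spread like that is exactly the "trades blows" pattern: which card wins depends heavily on which game you happen to play.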

    Also, I'm not really sure about this but apparently the RX cards don't really have any driver side limitations and are all performing relatively in-line with the Frontier Edition, so maybe they make more sense if you're buying them for GPGPU.

    I think I'm probably forgetting some things, but since most people here don't really care that much about the technology itself and just want to know what to spend their money on, it's probably fine. In any case, there are some very good reviews out there, perhaps it's a good idea to leave some useful links.

    Ryan's review (AnandTech)

    Steve Burke's review (GamersNexus)
    Last edited by Artorius; 2017-08-15 at 06:04 PM.

  19. #799
    well if their gaming marketshare falls even lower due to Vega ..

  20. #800
    Deleted
    Quote Originally Posted by Life-Binder View Post
    well if their gaming marketshare falls even lower due to Vega ..
    Not sure they would care too much if the cards are selling; AIBs will want in on that action, and if they are making a profit right now, a quick and consistent one at that, then it doesn't matter. Also, AMD pretty much have contracts with MS, Sony and Apple when it comes to non-data-centre GPU tasks, so I don't think they need to worry.
