  1. #521
    Quote Originally Posted by dadev View Post
    Can this be taken as an official AMD statement?
    Somehow - http://gpuopen.com/concurrent-execut.../#comment-6937

    I can remember reading a reply from AMD itself somewhere, but I can't recall where it was. Sorry.

    Do explain how you force async execution in DX12. I'm now working on an engine that is built for modern APIs (DX12 included) and I have literally no idea what you're talking about.
    The easiest way would be to run a test on the GPU with some async compute work and, if it can't respond within a given time, treat it as not capable of async compute. Or just read the vendor ID and stop the program. In AotS the driver was reporting that it could do async compute on an nvidia GPU, but the performance broke down like nothing else. That was at the end of 2015 / start of 2016 and hasn't changed much since. AFAIK AC is still disabled in AotS.
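    To make the "read the vendor ID" part concrete, here's a rough sketch of how an engine could query it through DXGI at startup. This is just my own illustration (not from AotS or any real engine), and the timed async compute test is only hinted at in a comment:
    Code:
    // Hedged sketch: enumerate adapters via DXGI and key an async-compute
    // default off the PCI vendor ID (0x10DE = NVIDIA, 0x1002 = AMD).
    #include <cstdio>
    #include <dxgi.h>
    #include <wrl/client.h>
    #pragma comment(lib, "dxgi.lib")

    using Microsoft::WRL::ComPtr;

    int main()
    {
        constexpr UINT kVendorNvidia = 0x10DE;
        constexpr UINT kVendorAmd    = 0x1002;

        ComPtr<IDXGIFactory1> factory;
        if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
            return 1;

        ComPtr<IDXGIAdapter1> adapter;
        for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
        {
            DXGI_ADAPTER_DESC1 desc{};
            adapter->GetDesc1(&desc);
            if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
                continue; // skip WARP / software adapters

            // An engine could default its async-compute toggle per vendor here,
            // or confirm it with a short timed graphics+compute benchmark at startup.
            const char* vendor = desc.VendorId == kVendorNvidia ? "NVIDIA"
                               : desc.VendorId == kVendorAmd    ? "AMD"
                               : "other";
            std::printf("Adapter %u: vendor 0x%04X (%s)\n", i, desc.VendorId, vendor);
        }
        return 0;
    }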

    Or maybe, just maybe, because you can't force it on?
    You can write async code as much as you like, and if it performs badly on nvidia they will serialize it in the driver. And that will be 100% within the specification. If you fail at optimizing for specific hardware then the vendor is free to disable your hoop jumping in that specific case. Note that I wrote vendor: if the same happened with AMD, they would do it too.
    Nvidia's AC implementation is really bad and can't be fixed in software, because it's a hardware limit. Normally you should be able to send the workloads to the GPU and it should be able to run compute and graphics workloads at the same time. Nvidia has just one pipeline and has to switch between compute and graphics workloads, because it can't run both at the same time without switching. That results in longer queues until the data gets worked on, longer response times and worse performance. The upside is that this kind of architecture is really well optimized for DX11 and is extremely efficient in current games. But the more games use async compute heavily, the less nvidia can optimize around it or handle it another way like AotS does.

    So even if you can't really enforce AC (if the driver tells you it can run it, then it should, right?), if the performance drops hard because of the increased workloads in new games, the people playing the game will just turn it off. I remember news from back then that AotS performance was something like -60% with async turned on.

    Don't get me wrong, I'm not a fan of nvidia, but that's because of their behavior as a company, not because of their hardware. Their current gen is a beast in performance and I haven't seen such a big increase over the previous gen in a long time.

    IMHO their new gen with full async compute etc. won't run as efficiently as their current gen does and might even lose a bit in DX11. Even they can't work miracles, and if you build a multipurpose architecture you can't optimize it as aggressively anymore. That's why GCN cards usually run better in newer (or even older) games over time: they have more raw power to crunch them, even without specific optimization. And Nvidia (or any other company) won't optimize their old generations for current games, except maybe if there's some huge bug.

    - - - Updated - - -

    Quote Originally Posted by dadev View Post
    Eh? Put the wrong barrier on an AMD card and you get a black screen (and that's a good state, considering that not too long ago they could BSOD), while NVidia usually continues on. Not even a year ago, every slight deviation from the OpenGL spec on an AMD gpu would almost certainly not work. I really doubt the situation has changed much, seeing how OpenGL fares in general (dead spec).
    Of course you don't want bugs when you're about to ship, but what good does that do when you want to get some work done?
    With their new drivers, after the Catalyst era, they improved by a huge margin on recovering when the driver hangs itself. That GPU recovery was something Nvidia did quite a bit better for a while. As for the OpenGL implementation... yeah, AMD sucks there, especially in performance. But that's partly because they honor the spec and, unlike nvidia, don't work around it in many cases (including some dirty fixes). Still, AMD's OpenGL sucks. That's why they are working so hard on Vulkan. Makes no sense to ride a dead horse - I'd guess their OpenGL driver is a clusterfuck.

    If you want proof from more rigid segments, go to a hospital, take a look at some class 3 medical equipment that does graphics and check its gpu. You will very rarely find an AMD gpu inside (if at all! I know I've never seen one). You know why? Because people usually don't develop on it, hence it doesn't go through the FDA, and hence it can't be swapped in after you're done developing on NVidia.
    When we supported the medical clinics around my town (we had a contract for their main institute and all the others around it), I never encountered AMD or Nvidia. In most cases they all used Matrox GPUs, and depending on what was running, sometimes ASpeed or even the Intel GPU. Some equipment like the MRI / CRT screens was closed off, of course, but I doubt they used anything from nvidia or AMD. Even the ultrasound device with 3D rendering was using a Matrox.

    For the CAD itself the gpu vendor is not important, but when you run the engine to see how your glorious animation looks in it, then it can be important.
    However, why on earth would you need a Quadro to do artist work? That's an insane waste of money. Save by going FirePro? Save by buying consumer-level cards instead; we're not in the stone age when quadros were vastly superior in graphics tasks. Pretty sure it's cheaper to license a new Maya than to buy a bunch of quadros. Modern CADs no longer use the fixed pipeline. Not sure what reason there is to buy a quadro nowadays except for supporting old CADs, doing double precision stuff or deep learning, with the last two being 100% irrelevant to artists.
    Programs like Maya or Autodesk CAD need a certified GPU to even run at full speed. From what I remember, consumer cards could run those programs in the past, but in many cases they had trouble and errors with the realtime rendering. I doubt that has changed much, because the FirePro or Quadro drivers are made specifically for those programs and certified for a reason. Also, even today those cards are way faster than the usual consumer card.

    That was around 2014 or 2015. Just for the performance side, here's a SolidWorks report I saved: https://www.pugetsystems.com/labs/ar...orks-2016-751/ (we needed that for our production).

    It's not only the driver - if you work in a professional environment, you won't get any support from Autodesk or the GPU maker if you use a non-certified card or system. They just outright refuse you. We also had the idea of running SolidWorks (older versions) with consumer GPUs - in this case Nvidia - but after about 2 months we bought FirePro GPUs, because the colleagues were raging about the consumer cards and their errors. Performance was OK, but the errors and wrong renderings were not. Don't ask me for the details, I don't know them; I had to trust our 3 engineers. And don't get me wrong, this was not a manufacturer problem; we only went with FirePro because at the time it had a better price/performance ratio than nvidia's Quadro (with those it seems you pay for the name...).

    And finally, why would NVidia fuck up CUDA? What incentive do they have? Why are you worried about CUDA being a closed standard but fine with DX? Fine with a closed platform, namely the XBone. And finally fine with a super closed platform, namely the PS4 (you're signing an NDA, for crying out loud)? Are you not worried that Sony will fuck up the next patch and your game will be unplayable? What kind of rubbish is this?
    CUDA is used in medical devices; NVidia couldn't fuck up CUDA even if they wanted to. CUDA has been running for 8 major versions now and so far they didn't fuck up anything. In fact there's version 8 for OSX; now tell me, what Pascal card is there in Macs?
    CUDA has nothing to do with DX or the XBone or the PS4 (and the PS4 runs FreeBSD, by the way). The direct competitor to CUDA is OpenCL, and what I don't like is that CUDA is closed source and only works with Nvidia cards. So to use it, you have to keep using nvidia in the future, no matter what comes. And I didn't say they fucked something up, I just asked what happens if they do. What if they decide to change CUDA for new hardware in a way that isn't backwards compatible? Or stall CUDA development and let it die out because they have something new they can sell better, or they switch to OpenCL? If you use CUDA, you also have to use their dev tools and the rest of their stack. You can't use a third party's framework because it's better, or because the other one isn't being developed anymore. I was never a fan of binding yourself to a specific company; I like open standards. But that's a personal, subjective opinion. CUDA is shit in the sense of being a closed system, not in terms of performance etc.

    And for Macs - there is currently no nvidia card in any Mac; Apple switched to AMD quite a while ago. The CUDA drivers are only for some older cards and Quadros (IF the nvidia page is correct). Their current GPU driver for macOS won't even work with any 10x0 card and still has some errors with the previous gen.
    "Who am I? I am Susan Ivanova, Commander, daughter of Andrej and Sophie Ivanov. I am the right hand of vengeance and the boot that is gonna kick your sorry ass all the way back to Earth, sweetheart. I am death incarnate and the last living thing that you are ever going to see. God sent me." - Susan Ivanova, Between the Darkness and the Light, Babylon 5

    "Only one human captain ever survived a battle with a Minbari fleet. He is behind me! You are in front of me! If you value your lives - be somewhere else!" - Delenn, Severed Dreams, Babylon 5

  2. #522
    AMD is 2 years behind Intel on product and 10 years behind on making decent drivers. No decent customer support/service and no quality control.
    I have no interest in AMD.

  3. #523
    Quote Originally Posted by parlaa View Post
    AMD is 2 years behind Intel on product and 10 years behind on making decent drivers. No decent customer support/service and no quality control.
    I have no interest in AMD.
    that awkward moment when AMD drivers are better than Nvidia's /facepalm

  4. #524
    Quote Originally Posted by Kalodrei View Post
    that awkward moment when AMD drivers are better than Nvidias /facepalm
    Rofl, I had this discussion 2 weeks ago with a friend who had an AMD graphics card. The drivers literally crashed while he was talking about how great they were.

  5. #525
    Quote Originally Posted by parlaa View Post
    Rofl, I had this discussion 2 weeks ago with a friend who had an AMD graphics card. The drivers literally crashed while he was talking about how great they were.
    and the same exact thing has not happened to nvidia users recently? Nvidia recently released drivers that actually FRIED CARDS. So how exactly is nvidia better? I mean sure, drivers crash sometimes, but it happens to both sides.

    Personally, I have not had any issues with my nvidia card, but neither did I have any issues with my AMD card when I had it. My friend has an AMD card and he has not had any issues with it either. However, all that is anecdotal and doesn't really mean much. What does mean something is that nvidia fried cards with their drivers.

  6. #526
    Quote Originally Posted by parlaa View Post
    Rofl, I had this discussion 2 weeks ago with a friend who had an AMD graphics card. The drivers literally crashed while he was talking about how great they were.
    Quote Originally Posted by Lathais View Post
    and the same exact thing has not happened to nvidia users recently? Nvidia recently released drivers that actually FRIED CARDS. So how exactly is nvidia better? I mean sure, drivers crash sometimes, but it happens to both sides.

    Personally, I have not had any issues with my nvidia card, but neither did I have any issues with my AMD card when I had it. My friend has an AMD card and he has not had any issues with it either. However, all that is anecdotal and doesn't really mean much. What does mean something is that nvidia fried cards with their drivers.
    In recent history nvidia has fried cards on multiple occasions with drivers. There was also the issue of a driver update dramatically decreasing older GPUs' performance, which took time to get fixed, not to mention the 3.5GB debacle or the "we can do async compute in drivers" BS. So yeah, nvidia's drivers are not all that good and the company has some shady practices.

    I can't speak much about AMD, but at least on the surface they look decent - granted, that might be because they are not in a position to be dicks. My friend with an AMD GPU never complained about drivers. I, on the other hand, have had a few crashes on nvidia... these things just happen on both sides.

  7. #527
    Quote Originally Posted by Maerad View Post
    Somehow - http://gpuopen.com/concurrent-execut.../#comment-6937

    I can remember reading a reply from AMD itself somewhere, but I can't recall where it was. Sorry.
    He says they discussed how to identify the issues, but it doesn't look like they committed to fixing it. I wouldn't complain if they fixed it (better for me), but this isn't that, and it's not a promise to do it.

    The easiest way would be to run a test on the GPU with some async compute work and, if it can't respond within a given time, treat it as not capable of async compute.
    But still, this will not allow me to force it on. If NVidia or AMD decides to serialize there's nothing I can do about it. I can design the engine around async as much as I like, in the end the driver can decide against my grand schemes.

    Or just read the vendor ID and stop the program.
    Do you mean stop the game because I think there's no async compute? If so, why on earth would I do that?

    In AotS the driver was reporting that it could do async compute on an nvidia GPU, but the performance broke down like nothing else. That was at the end of 2015 / start of 2016 and hasn't changed much since. AFAIK AC is still disabled in AotS.
    Seems like it's enabled. https://www.computerbase.de/2016-05/...sync-compute_2
    It's in German, but the graphs are readable.

    Nvidia's AC implementation is really bad and can't be fixed in software, because it's a hardware limit. Normally you should be able to send the workloads to the GPU and it should be able to run compute and graphics workloads at the same time.

    Nvidia has just one pipeline and has to switch between compute and graphics workloads, because it can't run both at the same time without switching. That results in longer queues until the data gets worked on, longer response times and worse performance. The upside is that this kind of architecture is really well optimized for DX11 and is extremely efficient in current games. But the more games use async compute heavily, the less nvidia can optimize around it or handle it another way like AotS does.

    So even if you can't really enforce AC (if the driver tells you it can run it, then it should, right?), if the performance drops hard because of the increased workloads in new games, the people playing the game will just turn it off. I remember news from back then that AotS performance was something like -60% with async turned on.
    As far as I'm concerned, in DX12 I do exactly that: I execute two command lists at the same time on two different queues. What the driver does after that I cannot control, nor do I usually care, as long as the results are correct and executed fast enough. If NVidia's driver suddenly started to crash (or just mess up) because I do something I should be allowed to do, then we'd have a problem and I'd raise it with NVidia, but as long as that doesn't happen it's all good as far as the game is concerned.
    What I think (or don't think) about NVidia's async compute implementation has nothing to do with shipping a title.
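    For anyone following along, this is roughly what "two command lists on two different queues" looks like in D3D12. It's a stripped-down sketch, not actual engine code: the queues would normally be created once at init, the command lists are assumed to be recorded elsewhere, and the cross-queue fence sync is left out.
    Code:
    #include <d3d12.h>
    #include <wrl/client.h>
    #pragma comment(lib, "d3d12.lib")

    using Microsoft::WRL::ComPtr;

    // Sketch only: submit one graphics list and one compute list on separate
    // queues. The API promises correct results (given proper fences/barriers),
    // not that the two actually overlap on the GPU -- the driver may serialize.
    void SubmitAsyncWork(ID3D12Device* device,
                         ID3D12GraphicsCommandList* gfxList,      // recorded elsewhere
                         ID3D12GraphicsCommandList* computeList)  // recorded elsewhere
    {
        D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
        gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;    // graphics + compute + copy
        D3D12_COMMAND_QUEUE_DESC compDesc = {};
        compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute + copy only

        ComPtr<ID3D12CommandQueue> gfxQueue, computeQueue;
        device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));
        device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(&computeQueue));

        ID3D12CommandList* gfx[]  = { gfxList };
        ID3D12CommandList* comp[] = { computeList };
        gfxQueue->ExecuteCommandLists(1, gfx);
        computeQueue->ExecuteCommandLists(1, comp);

        // Cross-queue sync with ID3D12Fence (Signal on one queue, Wait on the
        // other) omitted for brevity.
    }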

    Don't get me wrong, I'm not a fan of nvidia, but that's because of their behavior as a company, not because of their hardware.
    I don't think there's one big company that can be considered really honest. PR and marketing are almost synonymous with dishonesty. AMD has fewer resources, so their shenanigans are usually limited to inciting the community.
    But just so you know, NVidia spends a crapton of money (feels like way more than AMD and Intel combined) on their devtech team; they're actually willing to put in the effort to re-optimize for their hardware, since almost all titles are initially optimized for AMD because of the consoles.

    - - - Updated - - -

    Quote Originally Posted by Maerad View Post
    When we supported the medical clinics around my town (we had a contract for their main institute and all the others around it), I never encountered AMD or Nvidia. In most cases they all used Matrox GPUs, and depending on what was running, sometimes ASpeed or even the Intel GPU. Some equipment like the MRI / CRT screens was closed off, of course, but I doubt they used anything from nvidia or AMD. Even the ultrasound device with 3D rendering was using a Matrox.
    I'm familiar with a few products that use nvidia gpus. Consumer level btw. I don't think they advertise it though.


    Programs like Maya or Autodesk CAD need a certified GPU to even run at full speed. From what I remember, consumer cards could run those programs in the past, but in many cases they had trouble and errors with the realtime rendering. I doubt that has changed much, because the FirePro or Quadro drivers are made specifically for those programs and certified for a reason. Also, even today those cards are way faster than the usual consumer card.

    That was around 2014 or 2015. Just for the performance side, here's a SolidWorks report I saved: https://www.pugetsystems.com/labs/ar...orks-2016-751/ (we needed that for our production).
    That was true for Maya and 3ds until 5 or 6 years ago. Clearly Solidworks is ill suited for making game assets.
    Since the viewport change in Maya, quadro-level cards have no real edge over consumer-level gpus.
    Compare:
    https://www.pugetsystems.com/labs/ar...rformance-816/
    https://www.pugetsystems.com/labs/ar...rformance-822/
    That M6000 is $5k, for crying out loud. You can find 1080s for a tenth of that price if you buy in stock.

    It's not only the driver - if you work in a professional environment, you won't get any support from Autodesk or the GPU maker if you use a non-certified card or system. They just outright refuse you. We also had the idea of running SolidWorks (older versions) with consumer GPUs - in this case Nvidia - but after about 2 months we bought FirePro GPUs, because the colleagues were raging about the consumer cards and their errors. Performance was OK, but the errors and wrong renderings were not. Don't ask me for the details, I don't know them; I had to trust our 3 engineers. And don't get me wrong, this was not a manufacturer problem; we only went with FirePro because at the time it had a better price/performance ratio than nvidia's Quadro (with those it seems you pay for the name...).
    Maybe for Solidworks stuff (especially older versions - a big no-no for consumer-level cards); with Maya or 3ds there are practically no issues.

    CUDA has nothing to do with DX or the XBone or the PS4 (and the PS4 runs FreeBSD, by the way).
    It certainly has some things to do with them; at the very least it's a closed ecosystem. The PS4's OS being FreeBSD (or some derivative) doesn't really change the fact that it's a fully closed and locked-down system.
    Secondly, it's an interface for programming the GPU. DX is also an interface for programming the GPU. It might not be a direct competitor to CUDA because it's made for somewhat different things, but in essence they do similar things, that is, send commands to the GPU.

    But it doesn't really matter. I brought it up because they're all closed technologies, and most people don't have a problem basing their stuff on closed technologies, because no matter how you spin it there will be very significant parts of your system that are completely closed.
    You work with solidworks? closed.
    You have intel or amd cpu? closed.
    You have nvidia, amd or intel gpu? closed.
    You have windows? closed. Apple? still closed.
    You use a non-llvm or non-gnu compiler? closed.

    The direct competitor to CUDA is OpenCL, and what I don't like is that CUDA is closed source and only works with Nvidia cards. So to use it, you have to keep using nvidia in the future, no matter what comes. And I didn't say they fucked something up, I just asked what happens if they do. What if they decide to change CUDA for new hardware in a way that isn't backwards compatible?
    But the previous CUDA will still be compatible with the previous hardware, and that's what matters. Currently we have an internal GI generator that works on CUDA; tomorrow nvidia could come out with CUDA2000 that is fully incompatible with the current one and works only on Pascals. I wouldn't care. Our current thing works and will continue to work until we decide to make a new one.

    If you use CUDA, you also have to use their dev tools and the rest of their stack. You can't use a third party's framework because it's better, or because the other one isn't being developed anymore.
    But neither are console frameworks open, nor do they have alternatives.

    And for Macs - there is currently no nvidia card in any Mac; Apple switched to AMD quite a while ago. The CUDA drivers are only for some older cards and Quadros (IF the nvidia page is correct). Their current GPU driver for macOS won't even work with any 10x0 card and still has some errors with the previous gen.
    Exactly! So why does NVidia still support the new macOS with the newest CUDA?

  8. #528
    Can I ask.. Why are we talking about Radeon and Nvidia in the Ryzen thread?

  9. #529
    Quote Originally Posted by mrgreenthump View Post
    Can I ask.. Why are we talking about Radeon and Nvidia in the Ryzen thread?
    The usual: blind people saying they "will not use Zen no matter what, because in recent memory AMD CPUs were subpar and AMD drivers are bad, hence Zen must be bad; AMD is bad, they should just die", then some folks jump in to correct the misinformation and voila, we have an AMD vs nvidia talk.

    Also there is no more zen news so we are bored

  10. #530
    Nope, unless they manage to catch up with Intel through some miracle. Right now AMD just makes budget and niche CPUs.

  11. #531
    Artorius
    The fact that we have the last 2 posts in this specific order is ridiculously hilarious.

    Also, the PS4 doesn't run plain FreeBSD. It runs Orbis, which uses the FreeBSD kernel and is based on it. AMD doesn't even have GPU drivers for FreeBSD as far as I remember; it's all custom.

    - - - Updated - - -

    Also, the 5GHz OC on air was confirmed, although they only had a single core turned on, like @Remilia suggested.
    Last edited by Artorius; 2016-12-30 at 09:43 AM.

  12. #532
    So if you extrapolate down, that puts it about where I figured: 4.0-4.2GHz for a full 8-core overclock.

  13. #533
    Artorius
    It was an ES chip and they couldn't do it with all cores turned on because the VRMs couldn't handle it. Their sample also had bugs with SMT so it was turned off.

  14. #534
    Remilia
    Kind of hard to extrapolate from an A0-revision ES sample. Plus, we don't really have a CPU with a similar architecture to extrapolate from to begin with.
    Last edited by Remilia; 2016-12-30 at 11:39 AM.

  15. #535
    Vash The Stampede
    Quote Originally Posted by parlaa View Post
    Rofl, I had this discussion 2 weeks ago with a friend who had an AMD graphics card. The drivers literally crashed while he was talking about how great they were.
    Cause that's how statistics work? Today it was super cold outside, and therefore global warming is a hoax made up by the Chinese. EVERYONE, parlaa experienced that AMD is bad and you should feel bad for owning one. And by he, I mean his friend.

    Quote Originally Posted by Cherise View Post
    Nope, unless they manage to catch up with Intel through some miracle. Right now AMD just makes budget and niche CPUs.
    From the looks of it, AMD has caught up in terms of IPC. But in terms of raw performance we know Intel has things like 22-core Xeon chips that cost more than your whole PC. Though it'll be interesting to see how AMD's 32-core CPU will challenge that.

  16. #536
    Evildeffy
    Quote Originally Posted by Fascinate View Post
    So if you extrapolate down that puts it about right where i figured, 4.0-4.2ghz for a full 8 core overclock.
    *Twitch* .. no, you cannot in any possible way extrapolate ANYTHING from that.
    It is physically impossible and nothing but guesswork. Extrapolation would mean you had A LOT of samples going through the same procedure, each run documented in detail, then combined and averaged to extrapolate something.

    What you're doing is nothing but a pure guess.

    It could be that the thing doesn't clock above 3.8GHz, or it could clock 5.0GHz on all cores. To claim you can extrapolate anything from a SINGLE engineering sample CPU is dumb, stop fking doing it already.

  17. #537
    Remilia
    Honestly, out of this entire thing, the main thing I'm interested in is a laptop Vega APU with HBM in the 15-35W power envelope. Basically a souped-up Carrizo without the need for DDR-type DRAM. Something like 8 CUs (512 SPs), 8GB of HBM and a 2c/4t setup in a well-made $1000 laptop would be a very good proposition for light gaming and business / school use.

    Now, whether this ever happens is a different thing.

  18. #538
    Vash The Stampede
    AMD's APUs are an afterthought for me. Sure, it'll be more interesting than the 8-core CPU, but AMD needs that 8-core CPU to be seriously fast first. Then yes, a quad-core Zen with Vega CUs using HBM would be fantastically awesome. And you wonder why Intel is licensing Radeon technologies? It would be fantastic if this resulted in a CPU with i7-level performance and RX 480-level graphics. Even at $300, that could change the market drastically.

  19. #539
    Evildeffy
    Posting this here just because I think it will be combo'ed with Ryzen...

    http://ve.ga/

    Incoming CES goodies boys, let's hope AMD can accomplish what they set out to do.

  20. #540
    Remilia
    Interesting video; CES is going to be interesting for sure. Also the "poor Volta" thing is a nice jab.
    Not just CPU/GPU but some other stuff too, though that's me and my own obsession with displays.
    Last edited by Remilia; 2017-01-02 at 04:40 AM.
