For one, there's this - http://gpuopen.com/concurrent-execut.../#comment-6937
I can also remember reading a reply from AMD itself somewhere, but I can't recall where it was. Sorry.
The easiest way to do this would be to run a test on the GPU with some async compute work and, if it can't finish in a given time, treat the GPU as not capable of async compute (see the sketch below). Or just read the vendor ID and stop the program. In AotS the driver reported that it could do async compute on an Nvidia GPU, but the performance broke down like nothing else. That was at the end of 2015 / start of 2016 and hasn't changed much since; AFAIK AC is still disabled in AotS.
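Something like this rough D3D12 sketch. The queue/fence setup is the point here, not a finished benchmark: the actual draw/dispatch workloads are left as a placeholder and error handling is stripped.

```cpp
// Minimal sketch of the "test it at runtime" idea, NOT production code:
// create one DIRECT (graphics) and one COMPUTE queue -- which is all DX12
// actually gives you, separate software queues -- submit the same work
// serially and then to both queues at once, and compare wall-clock times.
// If the "concurrent" run is no faster, the driver is most likely
// serializing the compute queue behind graphics.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <chrono>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    D3D12_COMMAND_QUEUE_DESC cmpDesc = {};
    cmpDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;

    ComPtr<ID3D12CommandQueue> gfxQueue, cmpQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));
    device->CreateCommandQueue(&cmpDesc, IID_PPV_ARGS(&cmpQueue));

    ComPtr<ID3D12Fence> gfxFence, cmpFence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&gfxFence));
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&cmpFence));
    HANDLE evt = CreateEventW(nullptr, FALSE, FALSE, nullptr);

    auto waitFor = [&](ID3D12Fence* f, UINT64 v) {
        if (f->GetCompletedValue() < v) {
            f->SetEventOnCompletion(v, evt);
            WaitForSingleObject(evt, INFINITE);
        }
    };

    // ... record real gfx + compute command lists and ExecuteCommandLists()
    // them here; the workloads themselves are elided in this sketch ...

    auto t0 = std::chrono::high_resolution_clock::now();
    gfxQueue->Signal(gfxFence.Get(), 1);  // kick both queues "at once"
    cmpQueue->Signal(cmpFence.Get(), 1);
    waitFor(gfxFence.Get(), 1);
    waitFor(cmpFence.Get(), 1);
    auto t1 = std::chrono::high_resolution_clock::now();

    printf("concurrent submission: %lld us\n",
           (long long)std::chrono::duration_cast<
               std::chrono::microseconds>(t1 - t0).count());
    CloseHandle(evt);
    return 0;
}
```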
Quote:
Do explain how you force async execution in DX12. I'm currently working on an engine built for modern APIs (DX12 included) and I have literally no idea what you're talking about.

Nvidia's AC implementation is really bad and can't be fixed in software, because it's a hardware limit. Normally you should be able to send the workloads to the GPU and it should run compute and gfx workloads at the same time. Nvidia has just one pipeline and has to switch between compute and gfx workloads, because it can't run both concurrently. That results in longer queues until the data gets worked on, longer response times, and worse performance. The upside is that this kind of architecture is really well optimized for DX11 and extremely efficient in the still-current games. But the more heavily games use async compute, the less Nvidia can optimize around it or take another route like AotS does.

Quote:
Or maybe, just maybe, because you can't force it on? You can write async code as much as you like, and if it performs badly on Nvidia they will serialize it in the driver. And that will be 100% within the specification. If you fail at optimizing for specific hardware, then the vendor is free to disable your hoop-jumping in this specific case. Notice how I wrote vendor: if the same happened with AMD, they would do the same.
So even if you can't really enforce AC (if the driver tells you it can run it, then it should, right?), if performance goes down hard because of the increased workloads in new games, the people playing the game will just turn it off. I remember news from back then that AotS performance was something like -60% with async turned on.
Don't get me wrong, I may not be a fan of Nvidia, but that's because of their behavior as a company, not their hardware. Their current gen is a beast in performance, and I haven't seen such a big increase over the previous gen in a long time.
IMHO their next gen, coming with full async compute etc., won't run as efficiently as their current gen does and might even lose a bit in DX11. Even they can't work miracles: if you build a multipurpose system, you can't optimize it as heavily anymore. That's also why GCN cards usually run better in newer (or even older) games as time goes on - they have more raw power to crunch them, even without direct optimization. And Nvidia (or any other company) won't optimize their old gens for current games, except maybe if there is some huge error.
- - - Updated - - -
With their new drivers, after the Catalyst stuff, they really improved by a huge margin at recovering when the driver hangs itself up; GPU recovery was something Nvidia was quite a bit better at for a while. As for the OpenGL implementation... yeah, AMD sucks there, especially in performance. Partly that's because they honor the spec and don't work around it in many cases like Nvidia does (including some dirty fixes). But still, AMD's OpenGL sucks. That's why they are working so hard on Vulkan; makes no sense to ride a dead horse. I guess their GL driver is a clusterfuck of shit.
Quote:
If you want proof from more rigid segments, go to a hospital, take a look at some class 3 medical equipment that does graphics, and check its GPU. You will very rarely find an AMD GPU inside (if at all! I know I've never seen one). You know why? Because people usually don't develop on it, hence it doesn't go through the FDA, and hence it can't be swapped in after you're done developing on Nvidia.

When we supported the medical clinics around my town (we had the contract for their main institute and all the others around it), I never encountered AMD or Nvidia. In most cases they all used Matrox GPUs; depending on what was running, some used ASpeed or even the Intel GPU. Some equipment like the MRT / CRT screens was closed off, of course, but I doubt they used anything from Nvidia or AMD. Even the ultrasound device with 3D rendering was using a Matrox.
Programs like Maya or AutoCAD need a certified GPU to even run at full speed. I can remember that in the past consumer cards could run those programs, but in many cases they had trouble and errors in the realtime rendering. I doubt that has changed much, because the FirePro and Quadro drivers are made specifically for those programs and are licensed for a reason. Even today those cards are way faster there than the usual consumer card.

Quote:
For the CAD itself the GPU vendor is not important, but when you run the engine to see how your glorious animation looks in it, then it can be important. However, why on earth would you need a Quadro to do artist work? That's an insane waste of money. Same for FirePro: save by buying consumer-level cards; we're not in the stone age when Quadros were vastly superior at graphics tasks. I'm pretty sure it's cheaper to license a new Maya than to buy a bunch of Quadros. Modern CADs no longer use the fixed pipeline. Not sure what reason there is to buy a Quadro nowadays except supporting old CADs, doing double-precision stuff, or deep learning, with the last two being 100% irrelevant to artists.
That was around 2014 or 2015, and purely for performance reasons; here's a SolidWorks report I saved: https://www.pugetsystems.com/labs/ar...orks-2016-751/ (we needed that for our production).

It's not only the driver: if you work in a professional environment, you won't get any support from Autodesk or the GPU vendor if you use a non-certified card or system. They just outright refuse you. We also had the idea to run SolidWorks (older versions) on consumer GPUs, in this case Nvidia, but after about 2 months we bought FirePro GPUs, because the colleagues were raging about the consumer cards and their errors. Performance was OK, but the errors and wrong renderings were not. Don't ask me for the details, I don't know them; I had to trust our 3 engineers. And don't get me wrong, this was not a manufacturer problem - we just had the FirePros because at the time they had the better price/performance ratio than Nvidia's Quadros (with those, it seems you have to pay for the name...).
CUDA has nothing to do with DX or the XBone or the PS4 (and the PS4 runs a FreeBSD-based OS, by the way). The direct competition to CUDA is OpenCL, and what I don't like is that CUDA is closed source and only works with Nvidia cards. So once you use it, you have to use Nvidia in the future, no matter what comes. And I didn't say they fucked something up, I said: what happens if they do? What if they decide to change CUDA for some new hardware so that it's no longer backwards compatible? Or stall CUDA's development and let it run out, because they have something new they can sell better, or they switch to OpenCL? If you use CUDA, you also have to use their dev tools and the rest of their stack; you can't switch to a third party's framework because it's better, or because the old one is no longer developed. I was never a fan of binding yourself to a specific company. I like open standards. But that's a personal, subjective opinion: CUDA is shit in terms of being a closed system, not in terms of performance etc.
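To make the open-standard point concrete: the exact same OpenCL host code enumerates GPUs from any vendor's driver, no Nvidia required. Just a small sketch, assuming an OpenCL runtime and headers are installed:

```cpp
// Lists every OpenCL-capable GPU in the machine, whoever built it.
// The same kernels would run on all of them -- that's the whole point
// of an open standard vs. CUDA's Nvidia-only lock-in.
#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_platform_id platforms[8];
    cl_uint numPlatforms = 0;
    clGetPlatformIDs(8, platforms, &numPlatforms);

    for (cl_uint p = 0; p < numPlatforms; ++p) {
        char vendor[256] = {0};
        clGetPlatformInfo(platforms[p], CL_PLATFORM_VENDOR,
                          sizeof(vendor), vendor, nullptr);

        cl_device_id devices[8];
        cl_uint numDevices = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU,
                           8, devices, &numDevices) != CL_SUCCESS)
            continue;  // this platform has no GPU devices

        for (cl_uint d = 0; d < numDevices; ++d) {
            char name[256] = {0};
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof(name), name, nullptr);
            printf("%s: %s\n", vendor, name);
        }
    }
    return 0;
}
```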
Quote:
And finally? Why would Nvidia fuck up CUDA? What incentive do they have? Why are you worried about CUDA being a closed standard but fine with DX? Fine with a closed platform, namely the XBone? And finally, fine with a super closed platform, namely the PS4 (you're signing an NDA, for crying out loud)? Are you not worried that Sony will fuck up the next patch and your game will be unplayable? What kind of rubbish is this?? CUDA is used in medical devices; Nvidia couldn't fuck up CUDA even if they wanted to. CUDA has been running for 8 major versions now and so far they haven't fucked up anything. In fact there's version 8 for OSX - now tell me, what Pascal card is there in Macs?
And about Macs: there is currently no Nvidia card in any Mac. Apple switched to AMD quite a while ago. The CUDA drivers are only for some older cards and Quadros (IF the Nvidia page is correct). Their current GPU driver for macOS won't even work with any 10x0 card and still has some errors with the previous gen.