Obviously DX11 took the place of DX10, but at the time 10.1 was superior, and Nvidia made sure to downplay 10.1 as much as possible.
> It's funny you bring up DX9 though: the ATI 800/850 series couldn't support DX9.0c and HDR, and ATI claimed it wasn't really needed, until Oblivion came along and you couldn't enable HDR on those cards.

The ATI cards that didn't support DX9.0c were junk. I ended up buying an X850 for $100 years ago and ended up putting it into a business PC, where it was just good enough to put out a display in Windows 7. But in ATI's defense, HDR was possible without DX9.0c; Half Life 2's HDR proves that. Also, a few games were patched by the community so that DX9.0c games could be played on DX9.0a cards.
> Is there a problem supporting FL 11 through 12_1? As if Time Spy can only utilize one and only one Feature Level. Kinda odd that Maxwell cards support FL 12, but gain almost nothing from DX12. Software emulation?

The reason Time Spy utilizes DX12 feature level 11_0 is that it is the baseline DX12 feature level that most cards support, and the one that all current DX12 games actually target. I'm sure they could enable 12_1, but then most AMD cards wouldn't be able to run the benchmark. Should I claim the benchmark is now biased toward AMD because of that?
> The thing that strikes me as odd is that DX11 also uses FL 11 as the highest level. You could give Time Spy DX11 and it would work just fine, but you wouldn't get the benefits of Async Compute. For a benchmark tool, having a DX11 option would be useful, because realistically all games will have DX11/DX12 options. I'm not the only one who thinks Time Spy is a DX11 game with DX12 features thrown at it.

You are free to consider DX12 at feature level 11_0 to be very near to DX11, but you would be wrong. The higher feature levels just add extra features on top of the DX12 base; DX12 at 11_0 already brings all the advantages that DX12 as a whole has over DX11.
But the main issue for Time Spy is its choice of pre-emption-style compute, which is less like true Async Compute and doesn't benefit AMD as much. Time Spy is just not making good use of Async Compute. But again, why not make more than one code path, to benefit both Nvidia and AMD and maximize performance?
Compute queues as a % of total run time:
Doom: 43.70%
AOTS: 90.45%
Time Spy: 21.38%
http://www.overclock.net/t/1606224/v...#post_25358335
> It is my opinion. I'm just usually right about outcomes. Feel free to tell me I'm wrong once Nvidia fixes Doom's Async problem. I'll be waiting. No I won't.

I... what? How on earth did you come to this conclusion at all? You just literally made shit up! Your opinion =/= fact.
"Dur, it's been 1 month since Doom came out and Vulkan is not out yet, Doom will never have Vulkan. It would have been done by now."
> The Ashes of Singularity devs said the same thing, but nothing so far. I'm going by history: in how many games has Nvidia "fixed" Async Compute? Is this going to be a theme now, where Nvidia owners have to wait for a fix months later? DX12/Vulkan/Async Compute aren't new things. These are standards that have been worked on for years, and Nvidia was involved in the development process. They had more than enough time to get proper working drivers for these features, just like AMD has. Is each DX12/Vulkan game that's released going to go through this?

All of those are issues the dev has stated they will fix in a future patch. How on this green Earth did that lead you to your conclusion? Christ, you are not even trying to hide your bias now.
My bias is with the consumer, and if Nvidia screwed up then they should own up to it. Time Spy does not represent how DX12 games currently work, or how they will work. Time Spy doesn't use parallel compute queues, and is a bad representation of DX12 games.
> DX12 mode, maxed in game settings?

There's an explanation of the settings used. Not everyone uses max settings in their benchmarks, and not everyone explains it. The person who made that is very precise.