I doubt it's a matter of refusal, since I've played games that span the PS3/PS4/Vita platforms (Yoru no Nai Kuni, for example). It's more that making a game for the older consoles is an extra cost when the player base on the newer console is already enough to be profitable.
Eh, fair enough. I don't really pay attention to that stuff, admittedly.

Because emulation is awesome? I have very high respect for emulation authors.
Sort of yes and no. While yes, you're using an API, it still isn't as simple as Intel/AMD/Nvidia.

As a company that makes games, this is a minor issue. For example, Fallout 4 is on PS4/XB1/PC. As a developer, you deal with two sets of hardware on the PS4/XB1 side. They're fixed hardware, but still twice the work. With PC, you deal with three sets of hardware: Intel, AMD, and Nvidia. We're not talking about a large amount of diversity here; the PC hardware industry is practically an oligopoly. And you're not coding for specific hardware, but for an API like Direct3D or OpenGL, unless the game is also ported to Linux and Mac.
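To illustrate the "coding for an API, not hardware" point, here's a minimal sketch in C. It assumes an OpenGL context and function loader (e.g. GLFW + GLEW) are already set up, and `vao`, `program`, and `vertex_count` are hypothetical handles created elsewhere; the same calls run unchanged on Intel, AMD, and Nvidia GPUs because the vendor's driver does the translation.

```c
#include <GL/glew.h>  /* assumes glewInit() was called after context creation */

/* Draw one frame. Nothing here is vendor-specific: the driver compiles
 * the shader program and maps these calls onto whatever GPU is present. */
void draw_frame(GLuint vao, GLuint program, GLsizei vertex_count)
{
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glUseProgram(program);       /* shader, compiled by the vendor's driver */
    glBindVertexArray(vao);      /* vertex data, uploaded earlier */
    glDrawArrays(GL_TRIANGLES, 0, vertex_count);  /* one call, any vendor */
}
```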
Yes, they need to test it, but if anything goes wrong with the game on a Radeon R9 270X, it's AMD's job to fix it, not the developer's. Between AMD and Nvidia, they'll do everything in their power to make sure any newly released game works perfectly.
While a layer of abstraction is nice, some things will inherently work better on certain architectures. That's what some people call the "console effect" (kind of a dumb name, but whatever): the current consoles run on GCN, and games get ported from it. For instance, in GCN each compute unit has 64 stream processors. This differs from Kepler, where each streaming multiprocessor has 192 CUDA cores; Maxwell is 128 CCs per SM, whereas Pascal is 64 CCs per SM, like GCN. GCN and the consoles take in 64-operation-wide wavefronts, so on GCN every SP is immediately occupied when a wavefront batch arrives, whereas Kepler and Maxwell would have wasted resources (see the rough sketch below). For devs the choice is to optimize for each architecture or hope Nvidia does something (Kepler, for example, no longer really performing up to par).
This is excluding hardware schedulers and other stuff that should only be exposed through the driver.
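Here's the rough sketch mentioned above: a deliberately oversimplified C program using the per-unit core counts from this post to show how one 64-wide wavefront fills (or doesn't fill) a single CU/SM. Real GPUs hide a lot of this by keeping many wavefronts/warps in flight, and Nvidia actually schedules 32-wide warps, so treat it as an illustration of the lane-count mismatch, not a performance model.

```c
#include <stdio.h>

/* Per-CU/per-SM lane counts as given in the post above. All of these are
 * at least 64 wide, so one 64-wide wavefront never overflows a unit here. */
struct arch { const char *name; int lanes_per_unit; };

int main(void)
{
    const int wavefront = 64;  /* console / GCN wavefront width */
    const struct arch archs[] = {
        { "GCN (console)",  64 },
        { "Kepler",        192 },
        { "Maxwell",       128 },
        { "Pascal",         64 },
    };
    const int n = sizeof(archs) / sizeof(archs[0]);

    for (int i = 0; i < n; i++) {
        double util = 100.0 * wavefront / archs[i].lanes_per_unit;
        printf("%-14s %3d lanes -> %5.1f%% of one unit busy from a single "
               "64-wide wavefront\n",
               archs[i].name, archs[i].lanes_per_unit, util);
    }
    return 0;
}
```

On these numbers, GCN and Pascal land at 100% while Kepler sits at 33.3% from a single wavefront, which is the mismatch the post is pointing at.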
Rip
Ah okay, misread the indie part then.

I didn't mention DX12/Vulkan for indie devs, but for any games ported to PC, so developers don't have to write 3-4 code paths to keep AMD, Nvidia, and Intel all happy. The idea is that most of the code is now in the hands of the developer, not in the driver, which has traditionally been a problem area.
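As a sketch of what "code in the hands of the developer" looks like, here's a minimal C snippet using real Vulkan calls: the application itself enumerates the GPUs and picks one, work that DX11-era drivers did behind your back. It assumes a `VkInstance` was already created with `vkCreateInstance`, and error handling is trimmed for brevity.

```c
#include <stdio.h>
#include <vulkan/vulkan.h>

/* Enumerate the physical GPUs and print them. In Vulkan the application,
 * not the driver, decides which device to use and how; this is the same
 * single code path whether the GPU is AMD, Nvidia, or Intel. */
void list_gpus(VkInstance instance)
{
    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, NULL);  /* query count */

    VkPhysicalDevice gpus[16];          /* fixed buffer, clamped below */
    if (count > 16) count = 16;
    vkEnumeratePhysicalDevices(instance, &count, gpus);

    for (uint32_t i = 0; i < count; i++) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(gpus[i], &props);
        printf("GPU %u: %s\n", i, props.deviceName);
    }
}
```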
It depends on how far into optimization you want to get. Dan Baker from Oxide, in an interview with... PCPer? AnandTech? Someone, noted that while yes, they can optimize for every architecture, it's time-consuming and expensive. And PC programming can't do what consoles do, which is go essentially to the metal. A layer of abstraction is still needed, just not as absurd a one as DX11's.
Yes, I noted HBAO+ for a reason; however, this is one of the times where HBAO+ has actually caused artifacting to that degree, and HBAO+ is no longer a huge performance hit on AMD hardware. Now it's VXAO, because reasons. Granted, it's both GameWorks and the devs that are the issue here.

And what's responsible for it? GameWorks is. The game patched around it by disabling ambient occlusion. Without any access to the code, the issue can't be fixed, not even by the developers.