Well, now at least the cards aren't performing way worse in DX12; the 1 fps difference between it and DX11 can be ignored.
Literally no Async on board. I don't even want to call this Maxwell 3.0 now. It's more like Maxwell 2.1.
So my thinking now is that Nvidia will release their own equivalent of the Crimson drivers for the Maxwell series in general, this card included. It'll enable Async at a software level, some minor gains will be had, and that'll be it. This thing is actually still lacking DX12 features; nothing has progressed or advanced outside of raw speed.
1080 analogy:
980 is a Ford Fiesta. Solid and reliable, but not very impressive.
1080 looks like a Ferrari, sounds like a Ferrari, and goes the same speed as a Ferrari. However, when you get inside, it's got a fucking Ford Fiesta interior.
That's what was missing from all these benchmarks, besides making sure the image quality on these cards hasn't degraded with the 1.7x texture compression. So we know the Async Compute issue hasn't actually been addressed with Pascal. Yet somehow this kind of benchmarking is missing from the majority of websites, while there are lots of DX12 results from Tomb Raider. Because that game isn't a mess.
https://forum.beyond3d.com/threads/d...4#post-1915300
async isn't even enabled yet lol
anyway, the 1080 is fun, but the 1070 is what everyone wants
And here I was thinking that because all those people who went to the press event got to take a 1080 and a 1070 home, the embargo date referred to both of them. Silly me.
Does anyone know when the 1070 embargo date is?
Not sure, but don't bother with the 1070 Founders Edition. Even crappier reference design... =\
http://anandtech.com/show/10336/nvid...gtx-1070-specs
However I have received confirmation that as this is a lower TDP card, it will not get the GTX 1080’s vapor chamber cooler. Instead it will use an integrated heatpipe cooler similar to what the reference GTX 980 used.
Has anyone seen any benchmarks/reviews with two 1080s in SLI? I've googled a bit and didn't see anything; I figured nobody was lucky enough to get two cards, and/or didn't know someone with a card they could borrow for SLI.
Sorry, but I always have to shake my head and /facepalm at anyone who uses 4K benchmarks as anything remotely serious when talking about the overall performance of video cards. We just aren't at a point with the technology where single-card solutions for 4K are feasible yet. Can it be done? Yes. Does that mean it's done well, or that it should be used as the standard for comparison? Hell no. Not yet.
I've read several articles and reviews that all say the resolution the GTX 1080 is really meant for, and where it can really stretch its legs, is 1440p. It actually commands respectable leads at that resolution compared to the 980, 980 Ti, and Titan X.
4K just still isn't a thing yet for single card setups. A lot of people were hoping it would be with Pascal, but we just aren't there yet.
Last edited by Zephyr Storm; 2016-05-19 at 05:22 AM.
Try upgrading from a GTX 670.
So... there seems to be something fishy going on with the 2.1 GHz OC on the 1080 during the press event.
Apparently, that was the very first time an FE got past 2.0 GHz.
I also saw something saying the volume GPUs all vary a bit in how they overclock... that's probably also why the stock clocks are quite conservative.
Maybe custom-cooled, cherry-picked versions will clock high, but you'll probably pay a lot for that. I'm guessing we'll see reference-PCB cards with custom coolers selling for less than the FE, but the high-end custom-PCB cards with better cooling will probably cost as much as, or even more than, the FE.
They also never showed the system on stage, so we don't really know what was going on. Maybe the card was running on a water cooler; maybe they overclocked it and then power-limited it with a 60 fps cap to get through the presentation without it running too hot. There are plenty of alternatives.