Implying nVidia hadn't already eaten up that portion of the market in the first place. It's not like the rhetoric going around for years hasn't been "omg too little too late AMD" regardless of what AMD have done.
You can keep telling yourself that maybe it'll work out for you at some point.
Also I never lashed out at you so good job with that, well done.
Please show me exactly where I said "DX11 does not scale".
I stated, numerous times, that DX11's draw call submission is limited by a primary CPU thread, which I've linked you a source on, and you acknowledged that post by saying "He's saying what I am!".
I think it's funny how you go from "This article is saying what I'm saying" to "This article is failing to interpret his own data!" on the same article.
I also think it's funny where you pulled that magic "I comment on how to design (for OpenGL of all things)" from, when I clearly didn't comment on HOW to design things, least of all in OpenGL, which I even stated I don't know much about beyond some of its functions.
Oxide being funded by AMD is even more hilarious... does that mean that Eidos-Montreal and Cloud Imperium Games are also funded by AMD? The developer themselves also admitted that nVidia has spent far more time with them than AMD has, well done.
Also hilarious is me supposedly misinterpreting MSDN, then going on to blame MSDN, when it clearly states what it does. So who am I to believe? A nobody, or the large conglomerate that made the tool?
Then we get to the best part... GameWorks not responsible for tessellation; I think the entire controversy around, for example, The Witcher 3 disagrees with you entirely (among other games with GameWorks controversies).
Bashing nVidia is warranted when it comes to such tactics, as they are a matter of record; I have, however, stated more times than I care to count that their cards are good.
And the fact you don't know what an Interrupt Request on a CPU cycle is .. should be concerning as a developer.
But you know what, you are right about one thing:
I do not give a fuck about dealing with someone who's acting like a little child, so this is my last post towards you as well. Enjoy your peace and quiet, and think of things as you will.
I certainly do not care what you think of me, not even for a moment; when I go to sleep tonight I shall sleep like a baby.
However, I have treated you with respect, so I expected a response in kind, but that seems to be lost on you.
That Remilia is trying another approach to explain things you seem not to understand is more than I expected of him, especially given the walls of text he has clearly read. I can only commend him for trying, but it's clearly futile.
So I shall close off with this for you:
Enjoy your own superhuman feats that no other developer seems capable of doing.
I shall enjoy seeing you lead the industry of graphics development into a new age, just be sure to use "dadev" as your nickname so I know it's you.
HAHA! ok....
That's still 14-16% of the market that AMD has no presence in at the moment, and likely won't until much later, probably 2017. Meanwhile Nvidia will likely push their 1060 release in early fall or sooner to match the RX 480. Guess we'll see; to be honest, I don't see AMD making up any ground over the next 6 months. Just my opinion.
Last edited by Bigvizz; 2016-06-22 at 11:50 AM.
Don't confuse them being out of stock with them selling a lot. It most likely just means supply is low (or is being kept artificially low to give the impression they sell a lot / to increase desirability).
That AMD slide showing most people don't go over $300 for a GPU makes a lot of sense and matches other similar sources.
- - - Updated - - -
You can probably resell it for a profit, since they are still very scarce, especially if it's still unopened.
Though I have to wonder why a person who can and is willing to spend $700-800 on a GPU is happy to buy a $200-300 one.
Last edited by mmoc982b0e8df8; 2016-06-22 at 01:34 PM.
From (extremely) early benchmarks the 480 in crossfire appears to be on par with the 1080, for almost half the price. I'll always prefer a single card solution, but if roughly the same performance can be had for that steep of a price difference then I'm paying for my impatience.
I'm short tempered, especially with people who don't ever listen. And really, for that (and only that) I am willing to say sorry.
Here:

Originally Posted by Evildeffy
Please show me exactly where I said "DX11 does not scale".
I stated, numerous times, DX11's draw call is limited by a primary CPU thread, which I've linked you and you acknowledged the post on saying "He's saying what I am!".

Originally Posted by Evildeffy
A draw call can only be utilized on a single CPU core

I can find more. Are you done playing this game? Before you ask, "serial" essentially means no multithreading.

Originally Posted by Evildeffy
If you read the primary advantages of DX12 and what they can do it is advertised that everything is now parallel capable natively and that draw call and other commands are no longer executed serially.
Originally Posted by Evildeffy
I think it's funny how you go from "This article is saying what I'm saying" to "This article is failing to interpret his own data!" on the same article.

1. Please don't misquote. An article can't fail to interpret its data; its author can.
2. I'll be more accurate if you can't read between the lines. You linked the article, I looked at the graph (because I really don't need someone to chew up the data for me) and it told the same story as I did. I pointed it out. Then you proceeded to quote the author on something random which I couldn't care less about, so I dismissed it. Now I've read it, and the answer depends on the support provided by the driver. He's absolutely wrong when it's NVidia's driver and right for AMD and Intel. This is because Microsoft said: "vendors, please implement this, but if you won't, we'll emulate". Blaming serial execution on runtime emulation is a low move. It's like saying that DX12 is a crap API because of WARP. You agree with this?
OK, explain this then:

Originally Posted by Evildeffy
I also think it's funny where you pulled that magic "I comment on how to design (for OpenGL of all things)" when I clearly didn't comment on HOW to design things and least of all in OpenGL of which I even stated I don't know too much about except some of it's functions.

Originally Posted by Evildeffy
Correct it is but when it was implemented WoW's engine was entirely overhauled, it wasn't just a patch in it was included into a new expansion, I don't remember if it was Cataclysm or MoP.

I might have taken it a little too far, but I read it as a frown: "They overhauled the entire engine and didn't put in proper DX11 support? No way man."
But yes way, and your comment has everything to do with engine design. Very few are the engines that were made for 2 or 3 vastly different APIs and were good in most respects for all intended targets. Right now I can think of only one example. Making WoW's engine dedicated to multithreading in DX11 would require a tremendous effort to achieve the same for the DX9 and OpenGL paths. And no, you cannot just say "don't enable this for DX9/OpenGL". This changes how the entire engine is built.
Originally Posted by Evildeffy
Oxide funded by AMD is even more hilarious... does that mean that Eidos-Montreal and Cloud Imperium Games are also funded by AMD? Also admitted by the own developer is that nVidia has spent far more time with them than AMD, well done.

No idea about the other companies. But vendor time spent is irrelevant. In fact, I would expect NVidia to spend more time so that they'll cover for the lost performance on async.
See this:

Originally Posted by Evildeffy
Also hilarious is the misinterpreting MSDN on my part then going on to blame MSDN when it clearly states what it does, so whom am I to believe? A nobody or a large conglomerate whom made the tool?

Originally Posted by Evildeffy
Because you're not even looking at the basics, GPUView still looks at the DirectX API thus you're still a level above where you need to be.

The logger traces DMA packets going to the kernel after they've been chewed up by virtually everyone. This is as close as you ever get to the hardware, save for a dedicated tool by NVidia/AMD. Then you proceed to quote MSDN and fail to interpret it (mainly because it wasn't saying anything of significance, but how could you know that?).

The MSDN quote you're looking for is:

Originally Posted by MSDN

Does it require more explanation, or do you now understand that the logger doesn't look at the DX API?
Originally Posted by Evildeffy
Then we get to the best part... GameWorks not responsible for tesselation, I think the entire controversy in for example Witcher 3 disagrees with you entirely. (among other games with GameWorks controversies)

Get this once and for all: artists decide on tessellation, not engine developers nor GameWorks developers. The latter groups are responsible for sensible tessellation factors, nothing more. If the art guys who made Witcher 3 decided to use more tessellation, it was their artistic decision.
I'll walk you through it once again so that you can see the difference: tessellation is a tool for artists; async, in contrast, is a tool for engine developers.
Now, what probably happened is that NVidia provides a full-blown solution for hair and crap (I'm really not into GameWorks, except for using it for SLI in VR) and, surprise surprise, it has a lot of tessellation in it. The artists, once again, the artists liked it, and magic happened.
But then again, if you have a problem with this, why not bring it up with AMD to make a better tessellator? Do you also blame AMD for not playing fair with async because Maxwell doesn't have it? Why should one vendor be excused for not supporting an artistically or technologically required feature while the other does?
Originally Posted by Evildeffy
And the fact you don't know what an Interrupt Request on a CPU cycle is .. should be concerning as a developer.

How did you come to that conclusion? Much more likely I know more about it than you do. But then again, I dismissed yet another attempt of yours to derail the discussion into something entirely unrelated to the original topic, so that your false statements can be concealed by meaningless noise. Visibility culling doesn't require interrupt requests or anything of the sort (why on earth would it?). You point to a game that didn't use a good object culling system to argue that it can't be done well on the CPU? Your logic is flawless! The current facts are that per-object culling can be done better on the CPU; deal with it. Will this change in the future? Who knows, maybe a quantum computer will finally be ready in a month and then visibility culling will forever be better on the CPU. So some marketing guy said something, and you're ready to take his word just like that?
Originally Posted by Evildeffy
However I've treated you with respect

You certainly did not! Ignoring and arguing against what you've been told for the Nth time is pure disrespect. Not spending any time researching things you don't know about and then arguing about them is again disrespect. Posting random stuff to derail the discussion is again disrespect. You've given me nothing but disrespect, so you get paid back in exactly the same coin.
Last edited by dadev; 2016-06-22 at 04:19 PM.
I had already stated that I will no longer go into this discussion, since it is beyond clear that you and I do not see things the same way.
Take that as you will, I do not care; if you wish to think of it as your victory then go right ahead.
I will however state that, regardless of how incorrectly you define respect, I accept your apology.
Turns out those 1500-1600 MHz on air claims might have been a bit presumptuous:
http://wccftech.com/radeon-rx-480-thermal-tests-leak/
We can only know for certain once reviews are in.
@Evildeffy for me the only thing that's clear is that you refuse to accept facts. It is not a matter of seeing things differently, but rather one of ignoring facts. Seeing things differently is liking a movie or not; when you say that 1+1=3 and insist on it, that is not a matter of "seeing things differently".
For instance, I completely disagree with Remilia: I think the chance that per-object visibility culling on the GPU will be more efficient is quite slim, and even if it is, the effort spent on it could've gone into something far more useful. That's a matter of seeing things differently.
So this 480 (4GB version) is going to be better than a GTX 980 without OC'ing, or what? They say it's between the GTX 980 and 980 Ti in terms of performance. Is that correct?
What is the point of the GTX 980 Ti then?
Edit: Okay, it's probably a bit higher than a Titan X.
--
Soooo, GTX 980 for 200 bucks. Sounds good to me.
Last edited by Kuntantee; 2016-06-22 at 06:46 PM.