  1. #121
    Warchief Zenny's Avatar
    10+ Year Old Account
    Join Date
    Oct 2011
    Location
    South Africa
    Posts
    2,171
    Quote Originally Posted by Evildeffy View Post
    That is fully dependent upon the optimization of each title; why is it, do you think, that f.ex. the RX 480 has a commanding lead vs. the 1060 in DOOM when Vulkan is enabled?

    Or if you want to go back a little bit in generations why an R9 390X is capable of matching and beating a 980 Ti in DOOM or Ashes of the Benchmark f.ex.?

    nVidia's low level API is comparatively very weak.
    A single example of a game that is not optimized for one platform does not make a trend. It's funny you mention Ashes, though, as the 1060 matches the 480 in DX12 performance despite having a smaller die and consuming less power.

    Not sure how that equals Nvidia being weak in a low level API.

    Edit:
    To give you a more valid comparison regarding this ... it's exactly like Intel's Kaby Lake vs. AMD's Ryzen right now.
    Overall, Ryzen outclasses Intel by a long shot when it comes to multi-threading, while in single-threading it is slightly slower.
    The same principle applies here due to the way AMD's architecture is designed; they COULDN'T mimic nVidia if they wanted to BECAUSE of the multi-threading hardware on their GPUs, which in turn excels in low level APIs.
    You do realize Nvidia has multi-threading hardware, right? That is kind of the point of GPUs? It sounds like you are stating that hardware async schedulers = multithreading, which is not true.

    Quote Originally Posted by Evildeffy View Post
    It doesn't handle DX12's low level API, it emulates it via software.
    It's not really a rough edge so much as quite a big difference; however, yet again, I did not state that nVidia is bad at all. I am simply stating architectural facts.
    Where on earth are you getting this information from? Emulates it via software?
    Or should we consider nVidia "potentially catastrophic" because VR devs really loathe working with nVidia tech?
    Where do you get this as well? You do know that most VR titles (if not all) actually run better on Nvidia than AMD? Like, the 1060 is better in VR titles than a Fury X?

    GameWorks, which has mostly been made Open Source due to the fact that every developer (barring the nVidia-devoted ones) hated their guts for it, while AMD's Open Source variants are gaining more and more traction because they were free vs. a paid black box that was literally sabotaging the competition? (this is illegal btw)
    GameWorks is a blight upon the gaming community, and luckily most developers realized this, which forced nVidia into action, making most of it Open Source.
    Properly implemented, GameWorks has lots of features that look great but are heavily demanding, but since most people just see it eating away frames they automatically think it is shit. Also, where do you find any evidence of GameWorks deliberately sabotaging the competition?

    HBAO+, HFTS, PCSS shadows, TXAA and HairWorks are all technologies that heavily impact frame rates but offer better-looking graphics. All this leads to is that User X with his mid-tier GPU complains that he can't max it out and that Nvidia is thus shit, meanwhile he is missing the point of these technologies entirely.

  2. #122
    Deleted
    I think this video has been linked in this forum before, but I will link it again here. It basically explains Nvidia and AMD architectures, their drivers, and how approaches to game coding differ between them.

  3. #123
    I've been hearing that Polaris helped AMD gain market share, but Steam seems to disagree


    http://www.dsogaming.com/news/steam-...2107-unveiled/

    Steam handles over 50K refund requests per day, hardware & software survey for April 2017 unveiled


    Valve has shared some interesting stats, revealing that it receives over 50K refund requests per day. Moreover, the team unveiled its hardware and software survey for April 2017.

    What’s really interesting here is that, out of these 50K (and in some cases 90K or even 100K) requests, only 1/5 are not being addressed (as they are awaiting a response). In other words, Valve accepts 80% of the submitted refund requests on the very same day.

    In other news, it appears that NVIDIA is increasing its dominance over AMD. The last hardware & software survey we covered was for November 2016. Since then, NVIDIA saw an increase of 2.92%, whereas AMD saw a decrease of 1.54%. Intel also saw a decrease of 1.33%.

    On the other hand, Intel saw a slight increase on the CPU side. 79.73% of the survey-takers prefer Intel’s CPUs, while 20.27% prefer AMD’s CPUs. Since November 2016, Intel saw an increase of 1.23% (and AMD saw a similar decrease).

    But what about the number of CPU cores per computer? According to the survey, 50.11% of the survey-takers own a quad-core CPU, while 44% own a dual-core CPU. Only 4.26% have a CPU with more than four cores, and 1.62% are still on a single-core CPU. To put things into perspective, quad-cores gained 2.31% (compared to November 2016) whereas dual-cores saw a 1.9% decrease. Interestingly enough, the share of owners of CPUs with more than four cores decreased by 0.24%.
    Last edited by Life-Binder; 2017-05-09 at 11:08 AM.

  4. #124
    Warchief Zenny's Avatar
    10+ Year Old Account
    Join Date
    Oct 2011
    Location
    South Africa
    Posts
    2,171
    I think part of the problem is that AMD has nothing to compete in the mid-high/top tiers. Despite the RX 480/RX 580 being competitive with/better than the 1060 series, users often hear that Nvidia is the best because of the 1070+ cards, which may very well influence their purchase despite them only buying a 1060 or lower-tier card. The terrible reference RX 480 launch cards and launch drivers didn't help matters either.

    If AMD had just launched a 4096-shader, ~1300 MHz Polaris part as a high end back when the RX 480 came out, then I would imagine RX 480 sales might have been boosted as well due to a knock-on effect. Plus, the entire myth that $350+ cards don't sell as well as $200-$250 cards is pure bullshit: the Steam surveys show 1070+1080 numbers being about the same as the 1060, and those more expensive cards have a higher margin as well.

    Related to all of this has been AMD's CPU performance pre-Ryzen; the bad rep it got also could (and likely did) influence its reputation as a GPU provider.

  5. #125
    To be fair, I don't think AMD needs the fastest and best products. They have some hits and some misses and that's good. We want competition. Even if AMD ends up only coming out with cards at around 1070 or 1080 levels of performance, it's a good thing. Even better if they can do so at lower cost, because that drags down and controls pricing on both sides. Even if it's just a hair, it's a good thing.

    If they don't allow themselves to get knocked out of the game for a few generations, they can become equals again. But they seem to allow themselves to get knocked out once every 8 to 10 years, meaning they have to crawl back in, which lets Intel and Nvidia skyrocket pricing and only trickle out performance increases.

  6. #126
    Quote Originally Posted by Evildeffy View Post
    A 512-bit bus with 7Gbps consumes 50W of power.
    An 8Gbps 384-bit bus consumes ~45-50W of power.
    GDDR5X goes faster but consumes around the exact same amount, because their calculations assume equal speed even as the data rate goes up.
    That's a rather considerable power consumption, you can calculate the rest from this point.
    Memory consumption is, again, irrelevant. Their GPUs are 20-30% more power hungry.

    Quote Originally Posted by Evildeffy View Post
    Both are cooler-constrained, they always are, but the 3rd-party cards eliminate that bottleneck; wider buses and higher IPC do not immediately equal a bottleneck.
    Bad reference PCB designs (they are not necessarily bad, but AMD cannot afford to design better ones) do equal a bottleneck. Any 3rd-party Nvidia Pascal card manufacturer can take their custom cooler, mount it on the reference PCB and get a product that will push the GPU to the limit. Doesn't work that way with AMD. Why? You need a stronger PCB to push a more power-hungry chip, plus Nvidia has a power draw limitation on all of their Pascal cards that doesn't allow power consumption to skyrocket.

    Quote Originally Posted by Evildeffy View Post
    Incorrect; AMD employs multiple hardware schedulers and pipelines that nVidia does not. If you've studied the architecture you'd know this is why nVidia cards are "weak" in low level APIs.
    This is a CONSIDERABLE power drain and heat source; if AMD designed their cards like nVidia it'd be the same, see the GTX 400/500 vs. HD 5000/6000 series.
    Everything that you have just described is found in both modern Nvidia and AMD GPUs. Yes, AMD has chosen their "wider" GPU design specifically to optimize their GPU workload for new APIs, but that doesn't mean AMD will have an advantage in all of them. And yes, Nvidia's design was actually very, very similar to AMD's back then; AMD opted for big dies later, same as Nvidia with their higher clocks.

    Quote Originally Posted by Evildeffy View Post
    Incorrect, again .. study the architectures of each.
    You would not make this statement if you knew, also if you think Vulkan is dead... well ... that's interesting, we should notify the Khronos Group members of this.
    You are a clear victim of marketing. I don't want Vulkan to be dead, but I don't see any new games. Nvidia won't do anything hardware-wise for Vulkan if it only has 1 game.

    Quote Originally Posted by Evildeffy View Post
    Incorrect, again ... study the architectures.
    If you think nVidia doesn't sacrifice anything in their "Oh so wise" way then you are clearly unaware of architectures.
    Well, so far they only sacrifice Vulkan performance. It's a pretty good tradeoff to me. And that's considering the fact that AMD needs HBM memory to make cards more powerful than the RX 480.

    Quote Originally Posted by Evildeffy View Post
    Incorrect again, because you very likely think it's probably 10W or so in total; this is not even remotely true.
    The R9 290X's 7Gbps 512-bit GDDR5 consumed 50W alone, the same VRAM that nVidia and their partners used, no difference there; that's an example.
    VRAM power consumption is not tied to the GPU.
    Well, the GTX 1080 Ti's DRAM consumes about 30-40W, and the GPU is about 310-340W under typical load. On Polaris the memory consumes more power; on Vega it's going to be the other way around, but it's still around 10% of the whole card's power consumption.

    Quote Originally Posted by Evildeffy View Post
    This is again wholly incorrect, completely dependent upon architecture design.
    Wider GPU > wider memory bus IS AMD architecture design. There is nothing else to it.
    R5 5600X | Thermalright Silver Arrow IB-E Extreme | MSI MAG B550 Tomahawk | 16GB Crucial Ballistix DDR4-3600/CL16 | MSI GTX 1070 Gaming X | Corsair RM650x | Cooler Master HAF X | Logitech G400s | DREVO Excalibur 84 | Kingston HyperX Cloud II | BenQ XL2411T + LG 24MK430H-B

  7. #127
    Where is my chicken! moremana's Avatar
    15+ Year Old Account
    Join Date
    Dec 2008
    Location
    Florida
    Posts
    3,618
    Quote Originally Posted by Thunderball View Post
    Memory consumption is, again, irrelevant. Their GPUs are 20-30% more power hungry.


    Bad reference PCB designs (they are not necessarily bad, but AMD cannot afford to design better ones) do equal a bottleneck. Any 3rd-party Nvidia Pascal card manufacturer can take their custom cooler, mount it on the reference PCB and get a product that will push the GPU to the limit. Doesn't work that way with AMD. Why? You need a stronger PCB to push a more power-hungry chip, plus Nvidia has a power draw limitation on all of their Pascal cards that doesn't allow power consumption to skyrocket.
    I didn't read through your whole argument, but you are sadly mistaken on this tidbit. AMD does not design or supply, or control the quality of, AIB boards. They supply the reference design and the AIB takes off from there. The MSI RX 580 Gaming X is a perfect example for you to gander at. That board is not that different from the MSI Gaming X 1060.

  8. #128
    The Lightbringer Evildeffy's Avatar
    15+ Year Old Account
    Join Date
    Jan 2009
    Location
    Nieuwegein, Netherlands
    Posts
    3,772
    Quote Originally Posted by Zenny View Post
    A single example of a game that is not optimized for one platform does not make a trend. It's funny you mention Ashes, though, as the 1060 matches the 480 in DX12 performance despite having a smaller die and consuming less power.

    Not sure how that equals Nvidia being weak in a low level API.
    But comparatively it is, because of the lack of hardware support to fully exploit the engine.
    Vulkan itself, which is heavily modified because it belongs to the Khronos Group to do with as they please, is still in its infancy, and so is DX12.
    Yet both have shown what is possible when optimized properly.

    I may have phrased it poorly for you, but comparatively it is true: AMD's low level API compatibility is fully done in hardware, not software, hence "weak" by comparison.

    Quote Originally Posted by Zenny View Post
    You do realize Nvidia has multi-threading hardware, right? That is kind of the point of GPUs? It sounds like you are stating that hardware async schedulers = multithreading, which is not true.

    Where on earth are you getting this information from? Emulates it via software?
    This question has been answered by Larix below your post; watch the video, it came in nicely.

    Quote Originally Posted by Zenny View Post
    Where do you get this as well? You do know that most VR titles (if not all) actually run better on Nvidia than AMD? Like, the 1060 is better in VR titles than a Fury X?

    This has been confirmed; nVidia has improved by strides, but AMD's VR capabilities ARE better and more stable. But yes, we are missing AMD high-end hardware.

    Quote Originally Posted by Zenny View Post
    Properly implemented, GameWorks has lots of features that look great but are heavily demanding, but since most people just see it eating away frames they automatically think it is shit. Also, where do you find any evidence of GameWorks deliberately sabotaging the competition?

    HBAO+, HFTS, PCSS shadows, TXAA and HairWorks are all technologies that heavily impact frame rates but offer better-looking graphics. All this leads to is that User X with his mid-tier GPU complains that he can't max it out and that Nvidia is thus shit, meanwhile he is missing the point of these technologies entirely.
    I'm not saying GameWorks cannot bring pretty things to your screen, it can.
    But it is still a blight upon the community, as it requires nVidia to have direct access to your source to implement effects that you as a developer get as a black box.
    You have no way to edit it, no way to optimize it; you just have it in there and work around it, and somehow with these features the competitor always suffers some huge impact because they work EXACTLY where its architecture is weaker, and that's "just coincidence"?

    There has been good stuff as well, but as long as most of it remains closed source it will remain a blight upon the gaming community because of the effects it causes.

    Also, regarding where: there was a YouTube cast where they interviewed 2 nVidia devs who admitted to it being specifically coded to hurt AMD as much as possible, but it looks like that was removed long ago.
    However, you can still easily find tweets from developers at large studios who loathe nVidia for their GameWorks system and how it's implemented.
    It's not done in a friendly way if you desire to use one of those technologies.

    - - - Updated - - -

    Quote Originally Posted by Thunderball View Post
    Memory consumption is, again, irrelevant. Their GPUs are 20-30% more power hungry.
    Of course, that's the nature of the beast of the design, but I was simply correcting your statement that memory power consumption on cards is irrelevant.
    It is VERY relevant, as it adds to the total overall TDP.

    Quote Originally Posted by Thunderball View Post
    Bad reference PCB designs (they are not necessarily bad, but AMD cannot afford to design better ones) do equal a bottleneck. Any 3rd-party Nvidia Pascal card manufacturer can take their custom cooler, mount it on the reference PCB and get a product that will push the GPU to the limit. Doesn't work that way with AMD. Why? You need a stronger PCB to push a more power-hungry chip, plus Nvidia has a power draw limitation on all of their Pascal cards that doesn't allow power consumption to skyrocket.
    Funny you mention this, as it shows you've not read nor seen the PCBs nor the assessments thereof.
    In terms of design and quality the AMD reference cards are better than nVidia's "Founder's Edition" cards, not because they NEED it but because they over-engineer all their cards; there are "worse" cards on the market on the AMD side and they are working and being sold as well. (PowerColor)
    nVidia has the better reference design cooler, however, and you get a snowball effect.
    "Worse" cooler + more power consumption = ZOMG THE WORLD IS BURNING! CARD IS HOT! ... What you don't realize is that whilst yes, it is louder, it is by no means "bad", as the silicon on GPUs is in general rated for higher temperatures than any CPU is, f.ex.
    AMD's limits are not constrained by power delivery on Polaris; they are constrained by the silicon limit. nVidia's design, however, puts a physical limit on how much power you can give your GPU so there is no chance of frying it; temperature-wise they don't care. It's to prevent normal users from breaking their hardware and claiming RMAs which they cannot prove were broken by users unless physical mods were made.


    Quote Originally Posted by Thunderball View Post
    Everything that you have just described is found in both modern Nvidia and AMD GPUs. Yes, AMD has chosen their "wider" GPU design specifically to optimize their GPU workload for new APIs, but that doesn't mean AMD will have an advantage in all of them. And yes, Nvidia's design was actually very, very similar to AMD's back then; AMD opted for big dies later, same as Nvidia with their higher clocks.
    Actually, again, no it is not; go read and study the architectures. nVidia does NOT have hardware they used to have in previous generations and relies on software to work it out.

    Quote Originally Posted by Thunderball View Post
    You are a clear victim of marketing. I dont want Vulkan to be dead, but I dont see any new games. Nvidia wont do anything hardware-wise for Vulkan if it only has 1 game.
    Funny you mention that, as nVidia is on the Khronos Group board for Vulkan and future optimizations.
    Vulkan is new, like DX12; it only arrived for use last year, and whilst it's true nVidia won't change for 1 game, the point lies in the fact that DX12 exposes the same low level capabilities and WILL be used by developers; they already are with new games.
    Do you believe it easy to learn a completely new way of developing when the tools you've been given are still in the new-territory stage?
    That's exactly the same as stating AMD's Ryzen CPUs are crap just because Intel's Kaby Lake CPUs are ~5% faster in gaming.
    Just because what YOU as a person can see is limited does not mean that it really is limited, but if you want another example of a highly anticipated Vulkan-capable game you need only look at Star Citizen.
    Also, any new or older Android device that officially gets Android 7 requires full Vulkan support to get certification.

    Quote Originally Posted by Thunderball View Post
    Well, so far they only sacrifice Vulkan performance. It's a pretty good tradeoff to me. And that's considering the fact that AMD needs HBM memory to make cards more powerful than the RX 480.
    You yet again fail to see that I'm referring to low level capabilities and not "JUST VULKAN!".
    I must ask you to study architectures again and the effects thereof.
    Also, as a point, AMD does not "need" HBM memory; they choose to work with it because GDDR is at its limits for both bandwidth and memory power consumption, and transitioning now means they won't have to do the work later. If they so choose, they can develop with GDDR5X as well.
    Note: GDDR5 and GDDR5X can use the same but slightly modified controllers; HBM and GDDR6 require separate controllers and a new design.

    Quote Originally Posted by Thunderball View Post
    Well, the GTX 1080 Ti's DRAM consumes about 30-40W, and the GPU is about 310-340W under typical load. On Polaris the memory consumes more power; on Vega it's going to be the other way around, but it's still around 10% of the whole card's power consumption.
    Polaris uses 256-bit 8Gbps GDDR5 and the GTX 1080 Ti uses 352-bit 11Gbps GDDR5X.
    The 1080 Ti actually uses between 45 and 50W of power for memory at full load.
    The RX 480/580 still uses the same VRAM as before and, with a 256-bit bus, "only" consumes about 25-30W of power for VRAM.
    Your numbers are off and incorrect; it's still ~20% of the entire card, which is a large budget, and this is why HBM (among other reasons) was developed.
    This is yet again a limit of the VRAM produced by Hynix/Samsung/Micron; this is out of either nVidia's or AMD's hands.
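
    For reference, the bandwidth side of this is simple arithmetic; here's a quick sketch using the bus widths and per-pin data rates quoted above (the wattage figures themselves are estimates and can't be derived from bus width alone):

        # Peak bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gbps
        def peak_bandwidth(bus_bits, gbps_per_pin):
            return bus_bits // 8 * gbps_per_pin

        print(peak_bandwidth(256, 8))   # RX 480/580:  256 GB/s
        print(peak_bandwidth(352, 11))  # GTX 1080 Ti: 484 GB/s
        print(peak_bandwidth(512, 7))   # R9 290X as quoted above: 448 GB/s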

    Quote Originally Posted by Thunderball View Post
    Wider GPU > wider memory bus IS AMD architecture design. There is nothing else to it.
    Yes, and? You stated something ENTIRELY different, which is what I answered.
    That AMD designs their GPUs with a higher bus width so they don't have to run their RAM as fast to gain as much bandwidth is their choice.
    What you stated, however, was that because of higher IPC you need more bus width, which I answered is wrong in its entirety.
    There is more than one way to reach the end goal.

  9. #129
    Warchief Zenny's Avatar
    10+ Year Old Account
    Join Date
    Oct 2011
    Location
    South Africa
    Posts
    2,171
    Quote Originally Posted by Evildeffy View Post
    But comparatively it is, because of the lack of hardware support to fully exploit the engine.
    Vulkan itself, which is heavily modified because it belongs to the Khronos Group to do with as they please, is still in its infancy, and so is DX12.
    Yet both have shown what is possible when optimized properly.

    I may have phrased it poorly for you, but comparatively it is true: AMD's low level API compatibility is fully done in hardware, not software, hence "weak" by comparison.
    Pascal is fully DX12 feature compliant in hardware. You are free to post some evidence refuting this of course.

    This question has been answered by Larix below your post; watch the video, it came in nicely.
    This video has very few sources and is filled with suppositions and assumptions throughout. He links the CPU utilization of 3 games as proof, yet when I look at the youtuber that does those comparison videos I can find a massive list of games in which the exact opposite occurs: AMD having higher CPU utilization than Nvidia. Guess AMD is doing software emulation as well then?


    This has been confirmed; nVidia has improved by strides, but AMD's VR capabilities ARE better and more stable. But yes, we are missing AMD high-end hardware.
    Yet real-world benchmarks of multiple different games and engines prove the exact opposite:

    https://www.pcper.com/reviews/Graphi...060-and-RX-480

    Our first results, though not covering nearly as many games as I would like at this point, show the GeForce GTX 1060 6GB card coming out ahead of the Radeon RX 480 8GB. Chronos, Dirt Rally, and Obduction on the Oculus Rift all indicate significant performance advantages for the NVIDIA card in this price range, despite the claims of VR dominance by the Radeon marketing and branding teams

    https://www.hardocp.com/reviews/vr/1//0

    Several games/apps with the Geforce 1060 winning throughout. Some choice quotes:

    Moving to our mid-level GPUs, the RX 480 and GTX 1060, I did not have much faith that we would have a good experience. Keep in mind that we have only turned on "Flashlight Shadows." I did this because it very much increases the immersion and ambiance of the game. While on the "edge," the GTX 1060 pulled through and delivered the same gaming performance that our high end GPUs did. The GTX 1060 actually outperformed the R9 Fury X in terms of GPU Render Time and Reprojection frames. The AMD Radeon RX 480 delivered an experience that put us in Reprojection for 99.42% of the time. When you are trying to line up headshots in The Brookhaven Experiment you simply do not want to be dealing with Reprojection and the frame judder that comes with it.


    The second group of GPUs down the scale in Robot Repair performance include both the GTX 1060 and R9 Fury X. In terms of GPU Render Time, the Fury X is a bit faster in this game, but overall, both provide very close to the same experience. If you spend some time truly evaluating what is going on while you are playing this, I think the Fury X provided a bit better of an experience, as it was not falling in and out of Reprojection during the first half of the game. The GTX 1060 was constantly bumping up close enough to or over our 11.1ms mark that Reprojection was being turned, then off, then on, then off, etc. This is noticeable while gaming in this title. This is the first game I have been sensitive to this in, but assuredly I did not like it. Again, not a "dealbreaker," but worthy of notation and still better than being in Reprojection "all the time."

    Lastly we get to the "Radeon™ RX 480 set to drive premium VR experiences into the hands of millions of consumers..." The RX 480 "premium experience" showed us the worst GPU Render Times and the worst percentage of Reprojection (99.48%). I would suggest that if you are looking for a premium frame judder experience, the RX 480 fits the bill.


    The leaderboard is pretty damning:



    Catastrophic for Nvidia indeed.

  10. #130
    The GTX 1060 actually outperformed the R9 Fury X in terms of GPU Render Time and Reprojection frames.
    kek


    Fury X really hurt them, they're still reeling from that

  11. #131
    The Lightbringer Evildeffy's Avatar
    15+ Year Old Account
    Join Date
    Jan 2009
    Location
    Nieuwegein, Netherlands
    Posts
    3,772
    Quote Originally Posted by Zenny View Post
    Pascal is fully DX12 feature compliant in hardware. You are free to post some evidence refuting this of course.
    It is NOT fully compliant with the DX12 feature level in hardware, but via a mix of hardware and software; you need only search for it on this forum for more detailed info as to why, as it's been said multiple times.
    You are free to ignore it and not search for it as well, of course; this won't change that fact.

    Edit:
    Just as a free example: we're still waiting on Async Compute to become available in nVidia's drivers!

    Quote Originally Posted by Zenny View Post
    This video has very few sources and is filled with suppositions and assumptions throughout. He links the CPU utilization of 3 games as proof, yet when I look at the youtuber that does those comparison videos I can find a massive list of games in which the exact opposite occurs: AMD having higher CPU utilization than Nvidia. Guess AMD is doing software emulation as well then?
    No, AMD's DX11 driver has higher overhead as well.
    Also answered by the same video: the basic principles are correct, and if you want a 100% honest explanation straight from AMD's or nVidia's mouth then you're going to be kept waiting for a VERY, VERY long time.

    Though the basics can be gotten from AMD's Mantle/DX12 explanation, which isn't disputed by anyone; but in the end the developer of the game has to be the one to use the feature, and if they don't ... well yeah, you're screwed.

    Quote Originally Posted by Zenny View Post
    Yet real-world benchmarks of multiple different games and engines prove the exact opposite:

    <snipped the rest for huge ass quote>

    Catastrophic for Nvidia indeed.
    In this specific case I do find it very surprising that the Fury X is getting beaten down as much as it has, since in Tom's Hardware's VR test a long while back (I dunno who posted it, but I'm sure we'll get it posted here again) even a lower-series R9 380 card was beating a GTX 980 Ti.
    Not in FPS, as FPS is mostly irrelevant in VR, but in consistent frame times etc., so you don't throw up your lunch when playing for an extended time.

    I've not done any research into VR because it's not a topic that really interests me; like 3D movies in the cinema or at home, I find it boring and it ruins the movie/experience for me.
    This may change when real games for them hit the market, like VR Star Citizen, but I honestly doubt it.

    Regardless, the point stands: hardware-wise, context switching should be superior for AMD and worse for nVidia because of the lack of the same hardware that is exposed in other low level APIs, but if nVidia found a way around it with their usual software emulation and made it work properly ... kudos to them.

    It does not change anything I've stated before however.

  12. #132
    Warchief Zenny's Avatar
    10+ Year Old Account
    Join Date
    Oct 2011
    Location
    South Africa
    Posts
    2,171
    Quote Originally Posted by Evildeffy View Post
    It is NOT fully compliant with the DX12 feature level in hardware, but via a mix of hardware and software; you need only search for it on this forum for more detailed info as to why, as it's been said multiple times.
    You are free to ignore it and not search for it as well, of course; this won't change that fact.

    Edit:
    Just as a free example: we're still waiting on Async Compute to become available in nVidia's drivers!
    Nvidia is fully compliant with the DX12 spec in hardware; I can find absolutely no credible evidence to the contrary. As for Async Compute, are you for real? Maxwell does not have it, to be sure, but Pascal can do it. Time Spy, Sniper Elite 4, Ashes of the Singularity and Gears of War 4 all show gains on Pascal cards when Async is enabled, though not on Maxwell.

    Edit:

    If you need some evidence:

    Sniper Elite 4 Async off:

    http://gamegpu.com/images/stories/Te...e4_1920_12.png

    Sniper Elite 4 Async on:

    http://gamegpu.com/images/stories/Te.../se4_1920a.png

    Gears of War 4 Async off:

    http://imageshack.com/a/img922/4439/p10vCj.png

    Gears of War 4 Async on:

    http://imagizer.imageshack.us/a/img924/1162/lOX3fY.png

    TimeSpy Async Off vs On:

    http://images.anandtech.com/graphs/graph10486/82854.png
    Last edited by Zenny; 2017-05-09 at 03:15 PM.

  13. #133
    The Lightbringer Evildeffy's Avatar
    15+ Year Old Account
    Join Date
    Jan 2009
    Location
    Nieuwegein, Netherlands
    Posts
    3,772
    Quote Originally Posted by Zenny View Post
    Nvidia is fully compliant with the DX12 spec in hardware; I can find absolutely no credible evidence to the contrary. As for Async Compute, are you for real? Maxwell does not have it, to be sure, but Pascal can do it. Time Spy, Sniper Elite 4, Ashes of the Singularity and Gears of War 4 all show gains on Pascal cards when Async is enabled, though not on Maxwell.
    There is very little to almost no hardware difference between Maxwell and Pascal.
    The entire design is the same; the only difference is that whilst Maxwell could not stop mid-execution to prioritize a packet (whatever that may be), Pascal's firmware can, and this leads to the marginal improvement when "Async Compute" is enabled.
    That's about the only difference, and the funny thing is that if nVidia wanted to they could implement this feature on Maxwell as well with a firmware update, because that's all that's changed.

    The rest of "Async Compute" is fully handled by nVidia's drivers.

    Again:
    I'm not stating the nVidia cards are bad, far from it.
    I'm saying that each architecture is compliant in different ways.

    Simple fact of the matter right now is that AMD cards are short-changed of their abilities due to a development cycle that has been going on for a long time.
    nVidia capitalized on the API present and designed around what was available, which allows them to do less for more.
    AMD banked on, and unfortunately failed to push through, a low level API like the one present on consoles, which is why they brought in Mantle.
    Mantle in turn gave rise to DX12, Vulkan, VR low level handling and what MS is planning for unification between Xbox and Windows.

    This leads to the point of "Fine Wine Technology" as AMD's drivers and game devs optimize for "multi-threading" more often now that low level APIs are available.
    It creates the appearance and effect of AMD cards ageing better whilst in turn the horsepower has always been there, simply untapped.

    If you have multi-threading hardware you do need software to fully utilize it and it is more difficult to learn than single-threading development.
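
    As a loose CPU-side analogy only (nothing GPU-specific, just illustrating that parallel hardware sits idle unless the software explicitly splits the work across it):

        from concurrent.futures import ProcessPoolExecutor

        def heavy(n):
            # stand-in for an expensive, independent chunk of work
            return sum(i * i for i in range(n))

        if __name__ == "__main__":
            chunks = [2_000_000] * 8
            # Serial: one core does all the work while the others idle.
            serial = [heavy(n) for n in chunks]
            # Parallel: the same work spread across cores, but only because
            # the code was explicitly written to hand it out.
            with ProcessPoolExecutor() as pool:
                parallel = list(pool.map(heavy, chunks))
            assert serial == parallel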

    To be clear here:
    I do not care which brand I pick, I pick whichever is the most powerful at the time, but if the new low level APIs are to break through I keep preliminary results in mind here as well.
    I simply do not like misinformation, however, and tend to correct what is portrayed wrongly, such as "nVidia's PCB designs are better!" when engineering-wise they are not, not even by a long shot; that it is smart of nVidia to work in the way they have... yes, that is true.
    But since nVidia's power efficiency comes from having less hardware for said "multi-threading", they will have to either follow up on said technology and lose their power efficiency in the process, or remain as is and let software handle the rest.
    Both are options and both can work, but software has limitations as to what can and will be supported in the future where hardware will work regardless.

  14. #134
    Old God Vash The Stampede's Avatar
    10+ Year Old Account
    Join Date
    Sep 2010
    Location
    Better part of NJ
    Posts
    10,939
    Quote Originally Posted by Yggdrasil View Post
    If they don't allow themselves to get knocked out of the game for a few generations, they can become equals again. But they seem to allow themselves to get knocked out once every 8 to 10 years, meaning they have to crawl back in, which lets Intel and Nvidia skyrocket pricing and only trickle out performance increases.
    Intel and Nvidia are two different beasts that AMD tackles. In Intel's case, AMD screwed up big because they had to live with the Bulldozer design for 5 years. Intel did this with their Pentium 4, but they somehow made it work. AMD gave up, said fuck it, and left it alone. They literally did a 180 and walked towards consoles.

    Nvidia though is different, as AMD has proven they do produce better hardware. Even the RX 480 is better than the 1060 when it comes to DX12/Vulkan games. But the problem is that most people don't understand graphics very well. Ask Joe Sixpack what's better and he'll probably go by VRAM. But nowadays they either go by benchmarks or by friends who also go by benchmarks. And Nvidia's first impression is better than AMD's.

  15. #135
    as AMD has proven they do produce better hardware.
    no

    less power-efficient hardware for sure; perf is on a game-by-game basis



    AMD GPU problems are:

    marketing
    availability at launch, especially custom cards with jacked-up prices
    rebrands
    disasters in the high-end for several years in a row
    putting too much tech into future maybes and missing out on the present

  16. #136
    Quote Originally Posted by Life-Binder View Post
    no

    less power-efficient hardware for sure; perf is on a game-by-game basis



    AMD GPU problems are:

    marketing
    availability at launch, especially custom cards with jacked-up prices
    rebrands
    disasters in the high-end for several years in a row
    putting too much tech into future maybes and missing out on the present
    But yes, they have. In pure computational power, AMD wins, hands down. Now, that pure power does not translate to game performance at all unless devs use it, which they don't. It is a fact that they have more raw power though.
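
    For what it's worth, here's a rough back-of-envelope FP32 throughput comparison using the public shader counts and boost clocks; these are peak numbers only, which is exactly why they don't translate directly into frame rates:

        # Theoretical FP32 throughput: shaders * 2 ops per clock (FMA) * boost clock in GHz
        def tflops(shaders, boost_ghz):
            return shaders * 2 * boost_ghz / 1000

        print(f"RX 480:   {tflops(2304, 1.266):.2f} TFLOPS")  # ~5.83
        print(f"GTX 1060: {tflops(1280, 1.708):.2f} TFLOPS")  # ~4.37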

  17. #137
    http://www.investopedia.com/news/nvi...#ixzz4gb1qP9kg
    Nvidia Revenue Will Be Boosted By Nintendo Switch Success (NTDOY, NVDA)

    It’s not only Nintendo (NTDOY) that is making money off of its new game console; graphics chip maker Nvidia Corp. is as well.

    That’s according to RBC Capital Markets analyst Mitch Steves who said this week Nvidia could make $300 million to $400 million in its fiscal year 2018 all because of the Switch game console. In a research note to clients covered by Yahoo Finance, the analyst said Nintendo will double production of the Switch console to 16 million units from 8 million units, creating a situation when Nvidia earns even more off of the game console.

    “We think the incremental 6-8M units could add $300-400M to the top line (3-4% growth to annual revenue on a $50 ASP),” wrote RBC Capital Markets analyst Mitch Steves according to Yahoo Finance. “This is a notable metric given that the Wii U sold ~13.5M units since its release in 2012 and 10M+ in the first 12 months are unlikely reflected in current estimates.” Nvidia makes a customized Tegra X1 chip for the Switch device. (See more: Nintendo Sold Close to a Million Switch Systems in March)

    I wonder how much AMD makes from the Xbone & PS4 ?

  18. #138
    Quote Originally Posted by moremana View Post
    I didn't read through your whole argument, but you are sadly mistaken on this tidbit. AMD does not design or supply, or control the quality of, AIB boards. They supply the reference design and the AIB takes off from there. The MSI RX 580 Gaming X is a perfect example for you to gander at. That board is not that different from the MSI Gaming X 1060.
    I'm not talking about partner board vs partner board, I'm talking about reference vs reference.

    Quote Originally Posted by Evildeffy View Post
    Funny you mention this, as it shows you've not read nor seen the PCBs nor the assessments thereof.
    In terms of design and quality the AMD reference cards are better than nVidia's "Founder's Edition" cards, not because they NEED it but because they over-engineer all their cards; there are "worse" cards on the market on the AMD side and they are working and being sold as well. (PowerColor)
    Funny that you bring up PowerColor; those things can literally artifact at stock clocks. Do not buy those, it's a steaming pile of shit. I'm not praising Pascal FE cards, but those work and can even overclock without overheating (they are fucking loud though). AMD reference designs are just as loud, run 20-30C hotter, and forget about overclocking those.

    Quote Originally Posted by Evildeffy View Post
    nVidia has the better reference design cooler, however, and you get a snowball effect.
    "Worse" cooler + more power consumption = ZOMG THE WORLD IS BURNING! CARD IS HOT! ... What you don't realize is that whilst yes, it is louder, it is by no means "bad", as the silicon on GPUs is in general rated for higher temperatures than any CPU is, f.ex.
    AMD's limits are not constrained by power delivery on Polaris; they are constrained by the silicon limit. nVidia's design, however, puts a physical limit on how much power you can give your GPU so there is no chance of frying it; temperature-wise they don't care. It's to prevent normal users from breaking their hardware and claiming RMAs which they cannot prove were broken by users unless physical mods were made.
    Pascal is limited to 1.093V but can actually go up to 1.2-1.3V, though that won't yield you much higher clocks. You can easily mod the power limit without voiding your warranty, but that's pretty pointless unless you run LN2.

    AMD's limit is memory bandwidth; their GPU is wide enough in the RX 480. You don't need any proof that the RX 480 reference was bad, just try to buy an RX 580 reference.

    Quote Originally Posted by Evildeffy View Post
    Actually, again, no it is not; go read and study the architectures. nVidia does NOT have hardware they used to have in previous generations and relies on software to work it out.
    I guess it's my CPU that runs DX12 games on my computer. Doing pretty good as far as I can tell.

    Quote Originally Posted by Evildeffy View Post
    Funny you mention that, as nVidia is on the Khronos Group board for Vulkan and future optimizations.
    Vulkan is new, like DX12; it only arrived for use last year, and whilst it's true nVidia won't change for 1 game, the point lies in the fact that DX12 exposes the same low level capabilities and WILL be used by developers; they already are with new games.
    Do you believe it easy to learn a completely new way of developing when the tools you've been given are still in the new-territory stage?
    That's exactly the same as stating AMD's Ryzen CPUs are crap just because Intel's Kaby Lake CPUs are ~5% faster in gaming.
    Just because what YOU as a person can see is limited does not mean that it really is limited, but if you want another example of a highly anticipated Vulkan-capable game you need only look at Star Citizen.
    Also, any new or older Android device that officially gets Android 7 requires full Vulkan support to get certification.
    Star Citizen is another No Man's Sky. Also, funny that you talk about hardware optimization for new APIs: both Apple Fusion and Qualcomm Snapdragon SoCs (existing or upcoming) don't have any hardware Vulkan support, and that includes the processors that power Google devices.

    Quote Originally Posted by Evildeffy View Post
    You yet again fail to see that I'm referring to low level capabilities and not "JUST VULKAN!".
    I must ask you to study architectures again and the effects thereof.
    Also, as a point, AMD does not "need" HBM memory; they choose to work with it because GDDR is at its limits for both bandwidth and memory power consumption, and transitioning now means they won't have to do the work later. If they so choose, they can develop with GDDR5X as well.
    Note: GDDR5 and GDDR5X can use the same but slightly modified controllers; HBM and GDDR6 require separate controllers and a new design.
    Yet Nvidia is faster in DX12, but not in Vulkan. Memory power consumption is MINIMAL these days; a one-phase memory VRM is the norm for budget cards and is nothing to be worried about. It could go up 3 times and it still wouldn't be a problem for PCB makers. Using HBM memory for any mainstream-targeted products will mean those products flop, either from exorbitant pricing or from not being available in sufficient quantities.

    Quote Originally Posted by Evildeffy View Post
    Polaris uses 256-bit 8Gbps GDDR5 and the GTX 1080 Ti uses 352-bit 11Gbps GDDR5X.
    The 1080 Ti actually uses between 45 and 50W of power for memory at full load.
    The RX 480/580 still uses the same VRAM as before and, with a 256-bit bus, "only" consumes about 25-30W of power for VRAM.
    Your numbers are off and incorrect; it's still ~20% of the entire card, which is a large budget, and this is why HBM (among other reasons) was developed.
    This is yet again a limit of the VRAM produced by Hynix/Samsung/Micron; this is out of either nVidia's or AMD's hands.
    1) The 1080 Ti is over 2 times faster than the RX 580.
    2) How is 20-30A @ 1.35V = 40-50W? (See the quick calc below.)
    3) Again, it could be 150W and that wouldn't be a significant problem. Yes, cold plates that cover the memory chips as well as the GPU would be a must, but that's about it.
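
    Quick calc for point 2, assuming the 1.35 V GDDR5 rail and the 20-30 A range above:

        # P = V * I for the memory rail
        for amps in (20, 30):
            print(f"{amps} A x 1.35 V = {amps * 1.35:.1f} W")  # prints 27.0 W and 40.5 W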

    Quote Originally Posted by Evildeffy View Post
    Yes, and? You stated something ENTIRELY different, which is what I answered.
    That AMD designs their GPUs with a higher bus width so they don't have to run their RAM as fast to gain as much bandwidth is their choice.
    What you stated, however, was that because of higher IPC you need more bus width, which I answered is wrong in its entirety.
    There is more than one way to reach the end goal.
    What are you talking about? Memory clocks are always very similar; the only difference is how far the board partner pushes their factory overclock. They need a wider bus because their GPU is wider. There are only two ways around a memory bandwidth limit: 1) change the memory type, 2) widen the memory bus.
    R5 5600X | Thermalright Silver Arrow IB-E Extreme | MSI MAG B550 Tomahawk | 16GB Crucial Ballistix DDR4-3600/CL16 | MSI GTX 1070 Gaming X | Corsair RM650x | Cooler Master HAF X | Logitech G400s | DREVO Excalibur 84 | Kingston HyperX Cloud II | BenQ XL2411T + LG 24MK430H-B

  19. #139
    Quote Originally Posted by Denpepe View Post
    At least it is clear now why Nvidia went with GDDR memory and not HBM for the GTX 10xx series
    Because all HBM does is make the product more expensive and delay the release of products? This is one of the most idiotic things AMD has done in the past few years.

  20. #140
    Quote Originally Posted by Life-Binder View Post
    http://www.investopedia.com/news/nvi...#ixzz4gb1qP9kg

    I wonder how much AMD makes from the Xbone & PS4 ?
    Well it's quite different from Nvidia for sure. Nvidia offers their own solutions while AMD designs them based on partner requests.
    R5 5600X | Thermalright Silver Arrow IB-E Extreme | MSI MAG B550 Tomahawk | 16GB Crucial Ballistix DDR4-3600/CL16 | MSI GTX 1070 Gaming X | Corsair RM650x | Cooler Master HAF X | Logitech G400s | DREVO Excalibur 84 | Kingston HyperX Cloud II | BenQ XL2411T + LG 24MK430H-B
