  1. #421
    Quote Originally Posted by Life-Binder View Post
    they are as good as they have ever been ... and still better than AMD's

    there was one major bug recently where it lowered Pascal's memory frequency, and it was hotfixed within a day or two
    Oh, how many times have I read about that ...
    Even in the past, the AMD and NVIDIA drivers were both OK, and both had problems. The main problem for AMD was that their control center was bad. For most people, it just wasn't as nice looking as NVIDIA's. And NVIDIA put more effort into their game profiles than AMD did, so AMD was slower there, but that changed with their new Crimson driver suite.

    I've heard about bad AMD drivers so many times, but when asked for an example, no one could ever deliver - they "just read it somewhere". I use AMD and NVIDIA both privately and at work, and I've never had real problems with either, and neither have my colleagues.

    As a matter of fact, AMD cards age better than Nvidia's, because with each new generation Nvidia drops support for the older cards quite fast; that's why a 290 goes way beyond a 7XX Ti in many benchmarks. And this has happened before. AMD is not perfect, but neither is Nvidia.
    "Who am I? I am Susan Ivanova, Commander, daughter of Andrej and Sophie Ivanov. I am the right hand of vengeance and the boot that is gonna kick your sorry ass all the way back to Earth, sweetheart. I am death incarnate and the last living thing that you are ever going to see. God sent me." - Susan Ivanova, Between the Darkness and the Light, Babylon 5

    "Only one human captain ever survived a battle with a Minbari fleet. He is behind me! You are in front of me! If you value your lives - be somewhere else!" - Delenn, Severed Dreams, Babylon 5

  2. #422
    The Lightbringer Evildeffy
    15+ Year Old Account
    Join Date
    Jan 2009
    Location
    Nieuwegein, Netherlands
    Posts
    3,772
    Quote Originally Posted by Fascinate View Post
    Umm, we already know the IPC here, my dude. What I am saying is that if we can trust the info AMD has given us, they have Intel beat on IPC, TDP, and, I would assume, price. I just don't see AMD coming from nowhere and taking every category that matters; that's why I say Intel will still have an overall lead with clock speed, even though the Broadwell-E chips aren't the best clockers.
    They have done so before and can do so again.

    You know absolutely nothing in regard to whether or not they'll beat Intel on clock speed.
    Approaching things conservatively is one thing; approaching it as "Intel has won already" is quite another.

    It's funny you should mention that Broadwell-E chips aren't the best clockers... maybe we should take a look at ANY -E series chip and compare it to the consumer variants.
    Well ... whaddya know, strikingly enough they all clock lower than their consumer counterparts.

    I wonder why that is... I'm sure it has nothing to do with the difference in chip complexity! (sarcasm, just in case you didn't get that)

    You don't know how they will clock, period, and claiming that you do is pure fanboyism, and it irritates the fuck out of me.
    It may be due to me having a bad day, but in all honesty, it's likely not.

  3. #423
    I have no brand loyalty; if AMD can come out with a 4-core, 8-thread chip that competes with the 7700K for less money, I'm on board. The first two PCs I built had AMD chips because they were better value for the money, but that hasn't been the case for a looong time. Just saying, with AMD's recent history it's been fail after fail. These new Zens are looking pretty good, but do not forget, guys: the only "information" we have is from AMD themselves. Do not take any of it as fact until a guy like Tom Logan gets his hands on one of these and we see what they are actually capable of.

    Eh, Evil, a good 6800K hits 4.3 GHz; a good 5820K hits 4.7 GHz. That's why I said the Broadwell-E chips aren't the best clockers.
    Last edited by Fascinate; 2016-12-21 at 07:40 PM.

  4. #424
    Pit Lord
    10+ Year Old Account
    Join Date
    Sep 2013
    Location
    United States
    Posts
    2,471
    Quote Originally Posted by Fascinate View Post
    I have no brand loyalty; if AMD can come out with a 4-core, 8-thread chip that competes with the 7700K for less money, I'm on board.
    So then what is the point of the whole "doesn't matter, it won't beat Intel in such-and-such because I said so" attitude? That surely doesn't sound like an unbiased opinion.

    Quote Originally Posted by Fascinate View Post
    These new Zens are looking pretty good, but do not forget, guys: the only "information" we have is from AMD themselves.
    We know. We're not the ones passing assumptions off as facts.
    | Fractal Design Define R5 White | Intel i7-4790K CPU | Corsair H100i Cooler | 16GB G.Skill Ripjaws X 1600MHz |
    | MSI Gaming 6G GTX 980ti | Samsung 850 Pro 256GB SSD | Seagate Barracuda 1TB HDD | Seagate Barracuda 3TB HDD |

  5. #425
    Quote Originally Posted by dadev View Post
    As for driver stability, I don't know much about testing new games with new AMD drivers, and I assume that works just fine (at least most of the time?).
    However, developing stuff on PC with AMD hardware can be a pain sometimes. What that means is: when something in my code is not stable, in many cases I end up switching to NVidia hardware for debugging (unless it's an AMD-specific issue, of course), simply because it's more robust with regard to errors or just incorrect usage. Funnily enough, most of the time I work on AMD hardware (besides consoles, obviously), because in my experience you're more likely to encounter problems on it than on NVidia, and because most people (especially artists!) never work on AMD hardware if they have the option, so someone has to do it.
    What the fuck are you developing? I mean, if you program a shader in assembly by yourself there might be some kind of per-card error, but usually you just use DX or Vulkan/OpenGL - where do you even have direct access to the hardware to fuck something up? I've never had a framework where AMD or Nvidia fucked something up - at least not when I did nothing wrong. If you use something like Unreal Engine 4, you have to code AI and such, but you can more or less paint the darn world. I know room planners that are harder to use than UE4.

    And what about the artists? I call BS on that. Really. An artist (like, 2D drawing) uses whatever he wants. They get wet over the new Surface thing you can paint on, which has an Intel GPU. 3D rendering might be something different, but those are not artists, more like engineers. And there it depends, because both Nvidia and AMD are OK to use. We had AMD's FirePro (GL?!) and Nvidia's Quadro here - if the CAD program can use it, it doesn't really matter which you use. Both are fine. Well, AMD is a bit cheaper with more or less the same performance.

    The main advantage Nvidia really has over AMD in the professional sector is CUDA, and that their OpenGL is faster. That's it. And CUDA is shit, because you're not using an open standard but something proprietary that can fuck up your company if Nvidia ever changes the code or how the cards work. Or if they develop a CUDA 2.0 without backward compatibility.
    "Who am I? I am Susan Ivanova, Commander, daughter of Andrej and Sophie Ivanov. I am the right hand of vengeance and the boot that is gonna kick your sorry ass all the way back to Earth, sweetheart. I am death incarnate and the last living thing that you are ever going to see. God sent me." - Susan Ivanova, Between the Darkness and the Light, Babylon 5

    "Only one human captain ever survived a battle with a Minbari fleet. He is behind me! You are in front of me! If you value your lives - be somewhere else!" - Delenn, Severed Dreams, Babylon 5

  6. #426
    You guys really have an issue with the way I word my sentences, don't you? lol. Of course I don't know, but I am telling you right now Zen will NOT be a clean sweep over Broadwell-E. It will fall down in some area; we just don't have enough information right now as to where. I am just saying, in this thread, my best guess is clock speeds.

    I also don't 100% believe Zen has higher IPC than Broadwell. I think they are close, but the demos they had at the press conference could absolutely have been tweaked to favor its architecture one way or another.

  7. #427
    Quote Originally Posted by Fascinate View Post
    You guys really have an issue with the way I word my sentences, don't you? lol. Of course I don't know, but I am telling you right now Zen will NOT be a clean sweep over Broadwell-E. It will fall down in some area; we just don't have enough information right now as to where. I am just saying, in this thread, my best guess is clock speeds.

    I also don't 100% believe Zen has higher IPC than Broadwell. I think they are close, but the demos they had at the press conference could absolutely have been tweaked to favor its architecture one way or another.
    You say you don't know, then go on to tell us right now how it will be? You contradict yourself. All you would have to do to make that not sound like a statement of fact is say:

    Of course I don't know, but I am telling you right now I do not THINK Zen will be a clean sweep.

    See how one little word changes your statement from a fact to an opinion? Yes, of course we all have a problem with you stating as fact things you have no possible way of knowing.

  8. #428
    Pit Lord
    10+ Year Old Account
    Join Date
    Sep 2013
    Location
    United States
    Posts
    2,471
    Quote Originally Posted by Fascinate View Post
    You guys really have an issue with the way I word my sentences, don't you? lol. Of course I don't know, but I am telling you right now Zen will NOT be a clean sweep over Broadwell-E. It will fall down in some area; we just don't have enough information right now as to where. I am just saying, in this thread, my best guess is clock speeds.

    I also don't 100% believe Zen has higher IPC than Broadwell. I think they are close, but the demos they had at the press conference could absolutely have been tweaked to favor its architecture one way or another.
    Alright, I give up. You can't tell someone that dense why they're wrong.

    Meanwhile, I'll be holding out for the facts before making any assumptions.

  9. #429
    Quote Originally Posted by Evildeffy View Post
    nVidia actually claimed it could parallelize as well as the AMD cards can by fixing Async Compute in the driver to do so, and get the same performance.
    That's what he was referring to. Obviously we know that nVidia's hardware is incapable of the same features, due to not having hardware schedulers and having a different pipeline construction, but that didn't stop them from claiming it could, as damage control.

    As far as GCN1 cards go ... a very recent update serialized the 79X0/R9 280(X) in the same manner, because of newer architecture differences and them having to produce different drivers. So yes, they changed it, but the existing drivers which allowed it still show the performance gains from before the change, so a lot of users are keeping those older drivers for exactly that reason.
    GCN1 cards will have async again, but the fix will take longer than expected, so for the time being it was disabled. But you can just use the old drivers.

    Nvidia's hardware can't do async in the sense of multiple workloads on the shaders without switching. They just optimized the pipelining with the current gen, but that's more of a band-aid. That's why the driver disables async in some games; I forget which ones. If Nvidia had to do real async compute on their current gen, the 1080 would break down badly. But those cards have enough raw power to brute-force their way through, I'm sure of it, at least up to 60 fps. Well, if there were a game built from the start for DX12 and every component of it, like async compute, I think Nvidia's current gen would run really badly on it. And they know it; that's why the new gen is said to have a major hardware overhaul.

    IMHO we will never see a game with async compute that forces it on Nvidia cards. They will always build in a fallback, because Nvidia would run really badly on it. And they have quite the market share.
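
    To make that concrete (a minimal D3D12 sketch, my own illustration with made-up names, not anyone's shipping code): at the API level, "async compute" is nothing more than a second command queue of type COMPUTE next to the usual DIRECT queue. Nothing in the API forces the GPU to overlap the two, which is exactly why it can't be "forced" on any vendor's cards.

    Code:
    // Minimal sketch: async compute = a second command queue. Whether the two
    // queues actually overlap on the GPU is entirely up to hardware + driver;
    // a driver may legally run them one after the other.
    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    void CreateQueues(ID3D12Device* device,
                      ComPtr<ID3D12CommandQueue>& gfxQueue,
                      ComPtr<ID3D12CommandQueue>& computeQueue)
    {
        D3D12_COMMAND_QUEUE_DESC desc = {};
        desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;    // graphics + compute + copy
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&gfxQueue));

        desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;   // compute + copy only
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));
    }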

    - - - Updated - - -

    Quote Originally Posted by Fascinate View Post
    You guys really have an issue with the way I word my sentences, don't you? lol. Of course I don't know, but I am telling you right now Zen will NOT be a clean sweep over Broadwell-E. It will fall down in some area; we just don't have enough information right now as to where. I am just saying, in this thread, my best guess is clock speeds.

    I also don't 100% believe Zen has higher IPC than Broadwell. I think they are close, but the demos they had at the press conference could absolutely have been tweaked to favor its architecture one way or another.
    I don't think AMD has lied or cheated in their presentations so far. There's a simple reason why ... more or less the whole PC world is waiting for Zen, and once the chips are out in the open, people will benchmark the shit out of them. ANY cheat would come to light in like 20 seconds, and AMD would be done for. I also believe that's why they didn't give frame rates - games and GPU drivers will change too much before the release in Q1 2017, so two months later the fps could look different and there would be a really big issue with accusations of cheating.

    As for the facts we know up till now, based on what I said before:

    They showed the Blender benchmark some months ago, with a 6900K Broadwell-E vs. Zen at 3 GHz, both with the same hardware, RAM and stock cooling, no Turbo, no HT. Zen was a tiny bit faster.
    Now in the new presentation we saw Zen at 3.4 GHz, no HT, no Turbo, vs. the Intel 6900K at 3.5 GHz with HT and Turbo, again both with almost the same time. I suspect HT is the reason Broadwell-E matched Zen's speed this time, unlike in the first benchmark.

    Also, those Intel Blender times were CONFIRMED by other users running the same Blender benchmark on their own PCs with a stock 6900K. And after AMD told people to run it at 150 samples, they fucked up big, because they had uploaded the wrong file, one set to render 200 samples. If you changed the sample count in that first file, you got the same results. Not to mention the typo in the PowerPoint...

    Same goes for the HandBrake benchmark they did. Shortly afterwards it was claimed that the Intel chip was only running at stock clocks, because Task Manager showed "3.2 GHz" at the top - but that's part of the CPU name: Intel puts the stock clock in the model string, AMD does not. It is not the actual CPU clock, which the expanded Task Manager shows.

    For the games, well, it's hard to say without fps numbers, but Zen ran the game at 4K on a Titan X just as well as the Intel CPU did. Actually, I really enjoyed the fact that they used a Titan X, because THAT demonstrated the same speeds with another manufacturer's card, and it wasn't possible to cheat with it (behind the scenes). If they had used Vega for it, it would have been called biased again.

    The Dota h264 streaming thing was stupid - they had a 6700K @ 4.5 GHz vs. Zen, and with the 6700K the stream was dropping frames. That was to be expected if you run a 4-core CPU against an 8-core one on a CPU-intensive task. A 6900K would not have dropped any frames, I'm sure of it. Dunno why they pulled this stunt.


    IMHO Zen looks impressive, and I expect at least Broadwell-E IPC. If it's true that Zen has a 95 W TDP vs. the 140 W TDP of the Intel 6900K, the clocks should go quite high. The new tech they've built in is impressive too, like the neural network prediction and the sensors. THIS is what I call real evolution, not the crap Intel has been doing for four generations since they got back into the game with the Core 2 Duo.

    And a quick comment for the people saying "there's no way they come back that hard into the game" ... well, yes? When Intel had the P4, they also came back hard with the C2D, with way more performance than before. Also, AMD's tech is far from bad. They improved enormously from Bulldozer to Excavator - to get that much out of that shitty architecture is quite impressive. And don't forget: what Intel makes per year in profit is more than AMD has in sales. We're talking David vs. Goliath here. That's also why the P4 was in better shape than AMD's FX line - Intel just threw lots of money at it to keep it going until the C2D was ready.

    And don't forget who AMD hired for Zen - Jim Keller. That guy is what you could call a CPU god. He was the lead engineer for AMD's K8 (and the Athlon 64, whose 64-bit instruction set Intel also uses, because their IA-64 failed so hard) and was on the team for Apple's A4 and A5 (awesome performance). When a guy like that, with broad knowledge of Alpha RISC (anyone besides me even remember those things, and NT 4.0?), x86, x64, MIPS, ARM chips, etc., gets to do a new PC design, something like Zen gets created. If you look at Zen's functionality, you can see many things from other designs that flowed into it.

    A neural network for prediction, a net of sensors measuring the OC headroom in real time and boosting accordingly in 25 MHz steps - this is awesome, really. It's like management took a bunch of engineers and told them to build a CPU from scratch and do whatever they want. Just build the best CPU you can.

    If everything goes right and they really can deliver that performance plus a price that wins market share back, I can see 2017 being one awesome year. Even better if Vega can deliver what it promises. It's really time for Intel and Nvidia to get their asses kicked again by proper competition. Just look at Intel's CPU prices (>$1k) and you know where we're headed - right back to the Pentium days. And I hope those never come back (the MMX 200 was a joke and expensive as fuck).
    "Who am I? I am Susan Ivanova, Commander, daughter of Andrej and Sophie Ivanov. I am the right hand of vengeance and the boot that is gonna kick your sorry ass all the way back to Earth, sweetheart. I am death incarnate and the last living thing that you are ever going to see. God sent me." - Susan Ivanova, Between the Darkness and the Light, Babylon 5

    "Only one human captain ever survived a battle with a Minbari fleet. He is behind me! You are in front of me! If you value your lives - be somewhere else!" - Delenn, Severed Dreams, Babylon 5

  10. #430
    Deleted
    OK, it seems AMD's Blender benchmark was using 150 samples, not the default of 200. Not that the sample count matters for comparing the numbers against each other; the scaling with samples seems reasonably linear.

    5820K @ 4.3 GHz

    100 samples = 27.69 s
    150 samples = 40.40 s
    200 samples (default) = 53.53 s

    No other settings changed; only the sample number.

    The Ryzen result is still better, so an impressive showing. I would love to do the HandBrake bench, but I can't seem to find the file they used, assuming they even released it.
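
    For what it's worth, the per-sample cost in those numbers backs up the linearity claim. A trivial check (units assumed to be seconds, as posted above):

    Code:
    #include <cstdio>

    int main() {
        // Render times quoted above for the 5820K @ 4.3 GHz run.
        const int    samples[] = {100, 150, 200};
        const double seconds[] = {27.69, 40.40, 53.53};
        for (int i = 0; i < 3; ++i)
            std::printf("%3d samples: %.4f s per sample\n",
                        samples[i], seconds[i] / samples[i]);
        // Prints roughly 0.277, 0.269, 0.268 s/sample, i.e. near-linear scaling.
    }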

  11. #431
    Quote Originally Posted by Maerad View Post
    What the fuck are you developing? I mean, if you program a shader in assembly by yourself there might be some kind of per-card error, but usually you just use DX or Vulkan/OpenGL - where do you even have direct access to the hardware to fuck something up? I've never had a framework where AMD or Nvidia fucked something up - at least not when I did nothing wrong. If you use something like Unreal Engine 4, you have to code AI and such, but you can more or less paint the darn world. I know room planners that are harder to use than UE4.

    And what about the artists? I call BS on that. Really. An artist (like, 2D drawing) uses whatever he wants. They get wet over the new Surface thing you can paint on, which has an Intel GPU. 3D rendering might be something different, but those are not artists, more like engineers. And there it depends, because both Nvidia and AMD are OK to use. We had AMD's FirePro (GL?!) and Nvidia's Quadro here - if the CAD program can use it, it doesn't really matter which you use. Both are fine. Well, AMD is a bit cheaper with more or less the same performance.

    The main advantage Nvidia really has over AMD in the professional sector is CUDA, and that their OpenGL is faster. That's it. And CUDA is shit, because you're not using an open standard but something proprietary that can fuck up your company if Nvidia ever changes the code or how the cards work. Or if they develop a CUDA 2.0 without backward compatibility.
    Eh? Put a wrong barrier on an AMD card and you get a black screen (and that's a good outcome, considering that not too long ago they could BSOD), while NVidia usually carries on. Capture a DX frame on AMD? Pray to the gods that it succeeds. Not one year ago, every slight deviation from the OpenGL spec on an AMD GPU would almost certainly not work. I really doubt the situation has changed much, seeing how OpenGL fares in general (a dead spec).
    Of course you don't want bugs when you're about to ship, but what good does that do you when you just want to get some work done?
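
    For anyone wondering what a "wrong barrier" looks like (a minimal sketch; cmdList and texture are assumed to exist already): a D3D12 transition barrier declares the resource's before/after states, and if StateBefore doesn't match the resource's actual state, the behaviour is undefined. How gracefully a GPU survives that is exactly the vendor difference I mean.

    Code:
    // Transition a texture from render target to shader input.
    // The debug layer flags a mismatched StateBefore; without it the
    // result is undefined behaviour, and vendors differ in how they fail.
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource   = texture;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;  // must match reality
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
    cmdList->ResourceBarrier(1, &barrier);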

    If you want proof from more rigid segments, go to a hospital, take a look at some Class III medical equipment that does graphics, and check its GPU. You will very rarely find an AMD GPU inside (if at all! I know I've never seen one). You know why? Because people usually don't develop on it, hence it doesn't go through the FDA, and hence it can't be swapped in after you're done developing on NVidia.

    I call an artist anyone who does models, animation, lights, textures, level editing, or sound. Some of them are technical artists (with an engineering background), but they're artists nonetheless. You can tell a technical person is an artist through and through when he comes to you and asks to override PBR with some hand-made shading because it looks odd in that one scene. The specific type of artist I meant there is the kind who keeps the engine open during development for the stuff they do. Of course the concept-art guy is not part of that group.

    What you're missing is that, unless you license a ready-made engine, during early development cycles the engine is not always stable, and that means an artist has even more variables to deal with. That is why artists prefer NVidia: even if the engine fucks up, there's a higher chance the GPU will do things right.
    Artist time >> GPU cost. The $100 (best case) you save on the difference between vendors will be outweighed by the first engine instability you hit. I was nearly slapped the last time I suggested some of the art guys switch to AMD so that it would be easier to catch bugs.

    For the CAD work itself the GPU vendor is not important, but when you run the engine to see how your glorious animation actually looks in it, then it can be important.
    However, why on earth would you need a Quadro for artist work? That's an insane waste of money. Save by going FirePro? Save by buying consumer-level cards; we're not in the stone age, when Quadros were vastly superior at graphics tasks. I'm pretty sure it's cheaper to license a new Maya than to buy a bunch of Quadros. Modern CADs no longer use the fixed pipeline. I'm not sure what reason there is to buy a Quadro nowadays except supporting old CADs, doing double-precision work, or deep learning, with the last two being 100% irrelevant to artists.

    And finally, why would NVidia fuck up CUDA? What incentive do they have? Why are you worried about CUDA being a closed standard but fine with DX? Fine with a closed platform, namely the XBone? And, finally, fine with a super-closed platform, namely the PS4 (you're signing an NDA, for crying out loud)? Are you not worried that Sony will fuck up the next patch and your game will be unplayable? What kind of rubbish is this??
    CUDA is used in medical devices; NVidia couldn't fuck up CUDA even if they wanted to. CUDA has been running for 8 major versions now, and so far they haven't fucked up anything. In fact, there's a version 8 for OSX - now tell me, what Pascal card is there in any Mac?

  12. #432
    Fluffy Kitten Remilia
    10+ Year Old Account
    Join Date
    Apr 2011
    Location
    Avatar: Momoco
    Posts
    15,160
    The main differences with Quadro/FirePro are ECC RAM, different binning, and the drivers. Not really useful outside of certain applications, and games aren't one of them. It's mostly about video rendering or specific compute tasks where FP64 isn't needed much. Sometimes they need to run 24/7 (or close to it), so the lower power draw is nice there. They either fit your situation perfectly or are completely useless.

  13. #433
    Old God Vash The Stampede
    10+ Year Old Account
    Join Date
    Sep 2010
    Location
    Better part of NJ
    Posts
    10,939
    Quote Originally Posted by Turaska View Post
    Not falling for the same BS as with the 480s. I remember when they said the 480s wouldn't run too hot... yet they were practically melting when they first came out. Sticking with Intel; I like to think paying the extra for Intel ensures quality.
    I would say the RX 480s are a raging success. I don't know where you got the idea that they run hot. JayzTwoCents showed that the 480 can run cool and at low power if you give it a better cooler; he was able to make it pull around 100 W.

    Most enthusiasts recommend the RX 480, but for whatever reason most people actually buy a 1060. I consider the 3GB 1060 to be wrong on many levels. The 6GB 1060 has value if and only if you spend $250 and no more. And remember, Nvidia milked the 1060 for $300 with the Founders Edition.

    Quote Originally Posted by Fascinate View Post
    Bottom line is Intel will still have a lead when overclocking is taken into consideration, but what people really want to know is how the 4-core, 8-thread chips do and whether they can bring down the price of the 6700K/7700K.
    Remember, Intel CPUs can only be overclocked if they're K parts; that limits overclocking to the 6600K/6700K/7700K. Whereas we know AMD plans not to limit Zen's overclockability. Zen will even auto-overclock depending on your cooler, though that's really just Turbo mode with a tweak.

    An i5-6600 is 3.90 GHz, won't go higher, and costs $220. The quad-core Zen will be cheaper, have HT, and allow manual overclocking.
    Last edited by Vash The Stampede; 2016-12-22 at 12:30 AM.

  14. #434
    Quote Originally Posted by Maerad View Post
    GCN1 cards will have async again, but the fix will take longer than expected, so for the time being it was disabled. But you can just use the old drivers.
    Can this be taken as an official AMD statement?

    Well, if there were a game built from the start for DX12 and every component of it, like async compute, I think Nvidia's current gen would run really badly on it.
    Do explain how you force async execution in DX12. I'm currently working on an engine that is built for modern APIs (DX12 included), and I have literally no idea what you're talking about.

    IMHO we will never see a game with async compute that forces it on Nvidia cards. They will always build in a fallback, because Nvidia would run really badly on it. And they have quite the market share.
    Or maybe, just maybe, because you can't force it on?
    You can write async code as much as you like, and if it performs badly on Nvidia, they will serialize it in the driver. And that will be 100% within the specification. If you fail at optimizing for specific hardware, then the vendor is free to disable your hoop-jumping in that specific case. Notice that I wrote vendor: if the same happened with AMD, they would do the same.
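
    To spell out why that's within spec (a minimal sketch; the queues, fence and command lists are assumed to be created elsewhere): the only cross-queue ordering DX12 defines is through fences. The code below expresses a dependency between the queues and nothing more; a driver that runs them strictly back-to-back still satisfies every guarantee made here.

    Code:
    // gfxList / computeList are ID3D12CommandList*, fence is ID3D12Fence*.
    gfxQueue->ExecuteCommandLists(1, &gfxList);
    gfxQueue->Signal(fence, 1);              // mark the graphics work complete

    computeQueue->Wait(fence, 1);            // GPU-side wait, no CPU stall
    computeQueue->ExecuteCommandLists(1, &computeList);
    // Overlap between the two queues is permitted, never promised.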
    Last edited by dadev; 2016-12-22 at 12:32 AM.

  15. #435
    If those leaked benchmarks are to be believed, the Ryzen chip with twice as many cores is only barely better than the quad-core i7-7700K. Does that mean Cinebench doesn't work properly with the Ryzen chips, or could this really mean that Ryzen's (maybe even Zen's in general) IPC is insanely low?

  16. #436
    Fluffy Kitten Remilia
    10+ Year Old Account
    Join Date
    Apr 2011
    Location
    Avatar: Momoco
    Posts
    15,160
    Quote Originally Posted by Karon View Post
    If those leaked benchmarks are to be believed, the Ryzen chip with twice as many cores is only barely better than the quad-core i7-7700K. Does that mean Cinebench doesn't work properly with the Ryzen chips, or could this really mean that Ryzen's (maybe even Zen's in general) IPC is insanely low?
    Are you talking about the leak that was actually an E5-2660(?), the one that was already flagged as fake and deleted by the mods on the forum it leaked from?

  17. #437
    The Lightbringer Evildeffy
    15+ Year Old Account
    Join Date
    Jan 2009
    Location
    Nieuwegein, Netherlands
    Posts
    3,772
    Quote Originally Posted by Karon View Post
    If those leaked benchmarks are to be believed, the Ryzen chip with twice as many cores is only barely better than the quad-core i7-7700K. Does that mean Cinebench doesn't work properly with the Ryzen chips, or could this really mean that Ryzen's (maybe even Zen's in general) IPC is insanely low?
    Those benchmarks were confirmed long ago to have been made by some jackass trolling the Baidu fora.
    The CPU actually used, with its ID modified, was an Intel Xeon E5-2660.

    - - - Updated - - -

    Quote Originally Posted by dadev View Post
    Do explain how you force async execution in DX12. I'm currently working on an engine that is built for modern APIs (DX12 included), and I have literally no idea what you're talking about.

    Or maybe, just maybe, because you can't force it on?
    You can write async code as much as you like, and if it performs badly on Nvidia, they will serialize it in the driver. And that will be 100% within the specification. If you fail at optimizing for specific hardware, then the vendor is free to disable your hoop-jumping in that specific case. Notice that I wrote vendor: if the same happened with AMD, they would do the same.
    Last I checked, the spec says that whilst Async Compute is a part of DX12/Vulkan, it isn't mandatory to use.
    So it can either be disabled entirely on the driver or game side... or be serialized and, depending on load, suffer a POSSIBLE performance loss.

  18. #438
    Quote Originally Posted by Evildeffy View Post
    Last I checked, the spec says that whilst Async Compute is a part of DX12/Vulkan, it isn't mandatory to use.
    So it can either be disabled entirely on the driver or game side... or be serialized and, depending on load, suffer a POSSIBLE performance loss.
    And this is correct.
    I would wager that if Nvidia (or AMD for that matter, it's just less likely for them) tests some widespread DX12-based game and sees that serializing that game's dedicated compute queue into the graphics queue gains performance, they will do it via a profile in the next driver release. If not, they'll leave it. If they serialize and lose performance, it's their own problem.

  19. #439
    Quote Originally Posted by Evildeffy View Post
    Those benchmarks were confirmed long ago to have been made by some jackass trolling the Baidu fora.
    The CPU actually used, with its ID modified, was an Intel Xeon E5-2660.

    - - - Updated - - -


    Last I checked, the spec says that whilst Async Compute is a part of DX12/Vulkan, it isn't mandatory to use.
    So it can either be disabled entirely on the driver or game side... or be serialized and, depending on load, suffer a POSSIBLE performance loss.
    Ah, good to know; it would've been pretty sad otherwise...

    Quote Originally Posted by Remilia View Post
    Are you talking about the leak that was actually an E5-2660(?), the one that was already flagged as fake and deleted by the mods on the forum it leaked from?
    The fact that Evildeffy could answer my post like a normal human being makes me wonder why people think it's necessary to answer with rhetorical questions no one asked for.

  20. #440
    The Lightbringer Evildeffy
    15+ Year Old Account
    Join Date
    Jan 2009
    Location
    Nieuwegein, Netherlands
    Posts
    3,772
    Quote Originally Posted by dadev View Post
    And this is correct.
    I would wager that if Nvidia (or AMD for that matter, it's just less likely for them) tests some widespread DX12-based game and sees that serializing that game's dedicated compute queue into the graphics queue gains performance, they will do it via a profile in the next driver release. If not, they'll leave it. If they serialize and lose performance, it's their own problem.
    Well, last I checked, AMD will never serialize it, as their compute queues all have a hardware scheduler attached (two if you have the RX 470/480 series).

    The only time they will lose performance with an Async Compute implementation is if the game developer fucks it up royally.
    AMD's hardware is literally built around hardware "multi-threading", whereas nVidia's hardware emulates it in software.

    nVidia's driver overhead is indeed smaller, and (last I checked) they have better performance with deferred contexts, but DX11 still has to submit those via the main processor thread, and there is no way around that.
    The whole point of DX12/Vulkan is that it unleashes this concept on the PC, allowing all CPU threads to communicate with the GPU rather than just the main thread as in DX11.
    I'm not sure whether OpenGL follows the same routine or actually has more "multi-threading" capability than DX11, so I won't comment on that.
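
    To make the "all CPU threads" point concrete (a simplified sketch under my own naming; the command lists and their per-thread allocators are assumed to be created beforehand): in DX12, every worker thread records into its own command list in parallel, and the only serial step left is the final submission. DX11 deferred contexts, by contrast, still replay through the single immediate context.

    Code:
    #include <d3d12.h>
    #include <thread>
    #include <vector>

    void RecordInParallel(std::vector<ID3D12GraphicsCommandList*>& lists,
                          ID3D12CommandQueue* queue)
    {
        std::vector<std::thread> workers;
        for (ID3D12GraphicsCommandList* list : lists)
            workers.emplace_back([list] {
                // ... record draws/dispatches into 'list' here ...
                list->Close();           // each thread closes only its own list
            });
        for (std::thread& w : workers) w.join();

        // One cheap, serial submission of everything recorded in parallel.
        std::vector<ID3D12CommandList*> raw(lists.begin(), lists.end());
        queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
    }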

    - - - Updated - - -

    Quote Originally Posted by Karon View Post
    The fact that Evildeffy could answer my post like a normal human being makes me wonder why people think it's necessary to answer with rhetorical questions no one asked for.
    Remilia is not a bad moderator/person at all; he probably just phrased it a little differently because he, like me, gets annoyed by people who might look like they are trolling or are obviously biased.

    My responses can become quite heated as well, but most of the time I answer in a direct, often "cold" fashion, which leads a lot of people to think that I'm being condescending when I'm really not.
    So while you may think I answered you perfectly by your standards now, in another thread I might sound like a complete ass to you while Remilia, for example, answers in a more "human" way.

    Just keep in mind what kind of thread you're posting in, and that people who want to see open-mindedness get aggravated by clear favouritism, by posters who present their opinions as clear fact when they really aren't, and who then dare call themselves proper techies.
    I'm not saying you did; it's just that this thread's "playing field" has that kind of content.
