  1. #141
    Pit Lord Ghâzh's Avatar
    15+ Year Old Account
    Join Date
    Mar 2009
    Location
    Helsinki, Finland
    Posts
    2,329
    Quote Originally Posted by Dukenukemx View Post
    Here's one example of HT that lowers FPS. Not by much, of course, because modern HT is pretty good at load balancing the work. Here's the link to the results. But there are games that do benefit from having HT on; there just aren't many.
    Would be interesting to know if he had core parking enabled or disabled. He was also running on Windows 7 with its context switching (which 8 and 10 improved upon). Then again, even if that wasn't the problem, would you recommend disabling HT based on that alone? Or well, you could, and also accept the fact that you were an idiot when buying that i7 instead of that i5.
    But because things aren't always perfect in the real world, HT can sometimes cause a performance loss if the application isn't multithreaded, like WoW.
    If you can call 1 FPS a performance loss..

    What I'm getting at is that the earlier recommendation of forcing WoW to specific cores is just idiocy. I can see disabling HT as an option to gain an absolutely minimal amount of extra FPS, but forcing specific cores either does nothing or cripples your performance even further by making your OS work around the fact that there are cores reserved for something.

    Also, what further confused me was the fact that Remilia told him to force WoW's main thread from hyperthreaded core 7 to physical core 1. I mean, what the hell?
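    To be clear about what that advice even means: here's a minimal sketch, assuming Windows and a compiler like MSVC, of pinning a process to chosen logical processors with SetProcessAffinityMask (the same thing Task Manager's "Set affinity" does). The mask value is a hypothetical example, not a recommendation.
    Code:
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Each bit in the mask is one logical processor. With HT on,
           Windows typically numbers them in pairs: bits 0 and 1 are the
           two logical processors of physical core 0, and so on. */
        DWORD_PTR mask = 0x3; /* logical processors #0 and #1 (hypothetical choice) */

        /* Pin the current process. For another process (e.g. WoW) you would
           OpenProcess() its PID with PROCESS_SET_INFORMATION access first;
           per-thread pinning uses SetThreadAffinityMask instead. */
        if (!SetProcessAffinityMask(GetCurrentProcess(), mask)) {
            fprintf(stderr, "SetProcessAffinityMask failed: %lu\n", GetLastError());
            return 1;
        }
        printf("Pinned to logical processors 0 and 1.\n");
        return 0;
    }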

  2. #142
    Old God Vash The Stampede's Avatar
    10+ Year Old Account
    Join Date
    Sep 2010
    Location
    Better part of NJ
    Posts
    10,939
    Quote Originally Posted by Ghâzh View Post

    If you can call 1 FPS a performance loss..
    It's 3 in some games, but it's insignificant either way. Also consider that overclocking a 970 would gain you 5 or 6 FPS in games, so yeah.
    What I'm getting at is that the earlier recommendation of forcing WoW to specific cores is just idiocy. I can see disabling HT as an option to gain an absolutely minimal amount of extra FPS, but forcing specific cores either does nothing or cripples your performance even further by making your OS work around the fact that there are cores reserved for something.
    It's not uncommon to do that when an application is demanding. An application that does video encoding will sometimes do that to maximize performance. Locking threads to cores is an option in many applications, like Dolphin. WoW is exceptionally dependent on single-core performance, unlike most games that don't care what CPU you use. So things like disabling HT and locking threads to cores would get you the maximum available performance. But like I said, in 99.99% of games it does nearly nothing, and since we don't have a way to benchmark WoW, we can only go by "feels".


  3. #143
    Pit Lord Ghâzh's Avatar
    15+ Year Old Account
    Join Date
    Mar 2009
    Location
    Helsinki, Finland
    Posts
    2,329
    Quote Originally Posted by Dukenukemx View Post
    It's not uncommon to do that when an application is demanding. An application that does video encoding will sometimes do that to maximize performance. Locking threads to cores is an option in many applications, like Dolphin. WoW is exceptionally dependent on single-core performance, unlike most games that don't care what CPU you use. So things like disabling HT and locking threads to cores would get you the maximum available performance. But like I said, in 99.99% of games it does nearly nothing, and since we don't have a way to benchmark WoW, we can only go by "feels".
    Disabling HT and locking cores are not the same thing, though, nor do they accomplish the same results. You don't lock cores because an application is demanding; it has more to do with the way it's coded, what instructions its threads are running, and how the caching works. This is why you don't touch those settings if you aren't sure what they affect.

    Be that as it may, the advice was just complete gibberish. If you wanted to core-lock (because you didn't trust your OS to handle it), you'd choose cores that are not next to each other, since Windows numbers them in pairs (e.g. #0 and #1 make one physical core, #2 and #3 make the second one, etc.). There's zero difference between core #7 and core #1 in itself. And if you look at how Windows balances WoW's threads, it uses odd numbers by default.
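    That pairing isn't guesswork; you can query it. A minimal sketch, assuming Windows, that asks GetLogicalProcessorInformation which logical processors share each physical core:
    Code:
    #include <windows.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        DWORD len = 0;
        /* First call fails on purpose and reports the required buffer size. */
        GetLogicalProcessorInformation(NULL, &len);

        SYSTEM_LOGICAL_PROCESSOR_INFORMATION *info = malloc(len);
        if (!info || !GetLogicalProcessorInformation(info, &len)) {
            fprintf(stderr, "query failed: %lu\n", GetLastError());
            return 1;
        }

        for (DWORD i = 0; i < len / sizeof(*info); i++) {
            if (info[i].Relationship == RelationProcessorCore) {
                /* Each set bit in ProcessorMask is a logical processor on
                   this physical core; with HT on you should see adjacent
                   pairs (0x3, 0xC, 0x30, ...). */
                printf("physical core -> logical mask 0x%llx\n",
                       (unsigned long long)info[i].ProcessorMask);
            }
        }
        free(info);
        return 0;
    }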
    Last edited by Ghâzh; 2016-01-17 at 06:43 PM.

  4. #144
    Update;

    The R390 has been confirmed for refund, awaiting payment, which should arrive within 2 more days or so. I decided to get the 970; it came today, and first impressions: stunning.

    Everyone was commenting that WoW can't use the GPU's core clock, etc. It seemed weird, since my other cards were all fine using the core clock they could provide, not limiting themselves drastically.

    As soon as I open WoW it's running at a 1379 MHz core clock, 300+ MHz more than the R390 SHOULD have run at, while my R390 was constantly gimping itself, running at 700 MHz.

    The choppy FPS in the garrison is gone, to say the least. It doesn't drop at all (well, of course it does, I V-sync/cap at 60, but it drops to 58 or so and goes instantly back to 60, no more 40 FPS spikes).

    Not sure if the R390 was faulty or what; don't really care. I wanted the R390 to `futureproof`, but I'm sure I can OC this 970 in the future to get close results.

    http://i.imgur.com/bK0qM4x.gif
    GPU-Z of the 970 running WoW

  5. #145
    Deleted
    Don't compare clock speeds between two different architectures.

    Seeing as your R390 did not have issues with other games, it was probably a WoW issue... Anyway, have fun with the 970.

  6. #146
    Quote Originally Posted by Dukenukemx View Post
    As far as the OS is concerned, it sees 8 physical cores. They're not physical but "logical", yet you can treat them as 8 individual cores. Virtual machines can assign 8 cores to an OS. The problem with hyperthreading is that you're just double-feeding a core to increase efficiency, but you would lower single-threaded performance by doing this. HT has gotten good enough that this isn't usually a problem, but it can still happen. For gaming you want to disable HT.
    You are wrong; at least Windows 10 knows what's a logical core and what's a physical core.



    Skylake i7 processors have "Inverse Hyper-Threading", which boosts single-threaded performance when applications are "single-minded". This explains why Skylake-generation i7s have so much better minimum frame rates (making for a smoother experience) compared to i5s of the same generation.

    http://wccftech.com/intel-inverse-hy...ading-skylake/

  7. #147
    Old God Vash The Stampede's Avatar
    10+ Year Old Account
    Join Date
    Sep 2010
    Location
    Better part of NJ
    Posts
    10,939
    Quote Originally Posted by Hellfury View Post
    You are wrong atlest windows 10 knows whats logical cores and whats physical Cores.
    Windows 10, like any previous version, will tell you there are 4 physical and 8 virtual, but that's only because Windows identifies the CPUs and looks for an HT flag. Otherwise it handles them like 8 separate cores.
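    For what it's worth, the "HT flag" is literally a CPUID bit you can read yourself. A minimal sketch, assuming x86 and MSVC's __cpuid intrinsic: bit 28 of EDX from CPUID leaf 1 advertises Hyper-Threading capability (strictly, "more than one logical processor per package", so it can also be set on some non-HT multi-core parts).
    Code:
    #include <intrin.h>
    #include <stdio.h>

    int main(void)
    {
        int regs[4]; /* EAX, EBX, ECX, EDX */

        /* CPUID leaf 1: basic processor info and feature bits. */
        __cpuid(regs, 1);

        /* EDX bit 28 is the HTT flag. */
        int htt = (regs[3] >> 28) & 1;
        printf("Hyper-Threading flag: %s\n", htt ? "set" : "not set");
        return 0;
    }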

    Skylake i7 processors have "Inverse Hyper-Threading", which boosts single-threaded performance when applications are "single-minded". This explains why Skylake-generation i7s have so much better minimum frame rates (making for a smoother experience) compared to i5s of the same generation.

    http://wccftech.com/intel-inverse-hy...ading-skylake/
    Let's recap some of Intel's history of screwing up. Remember the Pentium 4? The first Intel chip to introduce Hyper-Threading? It had a 20-stage pipeline, compared to the 10 stages in the Pentium III or the Athlon XP. The idea was that Intel could achieve a higher clock speed, but at the cost of lower IPC. But this was the early 2000s, and you bought a PC because it had a bigger clock speed! Except it was slower than the 10-stage Athlon XP, because the Athlon had much higher IPC despite its lower clock speed.

    Then Intel's engineers came up with the idea of making Pentium 4 chips look like 2 virtual CPUs. The idea was to separate the 20 stages into two sets of 10. Given that you fed it plenty of bandwidth, it could dramatically improve performance. By double-feeding the CPU, you increase its efficiency. Sounds familiar? It sounds like Intel's "Inverse Hyper-Threading", but in reverse.

    Right now there's no massive single-threaded performance increase that's visible. If anything, Skylake is a joke, since there's hardly any increase over Haswell. But when they get their "Inverse Hyper-Threading" going, we could see something similar to the NetBurst architecture, but maybe done right. Maybe a repeat of AMD vs Intel when AMD releases their Zen CPU? They say history repeats.

  8. #148
    If you have Raptr automatically optimizing your games (bad idea), it enables SSAA for WoW, which basically renders the game at 4K internally (for example, 2x2 supersampling at 1080p is 3840x2160).
    The stuttering sounds like a driver issue.

    Edit: Comparing clock speeds between AMD and Nvidia cards is pointless.
    Last edited by Musta Kyy; 2016-01-24 at 01:06 PM.
    | Ryzen R7 5800X | Radeon RX 6800 |

  9. #149
    Quote Originally Posted by Dukenukemx View Post
    Windows 10, like any previous version, will tell you there are 4 physical and 8 virtual, but that's only because Windows identifies the CPUs and looks for an HT flag. Otherwise it handles them like 8 separate cores.
    source?


    Quote Originally Posted by Dukenukemx View Post
    Let's recap some of Intel's history of screwing up. Remember the Pentium 4? The first Intel chip to introduce Hyper-Threading? It had a 20-stage pipeline, compared to the 10 stages in the Pentium III or the Athlon XP. The idea was that Intel could achieve a higher clock speed, but at the cost of lower IPC. But this was the early 2000s, and you bought a PC because it had a bigger clock speed! Except it was slower than the 10-stage Athlon XP, because the Athlon had much higher IPC despite its lower clock speed.

    Then Intel's engineers came up with the idea of making Pentium 4 chips look like 2 virtual CPUs. The idea was to separate the 20 stages into two sets of 10. Given that you fed it plenty of bandwidth, it could dramatically improve performance. By double-feeding the CPU, you increase its efficiency. Sounds familiar? It sounds like Intel's "Inverse Hyper-Threading", but in reverse.

    Right now there's no massive single-threaded performance increase that's visible. If anything, Skylake is a joke, since there's hardly any increase over Haswell. But when they get their "Inverse Hyper-Threading" going, we could see something similar to the NetBurst architecture, but maybe done right. Maybe a repeat of AMD vs Intel when AMD releases their Zen CPU? They say history repeats.
    I don't care about history lessons; the only similarity between P4 and Skylake hyper-threading is the name.

    It's a fact that current-generation i7s have more stable frame rates due to fewer dips; max FPS is GPU-bound, not CPU-bound.

  10. #150
    The Lightbringer Artorius's Avatar
    10+ Year Old Account
    Join Date
    Dec 2012
    Location
    Natal, Brazil
    Posts
    3,781
    You'd be surprised how much of Intel's modern architecture comes from the P4...

  11. #151
    Old God Vash The Stampede's Avatar
    10+ Year Old Account
    Join Date
    Sep 2010
    Location
    Better part of NJ
    Posts
    10,939
    Quote Originally Posted by Artorius View Post
    You'd be surprised how much of Intel's modern architecture comes from the P4...
    Well... after the failure of the P4, Intel took a step back to the Pentium III and made the Core 2 Duo and the modern Core i3/i5/i7: efficiency over high clock speed. They eventually did bring back Hyper-Threading, because it turns out to be a really good thing, even for the high-IPC Core i3 or Core i7. At the core of every CPU is a bunch of pipelines trying to calculate data ahead, all at once, and the branch predictor is trying to predict which piece of data is a good idea to calculate while other data is being worked on. The output is sequential, but the input is a mess.

    Not sure how modern Haswells are built, but it sounds like Skylake's new VISC feature is going to turn the quad-core CPU into one giant virtual core. Much like the Pentium 4, it wouldn't be as good as 4 cores running multithreaded code or 8 virtual cores with HT. But since most code isn't multithreaded, this would be a huge speed boost for games like WoW. And if Intel can dynamically control this feature on the fly, where you can go from 1 virtual core to 8 virtual cores on the fly, that would be amazing.

  12. #152
    Quote Originally Posted by Artorius View Post
    You'd be surprised how much of Intel's modern architecture comes from the P4...
    Of course, it's x86 after all, but that's like comparing a Ford Model T to a McLaren P1 just because both have 4 wheels and a steering wheel.

  13. #153
    Quote Originally Posted by Svisalith View Post
    WoW is one of the games that gets 30-50% more FPS on Nvidia hardware when CPU-bound, because their graphics drivers are more efficient with CPU load in DX11. WoW is both one of the bigger games affected by the gap between AMD and Nvidia there, and it is usually CPU-bound at times of low FPS (in cities, raids, PvP, anywhere with a lot of players). Someone gave this answer before, but it wasn't taken seriously - it is correct. You have your answer (or at least one of them).
    This is bullshit. Unbelievable troll. "Nvidia hardware when CPU bound because their graphics drivers are more efficient with CPU load in dx11" - what the hell are you talking about?

  14. #154
    The Lightbringer Evildeffy's Avatar
    15+ Year Old Account
    Join Date
    Jan 2009
    Location
    Nieuwegein, Netherlands
    Posts
    3,772
    Quote Originally Posted by Gouca View Post
    This is bullshit. Unbelievable troll. "Nvidia hardware when CPU bound because their graphics drivers are more efficient with CPU load in dx11" - what the hell are you talking about?
    Actually no, his statement is correct... although the claimed FPS gain is exaggerated, nVidia DOES have better DX11 driver optimizations.
    This leads to less overhead in the driver for each draw call etc., which lets the CPU spend more time processing relevant game data.

    Of course, as I said, the gain is not nearly as large as he states, but yes, WoW combined with nVidia has definite edges. It's largely irrelevant with higher-end graphics cards, though, because you'll have a powerful CPU to go with them as well, making the difference in minimum frame rates VERY small.

  15. #155
    Old God Vash The Stampede's Avatar
    10+ Year Old Account
    Join Date
    Sep 2010
    Location
    Better part of NJ
    Posts
    10,939
    AMD's graphics drivers were crap back when ATI was around. Nvidia has their drivers so well organized that it's the same code they use for Linux as well as Windows. AMD is still using the same code base as ATI, which means the code for Linux is totally different than on Windows. It's so badly organized that this continues to this day.

    AMD should be rewriting their drivers from scratch, and I wouldn't be surprised if AMDGPU, R600, and RadeonSI eventually became the groundwork for new, better driver code. But in the meantime, Vulkan and DX12 will remove most driver responsibility from AMD, which raises the question of whether DX11 is even worth the effort, especially when AMD cards do well enough against Nvidia in DX11.

    As far as the CPU goes, it doesn't matter. Nobody's graphics card is limited by the CPU. The game, on the other hand, is very much limited by the CPU. Those are two different things.
