  1. #101
    Quote Originally Posted by Majesticii View Post
    Etc
    Comparing a 4 GHz 8-core from AMD to a 3.5 GHz quad-core from Intel is beyond ridiculous? How exactly?

  2. #102
    Quote Originally Posted by Majesticii View Post
    Yes it does, you're just reading these benchmarks incorrectly. The fact that the AMD scales better is because the Intel saturates the game at a lower frequency. If you look at relative performance, the i5-3470 at 3.2 GHz is faster than the FX-8350 at 4.0 GHz.
    Now compare those results to the 25 fps (i5-3570K) vs 57 fps (FX-8350) they got in the TekSyndicate video, and you understand why we don't believe it.
    Quote Originally Posted by n0cturnal View Post
    That link shows the i5-3470 beating the 8350 at stock speeds. Do you really think the 8350 will double the performance of an overclocked 3570K when the 8350 is also overclocked? I don't think so.

    It could possibly beat the 3570K, but only by a few FPS, not by 30 FPS or more like in the video.
    Incorrect.

    The link shows the 8350 @ 4.5 GHz behind a 3770K @ 4.5 GHz by only 5 FPS.

    Referencing their charts, the 3770K gains only 2-3 FPS per 500 MHz increment while the 8350 gains 6-7 FPS per 500 MHz increment. Perhaps using "walk all over" was a poor choice of words, but the benchmark I linked clearly shows that an 8350 @ 5 GHz should be ahead of a 3570K @ 4.5 GHz.

    This is why these discussions turn into a cesspool of nonsense. (Many) people follow word-of-mouth brand bashing to the point of ignoring the facts in front of their faces and stating the opposite as truth.
    Last edited by glo; 2013-01-27 at 12:27 AM.
    i7-4770k - GTX 780 Ti - 16GB DDR3 Ripjaws - (2) HyperX 120s / Vertex 3 120
    ASRock Extreme3 - Sennheiser Momentums - Xonar DG - EVGA Supernova 650G - Corsair H80i

    build pics

  3. #103
    Quote Originally Posted by glo View Post
    Referencing their charts, the 3770K gains only 2-3 FPS per 500 MHz increment while the 8350 gains 6-7 FPS per 500 MHz increment. Perhaps using "walk all over" was a poor choice of words, but the benchmark I linked clearly shows that an 8350 @ 5 GHz should be ahead of a 3570K @ 4.5 GHz.
    You probably missed my earlier post:

    Quote Originally Posted by yurano View Post
    Fallacy of extrapolation.

    You can't extrapolate unless you know the governing equation, and even then extrapolation is risky.
    The FX-8350's performance might follow an equation with a limiting term: between 2.5 and 4.5 GHz it may be close to linear, yet approach the 'limiter' not much past 4.5 GHz.

    In other words, the FX-8350 might hit a bottleneck just past 4.5 GHz. You can't say for sure whether such a bottleneck is present. As such, you can't extrapolate the results. You can interpolate within 2.5 to 4.5 GHz, but you can't extrapolate to 5 GHz without additional information.
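    To make the point concrete, here is a minimal sketch (hypothetical numbers, not taken from any benchmark) of a response that looks perfectly linear over 2.5-4.5 GHz but is capped just beyond it, so a straight-line extrapolation to 5 GHz overshoots:

    [CODE]
    # Hypothetical model: fps rises roughly linearly with CPU clock until an
    # external cap (e.g. the GPU) is hit. The numbers here are illustrative only.
    def fps(clock_ghz, fps_per_ghz=13.0, cap=62.0):
        return min(fps_per_ghz * clock_ghz, cap)

    # Sampled between 2.5 and 4.5 GHz the data looks perfectly linear...
    samples = {c: fps(c) for c in (2.5, 3.0, 3.5, 4.0, 4.5)}

    # ...so a straight-line fit through those points predicts 65 fps at 5 GHz,
    slope = (samples[4.5] - samples[2.5]) / (4.5 - 2.5)
    predicted = samples[4.5] + slope * (5.0 - 4.5)

    # but the underlying model is already capped at 62 fps.
    print(samples)                 # {2.5: 32.5, 3.0: 39.0, ..., 4.5: 58.5}
    print(predicted, fps(5.0))     # 65.0 vs 62.0
    [/CODE]

    Interpolating inside the sampled range is safe; extrapolating beyond it silently assumes the cap isn't there, which is exactly the assumption being questioned.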

  4. #104
    Majesticii (Brewmaster; Netherlands; joined Feb 2010; 1,414 posts)
    Quote Originally Posted by glo View Post
    Incorrect.

    The link shows the 8350 @ 4.5 GHz behind a 3770K @ 4.5 GHz by only 5 FPS.

    Referencing their charts, the 3770K gains only 2-3 FPS per 500 MHz increment while the 8350 gains 6-7 FPS per 500 MHz increment. Perhaps using "walk all over" was a poor choice of words, but the benchmark I linked clearly shows that an 8350 @ 5 GHz should be ahead of a 3570K @ 4.5 GHz.

    This is why these discussions turn into a cesspool of nonsense. (Many) people follow word-of-mouth brand bashing to the point of ignoring the facts in front of their faces and stating the opposite as truth.
    No, I'm sorry, you're incorrect again. You assume the overclock gain is linear to infinity. There is a ceiling to where overclocking benefits you; there always is. Judging from the results on the Intel, I'd say that's about 70-75 fps. At that point you saturate the GPU, and that's that.
    You are correct that the FX-8350 trails the 3770K by only a small margin, but that's the whole point. In the video they show the AMD doubling the Intel's performance, which is absolute BS.

    EDIT: what yurano said.

  5. #105
    Quote Originally Posted by glo View Post
    Incorrect.

    The link shows the 8350 @ 4.5 GHz behind a 3770K @ 4.5 GHz by only 5 FPS.

    Referencing their charts, the 3770K gains only 2-3 FPS per 500 MHz increment while the 8350 gains 6-7 FPS per 500 MHz increment. Perhaps using "walk all over" was a poor choice of words, but the benchmark I linked clearly shows that an 8350 @ 5 GHz should be ahead of a 3570K @ 4.5 GHz.

    This is why these discussions turn into a cesspool of nonsense. (Many) people follow word-of-mouth brand bashing to the point of ignoring the facts in front of their faces and stating the opposite as truth.
    So let's say your numbers are correct. For them to match the video, both CPUs would have to start at around 20 FPS. Overclock the i5 by 1 GHz and it gains 7 FPS.

    For the 8350 to gain 39 FPS at 6-7 FPS per 500 MHz, it would have to run at roughly 7 GHz.

    And on top of that, there is no way to know it would keep scaling at that rate.
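    Spelling that arithmetic out (a quick check using the 6-7 FPS per 500 MHz figure and the roughly 4 GHz starting clock quoted in this thread):

    \[
    \frac{39\ \text{FPS}}{\approx 6.5\ \text{FPS per }0.5\ \text{GHz}} \approx 6 \text{ increments}, \qquad 4.0\ \text{GHz} + 6 \times 0.5\ \text{GHz} = 7.0\ \text{GHz}.
    \]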
    Intel i5-3570K @ 4.7GHz | MSI Z77 Mpower | Noctua NH-D14 | Corsair Vengeance LP White 1.35V 8GB 1600MHz
    Gigabyte GTX 670 OC Windforce 3X @ 1372/7604MHz | Corsair Force GT 120GB | Silverstone Fortress FT02 | Corsair VX450

  6. #106
    Quote Originally Posted by yurano View Post
    You probably missed my earlier post:



    The FX-8350's performance might follow an equation with a limiting term: between 2.5 and 4.5 GHz it may be close to linear, yet approach the 'limiter' not much past 4.5 GHz.

    In other words, the FX-8350 might hit a bottleneck just past 4.5 GHz. You can't say for sure whether such a bottleneck is present. As such, you can't extrapolate the results. You can interpolate within 2.5 to 4.5 GHz, but you can't extrapolate to 5 GHz without additional information.
    Pulling at straws. The odds of it not scaling just as well for an extra 500 MHz are next to none when it's already doing so from 2.5 to 4.5 GHz.
    i7-4770k - GTX 780 Ti - 16GB DDR3 Ripjaws - (2) HyperX 120s / Vertex 3 120
    ASRock Extreme3 - Sennheiser Momentums - Xonar DG - EVGA Supernova 650G - Corsair H80i

    build pics

  7. #107
    Quote Originally Posted by glo View Post
    Pulling at straws.
    Pulling at straws? This is how calibrations are performed in 'real science'. The limitations on extrapolation aren't specific to comparing CPU clocks.

    I'm not saying there's a guaranteed limiter in the FX-8350; I'm saying you can't extrapolate because of the possibility of one.

    The odds of it not scaling just as well for an extra 500 MHz are next to none when it's already doing so from 2.5 to 4.5 GHz.
    There is no 'probability' in a clear-cut case such as CPU scaling.

    If I give you an equation such as f(x) = x^2, you can't go "f(0) = 0 and f(1) = 1, so f(2) is probably 2".
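    Worked out, the linear guess misses badly even in this tiny example:

    \[
    f(x) = x^2:\quad f(0) = 0,\ f(1) = 1 \;\Rightarrow\; \text{a linear fit predicts } f(2) \approx 2,\ \text{but in fact } f(2) = 4.
    \]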
    Last edited by yurano; 2013-01-27 at 12:45 AM.

  8. #108
    Majesticii (Brewmaster; Netherlands; joined Feb 2010; 1,414 posts)
    Quote Originally Posted by yurano View Post
    I'm not saying there's a guaranteed limiter in the FX-8350; I'm saying you can't extrapolate because of the possibility of one.
    I can; it's the 7970 GHz Edition they tested with, which tops out at about 70 fps.

    http://static.techspot.com/articles-...ch/1920_02.png

    Yeah, go mad on your CPU; you won't gain any FPS if the GPU doesn't allow it. You might get a few extra FPS if there's a lot of physics being calculated or whatever, but not much.

    If I give you an equation such as f(x) = x^2, you can't go "f(0) = 0 and f(1) = 1, so f(2) is probably 2".
    Haha
    Last edited by Majesticii; 2013-01-27 at 12:47 AM.

  9. #109
    Quote Originally Posted by yurano View Post
    Pulling at straws? This is how calibrations are performed in 'real science'. The limitations on extrapolation aren't specific to comparing CPU clocks.

    I'm not saying there's a guaranteed limiter in the FX-8350; I'm saying you can't extrapolate because of the possibility of one.



    There is no 'probability' in a clear-cut case such as CPU scaling.

    If I give you an equation such as f(x) = x^2, you can't go "f(0) = 0 and f(1) = 1, so f(2) is probably 2".
    In every other bench I've seen, the 8350 doesn't randomly run into a mystery limiter past 4.5 GHz. Arguing that one may exist in Far Cry 3 and nowhere else is absurd. So yes, pulling at straws.
    i7-4770k - GTX 780 Ti - 16GB DDR3 Ripjaws - (2) HyperX 120s / Vertex 3 120
    ASRock Extreme3 - Sennheiser Momentums - Xonar DG - EVGA Supernova 650G - Corsair H80i

    build pics

  10. #110
    Quote Originally Posted by Majesticii View Post
    I can; it's the 7970 GHz Edition they tested with, which tops out at about 70 fps.

    http://static.techspot.com/articles-...ch/1920_02.png

    Yeah, go mad on your CPU; you won't gain any FPS if the GPU doesn't allow it. You might get a few extra FPS if there's a lot of physics being calculated or whatever, but not much.
    I feel like your post is missing something I'd need to follow your train of thought, so as it stands it doesn't make sense to me. Would you mind explaining a bit more where you were going with this?

  11. #111
    Quote Originally Posted by glo View Post
    In every other bench I've seen, the 8350 doesn't randomly run into a mystery limiter past 4.5 GHz. Arguing that one may exist in Far Cry 3 and nowhere else is absurd. So yes, pulling at straws.
    You are still missing the point that the 8350 would have to be overclocked to around 7 GHz to out-scale the 3570K by that much.
    Intel i5-3570K @ 4.7GHz | MSI Z77 Mpower | Noctua NH-D14 | Corsair Vengeance LP White 1.35V 8GB 1600MHz
    Gigabyte GTX 670 OC Windforce 3X @ 1372/7604MHz | Corsair Force GT 120GB | Silverstone Fortress FT02 | Corsair VX450

  12. #112
    Quote Originally Posted by Majesticii View Post
    I can; it's the 7970 GHz Edition they tested with, which tops out at about 70 fps.

    http://static.techspot.com/articles-...ch/1920_02.png

    Yeah, go mad on your CPU; you won't gain any FPS if the GPU doesn't allow it. You might get a few extra FPS if there's a lot of physics being calculated or whatever, but not much.



    Haha
    Completely baseless assumption. You have no idea whether the CPU or the GPU is the limiting factor unless you're referencing some next-gen hardware we don't know about.
    i7-4770k - GTX 780 Ti - 16GB DDR3 Ripjaws - (2) HyperX 120s / Vertex 3 120
    ASRock Extreme3 - Sennheiser Momentums - Xonar DG - EVGA Supernova 650G - Corsair H80i

    build pics

  13. #113
    Majesticii (Brewmaster; Netherlands; joined Feb 2010; 1,414 posts)
    Quote Originally Posted by Drunkenvalley View Post
    I feel like your post is missing something I'd need to follow your train of thought, so as it stands it doesn't make sense to me. Would you mind explaining a bit more where you were going with this?
    Sure. They run the CPU scaling test with the same GPU for both. Say the GPU does 70 fps at its maximum, 99% GPU load. Now you clock your CPU from 2.5 GHz to 4.5 GHz. Say the AMD puts the GPU at about 50% load at 2.5 GHz (44 fps), while the Intel already has it at about 85% (65 fps) at 2.5 GHz. The AMD then shows much higher scaling than the Intel, but that only reflects the low saturation it started from. And once you hit 99% GPU saturation, it doesn't scale any further, because the GPU can't render more frames per second; you hit a wall.

    In games with a CPU bias, i.e. older games, you can see both processors scale the same because they never saturate the GPU, due to the low graphics load. Like this one:
    http://www.techspot.com/review/601-b...nce/page5.html

    However, in GPU-intensive games you can see the Intel saturate the GPU sooner, and thus appear to scale less:
    http://www.techspot.com/review/591-m...rks/page6.html

    When actually it's just more efficient per clock.
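    A minimal numeric sketch of that argument (the per-clock throughput figures and the 70 fps ceiling below are illustrative, loosely based on the numbers in this post, not measured data):

    [CODE]
    # Minimal sketch of the argument above. Uncapped, CPU-bound fps grows with
    # clock speed; delivered fps is capped by what the GPU can render (~70 fps
    # in this example). All per-clock figures are illustrative, not measured.
    GPU_CAP = 70.0

    def delivered_fps(fps_per_ghz, clock_ghz, gpu_cap=GPU_CAP):
        cpu_bound = fps_per_ghz * clock_ghz   # what the CPU could feed the GPU
        return min(cpu_bound, gpu_cap)        # the GPU can't draw faster than this

    for clock in (2.5, 3.0, 3.5, 4.0, 4.5):
        amd = delivered_fps(fps_per_ghz=15.0, clock_ghz=clock)    # slower per clock
        intel = delivered_fps(fps_per_ghz=26.0, clock_ghz=clock)  # faster per clock
        print(f"{clock:.1f} GHz   AMD {amd:5.1f} fps   Intel {intel:5.1f} fps")

    # Output: AMD climbs 37.5 -> 67.5 fps (+30), Intel only 65.0 -> 70.0 fps (+5).
    # The AMD line "scales" more only because it starts further below the cap;
    # the Intel part saturates the GPU almost immediately.
    [/CODE]

    The apparent "scaling" gap is an artifact of where each CPU starts relative to the cap, which is the point being made above.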

    Quote Originally Posted by glo View Post
    Completely baseless assumption. You have no idea whether the CPU or the GPU is the limiting factor unless you're referencing some next-gen hardware we don't know about.
    I do, because they do the GPU tests on a heavily overclocked i7-3960X. Pretty sure that thing will pull 99% load on any single GPU.
    Last edited by Majesticii; 2013-01-27 at 01:03 AM.

  14. #114
    Deleted
    $0.128 per kWh... I wish we had it that cheap.

    I pay roughly 2.8 DKK per kWh (roughly $0.50 per kWh) here, lol.

  15. #115
    Quote Originally Posted by glo View Post
    In every other bench I've seen, the 8350 doesn't randomly run into a mystery limiter past 4.5 GHz. Arguing that one may exist in Far Cry 3 and nowhere else is absurd. So yes, pulling at straws.
    Maybe past 4.5 GHz, FC3 is bottlenecked by the 7970 GE, whereas the 3770K is already bottlenecked by the 7970 GE at stock speeds. Where this GPU bottleneck sits varies between games and depends heavily on the settings and the GPU used.
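    One way to frame that: if CPU-bound throughput scales roughly in proportion to clock, the clock at which the GPU becomes the limit is simply where the CPU-bound frame rate meets the GPU's ceiling, so it shifts with both the game's per-clock demands and the GPU used (a rough model, not a measured relationship):

    \[
    f_{\text{saturation}} \approx \frac{\text{FPS}_{\text{GPU cap}}}{\text{FPS per GHz (CPU-bound)}}
    \]

    With a roughly 70 fps cap, a CPU delivering 26 FPS per GHz would saturate the GPU near 2.7 GHz, while one delivering 15 FPS per GHz would need about 4.7 GHz (illustrative figures, matching the sketch earlier in the thread).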
    Last edited by yurano; 2013-01-27 at 12:56 AM.

  16. #116
    Quote Originally Posted by inux94 View Post
    WoW's engine is very outdated, it shouldn't ever be benchmarked IMO.
    You don't benchmark it because it's the latest and greatest game engine. You test it because 10 billion million trillion people play WoW, and with a high retention rate.

  17. #117
    Quote Originally Posted by Majesticii View Post
    Sure. They run the CPU scaling test with the same GPU for both. Say the GPU does 70 fps at its maximum, 99% GPU load. Now you clock your CPU from 2.5 GHz to 4.5 GHz. Say the AMD puts the GPU at about 50% load at 2.5 GHz, while the Intel already has it at about 75% at 2.5 GHz. The AMD then shows much higher scaling than the Intel, but that only reflects the low saturation it started from. And once you hit 99% GPU saturation, it doesn't scale any further, because the GPU can't render more frames per second; you hit a wall.

    In games with a CPU bias, i.e. older games, you can see both processors scale the same because they never saturate the GPU, due to the low graphics load. Like this one:
    http://www.techspot.com/review/601-b...nce/page5.html

    However, in GPU-intensive games you can see the Intel saturate the GPU sooner, and thus appear to scale less:
    http://www.techspot.com/review/591-m...rks/page6.html

    When actually it's just more efficient per clock.

    I do, because they do the GPU tests on a heavily overclocked i7-3960X. Pretty sure that thing will pull 99% load on any single GPU.
    The 3960X performs nearly identically to Ivy Bridge core for core. So yeah, you're making a massive, baseless assumption. Processor saturation was never mentioned anywhere, so throwing out your own made-up numbers means little.
    i7-4770k - GTX 780 Ti - 16GB DDR3 Ripjaws - (2) HyperX 120s / Vertex 3 120
    ASRock Extreme3 - Sennheiser Momentums - Xonar DG - EVGA Supernova 650G - Corsair H80i

    build pics

  18. #118
    Quote Originally Posted by Majesticii View Post
    Sure. They run the CPU scaling test with the same GPU for both. Say the GPU does 70 fps at its maximum, 99% GPU load. Now you clock your CPU from 2.5 GHz to 4.5 GHz. Say the AMD puts the GPU at about 50% load at 2.5 GHz, while the Intel already has it at about 75% at 2.5 GHz. The AMD then shows much higher scaling than the Intel, but that only reflects the low saturation it started from. And once you hit 99% GPU saturation, it doesn't scale any further, because the GPU can't render more frames per second.
    I was going to be all nice and say thank you for the information, but goddang, do you need to work on your writing. I've had to read that a couple of times now to figure out what you were trying to say, and even now it's almost a guess.

    But from what I gathered, what you're saying is that the CPUs were bottlenecking the GPU until they hit the 4.5 GHz mark. With that bottleneck gone, we no longer see the scaling we saw at lower clocks.

  19. #119
    Quote Originally Posted by yurano View Post
    Maybe past 4.5 GHz, FC3 is bottlenecked by the 7970 GE, whereas the 3770K is already bottlenecked by the 7970 GE at stock speeds. Where this GPU bottleneck sits varies between games and depends heavily on the settings and the GPU used.
    Weren't you the one going on about needing absolute facts for any "scientific" reason why the processors perform the way they do?
    i7-4770k - GTX 780 Ti - 16GB DDR3 Ripjaws - (2) HyperX 120s / Vertex 3 120
    ASRock Extreme3 - Sennheiser Momentums - Xonar DG - EVGA Supernova 650G - Corsair H80i

    build pics

  20. #120
    Majesticii (Brewmaster; Netherlands; joined Feb 2010; 1,414 posts)
    Quote Originally Posted by Drunkenvalley View Post
    But from what I gathered, what you're saying is that the CPUs were bottlenecking the GPU until they hit the 4.5 GHz mark. With that bottleneck gone, we no longer see the scaling we saw at lower clocks.
    Because it's very hard for me to describe, and I'm not entirely sober. Sorry about that :P

    But yeah, that's the TL;DR version. The Intel hits the 99% mark way sooner than the AMD processor. That in no way means the AMD scales better; it means the AMD is slower per calculation and needs a higher frequency to saturate the GPU. Honestly, if it takes 4.5 GHz just to saturate a single 7970, it has no hope with any future cards, whereas the 3570K has much more headroom.

    Quote Originally Posted by glo View Post
    Weren't you the one going on about needing absolute facts for any "scientific" reason why the processors perform the way they do?
    Hmmm, no, that was me.

    Quote Originally Posted by glo View Post
    The 3960X performs nearly identically to Ivy Bridge core for core. So yeah, you're making a massive, baseless assumption. Processor saturation was never mentioned anywhere, so throwing out your own made-up numbers means little.
    If I can saturate (99%) an overclocked 670 in that game with an ancient i5-760, a 3960X @ 4.5 GHz can saturate a slightly faster 680. For heaven's sake, you're trying way too hard on this.
    Last edited by Majesticii; 2013-01-27 at 01:11 AM.
