  1. #21
    Right, I knew your numbers were hypothetical, but I was just pointing out that Sandy Bridge and Kaby Lake do actually hit very similar maximum overclocks.

    If it wasn't clear, I'm just trying to get the point across to people who say comparing an overclocked CPU to a non-overclocked CPU isn't fair: if you overclocked both to the max you would still have an apples-to-apples comparison, since they hit similar clock speeds overall, and Intel could have released the 2500K with much, MUCH higher clocks out of the box (similar to the 7600K).

  2. #22
    Quote Originally Posted by Speaker View Post
    I have an i5-3570K OC'd to 4.5 GHz which runs at 100% load frequently in games like Overwatch, Diablo 3, WoW, and many others. Try running a few other programs in the background and the drop in FPS is quite noticeable. It's pretty disappointing. Can't wait for Zen.
    You have something running in the background. It is literally impossible for WoW or D3 to run at 100% CPU usage on a quad core. Why? Because those games can only take advantage of 2 cores at most, because of the way they are coded. Not sure how OW uses multiple cores, but it's a Blizzard game, so I would assume it's the same as WoW/D3.

  3. #23
    My guess is Windows Update; it's super aggressive and will max out CPU and disk usage.

  4. #24
    Well, like I said, they haven't really reduced power consumption on the performance desktop side either; unlocked i5s are still in the ~90 W TDP range.

  5. #25
    Now look at the core count difference between 2011 and nowadays. Intel has been successfully increasing their profits with each new node while also making better products at lower TDPs, plus higher-throughput products for the markets that actually need more processing power.

    The markets that matter are the mobile market and the server market. Desktop CPUs are designed around whatever fits those two needs, and that's how it is. They aren't going to make an entirely new uarch just for this segment alone when it doesn't necessarily make sense for the other two.

    Could they redesign everything and come up with a larger core design with stronger single-threaded performance? Of course they could. Why don't they do it? Because it hurts their yields. You can always laser-cut a defective core and make a lower-tier product out of it (the entire uarch is also designed around scavenging, so they don't throw as much silicon away), and the chances of having to throw huge chunks of silicon away because of small defective areas decrease when you have smaller designs (which should be obvious). What Intel is doing scales very well, from CPUs with ridiculously high core counts (what was it again this year? 24 cores?) down to ridiculously low-power, extremely tiny CPUs like the ones you see in tablets and portable netbooks.
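    A rough back-of-the-envelope with a simple Poisson yield model makes the point (the defect density and die areas below are made-up numbers, purely to show the scaling, not real Intel data):

    Code:
    import math

    # Simple Poisson yield model: yield = exp(-die_area * defect_density).
    # Defect density and die areas are hypothetical, just to illustrate the trend.
    defect_density = 0.1  # defects per cm^2 (hypothetical)

    for name, area_cm2 in [("small mobile die", 1.0), ("big many-core die", 6.0)]:
        good = math.exp(-area_cm2 * defect_density)
        print(f"{name}: roughly {good:.0%} of dies come out fully working")

    # small mobile die: roughly 90%, big many-core die: roughly 55%.
    # Which is why being able to disable a bad core and sell the die as a
    # lower-tier part matters far more on the big designs.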

    To claim that they didn't do anything for half a decade is pretty stupid; the focus is just somewhere else. And besides, there's only so much you can refine anyway. The Core uarch still traces back to the Pentium III; they're close to ditching it altogether in favour of a new uarch (I think it's planned for their 7 nm node, not sure, it will get delayed anyway), and then you can be sure to see bigger improvements. For now, what they can do, and most likely will do depending on Zen's pricing, is offer consumer-grade CPUs with more cores.

  6. #26
    Quote Originally Posted by Synthaxx View Post
    TDP != Power usage.

    TDP is Thermal Design Power, which is how much heat energy the chip is rated to release as a maximum over time. TDP is not correlated with power usage, only with how efficiently the chip makes use of the power provided to it. If TDP were power consumption, it would mean that 100% of the energy is being converted to heat.
    Well, that isn't true at all; at full load, a chip with a higher TDP than the next one will generally use more power. Obviously TDP doesn't tell the whole story, and of course power consumption has come down some, but it's nothing like the impressive gains GPUs have made in the past 6 years.
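    To put rough numbers on it: dynamic power scales roughly as C * V^2 * f, so higher clocks (and the extra voltage needed to hold them) drag power up with them. Quick sketch with invented capacitance and voltage figures, just to show the trend:

    Code:
    # Rough dynamic-power estimate: P ~ C * V^2 * f
    # (switched capacitance times voltage squared times frequency).
    # The figures below are invented to illustrate the scaling, not measurements.
    def dynamic_power_watts(cap_farads, volts, hz):
        return cap_farads * volts ** 2 * hz

    print(dynamic_power_watts(25e-9, 1.00, 3.3e9))  # stock-ish clocks/voltage  -> ~82 W
    print(dynamic_power_watts(25e-9, 1.25, 4.5e9))  # overclocked-ish           -> ~176 W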

  7. #27
    Competition fuels innovation. They don't really have any competition.
    "Privilege is invisible to those who have it."

  8. #28
    You need to take into account the other chipset changes since Sandy Bridge to really get the whole story, IMO. Yeah, we're not getting crazy IPC improvements like we were prior to Sandy Bridge, but we're still getting a ~40% improvement overall, with a bunch of other features thrown in there too.
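    Stack up a modest gain each generation and it compounds to roughly that. The per-generation figure here is just an illustrative average, not an official number:

    Code:
    # Compounding a modest per-generation gain over the Sandy Bridge -> Kaby Lake span.
    # The 7% per generation is only an illustrative average.
    gain_per_gen = 0.07
    generations = ["Ivy Bridge", "Haswell", "Broadwell", "Skylake", "Kaby Lake"]

    total = 1.0
    for gen in generations:
        total *= 1 + gain_per_gen

    print(f"~{total - 1:.0%} cumulative gain over {len(generations)} generations")  # ~40%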

  9. #29
    Quote Originally Posted by Artorius View Post
    Now look at the core count difference between 2011 and nowadays. Intel has been successfully increasing their profits with each new node while also making better products at lower TDPs, plus higher-throughput products for the markets that actually need more processing power.

    The markets that matter are the mobile market and the server market. Desktop CPUs are designed around whatever fits those two needs, and that's how it is. They aren't going to make an entirely new uarch just for this segment alone when it doesn't necessarily make sense for the other two.

    Could they redesign everything and come up with a larger core design with stronger single-threaded performance? Of course they could. Why don't they do it? Because it hurts their yields. You can always laser-cut a defective core and make a lower-tier product out of it (the entire uarch is also designed around scavenging, so they don't throw as much silicon away), and the chances of having to throw huge chunks of silicon away because of small defective areas decrease when you have smaller designs (which should be obvious). What Intel is doing scales very well, from CPUs with ridiculously high core counts (what was it again this year? 24 cores?) down to ridiculously low-power, extremely tiny CPUs like the ones you see in tablets and portable netbooks.

    To claim that they didn't do anything for half a decade is pretty stupid; the focus is just somewhere else. And besides, there's only so much you can refine anyway. The Core uarch still traces back to the Pentium III; they're close to ditching it altogether in favour of a new uarch (I think it's planned for their 7 nm node, not sure, it will get delayed anyway), and then you can be sure to see bigger improvements. For now, what they can do, and most likely will do depending on Zen's pricing, is offer consumer-grade CPUs with more cores.
    Oh, I love what Intel is doing on the lower-end stuff; Pentiums are finally hyperthreaded and they even have 35 W parts for HTPC use. Of course the aim isn't the performance desktop market, for obvious reasons; I just didn't know the gains were so low over the years, and that's why I made the thread. Obviously Cinebench doesn't tell the whole story, but it's a pretty good benchmark for judging raw horsepower.

    And yes, hopefully Zen changes the game, i5s turn into i7s, and i7s turn into 8-cores.

  10. #30
    The amount of conspiracy tinfoil in here is astounding.

    Intel has been up-front for years that gains were coming slower and slower, and about the reason why.

    Physics is why. Hell, they've had to alter their entire product cycle because, even with 5-7 year lead times on these new architectures, they are STILL not able to ship them on time, because the die shrinks are becoming nearly impossible. If it were as simple as just "coming up with a new architecture", they'd have done it. They re-invest billions into research every year. (They are one of the few companies with such a research budget, or a Board of Directors willing to let them reinvest so much instead of paying higher dividends to shareholders.)

    Even Ryzen is looking to hit roughly the same IPC wall on the same size nodes.

    Quote Originally Posted by Chickat View Post
    If only games benefited from that stuff as much as they should. Surely there's a way to make games benefit from at least multithreading. A lot of games still only really benefit from 2 cores, let alone 4 or 8.
    Yeah, except there often really ISN'T any way to make that happen.

    A lot of calculations for games MUST be done in serial. Equation X requires the result of Equation W to even start, etc.
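    That's basically Amdahl's law: if part of the frame has to run in order, extra cores stop helping very quickly. Rough sketch (the 40% serial fraction is just an example, not a measurement of any real game):

    Code:
    # Amdahl's law: speedup = 1 / (serial_fraction + (1 - serial_fraction) / cores).
    # The 40% serial fraction is a made-up illustration.
    def speedup(serial_fraction, cores):
        return 1 / (serial_fraction + (1 - serial_fraction) / cores)

    for cores in (2, 4, 8, 16):
        print(f"{cores:2d} cores -> {speedup(0.40, cores):.2f}x")
    # 2 -> 1.43x, 4 -> 1.82x, 8 -> 2.11x, 16 -> 2.29x:
    # it flattens out no matter how many cores you throw at it.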

    Quote Originally Posted by Speaker View Post
    I have an i5-3570K OC'd to 4.5 GHz which runs at 100% load frequently in games like Overwatch, Diablo 3, WoW, and many others. Try running a few other programs in the background and the drop in FPS is quite noticeable. It's pretty disappointing. Can't wait for Zen.
    Then you're having some other issue and/or already running a ton of background processes.

    Overwatch in particular is super lightweight. It doesn't even max out a single thread on my i7-4790K. And that's running at 200% render scale/4K DSR.

  11. #31
    Comparing CPUs and GPUs is also incredibly stupid.

    With GPUs, even if you don't change the uarch AT ALL, you can literally just do more of the same when you change nodes (if you had 10 imaginary processing units before, now you can have 20, for example) and you'll get an impressive performance difference.
    With CPUs, adding more cores doesn't really work the same way.
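    A toy comparison shows the difference (the serial fractions here are made up): pixel work is close to perfectly parallel, while a game's frame logic isn't.

    Code:
    # Toy comparison: doubling execution units on an almost perfectly parallel workload
    # (GPU-style) versus doubling cores on a workload with a big serial chunk (game-style).
    # The serial fractions are invented for illustration.
    def speedup(serial_fraction, units):
        return 1 / (serial_fraction + (1 - serial_fraction) / units)

    print(f"GPU-ish,  1% serial: {speedup(0.01, 2):.2f}x from doubling the units")  # ~1.98x
    print(f"CPU-ish, 40% serial: {speedup(0.40, 2):.2f}x from doubling the cores")  # ~1.43x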

  12. #32
    Quote Originally Posted by Kagthul View Post
    The amount of conspiracy tinfoil in here is astounding.

    Intel has been up-front for years that gains were coming slower and slower, and about the reason why.

    Physics is why. Hell, they've had to alter their entire product cycle because, even with 5-7 year lead times on these new architectures, they are STILL not able to ship them on time, because the die shrinks are becoming nearly impossible. If it were as simple as just "coming up with a new architecture", they'd have done it. They re-invest billions into research every year. (They are one of the few companies with such a research budget, or a Board of Directors willing to let them reinvest so much instead of paying higher dividends to shareholders.)

    Even Ryzen is looking to hit roughly the same IPC wall on the same size nodes.

    Yeah, except there often really ISN'T any way to make that happen.

    A lot of calculations for games MUST be done in serial. Equation X requires the result of Equation W to even start, etc.

    Then you're having some other issue and/or already running a ton of background processes.

    Overwatch in particular is super lightweight. It doesn't even max out a single thread on my i7-4790K. And that's running at 200% render scale/4K DSR.
    You mean using more cores is hard, or benefiting from multithreading? Because I'd ask why they made the Xbox One and PS4 8-core CPUs if games can't use them much.

  13. #33
    Quote Originally Posted by Tehterokkar View Post
    You have something running in the background. It is literally impossible for WoW or D3 to run at 100% CPU usage on a quad core. Why? Because those games can only take advantage of 2 cores at most, because of the way they are coded. Not sure how OW uses multiple cores, but it's a Blizzard game, so I would assume it's the same as WoW/D3.
    Quote Originally Posted by Kagthul View Post
    Then you're having some other issue and/or already running a ton of background processes.

    Overwatch in particular is super lightweight. It doesn't even max out a single thread on my i7-4790K. And that's running at 200% render scale/4K DSR.
    That's only true if you play on a 60 Hz monitor with V-Sync enabled. I have a 144 Hz monitor, and to me having a constant 144 FPS is important. You can definitely reach 100% CPU utilization in certain areas or encounters without any programs running in the background. And yes, I know some games don't use all 4 cores, so if even 1 core reaches 100%, then I know I'm being bottlenecked by the CPU, which was the point I was making.
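    If anyone wants to check this themselves, per-core usage is what matters, not the overall number Task Manager shows by default. Something like this with psutil works (assuming you have Python and `pip install psutil`):

    Code:
    import psutil  # third-party: pip install psutil

    # Sample per-core utilisation for one second while the game is running.
    # If one core sits near 100% while the rest idle, the frame rate is CPU-bound
    # even though the overall figure only reads ~25% on a quad core.
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)
    for core, pct in enumerate(per_core):
        print(f"core {core}: {pct:.0f}%")
    print(f"overall: {sum(per_core) / len(per_core):.0f}%")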

  14. #34
    Quote Originally Posted by Speaker View Post
    That's only true if you play on a 60 Hz monitor with V-Sync enabled. I have a 144 Hz monitor, and to me having a constant 144 FPS is important. You can definitely reach 100% CPU utilization in certain areas or encounters without any programs running in the background. And yes, I know some games don't use all 4 cores, so if even 1 core reaches 100%, then I know I'm being bottlenecked by the CPU, which was the point I was making.
    Again, no, you cannot reach 100% CPU usage in WoW on a quad-core. It is scientifically impossible. If it's only 1 core reaching 100% and the other cores are at basically 0% usage, then it's your WoW settings that are screwed.

    There is tech in place that splits the load of a once single-threaded task across all available CPU threads, but the total workload still cannot exceed what a single core working on it alone would produce. However, there is a way to disable this in WoW with a console command that has long been forgotten and was probably never used after Wrath (quad cores started appearing around then, and after Cata the behaviour was enabled by default).
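    The arithmetic is simple: spreading one core's worth of work around doesn't create more of it.

    Code:
    # One core's worth of work on a quad core, shown two ways. Purely illustrative numbers.
    cores = 4
    pinned = [100, 0, 0, 0]     # all of it on one core
    spread = [25, 25, 25, 25]   # the same work scattered across all four

    for label, loads in (("pinned", pinned), ("spread", spread)):
        print(f"{label}: total CPU = {sum(loads) / cores:.1f}%")
    # Both come out to 25.0%: spreading the work around doesn't create more of it.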

  15. #35
    Intel has improved a lot over the last 6 years since Sandy. Sure, at the top end they have "only" gained up to 30% in performance, I believe, but top-end performance has not been their biggest focus, as the majority of their consumer customers are not looking for it. So yes, for some people, especially enthusiasts, it might be disappointing. Having said that, as an enthusiast myself I don't mind it that much either, as it saves me a ton of money. From roughly 1993 to 2008 I built a new rig nearly every year, sometimes every 2 years. Then in 2008 I built a Nehalem rig based on an overclocked i7 920 @ 3.8 GHz and kept it until the Haswell release (CPU-wise it could have lasted even longer, if the motherboard hadn't been aging so much that it was slowing down my SSDs, etc.). Then I built a Haswell rig @ 4.5 GHz that I still use today. That's 2 new rigs in around 9 years, and a lot of money saved.

    Intel focused on other design points over the last decade. Their on-chip GPUs have improved massively (and while not useful for us gamers, for the average user that's pretty big). Average power consumption has gone down quite a bit, which is good for everybody, but especially in the laptop market they now have chips that offer good performance plus battery life that was unthinkable 10 years ago, in form factors that are actually handy to carry around. All the chips now have hardware engines for media playback, which was not the case in the past. All in all, Intel improved their CPUs massively over the last 10 years, just not in top-end performance.

  16. #36
    Quote Originally Posted by Speaker View Post
    That's only true if you play on a 60 Hz monitor with V-Sync enabled. I have a 144 Hz monitor, and to me having a constant 144 FPS is important. You can definitely reach 100% CPU utilization in certain areas or encounters without any programs running in the background. And yes, I know some games don't use all 4 cores, so if even 1 core reaches 100%, then I know I'm being bottlenecked by the CPU, which was the point I was making.
    Really? Because I've had times playing WoW in raids where my CPU didn't even have a high enough load for SpeedStep to clock up to my overclock (I leave SpeedStep enabled because I don't want the CPU pulling 190 W when it doesn't have to).

  17. #37
    To be clear, I'm not really upset about this. Sure, I've had the itch to upgrade my PC the last couple of years, but this just means I can put my money into other toys.

    And yes, Intel's low-end stuff is great; check out this 35 W Pentium:
    http://ark.intel.com/products/97486/...Cache-3_00-GHz

    A 35 W part that runs at 3 GHz with hyperthreading; once my Microcenter starts stocking these, one is going into my HTPC. Currently the best low-wattage part for an HTPC is an AMD Athlon on the AM1 socket, but those can't even natively play back 4K content, just 1080p. For another 10 W, this Pentium should be able to play back 4K YouTube videos and be a lot better for Steam In-Home Streaming.

  18. #38
    Now you know why there's so much attention given to AMD's RyZen.

  19. #39
    Quote Originally Posted by Fascinate View Post
    https://www.overclock3d.net/reviews/...k_cpu_review/8

    The 7600K at stock gets a 674 in Cinebench R15. I just tested my 2500K, which also runs at 4.2 GHz, and it scored 552. How pitiful is that, lol: a 22% gain in 6 years' time.

    It may be more like 23-24%, because the stock 7600K only boosts 1 core to 4.2 GHz, but the fact that Intel could have easily released the 2500K with a 4.2 GHz boost clock just kind of shows how little they have done in 6 years; they have only increased clocks since then, with hardly any performance gain per clock.

    I was just kind of curious about the actual gains since then, because you see various reports, but Cinebench is a great evaluator of raw performance.



    That's what you get when you have zero competition in the premium consumer CPU segment.
    Why would you go all out when you can release a small incremental upgrade every year and your customers buy three chips instead of one?

  20. #40
    Quote Originally Posted by coprax View Post
    That's what you get when you have zero competition in the premium consumer CPU segment.
    Why would you go all out when you can release a small incremental upgrade every year and your customers buy three chips instead of one?
    Again, why do people think Intel is holding back?

    Is it so bad to admit that Intel has hit a wall on development? These incremental increases are more likely the best they can actually do, no matter how much money they throw at it.

    The budget they have put into socket 1151 development is already in the billions; you don't spend that kind of money while holding back. That is going all out.

    Looking at the roadmaps, Samsung is ahead of Intel in delivering 10 nm and smaller nodes, and this will be reflected in Zen+, which is slated to be 10 nm or less.
