  1. #901

  2. #902
.... so, as I said earlier, the Coffee Lake Core i3s (all), and all the i5s and i7s that aren't the two K SKUs, are DOA.

  3. #903
I doubt that.

Plus, having an iGPU is also an advantage for certain buyer segments.

I am only interested in the hexacores myself, though.

  4. #904
    Quote Originally Posted by Kagthul View Post
.... so, as I said earlier, the Coffee Lake Core i3s (all), and all the i5s and i7s that aren't the two K SKUs, are DOA.
The majority of PCs today ship with only Intel GPUs. I'm talking about something like 65-75% market share. Interestingly enough, iGPU market share has dropped about 10% in the last two years.
    Last edited by kaelleria; 2017-08-23 at 01:56 PM.

  5. #905
I should have been clearer: I'm talking purely about the DIY market here.

OEMs don't pay retail prices, obviously, and Intel has much better margins than AMD (I doubt AMD's retail prices are much higher than what OEMs pay), so for OEMs, Intel may still be price competitive. And for laptops, AMD has nothing to compete with 15 W quad-core Hyper-Threaded parts.

But for DIY/retail - DOA. Ryzen parts will trounce the i3 and the locked i5s and i7s pretty handily on price/performance, and even if you add a media GPU (like a GT 730 or GT 1030) it's STILL cheaper...

And Raven Ridge APUs are right around the corner if you CANNOT use a discrete GPU.

  6. #906
Quote Originally Posted by Kagthul View Post
Ryzen parts will trounce the i3 and the locked i5s and i7s pretty handily on price/performance
Erm...

If the Ryzen 3s did like this against the measly dual-core Kaby Lake i3s, then they will do worse against the Coffee Lake i3 quads.

And it's the same story for the new i5s, which are getting a two-core boost.


Performance or price/performance vs Ryzen is going to be pretty good on the Coffee Lake CPUs (if the prices are the same as or very close to 7th gen), unless we are talking about just getting the cheapest 16 threads possible for specific work (where the 1700 is still king).



But this thread is getting sidetracked by Ryzen talk again.

  7. #907
    Quote Originally Posted by Kagthul View Post
OEMs don't pay retail prices, obviously, and Intel has much better margins than AMD (I doubt AMD's retail prices are much higher than what OEMs pay), so for OEMs, Intel may still be price competitive. And for laptops, AMD has nothing to compete with 15 W quad-core Hyper-Threaded parts.
You may want to rethink that. While AMD doesn't have anything great on the laptop side, the margins aren't going to stay in Intel's favor.

    https://www.servethehome.com/amd-epy...-cost-savings/

In the end, to enter the market, the MCM module makes sense. One of the major topics at the conference today was MCM, with Intel's morning paper, titled "Heterogeneous Modular Platform", itself advocating its EMIB interconnect for MCM modules. AMD claims that it costs 59% as much to manufacture a 32-core MCM package as it would a monolithic die. This includes the approximately 10% area overhead for MCM-related components.
Intel is already working on similar technology, but they really need to get going, because their costs are definitely going to be higher than anything based on Zen.

I really think this generation from Intel is probably the last monolithic design we will see on the CPU side of things. It doesn't make sense to keep pushing it, because they are really at the limit of what is feasible.
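A minimal yield-model sketch of why a figure like that 59% is plausible. The defect density, wafer cost, and die areas below are illustrative assumptions, not figures from the article; only the ~10% MCM area overhead comes from the quote above.

```python
import math

def cost_per_good_die(area_mm2, defect_density=0.001, wafer_cost=5000.0,
                      wafer_area_mm2=70000.0):
    # Simple Poisson yield model: yield falls off exponentially with die area.
    # All parameter defaults here are illustrative, not real fab numbers.
    yield_rate = math.exp(-defect_density * area_mm2)
    dies_per_wafer = wafer_area_mm2 / area_mm2  # ignores edge losses
    return wafer_cost / (dies_per_wafer * yield_rate)

# Hypothetical 32-core product: one big monolithic die...
monolithic = cost_per_good_die(800.0)
# ...versus four 8-core chiplets carrying ~10% extra area for MCM interconnect.
mcm = 4 * cost_per_good_die(800.0 / 4 * 1.10)

print(f"MCM cost is {mcm / monolithic:.0%} of monolithic")  # ~62% with these inputs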
    Last edited by Gray_Matter; 2017-08-23 at 06:23 PM.

  8. #908
    Quote Originally Posted by Life-Binder View Post
Erm...

If the Ryzen 3s did like this against the measly dual-core Kaby Lake i3s, then they will do worse against the Coffee Lake i3 quads.

And it's the same story for the new i5s, which are getting a two-core boost.

Performance or price/performance vs Ryzen is going to be pretty good on the Coffee Lake CPUs (if the prices are the same as or very close to 7th gen), unless we are talking about just getting the cheapest 16 threads possible for specific work (where the 1700 is still king).

But this thread is getting sidetracked by Ryzen talk again.
It's not sidetracking; it's comparing the product in question to its competition.

And ignoring overclocking (since that R3 1200 will hit 3.8 GHz on the stock cooler) is foolish as all get out.

    - - - Updated - - -

    Quote Originally Posted by Gray_Matter View Post
You may want to rethink that. While AMD doesn't have anything great on the laptop side, the margins aren't going to stay in Intel's favor.

    https://www.servethehome.com/amd-epy...-cost-savings/
I think you're missing what I was saying. Intel's margins being better means they can afford to sell to OEMs cheaper than AMD can. AMD's retail prices are a lot closer to the prices they charge OEMs, whereas Intel's are quite a bit higher, so OEMs may still go with Intel because the cost is actually lower for them, where it is not for retail customers.

That was me saying that for pre-builts, yeah, Coffee Lake is probably not DOA.

But a $109 Ryzen 3 is still going to beat the snot out of a Coffee Lake i3 (the cheapest of which is about $140 US, and the K SKU is almost the cost of an i5) in price/performance at retail/DIY.

Same with the locked i5s, which have really low clock speeds and are more expensive across the board than the R5s, and even the locked i7s, which also have low clock speeds.

    The extra cores aren't going to help that much since the price-equivalent Ryzen parts have +2 cores over even the Coffee Lake 2-core bump.
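For concreteness, the price/performance arithmetic being claimed, with placeholder numbers (the prices are the ones from this post; the benchmark scores are invented solely to show the calculation, not real results):

```python
# price in USD; "score" is a hypothetical multithreaded benchmark result
parts = {
    "Ryzen 3 (4C/4T)":        {"price": 109, "score": 570},  # score invented
    "Coffee Lake i3 (4C/4T)": {"price": 140, "score": 640},  # score invented
}

for name, p in parts.items():
    print(f"{name}: {p['score'] / p['price']:.2f} points per dollar")
# Ryzen 3: ~5.23 points/$, i3: ~4.57 points/$ with these placeholder scores
```

The point of the claim: even if the i3 wins in absolute performance, the cheaper part can still come out ahead in points per dollar.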

    It's a bad lineup at bad prices.

    The only attractive parts are the i5 and i7 K-SKUs, and even those only for people who need single-threaded performance over total overall performance.

  9. #909
Evildeffy
    http://wccftech.com/intel-coffee-lak...nchmarks-leak/

A little more LEAKED information regarding the i3-8350K, which was tested at 4.12 GHz on a Z370 motherboard.
Provided, of course, that the leak is indeed accurate, it supposedly shows an IPC lead of 11.66% over Kaby Lake.

Having said that, a boost of such magnitude would be pretty damned big; however... it's done with the CPU-Z test, which, unfortunately, doesn't normalize scores.
It assigns more value to Part A than to Part B of its own internal tests, which means the result is absolutely useless; proof of this was pre-patch CPU-Z showing Ryzen outscoring the i7 without so much as breaking a sweat.
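To illustrate why unequal sub-test weighting matters, here is a minimal sketch. The sub-test scores and weights are invented purely to show the mechanism; CPU-Z's actual internal weighting is not public.

```python
# Two hypothetical CPUs and a two-part composite benchmark.
# CPU A is faster in Part A, CPU B equally faster in Part B.
scores = {"CPU A": {"part_a": 120, "part_b": 80},
          "CPU B": {"part_a": 80, "part_b": 120}}

def composite(cpu, weight_a, weight_b):
    s = scores[cpu]
    return weight_a * s["part_a"] + weight_b * s["part_b"]

# With equal weighting the two chips tie at 100 points.
print([composite(c, 0.5, 0.5) for c in scores])    # [100.0, 100.0]

# Weight Part A three times as heavily and CPU A "leads" by ~22%,
# even though the hardware hasn't changed at all.
print([composite(c, 0.75, 0.25) for c in scores])  # [110.0, 90.0]
```

The same caveat applies to any IPC figure derived from such a composite: dividing the score by the clock normalizes frequency, but not the benchmark's internal weighting.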

    I still doubt the validity of the CPU-Z benchmark but we'll see relatively soon.

What we can take from this, however, if it does turn out to be true, is that some special little snowflake(s) on this forum will have to accept the fact that the i3 will NOT be compatible with the 100/200-series chipsets, as it will utilize the Coffee Lake-S architecture, so "ZOMG! LOGIC PREVAILS!".
(Had to get that out there)

  10. #910
Yeah, I never bother with the CPU-Z bench; it's too unreliable and also differs between versions.

As far as CPU performance goes, I personally only ever look at Cinebench R15 (mostly single-core) and games... and for GPUs, only 3DMark's graphics score plus games.



But as long as the 8700K:
- has a ring bus
- has at least the same IPC in games as the 7700K (no regression)
- can do ~4.8 GHz all-core OC on air/AIO, no delid

I'm 100% satisfied (even if I'm not buying it and am waiting on 10/7 nm hexacores).

  11. #911
Evildeffy
    Quote Originally Posted by Life-Binder View Post
Yeah, I never bother with the CPU-Z bench; it's too unreliable and also differs between versions.

As far as CPU performance goes, I personally only ever look at Cinebench R15 (mostly single-core) and games... and for GPUs, only 3DMark's graphics score plus games.
A better indicator of CPU performance as well, IMO, but 3DMark for GPU... I avoid that... I prefer actual in-game FPS benches more than anything for that.

3DMark shows what the hardware is potentially capable of, not reality.

    Quote Originally Posted by Life-Binder View Post
But as long as the 8700K:
- has a ring bus
- has at least the same IPC in games as the 7700K (no regression)
- can do ~4.8 GHz all-core OC on air/AIO, no delid

I'm 100% satisfied (even if I'm not buying it and am waiting on 10/7 nm hexacores).
Not asking for much, are you?
I know you're hoping it's capable of that, but like I've told you in the past, which hopefully you've actually learned from, getting that whilst increasing complexity isn't a given or always possible.

Be patient; even if it maxes out at 4.5-4.6 GHz, it's still good, even if the IPC is identical to Skylake/Kaby Lake.

  12. #912
    Quote Originally Posted by Evildeffy View Post
The clocks are separated in current tech, but sure, go ahead; overclocking the BCLK according to you increases PCI Express as well then, right?
There are no separate clocks, as there are no separate buses. There is a DMI bus, which is clocked by the BCLK. The same clock is used to generate the CPU frequency, combined with the multiplier. So if you're asking whether a "PCIe bus" is affected by a BCLK overclock, then no, as there is no such thing on Intel's mainstream platform; but if the question is "does it affect the frequency of the bus that connects the PCIe devices and the CPU", the answer is yes.
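For reference, the arithmetic under discussion, as a minimal sketch (the 100 MHz base clock and the multiplier values are just typical defaults, not figures from either post):

```python
def cpu_freq_mhz(bclk_mhz, multiplier):
    # Intel core frequency = base clock (BCLK) x multiplier.
    return bclk_mhz * multiplier

stock   = cpu_freq_mhz(100.0, 42)  # 4200 MHz, e.g. a 4.2 GHz bin
bclk_oc = cpu_freq_mhz(103.0, 42)  # 4326 MHz from a 3% BCLK bump

# The disputed question is what else rides on that same reference clock:
# on Sandy Bridge through Haswell the DMI/PCIe clocks were tied to BCLK,
# so that 3% bump overclocked the buses too; from Skylake onward the
# chipset/PCIe have their own clock domain (see the AnandTech quote below),
# so only core/uncore/cache, iGPU and DRAM scale with BCLK.
print(stock, bclk_oc)  # 4200.0 4326.0
```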

    Quote Originally Posted by Evildeffy View Post
..... Alright, apparently every architecture from Bloomfield -> Westmere and Lynnfield and prior is HEDT; good to know.
Like I said before (several times now), in terms of forgiveness it's IMC -> PCIe -> iGPU, but of course to you that's not true, sure, that's fine... I mean, your own example on LGA775 points to an overclock of between 120-140 MHz being harmful, which on LGA775 is already overclocked by 20 MHz before you start.
Those are indeed HEDT, in the sense that they actually shared a platform with server chips. Also, Sandy Bridge was the generation that introduced the DMI bus, but it was also the first generation that featured truly locked processors (you couldn't really overclock LGA775 chips with multipliers, but you could use FSB overclocking). A 20 MHz FSB overclock wasn't equal to a 20 MHz BCLK overclock, as the FSB only combined the memory and CPU base clocks.

    Quote Originally Posted by Evildeffy View Post
And like your examples, 20 MHz was actually doable if you wanted to stabilize above ~235-240 MHz base clocks on the first-gen i7, and "technically but not really" on the 2nd-gen i3/i5/i7 (Lynnfield), so that's already 20%, pretty much over your own estimate... so yeah, continue on, it seems you truly do know how it all works. /sarcasm.
Stay classy, trying to apply my examples, which pertained to LGA775, to Nehalems, which were a completely different platform.

    Quote Originally Posted by Evildeffy View Post
    Relevancy of the statement? I never stated "ZOMG! GTX 1060 LEVELS BRUV!" I stated good efficiency when not pushed to all it's got.
    And Intel's iGPUs are still weaker than the crappy APUs from AMD (Kaveri/Godavari etc.) ... do you really think they'll make a worse APU than they already have?
    Good luck with that.
Sure, I don't doubt that they can make faster iGPUs. I doubt that those will be any good for the market they are primarily meant for, which is mobile, due to their traditionally horrible power efficiency, and Vega is no different in this regard. Desktop? We'll have to wait and see. Previous desktop APUs weren't that popular (somewhat due to the FM1/2 socket), despite being powerful.

    Quote Originally Posted by Evildeffy View Post
Power measurements are read at the EPS socket for specific CPU draw, not at the wall. (Here's such a tool: Click me!)
You just made every electrical engineer turn in their grave yet again with that statement, as you just stated that every electrical device in the world doesn't work like that.
Hell, Nikola Tesla and Thomas Edison should rise from the grave just to correct you on that.
And you cannot read. It's actually in your quote. Selecting either of those doesn't change anything; you're still measuring the input of the CPU VRM, not the CPU input. Also, the number of reviewers who actually measure power draw on the rail is absurdly low; most reviewers measure total system power consumption at the wall, making most of the data you have (or can get) from the EPS socket effectively irrelevant, since you cannot compare it to anything else.

    Quote Originally Posted by Evildeffy View Post
    Previously stated... history and added stupidity.
    What is cheaper? Holding and maintaining 2 different uArchs and silicons or doing so with 1 being cut down?
    I'll give you a hint... it's not the former!
Your hint doesn't apply. Look at the HEDT lineup *wink-wink*

    Quote Originally Posted by Evildeffy View Post
It's an entirely new series; the Core i3 is not called Kaby Lake R but Coffee Lake-S. Why would they shame their own marketing and PR team by making the Core i5/i7 not compatible with the 100/200 series but actually allowing their Core i3 to be compatible?
It's Coffee Lake... not Kaby Lake. It's simple logic, and logic that has remained true with Intel for literally almost 30 years.

    But no worries, they'll change just for you now.
I don't care what kind of series it is. I care about what people are going to be able to buy, because I have to build it. For now nothing is confirmed, and people who operate with rumours still stick with 200-series compatibility for those chips. I know that you're an AMD fanboy, but you can just be a fanboy; no need to hate on stuff that doesn't concern you.

  13. #913
Evildeffy
    Quote Originally Posted by Thunderball View Post
There are no separate clocks, as there are no separate buses. There is a DMI bus, which is clocked by the BCLK. The same clock is used to generate the CPU frequency, combined with the multiplier. So if you're asking whether a "PCIe bus" is affected by a BCLK overclock, then no, as there is no such thing on Intel's mainstream platform; but if the question is "does it affect the frequency of the bus that connects the PCIe devices and the CPU", the answer is yes.
So you're answering with "no" and then explaining that it's because that's the PCIe bus you're talking about...
Also no, the only things left on the BCLK right now are just the RAM and iGPU.

    To quote AnandTech:
    Quote Originally Posted by AnandTech
    For the last few generations Intel has locked down its processors in terms of the CPU multiplier such that only a handful of parts allow a full range of overclocking. CPU frequency is determined by its base frequency (or base clock, typically 100 MHz) and multiplier (20x, 32x, 40x and all in-between depending on the part). The base clock has always been ‘open’, however in Sandy Bridge, Ivy Bridge and Haswell it has been linked to other parts of the system, such as the storage or the PCIe, meaning that any overclocking beyond 103-105 MHz led to other issues such as signal degradation or data loss. The Skylake platform changes this – as we noted back in our initial Skylake launch details, the chipset and PCIe now have their own clock domains, meaning that the base frequency only affects the CPU (core, uncore, cache), integrated graphics and DRAM.
    Keep trying though.

    Quote Originally Posted by Thunderball View Post
Those are indeed HEDT, in the sense that they actually shared a platform with server chips. Also, Sandy Bridge was the generation that introduced the DMI bus, but it was also the first generation that featured truly locked processors (you couldn't really overclock LGA775 chips with multipliers, but you could use FSB overclocking). A 20 MHz FSB overclock wasn't equal to a 20 MHz BCLK overclock, as the FSB only combined the memory and CPU base clocks.
I don't know how to reply to the first part of this statement... EVERY SINGLE CHIP was like that, but now they're "special" and all HEDT because they shared a platform with server chips? ..... -.-
And as I quoted from AnandTech above... it most certainly is separate since Skylake... if you knew how to separate out the iGPU lock-in.
Funny how fast Intel responded to that by locking it all down.

    Quote Originally Posted by Thunderball View Post
Stay classy, trying to apply my examples, which pertained to LGA775, to Nehalems, which were a completely different platform.
I gave you an example of when it was done and how. You can try the same on LGA775, but results would vary; the PCIe bus has become more resilient over time, and Nehalem was only the first example; I gave you another which also allowed for it.
But if you want to be exactly like that, then I'll go back to: "PCIe overclocking was never raised nor talked about until you specifically pointed out what would happen if you did it on a platform that already had the PCIe bus speed separated from FSB overclocking".

    Quote Originally Posted by Thunderball View Post
Sure, I don't doubt that they can make faster iGPUs. I doubt that those will be any good for the market they are primarily meant for, which is mobile, due to their traditionally horrible power efficiency, and Vega is no different in this regard. Desktop? We'll have to wait and see. Previous desktop APUs weren't that popular (somewhat due to the FM1/2 socket), despite being powerful.
Funnily enough, the AMD APUs were pretty damn popular in HTPCs due to their superior multimedia capabilities.
Also, mobile and desktop APUs have already been confirmed and are on the way; if they design it to fit the bill, it'll work... it's an iGPU, not a dGPU, so it adheres to TDP specifications.

    Quote Originally Posted by Thunderball View Post
And you cannot read. It's actually in your quote. Selecting either of those doesn't change anything; you're still measuring the input of the CPU VRM, not the CPU input. Also, the number of reviewers who actually measure power draw on the rail is absurdly low; most reviewers measure total system power consumption at the wall, making most of the data you have (or can get) from the EPS socket effectively irrelevant, since you cannot compare it to anything else.
    You state I cannot read yet you're the one who posted the following:
    Quote Originally Posted by Thunderball
Please stop trolling. Just look up the TDP definition; it's a rating designed to help you select cooling capacity for any given CPU. Sure, power draw is related to the amount of heat generated by the CPU, but as all the measurements we have are done at the wall (or at best on a 12V rail), it's pretty hard to tell how much heat the CPU is dissipating, because we don't know how much it actually draws.
I can surely read and understand it, and from it I deduce that both Nikola Tesla and Thomas Edison would turn in their graves at the statement you made there, as every single electronic device has the same characteristics for TDP and power draw: if you draw 120 W by utilizing 12 V and 10 A, you're going to need cooling able to dissipate 120 W, period. Efficiency can only complement this... not reduce it.
The EPS12V line is there purely for the CPU; it is a line dedicated to CPU power draw. We don't need anything to "compare it to"... that's how it is.
It's 12 V and we measure amps from it; the difference is that we never really needed to until the FX-9590, which was atrocious and was measured by others the same way, and now the HCC dies get the same treatment, since now it's going to matter.
If you put a Ford Mustang KOTR 500 edition on a stand to measure horsepower and it tells you it's 500... do you need another Mustang KOTR 500 edition to compare it to?
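The underlying arithmetic both posts rely on, as a small sketch; the 90% VRM efficiency is an assumed, illustrative number, not a measured one:

```python
volts, amps = 12.0, 10.0
eps_power = volts * amps        # P = V * I = 120 W over the EPS12V connector

# One side of the argument: energy conservation says all 120 W going in
# ends up as heat somewhere in the system and must be dissipated.
# The other side: the EPS rail feeds the VRM first, so the CPU die itself
# receives (and dissipates) somewhat less; assuming 90% VRM efficiency:
vrm_efficiency = 0.90                     # illustrative assumption
cpu_power = eps_power * vrm_efficiency    # ~108 W dissipated at the CPU
vrm_loss  = eps_power - cpu_power         # ~12 W dissipated by the VRM

print(eps_power, cpu_power, vrm_loss)     # 120.0, ~108.0, ~12.0
```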

    Quote Originally Posted by Thunderball View Post
Your hint doesn't apply. Look at the HEDT lineup *wink-wink*
The HEDT platform only runs one uArch at a time: X79 (Sandy Bridge-E, then Ivy Bridge-E), X99 (Haswell-E, then Broadwell-E); there are never two uArchs on HEDT at the same time, and the same goes for consumer platforms.
You cannot even remotely compare them, as they are entirely different platforms; or did you somehow manage to get an LGA115X socket working on an X79/X99/X299 platform? (Kaby Lake-X is not one of them!)

    Quote Originally Posted by Thunderball View Post
I don't care what kind of series it is. I care about what people are going to be able to buy, because I have to build it. For now nothing is confirmed, and people who operate with rumours still stick with 200-series compatibility for those chips. I know that you're an AMD fanboy, but you can just be a fanboy; no need to hate on stuff that doesn't concern you.
I'm sorry logic is physically hurting your brain; there's nothing I can do about that, but you know better, right?
Show me an Intel CPU uArch where the Core i5/i7 were new CPUs on a new chipset while the Core i3 line was EXCLUSIVELY compatible with a previous series.
Go ahead... I'll wait.

But it's still hilarious that you call me an AMD fanboy, considering I have exactly one AMD E-450 APU, which serves as my home server/NAS, while the rest is all Intel.

Keep going with the fanboy comments though... who knows, you may actually start believing it when you become a full-fledged troll.

  14. #914
    If someone wants to buy a PC now, should they wait for Coffee Lake?
    "It is always darkest just before the dawn " ~Thomas Fuller

  15. #915
    Quote Originally Posted by a C e View Post
    If someone wants to buy a PC now, should they wait for Coffee Lake?
    That depends on many factors. What will the PC be used for? Do you have something currently that can hold you over? Can you live with nothing for a couple months?

  16. #916
    Quote Originally Posted by a C e View Post
    If someone wants to buy a PC now, should they wait for Coffee Lake?
Generally yes, unless you are literally without one and can't wait.

  17. #917
    Quote Originally Posted by Evildeffy View Post
    To quote AnandTech:
    Thanks for confirming my point.

    Quote Originally Posted by Evildeffy View Post
I don't know how to reply to the first part of this statement... EVERY SINGLE CHIP was like that, but now they're "special" and all HEDT because they shared a platform with server chips? ..... -.-
And as I quoted from AnandTech above... it most certainly is separate since Skylake... if you knew how to separate out the iGPU lock-in.
Funny how fast Intel responded to that by locking it all down.
HEDT is defined by a platform, not a chip. You don't need to separate out the iGPU; you can disable it. Intel locked it down because, with the SuperMicro trick, you could overclock any processor with any chipset.

    Quote Originally Posted by Evildeffy View Post
I gave you an example of when it was done and how. You can try the same on LGA775, but results would vary; the PCIe bus has become more resilient over time, and Nehalem was only the first example; I gave you another which also allowed for it.
But if you want to be exactly like that, then I'll go back to: "PCIe overclocking was never raised nor talked about until you specifically pointed out what would happen if you did it on a platform that already had the PCIe bus speed separated from FSB overclocking".
LGA775 results wouldn't vary when overclocking the PCIe bus, and Nehalems are no different, but you go into system bus overclocking for some reason (the two had nothing in common until Sandy Bridge). And if you think PCIe overclocking is irrelevant, it isn't: it's the one thing that has prevented CPU BCLK overclocking since Sandy Bridge.

    Quote Originally Posted by Evildeffy View Post
Funnily enough, the AMD APUs were pretty damn popular in HTPCs due to their superior multimedia capabilities.
Not really; it was mostly due to the fact that they were cheap, and the Celeron/Pentium iGPUs (which were comparable in price) were heavily cut down.

    Quote Originally Posted by Evildeffy View Post
I can surely read and understand it, and from it I deduce that both Nikola Tesla and Thomas Edison would turn in their graves at the statement you made there, as every single electronic device has the same characteristics for TDP and power draw: if you draw 120 W by utilizing 12 V and 10 A, you're going to need cooling able to dissipate 120 W, period. Efficiency can only complement this... not reduce it.
That's right, but that doesn't mean all of that heat is going to be dissipated by the CPU.

    Quote Originally Posted by Evildeffy View Post
The EPS12V line is there purely for the CPU; it is a line dedicated to CPU power draw. We don't need anything to "compare it to"... that's how it is.
It's 12 V and we measure amps from it; the difference is that we never really needed to until the FX-9590, which was atrocious and was measured by others the same way, and now the HCC dies get the same treatment, since now it's going to matter.
If you put a Ford Mustang KOTR 500 edition on a stand to measure horsepower and it tells you it's 500... do you need another Mustang KOTR 500 edition to compare it to?
So again, the VRM is irrelevant then? Example: people have been complaining that the X299 VRM runs hot. Of course it does; it operates at a constant 1.8-1.9 V, and while that has its advantages (less Vdroop, more stable current supply, better efficiency, especially at low loads), it also gets very hot when the current draw starts going up.

    Quote Originally Posted by Evildeffy View Post
The HEDT platform only runs one uArch at a time: X79 (Sandy Bridge-E, then Ivy Bridge-E), X99 (Haswell-E, then Broadwell-E); there are never two uArchs on HEDT at the same time, and the same goes for consumer platforms.
You cannot even remotely compare them, as they are entirely different platforms; or did you somehow manage to get an LGA115X socket working on an X79/X99/X299 platform? (Kaby Lake-X is not one of them!)
Kaby Lake-X is one of them. It doesn't have a FIVR (Skylake-X chips do; the chipset supports both) and doesn't have a quad-channel memory controller (it's not a cut-down memory controller; Skylake-X has one, and the chipset supports both).

    Quote Originally Posted by Evildeffy View Post
I'm sorry logic is physically hurting your brain; there's nothing I can do about that, but you know better, right?
Show me an Intel CPU uArch where the Core i5/i7 were new CPUs on a new chipset while the Core i3 line was EXCLUSIVELY compatible with a previous series.
Go ahead... I'll wait.
Kentsfield, Wolfdale, Yorkfield: essentially the same situation. LGA775 was all over the place, but support for the first generation of Core 2 Quads (65 nm) was very limited, and the same happened with 45 nm later: die shrinks of Pentiums/Celerons were supported all over the place, but the new Core 2 Duos and Core 2 Quads had very limited support.

  18. #918
Evildeffy
    Quote Originally Posted by Thunderball View Post
    Thanks for confirming my point.
Nice attempt at a bait-and-switch... however, you failed for those who have the ability to read, especially that last sentence.

    Quote Originally Posted by Thunderball View Post
HEDT is defined by a platform, not a chip. You don't need to separate out the iGPU; you can disable it. Intel locked it down because, with the SuperMicro trick, you could overclock any processor with any chipset.
Your statement that any chip prior to Sandy Bridge, and all AMD chips, are HEDT still stands, and it is incredibly stupid.
HEDT is defined by both platform and chip; otherwise they'd be the same. But keep trying.

    Quote Originally Posted by Thunderball View Post
LGA775 results wouldn't vary when overclocking the PCIe bus, and Nehalems are no different, but you go into system bus overclocking for some reason (the two had nothing in common until Sandy Bridge). And if you think PCIe overclocking is irrelevant, it isn't: it's the one thing that has prevented CPU BCLK overclocking since Sandy Bridge.
Knowledge cracks starting to show; keep going.
We went into overclocking the FSB (LGA775 and prior) / base clock (Nehalem/Lynnfield/Westmere) because there was no other option: there was no increased multiplier beyond what was defined as the max turbo bin for Nehalem and up.
I could repeat the reason once again, but suffice it to say I've said it more times than I can count on one hand in this thread already, and will simply state: read it.
It's answered in the same manner as the question of what the uArch differences mean, as in directly.

    Quote Originally Posted by Thunderball View Post
Not really; it was mostly due to the fact that they were cheap, and the Celeron/Pentium iGPUs (which were comparable in price) were heavily cut down.
More cracks showing. You think being cheap is the only reason? ATi/AMD (graphics only) has ALWAYS had superior multimedia and connectivity capabilities, and in an HTPC you want as much hardware-accelerated rendering as possible for the least load and heat, and maximum silence; that's why they picked it.
But sure, blame it on the cut-down iGPUs...

    Quote Originally Posted by Thunderball View Post
That's right, but that doesn't mean all of that heat is going to be dissipated by the CPU.
A sign of admission and still dodging; I have to say this one is nicely done, regardless.
But you've now defeated your own earlier points with this statement, so yeah... keep going.
Also, what does the law of energy conservation say about energy going into an isolated system?

    Quote Originally Posted by Thunderball View Post
So again, the VRM is irrelevant then? Example: people have been complaining that the X299 VRM runs hot. Of course it does; it operates at a constant 1.8-1.9 V, and while that has its advantages (less Vdroop, more stable current supply, better efficiency, especially at low loads), it also gets very hot when the current draw starts going up.
The VRM is relevant, but it's a system that reacts to the feed of the CPU, not the CPU reacting to the feed of the VRM.
It's there to support the CPU, making it not even remotely attached to the point at hand.
But if you wish to involve the VRM, sure... add it; it doesn't change anything about the topic.
You've already admitted you're plainly trolling with the power draw/cooling bit anyway.

    Quote Originally Posted by Thunderball View Post
Kaby Lake-X is one of them. It doesn't have a FIVR (Skylake-X chips do; the chipset supports both) and doesn't have a quad-channel memory controller (it's not a cut-down memory controller; Skylake-X has one, and the chipset supports both).
So you're willing to stand there and tell me that an actual LGA1151 CPU that is parked on an LGA2066 socket is truly an HEDT part, specifically designed and engineered to be capable of fully utilizing the platform (oh lookie, your own statement used against you once more), instead of simply being placed there because Intel wanted to try and steal some thunder from AMD's ThreadRipper?
Even though, through YouTubers and IIRC some written "media", the motherboard makers were completely caught by surprise by the last-minute addition of these and the 14/16/18-core CPUs?
So I'll ask you once more, and think about this before you answer:
Do you think Kaby Lake-X is HEDT, knowing full well your own earlier statement that it's the platform and not the chip?
Considering that if that's the case, full coverage of the aforementioned platform's connectivity should work.

    Quote Originally Posted by Thunderball View Post
Kentsfield, Wolfdale, Yorkfield: essentially the same situation. LGA775 was all over the place, but support for the first generation of Core 2 Quads (65 nm) was very limited, and the same happened with 45 nm later: die shrinks of Pentiums/Celerons were supported all over the place, but the new Core 2 Duos and Core 2 Quads had very limited support.
Except those had actual differences, such as running at different FSB speeds, speeds which were considered either extremely high overclocking or beyond the capability of the motherboard, resulting in possible smoke, flames, destruction, panic, etc...
The FSB only ranged from 800 to 1600; not that big a difference... only about 8 to 16 times today's base clock speeds.
Not to mention that what you've just given as an example is more like AMD's strategy: different CPU generations on the same socket.
You can use newer boards with older chips, but not vice versa.

Your example and logic still hold no point, and mine stands.
Also, where's the Core i series in this?

    - - - Updated - - -

    =================================================

    https://videocardz.com/72230/intel-c...rock-z370-pro4
Another Intel Core i7-8700K leak.

Though I'll still reserve judgement until I see the full official specs and reviews, there are two things to note from this:
4.3 GHz is still the maximum mentioned speed; had it been 4.7 GHz, it surely would've been mentioned as a selling point.

Power on this is already at the 105 W mark, which may mean nothing at all... but it does mean that SiSoft already expects a higher TDP than the one mentioned by Intel.

So they are assuming TDP shenanigans as well.

  19. #919
    Quote Originally Posted by Evildeffy View Post
Your statement that any chip prior to Sandy Bridge, and all AMD chips, are HEDT still stands, and it is incredibly stupid.
HEDT is defined by both platform and chip; otherwise they'd be the same. But keep trying.
HEDT is branding, not a classification. By your incredibly non-stupid statement, Threadripper 1900X and 7740X/7640X systems are not HEDT. Nice, keep going.

    Quote Originally Posted by Evildeffy View Post
Knowledge cracks starting to show; keep going.
We went into overclocking the FSB (LGA775 and prior) / base clock (Nehalem/Lynnfield/Westmere) because there was no other option: there was no increased multiplier beyond what was defined as the max turbo bin for Nehalem and up.
I could repeat the reason once again, but suffice it to say I've said it more times than I can count on one hand in this thread already, and will simply state: read it.
It's answered in the same manner as the question of what the uArch differences mean, as in directly.
I don't want to accuse you of something you might not have meant, but of course you could change the multiplier on 775 and 1156/1366; increasing the multiplier was an exclusive feature of the Extreme processors. If that's not what you meant, then I don't get what you're saying at all.

    Quote Originally Posted by Evildeffy View Post
More cracks showing. You think being cheap is the only reason? ATi/AMD (graphics only) has ALWAYS had superior multimedia and connectivity capabilities, and in an HTPC you want as much hardware-accelerated rendering as possible for the least load and heat, and maximum silence; that's why they picked it.
But sure, blame it on the cut-down iGPUs...
Yep: 4-core CPUs with a decent low-end GPU for $100-150 and a tame TDP (45-85 W). You could also build a decent Intel HTPC, but it would be an i5, which starts around $200, and the iGPU would be weaker. Unfortunately for AMD, I've heard that Intel's NUCs are getting quite big right now (and I honestly have no idea why, apart from maybe Netflix 4K support, but FFS, those use laptop hardware).

    Quote Originally Posted by Evildeffy View Post
A sign of admission and still dodging; I have to say this one is nicely done, regardless.
But you've now defeated your own earlier points with this statement, so yeah... keep going.
Also, what does the law of energy conservation say about energy going into an isolated system?
An isolated system is only relevant for power draw over the CPU power plane. What am I dodging again? That the CPU gets all the power that goes in over the EPS 12V rail? Who's dodging now?

    Quote Originally Posted by Evildeffy View Post
But if you wish to involve the VRM, sure... add it; it doesn't change anything about the topic.
You've already admitted you're plainly trolling with the power draw/cooling bit anyway.
Sure it doesn't. It doesn't consume any power and doesn't dissipate any heat. I didn't say that the heat dissipated by the CPU can be measured by the power draw over the 12V EPS rail; YOU did.

    Quote Originally Posted by Evildeffy View Post
So you're willing to stand there and tell me that an actual LGA1151 CPU that is parked on an LGA2066 socket is truly an HEDT part, specifically designed and engineered to be capable of fully utilizing the platform (oh lookie, your own statement used against you once more), instead of simply being placed there because Intel wanted to try and steal some thunder from AMD's ThreadRipper?
No, I'm not; stop dodging. It's two different uArchs on one platform, which, according to you, doesn't happen. HEDT is, again, not a classification; it's branding, created, by the way, by Intel themselves.

    Quote Originally Posted by Evildeffy View Post
Except those had actual differences, such as running at different FSB speeds, speeds which were considered either extremely high overclocking or beyond the capability of the motherboard, resulting in possible smoke, flames, destruction, panic, etc...
The FSB only ranged from 800 to 1600; not that big a difference... only about 8 to 16 times today's base clock speeds.
Not to mention that what you've just given as an example is more like AMD's strategy: different CPU generations on the same socket.
You can use newer boards with older chips, but not vice versa.

Your example and logic still hold no point, and mine stands.
Also, where's the Core i series in this?
The Core series is just branding, or did Intel magically start using different physical principles in their CPUs?

Nice to see that you know nothing about LGA775; you're probably still a young guy, good for you. All mainstream Core 2 chips had an 800-1333 FSB, all of those variants were introduced with the first model line, and there were no FSB speed changes later. It's not like LGA775 didn't have multipliers, and clock speeds have only gone down since Pentium 4. There were de facto refreshes of S478 chipsets that only supported the Pentium D dual-cores (two one-core dies on one substrate), so there was no question why those wouldn't support new chips; but it's the same story right now, with the "Kaby Lake Refresh".

  20. #920
Evildeffy
    Quote Originally Posted by Thunderball View Post
HEDT is branding, not a classification. By your incredibly non-stupid statement, Threadripper 1900X and 7740X/7640X systems are not HEDT. Nice, keep going.
HEDT is branding but not a classification? Intel may have started that trend, but it is most certainly a classification.
The ThreadRipper 1900X shouldn't be HEDT, no, just like the 7740X/7640X... the difference here, however, is the fact that even that 1900X delivers a full 64 PCIe lanes and quad-channel memory, i.e. nothing has been skimped on and it isn't tacked on as an afterthought.
Indeed though... keep trying.

    Quote Originally Posted by Thunderball View Post
I don't want to accuse you of something you might not have meant, but of course you could change the multiplier on 775 and 1156/1366; increasing the multiplier was an exclusive feature of the Extreme processors. If that's not what you meant, then I don't get what you're saying at all.
OK, perhaps I should've phrased this differently:
That is not possible on any chip barring an eXtreme model, which back then was an even rarer item to see than now; you hear people whine about paying 1000 USD for a consumer chip... how about 1200 USD for a consumer chip back then?
99.99% of people out there bought the cheaper part and overclocked it if they needed more speed, and on those the multipliers only went as high as stock.
Meaning multiplier overclocking was not going to happen; it was FSB OC only.
Or did you somehow manage to find a way to change a Q6600 (picking a random example) into a QX6800 (because the QX6850 was 1333 FSB and didn't work on most boards) so we could enjoy multiplier OC? (Not to mention multiplier OC was generally way more unstable than FSB OC.)

Remember, the topic was OCing; technically you can include the QX editions, but those were rarer than me being struck by lightning in the face RIGHT NOW by Zeus himself from Mount Olympus.
But perhaps that was my mistake for not saying general overclocking instead of 0.001% multiplier-instability overclocking; I'd assumed you'd not take that path of nitpicking... I have only myself to blame, I guess.

    Quote Originally Posted by Thunderball View Post
Yep: 4-core CPUs with a decent low-end GPU for $100-150 and a tame TDP (45-85 W). You could also build a decent Intel HTPC, but it would be an i5, which starts around $200, and the iGPU would be weaker. Unfortunately for AMD, I've heard that Intel's NUCs are getting quite big right now (and I honestly have no idea why, apart from maybe Netflix 4K support, but FFS, those use laptop hardware).
OK... I'm sure it has nothing to do with the superior multimedia capabilities of AMD graphics versus Intel... nope.
It's all just down to Intel being a dick and cutting down iGPU power. -.-

    Quote Originally Posted by Thunderball View Post
An isolated system is only relevant for power draw over the CPU power plane. What am I dodging again? That the CPU gets all the power that goes in over the EPS 12V rail? Who's dodging now?
It's fine; I've repeated myself enough and have given you plenty of correct information. It's not like every single electronic device in the world abides by this principle, right? ... Oh wait.

    Quote Originally Posted by Thunderball View Post
Sure it doesn't. It doesn't consume any power and doesn't dissipate any heat. I didn't say that the heat dissipated by the CPU can be measured by the power draw over the 12V EPS rail; YOU did.
You're honestly not capable of comprehending the previously given information, are you?
Or are you trolling now? I'd honestly like to know whether I should be attempting anything at all with you right now, as talking to a wall may actually yield more productive results.

    Quote Originally Posted by Thunderball View Post
No, I'm not; stop dodging. It's two different uArchs on one platform, which, according to you, doesn't happen. HEDT is, again, not a classification; it's branding, created, by the way, by Intel themselves.
As I stated before, which you seem not to understand: was the 7740X/7640X designed for the X299 platform? Can the CPU utilize the same resources? Does it have the ability to drive the same things as its bigger, say, 7800X sibling?
No? It was an afterthought just stuck in there by Intel? Oh, it's literally a 7700K with the iGPU fused off, soldered to the right pins on an LGA2066 PCB? Oh... no, no, it is not HEDT; it is nothing short of a gimmick. But if you truly want to argue semantics whilst knowing full well exactly what was implied (even specifically implied), then go ahead, call the 7640X/7740X HEDT. I won't correct you, even if it's literally a gimmick according to the entire tech industry and the motherboard manufacturers.

    Quote Originally Posted by Thunderball View Post
The Core series is just branding, or did Intel magically start using different physical principles in their CPUs?
Re-read the original question; perhaps you'll learn what the question was.
But even if you do, or even did, I doubt you'd answer properly and truthfully.

    Quote Originally Posted by Thunderball View Post
Nice to see that you know nothing about LGA775; you're probably still a young guy, good for you. All mainstream Core 2 chips had an 800-1333 FSB, all of those variants were introduced with the first model line, and there were no FSB speed changes later. It's not like LGA775 didn't have multipliers, and clock speeds have only gone down since Pentium 4. There were de facto refreshes of S478 chipsets that only supported the Pentium D dual-cores (two one-core dies on one substrate), so there was no question why those wouldn't support new chips; but it's the same story right now, with the "Kaby Lake Refresh".
Oh, are we on about age now? Nice, childish resort there.
I am very likely older than you are and, as we can clearly see, considerably more experienced and knowledgeable.

    But please ... do go on about my age, it's hilarious.

Oh, by the way:
Kentsfield: Core 2 Quad Q6600 - 1066 FSB (technically 266 MHz, but whatever... stupid MT/s)
Kentsfield: Core 2 Quad QX6850 - 1333 FSB (technically 333 MHz, but whatever... again)

Yorkfield: Core 2 Quad Q8000/Q9000 series - all 1333 FSB
Yorkfield: Core 2 Quad QX9770/QX9775 - 1600 FSB (technically 400 MHz, but whatever... again again)

Wolfdale: Core 2 Duo E7000 series - all 1066 FSB
Wolfdale: Core 2 Duo E8000 series - all 1333 FSB

There's also Conroe, which covered everything from 800 to 1333 FSB... this spans from January 2006 up to and including March 2010, including its motherboards.
The 1333 FSB parts were tacked on later, especially the 1600 FSB, since the earlier board revisions were for the Pentium D on 775 etc. and used an even lower FSB.
Not all motherboards supported these speeds either, and if you did get one that supported them early on, you had buggy operation.
They were not the de facto speeds, and the multipliers were locked regardless.
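The MT/s-versus-MHz grumbling above comes down to the LGA775 front-side bus being quad-pumped (four transfers per clock cycle); a quick conversion sketch:

```python
def fsb_mts(base_clock_mhz, transfers_per_clock=4):
    # LGA775's AGTL+ front-side bus moves data four times per clock,
    # so a marketed "FSB 1066" is really a ~266 MHz physical clock.
    return base_clock_mhz * transfers_per_clock

for clock in (200, 266, 333, 400):
    print(f"{clock} MHz base clock -> FSB {fsb_mts(clock)} MT/s")
# 200 -> 800, 266 -> 1064 (marketed 1066), 333 -> 1332 (1333), 400 -> 1600
```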

There was also an incredible crapton of chipsets (nVidia nForce4 SLI, anyone? Who loved that? ADMIT IT!) which could and couldn't support those speeds; hell, even today the cheapskates who buy Core 2-era Xeons have to watch which FSB they buy in order for their mobo to work with it, just to extend the life of their servers a bit.

But no... I don't know anything about LGA775 or anything, nor about the fact that you can mod some Xeons to LGA775 from their original LGA771 to get higher performance out of the chip, or anything.
I definitely also didn't work with so many Intel chips that I can no longer remember the number of them, including overclocking a Q6600 from its stock 2.4 GHz to 3 GHz on an ASUS P5Q Deluxe or so.
I also don't have a literal stack of 7 Pentium E2210s right next to me here, and an 8th which is dead due to a PSU blow-up, or anything.
I also don't remember the time when VIA Hyperion drivers existed, or SiS... or hell, even when Chaintech existed as a mobo manufacturer.

    Nor does the LGA775 wiki page state this either:
    Quote Originally Posted by Wikipedia
    Compatibility is quite variable, as earlier chipsets (Intel 915 and below) tend to support only single core Netburst Pentium 4 and Celeron CPUs at an FSB of 533/800 MT/s. Intermediate chipsets (e.g. Intel 945) commonly support both single core Pentium 4-based CPUs as well as dual core Pentium D processors. Some 945 chipset-based motherboards could be given a BIOS upgrade to support 65nm Core-based CPUs. For other chipsets, it also varies, as LGA 775 CPU support is a complicated mixture of chipset capability, voltage regulator limitations and BIOS support.
Nope, those are all a figment of my imagination and don't even remotely explain why your comparison is not even remotely valid.
(Little bit of sarcasm added there, hope you noticed!)

But I'll return to state the following: read the original question and answer it, since you're so skilfully trying to dodge it.
That will doubly invalidate your point entirely.
