Right, like I said, if you're happy with it, ride that 9600K down in flames. Maybe OC it if you haven't (depending on what cooler you have). My 8600K does 4.8GHz all-core with an undervolt, and I can push it to 5GHz with a mild overvolt.
I'm currently using a Corsair H100i Pro liquid cooling system.
I'd like to try overclocking, but I wouldn't even know where to begin. I've googled a bit and there are so many guides with conflicting information... it's all pretty confusing, so any pointers would be much appreciated.
I believe you have my stapler...
First off, how is it 'meaningless' when it's factually contradicting what you said? To directly quote an AMD interview on Tom's Hardware with AMD VP David McAfee: 'The official answer from AMD would be these 300-series motherboards are not a supported configuration in our engineering validation coverage matrix. There are potential issues that could be in there that we're simply not aware of at this point in time.' He then said later in the same piece: 'It's certainly something that we're not just leaving on the side and ignoring; we definitely understand there's a vocal part of the community that's passionate about this. And we want to try to do the right thing. So we're still working through it.'
As far as INTEL supporting chipsets longer than they usually would: I gave a somewhat compelling argument as to why they might, never said they would. Usually it's best to go with what is normally the case, but with the way things are right now, I can't see them having much success with the 'old way' as a long-term strategy, considering the current global situation.
As to your, again, factually incorrect statement regarding power consumption: a stock, out-of-the-box 5950X from AMD pulls ~120 watts, while a stock, out-of-the-box 12900K pulls ~240-245 watts, as can be seen in the Gamers Nexus review video of the 12900K, where an overclocked 5950X for comparison drew 'only' 10 more watts than the stock 12900K. Since the 12900KS hasn't been released yet, nobody can say what kind of ridiculous power draw it will have, but it's guaranteed to be higher than that, considering INTEL are touting a 5.5GHz single-core boost clock. And lastly on this topic, it's a bit disingenuous to compare a 'bottom of the barrel' CPU (12400), which is designed around power efficiency and light workloads, with a mid-to-high-tier CPU (5800X), which is designed around performance more than raw power efficiency. For reference, the stock out-of-the-box 5800X is rated at 65 watts; the 12400 is rated similarly and is nowhere near the actual performance of a 5800X. Your INTEL 'fanboi' is showing here, trying to claim otherwise.
I wasn't aware that merely being ahead of the similarly matched product by 2-3% on average for gaming, up to 12% on titles that favour 'your' platform, was considered 'crushing the competition'. Furthermore, AMD utterly shits all over INTEL in productivity metrics as well as HEDT and server hardware; just look at encoding and decoding metrics for the AMD 5000 series vs Alder Lake, it's not even remotely close for INTEL. And like I said in my post, the only way INTEL have 'clawed back' the gaming crown is by pushing their geriatric manufacturing process to its limits. Regardless of semantics or technicalities, the smaller process node used by TSMC, and by extension AMD, has allowed them much better overall performance for a number of years now; that is an indisputable fact.

Furthermore, I only recently updated my INTEL-based system to an AMD system. I'm in no way a 'fanboi' as you seem to be claiming; I'm just someone who sees what is good and what is bad and makes a judgement call based on that information. I suppose I'm in the minority, as you claim, when it comes to understanding hardware spec sheets and what the data means. It was due to that information that I went with a 5600X over a 5800X-based system: I worked out, over the course of a year, based on what I use my PC for and my current energy bills, that I would be using too much electricity with a higher-power-draw system. Because of that I was able to drop some of the other parts I had planned to buy and get some better-quality pieces with the savings. While this may be anomalous at this particular point in time, it's something that's certainly changing as energy prices go up, where I live by as much as 50%. Because you stated pricing in dollars I'm assuming you're American, so many of these points will be lost on you, as your bubble protects you from many of the issues plaguing the rest of the world right now; it's my fault for assuming you understood these issues.
It's also laughable that you're trying to compare the M1 chip to anything produced by AMD (TSMC) and INTEL; it's like comparing a family car and a sports car and expecting a fair comparison. It's just idiotic.
As a final statement from me: I agree, nobody knows yet how AM5 and Zen 4 are gonna perform, but if it's 'phase 2' of the Zen architecture, looking at how utterly massive the gains were from the 1000 series to the 5000 series, and if the next set of chips scale the same way, it's safe to assume that AMD are still gonna be market leaders for years to come. Yes, RIGHT NOW DDR5 is nowhere near worth the price being asked compared to DDR4 in terms of performance, but the exact same thing happened in the DDR3-to-DDR4 transition; it's a brand new technology that still has growing pains, and it won't be worth the cost for the average consumer until at least 12-18 months from now, assuming similar maturation to previous generations. As for what was reported at CES, NOBODY reported anything groundbreaking, whether that be INTEL, NVIDIA or AMD; hell, for a three-quarter-hour-long presentation, NVIDIA spent less than a minute total on TWO GPU launch SKUs. And historically, AMD has never 'raved' about stuff until they can validate it, ever since they were left with egg on their face way back in the early 2010s. So it's just gonna have to be a wait-and-see situation.
- - - Updated - - -
If you have no experience with OC'ing, then I highly recommend looking up YouTube videos from the bigger tech reviewers, going back to their older videos that cover your specific hardware (if they have them), because 'modern' OC guides aren't intended for older systems. Try looking back at some older articles for your specific hardware and go from there.
Because it doesn't. It's you reading something and then somehow coming up with a meaning that isn't there. Like, literal delusion.
Quote: "and to direct quote an AMD interview on Tom's Hardware talking with AMD VP David McAfee: 'The official answer from AMD would be these 300-series motherboards are not a supported configuration...'"

Three years later. Still working on it. Yeah, OK. If/when it materializes, cool. Not holding my breath, and it's STILL not a selling point.
I don't understand why you simply CANNOT seem to understand that most people do not ever do a drop-in CPU replacement. Not ever. Never. Sub-1%. So how long the socket is supported is utterly meaningless, because people simply don't upgrade while it's relevant.
Quote: "as far as INTEL supporting chipsets longer than they usually would i gave a somewhat compelling argument as to why they might, never said they would..."

Not even sure what you're drooling about here, as I claimed absolutely NOTHING about Intel supporting sockets longer. Sockets last two generations, for the most part, going ALLLLLLL the way back to the 2nd Gen Core i-series parts (2500K, etc). That's all I said they were going to do this time. Raptor Lake is already confirmed to also be socket 1700, and 600-series motherboards are already confirmed for support (via firmware update, likely) of Raptor Lake. That's two generations. Exactly like it has been for close to 15 years.
Quote: "as to your, again, factually incorrect..."

You seem to have literally no concept of what the word "factually" means, since you have yet to use it correctly.
Quote: "...statement regarding power consumption, a stock out of the box 5950X from AMD pulls ~120 watts of power..."

lolno. That's just like saying a stock, out-of-the-box 10600K pulls 65W. AMD is just as untruthful with their real TDP as Intel is.
Quote: "...a stock out of the box 12900k pulls ~240-245 watts of power..."

Did you actually WATCH the video? Because it doesn't pull near that stock. That was overclocked. And there's almost zero point to overclocking, because you get like 200MHz all-core, maybe on a good day, and generally have to overvolt the bejezus out of it.
Quote: "...as can be seen in the Gamers Nexus review video of the 12900k ... your INTEL 'fanboi' is showing here trying to claim otherwise."

.... watch dem goalposts FLLLLLLYYYYYYY
It is absolutely fair to compare any part to any other when the performance is the same or better. And "the 5800X isn't designed for power efficiency"... neither is the 12400. There's literally ZERO difference in silicon between the cores in the 5600 and 5800, just more of them. There's ZERO difference between the P-cores in the 12400 and the 12900K. They're not designed differently.
What kind of clownshoes shit goes on in your brain?
I might add, you're also wrong, as the 12400 keeps pace with, and outperforms, the 5800 even in productivity tasks. For less than half the price.
Quote: "i wasn't aware that 'crushing the competition' was merely being ahead of the similar matched product by 2-3% on average for gaming..."

Holy shit, are you changing the goalposts.
I'm talking about a 12400 equaling or beating the 5800. The actual price-equivalent CPU to a 5800 (a 12700) is 30+% faster.
Quote: "...furthermore, AMD utterly shits all over INTEL for both productivity metrics..."

Uh... no. A 12700 (not even the top-end SKU) beats every mainstream AMD processor up to and including the 5950X by over 10% in every productivity benchmark.
Quote: "...as well as HEDT..."

No, because Intel actually has new HEDT silicon and AMD does not, still being on 3-year-old Threadripper parts.
Quote: "...and server hardware, i mean just look at encoding and decoding metrics for AMD 5000 series vs alder lake, it's not even remotely close for INTEL..."

.... what? How do you look at charts that show Intel clearly dominating and then claim it's the other way around? Are you, like, word-dyslexic or something? You swap the names of the Intel parts for the AMD ones?
Look at all these benchmarks where AMD dominates Intel:
https://www.techheuristic.com/intel-...lose-to-5900x/
Oh, wait, not a single one. Whoops.
The server one, though, is true enough, mostly. Intel CPUs are still better for certain types of server processes (due to additional instruction sets that don't exist on Epyc), but that's rather niche.
Quote: "...and like i said in my post the only way INTEL have 'clawed back' the gaming crown is by pushing their geriatric manufacturing process to its limits..."

Brand new, entirely different process and core architecture that is more transistor-dense than TSMC's upcoming 5nm process... but sure, "geriatric".
Quote: "...regardless of semantic or technicalities..."

It's neither; it's outright dishonesty on TSMC's part.
Quote: "...the smaller process node used by TSMC and by extension AMD has allowed them much better overall performance for a number of years now..."

No, they haven't. Rocket Lake kept up pretty well with even the 5000-series chips, neck and neck... at 14nm. Benchmarks are a real thing that have actually been done, and they don't support what you're saying at all. More power efficient? Definitely, because Intel's Skylake/14nm process was a power pig. But "overall performance"... nope. Neck and neck.
Quote: "...that is an indisputable fact..."

Your delusions, which are quite literally NOT supported by any of the testing done or benchmarks recorded, are not fact, much less indisputable.
Quote: "...furthermore i only recently updated my INTEL based system to an AMD system..."

Is there some reason you're capitalizing Intel (which is a proper name, not an acronym), like those mental deficients who insist on calling Macs MACs? I mean, it's kinda funny, tbh.
Quote: "...it was due to this information that i went with a 5600x over a 5800x based system because i worked out over the course of a year ... i would be using too much electricity with a higher power draw system..."

You need to check your math. Even at full potential draw, the cost difference would be about $9, and that's if your computer was on 24/7 and the CPU was pinned the entire time.
Cool story, though. Definitely doesn't make you look like a clown. And as for "understanding what the specs mean": if you somehow believe that Zen 3 is remotely on par with Alder Lake... yeah, apparently not.
I'd like to point out that your choice of a Zen 3 chip over a Rocket Lake (11-series) or 10-series chip was a perfectly fine one; at the time, Intel's chips weren't blowing up anyone's skirts, and no one knew what Alder Lake held in store.
But the situation that obtained when you built does NOT obtain now. Hell, the 4-core/8-thread i3 12100 just blew the doors off the 5600G (which IS slightly slower than the X) in both gaming and productivity.
Not sure what delusion-fueled dreams you've had telling you otherwise, but Zen 3 is NOT better than Alder Lake at anything other than power efficiency in the top 2 SKUs. And at those SKUs, the 12900K utterly crushes the 5950X, even when it isn't OC'ed.
Quote: "...it's something that's certainly changing as prices for energy go up, where i live by as much as a 50% increase ... your bubble protects you from many of these issues that are plaguing the rest of the world right now..."

... Energy prices in nearly every first-world country are lower on average than in the US. I can't speak to 2nd-world countries (China, Russia, etc). Even if you were paying 5x as much as I do (unlikely), it'd be $50 a year running balls-out 24/7.
And if it IS 5x as expensive... seriously look into some solar panels and an inverter to defray some of that cost. They'd pay for themselves in a couple of months.
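For what it's worth, this electricity-cost arithmetic is easy to sanity-check yourself. Here's a minimal sketch; the wattage delta, daily hours, and prices are made-up example figures, not measurements of any specific CPU:

```python
# Rough annual running cost of extra CPU power draw.
# All inputs are hypothetical examples, not measured values.

def annual_cost(extra_watts, hours_per_day, price_per_kwh):
    """Yearly cost of drawing extra_watts for hours_per_day, every day."""
    kwh_per_year = extra_watts / 1000 * hours_per_day * 365
    return kwh_per_year * price_per_kwh

# A hypothetical ~40 W delta between two CPUs, 4 h/day under load:
print(annual_cost(40, 4, 0.15))  # $0.15/kWh -> about $8.76/year
print(annual_cost(40, 4, 0.75))  # a 5x-higher rate -> about $43.80/year
```

Plug in your own delta, usage hours, and local rate; the point either way is that the yearly difference is small unless the machine is pinned around the clock at very high rates.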
Quote: "...it's also laughable you trying to compare the M1 chip to anything produced by AMD (TSMC)..."

TSMC is not AMD. TSMC just makes the chips, based on AMD's design.
TSMC is not Apple. TSMC just makes the chips, based on Apple's design.
BOTH currently use TSMC's "7nm" process, and the M1 utterly obliterates the Zen 3 chips in its core-count range, both in outright performance AND in power draw. The M1 Pro and Max similarly dominate core-count-equivalent Intel and AMD chips. (But both AMD and Intel still pull ahead in the 16-core+ zone, for most tasks, albeit at literally 6x the power draw.)
Quote: "...and INTEL, it's like trying to compare a family car and a sports car and expecting a fair comparison, it's just idiotic."

Well, we agree that you're an idiot, at least.
It's a perfectly valid comparison, since all three CPUs feature in the same exact markets (laptops, desktops, and production machines). It's not the family car's fault that it outperforms your needlessly expensive sports car.
Quote: "...looking at how utterly massive the gains were from 1000 series to 5000 series..."

Which only happened because AMD was still trying to get their memory controllers to function properly. Once they did, generation-on-generation improvements fell right down to industry norms (about 8-15%). Not that that's a bad thing.
Quote: "...if the next set of chips are gonna be performing in the same scaling pattern..."

They won't, because AMD already fixed the memory controller issues, which is where the majority of the giant jump between Zen 1/2 and Zen 3 came from.
Quote: "...it's safe to assume that AMD are still gonna be market leaders for years to come..."

Right, because Intel is just sitting still. Raptor Lake is already confirmed, via engineering samples logged on benchmark sites, to be about a 10% IPC gain, with more cores across the board: the 13900 engineering sample has 14 P-cores (28 threads) and 8 E-cores, and even the i3s get E-cores, which they currently don't have in 12th gen. Again, I'm not saying Zen 4 won't be impressive. We simply don't know. But we already know how giant a leap Alder Lake was for Intel (~20+% IPC alone), and there are already samples of Raptor Lake out there.
Quote: "...and historically AMD has never 'raved' about stuff until they can validate it after they were left with egg on their face way back in the early 2010's..."

Uhh... no. Since Zen 1 DIDN'T implode in their faces, their keynotes of late have been VERY positive spin, crowing about how awesome their stuff is going to be. Even when they have been overpromising and massively under-delivering (the 6500 XT being weaker than a YEARS-old RX 480, lulz; not that nVidia was a lot better with the 3050, which is basically a very marginally faster 1660 Ti/SUPER plus Tensor and RT cores. I guess it can at least benefit from DLSS.). And when they had good numbers to show, they've shown them. They showed NOTHING about Zen 4 other than "yeah, it's coming". And there's a SINGLE Zen 3 3D SKU coming... in June. Seems like that was either a dud or (equally likely, IMO) supply issues are preventing them from taking real advantage of it.
Anywho, I only ended up replying to you these last two times because other people had quoted you.
Since you're normally on ignore, barring another quote, I'll be going back to ignoring your drivel.
- - - Updated - - -
It'll have to be phone pics (I'm not sure if there's a way to capture screenshots of an EFI anyway), but I can show you where the settings are in the ASUS Z390 BIOS.
My rig is a Strix Z390i - your EFI/BIOS should be nearly identical to mine.
I'll post those up here a little later when I finally adjourn to my office to do some gaming tonight.
- - - Updated - - -
Okay, these are just going to be links, because it's just easier for me to link to Google Photos than to rehost them somewhere else:
First off, when you enter the EFI/BIOS (I'll just use EFI from here on out; it's shorter and more correct these days), you'll see this screen:
https://photos.app.goo.gl/gzhibw9t3VJVJRX46
That's the basic EZ Mode EFI screen. Notice the "Advanced" button down in the bottom right. See here:
https://photos.app.goo.gl/fByA75Qg6X1Ed9358
Click that. It should take you to the main "Advanced" screen:
https://photos.app.goo.gl/ADUab2dnFPXKVUWZ6
You want to click the AI Tweaker heading there at the top.
It'll land you here:
https://photos.app.goo.gl/kC5qcC4LnkXvKAeS7
My cursor is currently on the XMP settings in that pic. Enable your XMP profile if you haven't previously.
If you feel like it later, there's every likelihood your existing 2666 RAM can be manually overclocked higher. Gains won't be massive, but you can play with it later if you want.
Notice "CPU Core Ratio" at the bottom, highlighted in red.
The next shot will show that in more detail (just scroll down to that):
https://photos.app.goo.gl/vRMiTy4NXB7J95k39
Enable "Sync All Cores" as i have in the screenshot.
Set the ratio to what you want the speed to be in the first entry ("1 Core Ratio Limit"); since you're syncing them, this is the only one you need to change.
It's multiples of 100MHz, so the 48 you see here is 4,800MHz/4.8GHz.
Start at, say, 4.6 or 4.7 the first time.
If you scroll down further, you'll see this:
https://photos.app.goo.gl/YPsz27PPmsaeCALQ6
This is where you adjust voltage. I only undervolt (run at lower than stock voltage) my 8600K because I have a small, constrained case (Mini-ITX Phanteks Evolv Shift) and a single 120mm AIO to cool it. With your larger 240mm cooler, you shouldn't need to reduce the voltage to keep temps in check. Only worry about this if you can't get the CPU to be stable, which means you might need to overvolt it.
Like I said, for now, just leave it stock and try the CPU all-core at 4.6 or 4.7GHz (46 or 47 multiplier).
Hit the Exit area, and Save Changes and Exit.
Download Prime 95:
https://www.mersenne.org/download/
And run the torture test for a while. Let it really run. If it crashes, you can try fiddling with the voltage: go up in very small steps (.02V or so) each time, re-run the test, and so on, until you get it stable.
If it is stable at 4.6 or wherever you start, bump it up and run the test again. If you get to 4.8-5.1, you're fairly well golden, unless you really want to get into altering RAM timings and such. And if you get to 4.8 or higher at stock voltage but it's unstable, try adding voltage to get it stable (just like before, in small increments).
My experience with my 8600K was basically: set it at 48, it ran fine but a little hot... then I lowered the voltage and it's been stable ever since. It took all of a few minutes.
It should be the same for you. The 8600K/9600K were very good overclockers.
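The trial-and-error loop above (start at a modest multiplier, stress test, bump the multiplier when stable, nudge the voltage in small steps when not) can be sketched in code. This is purely illustrative: `run_stress_test` is a stand-in for manually running Prime95 and watching for crashes, not a real API, and the multiplier/voltage limits are hypothetical numbers you'd pick for your own chip and cooling:

```python
BCLK_MHZ = 100  # core clock = multiplier x 100 MHz base clock

def tune(run_stress_test, start_mult=46, max_mult=51,
         stock_vcore=1.20, max_vcore=1.35, vstep=0.02):
    """Return the highest stable (multiplier, vcore) pair found."""
    mult, vcore = start_mult, stock_vcore
    best = None
    while mult <= max_mult:
        if run_stress_test(mult, vcore):
            best = (mult, vcore)   # stable: record it, try one notch higher
            mult += 1
        elif vcore + vstep <= max_vcore:
            vcore += vstep         # unstable: add a small bump of voltage, retry
        else:
            break                  # out of safe voltage headroom: stop
    return best                    # e.g. (48, 1.20) means 4.8 GHz at stock volts
```

Same logic whether you stop at whatever runs at stock voltage or trade a little extra vcore for the last couple hundred MHz; the key is changing one thing at a time and re-testing.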
Last edited by Kagthul; 2022-01-11 at 04:39 AM.
I have never heard of this website 'techheuristic' before. The first set of images is taken from a different website, and the only hardware listed for their benchmarks is whatever CPU is named plus a 6900 XT GPU; nowhere does it state the actual test setup used, so the results are quite literally meaningless, to coin your wording. Furthermore, a single page of Cinebench R23 is NOT representative of real performance, nor is it even remotely helpful when comparing CPUs; it's used as a one-off benchmark for historical reasons. The old R15 was completely removed from any and all comparisons a long time ago, and it's only a matter of time before this iteration is axed too. So again, I'm not really understanding how you came to your conclusions based on such inaccurate information.
'DiD YoU eVeN WaTcH tHe ViDeO?' Yes, and you clearly cannot read. Whenever GN uses the word STOCK next to a power consumption slide in a CPU comparison, that's what it drew for them out of the box with no alterations. Meaning, based on the GN testing alone (since it's the video I referenced): his power draw chart shows the 12700K pulling 158.4 watts STOCK and the 12900K pulling 243.6 watts STOCK, while the 5950X OC shown in the slide at 1.343V SET / 1.275V GET drew 253.2 watts total. GN NEVER overclocks a new CPU when doing reviews. Clearly you have never watched any of his videos, or you would know this, so where you're getting this notion that the INTEL CPUs tested were 'overclocked', I will never know.
I'm not even going to bother addressing the other bullshit spewed in here, because quite honestly I'm going to bed and I'm too tired to keep up with your utterly moronic mental gymnastics. The only data you linked is not only flawed, it's also wrong on its own website: on one of the slides they pulled from techspot.com, they have annotated it with the comment:
'In Corona renderer, the Core i7-12700KF has defeated 5800X by a considerable margin. Here, 12700KF delivers 30% more performance over 5800X and is only 9% slower than Ryzen 9 5900X.'
Yet the chart they are talking about shows the R7 5800X ABOVE the 12700KF, with the annotation on the slide *HIGHER IS BETTER*. So which is it? The INTEL chip is better because they said it was, or the AMD chip is better because the graph shows it is? You can't have both. If you're going to try to use 'data' to back up your argument, at least use a reputable fucking source that knows what the fuck it's talking about, because by extension they have made you look every bit the clown you appear to be. Enjoy trying to spin that one to suit your narrative; maybe send them an email pointing out their clear and obvious mistake?
Thanks for the write-up, Kagthul. I guess the tedious part of overclocking is the stress testing. I imagine you want to run Prime95 for a few hours at least; from what I've read via Google, like 12+ hours for certainty?
Either way, I appreciate the help and I'll give it a go!
AMD was very specific with the wordings:
* Zen 4 will have DDR5 support (likely not a hybrid DDR4/DDR5 controller like Intel is using)
* AM5 will have PCI-E 5.0
The latter is the strange one from the announcements. While AM5 will have PCI-E 5.0 support at SOME POINT IN TIME (storage, GPU, PCH, or all of them?!?), they worded it very unspecifically. So Zen 4 will either have none, because PCI-E 5.0 is pretty expensive and AMD, without the OEM market, has to gauge the DIY budget much better than Intel, or they will include parts of PCI-E 5.0 with each new CPU generation every ~1.5-2 years.
@Topic
I don't see a huge plus in the AM4/AM5 lasting duration either. It's nice for coolers, but when I build a new 2000-3000€ PC, I honestly just get a new cooler with new fans; and if you get low-budget CPUs, the kit coolers are included and usable on both brands anyway, so even the cooler argument for AM4/AM5 vs Intel sockets doesn't matter that much.
All I see with AM4 is how people who chose AMD with intended CPU upgrades in mind are now stuck with a single M.2 slot and poorly designed VRMs that brown out with STOCK-running R5-R9 CPUs (B450 included), because the boards were targeted at low-budget systems and the manufacturers cut every corner possible.
So they go ahead and get the 300-400€ Zen 3 CPUs, which will kill those trash mainboards in a few months or years, while getting not even one feature that has made it to mainstream by now: multiple M.2 slots instead of SATA, much better WiFi chips, much better ethernet chips, lots of improvement in RAM tracing, and gigantic VRM designs able to output 300-400W without a heatsink. That's longevity.
All I know from my standpoint is that I will probably be upgrading my i7-9700K with either 13th gen or 14th gen.
Alder Lake looks really good, and the 9700K being only 8 cores with no HT always tilted me, but it's still a mostly pointless upgrade for me because I'm not really all that CPU capped in what I do with my gaming rig at the moment. Or rather, I am, but what does it matter if FPS is already triple-digit anyway in the stuff I usually play.
So dumping ~$1k+ on upgrading already-good performance is not the best plan. Thus 13th/14th gen it is, when hopefully there will be more modern stuff that actually needs that extra power.
All my ignores are permanently filtered out and invisible to me. Responding to my posts with nonsense or insults is pointless, you're likely already invisible and if not - 3 clicks away. One ignore is much better than 3 pages of trolling.
The only issue with your upgrade plan is what the GPU market will look like by that point in time. It's rumoured that the NVIDIA 4000 series is coming later this year, and it's assumed it will follow the same trend that has been 'normal' for the entire lifespan of the 3000 series, as well as the 6000 series for AMD: unless you're one of the extremely lucky few to get a Founders Edition card, you will not be seeing them at MSRP. Will your budget at that point allow for a GPU upgrade (assuming you don't already have a decent one you can transplant over)?