Well, I usually just dismissed Gigabyte because they made every PCB that horrible blue color. Picked up a GA-B75N for my fileserver, and it's proven stable thus far.
I'd go as far as to dismiss everything you've just said about bottlenecking as wrong.
If we look at what a bottleneck really is, it's the narrow part of a bottle that makes the liquid inside pour out more slowly. If you think of that liquid as the GPU and then argue "but bottlenecking is where, if you changed the GPUs to dual Titans, you would not get better performance", you're thinking about it wrong. If you replace the beer in your bottle with vodka, you'll get more drunk drinking it, but the rate at which the liquid pours out is still the same; hence, you still have a bottleneck.

Now apply the same to GPUs: one Titan with an inferior CPU is the beer, two Titans are the vodka. You still increase performance going from one to two, but the CPU remains the bottleneck keeping the system from being as fast as it could be (the rate at which the liquid pours out is slower than it could be).
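The argument above can be sketched with a toy model. All numbers here are made up purely for illustration, and it assumes perfect SLI scaling (GPU time divides evenly by card count), which real setups never quite reach; the point is only that a second card can still help even when the CPU sets the ceiling:

```python
def fps(cpu_ms, gpu_ms, num_gpus=1):
    """FPS when each frame must wait for both CPU prep and GPU render.

    Assumes perfect SLI scaling (GPU time divides by card count);
    the slower of the two stages paces the frame.
    """
    frame_ms = max(cpu_ms, gpu_ms / num_gpus)
    return 1000.0 / frame_ms

cpu_ms = 10.0   # hypothetical CPU time per frame (caps us at 100 fps)
gpu_ms = 16.0   # hypothetical single-GPU render time per frame

print(fps(cpu_ms, gpu_ms, num_gpus=1))  # 62.5 fps: GPU-limited
print(fps(cpu_ms, gpu_ms, num_gpus=2))  # 100.0 fps: now CPU-limited
print(fps(cpu_ms, gpu_ms, num_gpus=3))  # still 100.0: CPU is the bottleneck
```

Going from one card to two still gains you fps, yet the CPU caps the system well below what the cards could deliver, which is exactly the beer/vodka point.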
Talking about bottlenecks when discussing computer components is perfectly fine.
PS. How is "weakest link" any more accurate? What's the link between the CPU and GPU? And what about the saying "a chain is only as strong as its weakest link"? That's implying that if you swapped your strong links (the GPUs) for even stronger ones, you'd still only be as strong as your weakest link (the CPU), and hence "not get better performance".
Maybe we shouldn't look at these words and phrases literally. Just saying...
Err, what.
More like: the bottleneck is the neck of the bottle, and it doesn't matter how much the bottle holds, since it can only pour X litres per minute anyway. That is a bottleneck. So if you can increase the pour rate, i.e. increase the amount of information passing through the weakest link, then it's not a bottleneck.
Some people think this way: either it's running like crap or it's running amazingly smooth. Two options only, nothing more, no degree of how much it's bottlenecking.
If a random guy who wants to play BF3 at 120fps ultra started a thread asking whether a 2600K with a 4.6GHz overclock would bottleneck a GTX 780 SLI setup, he'd get answers like "no it won't, blah blah". But considering he really needs 120fps all the time because he's a hardcore sniper, his FPS could dip below 100 with both GPUs sitting at 60% load, so he'd almost have been better off with a single card >.<
This video is just nice to see what's really happening with a typical i7 & 780's in SLI.
I think that's because the i7 isn't overclocked, and Battlefield doesn't really scale beyond 4 threads. Still, on a multiplayer map it will never be able to saturate the SLI 780s, making it the weakest link. But calling it a bottleneck when it's doing 80-120fps would be really far-fetched; I'd rather say the graphics solution is overkill.
BF3 is actually one of the few games that scales beyond 4 cores (or at least benefits from HT, from what I know; the gain isn't much, but still moar performance).
@faith's vid: damn, those 60 fps dips in that vid suck. (What CPU clocks are in there? With my 5.0GHz, the worst rare dips are in the upper 90s for BF3 @1440p, though.)
@shroud, no not really. Check it out:
http://www.techspot.com/review/458-b...nce/page7.html
The 2600K doesn't have any significant lead over the 2500K. The performance difference is probably down to the cache.
Also, I think the 3770K in the video is at stock.
Still, those benches are single-card only?
Also, it does improve minimum fps by the looks of it (mostly core-clock related, most likely).
Guess I'll have to do some hardware monitoring in the coming week to verify some stuff and see how much GPU load I manage at 1440p as well.
Well yeah, it's a test of CPU scaling. Graphics are set to medium to reduce the GPU's influence.
It's 1-2 fps, like I said, probably due to the cache. If it scaled beyond 4 threads, you'd have seen a lot better results with the 4 extra threads from the i7.
CPU load depends on which card you use, and BF3 only takes advantage of 6 threads. It can push a 2600K to a max of around 80%, a 3930K to 50%, an FX-8350 to 80%, etc. A 26xx would only be delivering half of what HT can provide at most (e.g. 30%/2 = 15%), and only IF needed, so it isn't worth it over an i5.
The 3930K really does perform better than a 2600K, as you can see here. The minimums are really impressive.
Uhm, from an older video it seems to be 4.2GHz at stock volts.
Well, I found that the GPU loads were higher at 1440p than at 1080p for some reason. I was struggling with average GPU loads of 60-65% at 1080p with 670s in SLI, so I just returned that card, but I tried 1440p as well and the GPU loads were significantly higher. Maybe it's less CPU-limited at 1440p than at 1080p.
If people want a game that really tests CPU load, then Crysis 3 is the one. It fully utilizes Hyper-Threading and AMD Piledriver cores to the max (assuming a decent GPU); it's possibly the only game where we see AMD's 8-core CPUs pulling ahead of Intel's non-HT quads.
Not surprising; smaller resolutions are more forgiving on the GPU, whereas it's roughly the same CPU calculations that need to be done either way; the GPU only has to worry about what's on the screen, after all.
I'm probably over-simplifying, but thinking that way gets you close.
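That over-simplification can be written down as a toy model, assuming (purely for illustration, with invented timings) that CPU time per frame is constant while GPU time scales linearly with pixel count, and that the GPU idles while waiting on the CPU:

```python
def gpu_load(cpu_ms, gpu_ms_1080p, width, height):
    """Approximate GPU utilisation in percent.

    Assumes GPU frame time scales linearly with pixel count relative
    to 1080p, and the slower of CPU/GPU stages paces the frame.
    """
    pixels_1080p = 1920 * 1080
    gpu_ms = gpu_ms_1080p * (width * height) / pixels_1080p
    frame_ms = max(cpu_ms, gpu_ms)      # slower stage paces the frame
    return 100.0 * gpu_ms / frame_ms    # % of the frame the GPU is busy

cpu_ms = 10.0        # hypothetical constant CPU time per frame
gpu_ms_1080p = 6.5   # hypothetical GPU time per frame at 1080p

print(gpu_load(cpu_ms, gpu_ms_1080p, 1920, 1080))  # 65.0 -> CPU-limited
print(gpu_load(cpu_ms, gpu_ms_1080p, 2560, 1440))  # ~100 -> GPU-limited
```

With these made-up numbers the GPU sits at 65% load at 1080p (CPU-limited) and pegs near 100% at 1440p, which matches the pattern described above: more pixels shift the limit from CPU to GPU.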
Hmm, interesting: a Nalak 40-man raid at full ultra on 1440p didn't dip below 40 fps.
I actually expected <30s.
Can you give a source for these numbers? It's just that I can't see any of this in the previously posted Techspot benchmark. If it actually used more than 4 threads, you'd see the i7-2600K pull ahead of the 2500K in the aforementioned benchmark. Beyond 4 cores only IPC/cache matters, or they tested it in the wrong area, where the GPU remained the limiting factor.
With a 5GHz i7-3770K? You're joking, right?