The specs though... wow
http://www.geforce.com/hardware/desk...specifications
| Intel i5-4670k | Asus Z87-Pro | Xigmatek Dark Knight | Kingston HyperX Fury White 16GB | Sapphire R9 270x | Crucial MX300 750GB | WD 500GB Black | WD 1TB Blue | Cooler Master Haf-X | Corsair AX1200 | Dell 2412m | Ducky Shine 3 | Logitech G13 | Sennheiser HD598 | Mionix Naos 8200 |
Even though this is irrelevant due to the change, let me clarify a few things for you, since you are thinking linearly.
Heat issues "last summer" means a year ago; a lot of things change in a year's time.
Stepmania, whether graphically intensive or not, is irrelevant, as it's not the game that primarily caused this.
What one architecture runs at is also irrelevant for the next (GT 120 -> GTX 460).
Here are some things you need to keep in mind when dealing with the GTX 400 series (Fermi):
1. Fermi has always been known as a hothead; it was in fact so horrible in heat output and reliability that nVidia does not want to be reminded of it.
2. Electronics age and degrade, and as they degrade they run hotter and less efficiently, with increased voltage leakage to your GPU.
3. VRAM also "requires" cooling and gets hot too. It falls under the same category as the GPU in that it degrades and causes issues; VRAM generally does not overheat outright, but its solder joints most deffo crack and cause issues.
4. VRMs deliver power to your GPU and get VERY hot (hotter than your GPU core in general). They also age and fail, and when that happens your GPU no longer gets consistent voltage; most of the time that means voltage spikes, which raise the GPU's temperature and can electrically damage the core with voltages it should not be getting.
5. Just because there's not much dust coming out visually does not mean there isn't any. It also sounds like you have the reference design (blower cooler), which is an inferior PCB and inferior cooling design; regardless, dust may be stuck under the shroud where you cannot see it.
6. Your thermal paste may have evaporated, leading to your heatsink not making proper contact, where your GPU reaches stupid degrees while your heatsink is lukewarm (if that) to the touch.
7. Your GPU's micro-solder points may have cracked due to the heat (solder expands and contracts) and can short things out or cause overheating underneath the actual GPU die itself (this is where the oven-bake trick is often used).
These are all examples of what it may be, and most, if not all, are a direct result of GPU heat generation.
Not all of them are "just dust, I cleaned it and there was nothing there!", not even remotely.
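If you want to rule some of these out before blaming dust, the simplest check is to watch the core temperature while the game is running. A minimal sketch, assuming an nVidia card with `nvidia-smi` on your PATH; the 90°C threshold is my own rough cut-off for "something is wrong", not an official figure:

```python
import subprocess

def read_gpu_temp():
    """Query the GPU core temperature in degrees C via nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=temperature.gpu", "--format=csv,noheader"]
    )
    return int(out.decode().strip())

def looks_overheated(temp_c, threshold=90):
    # Rough rule of thumb only; "safe" temps vary per card and cooler.
    return temp_c >= threshold
```

Run it with the game open; if the core keeps pushing past the threshold with the fans maxed, you're likely looking at one of the heat-related failure modes above.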
The point is that YOUR specific issue is a dying GPU (note me saying GPU and not GFX card), meaning the actual core is damaged and causing your issues.
And if you haven't touched the unit itself, there are only 2 possibilities:
1. Heat (by far the most common cause, and especially common for GTX 400 series cards)
2. Over-volting (caused by either overclocking or VRMs dying... the last of which is caused by heat again)
There is a 3rd option, namely that your PSU shorted and caused it... but if that were the case, the rest of your computer would be damaged as well.
So yes, there's a VERY likely chance (I would estimate 95%) that heat did kill your card.
Again: Stepmania not being graphically intensive means very little.
Not true at all, really. I have a GTX 465 in my closet that runs cooler at load than my GTX 760 does. It's all about the cooler they put on it; mine is a Galaxy model that is heavy and super overbuilt. 69°C load no matter what application it runs.
A GTX 465 is pretty much the only exception in the entire range, due to the fact that it was an update to the 460.
The Fermi architecture was still terrible, and as said, the GPU core is NOT the only temperature dictating the lifespan of your card.
Whilst most manufacturers have since learned to include all components under the cooler... it did not used to be that way.
So "Not true at all really" is incorrect, as you clearly do not know the facts and issues surrounding this specific architecture.
Ugh, AGP. The bane of my life when I was trying to get GTA 3 to run on my PC. Could not find a PCI graphics card for the life of me... ended up having to play it at my dad's friend's house on their fancy Riva TNT2.
Correct... it was the GF100 that powered the GTX 470, which was then neutered again to create the GTX 465.
Why do you think this was done? Because the GTX 460 was done so well? Think about it for a second.
Also, did you know that the original stock BIOSes shipped with most reference 480s, f.ex., ran near 100°C under load?
Then a new BIOS came out which didn't alter ANY functions of the card (it retained all the specs and only slightly increased the fan profiles), and the GPU miraculously ran up to 10°C cooler?
But with 2 monitors attached it would still be 70°C+ while idle in Windows... this was after release of course, with a fan profile of about 50%+ as well.
Eventually another BIOS release was pushed out several months later, which lowered temps to the 60°C range with dual monitors, but I believe you're getting the gist of this.
It took a redesign of the card's PCB, a HEAVILY neutered GF100 (vs. the GF104), and VRM cooling implemented by A LOT of manufacturers to create the 465 as a replacement for the junk that was the 460.
The Fermi architecture was all-in-all a failure, and its cards were generally short-lived due to the aforementioned issues.
Granted, this varied on a per-AIB basis, but none of them can (nor should they have to) solve the shortcomings of the GPU itself.
The 460 came after the 465....
Don't get me wrong, Fermi did run hot out of the box, but this was mainly due to stock coolers. AIB partners had the 480 under 80°C load, which is more than can be said for even today's newest reference-cooled graphics card (1080 benchmarks have shown 83-84°C is normal).
I should have mentioned this, actually.
The GTX 460 was released because it was cheaper than the 465 and performed equally (until OC was involved), but the 465 had all-round better thermals and power management; it was meant to be the replacement for the 460, because nVidia found out the GF100 series was just flawed.
They did spin it into an extra cash-making machine, though.
Don't get me wrong, Fermi did run hot out of the box, but this was mainly due to stock coolers. AIB partners had the 480 under 80°C load, which is more than can be said for even today's newest reference-cooled graphics card (1080 benchmarks have shown 83-84°C is normal).
Even with the proper coolers, the GPUs were hovering in the high 80s with high-speed fans.
The reference 480 actually ran at 100% fan speed under load and generated between 92-98°C depending on the silicon lottery.
Current reference designs actually do 83-84°C in relative silence at lowered fan speeds, whereas the 480... well... even a headset didn't help.
My point, which I've been trying to make, is that the Fermi core in general was a disaster.
And the life expectancy of these chips isn't reliant on JUST GPU core temp, especially not reference designs.
(Since the OP said he used compressed air on both sides)
For example:
Even the reference cooler of the R9 290X, which had a TDP of around 290W vs. the 250W of the GTX 480, held the card no higher than 94-95°C.
It was designed for these thermals, and it did so whilst "Uber mode" (a BIOS switch with more aggressive fan profiles) produced a max of 58 dBA.
The GTX 480 reference design at those temps (and higher, but I'll ignore that for now) did so at 64 dBA.
Now you're thinking 6 dBA isn't that much of a difference... but did you know that roughly every 3 dB DOUBLES the sound power, and every 6 dB doubles the sound pressure?
(dBA doesn't scale linearly.)
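To put numbers on that: the decibel scale is logarithmic, so a fixed dB gap is a fixed ratio. A quick sketch of the standard acoustics arithmetic (nothing card-specific here):

```python
def power_ratio(delta_db):
    """Sound power ratio for a level difference of delta_db (10*log10 scale)."""
    return 10 ** (delta_db / 10)

def pressure_ratio(delta_db):
    """Sound pressure amplitude ratio (20*log10 scale)."""
    return 10 ** (delta_db / 20)

# The 6 dBA gap between the 290X (58 dBA) and the 480 (64 dBA):
print(round(power_ratio(64 - 58), 2))     # about 4x the sound power
print(round(pressure_ratio(64 - 58), 2))  # about 2x the sound pressure
```

So a 6 dBA gap works out to roughly four times the sound power, or twice the sound pressure, which is why the 480 sounded so much louder than the raw numbers suggest.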
As much as you think it was all down to reference coolers, it really wasn't; heat generated in various places was the definite downfall of the Fermi series.
And yes, non-reference coolers (and designs) alleviated that issue, but the core problems remained, and they weren't solved until the GTX 500 series.
As I close this off, I'll let you think about the launch of Windows 10 and the nVidia driver pushed out with it that caused a lot of cards to die an unsightly death from burnout... which generation of cards (reference or not) were the ones that died almost immediately, and by far in the largest numbers, with those bad drivers?
About the temps under load: it's not about how graphically demanding the game is. Your GPU is going to be running hardware acceleration even for shitty games like Hearthstone, and it will cause the GPU to heat up. There's probably only a few percent difference in heat between a game like Crysis and a game like Diablo 3.
The other part is that thermal paste only lasts so long. It doesn't 'evaporate' as someone suggested; rather, it gets old and loses conductivity. It can get hard, shrink up, and barely have any effect at all. It's a good idea to replace the thermal paste on your CPU and GPU every couple of years, especially as they get older. I'm still running a q9650 in my machine (2008) and it performs like a champ, because I keep my rig as free of dust as possible and I redo my thermal paste whenever I have the PC apart to clean it. Paste is inexpensive, and it takes ~2 minutes to put fresh stuff on a heatsink.
My Gaming Rig: Intel Core 2 quad q9650|ASUS P5G41-T M|2x4GB Supertalent DDR3 1333Mhz|Samsung 840 Evo 250GB|Fractal Design Integra R2 500w Bronze|ASUS Strix GTX 960 4GB|2x AOC e2770s 27" (one portrait, one landscape)|Bitfeenix Phenom Micro ATX
Don't hate my rig, there's nothing quite like the classics.
Then what did kill it, in your oh-so-expert opinion? If it were not heat, then it would have to be volts. Did you OC the GPU? I doubt it. Did your PSU kill anything else in your system, or your other video card upon installing it? Nope, so guess what it was: heat. If neither heat nor volts had killed it, it would still be running. It's pretty obvious volts did not kill it, so it has to be heat. Oh well, you solved your problem, so whatever; you don't have to believe me if you don't want to, but your only argument for why you think it wasn't heat is that the game you're playing isn't demanding. As Evil explained, that has very little to do with it.
Incorrect. Evaporation means that something reached a temperature that caused it to go from a liquid to a gas. Thermal paste does not convert to gas at high temperatures. It becomes hard, brittle, and more dense, and is thus unable to do what it's designed to do efficiently. But keep trying? You're using a lot of big words and you're almost sounding smart?
Evaporation is actually the proper word here, as evaporation does not imply an instant effect.
The thermal paste (or parts of it; consider the actual makeup of the paste) actually DOES evaporate into a gaseous state, thereby creating pockets of air in the thermal paste and causing a rise in thermal impedance, and therefore temperatures.
Thermal paste is a viscous liquid and not a solid material; it becoming hard and brittle is actually those components evaporating due to temperature, with a delayed, over-time effect rather than an instantaneous one.
If this isn't the case and it "shrinks", you may have found the first liquid material in the world that is compressible beyond its original point.
(Note that the gaseous state is not too far from the liquid one except already compressed, so compressing something compressed is what you're going for here.)
Think about it properly for just a second here... what happens to a liquid that "shrinks"... ANY liquid?
It won't if it's below freezing temp (32°F) out... so yeah, temperature is relevant. Above a certain temperature it will evaporate. Higher temps will speed it up, as you said, but below certain temperatures it will not evaporate at all. Humidity affects what it does at a given temperature as well: in a more humid environment, it takes higher temperatures for it to evaporate.
Sigh, I wish you would learn what words mean before you use them incorrectly. Evaporation, by definition, only happens to liquids. Now, there are some materials that appear solid in their liquid form, and materials that appear liquid in their solid form. Thermal Paste is one of those solids that appear to be a liquid. Thermal paste does not evaporate, ever. Under any circumstances.
Otherwise it wouldn't be a good thing to use for heat conductivity.
Please stop spreading misinformation about this kind of stuff, you can easily fool someone into thinking your incorrect info is correct, and then they go tell someone else. The OP got his answer, let's not confuse him any more, k?
Thanks.