1. #1

    770 vs 760 vs 280x?

    As you can probably tell from the title, I'm in the process of upgrading my GPU. And upon gazing over this post you can probably tell that I'm one of those overthinkers who takes a simple enough concept and blows it out of proportion with added layers of complexity. If it makes you more comfortable, I'm pretty confident I'll go for the 770; I'm just looking for input in case I missed something (as much as one strives for omniscience, it's a fool's quest).

    I had expected to keep my trusted MSI 560 Ti for another year, and then upgrade to the 800-series, but the autumn is loaded with titles I want to experience without having to play on low settings. I'm especially looking forward to the new Dragon Age title, as well as Mass Effect 4 whenever it will be released.

    Both titles will use the Frostbite 3 engine, which can bring my current 560 Ti to its knees (googling a random bench puts it at 16 fps in BF4 on Ultra with 4xMSAA/HBAO), while the 770 gets a playable 44 fps. That bench seems to illustrate my pondering quite nicely, since the 760 gets 32 fps, which isn't enough for a competitive FPS, but to me is fully acceptable in a single-player RPG. I've also included the 280x since it's a bit cheaper than the 770, but with seemingly equal or greater performance (I suspect due to Mantle and 3 GB?).

    I've checked up on DirectX 12, and it'll be backwards compatible with current Nvidia cards. For whatever that's worth. I'm unsure what I'll be missing if I'm rocking an AMD card at that point. I guess it's a question of Mantle now, or DirectX 12 a year and a half from now?

    Vegas 12 render times are somewhat important to me too. Not exactly something I'm using today, but I don't want to close that door. And Anandtech reports the 280x being more than twice as fast as the 770, and I can't quite convince myself there's something wrong with their bench. It also reports the 770 being ~2x faster than the 560 Ti, despite me having read that the 600 and 700 series are inferior to the higher end of the 500 series (starting with the 560 Ti) when it comes to Vegas 12 encoding speeds. Anyone who can shed some light on this?

    Also, I was wondering. Some bench suites seem to include performance per dollar, and per watt. The P/D is obviously reflecting the market, but the P/W is the one that got me wondering. I'm assuming they just take the highest recorded FPS and divide it by the wattage, or something in a similar fashion. My question is, for instance, if my 560 Ti is working at 100% and achieves 50 fps, surely it draws more power than a 770 working at ~70% due to a 60 fps vsync cap? Bench suites don't seem to take vsync into account, they just mash the 100% performance/draw together. Or am I mistaken? To me it just doesn't seem to reflect the real world the way they do the comparisons. I mean, you'd find me playing with vsync on in a competitive game over my dead body, but when you're playing a tower defense game at 900 fps on a 60 Hz monitor, surely I'm not the only one who'd rather have lower power draw, a cooler gaming room and less humming than at those 1k FPS?
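
    To illustrate what I mean, here's a rough Python sketch with completely made-up wattage and FPS numbers (and the naive linear power-vs-load scaling is my own assumption, not how cards actually behave):

        # Rough sketch of my perf/W question, with made-up numbers.
        # Reviews seem to report fps/watts at full load; I'm wondering about
        # the vsync-capped case, where the faster card idles part of the time.

        def perf_per_watt(fps, watts):
            return fps / watts

        # Hypothetical full-load numbers (not from any real bench):
        old_card = {"fps": 50, "watts": 170}   # something like a 560 Ti at 100% load
        new_card = {"fps": 90, "watts": 230}   # something like a 770 at 100% load

        print("uncapped perf/W:", perf_per_watt(**old_card), perf_per_watt(**new_card))

        # With a 60 fps vsync cap the faster card only needs ~60/90 of its full load.
        # Assume (very naively) that power scales linearly with load:
        cap_fps = 60
        capped_watts = new_card["watts"] * cap_fps / new_card["fps"]
        print("capped draw of the faster card (naive):", round(capped_watts), "W")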

    So, to summarize:
    *If it matters, it'll be an MSI Twin Frozr edition (I've already checked for availability), regardless of which of the three it'll be.
    *I'm gaming on 1080p, I don't intend to go higher until next upgrade.
    *This is a single player eye candy game upgrade, when I play multiplayer it's competitive, and I would play it at 240p if it gave me an advantage.
    *Budget is... Well, I prefer MSI TF 770 or cheaper. But if I can pay 20% more for a 20% upgrade, I won't hesitate. Or 50% more for a 50% upgrade, and so on, and so forth.

    Thanks for reading it ALL! Any input / thoughts / comments are greatly appreciated!

    TL;DR:
    Read it all.

  2. #2
    Kostattoo (deleted account)
    The deal is that AMD cards are going down a lot in price; wish I could say the same about Nvidia. I dunno the prices where you are, but an R9 290 is at the moment less than 70 bucks more expensive than a mid-range-priced GTX 770. It obviously depends on your budget ceiling.

  3. #3
    Ghâzh
    Quote Originally Posted by Raphtheone View Post
    I've checked up on DirectX 12, and it'll be backwards compatible with current Nvidia cards. For whatever that's worth. I'm unsure what I'll be missing if I'm rocking an AMD card at that point. I guess it's a question of Mantle now, or DirectX 12 a year and a half from now?
    DirectX works for both AMD and Nvidia cards so this shouldn't be a deciding factor.

    Quote Originally Posted by Raphtheone View Post
    Vegas 12 render times are somewhat important to me too. Not exactly something I'm using today, but I don't want to close that door. And Anandtech reports the 280x being more than twice as fast as the 770, and I can't quite convince myself there's something wrong with their bench. It also reports the 770 being ~2x faster than the 560 Ti, despite me having read that the 600 and 700 series are inferior to the higher end of the 500 series (starting with the 560 Ti) when it comes to Vegas 12 encoding speeds. Anyone who can shed some light on this?
    Almost all the Nvidia 700 series cards have their CUDA cores disabled, the only exception being the Titans. They did this because gamers rarely need them, so they've cut them out to save space, reduce heat and, most importantly, make people who want more computing power spend more money on either a Titan or a dedicated compute card.

    So if you really need the faster rendering times you should maybe consider AMD.

    Quote Originally Posted by Raphtheone View Post
    Also, I was wondering. Some bench suites seem to include performance per dollar, and per watt. The P/D is obviously reflecting the market, but the P/W is the one that got me wondering. I'm assuming they just take the highest recorded FPS and divide it by the wattage, or something in a similar fashion. My question is, for instance, if my 560 Ti is working at 100% and achieves 50 fps, surely it draws more power than a 770 working at ~70% due to a 60 fps vsync cap? Bench suites don't seem to take vsync into account, they just mash the 100% performance/draw together. Or am I mistaken? To me it just doesn't seem to reflect the real world the way they do the comparisons. I mean, you'd find me playing with vsync on in a competitive game over my dead body, but when you're playing a tower defense game at 900 fps on a 60 Hz monitor, surely I'm not the only one who'd rather have lower power draw, a cooler gaming room and less humming than at those 1k FPS?
    Depends on the benchmark, but the testing method is usually explained in the review. Although from what I've seen, when they talk about the power/wattage ratio they sometimes use a benchmark that finishes faster the faster your card is. Meaning that they can rate a card that does the job faster higher than something slower: you get the job done faster but draw more power per second, versus doing it slower and using more energy overall. But yeah, not sure about this one (or if it even matters).
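
    Something like this toy calculation is what I'm getting at (the numbers are invented, just to show the power-versus-energy difference):

        # Invented numbers: a fixed render job, one card drawing more watts
        # but finishing sooner than the other.
        jobs = {
            "slower card": {"watts": 150, "seconds": 80},
            "faster card": {"watts": 220, "seconds": 40},
        }
        for name, j in jobs.items():
            energy_wh = j["watts"] * j["seconds"] / 3600   # watt-hours for the whole job
            print(name, round(energy_wh, 2), "Wh")
        # The faster card draws more power at any instant yet uses less energy
        # for the whole job, so "per watt" numbers depend on how the test is framed.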


    So which one should you pick? I have no definitive answer. Although I usually and almost exclusively use Nvidia, that's only because of personal preference. If the Vegas render times are important to you, or you play titles that have Mantle support planned, go with AMD. Nvidia has Shadowplay if it has any relevance. If none of that matters get what's cheaper or whatever colour you like better.

  4. #4
    Stormspark
    Power-wise it goes 280x > 770 > 760. Unless you need PhysX or Shadowplay, AMD will get you more performance for cheaper.

  5. #5
    What if I need a graphics card with CUDA (or any technology that helps with Photoshop etc.), and I also want to play Wildstar / Elder Scrolls Online on at least very high (ultra, if possible)? Still AMD?

  6. #6
    http://www.videocardbenchmark.net/high_end_gpus.html

    770 outperforms the 280x according to this website.

    I used this website when deciding on what to buy for my new computer. It's pretty accurate.

  7. #7
    looz (deleted account)
    Quote Originally Posted by Akaihiryuu View Post
    Power-wise it goes 280x > 770 > 760. Unless you need PhysX or Shadowplay, AMD will get you more performance for cheaper.
    770 and 280X trade blows, 760 is slower.

    Since you mentioned it: in terms of render time, I've found that a 4690K is significantly faster than a GTX 760 with x264vfw. No idea if it took advantage of SLI, probably not.

  8. #8
    Quote Originally Posted by Ghâzh View Post
    Almost all the Nvidia 700 series cards have their CUDA cores disabled, the only exception being the Titans. They did this because gamers rarely need them, so they've cut them out to save space, reduce heat and, most importantly, make people who want more computing power spend more money on either a Titan or a dedicated compute card.
    Not to nitpick, but this needs clarification. The 780 and 780 Ti both have everything on board that the Titan and Titan Black do; the DP cores are just hardware-locked. The cards are identical aside from that and have the same number of transistors, etc. Also, CUDA cores haven't been a separate entity for a long time; the "CUDA" name was rolled into the general shader cores. The 780 Ti has all 2880 CUDA cores but has the DP hardware locked down.

    Quote Originally Posted by looz View Post
    770 and 280X trade blows, 760 is slower.

    Since you mentioned it: in terms of render time, I've found that a 4690K is significantly faster than a GTX 760 with x264vfw. No idea if it took advantage of SLI, probably not.
    Yeah, CPU encoding is generally going to be a lot faster since Nvidia loves locking out their gaming audience from computational power via hardware locks and $1,000 pay walls for Titans. It's pretty shitty.

    On the bright side, the built-in H.264 hardware encoder can do wonders in OBS for local recording or streaming with no performance penalties. Though anyone with a Haswell chip has a better H.264 hardware encoder anyway via QuickSync.

  9. #9
    moremana
    The 770 and 280x trade punches; if price isn't an issue, go with a 780 Ti or an R9 290.

    However, if you plan on buying the new 800 series when it comes out, grab a 760 or a 270X and save some coinage now.

  10. #10
    Quote Originally Posted by Kostattoo View Post
    The deal is that AMD cards are going down a lot in price; wish I could say the same about Nvidia. I dunno the prices where you are, but an R9 290 is at the moment less than 70 bucks more expensive than a mid-range-priced GTX 770. It obviously depends on your budget ceiling.
    I did contemplate the 290, but it doesn't follow a linear price/performance progression, which is enough for me to dismiss it.

    Quote Originally Posted by Ghâzh View Post
    DirectX works for both AMD and Nvidia cards so this shouldn't be a deciding factor.
    A DX10 card doesn't support DX11 features, just as a DX11 card won't support DX12 features. Getting free added (and relevant) value should always be considered.

    Quote Originally Posted by Ghâzh View Post
    [...] if you really need the faster rendering times you should maybe consider AMD.
    You make a strong case for the 280x. It's something I'll have to investigate further, since a bunch of sites claim the 280x is faster, others the 770; there's just no consistency in the results I've found. It's maddening.

    Quote Originally Posted by Ghâzh View Post
    Nvidia has Shadowplay if it has any relevance.
    Shadowplay is by its very concept f-ing awesome. But from what I gather it does H.264 recordings, and if you've tried editing a few hours of H.264 footage in Vegas you'll know to avoid it like the plague.

    Quote Originally Posted by Ghâzh View Post
    If none of that matters get what's cheaper or whatever colour you like better.
    Color is actually on the "nice bonus" list.

    Quote Originally Posted by looz View Post
    [...] in terms of render time, I've found that a 4690K is significantly faster than a GTX 760 with x264vfw. [...]
    Since you mention it, have you tried Blender animation renders? I'll add it to my what-to-google-today list, but subjective information adds an important angle to the objective stuff. On the x264vfw subject, I remember being a tad disappointed that my 560 Ti wasn't that much faster than my 4.2 GHz 2500K. IIRC it only shaved off a third of the render time.

    Doing my best to compare the Vegas 12 render times between the 560 Ti and the 280x, Anandtech shows 80 vs 25 seconds, respectively. If my math is correct, what takes my 560 Ti an hour to render takes 18m45s on the 280x. That upgrade is so insane I have a hard time believing it, and I have to verify it.
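
    The napkin math behind that, for anyone who wants to check me (the one-hour render is just my own hypothetical workload):

        # Scaling Anandtech's 560 Ti vs 280x Vegas numbers to a one-hour render.
        bench_560ti = 80                     # seconds in their test
        bench_280x = 25                      # seconds in their test

        my_render_560ti = 60 * 60            # my hypothetical one-hour render, in seconds
        my_render_280x = my_render_560ti * bench_280x / bench_560ti

        minutes, seconds = divmod(round(my_render_280x), 60)
        print(f"{minutes}m{seconds:02d}s")   # -> 18m45s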

    The above was measured against the 7970 GHz Edition. Taking the "long route" proves unreliable: comparing the 560 Ti with the 770, then the 770 with the 280x, and then putting the resulting 560 Ti and 280x numbers side by side, it's 80 vs 44 -> 45 vs 23, i.e. the 770 scores 44 seconds in one table and 45 seconds in the other on what should be the same test. Different rigs, different testing parameters, I don't know, and frankly I find it irrelevant; skewed results nonetheless. For instance, the Radeon HD 7950 Boost goes from 51 seconds in the GPU13 tests to 24 seconds in GPU14, and I can't find out why. I also can't find the actual Vegas 12 test for the 560 Ti (which they obviously added to their database separately). It pains me that I have to ignore Anandtech and their comprehensive data.
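
    Chained together, the "long route" looks like this (the numbers are just my reading of their tables, so treat it as illustration only):

        # Chaining ratios from two different tables instead of one direct comparison.
        ratio_560ti_vs_770 = 80 / 44          # ~1.82x, from the 80 vs 44 table
        ratio_770_vs_280x = 45 / 23           # ~1.96x, from the 45 vs 23 table

        implied_560ti_vs_280x = ratio_560ti_vs_770 * ratio_770_vs_280x
        print(round(implied_560ti_vs_280x, 2))   # ~3.56x via the long route

        print(round(80 / 25, 2))                 # 3.2x from the direct 80 vs 25 numbers
        # Small mismatches (like the 770 scoring 44 s in one table and 45 s in the
        # other) stack up when you chain ratios, which is why I don't trust this route.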

    Half a year ago I read this thread about how Fermi was superior to Kepler in Vegas 12 renders. I don't know what's changed, since suddenly the 600 and 700 series outperform the 500 series. Yet the old information out there hasn't been updated, and I can't find new information telling me to discard the old.

    Ah well, I'm off hunting information.

  11. #11
    Quote Originally Posted by Raphtheone View Post
    A DX10 card doesn't support DX11 features, just as a DX11 card won't support DX12 features. Getting free added (and relevant) value should always be considered.
    Anything GCN and later will be compatible with DX12 on AMD's side (HD 7xxx and R7/R9), further back for Nvidia cards.

    https://www.google.com/search?client...utf-8&oe=utf-8

  12. #12
    Quote Originally Posted by glo View Post
    Anything GCN and later will be compatible with DX12 on AMD's side (HD 7xxx and R7/R9), further back for Nvidia cards.

    https://www.google.com/search?client...utf-8&oe=utf-8
    +1.

    I started this thread with my mind set on the 770, but the longer this thread runs, the more I'm leaning towards the 280x.

  13. #13
    The 280X costs less than the 770 in Sweden; I'd go with the 280X. They're selling out their stock while waiting for the new Tonga architecture.

    You could wait for those as well, but they might cost more. They won't perform worse though, that's for sure.
     

  14. #14
    Lurking the internet today, this is what I've found out thus far:

    * Radeon doesn't work in Blender ([1] and [2]). This sucks oh so much.

    * Even with 3 GB of VRAM, the 280x doesn't outperform a 2 GB 770 in Skyrim at 5,760 x 1,080, 8x AA, 16x AF, 'Ultra' settings, with high-res texture packs. Did I miss something? I thought Skyrim could use >2 GB with mods. What rigs, games and settings do people play on to warrant buying the 3-6 GB GPUs?

    * In the same test, the 280x doesn't outperform a 770 in BF4, a Frostbite 3 game (and hence arguably a relevant bench for the upcoming DA:I/ME4).

    * According to this bench, the 280x draws marginally more power (~3.5%), but performs at ~82% of the 770 during the power draw test (Unigine Valley 1.0). Might be an isolated incident; I have to check that out (quick arithmetic on what that means after this list).

    * Thanks to the link glo provided, I got some reading candy on DX12. As I suspected/was hoping, there are (eye candy?) features in DX12 that won't be supported on DX11 cards. Still, the backwards compatibility means better performance on current-gen GPUs, which will be enough while waiting for the DX12 GPUs to drop in price. But you guys probably knew that already. This text is here as much for me to get it all down as for whoever reads it in the future, googling for answers with my predicament. And those answers are surprisingly hard to find.
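
    The quick perf-per-watt arithmetic from the power-draw bullet above, taking those two percentages at face value:

        # 280x at ~82% of the 770's score while drawing ~3.5% more power
        # in that one Unigine Valley run.
        relative_performance = 0.82
        relative_power = 1.035

        relative_perf_per_watt = relative_performance / relative_power
        print(round(relative_perf_per_watt, 2))   # ~0.79 of the 770's perf/W
        # One test on one site, so I'm treating it as a data point, not a verdict.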

    I'm still trying to find conclusive numbers on Vegas for both editing and encoding, but so far I've found nothing but circumstantial stuff. Basically, as I see it right now, both the 280x and the 770 will shorten the render, and I'm at the point where I don't care by how much, so I'm borderline inclined to consider that good enough. Also, I've skimmed it, but I need to find better info on heat and noise. I suspect Nvidia has the upper hand there, but since I've been out of the loop since my 560 Ti upgrade, I've gotta find out for sure.

    To make matters worse, I realized I can CrossFire on my motherboard. I probably won't go that way, but I've gotta see if 2x 280s are worth it (I'm unsure if I want two 280x's with no Blender support, plus my PSU might not like me if I try).

  15. #15
    theWocky
    What CPU do you have?

    An Intel i5 CPU with QuickSync renders my videos pretty darn fast; have you compared that performance to the video cards? We're talking 4-5x faster than normal when I use HandBrake to compress my video files.

    See here; the article is from 2012, but the render was 50% faster than 1000 CUDA cores:

    http://www.sonycreativesoftware.com/...ssageID=855377

    EDIT: Me, personally, I hate anything AMD; just personal preference. From experience, I find Nvidia / Intel a helluva lot more compatible and better overall for gaming.

  16. #16
    Quote Originally Posted by theWocky View Post
    What CPU do you have?

    An Intel i5 CPU with QuickSync renders my videos pretty darn fast; have you compared that performance to the video cards? We're talking 4-5x faster than normal when I use HandBrake to compress my video files.

    See here; the article is from 2012, but the render was 50% faster than 1000 CUDA cores:

    http://www.sonycreativesoftware.com/...ssageID=855377

    EDIT: Me, personally, I hate anything AMD; just personal preference. From experience, I find Nvidia / Intel a helluva lot more compatible and better overall for gaming.
    I've got a 2500K and a Z68 motherboard, which from a quick glance should suffice. Apparently I've gotta enable the integrated GPU in the BIOS and attach an additional screen or some such; it's something that'll have to wait until the morning before I start doing benches.

    Speaking of Handbrake, I remember having tried to render from x264 (a stupid mishap with Dxtory) to a more edit-friendly codec (aiming for larger files, not smaller), but IIRC there was no way to change the codec, correct?

  17. #17
    I would never use QuickSync (or Nvidia's hardware encoder) for anything other than recording game footage. QuickSync is fast because it's incredibly lossy and inefficient by design, which is fine when you're working with large file sizes and a high bitrate to compensate. However, this should never be used for final compression before uploads to YouTube and the like. If you do, you're going to end up with a pixelated mess after YouTube compresses it on their end.
