  1. #1

    Asus GEN3 boards

    For anyone wanting an Asus PCIe 3.0 board: they just came in stock on Newegg. I ordered one myself five minutes ago.

    http://www.newegg.com/Product/Produc...82E16813131790

  2. #2
    Interesting, although rather inconsequential at this point in time.

  3. #3
    Deleted
    Nope. My HD 6950, along with every other card on the market, isn't made to support PCIe 3.0, and even current dual-GPU cards hardly max out PCIe 2.1.

  4. #4
    Right, PCIe 3.0 isn't usable until Ivy Bridge, but it's nice to get a Z68 Pro board that has it. That way, when the 7-series GPUs come out early next year, you can upgrade easily. I'm coming from an AMD 770 board, so this will be a big upgrade for me.

  5. #5
    Wait, the new graphics cards are actually going to be able to use the bandwidth of PCIe 3.0?

  6. #6
    New GPUs are supposedly more than double the power.

  7. #7
    I believe they will support PCIe 3.0, but I doubt the 600/7000-series GPUs will *require* that much bandwidth. If anything, I'd imagine the difference between PCIe x8 and x16 will matter more than it does now (it might mean that if you want to SLI/Crossfire the new cards, you'll actually need a motherboard with x16/x16 instead of x8/x8).

  8. #8
    Deleted
    Quote Originally Posted by Irloki View Post
    New GPUs are supposedly more than double the power.
    I'd like to see some sources backing this up.
    PCIe 2.1 has been around for a while now, and even today most cards are comfortable running at just x8.

  9. #9
    Quote Originally Posted by Adappy View Post
    I believe they will support PCIe 3.0, but I doubt the 600/7000-series GPUs will *require* that much bandwidth. If anything, I'd imagine the difference between PCIe x8 and x16 will matter more than it does now (it might mean that if you want to SLI/Crossfire the new cards, you'll actually need a motherboard with x16/x16 instead of x8/x8).
    True. Hopefully PCIe 3.0 and the boards that natively support it will have enough bandwidth headroom to feed the graphics cards whenever necessary. I'd also like to see more lanes become standard.

  10. #10
    No graphics card will need PCI-E 3.0 in the next couple of years.
    You can run a 6990 on PCI-E 1.0 and it works just fine.
    Last edited by haxartus; 2011-10-17 at 07:42 PM.

  11. #11
    The Unstoppable Force DeltrusDisc's Avatar
    10+ Year Old Account
    Join Date
    Aug 2009
    Location
    Illinois, USA
    Posts
    20,098
    If you ask me, all these companies raving about PCI-E 3.0 on their motherboards is a huge marketing ploy, and it's too bad so many people are falling for it. Take, for example, what has been stated so far in this thread...

    Even the fastest single-GPU video card on the market, the GTX 580, only uses about x8 worth of bandwidth in an x16 PCI-E 2.1 slot (it could obviously use the full x16 if it had that much data to push through, but it doesn't). A 590/6990 barely uses more, intriguingly enough. On the other hand, PCI-E 3.0 x8 is equal to PCI-E 2.1 x16... so essentially, unless you're getting something like a FusionIO PCI-E SSD, which I highly doubt with their $100k+ price tags, you won't be filling up that bandwidth. Until we know for sure that the 6xx/7xxx-series GPUs will use that much more bandwidth, it remains a marketing ploy to get more money, and that is that.

    It's honestly kind of like AMD's Bulldozer chips being better "in the future, with Windows 8!!!1!" We can't be sure, and until we can be, it's a very "in the dark" buy.
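    For what it's worth, the PCI-E 3.0 x8 = PCI-E 2.1 x16 equivalence above checks out from the published per-lane transfer rates (2.5/5/8 GT/s, with 8b/10b encoding on Gen1/Gen2 and the leaner 128b/130b encoding on Gen3). A quick back-of-the-envelope sketch:

    ```python
    # Per-lane PCIe throughput in MB/s, after encoding overhead.
    # Gen1/Gen2 use 8b/10b encoding (80% efficient); Gen3 uses 128b/130b.
    PER_LANE_MBS = {
        "1.0": 2_500 * (8 / 10) / 8,     # 2.5 GT/s -> 250 MB/s per lane
        "2.0": 5_000 * (8 / 10) / 8,     # 5.0 GT/s -> 500 MB/s per lane
        "3.0": 8_000 * (128 / 130) / 8,  # 8.0 GT/s -> ~985 MB/s per lane
    }

    def link_bandwidth_mbs(gen: str, lanes: int) -> float:
        """One-direction bandwidth of a PCIe link in MB/s."""
        return PER_LANE_MBS[gen] * lanes

    print(link_bandwidth_mbs("2.0", 16))  # 8000.0 MB/s
    print(link_bandwidth_mbs("3.0", 8))   # ~7877 MB/s, roughly a Gen2 x16 link
    ```

    So a Gen3 x8 slot really does carry about as much traffic as a Gen2 x16 slot, which is the whole point.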
    "A flower.
    Yes. Upon your return, I will gift you a beautiful flower."

    "Remember. Remember... that we once lived..."

    Quote Originally Posted by mmocd061d7bab8 View Post
    yeh but lava is just very hot water

  12. #12
    A GTX 690 (if they make such a thing) won't be able to max out PCI-E 2.0, nor will the GTX 790, since it won't be a die shrink or an architecture change.

  13. #13
    Quote Originally Posted by Synthaxx View Post
    The GTX6 series are based on the Kepler architecture, not Fermi. If they don't become the new 480's and get superseded in a few months after release, then the GTX7 series will be based on the Maxwell architecture. Sure, they're both die shrinks, but the architecture won't be the same.
    Still I don't believe that they would need PCI-E 3.0, unless we get something like a 100% performance boost per generation, which is ridiculous.

  14. #14
    DeltrusDisc
    Quote Originally Posted by haxartus View Post
    A GTX 690 (if they make such a thing) won't be able to max out PCI-E 2.0, nor will the GTX 790, since it won't be a die shrink or an architecture change.
    Okay, I think this is a bit of a stretch... from what I've read, the 6xx series is supposed to be a pretty considerable increase in graphics computing power. It probably won't max out PCI-E 3.0, but I wouldn't put it past a 690 to max out PCI-E 2.0.

  15. #15
    Quote Originally Posted by haxartus View Post
    Still I don't believe that they would need PCI-E 3.0, unless we get something like a 100% performance boost per generation which is ridiculous.
    My response to you lies in another's quote:
    Quote Originally Posted by Irloki View Post
    New GPUs are supposedly more than double the power.

  16. #16
    Titan Frozenbeef's Avatar
    10+ Year Old Account
    Join Date
    Aug 2009
    Location
    Uk - England
    Posts
    14,100
    yay more things to spend money on <3

  17. #17
    The current dual-chip graphics cards don't really suffer at PCI-E 2.0 x8. There is a bottleneck at x4, and the point where it kicks in is probably closer to x4 than to x8.

  18. #18
    Naw, high-end cards are only bottlenecked by a few percent. A GTX 580 loses about 5% that way, so not exactly groundbreaking.

  19. #19
    DeltrusDisc
    Quote Originally Posted by Frozenbeef View Post
    yay more things to spend money on <3
    Please don't waste your money on this gimmick...

  20. #20
    Deleted


    So far, this marketing presentation is the only indication of Kepler's capabilities compared to Fermi that I know of, and coming from Nvidia it should be taken with a pinch of salt.

    Anyway, we seem to be going from about 2 DP GFLOPS per watt with Fermi to 5 DP GFLOPS per watt with Kepler.

    Now, while I don't personally know what DP GFLOPS per watt means, it clearly indicates an improvement in efficiency and probably power consumption, but it says nothing about actual performance in terms of FPS. We also don't know which Kepler generation they're referring to.

    While I think it's safe to assume that power consumption will drop significantly, I wouldn't expect actual performance increase of more than 20-30% compared to current GPUs.

    After all, lowering power consumption by 20% while increasing performance by 20% still yields a 50% increase in performance per watt.
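    That compounding step is easy to verify, since the two scale factors multiply:

    ```python
    # -20% power and +20% performance compound multiplicatively:
    power_scale = 0.80  # 20% lower power draw
    perf_scale = 1.20   # 20% higher performance
    perf_per_watt_gain = perf_scale / power_scale  # 1.2 / 0.8 = 1.5
    print(f"{(perf_per_watt_gain - 1) * 100:.0f}% better performance per watt")  # 50%
    ```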
    Last edited by mmoc433ceb40ad; 2011-10-17 at 08:18 PM.
