1. #1
    Deleted

    28nm mobile GPUs leaked

    A couple of days ago some preliminary facts were leaked about Nvidia's and AMD's upcoming 28nm mobile GPUs.

    Nvidia

    [leaked spec chart not preserved]

    AMD

    [leaked spec chart not preserved]
    Now obviously this isn't hard evidence and doesn't say much about the potential of desktop GPUs. But it might give us a solid indication of what to expect from the next generation of GPUs.

    Edit: Bear in mind that Nvidia's list does not refer to the new Kepler architecture; it refers to a Fermi die shrink. Kepler models won't be out until roughly three months after.
    Last edited by mmoc433ceb40ad; 2011-08-25 at 11:31 PM.

  2. #2
    Sweet mother of... Those wattages make me tingly.

  3. #3
    256-bit GDDR5 is plenty even for ultra-high-end cards.

  4. #4
    The N13E-GTX is probably something like the GTX 660M, and all the low-end GPUs are probably a die shrink of Fermi.

  5. #5
    Quote Originally Posted by Synthaxx View Post
    Hmmm.... NVidia going back to 256Bit. Strange move considering they've been 384-bit for most of their major cards in the past few years. No doubt they'll be looking to pump up the clocks with such a move. Perhaps this is a move enforced by the masses who have had issues with their 580's and 570's misbehaving.

    Of course, it could just be limited to mobile chips, but it's still something worth considering.
    The 560 Ti uses 256-bit, the 570 uses 320-bit; only the 580 uses 384-bit (from the 500 series).

    -edit- Just looked up the specs for the mobile GPUs: the GTX 560M/GTX 570M are 192-bit, the GTX 580M is 256-bit.
    Last edited by Asmekiel; 2011-08-25 at 11:14 PM.

  6. #6
    Out of idle curiosity, I think it'd be awesome to have a series of videos generally explaining what videocard differences are. For people like me, I have to doublecheck with you all because I get easily confused by this shit. :P

    I can read the various clockspeeds, etc, but I can't say I find comparing them easy. :P

  7. #7
    Titan · 10+ Year Old Account · Join Date: Oct 2010 · Location: America's Hat · Posts: 14,141
    My understanding was that Nvidia was a long way off from releasing any 28nm graphics cards, both desktop and mobile. Maybe the die shrink problems are for desktops. Good to see that both companies are pushing for better power consumption; too bad the top-end chips will be expensive, and we will see more powerful models before the 28nm chips are widely used.
    Last edited by Rennadrel; 2011-08-25 at 11:09 PM.

  8. #8
    Quote Originally Posted by Rennadrel View Post
    My understanding was that Nvidia was a long way off from releasing any 28nm graphics cards, both desktop and mobile. Maybe the die shrink problems are for desktops. Good to see that both companies are pushing for better power consumption; too bad the top-end chips will be expensive, and we will see more powerful models before the 28nm chips are widely used.
    Last estimates were Q1 for both companies, with AMD maybe even late December. Looking at the charts, this seems about right. Starting mass production/shipping doesn't mean (high) availability.
    The TDP of a GTX 560M is 75W according to http://www.notebookcheck.net/NVIDIA-...I.58864.0.html. Looking at the chart, there hasn't been much change in this.

    -edit- Grats on 1k posts

    -extra edit- And hooray for MMO-Champion for getting the old smileys back. Three cheers for Boub!

  9. #9
    Deleted
    Quote Originally Posted by Drunkenvalley View Post
    Out of idle curiosity, I think it'd be awesome to have a series of videos generally explaining what videocard differences are. For people like me, I have to doublecheck with you all because I get easily confused by this shit. :P

    I can read the various clockspeeds, etc, but I can't say I find comparing them easy. :P
    It's difficult to generalize what to look for in a GPU since the specs and relative performance change with every generation.

    Most of the time you should be looking at specific models, rather than specs.

    That said, there are a few general statements to be made.

    Memory width: "Gaming" GPUs are defined by their 256-bit or wider memory interface. Anything with less is generally considered a "low-end" GPU and shouldn't really be considered for serious gaming. Generally, GPUs from 150 dollars upwards feature a 256-bit memory interface; 128-bit or even 64-bit is reserved for low-end budget machines.

    Memory size: Memory size is one of those aspects that steadily increases over time. Similar to normal RAM, memory size dictates how much texture information a GPU can store and process at once. The general consensus is that with current graphics details, 1 GB of video memory is sufficient for any single-monitor Full-HD setup. More than that 1 GB only translates into extra performance in combination with even higher resolutions and multi-monitor setups.

    Memory type: Just like regular RAM, video RAM is updated on a regular basis, resulting in lower power consumption, higher clock rates, or faster access times. Generally, the higher the number after "GDDR", the better the RAM.
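    Width and type combine into peak memory bandwidth: bus width (in bytes) times the effective data rate. A minimal sketch with assumed example figures (not numbers from the leak):

    ```python
    def memory_bandwidth_gbs(bus_width_bits, effective_clock_mhz):
        """Peak memory bandwidth in GB/s from bus width and effective data rate."""
        bytes_per_transfer = bus_width_bits / 8
        return bytes_per_transfer * effective_clock_mhz * 1e6 / 1e9

    # Same 256-bit bus and 1000 MHz base clock; GDDR3 transfers twice per
    # clock, GDDR5 four times, so GDDR5 doubles the bandwidth:
    gddr3 = memory_bandwidth_gbs(256, 1000 * 2)  # 64.0 GB/s
    gddr5 = memory_bandwidth_gbs(256, 1000 * 4)  # 128.0 GB/s
    ```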

  10. #10
    The bus width and the memory type are both about memory bandwidth. If the bandwidth is sufficient, there is no reason to go beyond 256-bit GDDR5.
    You may get away with lower-clocked memory on a wider bus, since that produces the same bandwidth, but that's about it.
    We might see 128-bit in the mid-to-high-end segment again if Nvidia decides to use memory faster than GDDR5 (XDR2, for example).
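    That trade-off can be sketched numerically; the 8 GT/s figure for a hypothetical faster memory is an assumption for illustration only:

    ```python
    def bandwidth_gbs(bus_bits, data_rate_gts):
        """Peak bandwidth in GB/s: bus width in bytes times data rate in GT/s."""
        return bus_bits / 8 * data_rate_gts

    # 256-bit GDDR5 at 4 GT/s vs a hypothetical 128-bit bus with memory
    # running twice as fast -- the same peak bandwidth on half the bus:
    wide = bandwidth_gbs(256, 4)    # 128.0 GB/s
    narrow = bandwidth_gbs(128, 8)  # 128.0 GB/s
    ```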
    Last edited by haxartus; 2011-08-26 at 08:10 AM.

  11. #11
    Aha. I notice some folks around here overclock their stuff -- does this produce any useful results?

  12. #12
    The Patient warhead0 · 10+ Year Old Account · Join Date: Aug 2010 · Location: Queensland, Australia · Posts: 304
    Interesting, but some things about these tables do seem rather odd.

    Not sure if fake or not....

  13. #13
    The Lightbringer Uggorthaholy · 15+ Year Old Account · Join Date: Feb 2009 · Location: Weatherford, TX · Posts: 3,169
    Quote Originally Posted by Drunkenvalley View Post
    Aha. I notice some folks around here overclock their stuff -- does this produce any useful results?
    I noticed an increase in folding speed after OC'ing just a modest 800/1600 on my GTX 460.
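    Assuming the GTX 460's reference clocks of 675 MHz core / 1350 MHz shader (on Fermi the shader domain runs at twice the core clock), that "modest" 800/1600 works out as:

    ```python
    # Reference vs. overclocked core clock in MHz; 675 MHz is the assumed
    # stock core clock of a reference GTX 460.
    stock_core, oc_core = 675, 800
    gain_pct = (oc_core - stock_core) / stock_core * 100
    # roughly an 18.5% overclock
    ```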

  14. #14
    Actually, I think it might be AMD that will come first with the faster XDR2 memory (at least for the HD 7970); Nvidia is "fine" with their 384-bit GDDR5.
    256-bit XDR2 sounds good enough.
    Last edited by haxartus; 2011-08-26 at 09:01 AM.

  15. #15
    Bloodsail Admiral dicertification · 10+ Year Old Account · Join Date: Mar 2011 · Location: Canada · Posts: 1,006
    Quote Originally Posted by warhead0 View Post
    Interesting, but some things about these tables do seem rather odd.

    Not sure if fake or not....

    I'm a regular SemiAccurate reader and followed him while he was still at the Inq. While Charlie does edit model numbers and such to protect his moles, he has a pretty good record when it comes to early information. He has been very wrong in the past as well, though; hence the site name.
