  1. #61
    Old God Vash The Stampede's Avatar
    10+ Year Old Account
    Join Date
    Sep 2010
    Location
    Better part of NJ
    Posts
    10,939
    Quote Originally Posted by Artorius View Post
    And even then Steam has 3x more people using OSX than Linux. It doesn't matter, and people can always open their parallels/use boot-camp.
    We'll see when the SteamBox is released in October. Personally, I'm not a big fan of running multiple OSes on a single machine. If I were a Mac gamer, I would remove OS X and just install Windows, but then that's not the reason you own an Apple product.

    Quote Originally Posted by Hellfury View Post
    My previous card was an AMD 5970; now it's an Nvidia 970.

    I am extremely disappointed to learn that my 970 probably won't be future-proof at all, and Nvidia probably wants that as well, to sell more Pascals.

    I am happy for AMD; a stronger adversary means better prices and cards for consumers.

    It's because we have a weak AMD that Nvidia can get away with selling cards at $700 that can barely play 4K, and other cards with 3.5GB of RAM disguised as 4GB.
    Disappointingly, owners of HD 5000 and HD 6000 cards won't get DX12 or Vulkan support. Keep in mind it's not so much that Nvidia screwed up as that AMD's Mantle was a success; APIs like DX12 and Vulkan are the result of Mantle. Nvidia's GPUs weren't designed around APIs like DX12 because Nvidia was busy making GPUs for DX11. Also remember that DX12 won't create magic performance. If Mantle didn't do much for existing DX11 titles, don't expect much from DX12 and Vulkan either.

    Your 970 will do fine. It's not until games take advantage of DX12 that we'll see the benefits, and that's going to be a while. Converting existing DX11 games over to DX12 won't do much for gamers.

  2. #62
    Fluffy Kitten Remilia's Avatar
    10+ Year Old Account
    Join Date
    Apr 2011
    Location
    Avatar: Momoco
    Posts
    15,160
    It seems Nvidia GPUs don't have Asynchronous Compute capabilities.
    http://www.overclock.net/t/1569897/v...#post_24356995
    Personally, I think one could just as easily make the claim that we were biased toward Nvidia as the only 'vendor' specific code is for Nvidia where we had to shutdown async compute. By vendor specific, I mean a case where we look at the Vendor ID and make changes to our rendering path. Curiously, their driver reported this feature was functional but attempting to use it was an unmitigated disaster in terms of performance and conformance so we shut it down on their hardware. As far as I know, Maxwell doesn't really have Async Compute so I don't know why their driver was trying to expose that.
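    For anyone curious what that kind of vendor-specific path looks like in practice: engines typically key it off the PCI vendor ID that DXGI reports for the adapter. The sketch below only illustrates that detection step; it is not Oxide's code, and the function name and the idea of returning a single allow/deny flag are my own assumptions.

        // Illustrative sketch only (not Oxide's code): read the adapter's PCI vendor ID
        // via DXGI and use it to gate the async compute render path.
        #include <dxgi1_4.h>
        #include <wrl/client.h>

        bool AllowAsyncCompute()
        {
            using Microsoft::WRL::ComPtr;

            ComPtr<IDXGIFactory4> factory;
            if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
                return false;

            ComPtr<IDXGIAdapter1> adapter;
            if (FAILED(factory->EnumAdapters1(0, &adapter)))  // primary adapter
                return false;

            DXGI_ADAPTER_DESC1 desc = {};
            adapter->GetDesc1(&desc);

            const UINT kVendorNvidia = 0x10DE;  // PCI vendor IDs: NVIDIA 0x10DE, AMD 0x1002
            // Per the quote: the driver exposes the feature, but performance/conformance
            // problems forced the developer to fall back to the serial path on this vendor.
            return desc.VendorId != kVendorNvidia;
        }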

  3. #63
    Deleted
    Something else that made me laugh at Nvidia is this

    http://www.dsogaming.com/news/oxide-...the-benchmark/

  4. #64
    Old God Vash The Stampede's Avatar
    10+ Year Old Account
    Join Date
    Sep 2010
    Location
    Better part of NJ
    Posts
    10,939
    Quote Originally Posted by Thorianrage View Post
    Something else that made me laugh at Nvidia is this

    http://www.dsogaming.com/news/oxide-...the-benchmark/
    Nvidia has always had a lot of power over what developers do with their code. Money talks, so... GameWorks exposes a weakness in AMD's GPUs with tessellation, but Nvidia hardware isn't without its own flaws. Right, GTX 970 owners with your 3.5GB of memory? I'll give you a couple of examples of how Nvidia has influenced developers in the past when its hardware wasn't up to the task.

    Let's go back to the original Xbox, when it had Nvidia graphics in it. Microsoft was annoyed with Nvidia because it wouldn't lower its prices so the Xbox could compete against the PS2/GC. DX9 was in the works, and both Nvidia and ATI had their own ideas of what the DX9 specification would look like. Of course, both designed hardware before the specification was finalized, because of course they would. Nvidia thought Microsoft would go with its approach, but Microsoft ended up closer to ATI's. The difference was precision: DX9 called for 24-bit floating-point precision in the pixel shaders, which is exactly what ATI's GPUs ran natively. Nvidia had built the GeForce FX around 16-bit and 32-bit precision instead, and full 32-bit precision ran like crap on that hardware. So for a while, "DX9" games were really mostly DX8.1 games with a few DX9 functions that didn't force Nvidia hardware into full precision, and of course Nvidia made sure developers didn't use many real DX9 features. It wasn't until Half-Life 2, which really used DX9, that GeForce FX cards looked truly bad.

    Fast forward a bit: Nvidia made DX10 graphics cards while ATI was late with its iteration. Eventually ATI released its cards, but with DX10.1 support. DirectX 10 was controversial because any game that leaned heavily on DX10 features was barely playable on the GPUs of the time. DX10.1, on the other hand, gave a performance boost that made those features playable. Assassin's Creed was released with support for DX10.1, but Ubisoft later patched it out. And since DX10.1 was only available on ATI hardware at the time, it seemed rather suspicious that a feature which worked perfectly well was removed from the game.

    So when Nvidia asks developers to remove a feature from the game don't act surprised. They have a history of this.
    Last edited by Vash The Stampede; 2015-08-30 at 06:02 PM.

  5. #65
    The Unstoppable Force Gaidax's Avatar
    10+ Year Old Account
    Join Date
    Sep 2013
    Location
    Israel
    Posts
    20,863
    Quote Originally Posted by Thorianrage View Post
    Something else that made me laugh at Nvidia is this

    http://www.dsogaming.com/news/oxide-...the-benchmark/
    Basically that means that Nvidia does not properly support DX12... wow that's a huge clusterfuck.

  6. #66
    Fluffy Kitten Remilia's Avatar
    10+ Year Old Account
    Join Date
    Apr 2011
    Location
    Avatar: Momoco
    Posts
    15,160
    Good thing some big sites are posting this.
    http://www.techpowerup.com/215663/la...irectx-12.html
    Quote Originally Posted by Gaidax View Post
    Basically that means that Nvidia does not properly support DX12... wow that's a huge clusterfuck.
    Pretty much. A major feature of DX12 is Asynchronous Compute, and it seems no current Nvidia GPU supports it in hardware. The driver reports that it's there, but the hardware just doesn't do it.

    I don't know when the Pascal architecture was finalized and fleshed out, but it has taped out at TSMC, so it's not going to change unless they pull it. The Xbox One contains two Asynchronous Compute Engines and the PS4 contains 8 ACEs. For the PS4 these were planned as far back as 2009, according to an interview.
    http://www.gamasutra.com/view/featur...k_.php?print=1
    If Pascal doesn't have dedicated ACEs of some kind, then PC ports are going to either have the async compute stuff axed (most likely for GameWorks games), keep it and have it prove a detriment to Nvidia cards at compute-heavy settings, or do some weird workaround. I'm honestly hoping for the second or third option.
    GCN 1.0 has 2 ACEs; GCN 1.1/1.2 has 8. The PS4's GPU is closer to 1.1/1.2, though it isn't exactly either, if that makes sense.
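    To make the "async compute" term concrete: in D3D12 it simply means submitting work on a COMPUTE command queue alongside the usual DIRECT (graphics) queue. A bare-bones sketch, assuming a device already exists; whether the two queues actually run concurrently is entirely up to the hardware and driver, which is exactly what the ACE discussion is about.

        // Bare-bones sketch: create the two queue types DX12 async compute relies on.
        // `device` is assumed to exist already; error handling is omitted.
        #include <d3d12.h>
        #include <wrl/client.h>

        using Microsoft::WRL::ComPtr;

        void CreateQueues(ID3D12Device* device,
                          ComPtr<ID3D12CommandQueue>& graphicsQueue,
                          ComPtr<ID3D12CommandQueue>& computeQueue)
        {
            D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
            gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics + compute + copy
            device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

            D3D12_COMMAND_QUEUE_DESC compDesc = {};
            compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE; // compute + copy only
            device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(&computeQueue));

            // Work submitted to these two queues *may* overlap on hardware with
            // independent compute front-ends (AMD's ACEs); without them, the driver
            // ends up serialising it, which is the behaviour Oxide reportedly hit.
        }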
    Last edited by Remilia; 2015-08-31 at 06:36 PM.

  7. #67
    Deleted
    Pascal taped out a few months ago. Nvidia has the money and could afford to delay Pascal to respin it; the question then becomes when it would actually come out.

    A few predicaments Nvidia faces in bringing Pascal to market:

    TSMC node/fab capacity (AMD GPUs may suffer from this as well)
    HBM availability - the rumour is that AMD has priority on the global supply of HBM2
    ACE units, or the lack of them, since async compute is a DX12 feature meant to gain performance by making better use of both the CPU and GPU
    Why does async compute matter to a lot of people on these forums? MMOs. A lot of people here play MMOs, so this discussion is highly relevant.

    A respin of Pascal means AMD will have a product on the market with full use of DX12 for a hell of a lot longer than Nvidia, the way things are looking.

    Nvidia's response to Oxide most likely has more to do with Pascal than with Maxwell in its current form, at least IMO. I really don't think Pascal handles anything like ACEs properly, not with Nvidia's investment in serial execution and its optimisation for DX11.

    MMOs are by nature compute-driven. Being able to move more compute to the GPU means more FPS in large-scale content, and frankly it's the most cost-effective way to get it if you just look at the 290's results in the Ashes benchmark. That's a huge boon for budget builders gaming on MMOs, a massive one.

    I actually hope serial processing ends up being a low priority and that this has nothing to do with AMD vs Nvidia, but a lot of the games I play frequently are MMOs, so this is a great deal of benefit to me personally, and I'm sure to plenty of others.
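    To show what "moving compute to the GPU" looks like on the API side, here is a skeletal sketch of submitting a compute workload (say, crowd skinning or culling for an MMO scene) on the compute queue and fencing it for the graphics queue. The device, queue, pipeline state and root signature are assumed to already exist; resource binding and the shader itself are omitted, so treat this as an outline rather than working engine code.

        // Skeletal sketch: record and submit a compute dispatch on the compute queue,
        // then signal a fence so the graphics queue can wait before reading the results.
        // `device`, `computeQueue`, `computePso` and `computeRootSig` are assumed to exist.
        #include <d3d12.h>
        #include <wrl/client.h>

        using Microsoft::WRL::ComPtr;

        void DispatchAsyncCompute(ID3D12Device* device,
                                  ID3D12CommandQueue* computeQueue,
                                  ID3D12PipelineState* computePso,
                                  ID3D12RootSignature* computeRootSig,
                                  UINT threadGroups)
        {
            ComPtr<ID3D12CommandAllocator> allocator;
            device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_COMPUTE,
                                           IID_PPV_ARGS(&allocator));

            ComPtr<ID3D12GraphicsCommandList> list;
            device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_COMPUTE,
                                      allocator.Get(), computePso, IID_PPV_ARGS(&list));

            list->SetComputeRootSignature(computeRootSig);
            // (resource bindings omitted)
            list->Dispatch(threadGroups, 1, 1);   // the actual GPU-side work
            list->Close();

            ID3D12CommandList* lists[] = { list.Get() };
            computeQueue->ExecuteCommandLists(1, lists);

            // The graphics queue can Wait() on this fence value before consuming results.
            ComPtr<ID3D12Fence> fence;
            device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
            computeQueue->Signal(fence.Get(), 1);
        }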

  8. #68
    Quote Originally Posted by Thorianrage View Post
    Pascal taped out a few months ago. Nvidia has the money and could afford to delay Pascal to respin it; the question then becomes when it would actually come out.

    A few predicaments Nvidia faces in bringing Pascal to market:

    TSMC node/fab capacity (AMD GPUs may suffer from this as well)
    HBM availability - the rumour is that AMD has priority on the global supply of HBM2
    ACE units, or the lack of them, since async compute is a DX12 feature meant to gain performance by making better use of both the CPU and GPU
    Why does async compute matter to a lot of people on these forums? MMOs. A lot of people here play MMOs, so this discussion is highly relevant.

    A respin of Pascal means AMD will have a product on the market with full use of DX12 for a hell of a lot longer than Nvidia, the way things are looking.

    Nvidia's response to Oxide most likely has more to do with Pascal than with Maxwell in its current form, at least IMO. I really don't think Pascal handles anything like ACEs properly, not with Nvidia's investment in serial execution and its optimisation for DX11.

    MMOs are by nature compute-driven. Being able to move more compute to the GPU means more FPS in large-scale content, and frankly it's the most cost-effective way to get it if you just look at the 290's results in the Ashes benchmark. That's a huge boon for budget builders gaming on MMOs, a massive one.

    I actually hope serial processing ends up being a low priority and that this has nothing to do with AMD vs Nvidia, but a lot of the games I play frequently are MMOs, so this is a great deal of benefit to me personally, and I'm sure to plenty of others.
    Just on the bolded part: that would be Nvidia's Hyper-Q technology, essentially their own branded version of an ACE. The current-generation Maxwell 2 graphics cards were supposed to have 32 Asynchronous Compute Engines, compared to the 8 that current AMD cards have. However, this simply turned out not to be true. They have 1 async compute engine with 32 queues, hence the really bad comparative performance.

    This can easily fall under the same category as the 3.5GB vs 4GB 970 argument. Whether it was a marketing oversight or just pure bullshit on the PR team's part is hard to say. However, Pascal is actually designed to have the full 32 ACEs. I doubt Nvidia is going to do much about this and won't rush a Pascal launch; more likely it's just cruise control and damage control until Pascal actually drops.

  9. #69
    Fluffy Kitten Remilia's Avatar
    10+ Year Old Account
    Join Date
    Apr 2011
    Location
    Avatar: Momoco
    Posts
    15,160
    Quote Originally Posted by Shinzai View Post
    Just on the bolded part: that would be Nvidia's Hyper-Q technology, essentially their own branded version of an ACE. The current-generation Maxwell 2 graphics cards were supposed to have 32 Asynchronous Compute Engines, compared to the 8 that current AMD cards have. However, this simply turned out not to be true. They have 1 async compute engine with 32 queues, hence the really bad comparative performance.
    Hyper-Q has been there since the inception of Kepler. It's a method of managing resources across the compute and graphics queues. The issue, however, is that if they actually use it, Hyper-Q becomes the serial part of the GPU, completely defeating the point of any parallel computing. AMD's implementation is 8 ACEs with 8 queues per ACE, allowing for 64 compute queues plus one separate graphics engine. The game feeds work into one of the 8 ACEs, which assigns it to a queue to be computed. This allows 8 paths running in parallel, with 64 queues available for compute.

    Hyper-Q is the opposite. It manages the entirety of both graphics and compute. What happens is the game feeds work into Hyper-Q, which assigns it to either the graphics queue or the compute queues; this is why it's described as 1 graphics + 31 compute queues. That makes it serial in nature, because everything goes through one thing: Hyper-Q. If they want to make it parallel, they have to get rid of Hyper-Q. Hyper-Q is fine for GPU-assisted compute in research, video rendering, etc.; it is, however, not fine for real-time compute.
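    Here's a toy model of the two queue topologies described above. It is not real driver or GPU code, just an illustration of "one front-end feeding everything" (the Hyper-Q picture, 1 graphics + 31 compute queues) versus "several independent front-ends" (the 8 ACEs x 8 queues picture); the struct and member names are made up for the example.

        // Toy model only -- not how either GPU actually works internally.
        #include <array>
        #include <cstddef>
        #include <cstdint>
        #include <queue>

        struct Work { uint32_t id; bool isGraphics; };

        // Hyper-Q picture: a single front-end owns the 1 graphics + 31 compute queues,
        // so every submission funnels through one serial point.
        struct SingleFrontEnd {
            std::queue<Work> graphics;
            std::array<std::queue<Work>, 31> compute;

            void submit(const Work& w) {                   // the single serialisation point
                if (w.isGraphics) graphics.push(w);
                else compute[w.id % compute.size()].push(w);
            }
        };

        // ACE picture: 8 independent front-ends with 8 queues each, plus a separate
        // graphics engine, so compute work never funnels through one chokepoint.
        struct AceFrontEnds {
            std::queue<Work> graphicsEngine;
            std::array<std::array<std::queue<Work>, 8>, 8> aces;

            void submitGraphics(const Work& w) { graphicsEngine.push(w); }
            void submitCompute(const Work& w, std::size_t ace) {  // ACEs accept work independently
                aces[ace][w.id % 8].push(w);
            }
        };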
    This can easily fall under the same category as the 3.5GB vs 4GB 970 argument. Whether it was a marketing oversight or just pure bullshit on the PR team's part is hard to say. However, Pascal is actually designed to have the full 32 ACEs. I doubt Nvidia is going to do much about this and won't rush a Pascal launch; more likely it's just cruise control and damage control until Pascal actually drops.
    When your driver says it has something and it doesn't, it's not marketing anymore.
    And honestly, there's not much info on Pascal's compute side. Saying it has 32 ACEs is very premature, as is assuming whether it works the same way as Maxwell's.
    Last edited by Remilia; 2015-08-31 at 07:38 PM.

  10. #70
    Quote Originally Posted by Remilia View Post
    When your driver says it has something and it doesn't, it's not marketing anymore.
    True.

    And honestly, there's not much info on Pascal's compute side. Saying it has 32 ACEs is very premature, as is assuming whether it works the same way as Maxwell's.
    Again true; it's based on rumour-milling from the DX12 presentation at GDC, which Nvidia re-jigged for their own presentation at GTC 2015, where they focused heavily on compute and its potential gaming uses, specifically async compute.

    Nvidia has already planned around async compute, but they probably weren't expecting it to become an actual issue this suddenly, this year.

    As I said earlier, I still expect them to just sit back, watch what happens and throw money at anything that needs fixing afterwards.

    http://on-demand-gtc.gputechconf.com...demand-gtc.php

    http://on-demand.gputechconf.com/gtc...deo/S5561.html
    Last edited by Shinzai; 2015-08-31 at 08:07 PM.

  11. #71
    Fluffy Kitten Remilia's Avatar
    10+ Year Old Account
    Join Date
    Apr 2011
    Location
    Avatar: Momoco
    Posts
    15,160
    The question is really when, and what, it's going to come out as. Honestly though... I find it very hard to believe anything Nvidia says. I go by what companies do, not what they say, and they've been saying pretty words with shit actions; this goes for every company. If they come out with hardware that's capable of it, okay then, but until that happens, whatever. They definitely can't keep their original design, though.

    I honestly don't have even remote faith in Nvidia. I just get the feeling that once Pascal releases, and especially if it has async compute capabilities, Maxwell will get dumped like Kepler. Kepler has already been marked as legacy for months.

    Also, the computing in the first link, deep learning and so on, is FP16 compute, which isn't something games give a shit about. Games mostly rely on FP32 / integer compute.

    What they do want to bring back in Pascal, in terms of computing power, is FP64 / double-precision compute. Maxwell has next to none of it.
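    A quick, self-contained illustration of the precision tiers being thrown around (this is about representable precision only, nothing to do with GPU throughput): FP32 carries roughly 7 significant decimal digits, FP64 roughly 15-16, FP16 only about 3, which is why FP16 deep-learning throughput says little about game shaders or HPC workloads.

        // FP32 has a 24-bit significand, so 2^24 + 1 cannot be represented exactly;
        // FP64's 53-bit significand handles it with ease.
        #include <cstdio>

        int main()
        {
            const float  asF32 = 16777217.0f;   // 2^24 + 1, rounds to 16777216 in FP32
            const double asF64 = 16777217.0;    // exact in FP64

            std::printf("FP32: %.1f\n", static_cast<double>(asF32));             // 16777216.0
            std::printf("FP64: %.1f\n", asF64);                                  // 16777217.0
            std::printf("lost: %.1f\n", asF64 - static_cast<double>(asF32));     // 1.0
            return 0;
        }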
    Last edited by Remilia; 2015-08-31 at 08:40 PM.

  12. #72
    Quote Originally Posted by Remilia View Post
    The question is really when, and what, it's going to come out as. Honestly though... I find it very hard to believe anything Nvidia says. I go by what companies do, not what they say, and they've been saying pretty words with shit actions; this goes for every company. If they come out with hardware that's capable of it, okay then, but until that happens, whatever. They definitely can't keep their original design, though.

    I honestly don't have even remote faith in Nvidia. I just get the feeling that once Pascal releases, and especially if it has async compute capabilities, Maxwell will get dumped like Kepler. Kepler has already been marked as legacy for months.

    Also, the computing in the first link, deep learning and so on, is FP16 compute, which isn't something games give a shit about. Games mostly rely on FP32 / integer compute.

    What they do want to bring back in Pascal, in terms of computing power, is FP64 / double-precision compute. Maxwell has next to none of it.
    The bolded part sounds like typical Nvidia tactics to me, which is why I believe it's a genuine possibility, especially if they pop out a Maxwell refresh alongside Pascal.

  13. #73
    Fluffy Kitten Remilia's Avatar
    10+ Year Old Account
    Join Date
    Apr 2011
    Location
    Avatar: Momoco
    Posts
    15,160
    Quote Originally Posted by Shinzai View Post
    The bolded sounds like typical Nvidia tactics to me, which is why I believe it a genuine possibility. Especially if they pop out a Maxwell refresh alongside the Pascal.
    It's a very anti-consumer tactic, and as a consumer I'm not sure why some people support it.

  14. #74
    The Lightbringer Artorius's Avatar
    10+ Year Old Account
    Join Date
    Dec 2012
    Location
    Natal, Brazil
    Posts
    3,781
    Quote Originally Posted by Remilia View Post
    It's a very anti-consumer tactic, and as a consumer I'm not sure why some people support it.
    It's worse when people try to argue that Nvidia has "better drivers", when old cards have been showing some very strange performance with the latest drivers.
    They might be better at day-0 driver optimisation, but in the long run the competitor is much better at improving old products' performance.

  15. #75
    The Lightbringer Evildeffy's Avatar
    15+ Year Old Account
    Join Date
    Jan 2009
    Location
    Nieuwegein, Netherlands
    Posts
    3,772
    May I remind you two that the DX12 performance you've seen from nVidia in Ashes of the Singularity is with asynchronous shaders disabled?
    Basically reverting to a DX11 way of processing, with DX12 effects on top?

    On top of that, it's now speculated that more features that are part of DX12_1 are driver-simulated by nVidia.

    I.e. the DX12 performance you've seen so far is not truly DX12 but enhanced DX11, because of the lack of hardware support.
    The exact statement was that the "true" DX12 performance was so horrible and disastrous that they disabled that path for nVidia hardware.

  16. #76
    I am disgusted by this, but not surprised.

    I mean, the 970 is a great card for DX11, but it makes me sad that for DX12 it will just be an expensive brick.

    Also, the fact that the card is so popular will make most game studios that don't have the necessary R&D just forget about DX12 and continue development for DX11, because what's the point if the Nvidia cards don't gain from it?

  17. #77
    The Lightbringer Evildeffy's Avatar
    15+ Year Old Account
    Join Date
    Jan 2009
    Location
    Nieuwegein, Netherlands
    Posts
    3,772
    Quote Originally Posted by Hellfury View Post
    I am disgusted by this, but not surprised.

    I mean, the 970 is a great card for DX11, but it makes me sad that for DX12 it will just be an expensive brick.
    It will not be a brick; it will still function well, just not as well in DX12. It doesn't mean GTX 900 series (or earlier) cards are invalidated, just overpriced.

    Quote Originally Posted by Hellfury View Post
    Also, the fact that the card is so popular will make most game studios that don't have the necessary R&D just forget about DX12 and continue development for DX11, because what's the point if the Nvidia cards don't gain from it?
    Except that developers will never do that, as it would kill their business; it would mean taking a considerable performance hit and shipping games with less impressive graphics.
    What does the mob live on? "IT MUST BE AS CRISP AS MY BUTT HAIR, IF NOT IT'S NOT A GOOD GAME! IT MUST RUN WELL ON MY 2 DOLLAR GFX CARD TOO!"

    Besides, developers have been begging for this kind of control for years; they will not leave it behind.

  18. #78
    The Unstoppable Force Gaidax's Avatar
    10+ Year Old Account
    Join Date
    Sep 2013
    Location
    Israel
    Posts
    20,863
    Well, I just pity all the guys who bought a super-expensive GPU from NVIDIA in hopes of future-proofing. I almost jumped the 980 Ti gun too, as I usually buy an expensive GPU and then keep it for 2-3 years, but I guess I'll be staying with my 290X for some more time now and just wait for HBM2 products.

  19. #79
    Over 9000! zealo's Avatar
    10+ Year Old Account
    Join Date
    Jan 2013
    Location
    Sweden
    Posts
    9,519
    The way I see it, how this goes for Nvidia really depends on whether Pascal supports async compute at levels similar to GCN. If it does not, AMD has a really good shot at coming back properly.

    Not that surprised, even if it sucks for me personally, having a high-end rig with Kepler SLI. I was never really expecting it to last past DX12 without needing a hardware upgrade, but I didn't expect games to start using DX12 in any serious manner just yet.

  20. #80
    Maxwell sucks in DX12... of course it does; they want you to upgrade to Pascal. Duuhhhh. That said, expect a pretty serious jump in performance with the next-gen GPUs. 14nm + DX12, oh boy, it's gonna be amazing.
