  1. #821
    Quote Originally Posted by Evildeffy View Post
    If 800 for launch numbers for Germany is accurate... it's more stock than the GTX 1080/1070 had when they launched... a year ago.

    I can see Vega 56 being a success when 3rd party designs come out but reference... meh.
    The whole price thing is also pretty shit though... I mean I know nVidia did/does the same but still a dick move no-one expected from AMD.

    It seems to do well for compute side of things but meh ... not exactly what I was hoping for in terms of performance competition.

    Ah well.
    I have a sneaky suspicion that the price increase is so that AMD can pocket some of the profits from the mining demand hike at the moment instead of this profit going to retailers.

    The increased prices rule Vega out completely from a price/perf point of view without the shortages going on at the moment. The 64 was already a really tough sell and the 56 slightly overpriced but still viable. With the increased prices they are out of whack with reality at MSRP unless it's what I said before. A way to pocket some of the cash from the mining boom.

    - - - Updated - - -

    Quote Originally Posted by caralhoPT View Post
    Still nothing to compete with or beat the 1080 Ti, as expected. Guess I won't be selling my 1080 Ti anytime soon.
    They don't really need to because it's a very small market. As long as they are price/perf competitive with the other cards then it's fine. The latest pricing news is more of a problem than not having a 1080 Ti competitor.

  2. #822
    The Lightbringer Evildeffy
    Quote Originally Posted by Gray_Matter View Post
    I have a sneaky suspicion that the price increase is so that AMD can pocket some of the profits from the mining demand hike at the moment instead of this profit going to retailers.

    The increased prices rule Vega out completely from a price/perf point of view without the shortages going on at the moment. The 64 was already a really tough sell and the 56 slightly overpriced but still viable. With the increased prices they are out of whack with reality at MSRP unless it's what I said before. A way to pocket some of the cash from the mining boom.
    They could be expecting a massive performance boost once the missing driver features are implemented...
    Which IS possible but highly unlikely.

    This may push their higher MSRP, but it'd be a stupid way to go about it, as first impressions are what last, unfortunately.

  3. #823
    Quote Originally Posted by Lathais View Post
    The problem is, the FE was nothing more than a glorified reference cooler, which historically has been less expensive than AIB designs. Of course, we won't know for quite some time, but perhaps that's what AMD was attempting to do here, but then still give early adopters a better price? I dunno, it kinda sounds like I am making excuses for them saying that though. I guess we'll see when AIBs come out if they are less expensive or not.
    I don't really see a problem with the FE being more expensive: no one makes you buy it. It also allowed Nvidia and partners to sell cards before actual partner designs were ready.

    On the Vega side I don't see how AIBs are going to make it less expensive. I mean, it's a 486 mm² die with HBM2 and a 12 (6x2) phase VRM. You can probably go lower on Vega 56 with its low power limit, but I don't really see any way to save money on the PCB when it comes to Vega 64.

    - - - Updated - - -

    Quote Originally Posted by Evildeffy View Post
    Failing to see the point I see...

    Most of the time, as prior history has shown, MSRP was actually pretty close before the GTX 10 series and this Vega launch (mining crap not included); the MSRP was also upheld with Polaris AIBs for the most part.

    The point is that FE is nothing but a reference design and "garbage" compared to 3rd party designs... the AIB partners should have been cheaper, and the point was that they weren't always doing that because nVidia was a big enough dick to invoke an FE tax and board partners had to do the same.

    It's the same type of trickery with playing with prices, that was my point: nVidia asked more for the FE to have AIBs justify asking more for their designs.
    AMD asked less with their reference designs to jack up the prices later. Either way, MSRP isn't adhered to at all and we suffer for it.

    Clearly I have to elaborate on the detailed points to get it across.
    You clearly don't realize that board partners also make and sell FE cards? Again, there were plenty of cards cheaper than the FE before all that mining shit started. Also, all this only applies to the 1070 and 1080; both the 1060 and 1080 Ti FEs only sold off Nvidia's website and weren't overpriced compared to AIB MSRP.

  4. #824
    The Lightbringer Evildeffy
    Quote Originally Posted by Thunderball View Post
    You clearly don't realize that board partners also make and sell FE cards? Again, there were plenty of cards cheaper than the FE before all that mining shit started. Also, all this only applies to the 1070 and 1080; both the 1060 and 1080 Ti FEs only sold off Nvidia's website and weren't overpriced compared to AIB MSRP.
    Actually, board partners don't make FE cards... they buy them from nVidia, which has an OEM making them (I don't know which, to be fair); the FE is an nVidia-exclusive build.
    You still aren't reading what I wrote properly but I'll leave it at that.

    And as for the latter part, I SPECIFICALLY mentioned the 1080/1070 and not 1060 and below, go ahead read back.

  5. #825
    Warchief Zenny's Avatar
    10+ Year Old Account
    Join Date
    Oct 2011
    Location
    South Africa
    Posts
    2,171
    Quote Originally Posted by Evildeffy View Post
    Really? Nothing to do with hardware? Well that's weird, since it's always the case with such.
    But regardless, this has been said many times in the past on this forum: in terms of Hawaii and Maxwell it was 8 graphics and compute pipelines including 8 command queues (AMD) vs. 1 command queue and 1 compute pipeline with 31 graphics pipelines for nVidia.
    Quite a big difference; dive a little into history, as that'll tell you more than I can re-hash at the minute.

    Suffice it to say that engineers also stated that AMD's hardware is "more parallel" than nVidia's.
    It's actually 1 Graphics + 31 Compute (or just 32 compute), which has dynamic scheduling on the queues with Pascal:

    http://www.anandtech.com/show/10325/...ition-review/9

    Quote Originally Posted by AnandTech
    For Pascal, NVIDIA has implemented a dynamic load balancing system to replace Maxwell 2’s static partitions. Now if the queues end up unbalanced and one of the queues runs out of work early, the driver and work schedulers can step in and fill up the remaining time with work from the other queues.
    If you want word from Nvidia that they can handle multiple command lists:

    https://developer.nvidia.com/dx12-dos-and-donts

    Quote Originally Posted by Nvidia
    Submit work in parallel and evenly across several threads/cores to multiple command lists
    Command lists are not free threaded so parallel work submission means submitting to multiple command lists
    You still need a reasonable number of command lists for efficient parallel work submission
    Even for compute tasks that can in theory run in parallel with graphics tasks, the actual scheduling details of the parallel work on the GPU may not generate the results you hope for
    Make sure to use just one CBV/SRV/UAV/descriptor heap as a ring-buffer for all frames if you want to aim at running parallel asynchronous compute and graphics workloads
    Keep the main queue open to do rendering work in parallel
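
    Taken together, that guidance describes one pattern: record on several threads, one command list per thread, then hand the whole batch to the queue in a single call. A rough C++/D3D12 sketch of that pattern (names like RecordAndSubmit are illustrative; device/allocator setup and error handling omitted):

    Code:
    #include <d3d12.h>
    #include <thread>
    #include <vector>

    // Command lists are not free-threaded, so each worker thread records
    // into its OWN list (backed by its own allocator).
    void RecordAndSubmit(ID3D12CommandQueue* queue,
                         std::vector<ID3D12GraphicsCommandList*>& lists)
    {
        std::vector<std::thread> workers;
        for (ID3D12GraphicsCommandList* cl : lists) {
            workers.emplace_back([cl] {
                // ... record draw/dispatch calls into cl here ...
                cl->Close(); // finish recording before submission
            });
        }
        for (std::thread& t : workers) t.join();

        // Parallel recording, single hand-off: one ExecuteCommandLists call.
        queue->ExecuteCommandLists(
            static_cast<UINT>(lists.size()),
            reinterpret_cast<ID3D12CommandList* const*>(lists.data()));
    }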
    Quote Originally Posted by Evildeffy View Post
    There are already some reviews out (mentioned prior) where the RX Vega is already beating the 1080 Ti/Titan X on the compute side of things, I'd say they still have that lead... Steve Burke says the same thing in his Gamers Nexus Vega 56 review.
    In compute tests the 1080 Ti and Titan Xp are all over the place, mostly due to not having the correct drivers. Slower Quadro cards beat the Titan Xp, for instance. Functionally the Vega 64 does beat a 1080 Ti in compute tests but is in turn defeated by Quadro cards, which are admittedly far more expensive. If we just look at the raw TFLOP numbers (not much of a metric, true) the gap at the high end has dwindled to nothing.
    Quote Originally Posted by Evildeffy View Post
    Dota 2 and The Talos Principle have Vulkan tacked on as an afterthought... DOOM (2016) had it developed into the engine, we know the difference for those engines... DX12 is a prime example.
    Don't pick stuff that is tacked on after, it's useless as they can't really drive it, hence why I picked DOOM only.
    Otherwise I should adhere to this list: https://en.wikipedia.org/wiki/List_o...Vulkan_support
    You could have been more specific.

    Quote Originally Posted by Evildeffy View Post
    Correct, by that I meant the aforementioned single pipeline for commands and processing for execution, that hasn't changed... that is nVidia's hardware.
    AMD has 8 in Hawaii (R9 290(X) and R9 390(X), unsure of Polaris and deffo unsure of RX Vega) that are deffo controlled and done in parallel where it can ONLY be done serially in Maxwell/Pascal.
    I'm not sure how you equate 31 compute to serial.
    Quote Originally Posted by Evildeffy View Post
    They deffo are, they mightn't suck at their driver scheduling, but this stands.
    And the driver can schedule multiple command lists across multiple compute queues that can be dynamically altered between graphics/compute.

    Quote Originally Posted by Evildeffy View Post
    You think this has nothing to do with the fact of how their hardware is designed?
    Why is it you think Maxwell showed regression in DX12/Vulkan? If they hadn't implemented the mid-process interrupt they showcased at the Pascal launch event they'd still have regression, because of their serial pipeline.
    This example was showcased here: Clicky for Picture!
    Your statement is saying that AMD's DX11 performance is purely based on drivers at this rate, where their own architecture for parallelization is getting in the way.
    AMD is limited in how DX11 is designed, it cannot take full advantage of the hardware scheduling it offers, Nvidia is far more flexible in DX11 due to the way they schedule work. In DX12 AMD is fully able to leverage hardware that they couldn't really before and thus the massive boost it undergoes, Nvidia gets very little in the way of this but show no regression in DX12 if the implementation is done correctly.

    Neither Maxwell nor Pascal has any sort of regression in a well optimized DX12/Vulkan path. Maxwell does lose performance with async though (it's disabled in drivers); Pascal does not, although we are talking about Pascal here.

    https://m.hardocp.com/article/2017/0...card_review/11

    You'll notice a number of games that Nvidia actually gains performance in DX12. Some of them are even AMD Evolved titles.
    Quote Originally Posted by Evildeffy View Post
    Your evidence is all in the history of this forum... perhaps @Artorius can give you a link to it since he's paying attention, otherwise dive into the history of my posts and find it somewhere, it's all in there.
    We've danced this dance before, nobody actually links any evidence when I ask for it.
    Quote Originally Posted by Evildeffy View Post
    See, this is where we disagree: you claim the bulk is fully parallel, I claim having a driver through a single pipeline telling the card what to do isn't really parallel but at the same time is.
    The driver can submit work to multiple queues in parallel, as per the Nvidia link above.
    Quote Originally Posted by Evildeffy View Post
    Let me ask it of you this way:
    Are the Warp Schedulers, as nVidia refers to them, fully independent? Or are they to be driven by the drivers?
    Are the similar features in AMD cards driven by drivers or are they driven by the HWS?
    Warp schedulers are fully in hardware:

    http://cdn2.ubergizmo.com/wp-content...maxwell-sm.gif

    Quote Originally Posted by Evildeffy View Post
    There's a big difference and I didn't state nVidia sucked for it... far from it, they've done well, but having a driver middleman is not 100% parallelization.
    Why is it not when the driver can submit work in parallel? Being software does not prevent a system from being parallel.

    - - - Updated - - -

    Quote Originally Posted by Lathais View Post
    and you don't think that the reason that AMD gets a bigger boost is because doing it with the hardware scheduler is more efficient? So that does not prove that it IS the hardware that is making the difference.

    No one is saying nvidia's GPUs cannot do this, just that being done by software means that the hardware is less parallel. Yes, the SOFTWARE can handle the parallel instructions, but the hardware cannot. With AMD, the hardware CAN handle it because its hardware is parallel and nvidia's is not. nvidia emulates parallel hardware with software that is less efficient.
    Did you look at the link? Nvidia is almost twice as efficient in DX11 as AMD is, but they both perform similarly in DX12. The larger boost for AMD is due to AMD starting from a worse position.

    Plus the idea that Nvidia hardware is not parallel is pure nonsense. Do you think 3.5k shader cores are fed in serial?
    Last edited by Zenny; 2017-08-16 at 08:44 AM.

  6. #826
    The Lightbringer Evildeffy's Avatar
    15+ Year Old Account
    Join Date
    Jan 2009
    Location
    Nieuwegein, Netherlands
    Posts
    3,772
    Quote Originally Posted by Zenny View Post
    It's actually 1 Graphics + 31 Compute (or just 32 compute), which has dynamic scheduling on the queues with Pascal:

    http://www.anandtech.com/show/10325/...ition-review/9
    Yes, I reversed the pipeline types... woohoo!
    The point I was trying to make is that any command given has to go through that 1 big pipeline first to reach the 31 others to be processed, like a highway with 1 lane; that pipeline, as much as the drivers may make it look parallel, is still serial.
    Even in the part below what I'm quoting it says the driver still needs to step in.

    Quote Originally Posted by Zenny View Post
    If you want word from Nvidia that they can handle multiple command lists:

    https://developer.nvidia.com/dx12-dos-and-donts
    Yes they can, but it's still processed serially; hell, even the last sentence of your quote says it.
    The architecture is built around serial processing and parallel execution with driver intervention.

    I will repeat myself in saying that that isn't necessarily bad, but both architectures have pros and cons.
    nVidia's way allows for high speed in DX11 (and below) because the API and GPU work in tandem, and it gains great performance from it.
    AMD's way tries to do parallelization even if the API doesn't, and loses a bit of performance in older APIs, where the newer ones can exploit it much better.

    Your draw call example is one such example where the parallelization hardware comes to full power on AMD and nVidia's is "more limited" and that's purely because of how the architecture is set up.

    Quote Originally Posted by Zenny View Post
    In compute tests the 1080 Ti and Titan Xp are all over the place, mostly due to not having the correct drivers. Slower Quadro cards beat the Titan Xp, for instance. Functionally the Vega 64 does beat a 1080 Ti in compute tests but is in turn defeated by Quadro cards, which are admittedly far more expensive. If we just look at the raw TFLOP numbers (not much of a metric, true) the gap at the high end has dwindled to nothing.
    Doesn't change my statement does it?

    Quote Originally Posted by Zenny View Post
    You could have been more specific.
    Perhaps I should have been; I assumed it was common knowledge.

    Quote Originally Posted by Zenny View Post
    I'm not sure how you equate 31 compute to serial.
    To get to those 31 pipelines you HAVE to send a command/queue list through the 1 pipeline first, meaning you physically CANNOT push more than 1 stream of data through; that equates to serial, and it describes the entire Maxwell/Pascal architecture right there.

    Quote Originally Posted by Zenny View Post
    And the driver can schedule multiple command lists across multiple compute queues that can be dynamically altered between graphics/compute.
    Again, the 1 big pipeline highway to 31 pipelines later; cram everything through 1 and you get a bottleneck.

    Quote Originally Posted by Zenny View Post
    AMD is limited in how DX11 is designed, it cannot take full advantage of the hardware scheduling it offers, Nvidia is far more flexible in DX11 due to the way they schedule work. In DX12 AMD is fully able to leverage hardware that they couldn't really before and thus the massive boost it undergoes, Nvidia gets very little in the way of this but show no regression in DX12 if the implementation is done correctly.

    Neither Maxwell nor Pascal has any sort of regression in a well optimized DX12/Vulkan path. Maxwell does lose performance with async though (it's disabled in drivers); Pascal does not, although we are talking about Pascal here.

    https://m.hardocp.com/article/2017/0...card_review/11

    You'll notice a number of games that Nvidia actually gains performance in DX12. Some of them are even AMD Evolved titles.
    With that you've affirmed my point of architecture being a huge player in these things.
    Also, Pascal does gain; I never said that Pascal has regression (it does in shitty DX12 titles where the API is tacked on later, but that also counts for AMD cards, it's not nVidia specific!) because of the mid-op interrupt I linked earlier. That very mid-op interrupt was missing on Maxwell, and those are the cards (I again specifically mentioned those) that have performance regression, so nVidia forced them to operate serially in the drivers to have DX12 behave as DX11 does.
    Does anyone still remember that nVidia promised a driver to enable async compute for Maxwell with good performance gains?

    Simply put, for Maxwell the driver converts DX12/Vulkan into DX11 and operates off of that.

    This creates latency and is also the reason why Maxwell is "catastrophic" for VR according to some developers.

    Quote Originally Posted by Zenny View Post
    We've danced this dance before, nobody actually links any evidence when I ask for it.
    You just did it yourself, no need to anymore.

    Quote Originally Posted by Zenny View Post
    The driver can submit work to multiple queues in parallel, as per the Nvidia link above.
    Still going through that 1 big serial pipeline before it can reach the 31 other pipelines, driver can, card cannot.
    Again: CARD CAN EXECUTE PARALLEL TASKS BUT IS FED SERIALLY.

    Quote Originally Posted by Zenny View Post
    Warp schedulers are fully in hardware:

    http://cdn2.ubergizmo.com/wp-content...maxwell-sm.gif
    Not the question, but your picture does still show that it gets fed the process serially... nice instruction cache there above the schedulers.

    Quote Originally Posted by Zenny View Post
    Why is it not when the driver can submit work in parallel? Being software does not prevent a system from being parallel.
    Again:
    The driver may be parallel and optimized nicely, the card still has to process it serially for it to execute parallel tasks.

    To ask an example of you:
    Why was Maxwell deemed atrocious for VR by developers?
    When you find the answer to this it gives you a good idea.

    Once again:
    No-one said that nVidia cards CANNOT execute tasks in parallel.
    We're saying nVidia cards have to PROCESS COMMANDS in serial due to every command needing to be done by the driver and fed down 1 pipeline to the card which then has to distribute it after to 31 more pipelines creating a bottleneck.
    AMD's cards however have 8 pipelines (not Vega, unsure of that one!) feeding and working to 8 more pipelines without driver interference and is handled by the HWS... which makes it less of a bottleneck and a "true" parallelization card.

    I'M NOT SAYING NVIDIA SUCKS, I'M SAYING BOTH HAVE PROS AND CONS AND THEY ARE DIFFERENT.
    NVIDIA BEING LESS PARALLEL OPTIMIZED VS. AMD BEING LESS SERIAL OPTIMIZED.

    Old vs. New API of things.
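
    To put the "fed serially, executes in parallel" shape above into more familiar terms, here is a minimal, hypothetical C++ sketch (plain CPU threads, not GPU code): one producer pushes every task through a single queue - the "1 lane" - while 31 workers execute concurrently behind it. It is only an analogy for the topology being argued, nothing more.

    Code:
    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    int main() {
        std::queue<std::function<void()>> pipe; // the single lane in
        std::mutex m;
        std::condition_variable cv;
        bool done = false;

        // 31 executors drain the one pipe concurrently.
        std::vector<std::thread> executors;
        for (int i = 0; i < 31; ++i) {
            executors.emplace_back([&] {
                for (;;) {
                    std::function<void()> task;
                    {
                        std::unique_lock<std::mutex> lk(m);
                        cv.wait(lk, [&] { return done || !pipe.empty(); });
                        if (pipe.empty()) return; // drained and done
                        task = std::move(pipe.front());
                        pipe.pop();
                    }
                    task(); // parallel execution
                }
            });
        }

        // One producer: every command enters through the same queue,
        // i.e. serially, however parallel the execution behind it is.
        for (int i = 0; i < 1000; ++i) {
            { std::lock_guard<std::mutex> lk(m); pipe.push([] { /* work */ }); }
            cv.notify_one();
        }
        { std::lock_guard<std::mutex> lk(m); done = true; }
        cv.notify_all();
        for (std::thread& t : executors) t.join();
    }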

  7. #827
    Warchief Zenny's Avatar
    10+ Year Old Account
    Join Date
    Oct 2011
    Location
    South Africa
    Posts
    2,171
    Quote Originally Posted by Evildeffy View Post
    Yes, I reversed the pipeline types... woohoo!
    The point I was trying to make is that any command given has to go through that 1 big pipeline first to reach the 31 others to be processed, like a highway with 1 lane; that pipeline, as much as the drivers may make it look parallel, is still serial.
    Even in the part below what I'm quoting it says the driver still needs to step in.
    One Graphics queue? The same thing AMD has?

    Quote Originally Posted by Evildeffy View Post
    Yes they can, but it's still processed serially; hell, even the last sentence of your quote says it.
    The architecture is built around serial processing and parallel execution with driver intervention.
    One graphics queue, multiple compute queues being done in parallel:
    [missing image: GPUView capture]

    Quote Originally Posted by Evildeffy View Post
    I will repeat myself in saying that that isn't necessarily bad, but both architectures have pros and cons.
    nVidia's way allows for high speed in DX11 (and below) because the API and GPU work in tandem, and it gains great performance from it.
    AMD's way tries to do parallelization even if the API doesn't, and loses a bit of performance in older APIs, where the newer ones can exploit it much better.
    Yes, I've said as much.

    Quote Originally Posted by Evildeffy View Post
    Your draw call example is one such example where the parallelization hardware comes to full power on AMD and nVidia's is "more limited" and that's purely because of how the architecture is set up.
    Nvidia does more draw calls on Vulkan than AMD does.

    Quote Originally Posted by Evildeffy View Post
    To get to those 31 pipelines you HAVE to send a command/queue list through the 1 pipeline first, meaning you physically CANNOT push more than 1 stream of data through; that equates to serial, and it describes the entire Maxwell/Pascal architecture right there.

    Again, the 1 big pipeline highway to 31 pipelines later; cram everything through 1 and you get a bottleneck.
    1 Graphics queue plus multiple compute queues, like AMD does.

    Quote Originally Posted by Evildeffy View Post
    With that you've affirmed my point of architecture being a huge player in these things.
    Also, Pascal does gain; I never said that Pascal has regression (it does in shitty DX12 titles where the API is tacked on later, but that also counts for AMD cards, it's not nVidia specific!) because of the mid-op interrupt I linked earlier. That very mid-op interrupt was missing on Maxwell, and those are the cards (I again specifically mentioned those) that have performance regression, so nVidia forced them to operate serially in the drivers to have DX12 behave as DX11 does.
    Does anyone still remember that nVidia promised a driver to enable async compute for Maxwell with good performance gains?

    Simply put, for Maxwell the driver converts DX12/Vulkan into DX11 and operates off of that.

    This creates latency and is also the reason why Maxwell is "catastrophic" for VR according to some developers.
    Maxwell doesn't have anything converted to DX11; the overhead of changing APIs on the fly would cripple the card. The developer who complained about VR on Maxwell is full of shit, as most VR games perform better on the 980 Ti compared to the Fury X.

    https://www.hardocp.com/reviews/vr/
    Quote Originally Posted by Evildeffy View Post
    You just did it yourself, no need to anymore.
    I have tons of links in my posts to various tech sites and Nvidia whitepapers/posts.


    Quote Originally Posted by Evildeffy View Post
    Still going through that 1 big serial pipeline before it can reach the 31 other pipelines, driver can, card cannot.
    Again: CARD CAN EXECUTE PARALLEL TASKS BUT IS FED SERIALLY.
    Sigh.

    Quote Originally Posted by Evildeffy View Post
    Not the question, but your picture does still show that it gets fed the process serially... nice instruction cache there above the schedulers.
    Well yes, no GPU on earth has everything being parallel, something needs to feed SM units or schedulers.

    [missing image: AMD GCN block diagram]
    Oh wow, look at how serial AMD does work.
    Quote Originally Posted by Evildeffy View Post
    Again:
    The driver may be parallel and optimized nicely, the card still has to process it serially for it to execute parallel tasks.

    To ask an example of you:
    Why was Maxwell deemed atrocious for VR by developers?
    When you find the answer to this it gives you a good idea.
    Because the developer (singular, not multiple) was talking nonsense?
    Quote Originally Posted by Evildeffy View Post
    Once again:
    No-one said that nVidia cards CANNOT execute tasks in parallel.
    We're saying nVidia cards have to PROCESS COMMANDS in serial due to every command needing to be done by the driver and fed down 1 pipeline to the card which then has to distribute it after to 31 more pipelines creating a bottleneck.
    AMD's cards however have 8 pipelines (not Vega, unsure of that one!) feeding and working to 8 more pipelines without driver interference and is handled by the HWS... which makes it less of a bottleneck and a "true" parallelization card.

    I'M NOT SAYING NVIDIA SUCKS, I'M SAYING BOTH HAVE PROS AND CONS AND THEY ARE DIFFERENT.
    NVIDIA BEING LESS PARALLEL OPTIMIZED VS. AMD BEING LESS SERIAL OPTIMIZED.

    Old vs. New API of things.
    AMD has 1 Graphics Command Processor and 8 ACE units.
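
    For what it's worth, that queue layout is exactly what the API exposes on both vendors: an application creates one direct (graphics) queue and as many compute queues as it likes, and how many hardware engines end up servicing them is where the designs differ. A minimal, illustrative C++/D3D12 sketch (error handling omitted):

    Code:
    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    ComPtr<ID3D12CommandQueue> MakeQueue(ID3D12Device* device,
                                         D3D12_COMMAND_LIST_TYPE type)
    {
        D3D12_COMMAND_QUEUE_DESC desc = {}; // default priority, no flags
        desc.Type = type;
        ComPtr<ID3D12CommandQueue> queue;
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
        return queue;
    }

    void CreateQueues(ID3D12Device* device)
    {
        // One graphics (direct) queue...
        auto graphics = MakeQueue(device, D3D12_COMMAND_LIST_TYPE_DIRECT);

        // ...plus several compute queues that can run asynchronously to it.
        ComPtr<ID3D12CommandQueue> compute[8];
        for (auto& q : compute)
            q = MakeQueue(device, D3D12_COMMAND_LIST_TYPE_COMPUTE);
    }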

  8. #828
    Quote Originally Posted by Gaidax View Post
    But they officially hyped it as a $499 solution???

    Yeah, and that's screwed up and wrong and I am mad at AMD for it. That doesn't suddenly make what nvidia did right though.

  9. #829
    https://translate.google.com/transla...-text=&act=url

    update:

    AMD has now confirmed on request that the standalone RX Vega cards in the reference design are indeed limited in quantity outside the Radeon Packs. The "standalone" Radeon RX Vega 64 sold out in Germany almost immediately, and prospective buyers should not hold out for offers at 500 euros. The Radeon Packs did not see an increase in MSRP (UVP); AMD still recommends 609 euros for the Black version - the increases to 649 euros and beyond come from the retailers.

  10. #830
    The Lightbringer Evildeffy's Avatar
    15+ Year Old Account
    Join Date
    Jan 2009
    Location
    Nieuwegein, Netherlands
    Posts
    3,772
    Quote Originally Posted by Zenny View Post
    One Graphics queue? The same thing AMD has?
    Ok I see where I may have worded it wrongly.
    From your own pictures look at the Queue pipelines from AMD and nVidia.

    Now from a simple logical deduction PoV .. what does that wording tell you?

    Quote Originally Posted by Zenny View Post
    One graphics queue, multiple compute queues being done in parallel:
    Ah unfortunately I cannot comment on GPUView... I simply do not have the knowledge for this.

    Quote Originally Posted by Zenny View Post
    Yes, I've said as much.
    As have I, multiple times prior, and in the history of this forum.

    Quote Originally Posted by Zenny View Post
    Nvidia does more draw calls on Vulkan then AMD does.
    Does it? I've yet to see that and would like sources for this.

    Quote Originally Posted by Zenny View Post
    1 Graphics queue plus multiple compute queues, like AMD does.
    Like I said I picked the wrong wording/pipeline name, should've paid more attention to that, I corrected myself but the point still remains.
    8 independent Queue Engines for command processing vs. 1 Queue Engine... Parallel vs. Serial.

    Quote Originally Posted by Zenny View Post
    Maxwell doesn't have anything converted to DX11, the overhead of changing API's on the fly would cripple the card. The developer who complained about VR on Maxwell is full of shit as most VR games perform better on the 980ti compared to the Fury X.

    https://www.hardocp.com/reviews/vr/
    FPS yes; Frame Time Variance (the holy grail in VR) not even remotely.
    I don't remember who posted it exactly, but there have been several reviews, I believe Tom's Hardware had one... unsure (trying to find it whilst writing, without success; I will keep looking and park it here when found), where an R9 380X had better frame time variance than a GTX 980 Ti... because frame time variance in VR is the thing that makes you want to throw up instead of keeping your breakfast/lunch/dinner in.

    Though I find it funny you call him full of shit, as nVidia introduced the mid-op interrupt precisely to smooth out these issues.
    So you would state, on a professional level, that he's wrong and you're right?
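
    To make the metric concrete: two cards can post the same average FPS while one has far worse frame time variance, and the variance is what you feel in VR. A small illustrative calculation (the sample numbers are invented):

    Code:
    #include <cstdio>
    #include <vector>

    // Variance of frame times in ms^2: how far individual frames stray
    // from the mean, independent of the FPS average itself.
    double variance(const std::vector<double>& ft) {
        double mean = 0;
        for (double t : ft) mean += t;
        mean /= ft.size();
        double var = 0;
        for (double t : ft) var += (t - mean) * (t - mean);
        return var / ft.size();
    }

    int main() {
        // Both runs average 11.1 ms per frame (~90 FPS)...
        std::vector<double> smooth = {11.1, 11.0, 11.2, 11.1, 11.1, 11.1};
        std::vector<double> spiky  = {8.0, 14.2, 8.1, 14.1, 8.0, 14.2};

        // ...but one stutters: ~0.003 ms^2 vs ~9.4 ms^2.
        std::printf("smooth: %.3f ms^2\n", variance(smooth));
        std::printf("spiky:  %.3f ms^2\n", variance(spiky));
    }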

    Quote Originally Posted by Zenny View Post
    I have tons of links in my posts to various tech sites and Nvidia whitepapers/posts.
    Yes, with the wrong understanding.
    You can link many things and still not understand what's being said, as you continually say "nVidia IS 100% PARALLEL!!!111oneoneone" when we say "Yes... and no, due to architecture design".

    Quote Originally Posted by Zenny View Post
    Sigh.
    Indeed.

    Quote Originally Posted by Zenny View Post
    Well yes, no GPU on earth has everything being parallel, something needs to feed SM units or schedulers.

    Oh wow, look at how serial AMD does work.
    Except that picture shows 1 single "Global Data Share" where the HWS tasks everything to the SEs/CUs, instead of being a buffer with more "instruction cache" per SE/CU.
    I'd actually call that parallel, but hey... maybe I'm wrong and that is not parallel but serial; who knows, maybe North is South and East is West!

    Quote Originally Posted by Zenny View Post
    Because the developer (singular, not multiple) was talking nonsense?
    You should let him know that, like stated above... if you know better you should take his job; you can buy me some upgrades then.

    Quote Originally Posted by Zenny View Post
    AMD has 1 Graphics Command Processor and 8 ACE units.
    3rd time: like above, I chose the wrong name, my fault, should've paid more attention; it's "Queue Engine", which I should've taken from your very own screenshots.
    The point stands.

    Tell me in simple terms the following:
    When you put all the commands to be done in 1 pipeline, meaning only 1 pipeline can feed the GPU with what to do, what do you call that?
    Now what do you call it when you have 8 pipelines that can do it at the same time?

    I will change it to other terms for you to understand:
    If there is only one way onto the highway for your entire country, a single lane taking a single car at a time, and your country has a population of 8 million all needing to get on the highway, each person in 1 car, what do you call this?

    Now what if you had 8 points of entry with the same data spread throughout your country?

    If you only have 1 command processor (Queue Engine) that only accepts 1 command at a time, it is serial.
    If you have 8 command processors (Queue Engines) that each accept a command at a time, you have 8 commands at a time, and that is parallel.

    Disagree or not, your own pictures have told you the tale of the uArch; drivers can't fix it if there's only 1 way in.

    Edit:
    I found an even better example for you:
    Is the CPU in your system a Serial Processor or Parallel Processor?
    If Parallel (I hope so) ... why is it Parallel?
    When did Parallel Processors come into existence?

    Edit 2:
    Correction to the above: I mentioned Frame Time, which is also important, but I was actually referring to Frame Time Variance, which determines the smoothness of VR play and whether you hurl into a bag or not after a short while.
    I changed the points above to include variance in bold and underlined.
    Last edited by Evildeffy; 2017-08-16 at 02:34 PM.

  11. #831
    http://www.gamersnexus.net/news-pc/3...ce-will-change


    Following the initial rumors stemming from an Overclockers.co.uk post about Vega price soon changing, multiple AIB partners reached out to GamersNexus – and vice versa – to discuss the truth of the content. The post by Gibbo of Overclockers suggested that launch rebates and MDF would be expiring from AMD for Vega, which would drive pricing upward as retailers scramble to make a profit on the new GPU. Launch pricing of Vega 64 was supposed to be $500, but quickly shot to $600 USD in the wake of immediate inventory selling out. This is also why the packs exist – it enables AMD to “lower” the pricing of Vega by making return on other components.

    In speaking with different sources from different companies that work with AMD, GamersNexus learned that “Gibbo is right” regarding the AMD rebate expiry and subsequent price jump. AMD purportedly provided the top retailers and etailers with a $499 price on Vega 64, coupling sale of the card with a rebate to reduce spend by retailers, and therefore use leverage to force the lower price. The $100 rebate from AMD is already expiring, hence the price jump by retailers who need return. Rebates were included as a means to encourage retailers to try to sell at the lower $499 price. With those expiring, leverage is gone and retailers/etailers return to their own price structure, as margins are exceptionally low on this product.

    We also learned from the AIB partners that AMD provided a list of retailers that board partners should sell to, as those would be the companies most likely holding rebates to best support the lower pricing of the product. We are not clear if any such rebates will be reinstated at time of partner card launch, and have not been given information to lead us to either conclusion. At this point, our understanding is that said initial rebates have expired – they were only available for the first wave of cards – and retailers now largely have free rein on pricing. The packs are still in-stock at some stores and, from what we’ve been told by a third, reliable source, have seen highest allocation since Vega’s launch. Our present understanding is that Newegg received 60-70 units allocated for “packs” on their store, but a significantly lower number of standalone cards. That’d explain why we saw the inventory and sell-through behavior at launch.

    This affects Vega 64 as of now. We are not sure of the impact on Vega 56.

    We reached out to AMD for comment on August 15 and have received no response.

  12. #832
    550-590 GBP is going to be the price of Vega after they restock. I don't know how stupid you'd have to be to buy this card for gaming. I didn't look at its mining performance so I can't talk about that, but whoever buys this card for gaming is a massive fucking retard. Winning vs. a stock 1080 only after an OC while costing $200+ more and consuming more power? Is that a meme?

  13. #833
    https://www.youtube.com/watch?v=OcniwiKEIuA

    Vega 64 is losing to the 1080 across almost every test (Deus Ex the only exception) and losing many tests by 10%+. Yes, it's comparing an aftermarket 1080 to the reference Vega 64, but given Vega's power consumption/heat, it's not a given that aftermarket coolers will have the same gain as they did on Pascal chips.

    I don't really understand why they even released this card (other than getting mining profits). It's slower, hotter, noisier, has less stable and less frequently updated drivers, and requires more power than the card it's competing with. Even if you have a FreeSync monitor or something, I still think the use case is tenuous. I think you'd be better off saving the $100-200, getting a 1080 and just dealing without FreeSync/G-Sync. Or, if you take the money you would save plus what you can sell your monitor for, you're probably not all that far off getting the Dell G-Sync/144Hz/1440p monitor. For anyone that isn't already invested in FreeSync, I don't see any justification for buying it.

  14. #834
    Quote Originally Posted by Tiberria View Post
    I don't really understand why they even released this card (other than getting mining profits).
    Well since they have it they might as well release it to recoup some of the development costs.

    Quote Originally Posted by Tiberria View Post
    It's slower, hotter, noisier, has less stable and less frequently updated drivers, and requires more power than the card it's competing with. Even if you have a FreeSync monitor or something, I still think the use case is tenuous. I think you'd be better off saving the $100-200, getting a 1080 and just dealing without FreeSync/G-Sync. Or, if you take the money you would save plus what you can sell your monitor for, you're probably not all that far off getting the Dell G-Sync/144Hz/1440p monitor. For anyone that isn't already invested in FreeSync, I don't see any justification for buying it.
    It is slightly slower, yes - not as big a deal as people make of it in most cases. Hotter? Yes, wait for non-reference coolers. Noisier? Again, wait for non-reference coolers. Bitching about AMD drivers is outdated and not a valid argument anymore. Playing on FreeSync/G-Sync vs playing without it with 10% more frame rate... have you ever used a synced display? It's not even a contest; 10% more frames is so not worth giving up the sync. If I had a FreeSync monitor and were in the market for a GPU I would rather buy the "10% slower card" than go through the hassle of selling monitors and buying a new one.

    So the only valid argument you could make about it is the price. If it is indeed more expensive than the 1080 by a big margin... I agree, wtf AMD? Also, the Vega to get is the smaller 56 edition.

  15. #835
    Quote Originally Posted by larix View Post
    Well since they have it they might as well release it to recoup some of the development costs.



    It is slightly slower, yes - not as big a deal as people make of it in most cases. Hotter? Yes, wait for non-reference coolers. Noisier? Again, wait for non-reference coolers. Bitching about AMD drivers is outdated and not a valid argument anymore. Playing on FreeSync/G-Sync vs playing without it with 10% more frame rate... have you ever used a synced display? It's not even a contest; 10% more frames is so not worth giving up the sync. If I had a FreeSync monitor and were in the market for a GPU I would rather buy the "10% slower card" than go through the hassle of selling monitors and buying a new one.

    So the only valid argument you could make about it is the price. If it is indeed more expensive than the 1080 by a big margin... I agree, wtf AMD? Also, the Vega to get is the smaller 56 edition.
    Yes - I have a GSync 1440p 144 Hz monitor with a 1080 Ti, so I'm well aware what sync does.

    The reviewers are noting that the Vega reference card is hotter and noisier than the Nvidia 1080, 1080 Ti, etc Founders Edition reference cards. That suggests that the aftermarket cards will continue to be hotter and noisier than aftermarket 1080s as well. As far as AMD drivers, yes, they continue to be inferior, less stable, and less frequently updated than Nvidia drivers. Nvidia has a new driver update out multiple times a month and invariably has an update out with optimizations whenever a major AAA game is released. There's no question that Nvidia has a significantly superior driver support infrastructure.

    The real #1 issue continues to be that it's effectively significantly more expensive for weaker performance. That, and the PSU requirements. AMD recommends a 750w PSU for the regular 64 and a 1000w PSU (lol) for the water cooled one. The 1080 only recommends a 500w PSU.

  16. #836
    Apparently the story is now that the MSRP has not changed. AMD gave rebates to suppliers to keep the price at the MSRP for release to try and prevent a situation where price gouging at release made the cards completely unobtainable for most people. Once the rebates ran out the prices were pushed straight up to match the demand (and probably relatively short supply).

    - - - Updated - - -

    Quote Originally Posted by Tiberria View Post
    The real #1 issue continues to be that it's effectively significantly more expensive for weaker performance. That, and the PSU requirements. AMD recommends a 750w PSU for the regular 64 and a 1000w PSU (lol) for the water cooled one. The 1080 only recommends a 500w PSU.
    The 64 is not really a great option for gaming without the Freesync pricing advantage and even then it's not great. On the compute side it more than handles itself.

    I think the 56 is a much better gaming option. Combined with a FreeSync monitor it's going to be a lot cheaper than the 1070 + G-Sync combo. I still wish they had priced it $50 cheaper though but with the present shortages because of mining I can understand why they didn't.

  17. #837
    The Lightbringer Evildeffy's Avatar
    15+ Year Old Account
    Join Date
    Jan 2009
    Location
    Nieuwegein, Netherlands
    Posts
    3,772
    Quote Originally Posted by Tiberria View Post
    Yes - I have a GSync 1440p 144 Hz monitor with a 1080 Ti, so I'm well aware what sync does.

    The reviewers are noting that the Vega reference card is hotter and noisier than the Nvidia 1080, 1080 Ti, etc Founders Edition reference cards. That suggests that the aftermarket cards will continue to be hotter and noisier than aftermarket 1080s as well. As far as AMD drivers, yes, they continue to be inferior, less stable, and less frequently updated than Nvidia drivers. Nvidia has a new driver update out multiple times a month and invariably has an update out with optimizations whenever a major AAA game is released. There's no question that Nvidia has a significantly superior driver support infrastructure.

    The real #1 issue continues to be that it's effectively significantly more expensive for weaker performance. That, and the PSU requirements. AMD recommends a 750w PSU for the regular 64 and a 1000w PSU (lol) for the water cooled one. The 1080 only recommends a 500w PSU.
    Barring the fact that Vega is indeed rather disappointing and the only one worth a damn right now is the Vega 56, I do have to correct you on one thing.

    Your driver comment is considerably misplaced, as nVidia has had terrible drivers, a bit better lately, and the driver count is about the same between the two for a lot of high-profile games.
    AMD's haven't been "inferior" for multiple years... especially not when it comes to "stability"; on that front nVidia has had a lot more driver kernel crashes than AMD, especially upon the release of the GTX 10 series.

    As far as your PSU comment goes, please pay little attention to that, as most GPU manufacturers overestimate their requirements.
    Yes, it's a little "over the top", but they do it because of the shit PSU manufacturers out there and because most people are clueless.

  18. #838
    Quote Originally Posted by Gray_Matter View Post
    Apparently the story is now that the MSRP has not changed. AMD gave rebates to suppliers to keep the price at the MSRP for release to try and prevent a situation where price gouging at release made the cards completely unobtainable for most people. Once the rebates ran out the prices were pushed straight up to match the demand (and probably relatively short supply).

    - - - Updated - - -



    The 64 is not really a great option for gaming without the Freesync pricing advantage and even then it's not great. On the compute side it more than handles itself.

    I think the 56 is a much better gaming option. Combined with a FreeSync monitor it's going to be a lot cheaper than the 1070 + G-Sync combo. I still wish they had priced it $50 cheaper though but with the present shortages because of mining I can understand why they didn't.
    The Vega 56 will potentially be a competitive option - if it is actually going to be viably obtainable at the $400 list price. There's a whole lot of speculation that the real price will be closer to $500-$550, and if that ends up being the case, there's little reason to go for it over just getting the 1070 and putting the extra money into the CPU, or stepping up to the 1080.

    - - - Updated - - -

    Quote Originally Posted by Evildeffy View Post
    Barring the fact that Vega is indeed rather disappointing and the only one worth a damn right now is the Vega 56, I do have to correct you on one thing.

    Your driver comment is considerably misplaced, as nVidia has had terrible drivers, a bit better lately, and the driver count is about the same between the two for a lot of high-profile games.
    AMD's haven't been "inferior" for multiple years... especially not when it comes to "stability"; on that front nVidia has had a lot more driver kernel crashes than AMD, especially upon the release of the GTX 10 series.

    As far as your PSU comment goes, please pay little attention to that, as most GPU manufacturers overestimate their requirements.
    Yes, it's a little "over the top", but they do it because of the shit PSU manufacturers out there and because most people are clueless.
    Whenever I've had to deal with AMD cards and drivers - across several cards over the last 10 years, and this is up to the R9 390 being the last AMD card I tried to use - I have always ended up with some type of problem or another. The driver-based problem I had with the R9 390 was a total showstopper (basically constant blue screens in GTA5) that AMD was unable to resolve over the month I had the card, despite spending hours trying to troubleshoot. I ended up sending it back on day 30 of the return policy for a 980 Ti - which worked flawlessly and eliminated all problems. And it wasn't like I was using some type of exotic setup either (a 4770k with an Asus Z97 chipset board and a Seasonic PSU - pretty mainstream). It was utter driver incompetence, and the R9 390 was just a rebadged 290 that had been out a year anyway. This was less than 2 years ago - definitely not "several years back". I have had probably 10 different Nvidia cards over the last 15 years and never experienced a showstopper/unplayable type of driver stability issue, yet have had one major issue or another with 4 out of 4 AMD cards owned/tried in that time frame. Until I am proven otherwise, AMD driver support is not on the level of Nvidia's.

    And as far as PSUs, sure, of course manufacturers inflate the requirements because they can't control all variables and don't want to be held responsible for people blowing out their cards because they tried to run them on cheap 400w PSUs or something. However, as much as you want to claim that a quality 500w or whatever PSU will run a Vega 64 fine, if you run into issues and go back to the AIB manufacturer for support, they are going to ask you what PSU you have and try to blow off the issue completely if it doesn't meet the stated requirements. Not only that, but PSUs tend to run hotter, louder, and less power-efficient the closer their draw is to max capacity, so you probably want the recommended PSU size anyway for the sake of your own sanity. That requirement is objectively significantly higher for Vega cards relative to Nvidia cards at the comparable performance level.

  19. #839
    Quote Originally Posted by Tiberria View Post


    Whenever I've had to deal with AMD cards and drivers - across several cards over the last 10 years, and this is up to the R9 390 being the last AMD card I tried to use - I have always ended up with some type of problem or another. The driver-based problem I had with the R9 390 was a total showstopper (basically constant blue screens in GTA5) that AMD was unable to resolve over the month I had the card, despite spending hours trying to troubleshoot. I ended up sending it back on day 30 of the return policy for a 980 Ti - which worked flawlessly and eliminated all problems. And it wasn't like I was using some type of exotic setup either (a 4770k with an Asus Z97 chipset board and a Seasonic PSU - pretty mainstream). It was utter driver incompetence, and the R9 390 was just a rebadged 290 that had been out a year anyway. This was less than 2 years ago - definitely not "several years back". I have had probably 10 different Nvidia cards over the last 15 years and never experienced a showstopper/unplayable type of driver stability issue, yet have had one major issue or another with 4 out of 4 AMD cards owned/tried in that time frame. Until I am proven otherwise, AMD driver support is not on the level of Nvidia's.
    So, you are saying... they are all bad, because a few years back you had a bad experience with your specific setup in one game. OK.

  20. #840
    The Lightbringer Evildeffy's Avatar
    15+ Year Old Account
    Join Date
    Jan 2009
    Location
    Nieuwegein, Netherlands
    Posts
    3,772
    Quote Originally Posted by Tiberria View Post
    Whenever I've had to deal with AMD cards and drivers - across several cards over the last 10 years, and this is up to the R9 390 being the last AMD card I tried to use - I have always ended up with some type of problem or another. The driver-based problem I had with the R9 390 was a total showstopper (basically constant blue screens in GTA5) that AMD was unable to resolve over the month I had the card, despite spending hours trying to troubleshoot. I ended up sending it back on day 30 of the return policy for a 980 Ti - which worked flawlessly and eliminated all problems. And it wasn't like I was using some type of exotic setup either (a 4770k with an Asus Z97 chipset board and a Seasonic PSU - pretty mainstream). It was utter driver incompetence, and the R9 390 was just a rebadged 290 that had been out a year anyway. This was less than 2 years ago - definitely not "several years back". I have had probably 10 different Nvidia cards over the last 15 years and never experienced a showstopper/unplayable type of driver stability issue, yet have had one major issue or another with 4 out of 4 AMD cards owned/tried in that time frame. Until I am proven otherwise, AMD driver support is not on the level of Nvidia's.
    That is pretty damned anecdotal, and I could tell you about countless situations happening in reverse that I've had happen myself; that doesn't mean I think nVidia cards are junk... I think they're pretty damned good.

    But let's take the overall trend of driver releases and the faults in them over the last several years... AMD's have been factually more stable and less prone to some rather large card-bricking bugs; nVidia has had several WHQL drivers pulled because they were bricking/breaking cards, they had memory clocking issues, etc.
    I'm not saying AMD drivers haven't had issues, but comparatively, from the Crimson drivers onward they've literally been outclassing the nVidia drivers.

    And that's not anecdotal but factual; the tech community (professional reviewers and such) thinks so as well.

    Again it doesn't make nVidia drivers (or hardware) bad, it simply means in that time the competition made better drivers.

    As for your personal experience with 4 cards: I could (like I said, anecdotally) tell you the opposite for an order of magnitude more 970/980/980 Ti/1070/1080 cards... it really doesn't mean anything.

    Quote Originally Posted by Tiberria View Post
    And as far as PSUs, sure, of course manufacturers inflate the requirements because they can't control all variables and don't want to be held responsible for people blowing out their cards because they tried to run them on cheap 400w PSUs or something. However, as much as you want to claim that a quality 500w or whatever PSU will run a Vega 64 fine, if you run into issues and go back to the AIB manufacturer for support, they are going to ask you what PSU you have and try to blow off the issue completely if it doesn't meet the stated requirements. Not only that, but PSUs tend to run hotter, louder, and less power-efficient the closer their draw is to max capacity, so you probably want the recommended PSU size anyway for the sake of your own sanity. That requirement is objectively significantly higher for Vega cards relative to Nvidia cards at the comparable performance level.
    GFX manufacturers actually cannot use that as an argument if they were to try to deny your RMA on that basis.
    If you demonstrate basic electrical engineering knowledge, such as knowing the card is powered by the 12V rail(s) of your PSU and that V × A = W (P = U × I), they have no leg to stand on and won't fight you on it; they may take advantage of your ignorance if you don't know, but they won't challenge you if you actually do know, because they realize it'd be a lost battle.
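
    To make that formula concrete with illustrative label values (not any specific PSU's spec sheet): a decent 550W unit rated for 45A on its 12V rail delivers P = U × I = 12V × 45A = 540W on the rail that actually feeds the card. A reference Vega 64 at roughly 295W board power plus a ~100W CPU still leaves around 145W of headroom on that rail alone, which is why the blanket 750W recommendation reads as padding for bad units rather than a hard requirement.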
