I'm not blaming AMD for what PowerColor did; I'm just saying that Nvidia consumers have it better due to their GPUs having lower PCB engineering requirements.
It won't void the warranty unless your liquid metal compound is aggressive enough to eat through solder joints. And yes, it's practically useless for the average user; that's why I like the implementation.
I'm not connecting it in any way. Reference designs were axed because AMD would end up effectively competing with partner cards. The turbine design, which is preferred for OEM sales, wouldn't be sufficient, so designing a blower cooler is pointless.
You can overclock RAM, but the point is: 1) it's not designed to work like that, and the properties of VRAM made by Samsung and Micron are substantially different; 2) VRAM has the same stock clocks regardless of the card manufacturer, so there is no way of achieving higher memory bandwidth without opting for a different memory type or increasing memory bus width.
That's why Qualcomm is going to certify 3-year-old chips for running it? Vulkan standardizes hardware-level optimizations, but they are not required to run it, and it certainly doesn't mean that anyone except AMD is going to make those hardware-level optimizations.
You mean designing their products around hardware-level optimizations that a minuscule share of users possess? Why would they do that? AMD's implementation right now is foreign to developers for the same reason so few games are good at multithreading the workload: the average consumer doesn't have that hardware, and won't for some time, no matter what AMD does.
Absolutely relevant point. 2-2.5 times more performance for about the same memory power consumption. Of course it's not going to scale perfectly, but it's Nvidia's most power-hungry card atm (except the Titan of course, let's not count those here). The VRM is the only place where you can measure power draw. It's not perfect, you have heat loss and the power needed to operate the VRM itself, but 310 + 30 ≠ 400 (which is the power draw of the whole 1080 Ti card).
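To make that arithmetic explicit, here's a minimal sketch using only the figures quoted above (treating the leftover gap as VRM losses, fan power, etc. is my assumption):

```python
# Rough power accounting for the figures quoted in this thread.
gpu_vrm_w = 310      # power measured at the GPU VRM
mem_w = 30           # memory power
whole_card_w = 400   # quoted whole-card draw for the 1080 Ti

measured = gpu_vrm_w + mem_w          # what the VRM measurement accounts for
unaccounted = whole_card_w - measured # losses, fans, VRM operating power, etc.
print(measured, unaccounted)  # 340 60
```

The point being that VRM-side measurements alone can't add up to the whole-card number.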
It wouldn't. Yes, it would require a more complex VRM design, but it's not something that hasn't been done before; some manufacturers still overengineer their boards for no apparent gain, and this way we could put that engineering to use.
They need higher memory bandwidth, yes; a wider memory bus is just a means to that goal. I'm not saying that current cards bottleneck themselves because of insufficient memory bandwidth, but it probably did prevent them from making wider GPUs based on the Polaris architecture. Not the only factor of course, others being die size (the bigger the die, the lower the yield), GPU power consumption, and thermals.
And I'm trying to tell you they really don't need to.
PowerColor is just an extreme case; even "under-engineered" cards can overclock Polaris to the silicon's limit... it's just that if you cheap out too much, you're royally fucked.
Liquid metal TIM becomes pretty aggressive when there's current flowing through it; you will see it.
It'd be pointless because of the RX 400 series.
Reference designs only come into play when a new architecture is released, not on refreshes.
Hasn't been done for quite a while by either AMD or nVidia.
Not really, as RAM speeds are not static; chips just get validated into different speed bins.
GDDR5 chips can be sold as 6, 7, or 8 Gbps parts: their yield quality might not reach the standard for the highest speed, or higher-speed parts are simply binned down for more yield. 8 Gbps was simply set as the max speed by the standard, just like GDDR5X is 12 Gbps max and GDDR6 is 16 Gbps max.
There are lower speeds, but yes, if you are at the limit, the only direct way to increase bandwidth on a hardware level is to change memory type or increase the bus width.
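For reference, peak bandwidth is just bus width times per-pin data rate, so the two levers above are easy to see in numbers (a minimal sketch; the example configurations are my own illustrations, not specific cards):

```python
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width in bits / 8 * per-pin rate in Gbps."""
    return bus_width_bits / 8 * data_rate_gbps

# 256-bit bus with GDDR5 at its 8 Gbps standard max:
print(bandwidth_gbs(256, 8))   # 256.0 GB/s
# Same bus, faster memory type (GDDR5X at 12 Gbps):
print(bandwidth_gbs(256, 12))  # 384.0 GB/s
# Same memory type, wider bus:
print(bandwidth_gbs(384, 8))   # 384.0 GB/s
```

Either lever gets you there; which one you pull is a cost and board-design question.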
I wish Qualcomm did... it would mean my Snapdragon 801 phone would be able to run Android 7 officially, if Cyanogen hadn't kicked the bucket (OnePlus One).
However, Vulkan is an entire API, not just hardware level; of course you don't need hardware optimizations, as you'd fall back to software.
Which is what's been done, and future SoCs will have hardware standards far surpassing PC hardware in terms of optimizations (a FAR greater userbase).
However, Google requires you to be Vulkan-compatible in order to receive official certification to run Android 7 and up; no Vulkan = no official release.
The funny thing is that most people already do if they have older-generation nVidia hardware (up to the 700 series) or AMD's HD 7900 series.
nVidia, however, will not (and rightfully so) expose DX12 capability in their older hardware, even though it could technically run it more efficiently than their Maxwell/Pascal series.
If you're referring to "scaling", i.e. whether Polaris would be as powerful, the answer would still be no.
Bus width is defined by the number of VRAM chips you have, each chip contributing X bits of width; Polaris has 8 chips and a 256-bit bus, meaning each chip is 32-bit.
The same goes for nVidia, as a 32-bit bus width per chip is actually the "max" value for standardized GDDR5(X).
If Polaris were to increase its die size by 300% (hypothetical) and leave the VRAM at 256-bit, it would still only consume 25-30W.
If the 1080 Ti were to reduce its bus to 256-bit as well, they would end up with ~25W power draw.
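A minimal sketch of that chip-count arithmetic (the ~3.5 W per-chip figure is inferred from the 25-30 W range above, not a datasheet number):

```python
BITS_PER_CHIP = 32  # max per-chip bus width for standardized GDDR5(X)

def bus_width_bits(num_chips: int) -> int:
    """Total bus width follows directly from the chip count."""
    return num_chips * BITS_PER_CHIP

def vram_power_w(num_chips: int, watts_per_chip: float = 3.5) -> float:
    """Rough VRAM power: scales with chip count, not with GPU die size."""
    return num_chips * watts_per_chip

print(bus_width_bits(8), vram_power_w(8))    # Polaris, 8 chips: 256-bit, ~28 W
print(bus_width_bits(11), vram_power_w(11))  # 1080 Ti, 11 chips: 352-bit, ~38.5 W
```

Which is the point being made: VRAM power tracks the bus (chip count), so a huge die on a 256-bit bus still pays the same memory power bill.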
That's why HBM2 memory is important as well: it would reduce power consumption on all fronts even though the bus is gigantic in comparison.
Die size is irrelevant, but for quite a while now the VRAM budget in TDP specs has been ~20% of the card; this may be coincidence or a design choice, I'm not certain.
Yes and no; you also increase TDP by a massive factor and would require a far beefier cooler as well, so this affects more things than just PCB design.
If you assume a certain budget based upon supplier specifications, then you can plan your design around it; VRAM, while required, should never be anywhere close to as large a share of a card's design budget as the GPU itself.
The reason they didn't create bigger Polaris cards is because Raja Koduri didn't WANT a bigger Polaris.
Polaris was still the design of his predecessor, and Koduri wanted to push a big design like Vega since his return to AMD.
Vega is his creation, so we'll see what comes of that, but if AMD wanted to, they could have designed a larger Polaris die with a bigger bus width.
In fact, this was likely the original plan until Lisa Su drastically changed things and the Radeon Technologies Group was created.
I have no idea what Vega will become, it could be a work of genius or it could be a steaming pile of dog shit.
I'm of course extremely curious and excited to find out but I keep all options open at any time.
Just saying that if nVidia joins the "multi-threading" arena, which they will as it's also an enterprise gain, you WILL see nVidia lose some of their vaunted efficiency.
If this is true then it lends credence to the new 'leak' about Volta coming out significantly earlier than expected.
http://en.koreaportal.com/articles/3...-big-thing.htm
http://www.overclock.net/t/1629867/k...next-big-thing
lul?
If that's true - Vega is DoA pretty much.
I'm reasonably certain the GTX 2070 and maybe even the GTX 2080 could still get away with higher-clocked GDDR5X if they wanted to, so those aren't/won't be limited by GDDR6 availability.
Titan Volta and the 2080 Ti though, yeah, definitely GDDR6; it seems like the best course to use that (and still get 750-800+ GB/s) rather than run into the same problems and costs AMD does with HBM2.
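For what it's worth, the 750-800+ GB/s figure lines up with a 384-bit bus (quick sanity check; the 384-bit width is my assumption based on previous Ti/Titan-class cards):

```python
def peak_bw_gbs(bus_bits: int, gbps: float) -> float:
    # Peak bandwidth in GB/s = bus width in bits / 8 * per-pin rate in Gbps
    return bus_bits / 8 * gbps

print(peak_bw_gbs(384, 12))  # GDDR5X at its 12 Gbps max: 576.0 GB/s
print(peak_bw_gbs(384, 16))  # GDDR6 at its 16 Gbps max:  768.0 GB/s
```

So on the same bus, GDDR6 at spec max gets you into that range while GDDR5X tops out well short of it.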
I doubt the 2080 will be much faster than a 1080 Ti, so yes, it should get away with GDDR5X. The other question though is whether Nvidia wants to piss off all the people who bought a 1080 Ti by bringing a comparable card to sale at ~$500 3 months later. If not, they might as well wait for GDDR6 for the 2070 and 2080.
Then again, I don't know if Nvidia cares, because those who bought the 1080 Ti will probably buy the 2080 anyway if it's even a bit faster.
3 months later would mean the GTX 2080 is fully coming out in June or July; that's completely impossible.
Fall 2017 at the earliest, and I still think 2018 is more likely.
And even then, there's a good chance that a 2017 Volta will just be some non-gaming version.
- - - Updated - - -
http://www.anandtech.com/show/11360/...note-live-blog
sexy
- - - Updated - - -
Versus Pascal: 1.5x general purpose FLOPS, but 12x Tensor FLOPS for DL training
I expected a bigger jump tbh; much of that is simply from the chip being gigantic.
~50% is pretty big
also looks like these Tensor cores take up a ton of space - https://i.imgur.com/nOqh7lD.png
gaming versions won't have those or the double precision cores, so I think we can expect smaller die sizes there; certainly nothing above 550-600 mm2 for the flagship Titan
either way, you can only judge that after the cards are out and gaming performance is reviewed... TFLOPS alone mean nothing
- - - Updated - - -
personal speculation - but this seems likely in 2018:
GV102 (Titan V / 2080Ti)
5120 - 5376 CUDA Cores
GV104 (GTX 2080)
3584 CUDA Cores
well overall gaming perf increase from gen to gen will be on par with Kepler --> Maxwell and Maxwell --> Pascal .. anywhere from 30-35% to 50%
there is no way they drop the ball and risk their gaming sales dropping by not being enough faster than the prev gen
HOW that % will be achieved is unknown (likely a combo of the 12nm process, bigger core count & IPC), but honestly I don't care that much as long as it's there
Wonder how high these Volta cards are going to clock.
I'd expect Volta clocks to be 0-15% higher than Pascal (this is just pure clocks... on top of more cores and whatever else they put in there)
- - - Updated - - -
https://devblogs.nvidia.com/parallel...o-twi-vt-13918
New Streaming Multiprocessor (SM) Architecture Optimized for Deep Learning
Volta features a major new redesign of the SM processor architecture that is at the center of the GPU. The new Volta SM is 50% more energy efficient than the previous generation Pascal design, enabling major boosts in FP32 and FP64 performance in the same power envelope. New Tensor Cores designed specifically for deep learning deliver up to 12x higher peak TFLOPs for training. With independent, parallel integer and floating point datapaths, the Volta SM is also much more efficient on workloads with a mix of computation and addressing calculations. Volta’s new independent thread scheduling capability enables finer-grain synchronization and cooperation between parallel threads. Finally, a new combined L1 Data Cache and Shared Memory subsystem significantly improves performance while also simplifying programming.
With 84 SMs, a full GV100 GPU has a total of 5376 FP32 cores, 5376 INT32 cores, 2688 FP64 cores, 672 Tensor Cores, and 336 texture units. Each memory controller is attached to 768 KB of L2 cache, and each HBM2 DRAM stack is controlled by a pair of memory controllers. The full GV100 GPU includes a total of 6144 KB of L2 cache. Figure 4 shows a full GV100 GPU with 84 SMs (different products can use different configurations of GV100). The Tesla V100 accelerator uses 80 SMs.
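The per-SM breakdown follows directly from those quoted totals (simple arithmetic on the numbers above, nothing assumed):

```python
# GV100 totals from the quoted spec, divided by the 84-SM count.
sms = 84
totals = {"FP32": 5376, "INT32": 5376, "FP64": 2688, "Tensor": 672, "TMU": 336}
per_sm = {name: count // sms for name, count in totals.items()}
print(per_sm)  # {'FP32': 64, 'INT32': 64, 'FP64': 32, 'Tensor': 8, 'TMU': 4}

# The L2 figure also checks out: 6144 KB total / 768 KB per controller
# = 8 memory controllers, i.e. a pair per HBM2 stack -> 4 stacks.
print(6144 // 768)  # 8
```

So each SM carries 64 FP32 + 64 INT32 cores, 32 FP64 cores, 8 Tensor Cores, and 4 texture units, and the Tesla V100's 80-SM cut just disables 4 SMs from the full die.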
I'm assuming the bolded parts will be in the gaming Voltas, while the non-bolded won't be, saving space: 5376 FP32 cores, 5376 INT32 cores, 2688 FP64 cores, 672 Tensor Cores, and 336 texture units.