  1. #61
    Bloodsail Admiral TheDeeGee's Avatar
    10+ Year Old Account
    Join Date
    Dec 2011
    Posts
    1,194
    Depends on the price.

    A 3060 shouldn't be more than €300.

  2. #62
    The Unstoppable Force Ghostpanther's Avatar
    10+ Year Old Account
    Join Date
    Dec 2012
    Location
    USA, Ohio
    Posts
    24,112
    Quote Originally Posted by Mamut View Post
    Nah, my GTX 970 is still enough for games I play
    This. My GTX 1650 Super handles everything I care to play right now very well. The future? *shrugs* I'll worry about that when a game comes out that I want to play badly enough to warrant a new system.
    " If destruction be our lot, we must ourselves be its author and finisher.." - Abraham Lincoln
    The Constitution be never construed to authorize Congress to - prevent the people of the United States, who are peaceable citizens, from keeping their own arms..” - Samuel Adams

  3. #63
    Quote Originally Posted by Twdft View Post
    Also: electricity isn't free. 1kWh more per 4 hours gaming doesn't sound much for adding a second 250W gpu, but it adds up over the year.
    Well, depending on how lucky you are, you can get 2x 1080 Ti for around €500, and those two are faster than a 2080 Ti, which costs twice as much or more. If one additional card adds 1 kWh/day, that's 365 kWh/year. In the US I don't think electricity costs more than 20 cents/kWh, so in one year you will spend about $70 more, meaning it will take you 10 years or so to make up the price difference between 2x 1080 Ti and one 2080 Ti...
    Last edited by markos82; 2020-08-22 at 12:57 PM.
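    For reference, a quick sketch of that arithmetic in Python (using the poster's figures; the ~$700 price gap is only an illustrative assumption):

        # Rough sanity check of the running-cost argument above.
        # Assumptions: the second card adds ~1 kWh per day of gaming,
        # electricity costs $0.20/kWh, and the price gap between
        # 2x 1080 Ti and one 2080 Ti is roughly $700 (illustrative).
        extra_kwh_per_day = 1.0
        price_per_kwh = 0.20      # USD
        price_gap = 700.0         # USD

        yearly_cost = extra_kwh_per_day * 365 * price_per_kwh   # ~$73/year
        payback_years = price_gap / yearly_cost                 # ~9.6 years

        print(f"extra electricity: ${yearly_cost:.0f}/year, "
              f"break-even after ~{payback_years:.1f} years")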

  4. #64
    Quote Originally Posted by markos82 View Post
    Well, depending on how lucky you are, you can get 2x 1080 Ti for around €500, and those two are faster than a 2080 Ti, which costs twice as much or more. If one additional card adds 1 kWh/day, that's 365 kWh/year. In the US I don't think electricity costs more than 20 cents/kWh, so in one year you will spend about $70 more, meaning it will take you 10 years or so to make up the price difference between 2x 1080 Ti and one 2080 Ti...
    Assuming 4 hours of gaming every day. LOL casuals.

  5. #65
    Quote Originally Posted by Yizu View Post
    Assuming 4 hours of gaming every day. LOL casuals.
    LOL, even if you play 8 hours a day, that's still 5 years of electricity. LOL math.

  6. #66
    I'm still waiting for next gen VR before I upgrade my old 1080.
    My nickname is "LDEV", not "idev". (both font clarification and ez bait)

    yall im smh @ ur simplified english

  7. #67
    Quote Originally Posted by TheDeeGee View Post
    Depends on the price.

    A 3060 shouldn't be more than €300.
    €400. But then again, it won't launch until November, most likely.
    R5 5600X | Thermalright Silver Arrow IB-E Extreme | MSI MAG B550 Tomahawk | 16GB Crucial Ballistix DDR4-3600/CL16 | MSI GTX 1070 Gaming X | Corsair RM650x | Cooler Master HAF X | Logitech G400s | DREVO Excalibur 84 | Kingston HyperX Cloud II | BenQ XL2411T + LG 24MK430H-B

  8. #68
    Scarab Lord Wries's Avatar
    10+ Year Old Account
    Join Date
    Jul 2009
    Location
    Stockholm, Sweden
    Posts
    4,127
    Quote Originally Posted by Lahis View Post
    200% scaling makes each pixel take up 4 real pixels on the screen, effectively halving the resolution.
    This is not how HiDPI-aware applications behave when you set 200% scaling in Windows or macOS. It is true that objects such as borders and buttons will be around the size you'd expect them to be at the legacy display pixel density, but individual elements, like text, are usually rendered at the panel's full resolution to take advantage of it.

    It's basically the "retina display" approach. I know someone always gets rabidly mad when you mention anything Apple, but if this sounds badly explained by me, you can watch Steve Jobs' audience-friendly rundown on the topic from the iPhone 4 launch.

    (A notorious exception to this is the Battle.net window, which quite jarringly just quadruples the pixels. Small indie company.)
    Last edited by Wries; 2020-08-22 at 03:40 PM.
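    To make the distinction concrete, here is a minimal sketch of the difference between naive pixel doubling and HiDPI-aware rendering at 200% scaling (illustrative numbers only, not any specific OS API):

        PHYSICAL_RES = (3840, 2160)   # 4K panel, physical pixels
        SCALE = 2.0                   # 200% scaling factor

        # Layout happens in logical pixels, so UI elements keep their familiar size.
        logical_res = (int(PHYSICAL_RES[0] / SCALE), int(PHYSICAL_RES[1] / SCALE))  # (1920, 1080)

        logical_text_px = 16          # a line of text laid out at 16 logical pixels

        # Legacy app (like the Battle.net window above): text is rasterized at
        # 16 px and then stretched, so each glyph pixel covers 2x2 = 4 physical pixels.
        legacy_raster_px = logical_text_px

        # HiDPI-aware app: text is rasterized directly at 16 * SCALE = 32 physical
        # pixels -- same on-screen size, but the glyphs use the panel's full resolution.
        aware_raster_px = int(logical_text_px * SCALE)

        print(logical_res, legacy_raster_px, aware_raster_px)   # (1920, 1080) 16 32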

  9. #69
    Please wait Temp name's Avatar
    10+ Year Old Account
    Join Date
    Mar 2012
    Location
    Under construction
    Posts
    14,631
    Quote Originally Posted by markos82 View Post
    Well, depending on how lucky you are, you can get 2x 1080 Ti for around €500, and those two are faster than a 2080 Ti, which costs twice as much or more. If one additional card adds 1 kWh/day, that's 365 kWh/year. In the US I don't think electricity costs more than 20 cents/kWh, so in one year you will spend about $70 more, meaning it will take you 10 years or so to make up the price difference between 2x 1080 Ti and one 2080 Ti...
    Yeah, 2x1080ti is faster if you can find a game that scales with SLI. Most don't scale well, and a few even scale negatively. It's almost never a good idea to get 2 lower-tier cards over a single higher tier card unless you know your workload will scale well with SLI.
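    As a rough illustration of that point (the scaling factors and the 1.35x figure for the 2080 Ti are assumptions for the sketch, not benchmark numbers):

        # How much SLI scaling you need before 2x 1080 Ti beats one 2080 Ti.
        single_1080ti = 1.00   # normalize one 1080 Ti to 1.0
        single_2080ti = 1.35   # assume a 2080 Ti is ~35% faster (illustrative)

        for sli_scaling in (0.0, 0.5, 0.8):   # no / mediocre / good SLI scaling
            dual_1080ti = single_1080ti * (1 + sli_scaling)
            winner = "2x 1080 Ti" if dual_1080ti > single_2080ti else "2080 Ti"
            print(f"SLI scaling {sli_scaling:.0%}: 2x 1080 Ti ~= {dual_1080ti:.2f}x -> {winner} wins")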

  10. #70
    Banned Strawberry's Avatar
    15+ Year Old Account
    Join Date
    Jul 2007
    Location
    Sweden/Yugoslavia
    Posts
    3,752
    Quote Originally Posted by markos82 View Post
    But they are still not the same thing. Refresh rate is how many times the monitor refreshes the picture, and FPS is how many frames are drawn per second. But yes, if you have a 60 Hz monitor, having a GPU that can draw 100+ FPS won't matter, because you are limited to 60 Hz... But again, FPS isn't the same thing as Hz.
    Never said it is.
    But it's natural to use Hz instead of FPS when talking about game fluidity.

    Why?
    Because you can tell someone "for your game to be fluid, you need to aim for 120 FPS". Meanwhile, they have a 60 Hz screen, so that advice would be misleading to them, as they would never achieve the 120 FPS fluidity.
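    The point fits in one line: the monitor can only show min(FPS, refresh rate) distinct frames per second. A minimal sketch (ignoring tearing and variable refresh):

        def displayed_fps(rendered_fps: float, refresh_hz: float) -> float:
            """Frames the monitor can actually show per second."""
            return min(rendered_fps, refresh_hz)

        print(displayed_fps(120, 60))    # 60  -> a 60 Hz screen caps you at 60
        print(displayed_fps(120, 144))   # 120 -> the 120 FPS advice only pays off here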

  11. #71
    Quote Originally Posted by Yizu View Post
    Oh, I thought you meant render scaling in WoW, because that one works differently. He was talking about UI scaling in Windows, right? Why does that matter? LOL, we are talking about graphical horsepower in games.
    I keep seeing people use that 200% Windows UI scaling thing as an argument against 4K screens, and my guess is that those people haven't really experienced it themselves. I'm using a 55" screen with 200% scaling in Windows, and it definitely doesn't turn it into a 1080p monitor by any means. All it does is make shit readable, since 100% makes everything look tiny, and text/images/video are still rendered in 4K. Everything is pretty much extremely sharp, with no visible pixels anywhere.

    OP, I'm also eyeballing the 3090, but what I'm most curious about these days is the difference between the 3080 and 3090. It seems like the gap between the two cards isn't as big as in previous generations.

  12. #72
    Quote Originally Posted by Temp name View Post
    Yeah, 2x1080ti is faster if you can find a game that scales with SLI. Most don't scale well, and a few even scale negatively. It's almost never a good idea to get 2 lower-tier cards over a single higher tier card unless you know your workload will scale well with SLI.
    You can't expect to get 2x the FPS with 1080 Ti SLI, but many games do scale well, and in the ones that run slower the difference is something like 10-20%, while the difference in price is 2x or more.
    I agree that a single 2080 Ti can be the better option, but not now, when the upgrade is not that great.
    When the price for a "new" 2080 Ti (if you can get one at an outlet) drops to €600 it would be a good deal; right now it's simply too expensive for what it brings to the table...

  13. #73
    Quote Originally Posted by markos82 View Post
    And still not good enough. You are paying $1,200 for the GPU; it should be capable of running games at 4K on ultra, but it's not. Most games I have seen are under 60 FPS or only a few FPS over. If the 30xx top-tier card is 30-40% faster, that will still be too low for a premium product.
    When we get to the point where cards can do over 100 FPS at 4K on ultra details, then it will be OK, but 8K gaming is too far away...
    A lot of 'ultra' settings in games really aren't worth the performance hit. Many of these settings add so little to a dynamic scene that the only way to appreciate them is to stand still and pan your camera, which only really works in a few titles.

    A lot of modern games' base settings, i.e. medium, are actually decent, not to mention a lot of these ultra settings aren't optimised at all; developers want to put most of their effort into console-like settings, and this has been the pattern for most games.

    Currently developers are still making games for the PS4 and Xbox One, and I have seen articles stating that developers find these consoles a pain to develop for, mainly due to their weak CPUs. If that eats up a lot of the time it takes to release a game, what time do they have left to polish ultra settings?

    We may see this change in the developer workflow with the new consoles, as in theory they will have more headroom to develop for, which should speed things up. But with modern games, you would really have to tell me these games look bad on medium-to-high settings, because they don't.

    A lot of people cannot tell the difference between ultra and medium-high settings in dynamic scenes during actual gameplay.

  14. #74
    Quote Originally Posted by Thunderball View Post
    Considering the price range I've been given ($1,300-1,500) and the stuff that goes into the 3090, I think the price can easily be justified. Yeah, that's fucking expensive, but the card is also insane.
    Afaik, they are going with Samsung's node, and Samsung is pretty desperate to grow its manufacturing market share atm, so it's not that expensive for Nvidia to make these. Sure, they are huge chips, and the huge power consumption is going to mean over-engineered boards that cost more. But they are still pricing it at that level because they can, not because they have to.

    Quote Originally Posted by Thunderball View Post
    You're not counting RTX hardware at all.
    Why would he? First-generation RTX was mainly a demo of what things could be. There are only a few games where you can use it, and always with a hefty performance penalty.

  15. #75
    Quote Originally Posted by Wries View Post
    I think it's a bit disingenuous to refer to 250 W TDP GPUs with the usual die sizes as entry-level or midrange. Nvidia sure are price-gouging, but their HPC/server stuff should be considered a different category.
    It's not. Just because Nvidia calls a 1080 a high-end card doesn't mean it is. If you look at how they binned and codenamed stuff in the past, it would be midrange. It's just something they know they can get away with.

    Same reason they can almost always quickly introduce a better card if AMD happens to beat them: they just keep the good stuff as a backup, because why would they try to make a better product when they don't have to? It's just the standard stuff that happens when a company dominates a market.

  16. #76
    The biggest selling point for me is the dedicated NVENC hardware encoder on the RTX cards. It's just so seamless to use for streaming content. AMD doesn't have a counter that I'm aware of, aside from the better multitasking capabilities of their CPU line.
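    For example, handing the encode to NVENC from a script is a one-liner with ffmpeg's h264_nvenc encoder (a minimal sketch; it assumes an ffmpeg build with NVENC support and a hypothetical input.mkv recording):

        import subprocess

        # Re-encode a local recording on the GPU's NVENC block instead of the CPU.
        subprocess.run([
            "ffmpeg",
            "-i", "input.mkv",      # hypothetical source recording
            "-c:v", "h264_nvenc",   # hardware H.264 encode on the GPU
            "-b:v", "6M",           # target video bitrate
            "-c:a", "copy",         # leave the audio stream untouched
            "output.mp4",
        ], check=True)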

  17. #77
    Quote Originally Posted by Hellobolis View Post
    It's not. Just because Nvidia calls a 1080 a high-end card doesn't mean it is. If you look at how they binned and codenamed stuff in the past, it would be midrange. It's just something they know they can get away with.
    I agree. Just look at the Fermi GTX 580 (full 520 mm² die) vs the Kepler GTX 680 (midrange 294 mm² die; we know it's midrange because they later released a full 561 mm² die as the Titan). Nvidia could just as easily release that 3090 as their hot and loud $600 card if there were competition in the market, like what happened with Fermi.

  18. #78
    Quote Originally Posted by mrgreenthump View Post
    Afaik, they are going with Samsung's node, and Samsung is pretty desperate to grow its manufacturing market share atm, so it's not that expensive for Nvidia to make these. Sure, they are huge chips, and the huge power consumption is going to mean over-engineered boards that cost more. But they are still pricing it at that level because they can, not because they have to.
    No. Afaik the top products are still going to be made by TSMC. Low-end products are probably on Samsung, or maybe even a larger node initially (I think that happened with both Pascal and Turing as well). I don't doubt that Samsung wants to be competitive with TSMC, but: 1) their 7nm node has lower density than TSMC's, though it's cheaper; 2) they don't have experience making such big monolithic dies.


    Quote Originally Posted by mrgreenthump View Post
    Why would he? First-generation RTX was mainly a demo of what things could be. There are only a few games where you can use it, and always with a hefty performance penalty.
    Because then the benchmark is 1) useless 2) misleading.

    - - - Updated - - -

    Quote Originally Posted by Hellobolis View Post
    It's not. Just because Nvidia calls a 1080 a high-end card doesn't mean it is. If you look at how they binned and codenamed stuff in the past, it would be midrange. It's just something they know they can get away with.

    Same reason they can almost always quickly introduce a better card if AMD happens to beat them: they just keep the good stuff as a backup, because why would they try to make a better product when they don't have to? It's just the standard stuff that happens when a company dominates a market.
    It has been this way ever since Kepler, and the reason is simpler than just "milking the market". The largest-die versions and the best bins are reserved for deep learning and workstation cards, where the margins are WAY higher than on desktop. Also, no one would buy the "full spec" cards because they're just useless for gaming, while the price would be way higher.

    - - - Updated - - -

    Quote Originally Posted by Yizu View Post
    I agree. Just look at the Fermi GTX 580 (full 520 mm² die) vs the Kepler GTX 680 (midrange 294 mm² die; we know it's midrange because they later released a full 561 mm² die as the Titan). Nvidia could just as easily release that 3090 as their hot and loud $600 card if there were competition in the market, like what happened with Fermi.
    You're wrong. The first Titan was based on a bigger chip that was developed later and became the 780/780 Ti. Nvidia couldn't make bigger chips initially (technically they could, but yields were too low) due to the immaturity of TSMC's 28nm node at the time. Do you think they would have released the two-chip monstrosity that was the GTX 690 if they could make bigger chips? With Fermi they could make bigger chips, but the chips were already too hot, and also huge.
    R5 5600X | Thermalright Silver Arrow IB-E Extreme | MSI MAG B550 Tomahawk | 16GB Crucial Ballistix DDR4-3600/CL16 | MSI GTX 1070 Gaming X | Corsair RM650x | Cooler Master HAF X | Logitech G400s | DREVO Excalibur 84 | Kingston HyperX Cloud II | BenQ XL2411T + LG 24MK430H-B

  19. #79
    Quote Originally Posted by Thunderball View Post
    No. Afaik the top products are still going to be made by TSMC. Low-end products are probably on Samsung, or maybe even a larger node initially (I think that happened with both Pascal and Turing as well). I don't doubt that Samsung wants to be competitive with TSMC, but: 1) their 7nm node has lower density than TSMC's, though it's cheaper; 2) they don't have experience making such big monolithic dies.

    Because then the benchmark is 1) useless 2) misleading.
    Well, I agree it doesn't make a lot of sense for them unless they got slapped by TSMC, because TSMC's 7nm capacity is full. The rumors of the supposed 3090 drawing near 400 W at 2 GHz do somewhat support the claim that it's on Samsung, as TSMC's 7nm is nowhere near that inefficient. Unless they are having another Fermi moment.

    What benchmark? Why? I haven't seen people misleading other people with RTX; it's cool tech, just sadly not widely supported. There are other things RTX does outside RT which do have value, but I wouldn't put much value on the RT part itself. Maybe with WoW getting RT shadows there is some value for users of this forum.

    I just can't help but think that first-gen Turing RTX will be overshadowed by Ampere RTX so much that games with RT will have trouble running on Turing if they are tuned to get the most out of Ampere.

  20. #80
    Quote Originally Posted by mrgreenthump View Post
    it's cool tech, just sadly not widely supported.
    It is widely supported. Every single new game has support for it.

    Quote Originally Posted by mrgreenthump View Post
    There are other things RTX does outside RT which do have value, but I wouldn't put much value on the RT part itself. Maybe with WoW getting RT shadows there is some value for users of this forum.
    I would. The new NVENC on these cards is amazing. DLSS is REALLY good and keeps getting better; it's unfortunately just not as well supported as the RTX lighting effects.

    Quote Originally Posted by mrgreenthump View Post
    I just can't help but think that first-gen Turing RTX will be overshadowed by Ampere RTX so much that games with RT will have trouble running on Turing if they are tuned to get the most out of Ampere.
    That depends on the model. Anything lower than a 2070S/2080 just doesn't have enough hardware to properly support those effects. Ampere will just have more hardware.
    R5 5600X | Thermalright Silver Arrow IB-E Extreme | MSI MAG B550 Tomahawk | 16GB Crucial Ballistix DDR4-3600/CL16 | MSI GTX 1070 Gaming X | Corsair RM650x | Cooler Master HAF X | Logitech G400s | DREVO Excalibur 84 | Kingston HyperX Cloud II | BenQ XL2411T + LG 24MK430H-B
