  1. #41
    Quote Originally Posted by kaelleria View Post
    You've got it reversed... Samsung has terrible yields while TSMC has great yields. Expect availability of the 30 series to be very limited until next year.
    Uh, you're utterly daft if you think that is true. Samsung has some of the best yields in the business. But, hey, plenty of people in this forum seem to live in fantasy worlds where everything is the opposite of reality. Seems you're one of them.

    TSMC's issue isn't yields, it's being fully booked. They're already booked out 18 months; Samsung is not (because they don't do a lot of chip fabbing for other companies, mostly just for themselves), and Samsung can bump people if it needs to. If that pisses off a smaller customer, they simply don't care. Samsung doesn't rely on outside business for its chip fabs to be profitable. So if nVidia comes along and says "here's a giant bonus, put us at the top of the queue", Samsung will do it.

    TSMC can't get away with that. They are entirely based around producing other people's designs. They piss off a customer and that's lost money.

    - - - Updated - - -

    Quote Originally Posted by kaelleria View Post
    Should be coming this year in fact...
    More of the reverse reality world.

    AMD has NEVER set a date for Ryzen 4. The last set of leaked slides showed it as "Late 2020", and the most recent leaks (from inside TSMC) have suggested Q1 2021 is more realistic. If it launches in "Late 2020" it'll be a vapor launch with no stock available. Don't expect "good" availability until spring 2021.

    Jay just did a video on how AMD takes literally FOREVER to pivot or recover from a misstep.

    They put out the Zen 2 laptop chips WAY too late. They needed to launch in Jan or Feb. Instead, they waited so damn long that they got a whole 5 weeks of impressive performance vis-a-vis Intel's laptop chips...

    And then Tiger Lake launched and basically ate their lunch, and their dinner while it was at it. 4-core/8-thread parts beating full 8-core parts by a wide margin. Xe GPU performance twice as good as any of the laptop APUs.

    Because they take too damn long to react and pivot (in part because of their reliance on TSMC, to whom they are just one more customer).

    - - - Updated - - -

    Quote Originally Posted by Vilendor View Post
    There is no way they can keep up with the demand; I expect half the PC community will want one. They will run out of stock and stores will sell it for 1000$+ for many months.
    Lolwut? 50% of the PC community is going to run out and spend 700$ on a GPU? Are you insane?

    The money making GPUs are always the low-midrange SKUs. For every 2080/SUPER and 2080Ti that got sold, 20 1660 variants got sold.

    The average gamer doesn't have 700$ to throw down on JUST a GPU.

    The "average gamer" is playing on a rig that cost 700-800$ for the ENTIRE RIG.

    - - - Updated - - -

    Quote Originally Posted by Coldkil View Post
    I don't know exact numbers obviously, but that new tech is meant exactly for this. GPUs are getting faster at a much higher rate than CPUs, so giving them direct storage access frees up quite a bunch of CPU resources - especially since game data is now a huge bulk compared to older generations.
    It will mostly help with initial loading times and let them stream data for nearby assets much faster, but it won't magically reduce CPU dependency for most things (like AI, issuing draw calls, etc.), and it won't help much beyond the initial load for games that aren't streaming entire worlds from storage - Doom Eternal, for instance, loads the entire level into memory in one go. This will make that load faster, but it won't really affect gameplay.
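
    To illustrate what I mean (this is a toy Python sketch, NOT the actual DirectStorage/RTX IO API; the file names, timings, and thread setup are all made up), the win is overlapping asset streaming with gameplay instead of blocking on one big load:

    import queue
    import threading
    import time

    def load_asset(name):
        # stand-in for "read + decompress one asset"; the whole point of the
        # new tech is doing this work without burning CPU cycles
        time.sleep(0.01)
        return f"data({name})"

    def load_level_blocking(assets):
        # Doom Eternal style: pull everything in up front, gameplay waits
        return {name: load_asset(name) for name in assets}

    def streaming_loader(requests, loaded):
        # open-world style: keep pulling nearby assets in the background
        while True:
            name = requests.get()
            if name is None:
                break
            loaded[name] = load_asset(name)

    if __name__ == "__main__":
        assets = [f"chunk_{i}" for i in range(100)]

        start = time.time()
        load_level_blocking(assets)
        print(f"blocking load: {time.time() - start:.2f}s before the first frame")

        loaded, requests = {}, queue.Queue()
        threading.Thread(target=streaming_loader, args=(requests, loaded), daemon=True).start()
        for name in assets:
            requests.put(name)  # the render loop keeps running while these drain
        requests.put(None)
        print("streaming load: first frame can start immediately, assets trickle in")

    Faster storage access shortens both paths, but only the first one is what players actually notice as "loading".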
    Last edited by Kagthul; 2020-09-04 at 11:31 PM.

  2. #42
    Quote Originally Posted by Kagthul View Post
    Uh, you're utterly daft if you think that is true. Samsung has some of the best yields in the business. But, hey, plenty of people in this forum seem to live in fantasy worlds where everything is the opposite of reality. Seems you're one of them.
    What?! Samsung has been reported to have terrible yields with their newer processes for the past 6 months or so. It's only very recently that it was reported they've gotten their 7nm to an acceptable level in terms of yields.

    - - - Updated - - -

    Quote Originally Posted by Kagthul View Post

    More of the reverse reality world.

    AMD has NEVER set a date for Ryzen 4. The last set of leaked slides showed it as "Late 2020", and the most recent leaks (from inside TSMC) have suggested Q1 2021 is more realistic. If it launches in "Late 2020" it'll be a vapor launch with no stock available. Don't expect "good" availability until spring 2021.
    Except Lisa Su has commented multiple times that Zen 3 will launch this year (including desktop).

    - - - Updated - - -

    Quote Originally Posted by Kagthul View Post
    And then Tiger Lake launched and basically ate their lunch, and their dinner while it was at it. 4-core/8-thread parts beating full 8-core parts by a wide margin. Xe GPU performance twice as good as any of the laptop APUs.
    Considering we have nothing other than Intel's "real world benchmarks" to go on... maybe just wait for reviews to see what happens. Xe does look somewhat promising though. But again, reviews.

  3. #43
    Quote Originally Posted by Rennadrel View Post
    No it isn't, and in literally every performance metric out there it's not worth the price for the very slight performance advantage it has, so stop talking out your ass about things you don't understand. In both Cinebench and PassMark, the 10700K scores marginally better than the high-end Ryzen 9, which is still cheaper. The performance metrics are all out there; a 5-20% advantage depending on the application is not worth nearly double the price.
    Spoken like someone who always settles for "good enough" or doesn't make money with his computer.

    I don't do that kind of work anymore, but when I was doing video editing, a 20% performance boost was worth basically any dollar amount. I billed myself at 70$ an hour. If I could render 20% more per day, that's 20% more money. A CPU that costs twice as much but lets me render, and therefore bill, 20% more? It pays for itself in a week. If that.
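
    Back-of-the-envelope version of that math (the 70$/hr and 20% are from above; the 250$ CPU premium and the 40-hour week are just numbers I'm making up for the example):

    # Rough payback calculation for a faster CPU when you bill by output.
    rate = 70.0            # $/hour billed (from above)
    hours_per_week = 40.0  # assumed full-time schedule
    speedup = 0.20         # 20% more renders finished, so 20% more billable output
    cpu_premium = 250.0    # hypothetical extra cost of the faster chip

    extra_per_week = rate * hours_per_week * speedup
    print(f"extra billing: ${extra_per_week:.0f}/week")
    print(f"pays for itself in {cpu_premium / extra_per_week:.1f} weeks")

    Even if you double that premium, it's still paid off in under a week of billing.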

    And if you're doing actual pro-level work (and making commensurate money), you almost never even look at the price beyond "can I get it to pay for itself within X weeks", and if the answer is yes, you press the buy button.

    When I worked for an ad agency, they replaced our Macs (Power Mac G5s at the time) every generation, because the upgrade paid for itself in weeks. Or less, sometimes.

    Similarly, when you're an enthusiast, you pay more for the best. Full stop. Does anyone NEED a 3090? No. The answer is no. No one "needs" it. A 3080 will do 4K at 100+ fps, and no one is seriously gaming at 8K.

    But there are enthusiasts out there who will buy one day 1, even if they know that aftermarket/3rd party cards might be cheaper (see the 2080Ti). Because it's their hobby/jam/whatever. So they spend up to get the best, because that's their thing.

    Similarly, if I were building right now... I'd still be buying Intel. Granted, my use case is very specific (I ONLY game on my Windows PC; I have a Mac for daily driving and a Chromebook for casual use in the living room), so I don't get any real benefit out of 10 cores or any of that nonsense, but I will gain noticeable gaming performance with a 10600K vs a Ryzen 3600/X, because I can OC that 10600K to 5.1+ GHz no problem, and that Ryzen won't break 4.3 GHz single-core and probably not much north of 4.1 GHz unless the chip is a golden sample.

    That's worth the extra ~120$ to me.

    - - - Updated - - -

    Quote Originally Posted by mrgreenthump View Post
    What?! Samsung has been reported to have terrible yields with their newer processes for the past 6 months or so. It's only very recently that it was reported they've gotten their 7nm to an acceptable level in terms of yields.
    You're gonna have to source that. Because Google doesn't have shit in the first three pages. Samsung's yields have traditionally been extremely good.

    - - - Updated - - -

    Quote Originally Posted by mrgreenthump View Post
    Except Lisa Su has commented multiple times that Zen 3 will launch this year (including desktop).
    How nice for her. She also said that Big Navi would be here BEFORE the consoles, back in February... now it's a vague "after".

    Lisa Su saying something is not a release date. And if it launches in December, it won't be realistically available until February. Fuck, there are entire countries where you can't even FIND the 3100 and 3300X. And they've been "out" for MONTHS. AMD is basically the king of missed release dates and deadlines.

    - - - Updated - - -

    Quote Originally Posted by mrgreenthump View Post
    Considering we have nothing other than Intel's "real world benchmarks" to go on... maybe just wait for reviews to see what happens. Xe does look somewhat promising though. But again, reviews.
    Uh... wut? There are actual benchmarks from 3rd-party machines in the Passmark databases. Have been for several weeks.

  4. #44
    Quote Originally Posted by Kagthul View Post
    I don't do that kind of work anymore, but when I was doing video editing, a 20% performance boost was worth basically any dollar amount. I billed myself at 70$ an hour. If I could render 20% more per day, that's 20% more money. A CPU that costs twice as much but lets me render, and therefore bill, 20% more? It pays for itself in a week. If that.
    Then you wouldn't look at a 10700K, you'd look for the best. And the topic is that the 10700K is sort of useless compared to both Intel's own lineup and AMD's.

    - - - Updated - - -

    Quote Originally Posted by Kagthul View Post
    Uh.. wut? There are actual benchmarks from 3rd party machines in the Passmark databases. Have been for several weeks.
    Sorry if I missed that, but I've trained myself to ignore anything to do with Passmark. Maybe they've fixed their evaluation process, but earlier this year they somehow came to the conclusion that the Ryzen 3 4300U was the fastest AMD CPU.

  5. #45
    Quote Originally Posted by mrgreenthump View Post
    Except Lisa Su has commented multiple times that Zen 3 will launch this year (including desktop).
    She can say whatever she wants, but true availability is not happening this year, especially for the parts people want, i.e. 6-core and entry-level 8-core parts.
    R5 5600X | Thermalright Silver Arrow IB-E Extreme | MSI MAG B550 Tomahawk | 16GB Crucial Ballistix DDR4-3600/CL16 | MSI GTX 1070 Gaming X | Corsair RM650x | Cooler Master HAF X | Logitech G400s | DREVO Excalibur 84 | Kingston HyperX Cloud II | BenQ XL2411T + LG 24MK430H-B

  6. #46
    Quote Originally Posted by Kagthul View Post
    As Miyagie pointed out - if that were going to be true, it would have already happened.

    XBone and PS4 run on X86-64 CPUs that are 8 cores, and that's never meant that games support more cores, specifically.
    But both the PS4 and XBone have 2 quad-core CPUs inside, not a single 8-core CPU. So they're packing 2x4 cores, which is technically 8 cores but divided between 2 different processors. Next gen uses a single 8-core CPU.

  7. #47
    Quote Originally Posted by Viikkis View Post
    But both the PS4 and XBone have 2 quad-core CPUs inside, not a single 8-core CPU. So they're packing 2x4 cores, which is technically 8 cores but divided between 2 different processors. Next gen uses a single 8-core CPU.
    This is irrelevant; all of this is done just due to manufacturing. Look at the 3300X vs the 3100 - it's the exact same thing.
    R5 5600X | Thermalright Silver Arrow IB-E Extreme | MSI MAG B550 Tomahawk | 16GB Crucial Ballistix DDR4-3600/CL16 | MSI GTX 1070 Gaming X | Corsair RM650x | Cooler Master HAF X | Logitech G400s | DREVO Excalibur 84 | Kingston HyperX Cloud II | BenQ XL2411T + LG 24MK430H-B

  8. #48
    Quote Originally Posted by Thunderball View Post
    This is irrelevant, all of this is done just due to manufacturing. Look at 3300X vs 3100 - it's the exact same thing.
    What? Are you seriously saying 2x4-core Jaguar is anything like 2+2 CCX Zen 2? Also, the 3300X is superior in gaming to the 3100. And don't forget that we are talking freaking Jaguar vs Zen 2. The difference will be immense even if Jaguar were a true 8-core.

  9. #49
    Quote Originally Posted by Viikkis View Post
    But both the PS4 and XBone have 2 quad-core CPUs inside, not a single 8-core CPU. So they're packing 2x4 cores, which is technically 8 cores but divided between 2 different processors. Next gen uses a single 8-core CPU.
    I'm amused that you think this is a remotely valid point.

    And, well, it isn't. Jaguar is just like Bulldozer - it's 8 cores, with shared resources between 2-core pairs.

    And... Ryzen is basically ALSO 4-core CPUs glued together with Infinity Fabric. The CCXes are basically individual CPUs. They're just tied together differently (far better) than the Bulldozer-based architecture.

    Which is STILL irrelevant, as the point is:

    Developers have had 8-core X86-64 to develop for, for over 10 years, and in all that time they've never made "moar coarez!" a thing. And if there was any environment that would have tried to make multicore important, it would have been the Bone and PS4 - because they were already weaksauce CPUs to begin with. They'd have been trying to squeeze every ounce of performance out of them.

    And yet, massively multi-core-aware games still aren't a thing.

    Because that isn't how software development (for games) works, and because in gaming workloads there is some stuff you just cannot parallelize - it MUST be done in sequence, and will therefore be bound to single-core performance before anything else.
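
    That's just Amdahl's law. Quick sketch in Python (the 30% serial share is a made-up example, not a measurement of any real engine):

    def max_speedup(serial_fraction, cores):
        # Amdahl's law: the serial part of the frame doesn't shrink with more cores
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

    for cores in (4, 8, 16, 32):
        print(f"{cores:>2} cores -> at best {max_speedup(0.30, cores):.2f}x")
    # with 30% of the frame stuck in sequence, even 32 cores tops out around 3x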

    If "moar coarz!" was ever going to take off (for gaming), it would have already done so. Its been 10 years (incl. pre-release dev time).

    Especially when you consider that porting between the Xbone and Windows is barely even a thing (given that the Bone is running Win10 under the hood): if things were optimized for lots of cores and performed well on Jaguar, they would have SCREAMED on a modern Zen or Intel chip.

    And yet, they don't.

    They've had the time. They didn't do it. They aren't going to suddenly start.

    - - - Updated - - -

    Quote Originally Posted by mrgreenthump View Post
    What? Are you seriously thinking 2x4 core Jaguar is anything like 2+2 ccx Zen 2? Also 3300X is superior in gaming to 3100. And don't forget that we are talking freaking Jaguar vs Zen 2. The difference will be immense even if Jaguar would be a true 8 core.
    Not sure what point you're trying to make (I can't see Thunderd- his posts, so I can only see what you quoted).

    The SPEED of the cores involved is 1000% irrelevant.

    The issue is the claim of "well now that the consoles have high-core-count CPUs, that's going to be REQUIRED ON PC TOO, 'cause MOAR COARZ!!!"

    Which is just horseshit of the highest order.

    They've had over a decade to optimize for 8 X86-64 cores.

    A decade.

    A. Decade.

    And yet, it still hasn't happened. If the emergence of "lots of cores" was going to lead to sudden leaps in performance, it would have already happened, since the consoles have had lots of cores for the last several generations (the PS3 was 8 cores, IIRC, though it was a custom architecture; the X360 was a fast triple-core PowerPC (RISC) chip; both the PS4 and Bone have been 8 cores).

    And yet, when you move the games to a PC, especially from Bone > PC (both X86-64 and Win10)... there's no performance increase.

    Because developers don't care about "MOAR COARZ!". They put in the required work to hit the desired framerate + detail settings, and then stop working on optimization unless later additions make performance drop below the desired results.

    So, again... if the availability of high core counts was going to suddenly require MOAR COARZ! on PC, it's had 10+ years to materialize, and yet here we are.

  10. #50
    Quote Originally Posted by Kagthul View Post
    I'm amused that you think this is a remotely valid point.
    Glad you got some fun out of it then.

  11. #51
    You're looking at about a 10% bottleneck with an 8600k and an RTX 3080 at 1080p, less at 1440p and 4K. This is perfectly fine.

  12. #52
    Quote Originally Posted by Laqweeta View Post
    I'm getting everything this gen just to make sure: an i7 10700K (AMD single-core performance is still weak compared to Intel, so pass), 32GB of 3600 RAM, and a 3080.
    Great choice! I bought the i7 10700k and it's a beast! It also runs much cooler than Ryzen.

  13. #53
    Mine was at 7 years; I went from a Haswell i5 4690K to an i7 10700K and the performance boost was significant, plus having NVMe storage made my computer boot instantly.

  14. #54
    An 8600k should be able to handle any game at 60 FPS. If your intended framerate isn't changing, then your CPU requirement isn't changing either.
