  1. #501
    Deleted
    Quote Originally Posted by Evildeffy View Post
    http://wccftech.com/intel-developing...re-generation/

    Interesting... and a massive dick move at that.
    I don't see how that is a dick move, tbh. Also, I don't see how that could make much difference; for all we know, old-school CISC instructions are translated to something much more sophisticated internally, and the legacy instructions could already be processed by the same components as the new ones...

    Even if they do break compatibility with old x86, what they are doing is evolution, not revolution, and that should not affect AMD's licence agreements.

  2. #502
    The Lightbringer Evildeffy's Avatar
    15+ Year Old Account
    Join Date
    Jan 2009
    Location
    Nieuwegein, Netherlands
    Posts
    3,772
    Quote Originally Posted by Gorgodeus View Post
    I read it all, and see no issues. They are not beholden to using AMD tech.
    So you see no issue with them basically dumping technology that is still very much in use today to force their own ideology onto others.
    They're basically deciding how the market should go instead of responding to what users want; mind you, for example, some of these CPUs would make some games stop working due to missing instruction sets.

    If they remain within the x86 domain then nothing changes in terms of licensing, and they will still have to cross-license with AMD.

    The point, however, is that if you have any program that requires those instruction sets, which either an enterprise or a gamer is using, and you want to upgrade...
    You will simply be told "Fuck off, buy an older CPU or this uber expensive server chip".

    My issue, simply put, is with the removal of backwards compatibility, as a lot of software manufacturers will not patch for something like this, especially game developers outside of MMORPGs.

    Yes, you could emulate some of them using the newer instruction sets, but like all "emulation"... it'll never be as fast or as responsive as native support.

    - - - Updated - - -

    Quote Originally Posted by larix View Post
    I don't see how that is a dick move, tbh. Also, I don't see how that could make much difference; for all we know, old-school CISC instructions are translated to something much more sophisticated internally, and the legacy instructions could already be processed by the same components as the new ones...
    As stated... emulated. Think of it as nVidia's so-called implementation of "true multi-threading capability" in Maxwell/Pascal GPUs.
    Performance hit and no guarantee of operation.
    See above for a bit more on the consequences and my point.

    Quote Originally Posted by larix View Post
    Even if they do break compatibility with old x86, what they are doing is evolution, not revolution, and that should not affect AMD's licence agreements.
    As I stated above, licensing is irrelevant if they keep calling it x86, since x86-64 has to be used regardless; switching the entire software market to a new 64-bit tech is really not going to happen.
    That was also not my reason for calling it a dick move.

  3. #503
    Deleted
    Quote Originally Posted by Evildeffy View Post
    The point, however, is that if you have any program that requires those instruction sets, which either an enterprise or a gamer is using, and you want to upgrade...
    You will simply be told "Fuck off, buy an older CPU or this uber expensive server chip".

    My issue, simply put, is with the removal of backwards compatibility, as a lot of software manufacturers will not patch for something like this, especially game developers outside of MMORPGs.

    Yes, you could emulate some of them using the newer instruction sets, but like all "emulation"... it'll never be as fast or as responsive as native support.

    - - - Updated - - -

    As stated... emulated. Think of it as nVidia's so-called implementation of "true multi-threading capability" in Maxwell/Pascal GPUs.
    Performance hit and no guarantee of operation.
    See above for a bit more on the consequences and my point.
    One simple and already widely used solution for that issue is kernel-level emulation: basically, replace missing/legacy instructions with working counterparts at runtime. This is already done in KVM and Xen as far as I know, and there is little to no performance penalty for it.
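    For illustration, a minimal user-space sketch of that trap-and-emulate idea (a toy example only, not how KVM/Xen actually implement it; it assumes Linux x86-64 with glibc):

```c
#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <ucontext.h>

/* Toy trap-and-emulate handler: catch the fault from an "unsupported"
 * opcode, pretend to emulate it, then resume at the next instruction.
 * Here ud2 (2 bytes) stands in for a removed legacy instruction. */
static void on_sigill(int sig, siginfo_t *info, void *ctx)
{
    (void)sig; (void)info;
    ucontext_t *uc = ctx;
    /* A real emulator would decode the faulting instruction and reproduce
     * its effect using supported instructions before skipping it. */
    uc->uc_mcontext.gregs[REG_RIP] += 2;
}

int main(void)
{
    struct sigaction sa = {0};
    sa.sa_sigaction = on_sigill;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGILL, &sa, NULL);

    puts("hitting an 'unsupported' instruction...");
    __asm__ volatile("ud2");   /* stand-in for a dropped legacy opcode */
    puts("handler 'emulated' it and execution continued");
    return 0;
}
```

    A real handler would have to decode the faulting opcode and reproduce its effect, so the cost in practice depends entirely on how often such instructions are hit.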

    The other solution I mentioned is already done in the CPU's inner workings. Example: you get SSE2 and AVX instructions; before either of them gets processed, they get broken down into simpler, smaller micro-ops, then put into the execution queue. There is no reason not to modify the SSE2 input during that process to fit the AVX execution units; hell, it probably is already done that way. I just don't see what Intel could gain in reality by removing support for them. Unless, of course, they assume that by 2020 they will have no competition on the market and will be free to do whatever they want. If that is not the case and AMD is still around, it could be a big win for AMD, and Intel would basically be shooting themselves in the foot.
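    A rough compiler-level analogy for "same operation, old vs new encoding" (a hypothetical demo using intrinsics from <immintrin.h>, built with something like gcc -O2 -mavx):

```c
#include <immintrin.h>
#include <stdio.h>

/* The same float addition written as a legacy 128-bit SSE intrinsic and as
 * a 256-bit AVX one. Built with -mavx, even the SSE form is emitted as the
 * VEX-encoded vaddps, and both run on the same vector execution units. */
int main(void)
{
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    float r[8];

    __m128 x = _mm_add_ps(_mm_loadu_ps(a), _mm_loadu_ps(b));          /* SSE: 4 lanes */
    _mm_storeu_ps(r, x);

    __m256 y = _mm256_add_ps(_mm256_loadu_ps(a), _mm256_loadu_ps(b)); /* AVX: 8 lanes */
    _mm256_storeu_ps(r, y);

    printf("%.1f %.1f\n", r[0], r[7]);  /* 9.0 9.0 */
    return 0;
}
```

    This only shows the source-level equivalence; what the decoder actually does with legacy encodings is up to the microarchitecture.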

  4. #504
    I don't like people who hold things back from moving forward. It took case designers until basically last year to make a case without an optical drive bay, because 12 people on the planet don't know how to use an emulator or a torrent to crack a game that no one has heard of but that they REALLY like to play.

    And who even knows what's going to happen in this particular scenario; like larix said, there are probably multiple ways of getting around the issues that would arise with a new x86 architecture.

    I just wish people wouldn't be so resistant to change. Move with the times and adapt!
    Last edited by Fascinate; 2016-12-27 at 03:25 AM.

  5. #505
    Fluffy Kitten Remilia's Avatar
    10+ Year Old Account
    Join Date
    Apr 2011
    Location
    Avatar: Momoco
    Posts
    15,160
    Going by the original source, which is way shorter, to the point, and says nothing about AMD64 licensing: the title kind of made it sound like they were going for a change in the ISA, but all the source is talking about is the core architecture, which isn't anything new. A lot of the SIMD extensions are really out of date, though. One of the biggest reasons x86 takes so much more power compared to ARM is actually all that additional stuff that isn't used outside of certain applications. Most of those applications pertain to more compute-oriented tasks like video rendering, data crunching or whatnot, which are very commonly kept up to date or are open-sourced in some cases, whereas stuff like SSE and AVX is very, very rare in games.

    It'd more than likely be removing SSE1-3 support and only keeping SSE4, and removing AVX1 to go with AVX2 / AVX-512. If the programmer didn't even do any capability checking for the CPU's extension sets, there's probably something even more wrong going on than the hardware being incapable of doing it; basically the programmer wasn't thinking at that point.
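    For reference, the kind of capability check being described is only a few lines with the GCC/Clang CPUID built-ins (a minimal sketch; the "kernels" here are placeholders, not real SIMD code):

```c
#include <stdio.h>

/* Pick a code path based on what the CPU actually reports via CPUID.
 * __builtin_cpu_supports() is a GCC/Clang built-in. */
static void add_scalar(const float *a, const float *b, float *r, int n)
{
    for (int i = 0; i < n; ++i)   /* always-available fallback */
        r[i] = a[i] + b[i];
}

int main(void)
{
    if (__builtin_cpu_supports("avx2"))
        puts("AVX2 present: would dispatch to the AVX2 kernel");
    else if (__builtin_cpu_supports("sse4.2"))
        puts("SSE4.2 present: would dispatch to the SSE kernel");
    else
        puts("neither present: using the scalar fallback");

    float a[4] = {1, 2, 3, 4}, b[4] = {4, 3, 2, 1}, r[4];
    add_scalar(a, b, r, 4);
    printf("%.1f\n", r[0]);
    return 0;
}
```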

    In a sense it's welcome, because it does lean more toward bringing the capabilities of ARM and x86 together, something that Qualcomm / Microsoft are already attempting to do at the hardware level.

    Intel really does probably need to go to a new architecture, though. They acquired the VISC ISA company (completely forgot the name), and the implications for compatibility and compute capability from that technology are pretty good. While I obviously won't know the exact details of the architecture, there is a certain point where old software / hardware can hamstring new tech. An easy example would be USB Type-A vs Type-C. Type-C has a lot more capability, including Thunderbolt / PCI-E lanes, DisplayPort (audio/video) and so forth, whereas Type-A is practically stuck. HBM or HMC stacked RAM is another example of different being better, allowing for a lot of different things, whereas DDR# is in all practicality stuck.
    You have to remember, Intel's architecture starting at Core 2 / Nehalem hasn't changed much in its base form. Everything keeps getting built on top of it. There is a limit to how much you can stack on top of something before you start worrying about breaking old things, and things get impeded because of that, both in hardware and software.

    Zen more than likely has more room to expand, but we'll see in a few years.
    Last edited by Remilia; 2016-12-27 at 03:31 AM.

  6. #506
    The Lightbringer Evildeffy's Avatar
    15+ Year Old Account
    Join Date
    Jan 2009
    Location
    Nieuwegein, Netherlands
    Posts
    3,772
    Quote Originally Posted by larix View Post
    One simple and already widely used solution for that issue is kernel-level emulation: basically, replace missing/legacy instructions with working counterparts at runtime. This is already done in KVM and Xen as far as I know, and there is little to no performance penalty for it.

    The other solution I mentioned is already done in the CPU's inner workings. Example: you get SSE2 and AVX instructions; before either of them gets processed, they get broken down into simpler, smaller micro-ops, then put into the execution queue. There is no reason not to modify the SSE2 input during that process to fit the AVX execution units; hell, it probably is already done that way. I just don't see what Intel could gain in reality by removing support for them. Unless, of course, they assume that by 2020 they will have no competition on the market and will be free to do whatever they want. If that is not the case and AMD is still around, it could be a big win for AMD, and Intel would basically be shooting themselves in the foot.
    Modifying, for example, the SSE instruction set to incorporate the previous ones is the same as it is now.
    And I've had software packages that use SSE; unfortunately, Exact and their use of SQL is one of them (Exact is CRM software integrating SQL).
    The people working on this will likely not patch it and will tell you to use older hardware, because issues WILL arise from this.
    The emulation COULD work without issues, but unfortunately removing some instruction sets and trying to emulate things doesn't always work out too well.

    Still though, I'm not too bothered by them doing this as such, but more by their attitude of "change or GTFO".
    There's still plenty of software using those instruction sets, and most of it is business-based, which means they'll be crying about their software having no upgrade path... I mean, think of how long it took for Windows XP to be dethroned from the top-dog position.
    Businesses do not want to move as it's an extremely costly endeavour.

    - - - Updated - - -

    Quote Originally Posted by Remilia View Post
    Going by the original source, which is way shorter, to the point, and says nothing about AMD64 licensing: the title kind of made it sound like they were going for a change in the ISA, but all the source is talking about is the core architecture, which isn't anything new. A lot of the SIMD extensions are really out of date, though. One of the biggest reasons x86 takes so much more power compared to ARM is actually all that additional stuff that isn't used outside of certain applications. Most of those applications pertain to more compute-oriented tasks like video rendering, data crunching or whatnot, which are very commonly kept up to date or are open-sourced in some cases, whereas stuff like SSE and AVX is very, very rare in games.
    I honestly do not even know why they'd mention the licensing crap, as IMO it has no bearing on this unless this clean-up ends up being called something entirely different from x86 and is radically different architecturally.
    Until then it really has 0 bearing on this.

    Quote Originally Posted by Remilia View Post
    It'd more than likely be removing SSE1-3 support and only keeping SSE4, and removing AVX1 to go with AVX2 / AVX-512. If the programmer didn't even do any capability checking for the CPU's extension sets, there's probably something even more wrong going on than the hardware being incapable of doing it; basically the programmer wasn't thinking at that point.
    It's frightening how many of these are out there.

    Quote Originally Posted by Remilia View Post
    In a sense it's welcome, because it does lean more toward bringing the capabilities of ARM and x86 together, something that Qualcomm / Microsoft are already attempting to do at the hardware level.

    Intel really does probably need to go to a new architecture, though. They acquired the VISC ISA company (completely forgot the name), and the implications for compatibility and compute capability from that technology are pretty good. While I obviously won't know the exact details of the architecture, there is a certain point where old software / hardware can hamstring new tech. An easy example would be USB Type-A vs Type-C. Type-C has a lot more capability, including Thunderbolt / PCI-E lanes, DisplayPort (audio/video) and so forth, whereas Type-A is practically stuck. HBM or HMC stacked RAM is another example of different being better, allowing for a lot of different things, whereas DDR# is in all practicality stuck.
    You have to remember, Intel's architecture starting at Core 2 / Nehalem hasn't changed much in its base form. Everything keeps getting built on top of it. There is a limit to how much you can stack on top of something before you start worrying about breaking old things, and things get impeded because of that, both in hardware and software.

    Zen more than likely has more room to expand, but we'll see in a few years.
    The problem with going to an entirely new architecture is getting your client base to go along with it.
    Whilst the spec is old, it still drives the entire market.

    What would happen, for example, if Intel called it the x98 architecture and offered trimmed-down instruction sets that were functionally no different from the current x86 and x64 architectures (other than possible speed improvements)?
    Most of the market would ignore Intel for a long time, because competitor AMD would still offer the old ones at a cheap price.
    That lets developers "take it easy", as the instruction sets are still being delivered by competitor AMD, without the added development or hardware costs.
    By the time the market switched, Intel would have suffered massive losses because they'd have lost most of their clientèle.

    And to make matters worse for them, if they incorporate x86 backwards compatibility then they also have to offer x86-64, which means the licence remains intact; otherwise Intel would also lose the licence to produce multi-core CPUs, integrated memory controllers, GPU-on-CPU tech, etc.

    As much as it may improve technology, this particular area is both a possible death trap as well as a dick move to people who need the older instruction sets to keep their software running.

  7. #507
    Fluffy Kitten Remilia's Avatar
    10+ Year Old Account
    Join Date
    Apr 2011
    Location
    Avatar: Momoco
    Posts
    15,160
    Well, that's the issue with wccftech: they make it sound like something it's not. CPU architecture is not the same as instruction set architecture. Going by the quoted source, x86 isn't changing; it's still the x86 ISA, and what's changing is the CPU architecture and the SIMD extensions.

  8. #508
    Old God Vash The Stampede's Avatar
    10+ Year Old Account
    Join Date
    Sep 2010
    Location
    Better part of NJ
    Posts
    10,939
    Quote Originally Posted by larix View Post
    I don't see how that is a dick move, tbh. Also, I don't see how that could make much difference; for all we know, old-school CISC instructions are translated to something much more sophisticated internally, and the legacy instructions could already be processed by the same components as the new ones...

    Even if they do break compatibility with old x86, what they are doing is evolution, not revolution, and that should not affect AMD's licence agreements.
    Intel is going to have an uphill battle getting people to adopt their new x86 architecture. Though at this point it shouldn't even be called x86, since it's not even backwards compatible. Calling it x86 would be just marketing. Good luck getting developers to port their applications over.

    Anyway, it's not like AMD doesn't plan to move beyond x86 as well. Remember, the AM4 socket platform will allow not only Zen x86 chips but ARM chips as well. Considering how far ARM has come, I would think ARM has a better chance of penetrating the market than Intel's new x86 architecture.

    It's not like Intel hasn't done this in the past and failed. Remember Itanium?
    Last edited by Vash The Stampede; 2016-12-27 at 06:09 AM.

  9. #509
    The Lightbringer Artorius's Avatar
    10+ Year Old Account
    Join Date
    Dec 2012
    Location
    Natal, Brazil
    Posts
    3,781
    I'd be extremely happy if we could move from x86 to something like RISC-V or even ARMv8. I don't really think those ISAs are "better" than x86 in any meaningful way (it's not like they're objectively worse at everything either, they're just different), but it would mean other companies would be able to design their own PC CPUs, which is amazing for the consumer. I can easily see Apple and Samsung joining the party.

    We have working builds of Windows 10 on ARM with x86 support through emulation now; remember, Apple also emulated PowerPC at some point, so I don't think it's impossible. Although Intel would do everything they could to keep x86 as the standard, even dirty things.

  10. #510
    Quote Originally Posted by Remilia View Post
    Going by the original source, which is way shorter, to the point, and says nothing about AMD64 licensing: the title kind of made it sound like they were going for a change in the ISA, but all the source is talking about is the core architecture, which isn't anything new. A lot of the SIMD extensions are really out of date, though. One of the biggest reasons x86 takes so much more power compared to ARM is actually all that additional stuff that isn't used outside of certain applications. Most of those applications pertain to more compute-oriented tasks like video rendering, data crunching or whatnot, which are very commonly kept up to date or are open-sourced in some cases, whereas stuff like SSE and AVX is very, very rare in games.

    It'd more than likely be removing SSE1-3 support and only keeping SSE4, and removing AVX1 to go with AVX2 / AVX-512. If the programmer didn't even do any capability checking for the CPU's extension sets, there's probably something even more wrong going on than the hardware being incapable of doing it; basically the programmer wasn't thinking at that point.
    Yeah, there's no way they'll move away from the current AMD64 ISA. The uarch change is just under the hood; most parts of the interface (the ISA) will remain the same.

    They probably want to trim the obsolete stuff from it, like all the initialization dance the OS has to go through until it's in x64 mode. You literally need code from the '80s to bootstrap the OS (yes, 16-bit stuff!), then you skip through two more modes because they're useless anyway now, while everything for those modes is still present in the CPU. And then there's the 32-bit compatibility mode to maintain; I wonder if that's on the chopping block too, as it doesn't have to be. It's not too complicated to map user-level 32-bit instructions to 64-bit ones in microcode, and the OS will have to do some trickery as well so that programs keep working, but it's doable. 32-bit ring 0 code will be dead, though.

    As far as SIMD instructions go, all SSE iterations are forward compatible, so removing 1-3 but leaving 4 doesn't make sense (actually, if you're looking only at 4, it's not a huge instruction set and not very useful on its own). AVX was the first point where forward compatibility broke, and I guess that's where Intel wants to draw the line. AVX is conceptually (not the instructions themselves) backwards compatible with SSE, but forward compatibility was lost; when code switches modes constantly (without zeroing the vectors first), many CPU cycles are wasted, and in general it's very bad practice because it interferes with OoOE.
    My guess is that what they want to do is map legacy SSE instructions to their respective VEX-prefixed equivalents in microcode. This is a relatively easy change which should preserve full backwards compatibility with all normal code. Stuff that actively mixes SSE and AVX without zeroing out registers will break, though.
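    A minimal sketch of that SSE/AVX mixing hazard and the usual fix (an assumed demo built with -mavx; in real code the legacy-encoded SSE typically lives in a separately compiled library):

```c
#include <immintrin.h>
#include <stdio.h>

/* After 256-bit AVX work the upper halves of the YMM registers are "dirty";
 * running legacy-encoded SSE code at that point triggers the transition
 * penalty. _mm256_zeroupper() (vzeroupper) clears the uppers first.
 * Note: with -mavx the compiler VEX-encodes these SSE intrinsics anyway and
 * inserts vzeroupper at call boundaries itself, so treat this purely as an
 * illustration of where the zeroing belongs. */
int main(void)
{
    float v[8] = {1, 2, 3, 4, 5, 6, 7, 8}, out[8];

    __m256 wide = _mm256_add_ps(_mm256_loadu_ps(v), _mm256_loadu_ps(v));
    _mm256_storeu_ps(out, wide);     /* 256-bit AVX work: YMM uppers now dirty */

    _mm256_zeroupper();              /* zero uppers before any legacy SSE code */

    __m128 narrow = _mm_mul_ps(_mm_loadu_ps(out), _mm_loadu_ps(out));
    _mm_storeu_ps(out, narrow);      /* 128-bit SSE work, no transition penalty */

    printf("%.1f\n", out[0]);        /* 4.0 */
    return 0;
}
```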

    But all this talk about removing some backwards compatibility should be small stuff. And that's what ticks me off: it should be, but it feels like it's not. It's not like this wasn't brought up many, many times before, and each and every time it was shot down with the same argument. Now that same argument is being used the other way around... somewhat puzzling. Currently I can think of only one explanation, and it's not a good one for the technology. So I hope I'm wrong.
    Last edited by dadev; 2016-12-27 at 07:43 PM.

  11. #511
    Ok, here is a question for the experts here who know their stuff.

    Say the new Intel and AMD CPUs are great 4-core processors, they're affordable, and 4 cores become the standard for gaming.

    What does that mean for the old AMD 8300-9590 processors or the 4-core Intel processors out right now? If software developers, gaming companies preferably of course, start to make full use of four cores, wouldn't the older processors, specifically the AMD 8300-9590, get a huge boost in performance, since that's what those processors were designed for?

  12. #512
    The Lightbringer Evildeffy's Avatar
    15+ Year Old Account
    Join Date
    Jan 2009
    Location
    Nieuwegein, Netherlands
    Posts
    3,772
    Quote Originally Posted by Jimmy Thick View Post
    Ok, here is a question for the experts here who know their stuff.

    Say the new Intel and AMD CPUs are great 4-core processors, they're affordable, and 4 cores become the standard for gaming.

    What does that mean for the old AMD 8300-9590 processors or the 4-core Intel processors out right now? If software developers, gaming companies preferably of course, start to make full use of four cores, wouldn't the older processors, specifically the AMD 8300-9590, get a huge boost in performance, since that's what those processors were designed for?
    Yes, it should alleviate SOME of the bottlenecks they experience.

    The FX series, when used in properly threaded games, isn't that bad at all; it still isn't as powerful as current-gen Intel CPUs, but it would make up some of the difference, yes.
    That said, they'd still be outdated architectures and still slower; what this would actually do is shine more light on the FM2+ socket and its CPUs.
    For example, the Athlon 880K, as that is two CPU generations later than the FX series and core-for-core actually noticeably more potent.

  13. #513
    Fluffy Kitten Remilia's Avatar
    10+ Year Old Account
    Join Date
    Apr 2011
    Location
    Avatar: Momoco
    Posts
    15,160
    Quote Originally Posted by dadev View Post
    Yeah, there's no way they'll move away from the current AMD64 ISA. The uarch change is just under the hood; most parts of the interface (the ISA) will remain the same.

    They probably want to trim the obsolete stuff from it, like all the initialization dance the OS has to go through until it's in x64 mode. You literally need code from the '80s to bootstrap the OS (yes, 16-bit stuff!), then you skip through two more modes because they're useless anyway now, while everything for those modes is still present in the CPU. And then there's the 32-bit compatibility mode to maintain; I wonder if that's on the chopping block too, as it doesn't have to be. It's not too complicated to map user-level 32-bit instructions to 64-bit ones in microcode, and the OS will have to do some trickery as well so that programs keep working, but it's doable. 32-bit ring 0 code will be dead, though.

    As far as SIMD instructions go, all SSE iterations are forward compatible, so removing 1-3 but leaving 4 doesn't make sense (actually, if you're looking only at 4, it's not a huge instruction set and not very useful on its own). AVX was the first point where forward compatibility broke, and I guess that's where Intel wants to draw the line. AVX is conceptually (not the instructions themselves) backwards compatible with SSE, but forward compatibility was lost; when code switches modes constantly (without zeroing the vectors first), many CPU cycles are wasted, and in general it's very bad practice because it interferes with OoOE.
    My guess is that what they want to do is map legacy SSE instructions to their respective VEX-prefixed equivalents in microcode. This is a relatively easy change which should preserve full backwards compatibility with all normal code. Stuff that actively mixes SSE and AVX without zeroing out registers will break, though.

    But all this talk about removing some backwards compatibility should be small stuff. And that's what ticks me off: it should be, but it feels like it's not. It's not like this wasn't brought up many, many times before, and each and every time it was shot down with the same argument. Now that same argument is being used the other way around... somewhat puzzling. Currently I can think of only one explanation, and it's not a good one for the technology. So I hope I'm wrong.
    Was not aware of SSE being forward compatible.

    Either way I do want to see what happens, since new stuff is fun. Whether it pans out well is a different thing.

  14. #514
    Old God Vash The Stampede's Avatar
    10+ Year Old Account
    Join Date
    Sep 2010
    Location
    Better part of NJ
    Posts
    10,939
    Quote Originally Posted by Jimmy Thick View Post
    Ok, here is a question for the experts here who know their stuff.

    Say the new Intel and AMD CPUs are great 4-core processors, they're affordable, and 4 cores become the standard for gaming.

    What does that mean for the old AMD 8300-9590 processors or the 4-core Intel processors out right now? If software developers, gaming companies preferably of course, start to make full use of four cores, wouldn't the older processors, specifically the AMD 8300-9590, get a huge boost in performance, since that's what those processors were designed for?
    It's going to take a while before developers update their games to make better use of those CPUs. Most people don't go out and buy the latest and greatest immediately, and software developers don't want to redo all their code just to make a small group of people happy. The new 8-core Zen will likely be $300+, so it's out of the price range of most people. The quad-core Zens will be affordable, but games being retooled for quad core won't benefit the FX 8350s or 8370s much.

    Then you have laptops, which for whatever reason Intel loves to give dual cores. Laptop i3s/i5s and even some i7s are dual core; I wouldn't be surprised if the majority of Steam users with dual cores are on laptops. It'll take a killer app that scales nearly 100% with CPU cores for this transition to happen overnight. Which it could, if say HL3 scaled nearly 100% with CPU cores and the game was so amazing that the gravity gun could rip pieces off cars and buildings, with AI so smart it's damn near like AI living in your virtual game, kinda like Reboot. But we're not there, because consoles like the PS4 and Xbox are running off AMD's Jaguar cores. And if you thought the FX chips were bad, you don't want to know how bad the Jaguar chips are.

    But what I hope will come from this is AMD and Intel fighting each other and lowering prices while providing better products. Also, AMD's ARM chips might give some of us an alternative to x86 and a way to move away from that horrible x86 market. It'll be a long time before a desktop ARM chip could be a viable purchase for PC gaming, but plenty of Linux guys can make good use of ARM in ways that don't revolve around gaming.

  15. #515
    Fluffy Kitten Remilia's Avatar
    10+ Year Old Account
    Join Date
    Apr 2011
    Location
    Avatar: Momoco
    Posts
    15,160
    Do we have any idea what AMD is planning with ARM-based CPUs anyway? We've heard they want to expand that way, but nothing else that I remember.

  16. #516
    Quote Originally Posted by Remilia View Post
    Do we have any idea what AMD is planning with ARM-based CPUs anyway? We've heard they want to expand that way, but nothing else that I remember.
    From what I heard, they were planning on using dedicated ARM cores as co-processors to handle certain tasks.

  17. #517
    Old God Vash The Stampede's Avatar
    10+ Year Old Account
    Join Date
    Sep 2010
    Location
    Better part of NJ
    Posts
    10,939
    Quote Originally Posted by Butler Log View Post
    From what I heard, they were planning on using dedicated ARM cores as co-processors to handle certain tasks.
    AMD plans to make dedicated ARM CPUs. What you're talking about is what they're doing for encryption or some security feature: literally a co-processor to the Zen cores. There's going to be a day when you can choose between an x86 Zen CPU or an AMD ARM CPU for the AM4 socket.

  18. #518
    The Lightbringer Artorius's Avatar
    10+ Year Old Account
    Join Date
    Dec 2012
    Location
    Natal, Brazil
    Posts
    3,781
    5 GHz Zen OC-on-air rumour; probably fake, but if true that's incredibly insane.

    - - - Updated - - -

    Also AMD's K12 is ARMv8, designed alongside Zen.

  19. #519
    Fluffy Kitten Remilia's Avatar
    10+ Year Old Account
    Join Date
    Apr 2011
    Location
    Avatar: Momoco
    Posts
    15,160
    I'd go with fake. Then again, some people who OC do disable all but one or two cores and SMT and then see what happens. That becomes pretty pointless, though.

  20. #520
    Quote Originally Posted by Remilia View Post
    I'd go with fake. Then again, some people who OC do disable all but one or two cores and SMT and then see what happens. That becomes pretty pointless, though.
    Disable all but one core, dump 10 kg of liquid nitrogen on it, and give it all the voltage the VRM can handle
