1. #1
    Herald of the Titans RicardoZ's Avatar
    Join Date
    Oct 2011
    Location
    Orange County, CA
    Posts
    2,569

    Why doesn't Intel make GPUs?

    You'd think, with how well their CPUs do in gaming, they'd have gotten in on the act. Why do you think they haven't?

  2. #2
    They were going to, then decided not to.

    http://en.wikipedia.org/wiki/Larrabe...oarchitecture)
    i7-4770k - GTX 780 Ti - 16GB DDR3 Ripjaws - (2) HyperX 120s / Vertex 3 120
    ASRock Extreme3 - JBL S300A - FiiO E10 - EVGA Supernova 650G - Corsair H80i

    build pics

  3. #3
    Titan Synthaxx's Avatar
    Join Date
    Feb 2008
    Location
    Rotherham, England/UK
    Posts
    13,034
    They have. They're built into the CPUs.

    If you're referring to dedicated cards, GPUs require very different development processes from CPUs. They're not just "CPUs for graphics" any more; they're extraordinarily good at things CPUs aren't too great at these days (but which they used to be superior at), such as floating-point calculation, which is used in many scientific applications. Things like Folding@home and Bitcoin mining are some of the better examples, but PhysX is probably the one gamers can relate to most. For this reason, it's better to leave development to the existing competitors in the market and simply focus on leading the way with CPUs.

    If Intel made GPUs, they'd likely end up being slammed with monopoly charges.
    Coder, Gamer - IOCube | #Error418MasterRace #ScottBrokeIt
    Knows: Node.js, JS + JQuery, HTML + CSS, Object Pascal, PHP, WQL/SQL

    PC: 750D / 16GB / 256GB + 750GB / GTX780 / 4670K / Z87X-UD4H | Laptop: 8GB / 120GB + 480GB / GTX765M / 4700MQ

  4. #4
    Old God apepi's Avatar
    Join Date
    Dec 2008
    Location
    Mostly harmless
    Posts
    10,011
    I think Intel might try to make a discrete GPU; I'm not 100% sure on it, though.
    http://www.bit-tech.net/news/hardwar...creative-gpu/1
    And this is somewhat relevant: http://blogs.nvidia.com/2010/06/gpus...us-says-intel/
    Time...line? Time isn't made out of lines. It is made out of circles. That is why clocks are round. ~ Caboose

  5. #5
    High Overlord
    Join Date
    Oct 2008
    Location
    UK
    Posts
    105
    They did actually try, quite some years ago - see http://en.wikipedia.org/wiki/Intel740

    I suspect they've concluded it's not worth the effort it would take for them to develop a truly competitive stand-alone graphics chipset, when they can chuck out integrated stuff that's "good enough" to cover massive chunks of the market (eg business, non-gaming home use, etc).

  6. #6
    Fluffy Kitten Badpaladin's Avatar
    Join Date
    May 2010
    Location
    ccaarrrrllllll
    Posts
    11,090
    Quote Originally Posted by Faffin View Post
    I suspect they've concluded it's not worth the effort it would take for them to develop a truly competitive stand-alone graphics chipset, when they can chuck out integrated stuff that's "good enough" to cover massive chunks of the market (eg business, non-gaming home use, etc).
    Precisely. I'd equate it to Apple's iOS Maps. They'd have so much ground to cover and would likely end up falling short in some (or many) areas. GPUs, especially modern ones, are vastly different from CPUs. Treading new ground isn't such a good investment for them.
    My Short Required Reading List: One. Two. || Last.fm

  7. #7
    Quote Originally Posted by RicardoZ View Post
    You'd think with how their cpu's favor gaming so much they'd have gotten in on the act. Why do you think they haven't?
    Intel has NEVER favored gaming; they focus on multi-threaded programs.

  8. #8
    As mentioned already, Intel did make an attempt at a discrete GPU, called Larrabee. It was scrapped and the technology rolled over into what we now know as Xeon Phi, a coprocessor for servers.

    Quote Originally Posted by fatalwario View Post
    Intel has NEVER favored gaming; they focus on multi-threaded programs.
    And yet the 2500K and 3570K are the CPUs of choice for gamers. All companies are focused on multi-threaded programs, including mobile chip makers. It's the future of programming. Not acquiring a 3D graphics card company that already competes in the gaming market doesn't mean Intel doesn't care about gamers.

    Isn't Intel's LANFest going on right now?

    On a side note, I wonder what it would have been like had Intel picked up 3DFX instead of Nvidia. That would have made things really interesting.

  9. #9
    Titan Synthaxx's Avatar
    Join Date
    Feb 2008
    Location
    Rotherham, England/UK
    Posts
    13,034
    Quote Originally Posted by Dizey View Post
    And yet the 2500K and 3570K are the CPUs of choice for gamers. All companies are focused on multi-threaded programs, including mobile chip makers. It's the future of programming. Not acquiring a 3D graphics card company that already competes in the gaming market doesn't mean Intel doesn't care about gamers.

    Isn't Intel's LANFest going on right now?

    On a side note, I wonder what it would have been like had Intel picked up 3DFX instead of Nvidia. That would have made things really interesting.
    Maybe those chips are well suited to gaming, but that's more a side effect of improvements in the architecture than any direct focus on gaming. It just happens that gaming is big business and their chips coincided with what gamers wanted. Also, multithreading isn't the future any more; it's the present (even the past, if you want). You could say parallel processing, and using GPUs for general processing, is the present, but yeah, multithreading isn't big any more; it's standard.

    Roll back to 2005 or so, and you'll see that multithreading was the big thing everyone should be doing. It was possible, but it was still complicated. Roll on to 2006, and it was 64-bit. 2008, standardisation of architectures (e.g. Hyper-Threading not requiring special code to take advantage of). Two of those are now "old news"; the other has sort of died out for all but the largest applications (64-bit, mainly reserved for media programs in the consumer sector and scientific programs in education). The last one in that list also applies to GPUs. I remember when details were revealed about NVIDIA's DX10 cards and their 384-bit bus. At the time, I was on another forum, and it baffled some of the regulars for a while.

    It's easier than ever to make a program multithreaded. Even a blank program built with tools from the past 2-3 years will create multiple threads. For example, my own programs spawn 10 or so threads of their own the moment I start to work on them. If I want to run my own code in a thread, know how many lines of additional code it takes? Two. There are limitations on what you can do in threads (don't mess with the interface or with things being run in other threads), which might mean you need to rewrite your code and split it into multiple routines, but two lines to multithread code is all it takes. Even then, these lines are very short: one is simply the opener for the new code, and the other is only a procedure definition in itself. This isn't even an official implementation, either; it's a third-party project that someone was good enough to make simple. It's just no big deal any more to make something multithreaded. There's only so far you can go with threads before the performance you gain becomes too negligible to warrant another thread (hence why MMOs are typically viewed with a level of contempt).
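    The post doesn't name its tools, but the "two lines to run your code in a thread" idea translates to most languages. A minimal sketch using Python's standard threading module, where crunch and its busywork are made up for illustration:

```python
import threading

def crunch(results):
    # Illustrative worker: some floating-point busywork off the main thread.
    results["total"] = sum(x * x for x in range(10000))

results = {}

# The "two lines" that actually put the call on a thread:
worker = threading.Thread(target=crunch, args=(results,))
worker.start()

# The usual caveat applies: join (wait for) the worker before
# touching state it writes to, and don't poke the UI from it.
worker.join()
print(results["total"])
```

    The same caveat from the post holds here: the thread body has to be split out into its own routine before those two lines can point at it.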

    As for Intel picking up 3DFX, I dare say it'd simply have meant their IGPs coming about earlier and the market moving to mobile a lot sooner, without a thought for the desktop market. Sure, we'd have hardware, but I think both hardware and software would be a lot different, and much research would have been skipped in the process.
    Coder, Gamer - IOCube | #Error418MasterRace #ScottBrokeIt
    Knows: Node.js, JS + JQuery, HTML + CSS, Object Pascal, PHP, WQL/SQL

    PC: 750D / 16GB / 256GB + 750GB / GTX780 / 4670K / Z87X-UD4H | Laptop: 8GB / 120GB + 480GB / GTX765M / 4700MQ

  10. #10
    They tried and failed. Modern CPU/GPU design is really hard to get into. The chips are extremely complex and build upon years and years of existing designs. Nvidia/AMD don't design GPUs from scratch; they build them upon their existing design patterns/modules/architectures. Intel simply came to the game too late: despite having lots of resources, they are just not able to catch up with the years of experience Nvidia and AMD have.

    This is also a reason why Nvidia doesn't offer x86 CPUs. They really want to, but they are unable to close the gap to Intel/AMD. And even AMD is seriously behind Intel now, simply because Intel had 'luck' with their CPU design. The modern Core architecture is built upon (wait for it) Pentium M, which is a derivative of the 1995 Pentium Pro (P6). Yes, it's that old. They had to scrap the Pentium 4 NetBurst architecture because it was a dead end for them and go back to Pentium Pro tech.

    All that said, the current Intel IGP iteration is pretty amazing. And if Haswell indeed offers performance levels close to a 650M, it will open up a whole new era for mobile computing.

  11. #11
    Quote Originally Posted by Synthaxx View Post
    Maybe those chips are well suited to gaming, but that's more a side effect of improvements in the architecture than any direct focus on gaming. It just happens that gaming is big business and their chips coincided with what gamers wanted. Also, multithreading isn't the future any more; it's the present (even the past, if you want). You could say parallel processing, and using GPUs for general processing, is the present, but yeah, multithreading isn't big any more; it's standard.

    Roll back to 2005 or so, and you'll see that multithreading was the big thing everyone should be doing. It was possible, but it was still complicated. Roll on to 2006, and it was 64-bit. 2008, standardisation of architectures (e.g. Hyper-Threading not requiring special code to take advantage of). Two of those are now "old news"; the other has sort of died out for all but the largest applications (64-bit, mainly reserved for media programs in the consumer sector and scientific programs in education). The last one in that list also applies to GPUs. I remember when details were revealed about NVIDIA's DX10 cards and their 384-bit bus. At the time, I was on another forum, and it baffled some of the regulars for a while.

    It's easier than ever to make a program multithreaded. Even a blank program built with tools from the past 2-3 years will create multiple threads. For example, my own programs spawn 10 or so threads of their own the moment I start to work on them. If I want to run my own code in a thread, know how many lines of additional code it takes? Two. There are limitations on what you can do in threads (don't mess with the interface or with things being run in other threads), which might mean you need to rewrite your code and split it into multiple routines, but two lines to multithread code is all it takes. Even then, these lines are very short: one is simply the opener for the new code, and the other is only a procedure definition in itself. This isn't even an official implementation, either; it's a third-party project that someone was good enough to make simple. It's just no big deal any more to make something multithreaded. There's only so far you can go with threads before the performance you gain becomes too negligible to warrant another thread (hence why MMOs are typically viewed with a level of contempt).

    As for Intel picking up 3DFX, I dare say it'd simply have meant their IGPs coming about earlier and the market moving to mobile a lot sooner, without a thought for the desktop market. Sure, we'd have hardware, but I think both hardware and software would be a lot different, and much research would have been skipped in the process.
    I agree that multi-threaded programming is the present, but there are still plenty of applications out there that don't take advantage of it the way they could. I'm not saying Intel developed their latest processors based on gamer needs, though I can see how my statement would have implied that. I don't believe either company produces their CPUs with a direct focus on gamers. They try to produce the best-performing, most efficient CPUs they can, and when they succeed, that has the side effect of being highly beneficial to the gamer market.

    I don't think that, had Intel picked up 3DFX back in '04 (I think it was), they would simply have used it to produce IGPs. It was too big a brand in the gaming and discrete card market. That's just a matter of opinion, though. My "what if" was more along the lines of: what if Intel had done with 3DFX what AMD did with ATI? It would certainly have been interesting.

  12. #12
    Quote Originally Posted by mafao View Post
    They tried and failed. Modern CPU/GPU design is really hard to get into. The chips are extremely complex and build upon years and years of existing designs. Nvidia/AMD don't design GPUs from scratch; they build them upon their existing design patterns/modules/architectures. Intel simply came to the game too late: despite having lots of resources, they are just not able to catch up with the years of experience Nvidia and AMD have.

    This is also a reason why Nvidia doesn't offer x86 CPUs. They really want to, but they are unable to close the gap to Intel/AMD. And even AMD is seriously behind Intel now, simply because Intel had 'luck' with their CPU design. The modern Core architecture is built upon (wait for it) Pentium M, which is a derivative of the 1995 Pentium Pro (P6). Yes, it's that old. They had to scrap the Pentium 4 NetBurst architecture because it was a dead end for them and go back to Pentium Pro tech.

    All that said, the current Intel IGP iteration is pretty amazing. And if Haswell indeed offers performance levels close to a 650M, it will open up a whole new era for mobile computing.
    Are you sure they failed with their GPU? Maybe they just saw the market going in a different direction and had what they needed to proceed, as you say with Haswell. Without any hard figures to back myself up, I'd still say that if Haswell really is 650M-worthy, that's the majority of the market snagged while others are still trying to sell you an extra board.

  13. #13
    At this point, I think that if Intel were AT ALL to go the GPU route, they'd be better off giving Nvidia's people a blank check. However, if they did this, it might just cause the government's head to explode from the high volume of "MONOPOLY" being yelled by their people. An Intel-Nvidia combo would probably put AMD so far under, so quickly, that they wouldn't be able to explain it.

  14. #14
    Quote Originally Posted by Afrospinach View Post
    Are you sure they failed with their GPU?
    If Larrabee had offered competitive gaming performance, Intel would surely have released it. Nope, it was pretty much a failure. The Wikipedia article actually has a very nice history of the project: http://en.wikipedia.org/wiki/Larrabe...oarchitecture)
