I think, though, that in the next couple of years multithreading is going to be very important to gaming and the computer industry as a whole. Next-generation consoles are coming and will usher in a new era of graphics technology for the TV gamer market. Studios will start optimizing for processor efficiency instead of what they do now on consoles, which is pushing the CPU to its fullest just to render the graphics properly; that isn't efficient for the processor itself. Advanced lighting, shading, and textures are impossible on current-generation consoles (Wii U excluded), and physics doesn't work well at all because the systems can't process it. With the next-generation systems, as on PCs, physics and advanced graphics features will be available on all platforms. I think you will see better use of PC technology, since consoles will be competitive with it for a few years.
PCs aren't even optimized to run effectively, since so many games are designed with consoles in mind and then ported to the PC. I think the opposite will happen with the next generation: the PC will still be more powerful, but the technology in the consoles will all be pretty similar, so developers will design for the computer first and then port as needed. In terms of cost effectiveness, having all three consoles running on x86 next generation is going to help as well.
When the new consoles come out, they will be based on year-old midrange PC hardware because of long development times and the need to keep prices and cooling at a reasonable level. My bet is they will be about on par with an A10-5800 processor and a Radeon 7670-series GPU, not much higher than that.
Never going to log into this garbage forum again as long as calling obvious troll obvious troll is the easiest way to get banned.
Trolling should be.
The current expectation is that, with both new consoles being x86-based, games will be developed for PC first and then ported down. However, in the case of the PS4, with it running Linux and having direct hardware access, a port script could be made to easily port PS4 games to OpenGL for Linux-based computers.
Not gonna be that easy, because Linux doesn't really give games any more direct hardware access than Windows does.
That would be really nice, but it remains to be seen. Porting code nowadays is trivial anyway, since people use cross-platform engines like Unity to make the games. The problem is texture quality, which is kept low for the consoles' limited video RAM. Making high-res textures and scaling them down is a lot more expensive than making low-res textures and just keeping on feeding PC players shit like they do now.
Hmm, this is going to be slightly off-topic, but I'd like to share this video on how to make your CPU run at 100% to get higher GPU loads in Crysis 3. Check nocturnal's pics of the CPU loads, where you can see the CPU isn't hitting its limit yet.
It's only going to be useful if you run cards like a GTX 690 in SLI or a Titan in SLI. With a single 680 this video isn't needed, but it's still worth checking out. It seems he was able to push an extra 12-15% on three 680s.
Background: AFAIK, a 3770K provides 16 PCIe 3.0 lanes from the CPU (USB3/SATA are handled by the chipset, which supplies its own separate PCIe 2.0 lanes). For SLI/CF the CPU lanes are split as follows: 16x for one card, 8x/8x for two cards, or 8x/4x/4x for three cards (depends on motherboard). In addition, PCIe 3.0 8x has the same bandwidth as PCIe 2.0 16x.
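As a rough sanity check on that last claim, here is the link-bandwidth arithmetic sketched in Python. The per-lane figures are the commonly quoted nominal effective rates (about 500 MB/s per lane for PCIe 2.0 and about 985 MB/s for PCIe 3.0); treat them as assumptions, not measurements:

```python
# Nominal effective per-lane bandwidth in MB/s (one direction).
# These are the commonly quoted approximate figures, not measured values.
PER_LANE_MBPS = {"2.0": 500, "3.0": 985}

def link_bandwidth(gen: str, lanes: int) -> int:
    """Total one-direction bandwidth in MB/s for a PCIe link."""
    return PER_LANE_MBPS[gen] * lanes

print(link_bandwidth("2.0", 16))  # 8000 MB/s
print(link_bandwidth("3.0", 8))   # 7880 MB/s -- nearly the same as 2.0 x16
```

So a 3.0 x8 slot gives almost exactly the bandwidth of a 2.0 x16 slot, which is why the comparison below treats them as equivalent.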
http://i.imgur.com/4sYbB.jpg
The image linked above is a test done by /r/gamingpc comparing PCIe 2.0 16x/16x (equivalent to PCIe 3.0 8x/8x) against PCIe 3.0 16x/16x for two-card SLI. In the bandwidth-limited case (PCIe 2.0 16x/16x) there's some reduction in performance at 1920x1080, albeit marginal. The performance penalty is much worse at 5760x1080. The YouTube vid you linked seemed to be using 1440p, which sits between 1920x1080 and 5760x1080 in pixel count.
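For reference, the pixel counts behind that comparison are simple arithmetic; a quick sketch shows where 1440p actually lands relative to the two tested resolutions:

```python
# Pixel counts for the resolutions in the comparison above.
resolutions = {
    "1920x1080": 1920 * 1080,   # 2,073,600
    "2560x1440": 2560 * 1440,   # 3,686,400
    "5760x1080": 5760 * 1080,   # 6,220,800
}

# Arithmetic midpoint of the two tested resolutions.
midpoint = (resolutions["1920x1080"] + resolutions["5760x1080"]) // 2

print(midpoint)                    # 4,147,200
print(resolutions["2560x1440"])    # 3,686,400 -- a bit below the midpoint
```

So 1440p is somewhat below the halfway mark in raw pixels, putting it between the marginal-penalty case and the severe-penalty case.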
We must further consider that the YouTuber used 3-way SLI instead of 2-way SLI, splitting the lanes across three cards. In the following link, the same bandwidth bottleneck is observed.
http://www.evga.com/forums/tm.aspx?m=1537816&mpage=1
I think you're making a big mistake. A lane bottleneck means you can never reach higher GPU loads, but in his case he did reach higher GPU loads. The lanes aren't the bottleneck here; the CPU was, which is why it was running at 100% while the GPUs were chilling at 60% load.
He can use boards like the Maximus V Extreme, which supports quad SLI at 8x/8x/8x/8x thanks to the PLX chip. The PLX chip adds lag, or to be more precise it increases frame render latency, and that's a separate issue from the loads again.
Does a GTX 690 in a single PCI Express slot run at 16x or 8x? I'm not sure about this. If it runs at x16 and you add another 680, you'd have 8x (690) / 8x (680), so I don't see the problem here.
Yes he is using a Maximus V extreme. http://www.youtube.com/watch?v=YUKathtsFEw
So your bandwidth-limitation argument is invalid.
---------- Post added 2013-03-04 at 06:57 PM ----------
Seems like it's only possible with a 6990 + 6970. So no, I guess.