Tech Focus: Where Now For PC Graphics Hardware?
Digital Foundry on the challenges facing NVIDIA and AMD in a console-led development environment
The arrival of the Xbox 360 and PlayStation 3 redefined the landscape of games development. Cutting-edge gaming software and technology - once the preserve of the PC alone - found a new home: developers that had once pushed back the frontiers of PC graphics and gameplay soon realised that the consoles presented a more profitable, viable market. Creators like Infinity Ward made the switch early, but as the generation progressed, even PC stalwarts like Crytek and id Software transitioned across to console-led development.
With PC gaming becoming less relevant, graphics card manufacturers were left facing a big problem: how to make their pricey enthusiast products appealing to the core audience when the lion's share of games were console ports. Running these games at ever-increasing frame-rates and resolutions could only go so far - hardcore gamers wanted more, but the development of impactful PC-exclusive features could not be justified by the returns. Meanwhile, PC graphics technology continued to improve by leaps and bounds, but standard-setting releases like the original Crysis and Doom 3 were becoming ever rarer. There's a strong argument that PC graphics tech was far more powerful than it really needed to be, with precious little to show for the mammoth levels of rendering might on tap.
Entry-level enthusiast GPUs of a few years ago effortlessly outclass console graphics tech, while modern-day PC hardware is generations beyond. Next-gen is here now, but this hardware is somewhat under-utilised.
Make no mistake - even the entry-level enthusiast graphics cards of a few years ago effortlessly annihilate the RSX and Xenos graphics cores in the current generation consoles. The venerable NVIDIA 8800 GT - for years an enthusiast favourite, and still capable of running most new titles adequately - has the consoles beaten in terms of available RAM, bandwidth, stream processors and virtually any other metric you would care to mention. It can run the original Crysis in all its unoptimised glory at a fair old lick, even at 1080p - something that the consoles can't match, even running on the more streamlined CryEngine 3.
That was then. This is now. The modern-day equivalents of the 8800 GT - the GTX 560s and Radeon HD 6870s of the world - are generations beyond, conservatively offering two to three times the performance. But with this level of power now in the mainstream, are we genuinely seeing anything that capitalises on the raw capabilities of these mid-level cards? Here's a comparison of Batman: Arkham City on Xbox 360 alongside a fully maxed-out PC DirectX 11 version of the game.
Despite the enormous differences between console and PC architecture, the sad reality for PC gamers is that most releases produced today are clearly targeted at the console audience. PC represents the ability to run with improved resolutions and much higher frame-rates, but the fact is that the base assets of the game are designed with the limitations of consoles in mind and the core rendering paradigms we've seen adopted this generation (deferred lighting being a prime example) are all about extracting more performance from the fixed architecture of the consoles. Any advantages to PC gamers are mostly a bonus.
Environment, object and character detail is all based around the capabilities of Xbox 360 and PlayStation 3, and in most cases texture quality looks great at 720p - but somewhat suspect when running at a higher resolution (though as it happens, Arkham City is one of the few games that does scale up beautifully, thanks to higher-detail PC art). In many cases, the enormous vertex-processing power of the graphics cards remains mostly untapped - often relegated merely to less aggressive LOD (level of detail) handling, processing faraway detail that many probably won't notice anyway. PC graphics have become synonymous with higher precision - FP16 framebuffers, soft shadows, superior ambient occlusion algorithms - but there's definitely a law of diminishing returns here. Embellishing console visuals often requires far more GPU processing power, but it's not being translated into a tangibly superior gameplay experience.
Here's a good example. In this video we're comparing the various graphical modes found in Crysis 2 on PC, ranging from the basic high quality setting - equivalent to the console versions - to the more extreme modes. The impact on performance is substantial, but the question remains: is all that power translating into a game that's tangibly better than the console experience? For Crysis 2, the extra bling is of course welcome, but the fact is that it's still the increased resolution and frame-rate advantages that are the major reason for playing the game on PC, alongside the obvious interface benefits presented by mouse and keyboard.
AMD and NVIDIA have perhaps recognised that the advantages of their advanced hardware are somewhat under-utilised, coming up with some interesting new PC-exclusive features in the form of Eyefinity and 3D Vision. Both are niche experiences, but lucrative in their own ways, mostly because of a hardware tie-in.
Eyefinity allows gamers to connect multiple screens to their graphics cards and to use the expanded real estate to generate much larger views: why limit gameplay to 1920x1080 when a triple-screen 5760x1080 set-up provides a true panoramic "surround" view? AMD can sell higher-end cards (or multiples in a CrossFire set-up) and the display vendors win as more screens are sold. AMD is also busy deploying its new HD3D stereoscopic 3D rendering set-up, but it's safe to say that this is one area where its rival has taken the lead.
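To put the triple-screen set-up into numbers, here's a back-of-the-envelope sketch - the arithmetic is our own illustration, based simply on the resolutions quoted above:

```python
# Pixel counts for a single 1080p screen versus a triple-screen
# 5760x1080 "surround" configuration (figures from the article).
single_w, single_h = 1920, 1080
surround_w, surround_h = 5760, 1080  # three 1080p panels side by side

pixels_single = single_w * single_h        # 2073600 pixels per frame
pixels_surround = surround_w * surround_h  # 6220800 pixels per frame

# The surround view asks the GPU to fill three times the pixels per frame.
print(pixels_surround / pixels_single)  # 3.0
```

Triple the pixels per frame at the same frame-rate means roughly triple the fill-rate demand - which is exactly why the feature sells higher-end cards and CrossFire pairings.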
Eyefinity and 3D Vision show how AMD and NVIDIA are leveraging their graphics hardware, but PC features and performance may well be 'unlocked' by next-gen console development.
NVIDIA's 3D Vision is arguably an even more ambitious exercise than Eyefinity - the vendor partnering with manufacturers of 120Hz displays to produce a state-of-the-art 3D gaming system. True stereoscopy requires the ability to process geometry twice and to cope with "painting" twice as many pixels - in an age where games are driven by console-level assets, that's a walk in the park, even for NVIDIA's mid-range products such as the GTX 560. It's reckoned that NVIDIA sold around half a million pairs of 3D Vision glasses - a modest success, but enough to see the company launch its new 3D Vision 2 initiative, with new glasses and "Lightboost" technology in advanced new displays that eliminates most of the loss of brightness inherent in active shutter glasses technology.
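The "twice as many pixels" claim can be sketched with some simple arithmetic - an illustration of our own, assuming a 1080p panel driven by active shutter glasses at 120Hz:

```python
# Rough cost model for active-shutter stereo 3D: a 120Hz panel alternates
# left-eye and right-eye images, so the GPU must deliver 60 complete
# frames per second for each eye. (Our own simplification, not NVIDIA's
# published figures.)
refresh_hz = 120
eyes = 2
frames_per_eye = refresh_hz // eyes  # 60 frames per second, per eye

width, height = 1920, 1080
pixels_per_second_mono = width * height * 60           # ordinary 60fps gaming
pixels_per_second_stereo = width * height * refresh_hz  # both eyes at 120Hz

print(frames_per_eye)                                     # 60
print(pixels_per_second_stereo / pixels_per_second_mono)  # 2.0
```

Double the pixel throughput is trivial for a mid-range DX11 card pushing console-level assets - but, as the article notes for PS3, it is a real burden for fixed console hardware already running near its limits.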
The fact that Sony has aggressively pushed 3D has only helped NVIDIA's cause - unlike the PS3, there's more than enough raw power in graphics card technology to bring home all the benefits of 3D without the geometry and fill-rate complications that have seen a number of visual downgrades on PS3 3D titles. But these are niche exercises designed to get the most out of top-tier products aimed at gamers with money to invest in equipment - where does this leave the typical core PC gamer?
Details remain sketchy about the form next-gen consoles will take, but it's basically an open secret that Microsoft's design heavily utilises DirectX 11 architecture. This is a double-edged sword. New, powerful, cheap hardware would be a challenge to PC's supremacy in offering the top-tier gaming experience. Alternatively, a DX11 focus could revitalise the platform on which it all began.
DirectX 11 is a game-changer - the API concentrates on the advantages of parallelism and introduces a wealth of new technologies: tessellation allows far more detailed geometry to be created on the fly, for example, while DirectCompute shaders open up GPU power to developers for whatever tasks they want. Post-processing effects like anti-aliasing and motion blur are good candidates, but the sky is the limit here.
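To illustrate why on-the-fly tessellation is so attractive, here's a toy calculation - our own simplification (real Direct3D 11 tessellation factors can be fractional and vary per patch edge): uniformly subdividing a triangle patch at integer factor N yields roughly N x N smaller triangles, so a coarse mesh stored in memory can expand into far denser geometry on the GPU without inflating the base assets.

```python
# Uniform tessellation arithmetic (illustrative simplification): a triangle
# patch subdivided at integer factor N produces N*N smaller triangles.
def tessellated_triangles(patches, factor):
    """Approximate triangle count after uniform tessellation at `factor`."""
    return patches * factor * factor

base = 10_000  # coarse patches in a console-sized mesh
print(tessellated_triangles(base, 1))  # 10000 - the raw mesh as stored
print(tessellated_triangles(base, 8))  # 640000 - from the same source data
```

The geometry is amplified on-chip each frame, which is precisely the kind of work the fixed-function console GPUs of 2005/2006 simply cannot do.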
Unfortunately, few developers have completely embraced DX11, and where it has been deployed, it is once again being used for iterative improvements upon existing renderers, rather than as the basis for the core engine tech. It's another example of how in the majority of cases, PC functionality is being used for embellishments upon an existing console engine ported across to the PC. Indeed, in many cases, there is no support for DX11 at all.
The brave exception to the rule is DICE's Battlefield 3, built from the ground up with next-gen and DX11 in mind, and a phenomenal example of how impressive the API is. What is especially noteworthy about Battlefield 3 is that the game doesn't rely upon top-end video cards for a next-gen level experience: a quad-core CPU in combination with a mid-level DX11 GPU can produce superb results. This is an example of how DX11 isn't just about performance-sapping extra features - when used as intended, it actually delivers significant performance improvements. BF3 also embraces the compute shaders introduced with DX11, in this case used to calculate its tile-based deferred lighting - an effect that looks spectacular.
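The tile-based idea can be sketched on the CPU in a few lines - a toy illustration of the light-culling concept only, not DICE's actual Frostbite 2 compute shader code (the tile size, data layout and names here are our own assumptions): the screen is split into small tiles, and each tile builds a list of only the lights that can touch it, so shading cost scales with lights-per-tile rather than every light being evaluated at every pixel.

```python
# Toy CPU-side sketch of tile-based light culling (illustrative only).
TILE = 16  # tile size in pixels - a common choice, assumed here

def cull_lights(width, height, lights):
    """lights: list of (x, y, radius) in screen space.
    Returns {(tile_x, tile_y): [indices of lights touching that tile]}."""
    tiles = {}
    for ty in range(0, height, TILE):
        for tx in range(0, width, TILE):
            hits = []
            for i, (lx, ly, r) in enumerate(lights):
                # Closest point on the tile rectangle to the light centre...
                cx = min(max(lx, tx), tx + TILE)
                cy = min(max(ly, ty), ty + TILE)
                # ...is within the light's radius? Then the tile is affected.
                if (lx - cx) ** 2 + (ly - cy) ** 2 <= r * r:
                    hits.append(i)
            tiles[(tx // TILE, ty // TILE)] = hits
    return tiles

# A 64x32 'screen' with two small lights: each light only lands in the
# per-tile lists of the tiles it actually reaches.
tiles = cull_lights(64, 32, [(8, 8, 10), (56, 24, 6)])
print(tiles[(0, 0)])  # [0]
print(tiles[(3, 1)])  # [1]
```

On DX11 hardware the same culling runs in a compute shader with one thread group per tile, which is what makes hundreds of dynamic lights affordable.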
BF3 is a sign of things to come - on both PC and next-gen console. On the one hand, movement towards a similar style of development could see an enormous range of existing gaming PCs suddenly become viable as next-gen console alternatives - the leap from Xbox 360 and PlayStation 3 is that pronounced. But on the other hand, the sheer popularity and software support behind the new consoles, plus comparatively cheap price points, may well call into question the continued relevance of mainstream PC gaming. How the landscape will change, and how AMD and NVIDIA will react, will be fascinating...