The arrival of the Xbox 360 and PlayStation 3 redefined the landscape of games development. Cutting-edge gaming software and technology - once the preserve of the PC alone - found a new home: developers that had once pushed back the frontiers of PC graphics and gameplay soon realised that the consoles presented a more profitable, viable market. Studios like Infinity Ward made the switch early, but as the generation progressed, even PC stalwarts like Crytek and id Software transitioned across to console-led development.
With PC gaming becoming less relevant, graphics card manufacturers were left facing a big problem: how to make their pricey enthusiast products more appealing to the core audience when the lion's share of games were console ports. Running these games at ever-increasing frame-rates and resolutions could only go so far - hardcore gamers wanted more, but the development of impactful PC-exclusive features could not be justified by the returns. Meanwhile, PC graphics technology continued to improve by leaps and bounds, but standard-setting releases like the original Crysis and Doom 3 were becoming ever rarer. There's a strong argument that PC graphics tech was far more powerful than it really needed to be, with precious little to show for the mammoth levels of rendering might on tap.
Entry-level enthusiast GPUs of a few years ago effortlessly outclass console graphics tech, while modern-day PC hardware is generations beyond. Next-gen is here now, but this hardware is somewhat under-utilised.
Make no mistake - even the entry-level enthusiast graphics cards of a few years ago effortlessly annihilate the RSX and Xenos graphics cores in the current-generation consoles. The venerable NVIDIA 8800 GT - for years an enthusiast favourite, and still capable of running most new titles adequately - has the consoles beaten in terms of available RAM, bandwidth, stream processors and virtually any other metric you would care to mention. It can run the original Crysis in all its unoptimised glory at a fair old lick, even at 1080p - something that the consoles can't match, even running on the more streamlined CryEngine 3.
That was then. This is now. The modern-day equivalents of the 8800 GT - the GTX 560s and Radeon HD 6870s of the world - are generations beyond that, conservatively offering two to three times the performance. But with this level of power now in the mainstream, are we genuinely seeing anything that capitalises on the raw capabilities of these mid-level cards? Here's a comparison of Batman: Arkham City on Xbox 360 alongside a fully maxed-out PC DirectX 11 version of the game.
Despite the enormous differences between console and PC architecture, the sad reality for PC gamers is that most releases produced today are clearly targeted at the console audience. The PC offers the ability to run at higher resolutions and much higher frame-rates, but the fact is that the base assets of the game are designed with the limitations of consoles in mind, and the core rendering paradigms we've seen adopted this generation (deferred lighting being a prime example) are all about extracting more performance from the fixed architecture of the consoles. Any advantages to PC gamers are mostly a bonus.
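To see why deferred lighting suits fixed console hardware, consider a back-of-the-envelope cost model - a hypothetical sketch of our own, not engine code, with made-up numbers. Forward rendering shades every light for every fragment drawn, so overdraw multiplies the lighting bill; deferred approaches pay the overdraw cost only once while filling a G-buffer, then light each visible pixel.

```cpp
#include <cassert>
#include <cstdint>

// Illustrative cost model, not engine code. All figures are hypothetical.
// Forward rendering: every fragment an object covers is shaded against
// every light, so overdraw multiplies the per-light work.
std::uint64_t forward_cost(std::uint64_t pixels, std::uint64_t lights,
                           std::uint64_t overdraw) {
    return pixels * overdraw * lights;
}

// Deferred lighting: one geometry pass writes surface attributes to a
// G-buffer (paying the overdraw cost once), then a lighting pass shades
// each visible pixel once per light.
std::uint64_t deferred_cost(std::uint64_t pixels, std::uint64_t lights,
                            std::uint64_t overdraw) {
    return pixels * overdraw + pixels * lights;
}
```

With 720p (921,600 pixels), 16 lights and 3x overdraw, the forward path shades some 44 million fragments against the deferred path's 17.5 million - exactly the kind of fixed, predictable saving that console renderers were built around.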
Environment, object and character detail is all built around the capabilities of Xbox 360 and PlayStation 3, and in most cases texture quality looks great at 720p - but somewhat suspect when running at a higher resolution (though as it happens, Arkham City is one of the few games that does scale up beautifully thanks to higher-detail PC art). In many cases, the enormous vertex-processing power of the graphics cards remains mostly untapped - often relegated merely to less aggressive LOD (level of detail) handling, processing faraway detail that many probably won't notice anyway. PC graphics have become synonymous with higher precision - FP16 framebuffers, soft shadows, superior ambient occlusion algorithms - but there's definitely a law of diminishing returns here. Embellishing console visuals often requires far more GPU processing power, but it's not being translated into a tangibly superior gameplay experience.
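"Less aggressive LOD handling" typically amounts to pushing the distance thresholds out so that higher-detail meshes stay on screen for longer. Here's a minimal sketch of distance-based LOD selection - the function name and threshold values are our own illustration, not any particular engine's API:

```cpp
#include <cassert>
#include <vector>

// Pick a mesh detail level from camera distance: each threshold crossed
// steps down to a coarser mesh (higher index = less detail). A PC-quality
// preset would simply use larger threshold values than a console one.
int select_lod(float distance, const std::vector<float>& thresholds) {
    int lod = 0;
    for (float t : thresholds) {
        if (distance > t) {
            ++lod;  // past this threshold: drop to the next, coarser level
        } else {
            break;
        }
    }
    return lod;
}
```

With illustrative thresholds of {10, 50, 200} metres, an object 30 metres away renders at LOD 1; widening those thresholds keeps LOD 0 geometry visible further out - extra vertex work spent on detail most players won't notice.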
Here's an example that demonstrates the point. In this video we're comparing the various graphical modes found in Crysis 2 on PC, ranging from the basic High quality setting - equivalent to console - to the more extreme modes. The impact on performance is substantial, but the question remains: is all that power translating into a game that's tangibly better than the console experience? For Crysis 2, the extra bling is of course welcome, but the fact is that it's still the increased resolution and frame-rate advantages that are the major reason for playing the game on PC, alongside the obvious interface benefits presented by mouse and keyboard.