
Tech Evolution: How Devs Pushed Consoles To Their Limits

Digital Foundry on the major tech advances of this generation, and how they will transition into the next

The story of each and every console generation is one of evolution - ever more complex, visually exciting games coming to market year-on-year. The benefit of fixed hardware architecture is that game-makers get to know the machines they are working with and are able to squeeze out more performance with each successive project. The current seventh generation of consoles has run longer than most - and the technological advances we've seen over the last seven years have been truly remarkable.

"From a tech perspective at least, it's hard to find any actual 'bad games' in AAA development at this time - an endorsement of quality standards in the industry and proof that the current generation is now mature."

A quick tour of the major titles on site at the Eurogamer Expo last week was testament to this: in terms of the technological nuts and bolts at least, it was virtually impossible to find any kind of "bad game" on the show floor whatsoever.

Flash back to late 2004/2005 and the twilight of the PS2/Xbox heyday, and there was nothing like the consistency in technical excellence seen in today's AAA market. Of course, the games, the budgets - and the industry itself - are bigger than they were back then, and equally of note is the increased importance of events such as GDC and SIGGRAPH, where we see developers sharing technologies, workflows and philosophies.

But it's interesting to see the emergence of a number of technologies, initially defined by the limitations of the current-gen machines, that will continue to evolve as we move into the era of the next Xbox and PlayStation 4.

One of the most impactful changes we see in the wave of current and upcoming games is the shift to what's known as deferred rendering. The technology itself isn't especially new - a vintage 2001 Shrek game for the original Xbox, developed by a North American division of DICE, is thought to be the first console title to implement it. Variations on the technique appeared in Xbox 360 launch title Perfect Dark Zero, followed by a more impressive roll-out in GTA 4, before it really hit its stride in Guerrilla Games' Killzone 2. The approach has become increasingly popular because a vast range of light sources can be added to any given scene without anything like the performance penalty of traditional "forward" rendering, where each light source is calculated in turn - with rendering load increasing accordingly - rather than the scene being lit as a whole.

Deferred rendering comes in all sorts of flavours but is now a staple component of the developer's rendering arsenal. While it's found favour in modern AAA games, developers like DICE and Rare were using it a very long time ago, and Xbox 360 launch title Perfect Dark Zero is thought to be using the light pre-pass variant of the technique.
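For the curious, here's a minimal CPU-side sketch in C++ of how the deferred approach splits the work. It's purely illustrative - every structure, name and value is invented for the example, and it bears no relation to real GPU code. The key property is that the lighting pass reads only the G-buffer, so its cost scales with pixels multiplied by lights rather than with scene complexity:

```cpp
#include <array>
#include <cmath>
#include <cstdio>
#include <vector>

constexpr int kWidth = 4, kHeight = 4; // tiny "screen" for the example

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

struct GBufferTexel {
    Vec3 normal;  // surface normal captured in the geometry pass
    Vec3 albedo;  // surface colour captured in the geometry pass
};

struct Light { Vec3 direction; Vec3 colour; };

int main() {
    // Geometry pass: in a real renderer the scene is rasterised once and
    // per-pixel attributes are stored; here we simply fill the buffer.
    std::array<GBufferTexel, kWidth * kHeight> gbuffer;
    for (auto& texel : gbuffer) {
        texel.normal = {0.0f, 1.0f, 0.0f};
        texel.albedo = {0.5f, 0.5f, 0.5f};
    }

    std::vector<Light> lights = {
        {{0.0f, 1.0f, 0.0f}, {1.0f, 0.9f, 0.8f}},
        {{0.7f, 0.7f, 0.0f}, {0.2f, 0.2f, 0.6f}},
    };

    // Lighting pass: cost is pixels * lights, regardless of how much
    // geometry was drawn - the key win over forward rendering, where every
    // object would be re-shaded for each light that touches it.
    for (int i = 0; i < kWidth * kHeight; ++i) {
        Vec3 result = {0, 0, 0};
        for (const auto& light : lights) {
            float n_dot_l = std::fmax(0.0f, dot(gbuffer[i].normal, light.direction));
            result.x += gbuffer[i].albedo.x * light.colour.x * n_dot_l;
            result.y += gbuffer[i].albedo.y * light.colour.y * n_dot_l;
            result.z += gbuffer[i].albedo.z * light.colour.z * n_dot_l;
        }
        if (i == 0) std::printf("pixel 0: %.2f %.2f %.2f\n", result.x, result.y, result.z);
    }
}
```

The trade-off, as the next section explains, is memory: those fat per-pixel attribute buffers are exactly what makes conventional anti-aliasing so expensive in deferred engines.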

The emergence of deferred rendering has also brought about innovation elsewhere. The technique is heavy on memory, and the current-gen consoles simply don't have the bandwidth or the RAM to handle anti-aliasing in fully deferred game engines, leading to the rise of post-process AA. Instead of generating multiple samples during the rendering process as per traditional MSAA, post-process solutions tend to treat the finished frame as a flat 2D image and process it accordingly. While edge-detect and blurring solutions have been commonplace across the generation, it was SCEE's work with MLAA - first seen in the brilliant God of War 3 - that convinced developers that post-process anti-aliasing was a viable way forward.
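As a rough illustration of what "treating the frame as a flat 2D image" means in practice, here's a toy C++ filter in the spirit of these techniques - it is neither MLAA nor FXAA, just a hypothetical luminance-based edge blur written for this article:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Pixel { float r, g, b; };

// Perceptual luminance estimate - edges are detected in brightness, which
// is cheaper than comparing full colour values.
static float luma(const Pixel& p) {
    return 0.299f * p.r + 0.587f * p.g + 0.114f * p.b;
}

void postProcessAA(std::vector<Pixel>& frame, int w, int h, float threshold) {
    std::vector<Pixel> src = frame;  // read from a copy, write in place
    for (int y = 1; y < h - 1; ++y) {
        for (int x = 1; x < w - 1; ++x) {
            const Pixel& c = src[y * w + x];
            const Pixel& l = src[y * w + x - 1];
            const Pixel& r = src[y * w + x + 1];
            const Pixel& u = src[(y - 1) * w + x];
            const Pixel& d = src[(y + 1) * w + x];
            // Contrast against the 4-neighbourhood flags a likely edge.
            float contrast = std::max({std::fabs(luma(c) - luma(l)),
                                       std::fabs(luma(c) - luma(r)),
                                       std::fabs(luma(c) - luma(u)),
                                       std::fabs(luma(c) - luma(d))});
            if (contrast < threshold) continue;  // smooth area: leave alone
            // Blend edge pixels towards the neighbourhood average.
            frame[y * w + x] = {
                (c.r + l.r + r.r + u.r + d.r) / 5.0f,
                (c.g + l.g + r.g + u.g + d.g) / 5.0f,
                (c.b + l.b + r.b + u.b + d.b) / 5.0f,
            };
        }
    }
}
```

Real MLAA and FXAA are considerably smarter - reconstructing edge shapes and blending along them rather than naively averaging - but the principle is the same: no extra samples, no extra framebuffer memory, just a cheap full-screen pass at the end of the frame.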

"Innovations such as post-process anti-aliasing are great examples of how developers have thought out of the box in extracting as much GPU performance as possible from current-gen console hardware."

NVIDIA's subsequent work with FXAA proved to be the more popular approach for Xbox 360 developers, but it has cropped up frequently on high-profile PS3 titles in recent times too. We're now at the point where most titles on current-gen platforms eschew multi-sample anti-aliasing in favour of FXAA, simply for the RAM and GPU performance benefits.

Deferred rendering is here to stay - it's a key component of the cross-generational Frostbite 2 engine, for example - so it's safe to say that post-process anti-aliasing is far more than just a sticking plaster for the lack of power and memory in the current-gen consoles. NVIDIA, AMD and other independent developers are already working on more advanced next-gen solutions based on the same principles, and while current-gen results can be mixed, more advanced algorithms in concert with 1080p resolution should produce some impressive results - something we're already seeing on PC now.

Dynamic resolution is another interesting technique being used by several developers. While rendering below native 720p is as old as the first Xbox 360 launch titles, studios are now adjusting pixel count on the fly in order to free up precious graphical resources for more demanding scenes. First seen this generation in WipEout HD in order to help maintain a 1080p60 update, a number of games have since implemented it - Evolution Studios with MotorStorm Apocalypse, and id Software with Rage, to name just two. More recently, Ninja Gaiden 3 and Tekken Tag Tournament have both used it, and at the Eurogamer Expo, Metal Gear Rising also appeared to be utilising the idea. For GPU-bound games in particular it's an interesting technique, and we'll almost certainly see it utilised on next-gen consoles too. Indeed, Ninja Gaiden 3 on Wii U already seems to be using it, despite the new Nintendo machine enjoying a boost in GPU power over both Xbox 360 and PlayStation 3.

We're slowly seeing more games using dynamic resolution. When engine performance is weighed down by intense effects, resolution switches down on the fly to lower GPU load. First used this gen in WipEout HD, it is now more widely implemented - Ninja Gaiden 3 being a good example, where we even appear to be seeing the technique used on the upcoming Wii U version.
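Here's a simple sketch in C++ of the kind of feedback loop involved - the thresholds, step sizes and resolutions are all invented for illustration, and a real implementation would typically be driven by GPU timing queries rather than CPU-side frame times:

```cpp
#include <algorithm>
#include <cstdio>
#include <initializer_list>

// Hypothetical dynamic-resolution controller: if the previous frame ran
// long, shrink the horizontal render resolution before the next one; if
// there is headroom, step back up towards native.
class DynamicResolution {
public:
    int update(double lastFrameMs) {
        const double targetMs = 16.6;               // 60fps budget
        if (lastFrameMs > targetMs * 1.05) {
            scale_ = std::max(0.5, scale_ - 0.05);  // shed GPU load
        } else if (lastFrameMs < targetMs * 0.90) {
            scale_ = std::min(1.0, scale_ + 0.02);  // claw back sharpness
        }
        return static_cast<int>(1280 * scale_);     // new framebuffer width
    }
private:
    double scale_ = 1.0;                            // fraction of native width
};

int main() {
    DynamicResolution dynres;
    // Simulate a demanding scene followed by recovery.
    for (double ms : {16.0, 19.0, 21.0, 17.0, 15.0, 14.5}) {
        std::printf("frame %.1fms -> render width %d\n", ms, dynres.update(ms));
    }
}
```

The appeal is obvious: rather than dropping frames in the worst-case scene or leaving performance on the table in the best case, the engine spends its pixel budget where the player is least likely to notice.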

Of course, plenty of technologies have evolved that aren't rendering-specific, though they are intrinsically linked to improvements in visual quality. The bad old days of lower-resolution textures and missing environmental detail on PS3 titles are all but over, despite the fact that the Xbox 360 still offers developers more available memory than its PlayStation counterpart - a lot of this is down to major advances in background streaming technology. It goes beyond texture and art assets, of course - the quantum leap in animation tech we've seen across the generation is linked to this, as is the arrival of games that feature no intrusive loading at all: God of War and Uncharted being good examples.
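Conceptually, the core of a streaming system looks something like the hypothetical C++ sketch below: the game thread files requests and carries on, while a dedicated loader thread services them in the background. Real engines layer priorities, memory budgets and asynchronous disc I/O on top of this:

```cpp
#include <chrono>
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

std::queue<std::string> g_requests;  // assets waiting to be loaded
std::mutex g_mutex;
std::condition_variable g_wake;
bool g_quit = false;

// Runs on its own thread so disc access never stalls the frame.
void loaderThread() {
    for (;;) {
        std::string asset;
        {
            std::unique_lock<std::mutex> lock(g_mutex);
            g_wake.wait(lock, [] { return g_quit || !g_requests.empty(); });
            if (g_quit && g_requests.empty()) return;
            asset = g_requests.front();
            g_requests.pop();
        }
        // Stand-in for reading and decompressing data from disc.
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
        std::printf("streamed in: %s\n", asset.c_str());
    }
}

// Called from the game thread as the player approaches new areas.
void requestAsset(const std::string& name) {
    std::lock_guard<std::mutex> lock(g_mutex);
    g_requests.push(name);
    g_wake.notify_one();
}

int main() {
    std::thread loader(loaderThread);
    // The game thread keeps simulating and rendering while these load.
    requestAsset("textures/area2_walls");
    requestAsset("models/area2_props");
    std::this_thread::sleep_for(std::chrono::milliseconds(200));
    { std::lock_guard<std::mutex> lock(g_mutex); g_quit = true; }
    g_wake.notify_one();
    loader.join();
}
```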

"Many of the disciplines developers have learnt from this hardware cycle will transition seamlessly into the next - but unfortunately perhaps, SPU coding for Cell probably won't be one of them."

So is the story of the current generation one of uninterrupted progress? Unfortunately not. If the rumours are true and Sony is using AMD x86 processing cores in PlayStation 4, it will mean the end of the Cell architecture and with it development for the unique SPUs - the ultra-fast satellite cores found within PS3's central processor. Without the SPUs, PlayStation 3 features an unremarkable CPU core and graphics hardware that falls short of the performance found in Microsoft's competing console. With them, developers have been able to match and at times exceed the limits of the Xbox 360 by hiving off GPU tasks - vertex processing and post-process effects like anti-aliasing and motion blur - onto the SPUs. Bespoke SPU coding is what makes games like God of War 3, Uncharted 3 and The Last of Us as spectacular as they are, while third-party developers have also made good use of them: Battlefield 3's state-of-the-art lighting system runs on the SPUs. While some cross-platform developers may breathe a sigh of relief at the end of Cell, it's a shame that the skills built up across the industry won't be that useful going forward.

The good news is that while we can fully expect each new console to have its own particular strengths and weaknesses, the chances are that developers will not need to reinvent the wheel in terms of their current coding processes. It's safe to say that the migration from Xbox and PlayStation 2 to their current-gen equivalents was fraught with problems - the major challenge common to both systems (PS3 in particular) was the move to multi-core development: running systems in parallel in order to get the most out of the processing power available. It's here that all the knowledge gained working with Cell's SPUs may pay off - it effectively forced coders to get to grips with a many-core approach to game development.

The move from PS2/Xbox to their modern day equivalents required a fundamental shift in development principles, causing major problems - especially on PlayStation 3. The new consoles appear to offer a more natural progression, and it can be argued that with engines like CryEngine 3 and Frostbite 2, major developers are effectively producing next-gen titles already.
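In its simplest form, the multi-core approach means fanning a frame's independent systems out across threads and joining before rendering - as in this toy C++ sketch, where the systems named are pure placeholders:

```cpp
#include <cstdio>
#include <thread>
#include <vector>

// Hypothetical per-frame systems that don't depend on each other, so they
// can safely run in parallel.
void updateAnimation() { std::puts("animation updated"); }
void updateParticles() { std::puts("particles updated"); }
void updateAudio()     { std::puts("audio updated"); }

int main() {
    // One frame's worth of work, fanned out across hardware threads.
    std::vector<std::thread> workers;
    workers.emplace_back(updateAnimation);
    workers.emplace_back(updateParticles);
    workers.emplace_back(updateAudio);
    for (auto& w : workers) w.join();  // frame barrier before rendering
    std::puts("frame complete");
}
```

The hard part, of course, is not spawning the threads but decomposing game logic so that systems genuinely don't depend on each other - which is exactly the discipline that Cell forced upon PS3 developers.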

Elsewhere, also easing the upcoming transition is the fact that PC development has effectively offered coders a preview of the next generation - Intel CPUs offer up to 12 logical cores (two threads per core on a six-core processor), and developers such as DICE have optimised their code to make use of the larger range of cores available. While many games still target dual-core processors, our own experiences with our cheap-as-chips Digital Foundry PC demonstrate that the latest titles are gradually moving across to accommodate an ever-increasing number of processing cores - music to the ears of AMD, now offering eight-core CPUs at mainstream prices, but also good preparation for getting more out of the next-gen consoles too.
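On PC the core count isn't fixed, so engines typically size their worker pools at runtime rather than hard-coding for a known console - something like this small, illustrative C++ snippet:

```cpp
#include <cstdio>
#include <thread>

int main() {
    // Query the logical core count; the standard allows 0 if unknown.
    unsigned logical = std::thread::hardware_concurrency();
    if (logical == 0) logical = 2;  // conservative fallback
    // Leave one core for the main/render thread, use the rest as workers.
    unsigned workers = logical > 1 ? logical - 1 : 1;
    std::printf("logical cores: %u, worker threads: %u\n", logical, workers);
}
```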

"Direct X 11 is now so firmly established that developers have had an excellent head-start in working with next-gen architecture."

On top of this, Microsoft's DirectX 11 API - fast becoming the standard for PC gaming - takes centre stage in the next Xbox, currently codenamed Durango. Our sources have suggested that existing PC game engines using DX11 require porting to 64-bit architecture but otherwise run with no problem at all on the new console. Microsoft profited immensely from the close links between DirectX 9 and the Xbox 360, and that link appears to have been strengthened still further with the new console. Sony's strategy with PlayStation 4 is less clear - our understanding is that the OpenGL API will be utilised, but only in recent weeks has this been upgraded to provide a similar level of functionality to DirectX 11.

It seems clear that the console makers have learned the lessons of yesteryear, and that the progression of gaming technology will be considerably smoother from this generation to the next than it was seven years ago. Barring any surprises in console design, there will be no fundamental shift in the way games are made; the next consoles will simply offer a generational increase in horsepower, based on the existing principles of a multi-core CPU and graphics processor. While the price we pay may well be a more conservative approach to design, and a move away from the Kutaragi-inspired exotic silicon that characterised three generations of PlayStation hardware, developers should be able to hit the ground running - and for a time at least, develop the same titles across current and next-gen consoles simultaneously.

Richard Leadbetter, Technology Editor, Digital Foundry