
Digital Foundry

Tech Focus: Olde Worlde Rendering

Wed 07 Sep 2011 7:00am GMT / 3:00am EDT / 12:00am PDT

How game developers are using decades- and centuries-old techniques to make modern games better.

Video games are a relatively modern part of human history; as such they ride the wave of cutting-edge technological development. Many of the rendering techniques used in games are passed down directly from the film and TV visual effects industries, where there is the relative luxury of spending minutes or even hours rendering a single frame, whereas a game engine has only milliseconds available. It's thanks to the ever-faster pace of processing hardware that these techniques become viable for real-time game rendering.

However, it may surprise you to learn that some of the algorithms used in modern game engines have a longer history and a very different origin than you might think. Some originate in cartography; others from engineering or were simply pieces of maths that had no practical use until recent times. As amazing as it may sound, many crucial 3D rendering techniques - including those being introduced into today's state-of-the-art game engines - can be based on mathematical concepts and techniques that were devised many hundreds of years ago.


Here, we present several examples that pre-date our industry but are now incredibly valuable in the latest games. The history and the people who discovered the algorithms will all be revealed: it's a fascinating insight into the ingenuity of modern games developers, re-purposing existing knowledge and techniques in order to advance the science of game-making.

Lambert Azimuthal Equal-Area Projection

What is it? It sounds like a mouthful, but this is a neat trick for projecting a 3D direction (or a point on a sphere) into a flat 2D space. It was invented in 1772 by the Swiss mathematician Johann Heinrich Lambert as a method of plotting the surface of the spherical Earth onto a flat map.

How does this apply to game rendering? In a deferred engine, you first render various attributes of the meshes to what are called g-buffers (geometry buffers) before calculating lighting using that information. One of the most common things to store in a g-buffer is the normal vector, which is a direction pointing directly away from the geometry surface at each pixel. The normal vector would usually be stored with x, y and z coordinates, using up three channels of a g-buffer, but you can accurately compress this information down to two channels with Lambert azimuthal equal-area projection.

The reason this projection is particularly useful compared to others is that it encodes normal vectors that face towards the camera quite accurately, with this accuracy gradually reducing as the normal begins to point away from the camera. This is perfect for game rendering as most of the normal vectors for objects on the screen will point towards the camera or only very slightly away.

The benefit of doing a 2D projection rather than simply storing the normal vector in x, y, z form is that it frees up a g-buffer channel, which can either reduce bandwidth and storage requirements or hold some other useful information about the meshes for the lighting pass. This technique is used by several modern game engines, including the CryEngine 3 technology that powered the recent state-of-the-art multi-format shooter, Crysis 2.
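
To give a flavour of how little work is involved, here is a minimal CPU-side sketch of the encode/decode pair. The function names are illustrative rather than taken from any particular engine, and it assumes unit-length view-space normals with positive z pointing towards the camera:

```cpp
#include <cmath>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Pack a unit-length view-space normal into two [0, 1] channels.
// The single normal pointing directly away from the camera (z = -1)
// is the one direction the projection cannot represent.
Vec2 encodeNormal(const Vec3& n)
{
    float f = std::sqrt(8.0f * n.z + 8.0f);
    return { n.x / f + 0.5f, n.y / f + 0.5f };
}

// Reconstruct the full x, y, z normal during the lighting pass.
Vec3 decodeNormal(const Vec2& enc)
{
    float fx = enc.x * 4.0f - 2.0f;
    float fy = enc.y * 4.0f - 2.0f;
    float f  = fx * fx + fy * fy;
    float g  = std::sqrt(1.0f - f / 4.0f);
    return { fx * g, fy * g, 1.0f - f * 0.5f };
}
```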

Lambertian Reflectance (or Lambert's Cosine Law)

What is it? How is it used in games? Johann Heinrich Lambert was a busy fellow, and others of his ideas are commonly used in game rendering. One of the most widespread is Lambertian Reflectance, which determines the brightness of diffuse light reflecting off a surface and was first published way back in 1760.

If you'll recall some trigonometry from school, the brightness of diffuse light is proportional to the cosine of the angle between the direction to the light source and the surface normal. What this means is that the cosine is equal to one when the angle is zero, so the surface appears at its brightest when it faces the light source directly.

The cosine falls towards zero as the angle rises to 90 degrees, so the brightness of the diffuse light falls off in the same fashion. The fact that he worked this out in the 18th century simply from observations of the real world, with no computers to create rendered images to test his theory against, is nothing short of astounding.
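
In code, the cosine law amounts to a single dot product between two unit vectors, clamped at zero so that surfaces facing away from the light receive nothing. A minimal sketch, with illustrative names rather than any engine's actual API:

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// N is the unit surface normal, L the unit direction from the surface
// towards the light. The cosine of the angle between them is simply
// their dot product.
float lambertDiffuse(const Vec3& N, const Vec3& L, float lightIntensity)
{
    return lightIntensity * std::max(0.0f, dot(N, L));
}
```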

Filmic Tonemapping

What is it? Game engines started to experiment with HDR (High Dynamic Range) rendering a few years ago, with one of the most notable early examples being 2005's Half-Life 2: Lost Coast. HDR rendering aims to improve the level of contrast and visible detail in scenes with extremes of light and darkness - as in the real world. It also helps to emulate how a camera or the human eye adapts to such changes in exposure over time.

If you go from a bright room into a dark room, it will take your eyes (or a camera) some time to adjust to the new lighting conditions, so HDR rendering helps simulate this to gain a more realistic or hyper-realistic cinematic effect.

Over the years, a large proportion of PC, Xbox 360 and PS3 games have used HDR rendering, though HDR has gained an undeserved reputation for simply meaning lots of bloom layered over the image.


What has tonemapping got to do with it? Tonemapping is one of the steps in the process of HDR rendering. A television or monitor cannot display HDR images natively - for example, your screen can't possibly show pixels as bright as the sun or a floodlight, so we have to convert the HDR rendered image to LDR (Low Dynamic Range) for display on a monitor or television. This stage is called tonemapping.

It's a similar process to that used in HDR photography. To produce an HDR photo, you first take several photos of the same scene at different exposure levels, and then run them all through a tonemapping algorithm which takes the most detailed parts of each exposure and blends them together to form a final image. The shape of the graph of how HDR colour/brightness values are mapped to the final image is known as the tonemapping curve.

In games, we have the advantage of not needing several images with different exposures, as we can render a single HDR image with a very wide exposure range and use that as input for the tonemapping. Until recently, most HDR game engines used a tonemapping curve designed to take the maximum amount of contrast at each level of brightness to increase detail as much as possible. A lot of games specifically used something called the Reinhard tonemapping curve. Whilst mathematically correct, this tonemapping curve sometimes results in "flat" looking images with weak "milky looking" black tones.
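
In its simplest form the Reinhard curve just maps a brightness value x to x / (1 + x), squeezing the whole HDR range into [0, 1). A rough sketch:

```cpp
// Basic Reinhard operator: compresses any HDR value into [0, 1),
// but tends to flatten contrast and wash out the blacks.
float reinhardTonemap(float hdr)
{
    return hdr / (1.0f + hdr);
}
```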

So how do we get more effective HDR? It turns out that there's a better tonemapping curve that predates the games industry by decades, dubbed the "filmic" tonemapping curve.

In the 1920s and 1930s, the scientists at Kodak were busy designing their Eastman and Kodachrome films - other film stock manufacturers were doing similar research. They came up with a formula which resulted in fine colour reproduction and crisp blacks. The tonemapping curve for real film is what is now known as "filmic tonemapping" and has been used by games in the last couple of years to achieve a more cinematic look. Naughty Dog's classic Uncharted 2 was one of the first games to use this idea.
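
As a rough illustration of the shape of such a curve, here is the widely circulated filmic approximation John Hable presented alongside Uncharted 2. The constants are his published example values, and the exposure and white point are very much taste parameters that each game tunes for itself:

```cpp
// Hable's example "filmic" curve shape, applied per colour channel.
float hableCurve(float x)
{
    const float A = 0.15f; // shoulder strength
    const float B = 0.50f; // linear strength
    const float C = 0.10f; // linear angle
    const float D = 0.20f; // toe strength
    const float E = 0.02f; // toe numerator
    const float F = 0.30f; // toe denominator
    return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F;
}

// Expose the HDR value, run it through the curve, then normalise so the
// chosen white point lands exactly on 1.0 for display.
float filmicTonemap(float hdr, float exposure, float whitePoint)
{
    return hableCurve(hdr * exposure) / hableCurve(whitePoint);
}
```

Hable's published example uses an exposure bias of around 2.0 and a linear white point of 11.2, which is why filmic-tonemapped games keep their blacks crisp where a plain Reinhard curve would leave them milky.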

Homogeneous Coordinates

What are they? This is the daddy of all rendering techniques. Practically every single 3D game in existence uses them. Without homogeneous coordinates, rendering and animation of 3D scenes would be a lot trickier. They are so universal that they are built into the very core of the GPUs and graphics libraries (DirectX, OpenGL, etc) that we take for granted in our games.

Homogeneous coordinates were invented as an extension to Cartesian coordinates - the coordinate system we all loved to hate when drawing graphs in maths classes at school. Homogeneous coordinates have been around since 1827, when the German mathematician August Ferdinand Möbius published his work Der barycentrische Calcül. One of their main purposes was to allow a finite representation of infinity in projective geometry - an interesting mathematical tool, but one with no practical use at the time.

Without getting bogged down in maths (you can easily look it up if you really want to), homogeneous coordinates allow us to express any combination of translation (movement), rotation, scaling and skewing as a single 4x4 matrix - a simple grid of 16 numbers. Almost every vertex in any 3D scene will be transformed by at least one of these 4x4 matrices inside a vertex shader. The system also easily allows games to use perspective - the closer an object is to the virtual camera, the larger it appears on screen, just as in the real world.
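
As a sketch of what happens to each vertex: the point becomes (x, y, z, 1), is multiplied by a 4x4 matrix, and - if that matrix included a perspective projection - is divided by the resulting w to produce foreshortening. Names and types here are illustrative:

```cpp
#include <array>

using Mat4 = std::array<std::array<float, 4>, 4>; // row-major 4x4 matrix
struct Vec4 { float x, y, z, w; };

// Multiply a homogeneous point by a 4x4 transform, the way a vertex
// shader applies its combined model-view-projection matrix.
Vec4 transform(const Mat4& m, const Vec4& v)
{
    return {
        m[0][0] * v.x + m[0][1] * v.y + m[0][2] * v.z + m[0][3] * v.w,
        m[1][0] * v.x + m[1][1] * v.y + m[1][2] * v.z + m[1][3] * v.w,
        m[2][0] * v.x + m[2][1] * v.y + m[2][2] * v.z + m[2][3] * v.w,
        m[3][0] * v.x + m[3][1] * v.y + m[3][2] * v.z + m[3][3] * v.w,
    };
}

// After a projection matrix, w is no longer 1; dividing by it is what
// makes distant objects appear smaller on screen.
Vec4 perspectiveDivide(const Vec4& clip)
{
    return { clip.x / clip.w, clip.y / clip.w, clip.z / clip.w, 1.0f };
}
```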

Radiosity

What is it? You may have heard of this one. Today it is used in rendering engines as a global illumination technique: radiosity simulates the way light bounces around a scene on its way from the light source to your eyes. For example, in a room with a red wall, you will see some of that red bounced onto the floor, ceiling and other surfaces.

It's computationally expensive, so very few games do this in real-time at the moment, though new methods aim to change that. For example, the Enlighten software from Geomerics can do this and is used in several current games, including the forthcoming Battlefield 3, while Crytek has been quite vocal about the advantages of running global illumination in real-time within its own CryEngine 3 middleware.

However, for years many games have used pre-calculated radiosity information using lightmaps or other methods. Many games also use real-time AO (Ambient Occlusion), which can be thought of as a very basic form of radiosity.


Where does it come from? It is surprising to learn that radiosity originates not in computer-generated rendering, but in engineering. It was invented around 1950 as a way to predict heat flow through materials and machines. It specifies how incoming radiation (heat, in this case) is diffusely reflected or emitted by surfaces, and helps with the design of machines that produce heat.

Light behaves in a similar manner to heat (after all, both are forms of electromagnetic radiation), so the methods were adapted for rendering in the 1980s at Cornell University. The "Cornell Box" is a classic example of a test scene designed to judge the accuracy of rendering engines.
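
Stripped right down, the classic method splits the scene into patches and repeatedly "gathers" light between them until the answer settles. The sketch below shows one such iteration for scalar intensities, assuming the form factors (how much each patch can see of every other) have been computed elsewhere - it is an illustration of the idea, not any shipping engine's code:

```cpp
#include <cstddef>
#include <vector>

// One gather iteration of the classic radiosity equation
// B_i = E_i + rho_i * sum_j(F_ij * B_j), for scalar patch intensities.
// emission, reflectance and radiosity hold one value per patch;
// formFactor[i][j] says how much of patch j is "seen" by patch i.
void radiosityIteration(const std::vector<float>& emission,
                        const std::vector<float>& reflectance,
                        const std::vector<std::vector<float>>& formFactor,
                        std::vector<float>& radiosity)
{
    const std::size_t n = radiosity.size();
    std::vector<float> next(n);
    for (std::size_t i = 0; i < n; ++i) {
        float gathered = 0.0f;
        for (std::size_t j = 0; j < n; ++j)
            gathered += formFactor[i][j] * radiosity[j];
        next[i] = emission[i] + reflectance[i] * gathered;
    }
    radiosity = next; // run repeatedly until the values stop changing
}
```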

Wrapping Up

Graphics programmers in the games industry are not just looking to the film and TV effects industries for inspiration, but cast their nets much further afield in order to improve visuals within tight performance budgets. It is a highly competitive industry that is perpetually moving forwards, even within a single console generation.

If we compare early Xbox 360 and PS3 games to what is on the shelves now, there is a definite overall progression as more and more is squeezed out of the consoles. This pattern has been repeated throughout every generation: this is partly down to the developers getting more used to the hardware over time, but also through the adoption of new ideas and techniques - often from unexpected places - just a handful of which are shown in this article.

In the future, it is certain that other techniques will continue to be adapted from completely different uses to serve as rendering tricks, improving performance and realism (or not, as the game requires) and giving artists more flexibility to achieve their visions. For us gaming enthusiasts, that means there is always something new to look forward to.

11 Comments

Nice round up of existing methods in current game engines.

Would be great for a following article to look into some other game engines that may use alternative rendering/performance enhancing techniques!

Good job!

#1

Lee Walton
Co-Founder & Art Director

Nice one Keith! Brilliant as usual... I read this without realising I knew the author, and now it makes sense why the writer is so knowledgeable!

#2

Lewis Brown
Snr Sourcer/Recruiter

Although I have a limited technical understanding of this I still found it incredibly interesting, good work :-)

#3
This article made me miss math class:]

#4
Great read!

#5

Keith Judge
Founder/Technical Director

Thanks for the nice comments people!

#6

Jonathan Hau-Yoon
Junior Game Artist

Great job! :)

#7

Mengkai Cao
Engine&Tool Programmer

Fantastic compilation of rendering techniques rooted in pre-existing maths knowledge. Thanks.

#8
Awesome article. It's very interesting to see how these great minds made some great games possible.

#9

Mus Cetiner
Studying Game Design and Production Management

Excellent read and highly interesting.

#10

Yiannis Koumoutzelis
Founder & Creative Director

Good job rounding up all the classic methods of rendering and post-process effects! You forgot the Fresnel algorithm :) which is also widely used!

This is the reason why many studios at the beginning of the "next gen" era started hiring people with film FX experience. Having previously worked in film VFX and stereoscopic/autostereoscopic content creation helped me immensely to understand next-gen shaders and, later, how 3D layout should work. Even the way certain things work in post-processing is very similar to the way offline compositing works. Even today, artists and engineers sometimes present to me techniques which we used 10-15 years back in offline rendering. Back then there was a term that was used widely - everybody who is old enough will remember "convergence". It is a dream that engineers and hardware manufacturers have worked very hard to achieve, and today we are closer than ever to achieving it. The term was used to signal the unification of real-time and offline results and methodologies.

For me, Unreal Engine is the closest one can get to true cinematic game visuals, following the same core methodologies down to displacement-based morphing and subdivision displacement. The Samaritan demo is a prime example.

Lately we can also see an attempt to apply the same real-time techniques to the viewports of our DCC tools, such as 3ds Max and Maya with Nitrous and Viewport 2.0 respectively.


#11
