
Tech Focus: Olde Worlde Rendering

How game developers are using decades and centuries old techniques to make modern games better.

Filmic Tonemapping

What is it? Game engines started to experiment with HDR (High Dynamic Range) rendering a few years ago, with one of the most notable early examples being 2005's Half-Life 2: Lost Coast. HDR rendering aims to improve the level of contrast and visible detail in scenes with extremes of light and darkness - as in the real world. It also helps to emulate how a camera or the human eye adapts to such changes in exposure over time.

If you go from a bright room into a dark room, it will take your eyes (or a camera) some time to adjust to the new lighting conditions, so HDR rendering helps simulate this to gain a more realistic or hyper-realistic cinematic effect.

Over the years, a large proportion of PC, Xbox 360 and PS3 games have used HDR rendering, though HDR has gained an undeserved reputation for simply meaning lots of bloom layered over the image.


What has tonemapping got to do with it? Tonemapping is one of the steps in the process of HDR rendering. A television or monitor cannot display HDR images natively - for example, your screen can't possibly show pixels as bright as the sun or a floodlight, so we have to convert the HDR rendered image to LDR (Low Dynamic Range) for display on a monitor or television. This stage is called tonemapping.

It's a similar process to that used in HDR photography. To produce an HDR photo, you first take several photos of the same scene at different exposure levels, and then run them all through a tonemapping algorithm which takes the most detailed parts of each exposure and blends them together to form a final image. The shape of the graph of how HDR colour/brightness values are mapped to the final image is known as the tonemapping curve.

In games, we have the advantage of not needing several images with different exposures, as we can render a single HDR image with a very wide exposure range and use that as input for the tonemapping. Until recently, most HDR game engines used a tonemapping curve designed to preserve as much contrast as possible at each level of brightness, in order to retain the maximum amount of detail. A lot of games specifically used something called the Reinhard tonemapping curve. Whilst mathematically well-behaved, this curve sometimes results in "flat" looking images with weak, "milky looking" black tones.
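To see why Reinhard produces those "milky" blacks, it helps to look at the curve itself. A minimal sketch of its simplest form: each HDR luminance value L, which can range from zero to arbitrarily large, is mapped to L / (1 + L), which always lands in the displayable 0-1 range.

```python
def reinhard(luminance):
    """Simple Reinhard tonemapping: compresses 0..infinity into 0..1."""
    return luminance / (1.0 + luminance)

# Extremely bright HDR values compress smoothly towards 1.0,
# while dark values pass through almost unchanged - which is why
# shadows can end up looking lifted and low-contrast.
for L in (0.05, 0.5, 1.0, 4.0, 100.0):
    print(f"HDR {L:6.2f} -> LDR {reinhard(L):.3f}")
```

Note how an input of 1.0 only reaches 0.5 on screen, and even a value of 100 never quite hits full white: nothing is ever crushed to pure black or clipped to pure white, which is exactly the "flat" behaviour the article describes.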

HDR rendering is more than just adding a bunch of bloom effects over the image - it's all about improving contrast and rendering detail in the extremes of light and darkness. Half-Life 2: Lost Coast (left) shows an early HDR effort. However, more recent games such as Uncharted 2 (right) use filmic tonemapping to get a cinematic look.

So how do we get more effective HDR? It turns out that there's a better tonemapping curve that predates the games industry by decades, dubbed the "filmic" tonemapping curve.

In the 1920s and 1930s, the scientists at Kodak were busy designing their Eastman and Kodachrome films - other film stock manufacturers were doing similar research. They came up with a formula which resulted in fine colour reproduction and crisp blacks. The tonemapping curve for real film is what is now known as "filmic tonemapping" and has been used by games in the last couple of years to achieve a more cinematic look. Naughty Dog's classic Uncharted 2 was one of the first games to use this idea.
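A well-known formulation of this curve was published by John Hable, who worked on Uncharted 2 at Naughty Dog. The sketch below uses the constants from his widely shared write-up; treat them as illustrative rather than as the exact values any particular game shipped with.

```python
def hable_partial(x):
    """Hable's filmic curve before white-point normalisation."""
    A, B, C, D, E, F = 0.15, 0.50, 0.10, 0.20, 0.02, 0.30
    return ((x * (A * x + C * B) + D * E) /
            (x * (A * x + B) + D * F)) - E / F

def filmic(x, white_point=11.2):
    """Normalise so the chosen 'white point' maps exactly to 1.0."""
    return hable_partial(x) / hable_partial(white_point)

def reinhard(x):
    return x / (1.0 + x)

# The filmic curve pushes dark values much closer to black than
# Reinhard does - the "crisp blacks" the film stocks were tuned for.
for L in (0.1, 0.5, 1.0, 4.0):
    print(f"HDR {L:4.1f} -> Reinhard {reinhard(L):.3f}, filmic {filmic(L):.3f}")
```

The key difference is the "toe" at the bottom of the filmic curve: low luminance values are compressed harder towards zero, so shadows read as solid black rather than washed-out grey.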

Homogeneous Coordinates

What are they? This is the daddy of all rendering techniques. Practically every single 3D game in existence uses it. Without homogeneous coordinates, rendering and animation of 3D scenes would be a lot trickier. The technique is so universal that it is built into the very core of the GPUs and graphics libraries (DirectX, OpenGL, etc) that we take for granted in our games.

Homogeneous coordinates were invented as an extension to Cartesian coordinates - the coordinate system we all loved to hate when drawing graphs in maths classes at school. Homogeneous coordinates have been around since 1827, when the German mathematician August Ferdinand Möbius published his work Der barycentrische Calcül. One of the main purposes at the time was to allow a finite representation of infinity in projective geometry - an interesting mathematical tool, but with no practical use at the time.

Without getting bogged down in maths (you can easily look it up if you really want to), homogeneous coordinates allow us to express any combination of translation (movement), rotation, scaling and skewing as a single 4x4 matrix - a simple grid of 16 numbers. Almost every vertex in any 3D scene will be transformed by at least one of these 4x4 matrices inside a vertex shader. The system also easily allows games to use perspective - the closer an object is to the virtual camera, the larger it appears on screen, just as in the real world.
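A minimal sketch of why this trick matters: a 3D point (x, y, z) becomes (x, y, z, 1), and translation - which is not a linear operation in plain Cartesian coordinates - becomes an ordinary 4x4 matrix multiply, so it can be chained freely with rotations and scales. The extra fourth component also gives us perspective, via a divide at the end.

```python
def mat_vec(m, v):
    """Multiply a 4x4 matrix (a list of four rows) by a 4-component vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def translation(tx, ty, tz):
    """Translation as a 4x4 matrix - only possible thanks to the w component."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

point = [2.0, 3.0, 4.0, 1.0]   # w = 1 marks a position (w = 0 would be a direction)
moved = mat_vec(translation(10, 0, -5), point)
# moved is [12.0, 3.0, -1.0, 1.0]

# Perspective falls out of the same machinery: after a projection matrix,
# w ends up holding the depth, and dividing by it makes distant points
# shrink towards the centre of the screen.
def perspective_divide(v):
    x, y, z, w = v
    return [x / w, y / w, z / w]
```

This is the same pipeline a vertex shader runs for every vertex: multiply by a 4x4 matrix (or a chain of them), then let the hardware perform the divide by w.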
