One of the most important contributors to image quality and stability in the modern age is anti-aliasing, which smooths away jagged edges and helps to minimise distortion when rendering a high-resolution image at a lower resolution.
The traditional solution is to employ multi-sample anti-aliasing (MSAA), a hardware feature in all modern graphics accelerators, and of course an element of the Xenos and RSX GPU cores in today's HD consoles. However, despite excellent results (particularly in terms of sub-pixel detailing), it is a relatively expensive effect, heavy on RAM and bandwidth - resources that are at a premium on the PlayStation 3 in particular. In terms of cross-platform development, the number of console titles we've seen with anti-aliasing in effect on Xbox 360 but absent on PS3 is considerable.
The response from Sony's Advanced Technology Group (ATG) was remarkable. It took on research from Intel into morphological anti-aliasing, and created its own version of the tech which debuted in God of War III and has regularly featured in both first and third party games - an excellent example of how Sony rolls out technological innovations from its central HQ to all PlayStation developers.
MLAA is a post-process anti-aliasing technique that scans the framebuffer, attempting to pattern-match edges and applying a blur/filter, providing edge-smoothing that goes well beyond the traditional 2x MSAA and 4x MSAA we see in console titles. It's also very expensive from a computational perspective: it's believed that ATG's MLAA requires 3-4ms of rendering time spread across five SPUs.
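To illustrate the pipeline described above, here is a deliberately simplified Python sketch of the post-process idea: compute luminance, flag discontinuities, and blend across them. The threshold value and the fixed 50/50 blend are illustrative assumptions; real MLAA derives per-pixel coverage weights from matched edge shapes rather than applying a uniform blend.

```python
# A highly simplified sketch of the MLAA idea: detect luminance
# discontinuities in a finished frame, then blend across them.
# Real MLAA also pattern-matches edge shapes (L, Z and U patterns) to
# derive per-pixel coverage weights; this toy version applies a fixed
# 50/50 blend at detected edges purely to illustrate the pipeline.

EDGE_THRESHOLD = 0.1  # assumed luma-delta threshold, not from the article

def luma(rgb):
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b  # Rec. 601 weights

def mlaa_sketch(image):
    """image: list of rows, each row a list of (r, g, b) tuples in [0, 1]."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(1, w):
            left, cur = image[y][x - 1], image[y][x]
            # Flag a vertical edge where luminance jumps, then soften it.
            if abs(luma(cur) - luma(left)) > EDGE_THRESHOLD:
                out[y][x] = tuple(0.5 * (a + b) for a, b in zip(cur, left))
    return out
```

Because the filter only sees the final 2D image, it can be applied to any source - which is exactly why captured console footage can be run through it, as described below.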
It's not without its issues either: specifically, sub-pixel detail can be erroneously picked up as an edge and amplified, which can actually exaggerate pixel-popping issues rather than minimising them.
Of course, ATG's tech is also exclusive to PlayStation 3, but in an age where console developers are looking to extract every last ounce of performance from the architecture, MLAA-style implementations have value on Xbox 360, and the case for post-process anti-aliasing for PC titles is growing too. While Sony got there first with its MLAA work, the basic concepts are hardly proprietary. Jorge Jimenez, Jose I. Echevarria and Diego Gutierrez are working on a GPU-based implementation that works very well indeed on Xbox 360 and PC.
The team not only spent time talking with Digital Foundry about their tech, but actually handed over demo code, giving us a chance to check out its results. Because MLAA is a post-process technology, analysing the image as a 2D object, this meant we could use the filter on our own lossless HDMI video captures from existing console titles, and test out the quality of the technique independently…
Q: Why MLAA? Why now? Is the trend in gaming development making traditional multi-sample anti-aliasing too costly for the current console platforms? Why isn't MSAA suited to deferred rendering techniques?
Jorge Jimenez: Filter-based anti-aliasing in general appeared as a consequence of many factors. Speaking of previous graphics technology, the first that comes to mind is that it allows us to overcome the limitations found on some platforms. For example, it's not possible to use MSAA in conjunction with multiple render targets (MRT) on DirectX 9, a feature required to output the input data used for post-processing effects like ambient occlusion or object motion blur. But the more important reason may be that deferred shading rules out MSAA on some platforms, as this technique requires the usage of MRT and the ability to read individual samples of an MSAAed buffer. Also, on the Xbox 360, anti-aliasing usually forces the usage of tiling, a technique used to render a frame in tiles in order to fit within the memory requirements of the eDRAM. Tiling forces re-transforming meshes that cross multiple tiles and also introduces restrictions that may increase the complexity of the engine architecture.
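The tiling constraint is easy to see with some back-of-the-envelope arithmetic. The sketch below assumes a 1280x720 render target with 32-bit colour and 32-bit depth/stencil per sample (a typical but assumed configuration; only the 10MB eDRAM capacity is a documented figure), and reproduces the familiar Xbox 360 tile counts:

```python
# Back-of-the-envelope arithmetic for the eDRAM tiling constraint.
# Assumes a 1280x720 target with 32-bit colour and 32-bit depth/stencil
# per sample; the 10MB figure is the Xbox 360's documented eDRAM
# capacity, everything else is an illustrative assumption.
import math

EDRAM_BYTES = 10 * 1024 * 1024   # Xbox 360 eDRAM capacity
BYTES_PER_SAMPLE = 4 + 4         # 32-bit colour + 32-bit depth/stencil

def tiles_needed(width, height, msaa):
    frame_bytes = width * height * msaa * BYTES_PER_SAMPLE
    return math.ceil(frame_bytes / EDRAM_BYTES)

print(tiles_needed(1280, 720, 1))  # 1 tile: no tiling needed
print(tiles_needed(1280, 720, 2))  # 2 tiles
print(tiles_needed(1280, 720, 4))  # 3 tiles
```

As soon as the multisampled frame no longer fits in 10MB, the scene must be rendered more than once, which is where the re-transform cost comes from.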
In current PC (DX10+) and probably future console technology you can easily mix MRT with MSAA, so you no longer have problems generating the buffers that post-processing effects usually require. And you can also access individual samples. So, you may be thinking, what's the problem then? The first problem is the huge memory consumption: for 8x MSAA and a regular G-buffer you require 506MB just to generate the final frame (not counting models, textures or temporal framebuffers). The second is that wherever there is an edge, the pixel shader needs to be supersampled. In other words, by using 16x MSAA in a deferred engine, you have to calculate the colour 16 times for each pixel on an edge, instead of once, which can lead to a huge performance penalty.
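The 506MB figure can be roughly reconstructed. Assuming a 1920x1080 target and 32 bytes of G-buffer data per sample (an assumed layout, e.g. four 64-bit render targets; the exact configuration isn't specified here), the multisampled G-buffer alone comes to just over 506MB:

```python
# Rough reconstruction of the quoted 506MB deferred-shading figure.
# The 1080p target and the 32-byte-per-sample G-buffer layout are
# illustrative assumptions that happen to reproduce the quoted number.
width, height = 1920, 1080
msaa = 8
gbuffer_bytes_per_sample = 32  # e.g. four 64-bit render targets (assumed)

total_bytes = width * height * msaa * gbuffer_bytes_per_sample
print(total_bytes / (1024 ** 2))  # ~506.25 MB, before models and textures
```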
From a quality point of view, the maximum MSAA sample count usually reachable on the current generation of consoles is 2x, while filter-based approaches can exceed 16x in terms of gradient steps. So, it's a big quality upgrade on consoles. Furthermore, filter-based anti-aliasing allows developers to easily overcome the usual pre-tonemapping resolve problems found in HDR engines.
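The gap in gradient steps is easy to demonstrate. In the toy Python comparison below (an illustrative model, not any shipping implementation), a filter that estimates fractional edge coverage produces a distinct intensity for every pixel along a slope-1/16 edge, while 2x MSAA can only quantise coverage to three levels:

```python
# Why a filter-based approach can exceed "16x in terms of gradient
# steps": estimating fractional edge coverage yields a near-continuous
# gradient, while 2x MSAA can only output 0, 0.5 or 1 per pixel.
# Toy model: an edge of slope 1/16 crossing a row of 16 unit pixels.

SLOPE = 1.0 / 16.0

def analytic_coverage(x):
    # Fraction of pixel column x lying below the edge y = SLOPE * x,
    # evaluated at the column centre (x + 0.5).
    return SLOPE * (x + 0.5)

def msaa2_coverage(x):
    # Two vertical sample points at y = 0.25 and y = 0.75 in the pixel.
    edge_y = SLOPE * (x + 0.5)
    return sum(1 for s in (0.25, 0.75) if s < edge_y) / 2.0

analytic = [analytic_coverage(x) for x in range(16)]
msaa2 = [msaa2_coverage(x) for x in range(16)]
print(len(set(analytic)))  # 16 distinct gradient steps
print(len(set(msaa2)))     # only 3 (0, 0.5, 1)
```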
And last, but not least, filter-based anti-aliasing usually yields a significant speed-up with respect to MSAA. For example, our GPU implementation (Jimenez's MLAA) runs 1180% faster than MSAA. So, these techniques are very appealing even for forward rendering approaches.
So, to summarise, MLAA (and post-processing filters in general) exists now because of the limitations of the current generation of consoles and of previous PC technology, and because of the performance speed-up with respect to MSAA. The widespread usage of deferred shading warrants its usage into the future, even if MSAA performance improves on the GPUs to come in the next few years.
Q: We've seen Sony's implementation of MLAA, costing around 3-4ms across five SPUs. How does your implementation compare from a performance perspective?
Jorge Jimenez: It's difficult to say given that we are running on different platforms and configurations. On PC, we're quite fast. In fact, we're almost free on the mid-to-high GPU range (around 0.4ms on a GeForce GTX 295) and, as far as we know, the fastest approach given the maximum line length we are able to handle. On the Xbox 360 we run at 2.47ms, with plenty of possible optimisations still to try.
Q: Have you been able to compare from a quality perspective at all? There's a perception that CPU MLAA offers higher quality than GPU-accelerated solutions.
Jorge Jimenez: We can't speak for all the MLAA(-like) implementations out there, but we think our current version 1.6 (the one used for these comparisons) has raised the quality bar considerably. In our tests, it produces results on par with (when not superior to) CPU MLAA. One of our best features is that we are very conservative with the image: we only process where we are sure there is a perceptible edge, and version 1.6 does a pretty good job of searching for perceptible edges. This preserves maximum sharpness while still processing all the relevant jaggies.
Q: A big question surrounding MLAA solutions has been how sub-pixel edges are handled, particularly in terms of "pixel popping". Your solution seems to have a "lighter touch", smoothing edges but not making existing pixel-popping worse. Is that a fair summary?
Jorge Jimenez: As with all the anti-aliasing filters out there, if you are working at final display resolution (1x), pixel-popping is going to happen sooner or later. You can try attenuating or eliminating "spurious" pixels, but we think that this is not the optimal solution since it doesn't tackle the root of the problem: sub-sampling. As we said before, we're really conservative with the image, so we avoid introducing additional steps that don't always work and that can negatively affect temporal coherence.
We think the next step involves hybrid approaches that combine MSAA (with low sample counts) with filter-based anti-aliasing techniques. This would offer a good trade-off between sub-pixel features, smooth gradients and low processing time.
Q: Post-process anti-aliasing techniques seem to work better at 1080p - is it the case that higher resolutions help hide the artifacts?
Jorge Jimenez: This is a really interesting question. It's not that higher resolutions hide MLAA artifacts; rather, they hide aliasing itself, thanks to a denser sampling of the scene. At a sufficient resolution, anti-aliasing would not be required: your eyes would do it for you. Instead of discerning each pixel separately, the pixels would be so small that the visual system would average groups of them, yielding the same result as if anti-aliasing had been applied.
To demonstrate that, since we cannot make the pixels of our monitors smaller, what we will do instead is walk away from them. Take a look at this image: on the left you have the perfectly anti-aliased image, and on the right the same image at increased resolution, but without anti-aliasing. If you view the images from a distance you will not be able to tell one from the other; this is the same as making the pixels so small that you no longer discern them individually.
In the image on the left, the averaging is done by the computer; on the right, by your own eyes. This simple example also explains the usual anti-aliasing process; in the end, we just mimic nature. But note how far you have to walk away from the monitor: the resolution required to eliminate aliasing would need to be so high that it would not be a practical solution.
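The averaging the eye performs at a distance is the same operation supersampling performs in software: render at a higher resolution, then box-filter groups of pixels down to display resolution. A minimal sketch (grayscale values and a 2x2 box filter, both illustrative choices):

```python
# The averaging described above, done in software: a 2x2 box-filter
# downsample over a grayscale image stored as lists of floats in [0, 1].
# A hard, aliased diagonal edge at the higher resolution becomes a
# smooth gradient at display resolution.

def downsample_2x2(image):
    h, w = len(image), len(image[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            block = [image[y][x], image[y][x + 1],
                     image[y + 1][x], image[y + 1][x + 1]]
            row.append(sum(block) / 4.0)  # average the 2x2 group
        out.append(row)
    return out

hi_res = [
    [1.0, 0.0, 0.0, 0.0],
    [1.0, 1.0, 0.0, 0.0],
    [1.0, 1.0, 1.0, 0.0],
    [1.0, 1.0, 1.0, 1.0],
]
print(downsample_2x2(hi_res))  # [[0.75, 0.0], [1.0, 0.75]]
```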
Q: Where do you see anti-aliasing technology progressing as we move into the next-gen console era?
Jorge Jimenez: We think the extensive usage of deferred shading seen in AAA games will ensure the continuous evolution of filter-based anti-aliasing approaches. In fact, in the past year we have seen the birth of a whole lineup of techniques, which will be covered in our SIGGRAPH 2011 course "Filtering Approaches for Real-Time Anti-Aliasing", and hopefully this will motivate further research in the area. We believe the evolution will be to combine the best ideas of each technique, maximising the pros while minimising the cons.
Furthermore, the more realistic a computer-generated image is, the more important near-perfect anti-aliasing becomes. When you looked at a low-poly character five years ago, the graphics looked rather synthetic anyway; you didn't care about aliasing, as there were bigger graphical problems to look at. However, with current rendering advances, when you look at photorealistic game content, aliasing may reveal that the image is synthetic and not real. So, in the future, as the realism of graphics continues to evolve, high-quality anti-aliasing will become more and more important. We won't want the jaggies to destroy the illusion created by a perfectly animated and rendered character, revealing that it is, in truth, just a bunch of vertices smartly put together.