I attended the Lighting Papers session on Tuesday. Of particular note were the following two papers:
The Lightspeed Automatic Interactive Lighting Preview System
This paper describes a lighting preview system from ILM. Like other relighting systems (such as Pixar’s LPICS), it uses deferred shading to enable fast iteration on the lighting (which happens after everything else, including camera, has been locked down). The interesting thing is that they found a way to do deferred rendering on scenes with transparency, antialiasing and motion blur. They keep a separate buffer with all their fragments and their “deep framebuffer” properties (this buffer has no particular 2D layout; it’s just a bucket of fragments). They also keep a framebuffer in which AA, transparency and motion blur are already resolved, so every pixel holds pointers to its list of fragments and their blending weights (since AA, transparency and motion blur all ultimately come down to blending fragment colors). They then do deferred shading on the bucket of fragments and blend the results to produce the final frame. It’s kind of a “deep A-buffer” for deferred rendering, which is interesting since many games have been using some form of deferred shading lately. The paper is available here.
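Here is a toy sketch of how I picture that structure (all names and layout are mine, not ILM’s actual implementation): a flat bucket of fragments holding cached shader inputs, plus per-pixel fragment lists with precomputed blend weights, so that relighting only re-runs shading over the bucket and then blends.

```python
import numpy as np

# Toy "deep A-buffer" sketch; every name here is my own invention.
# The fragment bucket holds cached ("deep framebuffer") shader inputs
# that never change while you iterate on lights.
frag_normal = np.array([[0.0, 0.0, 1.0],
                        [0.0, 0.7071, 0.7071],
                        [0.0, 1.0, 0.0]])
frag_albedo = np.array([[0.8, 0.2, 0.2],
                        [0.2, 0.8, 0.2],
                        [0.2, 0.2, 0.8]])

# Per pixel: which fragments contribute, and with what weight. AA,
# transparency and motion blur have already been folded into the weights.
pixel_frags   = [np.array([0, 1]), np.array([1, 2])]
pixel_weights = [np.array([0.75, 0.25]), np.array([0.5, 0.5])]

def shade(light_dir):
    # The only work redone per lighting change: shade the whole fragment
    # bucket at once (a trivial N.L diffuse stands in for the real shader).
    ndotl = np.clip(frag_normal @ light_dir, 0.0, None)
    return frag_albedo * ndotl[:, None]            # (num_fragments, 3)

def resolve(colors):
    # Blend shaded fragments into final pixels using the cached weights.
    return np.array([w @ colors[ids]
                     for ids, w in zip(pixel_frags, pixel_weights)])

print(resolve(shade(np.array([0.0, 0.0, 1.0]))))
```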
Frequency Domain Normal Map Filtering
This paper proposes a nice solution to the surface minification problem, which has been bothering me for a while.
When shaders were simple, like color A (lighting) * color B (albedo), standard MIP mapping of the textures worked great. But once you start doing high-frequency, nonlinear stuff like bumpy specular, MIP mapping both aliases and looks plain wrong. The reason is that when an object is far away, many texels (describing various surface properties) cover a single screen pixel. What you WANT is to shade all those texels and then average the results to get your screen pixel. What you GET with MIP mapping is to average your shader inputs and shade once. Since the shading is nonlinear, those are not the same thing at all. The most obvious symptom is sparkly aliasing, but even if you use one of the common hacks to suppress it (like fading out specular with distance), the surface still doesn’t look the way it should. The SIGGRAPH 2005 sketch “SpecVar maps” explains the problem.
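A two-normal toy example (my numbers, nothing to do with the sketch) makes the gap obvious: with a specular power of 50, averaging the shaded results gives almost nothing, while shading the averaged normal gives a full-strength highlight.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Two bumpy normals landing in one distant screen pixel (toy numbers),
# a half vector straight up, and a specular power of 50.
normals = normalize(np.array([[0.4, 0.0, 1.0],
                              [-0.4, 0.0, 1.0]]))
h = np.array([0.0, 0.0, 1.0])
power = 50.0

def blinn_phong(n):
    # Just the specular lobe; the nonlinearity is the whole point.
    return np.clip(n @ h, 0.0, None) ** power

shade_then_average = blinn_phong(normals).mean()                   # ~0.025
average_then_shade = blinn_phong(normalize(normals.mean(axis=0)))  # 1.0

print(shade_then_average, average_then_shade)
```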
My take on this is that you want to filter your BRDF and normals as one unit. Most BRDFs, even common ones like Blinn-Phong, implicitly contain a normal distribution function (NDF), which captures the fact that specular highlights are the result of millions of microscopic facets with normals pointing in (semi-)random directions. In fact, the shape of the highlight is exactly the shape of the distribution of these microfacet normals. This aggregation of tiny normals into a BRDF is the same thing that happens with MIP mapping, just at a different scale. You can think of the normal map as merely orienting the distribution of microfacet normals so that its peak points in a different direction.
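To make that concrete (my notation, not from either paper): the Blinn-Phong specular lobe, viewed as a distribution over microfacet normals h, is

D(h) ∝ max(n · h, 0)^s

peaked around the surface normal n. All the normal map does is substitute its per-texel normal for n, re-aiming the peak without changing its shape.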
So if you have a normal map plus a map describing the NDF (like a specular power map), all you need to do is rotate the NDF of each texel being averaged so it points along that texel’s normal from the normal map, average them, and then find the single NDF (subject to the restrictions of your model; if you are using Blinn-Phong, you are fitting a cosine power lobe) that best fits that average. The fit determines your MIPped normal and specular power. You can fold other texture-stored BRDF parameters, like specular gain maps, into the same process. All of this happens in the tools, so you don’t have to touch your shaders and there is no runtime cost whatsoever.
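As a brute-force illustration of that fitting step (purely my sketch; real tools would want something closed-form or much faster), here is a least-squares fit of a single cosine-power lobe to the average of two rotated lobes, evaluated over a set of sampled directions:

```python
import numpy as np
from scipy.optimize import minimize

def phong_lobe(dirs, axis, s):
    # A cosine-power lobe aimed along 'axis'; this is the Blinn-Phong
    # NDF shape, up to normalization.
    return np.clip(dirs @ axis, 0.0, None) ** s

# Directions to evaluate the NDFs at (uniform-ish over the sphere).
rng = np.random.default_rng(0)
dirs = rng.normal(size=(2048, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

# Two texels being averaged into one MIP texel: each has a normal (from
# the normal map) and a specular power (from the power map).
texel_normals = np.array([[0.3, 0.0, 1.0],
                          [-0.3, 0.0, 1.0]])
texel_normals /= np.linalg.norm(texel_normals, axis=1, keepdims=True)
texel_powers = np.array([60.0, 40.0])

# Rotate each NDF toward its texel's normal, then average them.
target = np.mean([phong_lobe(dirs, n, s)
                  for n, s in zip(texel_normals, texel_powers)], axis=0)

# Least-squares fit of one cosine-power lobe to that average.
def loss(params):
    axis = params[:3] / np.linalg.norm(params[:3])
    s = np.exp(params[3])            # keep the power positive
    return np.mean((phong_lobe(dirs, axis, s) - target) ** 2)

res = minimize(loss, np.array([0.0, 0.0, 1.0, np.log(50.0)]),
               method="Nelder-Mead")
mip_normal = res.x[:3] / np.linalg.norm(res.x[:3])
mip_power = np.exp(res.x[3])
print(mip_normal, mip_power)
```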
The “Frequency Domain Normal Map Filtering” paper doesn’t look at the problem in exactly this way, but it comes very close. It is based on Ravi Ramamoorthi’s frequency-space philosophy of shading, and it is very rigorous. The authors propose different representations for the NDF (spherical harmonics when the BRDF is low-frequency, and multiple lobes of something called a von Mises-Fisher distribution when it is high-frequency). The paper as written only handles spatially invariant BRDFs, but I asked the presenting author, who confirmed that you can fold your BRDF into the per-texel NDFs at the top MIP level, and thus represent things like per-pixel specular powers. The paper, video and some shader code are available here.
The way I would apply this paper: convert the Blinn-Phong specular power map (or whatever you are using) and the normal map into a single von Mises-Fisher lobe per texel at the top MIP level, run their filtering algorithm, convert each result back to the normal and specular power of the closest-fitting Phong lobe (judging from the renderings, a vMF lobe looks very similar to a Phong lobe, so hopefully the fit would not be too hard), and Bob’s your uncle. No speckly artifacts in the distance, the surface looks right from afar, etc.
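Here is roughly what those conversions could look like, as a sketch under my own assumptions: I identify a Phong exponent s with a vMF concentration κ ≈ s (since (μ·x)^s = exp(s·ln(μ·x)) ≈ exp(s(μ·x − 1)) near the peak), and I use the standard 3D inversion κ ≈ (3r − r³)/(1 − r²) from the vMF fitting literature to recover a lobe from an averaged direction.

```python
import numpy as np

def kappa_from_power(s):
    # Rough identification (my assumption): a Phong lobe of exponent s
    # behaves like a vMF lobe with concentration kappa ~ s near its peak.
    return s

def power_from_kappa(kappa):
    return kappa

def mean_resultant_length(kappa):
    # E[mu . x] under a vMF: A(kappa) = coth(kappa) - 1/kappa.
    return 1.0 / np.tanh(kappa) - 1.0 / kappa

def kappa_from_r(r):
    # Standard 3D approximation inverting A(kappa) from the vMF literature.
    return (3.0 * r - r ** 3) / (1.0 - r ** 2)

def filter_texels(normals, powers):
    # One MIP-down step: turn each texel into a vMF lobe, average the
    # lobes' expected directions, and refit a single lobe to the result.
    kappas = kappa_from_power(powers)
    mean = np.mean(normals * mean_resultant_length(kappas)[:, None], axis=0)
    r = np.linalg.norm(mean)
    return mean / r, power_from_kappa(kappa_from_r(r))

normals = np.array([[0.3, 0.0, 1.0],
                    [-0.3, 0.0, 1.0]])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
powers = np.array([60.0, 40.0])
print(filter_texels(normals, powers))
```

The nice property is that the filtering itself stays a plain linear average of scaled normals; all the nonlinearity lives in the conversions at either end, which is what makes it cheap enough to run over whole textures in the tools.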
It could be that using their method is overkill with a single lobe, and maybe the fitting can be done as quickly directly on Phong lobes – I don’t know, my knowledge of fitting methods is a bit shaky (which is why I went to the Least-Squares course a few days back). I welcome any comments from people who know more about this stuff.