
The new Real-Time Rendering website is now live. Besides an updated version of the resources from the previous website, there is also a blog where Eric, Tomas, and I will be posting various updates on the field. Check it out!

Now that NVIDIA has announced its newest GPU architecture (the GeForce GTX 200 series), interesting architectural details are popping up on the web. The best writeup I’ve found is by AnandTech. In the past, such detailed writeups would have required a lot of reverse-engineering and guessing. Nowadays, a lot of the detailed low-level information is forthcoming directly from NVIDIA itself. This is probably due to the desire to promote GPGPU programming models such as CUDA, which require low-level hardware knowledge to write efficient code. The architecture details make for some interesting reading.

Distribution-based BRDFs

This important BRDF paper is finally available online. I’ve been waiting for it to become available since I read an earlier draft two years ago. It presents a BRDF model that is cheap yet realistic, and that can easily be fit to data from real-world materials. Anyone interested in material rendering should check it out.

While I Was Away…

Postings have been pretty sparse of late, with no postings at all since September 2007. The reason I haven’t been keeping the blog up to date is that I’ve been busy with this:

RTR Cover

The second edition of “Real-Time Rendering” is one of my favorite technical books, so when I heard Tomas and Eric were looking for someone to help out with the third edition I wasted no time in offering my services. Luckily, they accepted, and some very hard (but fun) work ensued. Given everything that has happened in the field since 2002 (when the second edition was published), bringing the book up to date while keeping to the high standards set by previous editions was definitely a challenge. I think we did a pretty good job. The cover image links to the Amazon listing, where thanks to their nifty “Look inside the book” feature you can form your own opinion. If you have a chance to attend SIGGRAPH this year (always a good idea), we plan to do a signing at the A K Peters booth.

I’ve owned a (signed!) copy of Andrew Glassner’s Principles of Digital Image Synthesis for a short while. It’s an amazingly in-depth book on the fundamentals of computer graphics, and I warmly recommend it.

Anyway, I recently happened to glance at the appendices and noticed Appendix G. This appendix has tons of interesting spectral data in table and graph form: indices of refraction and extinction for various materials, CIE observer and human cone response curves, and spectra of various light sources. The appendix claims that the data is also available via ftp, but unfortunately the directory it gives doesn’t exist anymore. However, the appendix also mentions that the spectral reflectance data given for real objects is a subset of a larger set that is also available via ftp, and that one appears to still exist, albeit at a slightly different address (if your browser has problems with ftp, use a standalone client to open an anonymous ftp connection to ftp.eos.ncsu.edu and go to pub/eos/pub/spectra). This site has reflectance spectra for various color chips including the 64 Munsell colors, as well as 170 objects such as various types of wood, human skin and hair, leaves, rocks, many types of fabric, etc.

A good source of spectral data is handy even if you are not doing spectral rendering; you can convert the spectra into RGB values and use them as reference reflectance values (I did that recently when I needed a good reference RGB reflectance for copper).
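For anyone who wants to do the same, here is a minimal sketch of the conversion (my own code, not from any particular library), assuming the CIE 1931 color matching functions and an illuminant such as D65 are tabulated at the same wavelengths as the reflectance data:

```python
import numpy as np

def spectrum_to_linear_srgb(wavelengths, reflectance, illuminant, cmf_xyz):
    """Convert a reflectance spectrum to a linear-sRGB reflectance triple.

    wavelengths : (N,) sample wavelengths in nm (e.g. 380..780 in 5 nm steps)
    reflectance : (N,) spectral reflectance of the material, 0..1
    illuminant  : (N,) spectral power of the illuminant (e.g. CIE D65)
    cmf_xyz     : (N, 3) CIE 1931 color matching functions (x_bar, y_bar, z_bar)
    """
    # Integrate reflectance * illuminant against the color matching functions.
    stimulus = reflectance * illuminant
    xyz = np.trapz(stimulus[:, None] * cmf_xyz, wavelengths, axis=0)

    # Normalize so that a perfect white reflector (reflectance = 1) maps to Y = 1.
    white_y = np.trapz(illuminant * cmf_xyz[:, 1], wavelengths)
    xyz /= white_y

    # XYZ (D65 white point) to linear sRGB, per IEC 61966-2-1.
    m = np.array([[ 3.2406, -1.5372, -0.4986],
                  [-0.9689,  1.8758,  0.0415],
                  [ 0.0557, -0.2040,  1.0570]])
    return m @ xyz  # may fall slightly outside [0, 1]; clamp or gamut-map as needed
```

The result is a linear reflectance triple; apply the sRGB transfer function only at the very end if you need display-referred values.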

The Encore video of the Papers Fast-Forward session is available for $10, or is included if you bought the full conference package.

For anyone who isn’t familiar with the SIGGRAPH Papers Fast-Forward session, it is a fairly recent tradition (since 2002) in which all the papers are presented in very short (50-second) presentations. This gives you a chance to get a quick overview of the year’s SIGGRAPH papers so you can figure out which ones are worthy of further attention.

For some reason, many of the presenters in 2002 did (or at least tried to do) humorous presentations rather than playing it straight (most notable was Ken Perlin’s rap presentation of his Improved Noise paper). This has since become part of the tradition.

Unlike other sessions, the Encore video doesn’t give the full impact of being there, since you don’t see the presenters dressed in drag / multicolored clown afro wigs / Borat costumes / etc. It is still well worth seeing, however.

The same seven papers that were cut from the Encore papers sessions were cut from this as well (see my earlier post for details).

SIGGRAPH Encore

This year the ‘SIGGRAPH Encore’ access to full video of presentations was pretty slick, with most courses, papers and sketches available (for a fee) within 24 hours at the conference itself.

These are great even for people who attended the conference (due to all the overlapping sessions it’s impossible to see everything you’re interested in), but for people who couldn’t attend at all these are a godsend. You get whatever was on the presenter’s screen (slides, demos, etc.) with audio, so this is almost as good as being in the room.

These are also available as pay downloads at the SIGGRAPH Encore website. Probably the best deal is the entire conference: courses, papers & sketches for $200. Note, however, that some presentations are missing, mostly Hollywood stuff (undoubtedly for copyright reasons).

Note also that if you are an ACM or SIGGRAPH member, 2003, 2004 and 2005 video is available for free (streaming only) at a different site. If you have an ACM digital library subscription, the 2006 videos are available there as well as the older ones. Perhaps the 2007 video will eventually become available to members or subscribers as well.

After the jump, I list what’s not there for people who are trying to decide whether to order this:


I was browsing the A K Peters website recently (they have a great catalog of graphics books, like Real-Time Rendering, Advanced Global Illumination, and many others; they also publish the Journal of Graphics Tools) and I noticed an interesting book they have forthcoming this October: “Color Imaging: Fundamentals and Applications” by Erik Reinhard and others. Erik Reinhard is the inventor of one of the most commonly used tone mapping operators, and the primary author of the book High Dynamic Range Imaging.

Given my tendency to want to get down to the very fundamentals of rendering, I’ve been interested in color theory for a while (I am currently slowly slogging my way through Color Science by Wyszecki & Stiles) and the description of this book is intriguing: “This book provides the reader with an understanding of what color is, where color comes from, and how color can be used correctly in many different applications. The authors first treat the physics of light and its interaction with matter at the atomic level, so that the origins of color can be appreciated. The intimate relationship between energy levels, orbital states, and electromagnetic waves helps to explain why diamonds shimmer, rubies are red, and the feathers of the Blue Jay are blue. Then, color theory is explained from its origin to the current state of the art, including image capture and display as well as the practical use of color in disciplines such as computer graphics, computer vision, photography, and film.”

I think I’ll pick this one up when it comes out…

SIGGRAPH 2007 Roundup

Whew! I’m back from SIGGRAPH, which was exciting and exhausting (as it usually is). I got a few days behind in the rush; I’ll finish up the remaining items of note in this post (in the order I saw them, not necessarily in order of importance).

  • Wave Particles: this is a technique for simulating 2D waves which has some limitations (fixed wave speed, limited boundary shapes) but is very fast, low on memory and GPU-friendly. Basically wave fronts are represented as a series of stateless deterministic particles. Wave-object interactions are supported.
  • Shrek Lighting: most film CG lighting used to be done via placement of lots of small direct lights rather than via global illumination (GI), but many film houses are making the transition to GI-based lighting. PDI is one notable example - in the transition from Shrek to Shrek 2 they moved to (single-bounce) global illumination for their lighting and never looked back. In two different presentations at SIGGRAPH they mentioned this transition as a good thing for them. Some games still do their prelighting with a multitude of direct lights, and it does afford the lighting artists a lot of control, but I think this approach will become rarer as time goes on.
  • Stencil Routed A-Buffer: This sketch presented a way to get order independent transparency faster than depth peeling; in effect abusing the MSAA samples as A-buffer fragments. The method is clever but has some drawbacks (not the least of which is the loss of MSAA when using it).
  • Advanced Real-Time Rendering in 3D Graphics and Games: This full-day course was excellent, featuring many great tips from premier game developers like Valve and Crytek. Unfortunately the course notes are not up yet, but hopefully they will be up soon at the AMD conference presentation webpage (or perhaps at the old ATI one). Of particular note were Valve’s improvements to their famous but unfortunately named ‘Radiosity Normal Maps’, and Crytek’s ‘Screen-Space Ambient Occlusion’ method.
  • LucasArts & ILM: A Course in Film and Game Convergence: this tutorial outlines LucasArts’ and ILM’s attempt at greater collaboration between the two companies. I’m intrigued by the relationship between film and game rendering, so this tutorial was of particular interest to me. It seems that overall the experience was positive (LucasArts sure got some great tools out of the deal!) and it will be interesting to see how it develops in the future.
  • Real-Time Edge-Aware Image Processing with the Bilateral Grid: This paper presented a clever data structure for doing bilateral filtering and various other edge-aware processing several orders of magnitude faster than previous methods (though some of that speedup was due to the GPU implementation and not the algorithm itself). These types of filters are usually very slow because they are large and not separable. The 9 ms they quoted for a good-sized bilateral filter on a 720p image with a fast GeForce 8000-series card is almost fast enough for game post-processing effects (the time budget for a post-processing pass is closer to 1-2 ms), so it might be worth looking at the description of the implementation in the paper to see if there are any obvious optimization possibilities. A rough sketch of the grid idea follows below.
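I haven’t implemented this myself, but the core of the bilateral grid is simple enough to sketch on the CPU (grayscale only; the grid resolutions and sigmas below are made-up values, and the paper uses trilinear splatting/slicing and runs on the GPU):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bilateral_grid_filter(img, sigma_s=16, sigma_r=0.1):
    """Approximate bilateral filter of a grayscale image in [0, 1] via a bilateral grid."""
    h, w = img.shape
    # Grid dimensions: spatial axes downsampled by sigma_s, range axis by sigma_r.
    gh, gw = int(h / sigma_s) + 2, int(w / sigma_s) + 2
    gd = int(1.0 / sigma_r) + 2

    data = np.zeros((gh, gw, gd))    # accumulated intensity
    weight = np.zeros((gh, gw, gd))  # accumulated count (homogeneous coordinate)

    ys, xs = np.mgrid[0:h, 0:w]
    gi = (ys / sigma_s + 0.5).astype(int)
    gj = (xs / sigma_s + 0.5).astype(int)
    gk = (img / sigma_r + 0.5).astype(int)

    # Splat: nearest-cell accumulation into the coarse 3D (x, y, intensity) grid.
    np.add.at(data, (gi, gj, gk), img)
    np.add.at(weight, (gi, gj, gk), 1.0)

    # Blur the grid with a small Gaussian in all three dimensions.
    data = gaussian_filter(data, sigma=1.0)
    weight = gaussian_filter(weight, sigma=1.0)

    # Slice: read back at each pixel's grid position and normalize.
    out = data[gi, gj, gk] / np.maximum(weight[gi, gj, gk], 1e-6)
    return out
```

The expensive blur happens on a grid that is tiny compared to the image, which is where most of the speedup comes from.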

Curl Noise

I saw the presentation of the paper “Curl Noise for Procedural Fluid Flow” on Tuesday. I’ve wanted to use velocity fields to advect particles in games for quite a while now, including procedural noise fields. Naively I thought I would want to use Perlin noise; however, it turns out that Perlin noise velocity fields aren’t the best choice since they have ‘sinks’ where all of the particles will eventually languish. The ‘Curl Noise’ paper cleverly presents a noise which is divergence-free by construction (since it is defined as the curl of a potential field). More importantly, it looks very fluid-like and convincing; both laminar and turbulent flow are possible, with a fine degree of control. The authors also present methods to have the flow go around objects (even moving objects). This looks very relevant for games - I will read the paper more carefully when I get home for the implementation details (the paper, animations and example code are available here).
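To make the divergence-free construction concrete, here is a tiny 2D sketch (mine, not the paper’s code): take any smooth scalar potential ψ - a Perlin noise lookup would do - and define the velocity as its 2D curl, v = (∂ψ/∂y, -∂ψ/∂x), so the divergence vanishes by construction:

```python
def curl_noise_2d(psi, x, y, eps=1e-4):
    """2D divergence-free velocity from a scalar potential field psi(x, y).

    psi : any smooth scalar function, e.g. a Perlin-noise lookup (supplied by you).
    Returns (vx, vy) = (d psi / dy, -d psi / dx), estimated with central differences.
    """
    dpsi_dx = (psi(x + eps, y) - psi(x - eps, y)) / (2.0 * eps)
    dpsi_dy = (psi(x, y + eps) - psi(x, y - eps)) / (2.0 * eps)
    return dpsi_dy, -dpsi_dx


def advect(particles, psi, dt):
    """Advect a list of [x, y] particle positions through the curl-noise field."""
    for p in particles:
        vx, vy = curl_noise_2d(psi, p[0], p[1])
        p[0] += vx * dt
        p[1] += vy * dt
```

The 3D version in the paper takes the curl of a vector potential instead, and modulates the potential near obstacles to get the flow-around-objects behavior.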

Lighting Papers

I attended the Lighting Papers session on Tuesday. Of particular note were the following two papers:

The Lightspeed Automatic Interactive Preview Lighting System
This paper describes a lighting preview system from ILM. Similarly to other relighting systems (like Pixar’s LPICS) it does deferred rendering to enable fast iteration on the lighting (which happens after everything else, including camera, has been locked down). The interesting thing is that they found a way to do deferred rendering on scenes with transparency, antialiasing and motion-blur. They have a separate buffer with all their fragments and their “deep-frame-buffer” properties (this buffer has no specific 2D layout though, it’s just a bucket of fragments). Then they also have a frame buffer with AA, transparency and motion blur already resolved so that every pixel has pointers to the list of fragments and their blending weights (since AA, transparency and motion blur are ultimately just blending fragment colors). Then they do deferred shading on the ‘bucket of fragments’ and blend the results to see the final frame. This is kind of like a “deep A-buffer” for deferred rendering. This is interesting since many games have been using some kind of deferred shading lately. The paper is available here.
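Here is how I understood the data layout, sketched in Python with illustrative names (this is my reading of the talk, not ILM’s actual code): a flat “bucket” of fragments carries the deferred-shading inputs, and each pixel stores (fragment index, blend weight) pairs that already encode the resolved AA, transparency and motion blur.

```python
from dataclasses import dataclass

@dataclass
class Fragment:
    # Deep-framebuffer inputs cached per fragment (field names are illustrative).
    position: tuple
    normal: tuple
    albedo: tuple

# The "bucket of fragments": no 2D layout, just every fragment the renderer produced.
fragments: list[Fragment] = []

# The resolved frame buffer: for each pixel, pointers into the bucket plus the blend
# weights that antialiasing / transparency / motion blur have already determined.
pixel_fragment_lists: dict[tuple[int, int], list[tuple[int, float]]] = {}

def relight(shade):
    """Re-shade every cached fragment with the current lights, then blend per pixel."""
    shaded = [shade(f) for f in fragments]  # deferred shading on the whole bucket
    image = {}
    for pixel, frag_list in pixel_fragment_lists.items():
        color = [0.0, 0.0, 0.0]
        for index, weight in frag_list:
            for c in range(3):
                color[c] += weight * shaded[index][c]
        image[pixel] = tuple(color)
    return image
```

The point is that the resolve step is just weighted blending of per-fragment colors, so it can be redone essentially for free every time the lighting changes.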

Frequency Domain Normal Map Filtering
This paper proposes a nice solution to the surface minification problem, which has been bothering me for a while.
When shaders were simple, like color A (lighting) * color B (albedo), standard MIP mapping on the textures worked great. However, when you start doing high-frequency nonlinear stuff like bumpy specular, MIP mapping looks bad (aliasing) and just wrong. This is because when an object is far away, you have a lot of texels (describing various surface properties) covering a single screen pixel. What you WANT is to shade all those texels, and then average the results to get your screen pixel. What you GET with MIP mapping is to average your shader inputs and shade once. Not the same thing at all. The most obvious problem is sparkly aliasing, but even if you use one of the common hacks to remove it (like fading out specular with distance) the surface still doesn’t look like it should. The SIGGRAPH 2005 sketch “SpecVar maps” explains the problem.
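A toy numerical illustration of the difference (my own example, not from the paper), using a made-up 2x2 patch of bumpy normals and a Blinn-Phong specular term:

```python
import numpy as np

def blinn_phong_spec(n, h, power):
    return max(np.dot(n, h), 0.0) ** power

# A bumpy 2x2 patch of normals that all fall under one distant screen pixel.
normals = [np.array(v) / np.linalg.norm(v) for v in
           [(0.3, 0.0, 1.0), (-0.3, 0.0, 1.0), (0.0, 0.3, 1.0), (0.0, -0.3, 1.0)]]
h = np.array([0.0, 0.0, 1.0])  # half vector straight along the average normal
power = 100.0

# What you WANT: shade every texel, then average.
want = np.mean([blinn_phong_spec(n, h, power) for n in normals])

# What MIP mapping GIVES you: average the normals, then shade once.
avg_n = np.mean(normals, axis=0)
avg_n /= np.linalg.norm(avg_n)
got = blinn_phong_spec(avg_n, h, power)

print(want, got)  # roughly 0.013 vs 1.0 -- nowhere near the same thing
```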
My take on this is that you want to filter your BRDF and normals as one unit. Most BRDFs implicitly contain a normal distribution function (NDF), even common ones like Blinn-Phong, which describe the fact that specular highlights are the result of millions of microscopic facets with normals pointing in (semi) random directions. In fact, the shape of the highlight is the same as the shape of the distribution function of these microfacet normals. This aggregation of tiny normals into a BRDF is plainly the same thing that happens with MIP mapping, but at a different scale. You can think of the normal map as just orienting the distribution of microfacet normals so that the distribution peak points in a different direction.
So if you have a normal map as well as a map describing the NDF (like a specular power map), all you need to do is to rotate the NDFs of the averaged texels in the direction of the normals from the normal map, and then find a single NDF (subject to the restrictions of your model, so if you are using Blinn-Phong you are fitting a cosine power lobe) which is the closest fit to the averaged NDFs. This will determine your MIPped normal and specular power. You can also throw other BRDF arguments which are stored in textures into this, like specular gain maps. You do this in the tools, so you don’t have to touch your shaders and there is no runtime cost whatsoever.
The “Frequency Domain Normal Map Filtering” paper doesn’t look at the problem exactly this way, but is very close. It is based on Ravi Ramamoorthi’s frequency-space philosophy of shading, and is very rigorous. The authors propose various representations for the BRDF (Spherical Harmonics for low-frequency ones, and multiple lobes of something called a von Mises-Fisher distribution for high-frequency ones). The paper as written only works on spatially invariant BRDFs, but I asked the author who was presenting the paper and confirmed that you can fold your BRDF into the NDFs used at the top-level, and thus represent things like per-pixel specular powers. The paper, video and some shader code are available here.
The way I would apply this paper is to convert the Blinn-Phong specular map (or whatever you are using) and normal map into a single von Mises-Fisher lobe per pixel at the top level, run their algorithm, convert the result back to normal and specular power values for the closest-fitting Phong lobe (judging from renderings, a vMF lobe looks very similar to a Phong lobe, so hopefully this would not be too hard), and Bob’s your uncle. No speckly artifacts in the distance, your surface looks right from afar, etc.
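As a sketch of the kind of fitting I have in mind (my own code built on the paper’s vMF machinery, not the authors’), each texel can be carried as an unnormalized expected direction μ·A(κ), those vectors get averaged when building a MIP level, and a direction plus an effective power are recovered from the shortened average:

```python
import numpy as np

def kappa_from_power(power):
    # Rough rule of thumb: a cos^s lobe looks like a vMF lobe with kappa ~ s.
    return max(power, 1e-3)

def mean_cosine(kappa):
    # A(kappa) = coth(kappa) - 1/kappa, the expected cos(theta) under a 3D vMF lobe.
    return 1.0 / np.tanh(kappa) - 1.0 / kappa

def kappa_from_mean_cosine(r):
    # Standard approximation for inverting A(kappa) from the mean resultant length r.
    r = min(r, 0.9999)
    return (3.0 * r - r ** 3) / (1.0 - r ** 2)

def downsample_texel(normals, powers):
    """Average a block of (normal, specular power) texels into one lobe.

    normals : list of unit vectors from the normal map
    powers  : matching Blinn-Phong specular powers
    Returns (filtered_normal, filtered_kappa) for the next MIP level.
    """
    # Each texel becomes an unnormalized "expected direction" mu * A(kappa).
    vectors = [np.asarray(n) * mean_cosine(kappa_from_power(p))
               for n, p in zip(normals, powers)]
    avg = np.mean(vectors, axis=0)
    r = np.linalg.norm(avg)            # shortens as the normals disagree
    new_kappa = kappa_from_mean_cosine(r)
    return avg / max(r, 1e-6), new_kappa  # kappa maps back to a power of roughly kappa
```

The κ ≈ specular-power rule of thumb and the inversion of A(κ) are standard vMF approximations; whether a single lobe per texel holds up well enough in practice is exactly what I’m unsure about.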
It could be that using their method is overkill with a single lobe, and maybe the fitting can be done as quickly directly on Phong lobes - I don’t know, my knowledge of fitting methods is a bit shaky (which is why I went to the Least-Squares course a few days back). I welcome any comments from people who know more about this stuff.

More Courses

I went to the “Practical Global Illumination With Irradiance Caching” course. I’m interested in irradiance and radiance caching techniques because they are closely related to baked lighting techniques used in games - one could argue that baked lighting is nothing more than irradiance caching computed offline and applied to interactive renders. An example of a previous paper on this topic with interesting applications for baked lighting is “An Approximate Global Illumination System for Computer Generated Films” by Tabellion and Lamorlette (SIGGRAPH 2004).
Anyway, the course included a presentation on the original irradiance caching paper (it was called something else back then) from 1988 by Greg Ward (who also received a well-deserved Computer Graphics Achievement Award today), as well as presentations on more recent developments by Jaroslav Krivanek, Henrik Wann Jensen, Pascal Gautron and Okan Arikan. Krivanek’s radiance caching work for glossy surfaces sounded potentially interesting for baked lighting, but used too many SH coefficients to be practical for games. Greg Ward did mention a method of applying rotation gradients of irradiance to bump maps which sounded worth investigating further.
I skipped out on Okan Arikan’s section at the end to see the real-time synthesis section of the “Example-Based Texture Synthesis” course (course notes are available here). The tile-based methods are something I have been keeping an eye on for a while for possible game applications, so I was happy to hear more about the latest work on those. There were also some pixel-based synthesis techniques which appeared too slow for real time, but perhaps just right for doing at level-load time.
This afternoon I went to Hanan Samet’s spatial data structures course. The course was so full that they had to open up an overflow room, which almost filled up as well! Hanan presented a bunch of interesting data structures (I never knew there were so many different kinds of quadtrees) and algorithms. I now really want to read his new book (“Foundations of Multidimensional and Metric Data Structures”) which I bought last year but haven’t gotten around to reading yet (I feel bad about that, especially since Hanan was kind enough to sign my copy).

I leafed through GPU Gems 3 today, and noticed that chapter 24 has an interesting discussion of gamma-correctness issues. It looks pretty good (as does the book as a whole), and should be worthwhile reading for anyone interested in my recent post on gamma-correctness. NVIDIA has a page for the book here.

Least-Squares Course

In the afternoon I attended the course “Practical Least-Squares for Computer Graphics”. Least-squares minimization is a crucial CG tool, and this course had a great presentation of the subject. I highly recommend reading the course notes, which are available here.

Live from SIGGRAPH 2007

Finished an exhausting first day at the conference today - it seemed that I couldn’t step into the hallway for a minute without falling into a long and interesting conversation with a colleague I hadn’t seen since last SIGGRAPH.
I spent the morning at Pixar’s course “Anyone Can Cook - Inside Ratatouille’s Kitchen” (the course notes, and a whole bunch of other neat Pixar papers, are available here). I love going to film production courses and sketches at SIGGRAPH - film rendering is surprisingly similar to game rendering in that it is highly performance-sensitive (albeit at a very different time scale), very art-driven and focused on visual results rather than theoretical correctness. There are always some fresh insights relevant to my work. Being presented by Pixar, this course was no exception.
I was especially looking forward to this course since “Ratatouille” had some of the best CG I’d ever seen, from both a technical and creative standpoint. The movie was great fun too!
The production team had a very strong emphasis on subsurface scattering - previously they had only used it for skin; here it was used for food items and other objects. Of the two methods used, the “Gummi Light” seemed cheap enough to warrant further investigation - I’ll need to read the course notes for further details.
They carefully tweaked the exposure and lighting levels to really bring out the vibrant colors - they made sure the interesting details were in the mid-range so the colors didn’t get clipped or crushed, while leaving room for highlights.
Pixar made heavy use of reflection maps as opposed to specular lobe highlights - they found that reflection maps resulted in a much richer and more realistic look.
To speed up their renderings, they used light sources to fake light bounce and caustics.
They also discussed more artist-oriented issues like placement of objects in the scene to convey mood, and many others. Overall a fun and interesting course.

I just bumped into this gamma link on Stephen Westin’s website, and that reminded me of other interesting graphics stuff on his site which seemed worth sharing:

Of course, there are also pdfs of his papers, many of which are worth a read.

I’ve been spending a fair amount of time recently making our game’s rendering pipeline gamma-correct, which turned out to involve quite a bit more than I first suspected. I’ll give some background and then outline the issues I encountered - hopefully others trying to “gamma-correct” their renderers will find this useful.

In a renderer implemented with no special attention to gamma-correctness, the entire pipeline is in the same color space as the final display surface - usually the sRGB color space (pdf). This is nice and consistent; colors (in textures, material editor color pickers, etc.) appear the same in-game as in the authoring app. Most game artists are very used to and comfortable with working in this space. The sRGB color space has the further advantage of being (approximately) perceptually uniform. This means that constant increments in this space correspond to constant increases in perceived brightness. This maximally utilizes the available bit-depth of textures and frame buffers.

However, sRGB is not physically uniform; constant increments do not correspond to constant increases in physical intensity. This means that computing lighting and shading in this space is incorrect. Such computations should be performed in the physically uniform linear color space. Computing shading in sRGB space is like doing math in a world where 1+1=3.
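A tiny numerical illustration of that point, using the piecewise sRGB transfer functions from the spec (the scenario of two equal light contributions hitting a mid-gray texel is made up):

```python
def srgb_to_linear(c):
    """sRGB-encoded value in [0, 1] -> linear intensity (IEC 61966-2-1)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Linear intensity in [0, 1] -> sRGB-encoded value."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1.0 / 2.4) - 0.055

texel = 0.25  # an sRGB-encoded texel value

# Correct: decode to linear, sum the two equal light contributions, re-encode for display.
correct = linear_to_srgb(2.0 * srgb_to_linear(texel))

# Gamma-ignorant: sum the encoded values directly.
naive = 2.0 * texel

print(correct, naive)  # about 0.35 vs 0.5 -- the naive result is much too bright
```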

The rest of this post is a bit lengthy; more details after the jump.


Welcome!

Welcome to the RenderWonk blog! I’ll be posting my various and sundry thoughts on computer graphics here, with a bit of a slant towards things which are both real-time and physically principled. Hopefully people will be leaving some interesting comments as well…