Continuum Photography

I wanted to take a moment to share with everyone my sister and brother-in-law’s new, absolutely phenomenal photography company – Continuum Photography.

As long as I’ve known Josh I have never once seen him without a camera in his hand for more than five minutes.  They are both extraordinarily talented (in both photography and post-processing) and dedicated.  They do weddings, portraits, special events, commercial photography and more – basically anything you’d want a professional photographer for.  They’ll travel just about anywhere in the world for your events, are highly professional, have wonderful personalities and are determined to ensure your satisfaction with their services.

Neat Multiprocessing Links and Papers

Intel Threading Building Blocks –
“A primary benefit of TBB is that applications using the library automatically scale to utilize the available processing cores, with no changes to the source code or the executable program file. In other words, the same executable will create threads that utilize one core on a single core machine, two cores on a dual-core machine, four cores on a quad-core machine, etc. No recompilation of the application is required, because the library itself detects the hardware architecture and uses that information to determine how to break up the tasks for assignment to each core.” – From the Wiki Entry
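To make the quote concrete, here is a minimal sketch (not from the TBB documentation) of what the parallel_for pattern looks like in practice; the array and the per-element work are made up for illustration, and a C++11 compiler with the TBB headers is assumed.

#include <cmath>
#include <vector>
#include <tbb/blocked_range.h>
#include <tbb/parallel_for.h>

int main()
{
    std::vector<float> data(1 << 20, 1.0f);

    // TBB splits the range into chunks and hands them to its worker threads;
    // the same binary uses however many cores the machine actually has.
    tbb::parallel_for(tbb::blocked_range<size_t>(0, data.size()),
        [&](const tbb::blocked_range<size_t>& r) {
            for (size_t i = r.begin(); i != r.end(); ++i)
                data[i] = std::sqrt(data[i]) * 2.0f;
        });

    return 0;
}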


The MIT ‘Adaptive Scheduling of Parallel Jobs’ Project – “In this project, we are investigating adaptive scheduling and resource allocation in the domain of dynamic multithreading. Most existing parallel programming systems are nonadaptive, where each job is assigned a fixed number of processors. This strategy may lead to a poor use of available resources. For example, if the job’s parallelism changes while the job is executing, or if the resources available in the system change, the job is still forced to run with the same number of processors as it was allotted when it started executing. A more attractive model would be an adaptive model, where processors allotted to a job change according to the job’s parallelism and the system environment.”

Some papers associated with this project:

Provably Efficient Two-level Adaptive Scheduling

Adaptive Task Scheduling with Parallelism Feedback

Adaptive Work Stealing with Parallelism Feedback

An Empirical Evaluation of Work Stealing with Parallelism Feedback

Dynamic Processor Allocation for Adaptively Parallel Work-Stealing Jobs


The Landscape of Parallel Computing Research: A View from Berkeley


If anyone has any others they’d like to mention, please feel free to leave a message in the comments section.

Bidirectional Reflectance Distribution Functions

January 23, 2008

Anyone with a keen eye who has used any of today’s real-time rendered graphics applications (read: played any games) has probably noticed the main topic of this entry – nothing lights the way it’s supposed to! We have all these bright, shiny bricks on our virtual buildings that are more accurately modeled with a strictly diffuse Lambertian term (which we’ve been able to do in real time for close to 20 years) than with the Phong reflectance model that is used in most applications today. The Phong model could easily be tuned to capture this effect (just limit the specular and glossy components), but it appears that bricks laminated and coated with slime are more visually appealing to graphics designers than something even reasonably plausible.
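As a rough sketch of what “just limit the specular and glossy components” means, here is a Phong-style shading function where the specular weight and exponent are simply dialed down for a matte surface like brick; the parameter values are illustrative guesses, not tuned material data.

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Reflect the (unit) light direction l about the (unit) normal n.
static Vec3 reflect(const Vec3& l, const Vec3& n)
{
    float d = 2.0f * dot(n, l);
    return { d*n.x - l.x, d*n.y - l.y, d*n.z - l.z };
}

// Classic Phong: a diffuse lobe plus a specular lobe. For a dull surface
// like brick, ks and the exponent are kept tiny so the result is
// essentially Lambertian rather than a slimy highlight.
float phong(const Vec3& n, const Vec3& l, const Vec3& v,
            float kd = 0.9f,        // diffuse weight
            float ks = 0.05f,       // specular weight (small: almost no highlight)
            float shininess = 4.0f) // low exponent: broad, faint gloss
{
    float diffuse  = std::max(dot(n, l), 0.0f);
    Vec3  r        = reflect(l, n);
    float specular = std::pow(std::max(dot(r, v), 0.0f), shininess);
    return kd * diffuse + ks * specular;
}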

This is one of my biggest gripes with graphics today and has been for over five years. Not just the overdone (and disgustingly inaccurate) dramatization of lighting effects, but the entire direction lighting systems in modern real-time renderers have been heading in general. Most renderers take a single lighting model and apply it to every surface in the world with some per-surface tuning in the form of specular maps, gloss maps, etc. With a lot of fine-tuning by excellent artists, this can actually produce rather believable results. In some of the “better” rendering systems, multiple lighting models are supported, so you can draw some surfaces with the Phong lighting model, some with Ward, some with Lafortune, some with He-Torrance, and so on. You’re still basically fighting a losing battle, however, as you’re trying to model what is at minimum a 4-dimensional function (the BRDF) with a series of 2D snapshots while trying to figure out what “looks right”. The world would be a whole lot happier if developers would integrate actual BRDF data, measured from the real world, into their pipeline and at least fit some of these analytical models to it. There are literally tons of free BRDF data available to anyone with an internet connection, yet I have not seen any of it used in a commercially available real-time rendering package.

Both of these approaches rely on analytical models that approximate the true reflectance function of real surfaces – they’re guaranteed right off the bat never to be completely correct (and most of them aren’t even physically plausible in theory). This is analogous to having a set of scattered points that you’d like to fit a curve to. The problem is that we have to use a fixed function for all surfaces and fit drastically different (sometimes almost completely incoherent) data to it – there’s bound to be a wide range of surfaces that fall by the wayside in the approximation.
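The curve-fitting analogy can be made literal. Here is a toy sketch of fitting Phong parameters to measured reflectance samples by brute-force least squares; the BrdfSample structure and the grid-search ranges are my own invention, and real fitters would use a proper nonlinear optimizer and real measured data.

#include <cmath>
#include <limits>
#include <vector>

// One measured sample: cosines of the relevant angles plus the observed reflectance.
struct BrdfSample { float cosNL, cosRV, value; };

// Brute-force least-squares fit of Phong (kd, ks, exponent) to measured samples.
// A grid search keeps the idea obvious at the cost of efficiency.
void fitPhong(const std::vector<BrdfSample>& samples,
              float& bestKd, float& bestKs, float& bestExp)
{
    float bestErr = std::numeric_limits<float>::max();
    for (float kd = 0.0f; kd <= 1.0f; kd += 0.05f)
        for (float ks = 0.0f; ks <= 1.0f; ks += 0.05f)
            for (float e = 1.0f; e <= 256.0f; e *= 2.0f)
            {
                float err = 0.0f;
                for (const BrdfSample& s : samples)
                {
                    float predicted = kd * s.cosNL + ks * std::pow(s.cosRV, e);
                    float d = predicted - s.value;
                    err += d * d;
                }
                if (err < bestErr) { bestErr = err; bestKd = kd; bestKs = ks; bestExp = e; }
            }
}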

A theoretically much sounder approach is to note that every surface in the real world has its own distinct BRDF. Using one BRDF for all surfaces is just not going to cut it. Add a few more models to the mix and you’re definitely getting closer to what you want, but you’re still falling short. It would be best if we had a data-driven approach where the lighting model weren’t a mathematical function at all, as it is today, but simply some sort of “texture” (I use the term loosely here) that we reference with the incident light vector, the outgoing eye vector and the wavelength of the light we’re trying to sample. We then associate a BRDF with each texture/surface. In the lighting pass we simply reference the associated BRDF and boom – bricks look like bricks, plastic looks like plastic, wood looks like wood, etc., no matter where you put the light and no matter what direction you view the surface from. This is essentially the holy grail of reflectance modeling (at least, once it has been extended to include sub-surface scattering and anisotropic reflections).
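A rough sketch of what that data-driven lookup could look like: the BRDF becomes a table indexed by the incoming and outgoing directions (parameterized here by spherical angles), and shading becomes a fetch rather than a formula. The class, parameterization and resolution are hypothetical, purely for illustration.

#include <vector>

// Tabulated BRDF: a 4D grid over (theta_in, phi_in, theta_out, phi_out).
// In practice there would be one table per surface/material and one value
// per colour channel (or per sampled wavelength).
class TabulatedBRDF {
public:
    TabulatedBRDF(int thetaBins, int phiBins)
        : tBins(thetaBins), pBins(phiBins),
          table(size_t(thetaBins) * phiBins * thetaBins * phiBins, 0.0f) {}

    // Nearest-neighbour lookup; a real renderer would interpolate.
    float eval(float thetaIn, float phiIn, float thetaOut, float phiOut) const
    {
        const float kPi = 3.14159265f;
        int tIn  = bin(thetaIn,  0.5f * kPi, tBins);  // elevations span 0..pi/2
        int pIn  = bin(phiIn,    2.0f * kPi, pBins);  // azimuths span 0..2pi
        int tOut = bin(thetaOut, 0.5f * kPi, tBins);
        int pOut = bin(phiOut,   2.0f * kPi, pBins);
        return table[((size_t(tIn) * pBins + pIn) * tBins + tOut) * pBins + pOut];
    }

private:
    static int bin(float x, float range, int bins)
    {
        int i = int(x / range * bins);
        return i < 0 ? 0 : (i >= bins ? bins - 1 : i);
    }

    int tBins, pBins;
    std::vector<float> table;
};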

So where’s the problem? Remember, a BRDF is at minimum a 4-dimensional function, and we’re trying to tabulate all of that data at enough precision that we don’t see any (much) banding or ringing. Clearly we now have a massive compression problem: we’d like to take all this data (which can easily exceed 16MB per BRDF), compress it so the memory footprint is acceptable (say 256-512KB) while preserving the core integrity of the data, and decompress it on the fly extremely quickly. Luckily, there has been a good amount of research done in the academic community towards this end. Since the data we’re trying to compress is defined over the hemisphere around the point of incidence, spherical compression models are an obvious first place to look. Among these we have (to name a couple) spherical harmonics and wavelets. Spherical harmonics show very promising results when we limit our scope to extremely low-frequency lighting, but as soon as you raise the frequency of the lighting you start needing far too many coefficients to model the BRDF, not to mention all the banding that occurs even with a copious number of coefficients.
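The storage problem is easy to see with a back-of-envelope calculation; the resolutions below are just plausible guesses, and even a coarse 32-bin tabulation lands around 12MB, with finer sampling blowing well past the 16MB mentioned above.

#include <cstdio>

int main()
{
    // A modest 4D tabulation: 32 bins in each of the four angles,
    // 3 colour channels, 4 bytes per sample.
    const long long bins = 32;
    const long long channels = 3;
    const long long bytesPerSample = 4;
    const long long raw = bins * bins * bins * bins * channels * bytesPerSample; // ~12 MB

    const long long budget = 256 * 1024; // target footprint: 256 KB

    std::printf("raw table : %lld bytes (%.1f MB)\n", raw, raw / (1024.0 * 1024.0));
    std::printf("budget    : %lld bytes\n", budget);
    std::printf("needed compression ratio: %.0f : 1\n", double(raw) / double(budget));
    return 0;
}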

The solution, for the time being, seems to lie with wavelets or some form thereof. A number of research papers have been published recently (and many not so recently) on compressing BRDFs with wavelets and decompressing them on the fly for real-time lighting. A couple of these papers manage to get quite impressive results, too, with BRDFs compressed to a maximum of around 256KB. The real-time rendering portions of their techniques leave a bit to be desired, though – render rates are generally around 5-10 Hz for extremely simple scenes at low resolution on the fastest hardware around today. This actually boggles my mind a little, as I was able to make a few tech demos based on the techniques they describe in their papers (although using none of their code). I was able to achieve roughly equivalent results as far as quality goes, but the render rates were a full 10 orders of magnitude greater for far more complex scenes on far less sophisticated hardware. That’s the difference between a technique that never leaves the academic community and one that becomes common practice in current and next-generation games and real-time rendering.
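For a feel of the core idea, here is a minimal sketch of one level of a 1D Haar wavelet transform followed by coefficient thresholding. This is not the method from any of the papers linked below, which work over the full multi-dimensional data with more sophisticated wavelets; it only shows why smooth BRDF data compresses well.

#include <cmath>
#include <vector>

// One level of a 1D Haar wavelet transform (input length must be even).
// Averages go in the first half of the output, details in the second half.
std::vector<float> haarStep(const std::vector<float>& in)
{
    std::vector<float> out(in.size());
    size_t half = in.size() / 2;
    for (size_t i = 0; i < half; ++i)
    {
        out[i]        = (in[2*i] + in[2*i + 1]) * 0.5f;   // average
        out[half + i] = (in[2*i] - in[2*i + 1]) * 0.5f;   // detail
    }
    return out;
}

// "Compress" by zeroing detail coefficients below a threshold; for smooth
// BRDF data most details are near zero, so most of them can be dropped
// and only the surviving coefficients need to be stored.
void thresholdDetails(std::vector<float>& coeffs, float eps)
{
    for (size_t i = coeffs.size() / 2; i < coeffs.size(); ++i)
        if (std::fabs(coeffs[i]) < eps)
            coeffs[i] = 0.0f;
}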

I’m at a loss as to what exactly was hampering their implementations so drastically, as all I did was basically follow their instructions (granted, their instructions were more at the conceptual level, leaving tons of room for varied implementations), and yet we arrived at two completely different levels of performance in our final implementations. As soon as time (which is so incredibly short these days) allows, I plan to do a much more thorough investigation of what exactly is going on here.

Anyway, in summary – if we’re not there yet, we’re damn close to being able to have completely data-driven, physically and aesthetically correct BRDFs defined on a per-surface basis. No more Phong shaders run on every surface in the world, nor a list of hundreds of different light shaders that an artist has to weed through to find which one works for the surface they’re trying to model. There will be one data-driven lighting function across all surfaces that lets every surface be lit exactly as it should be, as defined by the BRDF associated with that surface.

Even if you don’t go this direction with your renderer – please, I beg of you, at least start using actual measured BRDF data to fit your lighting model to the surface you’re trying to represent. Using nothing but an artist’s eye to fit 4+ dimensional data to a single lighting model is never going to work out right. Fine-tuning the data after it’s been fit as closely as possible to the real-world data is fine; but at least use some sound, actual data somewhere in your pipeline to make your fits. It won’t in any way affect your rendering performance, but it will make everything look a few orders of magnitude better.

Mathias Paulin’s Corner

Wavelet Encoding of BRDFs for Real-Time Rendering

Real-time Rendering with Wavelet-Compressed Multi-Dimensional Datasets on the GPU

Cornell Reflectance Data

MERL

CUReT

Experimental Analysis of BRDF Models (w/ BRDF Data)

[Coming: Lots more BRDF databases as soon as I dig out my favorites from my other computer]

My Nephew, Hayden


So I finally made a blog..

…and I don’t have time to put anything on it

 “Coming soon!”
