Story

CS professor working on making computer-generated images even more realistic

Researchers from several universities, including Cornell University, have developed a graphics algorithm that not only realistically models light reflecting off complex surfaces like water, leather, glass, and metal, but also runs about 100 times faster than current state-of-the-art systems.

"This project is all about trying to get rid of the look that often gives away computer-generated images, where objects look impossibly smooth and perfect," said Cornell Computer Science Professor Steve Marschner. "Our new research paper shows that very small details that you can't see individually still make a really important contribution to the visual texture of surfaces. Our research shows how you can efficiently keep track of all these subpixel features and quickly find just the ones that actually do matter to the image."

The algorithm developed by the researchers produces much more realistic results because it breaks the surface within each pixel of an uneven or intricate object into a myriad of so-called "microfacets." Each microfacet acts like a tiny, smooth mirror, reflecting light in a particular direction. Taken together, tens of thousands of these tiny mirrors can generate a highly realistic representation of many different surfaces.
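The "tiny mirror" behavior described above is just ordinary mirror reflection about each facet's own normal. As a minimal sketch (not the researchers' actual code), the standard reflection formula r = d - 2(d·n)n captures what a single microfacet does to an incoming ray:

```python
# Sketch of a single microfacet acting as a tiny mirror:
# an incoming direction d reflects about the facet's unit
# normal n via r = d - 2(d.n)n.

def reflect(d, n):
    """Reflect direction d about unit normal n (both 3-tuples)."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2 * dot * ni for di, ni in zip(d, n))

# A ray coming straight down onto an upward-facing facet
# bounces straight back up.
down = (0.0, 0.0, -1.0)
up = (0.0, 0.0, 1.0)
print(reflect(down, up))  # -> (0.0, 0.0, 1.0)
```

Summing the contributions of tens of thousands of such facets, each with a slightly different normal, is what produces the glinty, textured look of real materials.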

Microfacets had already been used in other rendering systems, but processing them accurately required an impractical amount of number-crunching. The system introduced here reduces the necessary calculations by a factor of 100 and is only about 40 percent more demanding on hardware than the simplified "smooth surface" methods it improves upon.

Whereas other systems calculate the reflections one by one, requiring substantial computing resources, the researchers grouped microfacets into patches and then approximated the amount of light reflected by each patch. The result is not only about 100 times faster, but, for the first time, the technique can also be used in computer animations rather than only in still images, as is the case with current systems.
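The patch idea can be illustrated with a toy example. This sketch is not the paper's actual approximation (the toy "lobe" model and all names here are illustrative); it only shows the bookkeeping: replacing one expensive evaluation per facet with one evaluation per patch cuts the work by exactly the patch size, here a factor of 100:

```python
# Toy comparison: per-facet evaluation vs. per-patch approximation.
# The counter tracks how many expensive "lobe" evaluations each
# strategy performs for one shading point.
import math
import random

random.seed(0)
calls = {"lobe": 0}

def lobe(normal, half_vec, sharpness=200.0):
    """Toy specular lobe: facets aligned with half_vec reflect most light."""
    calls["lobe"] += 1
    cos = max(0.0, sum(a * b for a, b in zip(normal, half_vec)))
    return cos ** sharpness

def normalize(v):
    norm = math.sqrt(sum(c * c for c in v))
    return tuple(c / norm for c in v)

def jitter(n, scale=0.05):
    """Perturb a normal slightly, mimicking a rough microsurface."""
    return normalize([c + random.uniform(-scale, scale) for c in n])

facets = [jitter((0.0, 0.0, 1.0)) for _ in range(10000)]
half_vec = (0.0, 0.0, 1.0)

# One facet at a time: 10,000 lobe evaluations.
calls["lobe"] = 0
exact = sum(lobe(n, half_vec) for n in facets) / len(facets)
per_facet_calls = calls["lobe"]

# Patches of 100 facets: one summary normal per patch, 100 evaluations.
calls["lobe"] = 0
patch_size = 100
approx = 0.0
for i in range(0, len(facets), patch_size):
    patch = facets[i:i + patch_size]
    mean = normalize([sum(c) / len(patch) for c in zip(*patch)])
    approx += lobe(mean, half_vec) * len(patch)
approx /= len(facets)
patch_calls = calls["lobe"]

print(per_facet_calls, patch_calls)  # -> 10000 100
```

The real system's per-patch summaries are of course far more sophisticated than a single averaged normal, which is what lets it keep the accuracy while still realizing this kind of hundredfold reduction in work.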

The advance will be presented later this week at SIGGRAPH 2016 in Anaheim, California.

This article was excerpted from gizmag.com and UCSD.