[Photo: fake sushi]


Faux Real

Computer Science 50th Anniversary

Graphics and human perception researchers collaborate to blur the lines between virtual and real.

Walk into any big-box craft store and you’ll see a burst of colorful plants, flowers, trees, fruits, and greenery, but few shoppers would mistake them for the real thing. Even though many appear quite realistic, there’s something about fake plants and food that fails to fool most people. Yet it turns out to be exceedingly difficult to pin down exactly what that something is, at least from a visual perspective.

This is a question that computer graphics researchers, like Cornell’s Kavita Bala, and perception psychologists are exploring together, with the goal of creating more realistic virtual representations of complex real-life objects. Their work has applications that extend far beyond Hollywood movies and video games. Bringing the virtual closer to reality has implications for architects and for interior, fashion, and industrial designers, all of whom stand to benefit from cost-effective virtual prototypes that accurately simulate a product’s visual appearance.

“Human beings care about appearance deeply because it plays such a big role in their lives,” says Bala. “For instance, if the color of a piece of sushi is a little bit off, that might tell you not to eat it.” Perhaps the same perception is at play when we look at those fake plants and fruits.

But Bala’s research revealed that, when it comes to visual appearance, people care about some aspects and not others. For example, understanding the appearance of edges contributes significantly to a human’s understanding of space and, therefore, of shape. In other words, people look at edges to determine what object they’re seeing.

“This realization paved the way for the use of perception metrics to determine where we want to spend our computational power,” says Bala. “Do we spend a lot of time trying to compute a part of the scene that doesn’t matter or focus on the part of the image that really matters to a human?” After all, to be useful for designers, the algorithms used to render virtual images must be computationally efficient, allowing them to produce an image in a reasonable amount of time.

 

[Photo: Cornell Computer Science Professor Kavita Bala]

 

 

When it comes to producing realistic images, however, perception is only part of the equation. The other part is physics, and previous research in this area relied on simplified assumptions, according to Bala. Her work on translucency perception – with collaborators Ted Adelson of MIT, Todd Zickler of Harvard, and Steve Marschner of Cornell – revealed that when light hits complex materials like jade, marble, and cloth, internal scattering contributes greatly to their appearance, and that this scattering is extremely hard to model accurately. In fact, to better understand these physical phenomena, Bala assembled a collection of translucent stones on her office windowsill as objects of inspiration and instruction.
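Why is internal scattering so hard? A standard formulation from the graphics literature (a general statement of the problem, not a formula specific to Bala’s papers) makes the difficulty visible: the light leaving a translucent surface at one point depends on the light entering at every other point, coupled through a subsurface scattering function $S$:

$$L_o(x_o, \omega_o) = \int_A \int_{\Omega} S(x_i, \omega_i; x_o, \omega_o)\, L_i(x_i, \omega_i)\, (n \cdot \omega_i)\, d\omega_i\, dA(x_i)$$

Because the outer integral runs over the entire surface area $A$ rather than just the point being shaded, materials like jade and marble are far more expensive to render than opaque ones.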

“You could spend an incredible amount of time modeling a complex object’s physical properties,” says Bala, “but human perception has allowed us to focus those efforts on what really matters. We’ve been trying to find the connection between the physical phenomenon and the perception.” Her ultimate goal with this research, she says, is to “create images that are indistinguishable from reality.”

Bala’s body of work builds on the foundation of groundbreaking 1980s research on radiosity by Don Greenberg, director of Cornell’s Program of Computer Graphics, and the late Ken Torrance, a Cornell professor of mechanical and aerospace engineering. Their research paved the way for core rendering engines that can transform 3D models into photorealistic virtual renderings of buildings, portraying the effects of light reflecting among diffuse surfaces.
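The idea behind radiosity is a simple balance: each diffuse surface patch’s brightness is whatever it emits plus a reflected fraction of the light it gathers from every other patch, weighted by geometric “form factors.” A minimal sketch of that iteration, with toy data and illustrative names (an illustration of the technique, not the Cornell group’s actual code):

```python
import numpy as np

def solve_radiosity(emission, reflectance, form_factors, iterations=100):
    """Jacobi-style iteration on the classical radiosity balance
    B_i = E_i + rho_i * sum_j F_ij * B_j  (diffuse surfaces only)."""
    radiosity = emission.copy()
    for _ in range(iterations):
        # Each patch gathers light from all patches via form factors,
        # then reflects a fraction of it (its reflectance) back out.
        radiosity = emission + reflectance * (form_factors @ radiosity)
    return radiosity

# Toy scene: three patches, the first one emissive (a light source).
E = np.array([1.0, 0.0, 0.0])      # emitted light per patch
rho = np.array([0.0, 0.8, 0.5])    # diffuse reflectance per patch
F = np.array([[0.0, 0.4, 0.3],     # F[i][j]: fraction of the light leaving
              [0.4, 0.0, 0.3],     # patch j that arrives at patch i
              [0.3, 0.3, 0.0]])

print(solve_radiosity(E, rho, F))  # steady-state brightness per patch
```

Because every patch can illuminate every other patch, the cost grows quickly with scene complexity – exactly the pressure that later work like Lightcuts set out to relieve.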

Radiosity has its limitations, however. The algorithm works best when modeling matte surfaces, not modern buildings with complex, reflective surfaces like glass and steel. Lightcuts, the result of nearly a decade of work by Bala and her graphics colleagues, including Greenberg, advanced this technology by integrating perception metrics to render highly complex illumination robustly, scalably, and efficiently. Today, the algorithms they developed form the backbone of Autodesk’s 360 Cloud Rendering engine. Autodesk’s 3D design software has been used to design everything from New York’s Freedom Tower to Tesla electric cars.
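Lightcuts itself is considerably more sophisticated, but its central move can be sketched: cluster many point lights into a tree, then shade each point with a “cut” through that tree, refining only where a cluster’s contribution could be perceptually noticeable (the published work uses a roughly 2% relative-error criterion motivated by Weber’s law). The sketch below is a loose illustration under strong simplifications – it ignores visibility and surface orientation, and uses a cluster’s own estimate as its error proxy rather than a true upper bound:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class LightNode:
    position: Tuple[float, float, float]  # representative light's position
    intensity: float                      # summed intensity of the cluster
    children: tuple = ()                  # empty for an individual light

def attenuation(p, q):
    # Inverse-square falloff; visibility and orientation omitted.
    d2 = sum((a - b) ** 2 for a, b in zip(p, q))
    return 1.0 / max(d2, 1e-6)

def shade(point, root, rel_threshold=0.02):
    """Refine a cut through the light tree until every cluster on it
    contributes less than ~2% of the running total (a simplified
    stand-in for the paper's per-cluster error bounds)."""
    cut = [root]
    total = root.intensity * attenuation(point, root.position)
    i = 0
    while i < len(cut):
        node = cut[i]
        estimate = node.intensity * attenuation(point, node.position)
        if node.children and estimate > rel_threshold * total:
            # Cluster too coarse at this shading point: replace it
            # on the cut with its children and update the total.
            child_sum = sum(c.intensity * attenuation(point, c.position)
                            for c in node.children)
            total += child_sum - estimate
            cut[i:i + 1] = list(node.children)
        else:
            i += 1
    return total

# Two point lights clustered under a single parent node.
leaves = (LightNode((0, 2, 0), 1.0), LightNode((4, 2, 0), 1.0))
tree = LightNode(leaves[0].position, 2.0, leaves)
print(shade((0, 0, 0), tree))
```

The payoff is that scenes with enormous numbers of lights can be shaded while evaluating only a small fraction of them, with the error kept below what a viewer would notice.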

So what’s next for graphics and human perception? Bala says her future work will focus on understanding attributes of materials beyond the visual, including texture and context, to allow for reasoning about the utility, functionality, and even safety of objects.

Her current project, with Cornell colleague Noah Snavely, is to build the world’s largest open-source database of real-world materials, called OpenSurfaces. The database uses crowdsourcing to collect photographs of surfaces, like wood, metal, and stone, and to annotate them with properties including material, texture, and contextual information. The massive database can then be used to teach robots how to recognize objects and what they’re used for, so they can be better helpers for humans.
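To get a feel for the data involved, a single crowdsourced annotation might be pictured as a record like the following; the field names here are illustrative guesses, not the project’s actual schema:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SurfaceSegment:
    """One crowdsourced surface annotation (illustrative fields only,
    not the real OpenSurfaces schema)."""
    photo_url: str                  # source photograph
    polygon: List[Tuple[int, int]]  # segment outline in image coordinates
    material: str                   # e.g. "wood", "metal", "stone"
    texture: str                    # e.g. "matte", "glossy", "brushed"
    scene_context: str              # e.g. "kitchen", "office"

# Hypothetical example entry (URL and values made up).
sample = SurfaceSegment(
    photo_url="https://example.org/kitchen_042.jpg",
    polygon=[(120, 80), (410, 85), (405, 300), (118, 295)],
    material="wood",
    texture="matte",
    scene_context="kitchen",
)
```

Pairs like these are exactly what a recognition model needs: a pixel region tied to a material label and the context in which it appears.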

While Bala says the ultimate goal of her work is to blur the lines between the virtual and real worlds, the importance of perception means there will always be a place for the human element. “Humans understand what they’re seeing, robots don’t,” she says. “At least not yet.”