
Rendering Synthetic Objects into Legacy Photographs

We propose a method to realistically insert synthetic objects into existing photographs without requiring access to the scene or any additional scene measurements. With a single image and a small amount of annotation, our method creates a physical model of the scene that is suitable for realistically rendering synthetic objects with diffuse, specular, and even glowing materials while accounting for lighting interactions between the objects and the scene. We demonstrate in a user study that synthetic images produced by our method are confusable with real scenes, even for people who believe they are good at telling the difference.
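For readers curious how this kind of compositing is typically done, below is a minimal sketch of Debevec-style differential rendering, the standard technique for blending a rendered object (along with the shadows and interreflections it casts) into a background photograph. The function and array names, and the 0-to-1 mask convention, are illustrative assumptions for this sketch, not code from the authors.

```python
import numpy as np

def differential_composite(background, render_with_obj, render_no_obj, obj_mask):
    """Blend a rendered synthetic object into a background photograph.

    background      -- original photo, float array in [0, 1], shape (H, W, 3)
    render_with_obj -- rendering of the estimated scene model *with* the object
    render_no_obj   -- rendering of the same scene model *without* the object
    obj_mask        -- 1 where the synthetic object is visible, 0 elsewhere

    Pixels covered by the object come straight from the render; everywhere
    else the original photo is kept, plus the difference between the two
    renders, which transfers the object's shadows and bounced light onto
    the photo.
    """
    if obj_mask.ndim == 2:
        obj_mask = obj_mask[..., None]  # broadcast mask over color channels
    delta = render_with_obj - render_no_obj  # lighting changes caused by the object
    composite = obj_mask * render_with_obj + (1 - obj_mask) * (background + delta)
    return np.clip(composite, 0.0, 1.0)
```

The key idea is that the scene model only needs to be good enough for the *difference* between the two renders to be plausible; errors that appear identically in both renders cancel out, which is part of why a rough, annotation-driven scene model can still yield convincing composites.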

Kevin Karsch, a computer science PhD student, and his team at the University of Illinois are developing a software system that lets users easily and convincingly insert objects into photographs, complete with realistic lighting, shading, and perspective. According to their documentation, aside from a few annotations provided by the user, such as where the light sources are, the software doesn’t need to know anything about the image. Even keeping in mind how heavily demo videos tend to be polished, I’m still astonished at how seamless it all looks. This could very well be groundbreaking work.