Tuesday, 11 December 2007

Stanford's Make3D Screenshots: 2D to 3D Photos


As previously posted, Make3D is developing some very cool technology for generating 3D from 2D images. Ashutosh Saxena has permitted me to grab some screen captures from his site, and they are quite revealing. From an original 2D photograph (image 1), the team from Stanford University creates a predicted 3-D model mesh view (image 2) and then a predicted 3-D fly-through (image 3), which is the real eye candy here, let me tell you. You use your arrow keys to navigate around WITHIN the photo. Awesome.

Here is an excerpt from their site explaining how they derive 3D from 2D:

We consider the task of 3-d depth estimation from a single still image. Depth estimation is a challenging problem, since local features alone are insufficient to estimate depth at a point, and one needs to consider the global context of the image. Our model uses a hierarchical, multi-scale Markov Random Field (MRF) that incorporates multiscale local- and global-image features, and models the depths and the relation between depths at different points in the image.

We show that, even on unstructured scenes (of indoor and outdoor environments which include forests, trees, buildings, etc.), our algorithm is frequently able to recover fairly accurate depthmaps. We further propose a model that incorporates both monocular cues and stereo (triangulation) cues, to obtain significantly more accurate depth estimates than is possible using either monocular or stereo cues alone.

Pretty cool stuff! Don't fully understand it, but cool!
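
For anyone curious what that quote means in practice, here is a tiny toy sketch of the general idea (my own back-of-the-envelope illustration, definitely not the Make3D code): guess a rough depth for each image patch from its local features, then smooth those guesses so neighbouring patches agree, which is roughly the job of the MRF's pairwise terms. The features, weights and smoothing constant below are all made up purely for demonstration.

```python
# Toy illustration of "local depth guess + neighbourhood smoothing".
# Not the Make3D algorithm -- just a sketch of the idea in the excerpt.
import numpy as np

def patch_features(gray, rows, cols):
    """Crude per-patch features: mean intensity and local variance."""
    h, w = gray.shape
    ph, pw = h // rows, w // cols
    feats = np.zeros((rows, cols, 2))
    for i in range(rows):
        for j in range(cols):
            patch = gray[i*ph:(i+1)*ph, j*pw:(j+1)*pw]
            feats[i, j] = [patch.mean(), patch.var()]
    return feats

def predict_local_depth(feats, weights=np.array([0.01, 0.002]), bias=1.0):
    """Linear guess at depth from local features alone
    (a stand-in for the learned regression in the paper)."""
    return feats @ weights + bias

def smooth_depths(local, n_iters=200, lam=5.0):
    """Gaussian-MRF-style smoothing: each depth is pulled toward its own
    local estimate and toward the average of its 4 neighbours."""
    d = local.copy()
    for _ in range(n_iters):
        nb = np.zeros_like(d)
        cnt = np.zeros_like(d)
        nb[1:, :] += d[:-1, :]; cnt[1:, :] += 1
        nb[:-1, :] += d[1:, :]; cnt[:-1, :] += 1
        nb[:, 1:] += d[:, :-1]; cnt[:, 1:] += 1
        nb[:, :-1] += d[:, 1:]; cnt[:, :-1] += 1
        d = (local + lam * nb / cnt) / (1.0 + lam)
    return d

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gray = rng.random((240, 320))      # stand-in for a real photo
    feats = patch_features(gray, 24, 32)
    local = predict_local_depth(feats)
    depthmap = smooth_depths(local)
    print(depthmap.shape)              # (24, 32) coarse depthmap
```

The real system learns those weights from training data and works at multiple scales, but the "local guess, then make neighbours agree" flavour is the part the quote is describing.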

Here are some videos of the technology in action. Check out what they can do with normal two-dimensional photos!

Link: Make3D
