Michael here. We know that stereoscopic 3D has already made a huge impact on cinema over the past couple of years, and that it is well on its way to your home.
But what’s next? Prof. Ramesh Raskar (at left in the picture below), leader of the Camera Culture research group at the MIT Media Lab and Co-Director of the Center for Future Storytelling, has his eye on the future and is working with his teams to bring some incredible new technologies into existence that will transform not only how we see movies, but how we record and share the stories of our lives.
Imagine 6D or even 8D capture and display. Personal, do-it-anywhere motion capture. Cameras that will guarantee that your subject is never out of focus, and others that can even see around corners.
A huge thanks to Prof. Raskar for taking the time to talk with me about his projects!
MICHAEL: The research you are doing seems to be aimed at what's next, beyond stereo 3D. What do you think will be the "next big thing?" Will there be something beyond 3D that will continue to provide the impetus for people to go outside their homes to the movie theater?
PROF. RASKAR: Certainly. 3D is just a piece of the puzzle. The key limitation of anything on a traditional format is that you are always trading resolution against other parameters. You lose spatial resolution to see stereo on a traditionally monoscopic display - with lenticulars - or, if you do time sequential, you lose resolution in time. As resolution in space (which is pixels) and resolution in time (which is frame rate) become less of an issue, you can start trading that resolution for color, or stereo, and so on. We've already seen that a little bit, where you have six channels - some Mitsubishi and Samsung projectors support that - six primary channels, six primary colors, as opposed to three. And then either time sequential, or view sequential.
MICHAEL: The autostereoscopic displays I saw at CES were impressive, but were based on 1080p screens. I didn't get a chance to see Philips' 4K model.
PROF. RASKAR: [For autostereoscopic] you will lose resolution in time or in space. To get more views, you need more pixels, and those can only come from space or from time. The fact that autostereoscopic displays are being based on 1080p is really a legacy issue. They should really be built on 4K, or even larger screens. And that will come. It's just a matter of time.
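As a rough back-of-the-envelope illustration of the tradeoff he describes (the 8-view figure below is a hypothetical example of mine, not a number from the interview):

```python
# Back-of-the-envelope: an autostereoscopic display divides its native
# panel pixels among the views, so per-view resolution drops as the
# view count grows.

def per_view_resolution(panel_w, panel_h, n_views):
    """Approximate per-view resolution when n_views are multiplexed
    across horizontal pixels (a typical lenticular layout)."""
    return panel_w // n_views, panel_h

# Hypothetical example: 8 views on a 1080p panel vs. a 4K panel.
for name, (w, h) in {"1080p": (1920, 1080), "4K": (3840, 2160)}.items():
    vw, vh = per_view_resolution(w, h, 8)
    print(f"{name}: each of 8 views gets roughly {vw} x {vh} pixels")
```

On a 1080p panel, each of the 8 views ends up with only about 240 pixels of horizontal resolution, which is why Raskar calls 1080p a legacy starting point for these displays.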
MICHAEL: Some have the opinion that consumers won't accept 4K in an 8K world.
PROF. RASKAR: I don't think that argument is valid. At some point it's OK to trade off resolution for stereo. But right now, at 1080p, we're just at the acceptable level of resolution. So if you go from 1080p to half of that - that's not acceptable. But once you go to 8K, I think 4K would be sufficient. It's like buying a camera that is 6 megapixels versus 12 megapixels. Does anyone really care? It's just a marketing gimmick.
MICHAEL: Many have been pushing for higher frame rates in theaters - either 48 fps or 60 fps. But what about the idea of eliminating the very idea of discrete frames?
PROF. RASKAR: Yes, certainly. More people are talking about "frameless" rendering, in which certain parts of the image get updated in an asynchronous manner - so there is no sanctity of a frame. I'm a believer in that concept as well, but as soon as your frame rate becomes sufficiently fast - let's say it's 500 frames per second - things like frameless updates can be implemented with a 500 Hz projector, or 500 Hz displays. There's not that big an issue to get there. The key reason people are talking about frameless rendering is bandwidth. If you have a mostly static screen, and a character moving really fast, then you don't have to update all the pixels, because not all the pixels are changing - you can save on bandwidth.
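To make the bandwidth point concrete, here is a minimal sketch of the dirty-pixel bookkeeping behind frameless updating - a toy model I put together, not a description of any particular rendering system:

```python
import numpy as np

def changed_pixel_fraction(prev, curr, threshold=0):
    """Fraction of pixels that actually changed between two updates.
    In a frameless scheme, only these would be transmitted and
    refreshed, asynchronously, instead of resending the whole frame."""
    changed = np.abs(curr.astype(int) - prev.astype(int)) > threshold
    return changed.mean()

# Mostly static scene: a small fast-moving character touches few pixels.
h, w = 1080, 1920
prev = np.zeros((h, w), dtype=np.uint8)
curr = prev.copy()
curr[500:540, 900:960] = 255   # only a 40 x 60 region changes
print(f"Pixels to update: {changed_pixel_fraction(prev, curr):.4%}")
```

In this toy case barely 0.12% of the pixels change, so an asynchronous update scheme would move a tiny fraction of the data a full-frame refresh would.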
MICHAEL: So, where do you see the future of imaging heading?
PROF. RASKAR: We are creating things like 6 dimensional displays. You start with a 2D image. If you add horizontal parallax, that's 3D. If you add vertical parallax, that's actually 4D. And it turns out that, mathematically, if you are to express the experience of a static world, it can be represented using 4 dimensional quantities, not 3 dimensional quantities. So the experience of the world is not actually 3D, but 4D. Some people refer to this as "light fields."
The idea is that you don't represent the world with pixels, but with the rays, and what is happening along each of the rays. As you can imagine, if I move the camera around outside, I am capturing different rays of what is out there.
So that's 4D. So where do the next two dimensions come from? From the lighting. Even if I create a hologram of a flower, and right next to that I put a real flower, they still don't look that similar, because if I bring in a flashlight, the real flower will respond with beautiful caustics, specular highlights, shadows, and reflections, but the hologram will not. It will only change with viewpoint, not with lighting. So we are creating displays that also respond to light.
MICHAEL: Do you use lenticulars over the pixels that react to varying incident light angles?
PROF. RASKAR: We use microlenses - I wouldn't call them lenticulars any more, because they are pretty complex per pixel. We are basically using optics to channel the ambient light so that it reveals a particular part of the image; and for a given viewpoint, it recreates the particular 3D appearance of that object. So I think things like that will become very interesting. Not necessarily for cinema in the short run, but definitely for home entertainment - where you may shoot a video of your vacation, come home to watch it, and want to probe your video with ambient lights, or flashlights. So it's 3D, and it also responds to light the way the scene did when it was captured.
MICHAEL: So it has horizontal parallax, vertical parallax, as well as response to light.
PROF. RASKAR: The X position of the light, and the Y position of the light, make it 6D. And mathematically speaking, in our field we call the full version an 8-dimensional reflectance field: instead of just holding a flashlight, you could hold a video projector, and the frame buffer for that video projector is also 2-dimensional. So it's 4 dimensional for viewing, and 4 dimensional for illumination - put together, an 8 dimensional reflectance field. In the research world, 8 dimensional capture and display is a very important goal, and once we get there we can really create hyperrealistic imagery.
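In the notation of the graphics literature, this is the standard reflectance-field formulation (not anything specific to the Media Lab's prototypes): the 8D object linearly maps a 4D incident light field to a 4D outgoing one:

```latex
% 8D reflectance field R: incident rays (u',v',s',t') in, outgoing
% rays (u,v,s,t) out. Relighting means evaluating this integral for a
% new incident light field L_in.
L_{\text{out}}(u,v,s,t) = \int R(u,v,s,t;\,u',v',s',t')\,
    L_{\text{in}}(u',v',s',t')\, du'\, dv'\, ds'\, dt'
```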
MICHAEL: How far along on the path towards 8D are you right now?
PROF. RASKAR: Many people have looked into 8 dimensional capture already, and some of it is already being used in special effects, where they will capture an expression of a character under an array of lights and record his or her whole appearance, so that in post-production they can manipulate it. This was used in Benjamin Button. So 8D capture is already there in a way, but as you can imagine, it is very cumbersome and very expensive, and it requires very special facilities.
MICHAEL: Like the special birdcage-like contraption used by Mova.
PROF. RASKAR: Exactly. People have tried to do 8D capture, but nobody has tried to do more than a 4 dimensional display. So our group is the first one to create a 6 dimensional display. We built one a couple of years ago, which is really exciting. Mike Bove is working on holographic displays, and we are working on 6 dimensional displays. We think both are really exciting. Another field is multispectral - creating images that aren't just RGB, but have 6 or 8 color channels. If you look at the real world - a rainbow, for example, cannot be represented that well in just 3 colors. Certain butterflies, and ocean colors with a very deep cyan, are not captured well with RGB systems, since the color gamut is not covered by just three primaries. 3 colors is not enough. With 6, you can recreate much more realistic images. But the problem is that we don't have cameras that can capture in 6 colors.
MICHAEL: Are you working on capture tech for 6 channel color?
PROF. RASKAR: We are working on a device with a color synthesizer. We call it "programmable-wavelength imaging." We have a camera that can change its spectrum. Say we are shooting videos of a butterfly, or ocean colors - we would pick more cyans at capture, and at home, when viewing, we could boost those cyans.
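Here is a toy sketch of the "boost the cyans at viewing time" idea, assuming a hypothetical 6-channel capture and a hand-made channel-to-RGB mixing matrix (all names and values are illustrative, not from the actual device):

```python
import numpy as np

# Hypothetical 6 spectral channels: red, orange, yellow, green, cyan,
# blue. The mixing matrix maps them to display RGB; the weights below
# are made up for illustration only.
TO_RGB = np.array([
    [1.0, 0.7, 0.4, 0.0, 0.0, 0.0],   # R from the warm channels
    [0.0, 0.3, 0.6, 1.0, 0.5, 0.0],   # G from the middle channels
    [0.0, 0.0, 0.0, 0.0, 0.5, 1.0],   # B from the cool channels
])

def render(img6, gains=None):
    """Map an (H, W, 6) multispectral image to RGB, applying optional
    per-channel gains at viewing time (e.g. boosting cyan)."""
    g = np.ones(6) if gains is None else np.asarray(gains)
    rgb = (img6 * g) @ TO_RGB.T
    return np.clip(rgb, 0.0, 1.0)

img6 = np.random.rand(4, 4, 6)                       # stand-in capture
boosted = render(img6, gains=[1, 1, 1, 1, 1.5, 1])   # boost the cyan channel
```

The point is that with 6 recorded channels, the cyan content stays a separate degree of freedom you can turn up at playback; in an RGB capture it has already been folded into green and blue and is gone.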
MICHAEL: What can you tell me about the smart cameras and computational photography you are working on, such as the anti-motion-blur technology?
PROF. RASKAR: The basic concept is that we want to simplify capture-time decisions. The difference between professionals and amateurs is decreasing rapidly - the part that makes professionals stand apart is that they can make really good decisions at the time of the shoot. They can set the right exposure times, the right ISO, the right focus and focal plane, and so on. Amateurs are not so good at making those decisions. But if you can move all those decisions to post-capture (digitally refocus, remove motion blur, or relight the scene), then anybody with a creative spirit can create beautiful visual imagery.
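Digital refocusing is the classic example of a decision moved to post-capture. Below is a minimal shift-and-add sketch over a toy light field - the standard textbook technique, not any specific camera's pipeline; the (U, V, H, W) layout of sub-aperture views is my assumption:

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add refocusing: shift each sub-aperture view in
    proportion to its (u, v) offset from the center times the focus
    parameter alpha, then average. lightfield has shape (U, V, H, W)."""
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

lf = np.random.rand(5, 5, 64, 64)   # toy 5x5 grid of sub-aperture views
near = refocus(lf, alpha=2.0)       # bring a nearer plane into focus
far = refocus(lf, alpha=-1.0)       # bring a farther plane into focus
```

Because the whole 4D light field is recorded, the focal plane is just a parameter you choose afterward - exactly the kind of decision Raskar wants to take off the photographer's shoulders at shoot time.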
MICHAEL: You have your hands full – is there any one project that is taking up the majority of your effort at the moment?
PROF. RASKAR: Every one of them is taking up all my effort!
We’re building lots of crazy things. We are building cameras that can see through volumetric objects such as fog, and give post-capture control. We are working on a camera that can look around a corner using flash photography.
On the motion analysis side, we are building a motion capture technology called Second Skin, which enables us to finally take motion capture into the real world. All of the components we are building are very lightweight. We just mounted a camera and rig on a car, and one of our guys was running down the street - and we could capture that. That would be almost impossible to do with other methods. Being able to do on-set motion capture is the holy grail of motion capture. We don’t need dots - our technology is imperceptible. We could do a live shot on TV, where you have a character that’s wired up with our technology, and show the transformation live. James Cameron is trying similar things with his pre-vis[ualization].
I think [Second Skin] is the right direction not just for on-set motion capture, but for live pre-vis. Also, the Shader Lamps technology we have built provides live feedback to the actors about where they are. Instead of a white ball telling them their eyeline, convergence points, and so on, it will all be real. It will be an integrated environment where the actors can see themselves, the director can see, and the final viewers can see, almost in real time. That’s kind of the dream. And we have pieces of the puzzle in motion capture, in cameras, and in displays.
MICHAEL: Do you see a competition emerging between lenticular-related technologies like 6D and 8D displays, and holograms?
PROF. RASKAR: The limits of a hologram and a lenticular display are the same. In our group we have done a lot of theoretical work and mathematical analysis of light transport. There’s a concept we invented called the “augmented light field,” and our analysis shows that holograms and lenticular screens are the same in the limit - if you have millions of pixels in one dimension for lenticular screens. But clearly that is not practically possible, so that is why you want to use holograms. But we still have to figure out how to make holograms light-sensitive.
MICHAEL: With the march of 3D, and at some point the technologies you are working on, into the home, do you still see people going out to movie theaters in the future?
PROF. RASKAR: I hope so. The goal of these new technologies should not just be recording visual memories, but recording experiences and creating an atmosphere where they can be shared. And there is a very social, very human component to both capturing and sharing experiences. As long as movie makers, distributors, technologists, and average consumers keep that in mind, I think we can create a highly networked, very integrated set of technologies that will allow us to not just watch a movie, but actually live in a movie. And I think that’s exciting. We’re already at a place where the experience of a movie isn’t just the two hours in the theater. But that is still just the Hollywood model. How you can bring this into your everyday life is the exciting part.
Thanks very much to Prof. Raskar for meeting with me in his Cambridge, Massachusetts office, and to Alexandra Kahn for arranging the meeting!
For much more information on these projects and others, head over to the websites for the Camera Culture group, MIT Media Lab, and Prof. Raskar.
Popular Photography magazine has a great new article titled "The Future of Photography" that looks at Prof. Raskar's projects and where imaging will go in the next 40 years.
Also, BBC NEWS has an article on Camera Culture's 3 mm "bokodes" that can be read at a distance of several meters and can contain thousands of times as much information as traditional barcodes.
Raskar is the co-author of "Computational Photography: Mastering New Techniques for Lenses, Lighting, and Sensors."