Lytro, Inc., a technology spin-off founded by Ren Ng, has been in the news recently with its announcement of a re-focusable camera: take one “image”, and change where the focal plane lies after the fact. This is illustrated in the images above, generated from a single shot from the prototype camera. As you move from left to right across the sequence you can see the focus shifting from the front left of the image to the back right. I saw this work a few years ago at the mighty SIGGRAPH conference; it comes out of the relatively new field of “computational photography”.
All photography is computational to a degree. In the past the computation was done using lenses and chemicals: different chemical mixes and processing times led to different colour effects in the final image. Nowadays we can do things digitally, or in new combinations of the physical and the digital.
These days your digital camera will already be doing significant computation on any image. The CCD sensor in a camera is fundamentally a photon-counting device – it doesn’t know anything about colour. Colour is obtained by putting a Bayer mask over the sensor, a cunning array of red, green and blue filters. It takes computation to unravel the effect of this filter array and make a colour image. Your camera will also make a white balance correction to take account of the lighting colour. Finally, the manufacturer may apply image sharpening and colour enhancement; since colour is a remarkably complex thing, there is a range of choices about how to present measured colours. These days compact cameras often come with face recognition, a further level of computation.
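To make the demosaicing step concrete, here’s a minimal sketch in Python of recovering colour from a raw Bayer mosaic, assuming an RGGB filter layout and simple bilinear interpolation – real cameras use far cleverer algorithms, and the function name is mine, not any manufacturer’s:

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_rggb(raw):
    """Bilinear demosaic of a raw image captured through an RGGB Bayer mask.

    raw: 2D float array of sensor counts. Returns an (H, W, 3) RGB image.
    Assumes even image dimensions; edges are handled by reflection.
    """
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    # Scatter each measured value into the channel its filter passed.
    rgb[0::2, 0::2, 0] = raw[0::2, 0::2]   # red sites
    rgb[0::2, 1::2, 1] = raw[0::2, 1::2]   # green sites on red rows
    rgb[1::2, 0::2, 1] = raw[1::2, 0::2]   # green sites on blue rows
    rgb[1::2, 1::2, 2] = raw[1::2, 1::2]   # blue sites
    # Fill the gaps by averaging nearby samples: green has twice the
    # density of red/blue, so it gets a plus-shaped kernel, the others
    # a full 3x3 square.
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    rgb[..., 0] = convolve(rgb[..., 0], k_rb)
    rgb[..., 1] = convolve(rgb[..., 1], k_g)
    rgb[..., 2] = convolve(rgb[..., 2], k_rb)
    return rgb
```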
The Lytro system works by placing a microlens array in the optical train. The prototype device (described here) used a 296×296 array of lenses focusing onto a 16 million pixel medium format CCD chip, just short of 40mm x 40mm in size. The array of microlenses means that for the light landing on each pixel you can work out the direction in which it was travelling, rather than just where it landed. For this reason this type of photography is sometimes called 4D or light-field photography: two dimensions locate where on the sensor a photon lands, and another two describe the direction in which it was travelling. Once you have this truckload of data you can start doing neat tricks, such as changing the aperture and focal position of the displayed image; you can even shift the image viewpoint.
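To get a feel for how the refocusing trick works: think of the light field as a grid of sub-aperture images, one per direction, then shift each one in proportion to its position in the aperture and average. Here’s a minimal shift-and-add sketch in Python, following the standard textbook description of synthetic refocusing rather than Lytro’s actual implementation:

```python
import numpy as np

def refocus(lightfield, alpha):
    """Synthetic refocus by shift-and-add over sub-aperture images.

    lightfield: array of shape (U, V, H, W) holding one H x W sub-aperture
                image for each (u, v) direction sample.
    alpha:      refocus parameter; 0 reproduces the original focal plane,
                larger magnitudes move the synthetic focal plane.
    Returns the refocused H x W image.
    """
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the centre
            # of the aperture; np.roll (which wraps at the borders) stands
            # in for proper sub-pixel resampling.
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(lightfield[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)
```

Averaging all the views with no shift simulates the full aperture at the original focal plane; picking a subset of views simulates stopping down, which is how the aperture can be changed after the fact as well.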
As well as refocusing, there are also potential benefits in being able to take images before accurate autofocus is achieved and then using computation to recover a focused image.
The work leading to Lytro was done by Ren Ng in Marc Levoy’s group at Stanford, home of the Stanford Multi-Camera Array: dispense with all that fiddly microlens stuff and just strap together 100 separate digital video cameras! This area can also result in terrible things being done to innocent cameras: in this work on deblurring images by fluttering the shutter – opening and closing it in a pseudo-random pattern during the exposure, so that the motion blur becomes invertible – half a camera has been hacked off! Those involved have recognized this propensity and created the FrankenCamera.
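The flutter shutter idea can be seen in a few lines of Python. Ordinary motion blur is convolution with a box kernel, whose frequency response has zeros, so deconvolution is ill-posed; a pseudo-random shutter pattern has no such zeros. This is my own illustration of the argument in that paper, not the authors’ code:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 256, 32  # scanline length and blur extent, in pixels

# An ordinary shutter smears a moving scene with a box kernel; a
# fluttered shutter smears it with a pseudo-random open/closed code.
box = np.ones(k)
code = rng.integers(0, 2, size=k).astype(float)

def invertibility(kernel, n=256):
    """Smallest magnitude of the blur kernel's frequency response.

    Near-zero values mean those frequencies are destroyed by the blur
    and cannot be recovered by deconvolution.
    """
    return np.abs(np.fft.fft(kernel, n)).min()

print(invertibility(box))   # ~0: the box kernel destroys some frequencies
print(invertibility(code))  # typically well above zero: deblurring is stable
```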
Another example of computational photography is high dynamic range imaging. Normal digital images are acquired with a limited dynamic range: the ratio of the brightest thing they can show to the darkest thing they can show in a single image. The way around this is to take multiple images at different exposures and then combine them. This seems to lead, rather often, to some rather “over-cooked” shots; however, that is a matter of taste, and fundamentally there is nothing wrong with the technique. The reason such processing occurs is that although we can capture very high dynamic range images, displaying them is tricky, so we have to look for techniques to squish the range down for viewing. There’s more on high dynamic range imaging here on the Cambridge in Colour website, which I recommend for good descriptions of all manner of things relating to photography.
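As a sketch of the combining step: weight each exposure’s pixels by how well exposed they are, merge in linear radiance space, then squish the result back down for display. The Python below assumes linearised images with known exposure times; real HDR pipelines, and especially the tone-mapping stage, are considerably more elaborate:

```python
import numpy as np

def merge_exposures(images, times):
    """Merge differently exposed images of the same scene.

    images: list of linear images with values in [0, 1], same size.
    times:  exposure time of each image, in seconds.
    Returns a tone-mapped image for display.
    """
    acc = np.zeros_like(images[0], dtype=float)
    weight_sum = np.zeros_like(acc)
    for img, t in zip(images, times):
        # Trust mid-tones most; near-black and near-white pixels carry
        # little information about the true scene radiance.
        w = np.exp(-4.0 * (img - 0.5) ** 2)
        acc += w * img / t          # radiance estimate = value / exposure
        weight_sum += w
    radiance = acc / np.maximum(weight_sum, 1e-8)
    # Crude global tone mapping to squish the range down for viewing.
    return radiance / (1.0 + radiance)
```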
I’m not sure whether the Lytro camera will be a commercial success. Users of mass market cameras do not typically use the type of depth-of-field effect shown at the top of the post (and repeated ad nauseam on the Lytro website). However, the system does offer other benefits, and it may be that ultimately it ends up in cameras without us really being aware of it. It’s possible Lytro will never make a camera, but instead license the technology to big players like Canon, Panasonic or Nikon. As it stands we are part way through the journey from research demo to product.
8 comments
I love this post! What I wouldn’t give to be able to shift and resize the depth of field sometimes :-)
I must admit I’m unsure I’d ever use it. I guess if it was “always-on” it would be great for macro shots where the depth of field is shallow and the location of the depth of field could be critical.
Hi Ian,
You might want to look at these 3 photos – http://www.flickr.com/photos/64078112@N05/5864578726/ ; http://www.flickr.com/photos/64078112@N05/5864577926/ ; http://www.flickr.com/photos/64078112@N05/5864023977/
They back up your point about cameras doing “significant computation” on the images they record.
Cheers,
Mark.
@Mark – that’s a nice example. I think the white balance algorithm assumes the brightest thing in an image is the colour of the light – clearly no specular reflections there!
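For the curious, that “brightest thing is the colour of the light” idea is the classic white-patch assumption. A minimal sketch in Python (my own illustration, certainly not any camera’s actual firmware):

```python
import numpy as np

def white_patch_balance(rgb):
    """White balance under the white-patch assumption: the brightest
    pixels are taken to be neutral, so each channel is scaled to make
    them equal. Fails badly on specular reflections, as noted above.

    rgb: (H, W, 3) linear image.
    """
    # Use a high percentile per channel rather than the single maximum,
    # to be slightly robust to hot pixels.
    light = np.percentile(rgb.reshape(-1, 3), 99, axis=0)
    return rgb / np.maximum(light, 1e-8)
```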
Ian – interesting post, thanks.
Have you come across the U2 GigaPixel FanCam? At concerts they snap every area of the crowd in high resolution and then join the pictures to get a massive, zoomable image of the stadium. (The gimmick is that you can then find yourself in the crowd and place a tag to show your friends you were there.) Click here for a recent example from their concert in Miami.
Could they be using something similar to the Multi-Camera Array?
Not sure exactly how they do it, but it may be based on something like this – a motorised cradle that rasters an SLR mechanically across a scene, cheaper than an array of SLRs! http://www.gigapansystems.com/
I’m wondering about doing a blog on stitching panoramas…
Ian
You’re absolutely right. It’s a composite image, based on the same technology, but tweaked a bit to work in sport and concert environments.
Rob, if you like the tagging you’ll probably like the embedded video as well.
Have a look at the Nashville U2 show. http://www.u2.com/gigapixelfancam/110702/
There’s a blind fan with a sign front and center (you need to pan down a bit) – click on the sign.
Also – you guys might like the RedBull X-fighters Fancam, here: redbull.com/cs/Satellite/es_ES/Red-Bull-XFighters-Fancam/001243046065472
Tinus,
Thanks for dropping in to comment, looks like you have a great job there!