Unlocking Lytro's bag of tricks

The Lytro camera is finally available in Australia. Here's how the device combines a high-density image sensor, miniature optics and computational image processing to make one unique package.


The Conversation

We’ve all been there: the photo that would rock if not for the dodgy focus, highlighting a pot plant instead of your subject’s head. This week, nine-or-so months after its launch in the US, the Lytro camera is finally available to buy in Australia – bringing with it the ability to refocus pictures in incredible detail after the fact.

The rectangular-prism-shaped Lytro is an innovative device. It combines recent advances in the number of megapixels that can be packed into a digital camera’s image sensor, the fabrication of miniature optics, and computer image processing to produce an affordable new type of image recorder that the manufacturer describes as a “light-field camera”.

Of its many unique features, the ability to choose what part of the picture should be in sharpest focus after the image has been recorded is being seen as a huge selling point.

Sure … wait, what?

To understand how such “focusing after the fact” works, think about how we gather visual information about our surroundings. Each point on an object emits rays of light fanning out in many directions. Imagine these rays as straight lines travelling through space. Our eye captures a small bundle of these rays, and focuses them back to a single point on the retina.

The same process happens in a conventional camera, where the image sensor takes the place of the retina. But in the process of focusing, we lose track of exactly which rays were travelling in which directions.

It is this detail which gives us the information about how far away the object is: for a distant object, the rays of light will be more closely parallel to each other than for a nearby object, where the rays are more divergent.

This is why we have to change the focus of the camera between near and far, so that the lens supplies the right amount of change in divergence to bring the rays from a certain distance to a good focus.

But then rays from points at other distances are not brought to an exact focus, and so these points turn out blurry. What we need is a way of recording at once both the positions and the directions of arrival of all of the light rays from an object. That’s what the Lytro does.
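That distance cue can be put in rough numbers. A minimal sketch, using illustrative aperture and distance values that are assumptions for the example, not the Lytro's actual optics:

```python
import math

def ray_half_angle(aperture_mm, distance_mm):
    """Half-angle (in radians) of the cone of rays from a point source
    at distance_mm that enters a lens of diameter aperture_mm."""
    return math.atan((aperture_mm / 2) / distance_mm)

# A nearby point (0.5 m) sends a more divergent bundle of rays into a
# 25 mm aperture than a distant one (10 m) -- this divergence is the
# depth information a light-field camera records instead of discarding.
near = ray_half_angle(25, 500)     # larger angle: rays more divergent
far = ray_half_angle(25, 10_000)   # smaller angle: rays nearly parallel
```

For the distant point the bundle is roughly eight times narrower, which is why a single focus setting cannot sharpen both points at once.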

Open wide

The front lens of the Lytro is similar to an ordinary, large-aperture camera lens, forming an image near the 11-megaray image sensor. But just in front of the sensor is an array of tiny microlenses, with very short focal lengths.

Each microlens covers the area of a number of pixels on the image sensor, and focuses an image of the back of the main camera lens on to it. The pixels record which part of the main lens a ray came through: that is, the direction of arrival of the ray at the sensor, for that particular spot on the main image.

So what we end up with is a four-dimensional map of the intensities of all of the light rays entering the camera: two dimensions of space data, and two dimensions of corresponding direction data.

This data set is referred to as the “light field”. You could think of this as being like having many little cameras inside the big camera, each one recording the same scene but from a slightly different viewpoint from within the area of the main camera lens.
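That "many little cameras" picture maps naturally onto a four-dimensional array. A minimal sketch with made-up dimensions (the array shape and sample counts are illustrative assumptions, not the Lytro's actual data format):

```python
import numpy as np

# Hypothetical light field: (u, v) index the point on the main lens a ray
# passed through (its direction of arrival); (x, y) index its position in
# the image. Here: a 9x9 grid of directions, 64x64 spatial samples.
U, V, X, Y = 9, 9, 64, 64
light_field = np.random.rand(U, V, X, Y)

# One "little camera": the sub-aperture image seen through a single
# point (u, v) on the main lens.
sub_aperture_view = light_field[4, 4]          # shape (64, 64)

# A conventional photo discards direction: it is simply the sum over
# all (u, v), which is what an ordinary sensor would have recorded.
conventional_photo = light_field.sum(axis=(0, 1))
```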

With some clever digital image processing done on a microcomputer inside the camera, it’s possible to recombine all these little pictures and overlap them, so that any particular desired object in the scene is in focus. At the same time, it’s possible to correct for any aberrations in the main lens, since we know which part of the lens each ray came through.
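A common way to do this recombination is "shift-and-add" refocusing: shift each little picture in proportion to its position on the main lens, then average, so that a chosen depth plane lines up sharply. A hedged sketch of the idea (the shift parameter and integer-pixel shifts are simplifying assumptions, not the Lytro's actual processing):

```python
import numpy as np

def refocus(light_field, shift):
    """Shift-and-add refocusing of a (U, V, X, Y) light field.
    Each sub-aperture view is translated in proportion to its (u, v)
    offset from the lens centre, then all views are averaged; the
    value of `shift` selects which depth plane comes into focus."""
    U, V, X, Y = light_field.shape
    out = np.zeros((X, Y))
    for u in range(U):
        for v in range(V):
            du = int(round(shift * (u - U // 2)))
            dv = int(round(shift * (v - V // 2)))
            out += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

lf = np.random.rand(5, 5, 32, 32)
in_focus = refocus(lf, shift=0.0)   # zero shift: the original focal plane
nearer = refocus(lf, shift=1.0)     # nonzero shift: a different depth plane
```

Because the full light field is stored, `shift` can be varied after the fact, which is exactly the "focusing after the fact" the camera advertises.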

The main lens is always used at full aperture, so light-gathering power is at a maximum and the camera can operate in low light.

The trade-off is that since we are using many of the sensor pixels for direction information, we don’t have so many to use for position information, meaning the effective resolution of the image is reduced.

But modern megapixel arrays have more than enough pixels to display acceptable resolution, and again clever image processing helps to “fill in the gaps” between sampling points.
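As a back-of-envelope illustration of that trade-off (the 10 × 10 pixel patch per microlens is an assumed figure for the example, not Lytro's published specification):

```python
# Dividing an 11-million-sample sensor into direction-recording patches
# leaves far fewer samples for spatial position.
total_rays = 11_000_000          # total pixels / rays recorded
directions_per_microlens = 10 * 10   # assumed patch under each microlens

spatial_samples = total_rays // directions_per_microlens
# ~110,000 spatial samples: roughly a 0.11-megapixel raw grid, which
# interpolation between microlens samples then fills out into a
# displayable image.
```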

A picture of the future

The operating principles of the camera place it in a class midway between conventional photography, where all directional information is lost, and the technique of holography, in which complete position and direction information is recorded (in fact, it is possible to convert a Lytro image file into a synthetic hologram).

The big advantages of the Lytro are its portability, low cost, ease of use, digital recording, and ability to be used under ordinary lighting conditions.

What further developments can we expect? An obvious enhancement would be to use much larger pixel arrays (say 100 megapixels) and microlens arrays, giving higher spatial and directional resolution. The manufacturing processes for these already exist.

The Lytro has been manufactured to be very economical and easy to use. The blue and silver models come with 8GB worth of storage for A$399, while A$499 will get you the red model, which comes with 16GB of storage.

But we can expect to see more expensive versions becoming available for scientific and industrial applications.

The flexibility of being able to record the complete light field and to analyse it in detail later makes this design ripe for exploitation in ways that have yet to be explored.

Philip Wilksch is an associate professor of Applied Sciences at RMIT University. This article was originally published on The Conversation on October 10. Republished with permission.
