From Threads, Fall 2010 issue
Last spring, Professor Robert Calderbank and postdoc Marco Duarte had just begun working on a new research project on single-pixel cameras at Princeton University when Calderbank accepted a position as Dean of the Natural Sciences and Professor of Computer Science at Duke University. The move turned out to be just what the project needed: Serendipitously, Duke is home to the Duke Imaging and Spectroscopy Program (DISP), a research program building some of the most sophisticated cameras in the world.
Examples of images obtained from a low-light single-pixel camera that uses a single photomultiplier tube. From left to right: the original subject; reconstructions at 4096 pixels obtained from 800 and 1600 random samples; and a reconstruction at 65536 pixels obtained from 6600 measurements.
"It was a stroke of luck," says Duarte. "The group here has a lot in common with what we've been thinking about." Once they settled onto campus this fall, Calderbank and Duarte immediately began working with David Brady, director of DISP, and his team. "We're going to combine our algorithms with what David builds," says Calderbank.
Together, the team has a lofty goal in mind -- to advance the cutting-edge field of compressive sensing, a technique built on the insight that a small number of measurements of a compressible signal contains enough information to reconstruct a high-resolution image.
An optical table implementation of the single-pixel camera. Light from the object is focused onto a digital micromirror device (DMD), which reflects a random sample of the pixels onto a single photodiode that records the light intensity.
Duarte helped to develop the first single-pixel camera in 2006 as a graduate student at Rice University studying with electrical engineer Richard Baraniuk. Traditional digital photography is wasteful because most of the data the camera captures is discarded during compression. A single-pixel camera instead passes light from the subject across an array of tiny mirrors and captures it with one photodetector -- a single pixel. As light reflects from the subject onto the mirror array, each mirror tilts toward or away from the detector, so that each reading sums a random sample of the image pixels. Compressive sensing algorithms then process these measurements and assemble a high-resolution image from the random samples.
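The measure-then-reconstruct idea above can be sketched in a few lines of NumPy. This is a toy, hypothetical illustration, not the Rice or Duke code: the signal is a 1-D "image" with only a few nonzero pixels, each row of the random ±1 matrix plays the role of one DMD mirror pattern, and the reconstruction uses orthogonal matching pursuit, one simple algorithm from the compressive sensing literature; all sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative numbers): a 1-D "image" of n pixels that is
# k-sparse, observed through only m single-pixel measurements.
n, k, m = 256, 5, 100
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)

# Each row of Phi mimics one random mirror pattern on the DMD:
# +1 mirrors reflect light toward the photodiode, -1 away from it.
Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
y = Phi @ x  # the m photodiode readings

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily pick the pixel most
    correlated with the residual, then re-fit by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, k)
err = np.linalg.norm(x_hat - x)
```

With these (noiseless, well-over-sampled) toy parameters the greedy search recovers the sparse image essentially exactly, which is the counterintuitive point Duarte describes: far fewer measurements than pixels still pin down the image.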
Knowledge of how the image can be compressed allows the image to be recovered, says Duarte. "It allows you to unravel information into the original sequence. The whole thing is very counterintuitive," he laughs.
Current megapixel cameras can afford millions of sensors (a 5-megapixel camera has 5 million) because each silicon sensor is dirt-cheap. For other, more exotic instruments, however, such as a terahertz imaging camera, anything more than a handful of sensors can be prohibitively expensive. "Having a camera that works with just one sensor or just ten sensors makes a lot of sense in such cases," says Duarte.
Robert Calderbank
Today, Calderbank and Duarte plan to move beyond single-pixel cameras: By adding special matrices drawn from coding theory, Calderbank's area of expertise, to the compressive sensing algorithms, they hope to extend the single-pixel idea to the extremely high-resolution optical cameras (in the gigapixel-to-terapixel range) currently being designed by DISP.
Standard megapixel approaches have been successful for "point and shoot" cameras, but a gigapixel camera produces simply too much information to process that way. The matrices Calderbank designs are simple to implement in hardware and are expected to enable computationally efficient reconstruction of the observed images at multiple resolutions, depending on how much detail the camera user needs. The duo recently met with Brady and colleagues at DISP to begin work on the problem.
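The article does not specify which coding-theory matrices are involved (Calderbank's published constructions draw on classical code families such as Reed-Muller codes). As a hedged sketch of why structure matters at gigapixel scale, the example below uses a randomly subsampled Walsh-Hadamard matrix -- a structured family related to those codes -- which never needs to be stored as an m-by-n array and can be applied to all n pixels at once in O(n log n) time via a fast transform; the sizes and the choice of this particular family are illustrative assumptions.

```python
import numpy as np

def fwht(v):
    """Fast Walsh-Hadamard transform (Sylvester ordering), O(n log n).
    Applying the full structured matrix costs about n*log2(n) adds,
    versus m*n multiplies for an unstructured random matrix."""
    v = v.copy()
    n, h = len(v), 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = v[i:i + h].copy()
            b = v[i + h:i + 2 * h].copy()
            v[i:i + h] = a + b
            v[i + h:i + 2 * h] = a - b
        h *= 2
    return v / np.sqrt(n)

rng = np.random.default_rng(1)
n, m = 1024, 128                         # illustrative sizes
x = np.zeros(n)
x[rng.choice(n, 4, replace=False)] = 1.0  # a sparse toy "image"
rows = rng.choice(n, m, replace=False)    # which Hadamard rows to keep

# All m measurements in one fast transform: no measurement matrix
# is ever materialized in memory.
y = fwht(x)[rows]
```

Because the normalized Hadamard matrix is its own inverse, the same fast transform also drives the reconstruction side, which is the kind of computational shortcut an unstructured random matrix cannot offer at gigapixel sizes.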
"They are finding good applications for our mathematical framework and we are looking forward to solving problems they may have," says Duarte. "We are excited about the fit."