Computer vision, motion estimation, tracking, structure from motion, gesture and activity recognition, medical image analysis, robotics, machine learning, artificial intelligence.
My dissertation research focuses on long-sequence motion estimation. Rather than extracting instantaneous motion between each pair of frames in an image sequence, we explicitly describe and extract motion over multiple frames at once. We recover the trajectory, a curve in (x, y, t), followed by each visible point over an extended time interval. Crucially, if a point is temporarily occluded and then reappears, the trajectory we extract bridges the occlusion, automatically associating all observations of the point.
Our formulation makes it possible to extract more accurate estimates of motion by accumulating evidence over a longer time span than traditional optical flow uses. The dense, extended tracks that result from our method can be used as features for higher-level inference problems such as activity recognition or 3D reconstruction via structure from motion. Finally, estimating occlusion is an integral part of our motion estimation, which allows for video segmentation based on analysis of occlusion surfaces. Publications related to this research appeared in CVPR 2012 and ECCV 2012.
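The trajectory representation described above can be sketched as a simple data structure: per-frame (x, y) samples paired with a visibility flag, so that a temporarily occluded point keeps a single identity across the gap. The class and field names below are illustrative, not taken from the published method.

```python
from dataclasses import dataclass

@dataclass
class Trajectory:
    """A single point's path through an image sequence: a curve in (x, y, t).

    Hypothetical sketch: one (x, y) sample per frame plus a parallel
    visibility flag, so a point occluded mid-sequence is still one track.
    """
    start_frame: int
    points: list    # list of (x, y) tuples, one per frame
    visible: list   # parallel list of bools; False while occluded

    def position(self, t):
        """Return the (x, y) position at frame t (defined even during occlusion)."""
        i = t - self.start_frame
        if not (0 <= i < len(self.points)):
            raise IndexError(f"frame {t} outside trajectory span")
        return self.points[i]

    def is_visible(self, t):
        return self.visible[t - self.start_frame]

# A point tracked over 5 frames, occluded during frames 2-3:
traj = Trajectory(start_frame=0,
                  points=[(10.0, 5.0), (11.0, 5.5), (12.0, 6.0),
                          (13.0, 6.5), (14.0, 7.0)],
                  visible=[True, True, False, False, True])
print(traj.position(3))    # (13.0, 6.5): position estimated through the occlusion
print(traj.is_visible(3))  # False
```

Because the trajectory spans the occlusion, the observations before and after the gap are associated automatically, which is exactly what frame-pair optical flow cannot provide.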
I spent Summer 2008 at Intel Research Pittsburgh working with Dr. Mei Chen and collaborating with ophthalmologists at the UPMC Eye Center Glaucoma Imaging Group on a technique for removing motion artifacts from retinal spectral domain optical coherence tomography (SD-OCT) data. SD-OCT images provide rich volumetric data that aids ophthalmologists in the diagnosis and treatment of various eye diseases. However, eye movement during imaging distorts the data, making it less useful. We used deformable image registration to match SD-OCT data to an artifact-free reference image. We focused specifically on correcting the effect of microsaccades, which manifest in images as blood vessel discontinuities. Publications related to this project appeared in ISBI 2009 and MICCAI 2009.
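The registration idea can be illustrated with a toy sketch (this is a simplification for intuition, not the published method): treat each row of the distorted image independently, find the integer horizontal shift that best matches the corresponding row of the reference under a sum-of-squared-differences criterion, and apply it. Saccade-like artifacts then show up as rows needing a nonzero shift, and correcting them restores continuity of vessel-like structures.

```python
import numpy as np

def register_rows(moving, reference, max_shift=5):
    """Toy row-wise registration: for each row of `moving`, find the integer
    horizontal shift (within +/- max_shift) minimizing the sum of squared
    differences against the same row of `reference`, then apply it."""
    corrected = np.empty_like(moving)
    shifts = []
    for r in range(moving.shape[0]):
        best_shift, best_err = 0, np.inf
        for s in range(-max_shift, max_shift + 1):
            err = np.sum((np.roll(moving[r], s) - reference[r]) ** 2)
            if err < best_err:
                best_err, best_shift = err, s
        shifts.append(best_shift)
        corrected[r] = np.roll(moving[r], best_shift)
    return corrected, shifts

# Synthetic demo: a straight "vessel" column, with rows 3-4 displaced
# sideways by a simulated saccade, producing a discontinuity.
ref = np.zeros((6, 16)); ref[:, 8] = 1.0
mov = ref.copy()
mov[3:5] = np.roll(ref[3:5], 3, axis=1)
fixed, shifts = register_rows(mov, ref)
print(shifts)  # rows 3-4 get shift -3, the rest 0
```

The real problem requires a deformable (spatially varying, subpixel) transform on volumetric data, but the principle is the same: per-location displacements chosen to best agree with an artifact-free reference.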
American Sign Language
I worked on vision-based recognition of fingerspelling in American Sign Language. Our approach focused on recognizing the gestures corresponding to transitions between letters rather than the individual letters themselves. At conversational speed, it is difficult to locate individual frames corresponding to single letters because native signers do not pause at each letter. This causes traditional recognition systems trained on isolated static images to fail. Additionally, motions during the transition between letters can be used to distinguish letters whose static handshapes look similar. Our system used a hidden Markov model combined with a part-based description of hand silhouettes to recognize the letter-to-letter transition gestures. This work was published in ACCV 2009.
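To make the modeling concrete, here is a minimal sketch of how an HMM scores an observation sequence via the forward algorithm, with one model per letter-to-letter transition and classification by the higher likelihood. All names, matrices, and the quantized feature sequence below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Forward algorithm for a discrete-emission HMM.

    obs: sequence of observation symbol indices
    pi:  (n_states,) initial state distribution
    A:   (n_states, n_states) transition matrix, A[i, j] = P(state j | state i)
    B:   (n_states, n_symbols) emission matrix, B[i, k] = P(symbol k | state i)
    """
    alpha = pi * B[:, obs[0]]          # initialize with the first observation
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate, then weight by emission
    return np.log(alpha.sum())

# Two toy transition-gesture models sharing a left-to-right topology;
# classify an observation sequence by whichever scores it higher.
pi = np.array([1.0, 0.0])
A = np.array([[0.7, 0.3],
              [0.0, 1.0]])
B_ab = np.array([[0.9, 0.1], [0.2, 0.8]])  # hypothetical model for one transition
B_ac = np.array([[0.5, 0.5], [0.5, 0.5]])  # hypothetical model for another
obs = [0, 0, 1, 1]                         # quantized hand-silhouette features
score_ab = forward_log_likelihood(obs, pi, A, B_ab)
score_ac = forward_log_likelihood(obs, pi, A, B_ac)
print("first model" if score_ab > score_ac else "second model")
```

In the actual system the observations are part-based descriptions of hand silhouettes rather than a single discrete symbol, but the scoring machinery is the standard HMM forward recursion shown here.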
While at Harvey Mudd, I was part of a team of four students that designed and built a robot to enter in the Scavenger Hunt at the 2005 AAAI Mobile Robot Competition and Exhibition. Our entry could recognize a number of contest objects, follow a path marked by arrows on the floor, and retrieve objects and return them to previously specified locations. Our performance at the conference earned First Place in the Scavenger Hunt and a Technical Innovation Award.
http://www.cs.duke.edu/~sricco | Last updated: 12 December 2009