
eyeSelfie:
Self Directed Eye Alignment using Reciprocal Eye Box Imaging

Tristan Swedish, Karin Roesch, Ik-Hyun Lee, Krishna Rastogi, Shoshana Bernstein, Ramesh Raskar
MIT Media Lab - Camera Culture Group

ACM SIGGRAPH 2015. Transactions on Graphics 34(4).

Figure 1: Self-aligned, mobile, non-mydriatic fundus photography. The user is presented with an alignment-dependent fixation cue on a ray-based display. Once correctly aligned, a self-acquired retinal image is captured. This retinal image can be used for health, security, or HMD calibration. Illustration: Laura Piraino

Abstract

Eye alignment to the optical system is critical in many modern devices, including those for biometrics, gaze tracking, head-mounted displays, and health. We address alignment in the context of one of its most difficult challenges: retinal imaging. Alignment in retinal imaging is difficult even when conducted by a physician, due to the precise alignment requirements and the lack of direct control over the user's eye gaze. Self-imaging of the retina is nearly impossible.

We demonstrate that a combination of simple optics and an interactive user interface can be employed for self-imaging of the retina. To the best of our knowledge, this is the first demonstration of interactive self-imaging of the retina. Our setup avoids many of the pitfalls of typical fundus camera arrangements by providing a fixation cue that indicates to the user when they are correctly aligned. Our fixation displays also use far less light than the infrared illumination and other alignment and focusing methods used in standard retinal photography.

Citation

T. Swedish, K. Roesch, I.K. Lee, K. Rastogi, S. Bernstein, R. Raskar. eyeSelfie: Self Directed Eye Alignment using Reciprocal Eye Box Imaging. Proc. of SIGGRAPH 2015 (ACM Transactions on Graphics 34, 4), 2015.

BibTeX

@article{Swedish:2015:eyeSelfie,
  author    = {T. Swedish and K. Roesch and I.K. Lee and K. Rastogi and S. Bernstein and R. Raskar},
  title     = {eyeSelfie: Self Directed Eye Alignment using Reciprocal Eye Box Imaging},
  journal   = {ACM Trans. Graph.},
  volume    = {34},
  number    = {4},
  year      = {2015},
  publisher = {ACM},
  address   = {New York, NY, USA}
}

Overview

We frame retinal imaging as a user-interface (UI) challenge. We can create a better UI by controlling the eye box of a projected cue. Our key concept is to exploit reciprocity, "if you see me, I see you," to develop near-eye alignment displays. Two technical aspects are critical: (a) the tightness of the eye box and (b) the comfort with which the user can discover the eye box.
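
The reciprocity idea can be illustrated with a minimal geometric sketch (not the paper's implementation): the fixation cue is visible only when the pupil overlaps the projected eye box at the design eye relief, and by reciprocity that is exactly when light returning from the retina can reach the camera through the same aperture. All names and numbers below (eye relief, eye box radius, pupil radius, depth tolerance) are illustrative assumptions, not values from the paper.

```python
import numpy as np

EYE_RELIEF_MM = 20.0      # assumed distance from the device aperture to the eye
EYE_BOX_RADIUS_MM = 1.0   # assumed lateral half-width of the projected eye box
PUPIL_RADIUS_MM = 1.5     # assumed (undilated) pupil radius
DEPTH_TOLERANCE_MM = 2.0  # assumed +/- depth tolerance of a tight eye box

def cue_visible(pupil_center_xy, pupil_z):
    """Return True if the fixation cue's ray bundle enters the pupil.

    The cue is seen only when the pupil sits inside the eye box at the
    design eye relief; by reciprocity, that is also when the camera can
    image the retina through the same optical path.
    """
    lateral_offset = np.linalg.norm(np.asarray(pupil_center_xy))
    depth_error = abs(pupil_z - EYE_RELIEF_MM)
    return (lateral_offset < EYE_BOX_RADIUS_MM + PUPIL_RADIUS_MM
            and depth_error < DEPTH_TOLERANCE_MM)

# When the user reports seeing the full cue, the device is aligned too.
if cue_visible((0.5, -0.3), 20.8):
    print("Aligned: capture the retinal image")
```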

Previous pupil-forming display architectures cannot adequately address alignment in depth. We therefore analyze two ray-based designs to determine effective fixation patterns. These ray-based displays, together with a short sequence of user steps, enable lateral (x, y) and depth (z) alignment for image centering and focus. We present a highly portable prototype and demonstrate its effectiveness through a user study.
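
As a rough illustration of how a ray-based fixation pattern can convey depth error (an assumed toy model, not the paper's optical design), consider two cue rays launched from laterally offset points and aimed to cross at the design eye relief: at the correct distance they appear fused, while too near or too far they appear split, telling the user which way to move.

```python
import numpy as np

EYE_RELIEF_MM = 20.0  # assumed design eye relief

def ray_positions_at_pupil(pupil_z, launch_offsets_mm=(-3.0, 3.0)):
    """Lateral position of each cue ray in the pupil plane at distance pupil_z."""
    positions = []
    for x0 in launch_offsets_mm:
        slope = -x0 / EYE_RELIEF_MM  # aimed to cross the optical axis at the eye relief
        positions.append(x0 + slope * pupil_z)
    return np.array(positions)

for z in (15.0, 20.0, 25.0):
    separation = np.ptp(ray_positions_at_pupil(z))
    state = "fused (correct depth)" if separation < 0.1 else "split (adjust distance)"
    print(f"z = {z:4.1f} mm -> cue dots {state}, separation {separation:.2f} mm")
```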

Challenges for Self-Aligned Retinal Imaging

One of the most challenging alignment tasks is retinal imaging. Traditionally, acquiring retinal images involves complicated, difficult-to-use, and expensive equipment. These devices are designed for a trained operator, require the head to be secured, and rely on non-trivial mechanical controls to ensure precise alignment. Furthermore, most retinal imaging techniques require dilation drops to obtain a sufficiently large field of view (FOV). The combination of these factors makes self-imaging of the retina nearly impossible.

Acknowledgments

We would like to thank our volunteers for their participation in the study; R. Daniel Ferguson at Physical Sciences Inc. for advice and insightful discussions on optics development; the members of the Camera Culture Group at the MIT Media Lab for their support; our clinical collaborators (PS, TM, JP) for rating images and assessing the viability of our design; and the reviewers for their valuable feedback. Illustrations were drawn by Laura Piraino.

This work is supported by the Vodafone Americas Foundation, the MIT Deshpande Center for Technological Innovation, and funding from the US Army Research Laboratory's Army Research Office.

Contact

Technical Details
Ramesh Raskar, Associate Professor, MIT Media Lab
raskar (at) media.mit.edu

Press
Alexandra Kahn, Senior Press Liaison, MIT Media Lab
akahn (at) media.mit.edu or 617/253.0365


Recent projects in the Camera Culture group

Flutter Shutter Camera
6D Display: Lighting and Viewpoint aware displays
Bokode: Long Distance Barcodes
BiDi Screen: Touch+3D Hover on Thin LCD
NETRA: Cellphone based Eye Test


MIT Media Lab - All rights reserved.