Research into the Office of the Future and related topics is carried out at UNC by the NSF Science and Technology Center for Graphics and Visualization and by the National Tele-immersion Initiative effort.
Members
Deepak Bandyopadhyay, Wei-Chao Chen, Greg Coombe, David Gotz, Justin Hensley, Sang-Uok Kum, Scott Larsen, Kok-Lim Low, Aditi Majumder, Andrew Nashel, Srihari Sukumaran, Ruigang Yang
Henry Fuchs (PI), Herman Towles, Greg Welch
Gary Bishop, Mike Brown, Anselmo Lastra, Lars Nyland, Ramesh Raskar, Brent Seales
Stephen Brumback, Kurtis Keller
Jim Mahaney, John Thomas
Matt Cutts, Adam Lake, David Marshburn, Gopi Meenakshisundaram, Lev Stesin
Publications
Abstract of the paper: The Office of the Future: A Unified Approach to Image-Based Modeling and Spatially Immersive Displays
We introduce ideas, proposed technologies, and initial results for an office of the future that is based on a unified application of computer vision and computer graphics in a system that combines and builds upon the notions of the CAVE(tm), tiled display systems, and image-based modeling. The basic idea is to use real-time computer vision techniques to dynamically extract per-pixel depth and reflectance information for the visible surfaces in the office, including walls, furniture, objects, and people, and then to either project images on the surfaces, render images of the surfaces, or interpret changes in the surfaces. In the first case, one could designate everyday (potentially irregular) real surfaces in the office to be used as spatially immersive display surfaces, and then project high-resolution graphics and text onto those surfaces. In the second case, one could transmit the dynamic image-based models over a network for display at a remote site. Finally, one could interpret dynamic changes in the surfaces for the purposes of tracking, interaction, or augmented reality applications.
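As a rough illustration of the capture side, the sketch below (not the paper's implementation) decodes a Gray-code structured-light sequence into a projector column per camera pixel and triangulates a 3D point from it; the calibrated 3x4 matrices `P_cam` and `P_proj` and all function names are assumptions introduced for the example.

```python
import numpy as np

def decode_gray(images, thresh=0.5):
    """images: (n, H, W) captures of n Gray-code patterns, values in [0, 1].
    Returns the decoded projector column index per camera pixel, shape (H, W)."""
    bits = (images > thresh).astype(np.uint32)
    code = bits[0]            # most significant bit
    acc = bits[0].copy()      # running Gray-to-binary conversion
    for g in bits[1:]:
        acc = acc ^ g         # b_i = b_{i-1} XOR g_i
        code = (code << 1) | acc
    return code

def triangulate(u, v, col, P_cam, P_proj):
    """Linear triangulation of one 3D point from a camera pixel (u, v)
    and the matching projector column `col` (a single extra constraint)."""
    A = np.stack([
        u * P_cam[2] - P_cam[0],       # camera u constraint
        v * P_cam[2] - P_cam[1],       # camera v constraint
        col * P_proj[2] - P_proj[0],   # projector column constraint
    ])
    _, _, Vt = np.linalg.svd(A)        # null vector of the 3x4 system
    X = Vt[-1]
    return X[:3] / X[3]                # inhomogeneous 3D point
```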
To accomplish the simultaneous capture and display, we envision an office of the future where the ceiling lights are replaced by computer-controlled cameras and "smart" projectors that are used to capture dynamic image-based models with imperceptible structured light techniques, and to display high-resolution images on designated display surfaces. By doing both simultaneously on the designated display surfaces, one can dynamically adjust or autocalibrate for geometric, intensity, and resolution variations resulting from irregular or changing display surfaces, or overlapped projector images.
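One way to realize the imperceptibility described above is to embed a pattern and its complement in successive frames, so that the temporal average seen by the eye matches the intended image while a camera synchronized to individual frames can difference two captures to recover the pattern. The sketch below is a minimal illustration under that assumption, not the system's implementation; `delta` and the function names are hypothetical, and clipping near black or white breaks the exact average.

```python
import numpy as np

def embed_pair(image, pattern, delta=8):
    """image: (H, W) uint8 frame to display; pattern: (H, W) in {0, 1}.
    Returns two frames whose temporal average is (approximately) `image`."""
    img = image.astype(np.int16)
    offset = delta * (2 * pattern.astype(np.int16) - 1)   # +delta or -delta
    frame_a = np.clip(img + offset, 0, 255).astype(np.uint8)
    frame_b = np.clip(img - offset, 0, 255).astype(np.uint8)
    return frame_a, frame_b

def recover_pattern(cap_a, cap_b):
    """A frame-synchronized camera differences two captures
    to recover the embedded binary pattern."""
    return (cap_a.astype(np.int16) - cap_b.astype(np.int16)) > 0
```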
Our current approach to dynamic image-based modeling is to use an optimized structured light scheme that can capture per-pixel depth and reflectance at interactive rates. Our system implementation is not yet imperceptible, but we can demonstrate the approach in the laboratory. Our approach to rendering on the designated (potentially irregular) display surfaces is to employ a two-pass projective texture scheme to generate images that, when projected onto the surfaces, appear correct to a moving head-tracked observer. We present here an initial implementation of the overall vision, in an office-like setting, and preliminary demonstrations of our dynamic modeling and display techniques.
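The two-pass idea can be summarized as follows: pass one renders the desired image from the tracked viewer's viewpoint; pass two textures the known display-surface geometry with that image projectively and renders it from the projector's viewpoint. The sketch below shows only the per-point mapping at the core of that scheme, under the assumption of hypothetical calibrated 3x4 matrices `P_view` and `P_proj`; it is illustrative, not the paper's renderer.

```python
import numpy as np

def project(P, X):
    """Apply a 3x4 projection matrix to a 3D point; return the pixel (u, v)."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def projector_mapping(surface_points, P_view, P_proj):
    """For each 3D point on the display surface, pair the projector pixel
    that illuminates it with the pass-one (viewer-image) pixel whose color
    it should carry -- the per-point core of the two-pass scheme."""
    pairs = []
    for X in surface_points:
        proj_px = project(P_proj, X)   # where the projector must draw
        view_px = project(P_view, X)   # texture coordinate into pass one
        pairs.append((proj_px, view_px))
    return pairs
```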
08/24/00