Matthew is a Ph.D. student at the MIT Media Lab, working with Henry Holtzman's Information Ecology Group and Ramesh Raskar's Camera Culture Group. He is making the next generation of interactive and glasses-free 3D displays. Matthew graduated summa cum laude from Tufts University in 2004 with a Bachelor of Science in Computer Engineering and worked from 2004 to 2007 at Analogic Corp. as an Imaging Engineer, where he designed threat detection algorithms for Computed Tomography security scanners. In 2009 Matthew was awarded a Master's degree in Media Arts and Sciences from the MIT Media Lab. His work has been funded by the NSF and the Media Lab consortia, and has appeared in SIGGRAPH, CHI, and ICCP.
Imagine a display that behaves like a window. Glancing through it, viewers perceive a virtual 3D scene with correct parallax, without glasses or user tracking. Light that passes through the display correctly illuminates the virtual scene. We contribute a new, interactive, relightable, glasses-free 3D display. By simultaneously capturing and displaying a 4D light field, we are able to realistically modulate the incident light on rendered content. Our hardware points the way towards novel 3D interfaces, in which users interact with digital content using light widgets, physical objects, and gesture.
We introduce tensor displays: a family of glasses-free 3D displays comprising all architectures employing (a stack of) time-multiplexed LCDs illuminated by uniform or directional backlighting. We introduce a unified optimization framework that encompasses all tensor display architectures and allows for optimal glasses-free 3D display.
We demonstrate the benefits of tensor displays by constructing a reconfigurable prototype using modified LCD panels and a custom integral imaging backlight. Our efficient, GPU-based nonnegative tensor factorization (NTF) implementation enables interactive applications. Our experiments show that tensor displays enable practical architectures with extended depth of field, wider field of view, and thinner form factor, compared to prior automultiscopic displays.
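To give a flavor of the factorization at the heart of this work, here is a toy sketch in Python. It is my own simplified 2D analogue, not the paper's GPU-based NTF: it factors a small nonnegative "light field" matrix into a low-rank product using Lee-Seung multiplicative updates, which preserve nonnegativity (a physical requirement, since LCD layers can only attenuate light). The matrix sizes, rank, and iteration count are illustrative.

```python
import numpy as np

# Toy stand-in for nonnegative tensor factorization: a rank-r nonnegative
# matrix factorization L ~ A @ B via Lee-Seung multiplicative updates.
rng = np.random.default_rng(0)
L = rng.random((32, 32))           # toy target "light field" (nonnegative)
r = 4                              # low rank, e.g. few time-multiplexed frames
A = rng.random((32, r)) + 0.1      # factors initialized strictly positive
B = rng.random((r, 32)) + 0.1
eps = 1e-9                         # guards against division by zero

err_init = np.linalg.norm(L - A @ B) / np.linalg.norm(L)
for _ in range(200):
    # Multiplicative updates: ratios of nonnegative terms, so A and B
    # stay nonnegative, and the Frobenius error is nonincreasing.
    A *= (L @ B.T) / (A @ B @ B.T + eps)
    B *= (A.T @ L) / (A.T @ A @ B + eps)
err_final = np.linalg.norm(L - A @ B) / np.linalg.norm(L)
```

The real system factors a 4D light field tensor across multiple stacked layers and time frames, but the nonnegativity-preserving update structure is the same basic idea.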
We introduce polarization field displays as an optically efficient design for dynamic light field display using multi-layered LCDs. Such displays consist of a stacked set of liquid crystal panels with a single pair of crossed linear polarizers. Each layer is modeled as a spatially controllable polarization rotator, as opposed to a conventional spatial light modulator that directly attenuates light. We demonstrate interactive display using a GPU-based implementation of the simultaneous algebraic reconstruction technique (SART), supporting both polarization-based and attenuation-based architectures. Experiments characterize the accuracy of our image formation model, verifying that polarization field displays achieve increased brightness, higher resolution, and extended depth of field, as compared to existing automultiscopic display methods for dual-layer and multi-layer LCDs.
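For readers unfamiliar with SART, here is a minimal Python sketch of the technique on a toy problem. This is not the paper's GPU implementation or its image formation model: it just solves a small, consistent, nonnegative linear system A x = b with the standard row- and column-normalized SART update, clipping to keep the solution nonnegative (analogous to physically realizable layer values). Sizes and iteration count are made up.

```python
import numpy as np

# Toy SART (simultaneous algebraic reconstruction technique) example:
# reconstruct x from consistent nonnegative measurements b = A @ x_true.
rng = np.random.default_rng(1)
A = rng.random((40, 20))           # nonnegative "projection" matrix (toy)
x_true = rng.random(20)
b = A @ x_true                     # consistent measurements

row_sums = A.sum(axis=1)           # SART per-row normalization
col_sums = A.sum(axis=0)           # SART per-column normalization
x = np.zeros(20)
res_init = np.linalg.norm(b - A @ x)

for _ in range(500):
    residual = (b - A @ x) / row_sums      # normalized data mismatch
    x += (A.T @ residual) / col_sums       # back-project and update
    x = np.clip(x, 0.0, None)              # enforce nonnegative values
res_final = np.linalg.norm(b - A @ x)
```

In the display setting, A encodes how stacked layer values combine to form each emitted ray, and the same update structure maps naturally onto per-ray GPU parallelism.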
Today's 3D displays are not only light deficient, but rank deficient. We have developed a 3D display that eliminates the need for special glasses, while solving both light and rank deficiency. Until now, the commercial potential of glasses-free 3D displays, particularly those based on liquid crystal displays (LCDs), has been primarily limited by decreased image resolution and brightness compared to systems employing special eyewear.
In the Camera Culture group at the MIT Media Lab, we have found a way to increase the brightness and resolution of LCD-based, glasses-free 3D displays using a method we call Content-Adaptive Parallax Barriers. We call our new display technology High-Rank 3D or HR3D, since our display is capable of displaying a full-resolution light field.
The BiDi Screen is an example of a new type of I/O device that possesses the ability to both capture images and display them. This thin, bidirectional screen extends the latest trend in LCD devices, which has seen the incorporation of photo-diodes into every display pixel. Using a novel optical masking technique developed at the Media Lab, the BiDi Screen can capture light-field-like quantities, unlocking a wide array of applications, from 3D gesture interaction with consumer electronics devices to seamless video communication.
I participated in the 2011 MIT 100K Competition with Tiago Wright and Vikrham Anreddy. Our entry, Sensaction, was based on the BiDi Screen project, which was my master's thesis work at the MIT Media Lab.
We won the Mobile Track!
A Media Lab researcher has been kind enough to share his daily bicycle commute for research and entertainment purposes. These videos are offered under a creative commons license. The archive covers about 2.5 years of commuting.
This page describes how we turned some electronic junk we found in a spare parts bin into a twittering washing machine and dryer. With any luck, Twitter will one day be filled entirely with the banal updates of machines.
The Kaidan Magellan Turntable (MDT-19) is a motorized turntable originally intended for scientific imaging. We have one of these in the Camera Culture group, which has been passed down from generation to generation, and mostly neglected along the way. Here I host some Python code to get the table running again.
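Driving a turntable like this mostly comes down to sending commands over a serial port. Here is a minimal sketch of the pattern; the command string below is a placeholder, since the MDT-19's actual protocol is not documented here, and the port name and baud rate are likewise assumptions.

```python
def make_move_command(degrees: float) -> bytes:
    """Format a (hypothetical) absolute-move command for the turntable.

    The "MOVE" verb and CR terminator are placeholders, not the real
    MDT-19 protocol.
    """
    return f"MOVE {degrees:.1f}\r".encode("ascii")


def send(port, cmd: bytes) -> None:
    """Write a command to an open serial port object."""
    port.write(cmd)


# Usage with pyserial (requires the hardware, so not executed here):
# import serial
# with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:
#     send(port, make_move_command(90.0))
```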
Tackling the rat problem in Somerville's Union Square, one zap at a time. The Raticator is an electric rodent trap. In this project I use the Twine and Twine breakout board to make the Raticator post its kills to a Twitter feed and a custom web site. I include Python CGI code, with an extension that allows caching of Twitter results.
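The caching idea is simple: a CGI script gets hit on every page load, so rather than call the Twitter API each time, it remembers the last result for a while. Here is a minimal in-process sketch of that pattern (my own illustration, not the site's actual code; `fetch_kill_count` and its return value are placeholders for the real API call).

```python
import time


def cached(ttl_seconds):
    """Wrap a zero-argument fetch function so its result is reused for
    ttl_seconds, sparing the Twitter API on repeated page requests."""
    def wrap(fetch):
        state = {"value": None, "expires": 0.0}

        def get():
            now = time.time()
            if now >= state["expires"]:
                state["value"] = fetch()
                state["expires"] = now + ttl_seconds
            return state["value"]
        return get
    return wrap


calls = {"n": 0}  # counts real fetches, to show the cache working


@cached(ttl_seconds=60)
def fetch_kill_count():
    # Placeholder for the real Twitter API call.
    calls["n"] += 1
    return 42
```

A real CGI deployment would persist the cache to a file or similar, since each request may run in a fresh process, but the time-to-live logic is the same.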
This course provides attendees with the mathematics, software, and practical details they need to build their own low-cost stereoscopic displays. Each new concept is illustrated using a practical 3D display implemented with off-the-shelf parts. First, the course explains glasses-bound stereoscopic displays and provides detailed plans for attendees to construct their own LCD shutter glasses. Then the course explains unencumbered auto-multiscopic displays, including step-by-step directions to construct lenticular and parallax-barrier designs using modified LCDs. All the necessary software, including algorithms for rendering and calibration, is provided for each example, so attendees can quickly construct 3D displays for their own educational, amusement, and research purposes.
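To give a taste of the rendering side, here is a minimal sketch (my own simplification, not the course's provided software) of the spatial multiplexing step for a vertical parallax-barrier or lenticular design: each display column shows one of N rendered views. A real implementation also handles slanted barriers, subpixel layout, and calibration.

```python
import numpy as np


def interleave_views(views):
    """Column-wise spatial multiplexing of N equally sized views.

    Column j of the output is taken from view (j mod N), matching a
    vertical barrier whose slits repeat every N display columns.
    """
    n = len(views)
    out = np.empty_like(views[0])
    for i in range(n):
        out[:, i::n] = views[i][:, i::n]
    return out
```

For example, interleaving two views alternates their columns, so a viewer's left and right eyes, looking through the barrier from the design viewing distance, each see only their intended view.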
At SIGGRAPH 2010, the Build Your Own 3D Display course demonstrated how to construct both LCD shutter glasses and glasses-free lenticular screens, providing Matlab-based code for batch encoding of 3D imagery. This follow-up course focuses more narrowly on glasses-free displays, describing in greater detail the practical aspects of real-time, OpenGL-based encoding for such multi-view, spatially multiplexed displays.
The course reviews historical and perceptual aspects, emphasizing the goal of achieving disparity, motion parallax, accommodation, and convergence cues without glasses. It summarizes state-of-the-art methods and areas of active research, and provides a step-by-step tutorial on how to construct a lenticular display. The course concludes with an extended question-and-answer session, during which prototype hardware is available for inspection.
This is a list of a few class projects I've done with interesting results.
This site was last updated on February 28, 2014.
Copyright © 2008-2014 Matt Hirsch