Matthew is a Ph.D. graduate of the MIT Media Lab, where he worked with Henry Holtzman's Information Ecology Group and Ramesh Raskar's Camera Culture Group on the next generation of interactive and glasses-free 3D displays. Matthew graduated summa cum laude from Tufts University in 2004 with a Bachelor of Science in Computer Engineering and worked from 2004 to 2007 as an Imaging Engineer at Analogic Corp., where he designed threat-detection algorithms for computed tomography security scanners. In 2009 he was awarded a Master's degree in Media Arts and Sciences from the MIT Media Lab. His work has been funded by the NSF and the Media Lab consortia and has appeared at SIGGRAPH, CHI, and ICCP.
For about a century, researchers and experimentalists have striven to bring glasses-free 3D experiences to the big screen. Much progress has been made, and light field projection systems are now commercially available. Unfortunately, available display systems usually employ dozens of devices, making such setups costly, energy-inefficient, and bulky. We present a compressive approach to light field synthesis with projection devices. For this purpose, we propose a novel, passive screen design that is inspired by angle-expanding Keplerian telescopes. Combined with high-speed light field projection and nonnegative light field factorization, we demonstrate that compressive light field projection is possible with a single device. We build a prototype light field projector and angle-expanding screen from scratch, evaluate the system in simulation, present a variety of results, and demonstrate that the projector can alternatively achieve super-resolved and high dynamic range 2D image display when used with a conventional screen.
We present a glasses-free 3D display design with the potential to provide viewers with nearly correct accommodative depth cues, as well as motion parallax and binocular cues. Building on multilayer attenuator and directional backlight architectures, the proposed design achieves the high angular resolution needed for accommodation by placing spatial light modulators about a large lens: one conjugate to the viewer's eye, and one or more near the plane of the lens. Nonnegative tensor factorization is used to compress a high angular resolution light field into a set of masks that can be displayed on a pair of commodity LCD panels. By constraining the tensor factorization to preserve only those light rays seen by the viewer, we effectively steer narrow high resolution viewing cones into the user's eyes, allowing binocular disparity, motion parallax, and the potential for nearly correct accommodation over a wide field of view. We verify the design experimentally by focusing a camera at different depths about a prototype display, establish formal upper bounds on the design's accommodation range and diffraction-limited performance, and discuss practical limitations that must be overcome to allow the device to be used with human observers.
We propose a flexible light field camera architecture that is at the convergence of optics, sensor electronics, and applied mathematics. Through the co-design of a sensor that comprises tailored Angle Sensitive Pixels and advanced reconstruction algorithms, we show that—contrary to light field cameras today—our system can use the same measurements captured in a single sensor image to recover either a high-resolution 2D image, a low-resolution 4D light field using fast, linear processing, or a high-resolution light field using sparsity-constrained optimization.
Imagine a display that behaves like a window. Glancing through it, viewers perceive a virtual 3D scene with correct parallax, without wearing glasses and without user tracking. Light that passes through the display correctly illuminates the virtual scene. We contribute a new, interactive, relightable, glasses-free 3D display. By simultaneously capturing and displaying a 4D light field, we are able to realistically modulate the incident light on rendered content. Our hardware points the way toward novel 3D interfaces, in which users interact with digital content using light widgets, physical objects, and gestures.
We introduce tensor displays: a family of glasses-free 3D displays comprising all architectures employing (a stack of) time-multiplexed LCDs illuminated by uniform or directional backlighting. We introduce a unified optimization framework that encompasses all tensor display architectures and allows for optimal glasses-free 3D display.
We demonstrate the benefits of tensor displays by constructing a reconfigurable prototype using modified LCD panels and a custom integral imaging backlight. Our efficient, GPU-based NTF implementation enables interactive applications. In our experiments we show that tensor displays reveal practical architectures with greater depths of field, wider fields of view, and thinner form factors, compared to prior automultiscopic displays.
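To give a flavor of the factorization at the core of tensor displays, the sketch below runs a toy nonnegative CP (PARAFAC) decomposition of a third-order tensor with multiplicative updates. It stands in for the full NTF machinery only loosely: the tensor size, rank, and iteration count are illustrative choices, not values from our system, and the real implementation runs on the GPU with display-specific weighting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy third-order "light field" tensor for a three-layer display: indices
# i, j, k are the pixels a ray crosses on the back, middle, and front layers.
L = rng.random((16, 16, 16))

rank = 6  # number of time-multiplexed frames shown within one fusion period
A = rng.random((16, rank))
B = rng.random((16, rank))
C = rng.random((16, rank))

eps = 1e-9
for _ in range(100):
    # Multiplicative updates for nonnegative CP: each layer's frame patterns
    # stay nonnegative, matching the physical constraint on LCD transmittance.
    A *= np.einsum('ijk,jt,kt->it', L, B, C) / (A @ ((B.T @ B) * (C.T @ C)) + eps)
    B *= np.einsum('ijk,it,kt->jt', L, A, C) / (B @ ((A.T @ A) * (C.T @ C)) + eps)
    C *= np.einsum('ijk,it,jt->kt', L, A, B) / (C @ ((A.T @ A) * (B.T @ B)) + eps)

recon = np.einsum('it,jt,kt->ijk', A, B, C)
err = np.linalg.norm(L - recon) / np.linalg.norm(L)
print(f"relative reconstruction error: {err:.3f}")
```

Because a random tensor has no low-rank structure, the residual plateaus well above zero here; real light fields are far more compressible, which is what makes the approach practical.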
We introduce polarization field displays as an optically efficient design for dynamic light field display using multi-layered LCDs. Such displays consist of a stacked set of liquid crystal panels with a single pair of crossed linear polarizers. Each layer is modeled as a spatially controllable polarization rotator, as opposed to a conventional spatial light modulator that directly attenuates light. We demonstrate interactive display using a GPU-based SART implementation supporting both polarization-based and attenuation-based architectures. Experiments characterize the accuracy of our image formation model, verifying that polarization field displays achieve increased brightness, higher resolution, and extended depth of field compared to existing automultiscopic display methods for dual-layer and multi-layer LCDs.
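As a rough illustration of the SART solver mentioned above, the sketch below applies classic SART iterations with a nonnegativity projection to a toy linear system. The random matrix stands in for the actual polarization-field image formation model, and the relaxation factor and iteration count are illustrative, not tuned values from our implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy tomographic system: each row of A weights the "layer" variables that a
# light ray passes through; b holds the target per-ray values.
A = rng.random((200, 50))
x_true = rng.random(50)
b = A @ x_true

x = np.zeros(50)           # current estimate of the layer values
row_sums = A.sum(axis=1)   # normalization terms used by the SART update
col_sums = A.sum(axis=0)

lam = 0.5  # relaxation factor, must lie in (0, 2) for convergence
for _ in range(100):
    # SART step: back-project the row-normalized residual, column-normalized.
    residual = (b - A @ x) / row_sums
    x += lam * (A.T @ residual) / col_sums
    np.clip(x, 0.0, None, out=x)  # project onto the nonnegative orthant

print("residual norm:", np.linalg.norm(b - A @ x))
```

The nonnegativity clip after each step is what adapts the generic solver to physical layer values, which cannot be negative.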
Today's 3D displays are not only light deficient, but also rank deficient. We have developed a 3D display that eliminates the need for special glasses, while solving both light and rank deficiency. Until now, the commercial potential of glasses-free 3D displays, particularly those based on liquid crystal displays (LCDs), has been primarily limited by decreased image resolution and brightness compared to systems employing special eyewear.
In the Camera Culture group at the MIT Media Lab, we have found a way to increase the brightness and resolution of LCD-based, glasses-free 3D displays using a method we call Content-Adaptive Parallax Barriers. We call our new display technology High-Rank 3D, or HR3D, since our display is capable of displaying a full-resolution light field.
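The rank constraint at the heart of content-adaptive parallax barriers can be illustrated with a standard nonnegative matrix factorization. The toy sketch below uses Lee–Seung multiplicative updates on a random target; the matrix sizes, rank, and iteration count are chosen for illustration rather than taken from the HR3D prototype.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "light field" matrix: rows index back-layer pixels, columns front-layer
# pixels; entry (i, j) is the target radiance of the ray through that pair.
L = rng.random((64, 64))

rank = 4  # number of time-multiplexed mask pairs shown within one fusion period
F = rng.random((64, rank))  # back-layer patterns, one column per frame
G = rng.random((64, rank))  # front-layer patterns, one column per frame

eps = 1e-9
for _ in range(200):
    # Lee–Seung multiplicative updates keep F and G nonnegative, matching
    # the physical constraint that LCD transmittance lies in [0, 1].
    F *= (L @ G) / (F @ (G.T @ G) + eps)
    G *= (L.T @ F) / (G @ (F.T @ F) + eps)

err = np.linalg.norm(L - F @ G.T) / np.linalg.norm(L)
print(f"relative reconstruction error: {err:.3f}")
```

Each column pair of F and G corresponds to one displayed mask pair; the eye integrates the frames over time, so the perceived light field is the rank-limited product F Gᵀ.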
The BiDi Screen is an example of a new type of I/O device that can both capture images and display them. This thin, bidirectional screen extends the latest trend in LCD devices, which has seen the incorporation of photodiodes into every display pixel. Using a novel optical masking technique developed at the Media Lab, the BiDi Screen can capture light-field-like quantities, unlocking a wide array of applications, from 3D gesture interaction with consumer electronics devices to seamless video communication.
I participated in the 2011 MIT 100K Competition with Tiago Wright and Vikrham Anreddy. Our entry, Sensaction, was based on the BiDi Screen project, which was my Masters Thesis work at the MIT Media Lab.
We won the Mobile Track!
A Media Lab researcher has been kind enough to share his daily bicycle commute for research and entertainment purposes. These videos are offered under a Creative Commons license. The archive covers about 2.5 years of commuting.
This page describes how we turned some electronic junk we found in a spare parts bin into a twittering washing machine and dryer. With any luck, Twitter will one day be filled entirely with the banal updates of machines.
The Kaidan Magellan Turntable (MDT-19) is a motorized turntable originally intended for scientific imaging. We have one of these in the Camera Culture group, which has been passed down from generation to generation, and mostly neglected along the way. Here I host some Python code to get the table running again.
Tackling the rat problem in Somerville's Union Square, one zap at a time. The Raticator is an electric rodent trap. In this project I use the Twine and Twine breakout board to make the Raticator post its kills to a Twitter feed and a custom website. I include Python CGI code, with an extension to allow caching of Twitter results.
This course provides attendees with the mathematics, software, and practical details they need to build their own low-cost stereoscopic displays. Each new concept is illustrated using a practical 3D display implemented with off-the-shelf parts. First, the course explains glasses-bound stereoscopic displays and provides detailed plans for attendees to construct their own LCD shutter glasses. Then the course explains unencumbered auto-multiscopic displays, including step-by-step directions to construct lenticular and parallax-barrier designs using modified LCDs. All the necessary software, including algorithms for rendering and calibration, is provided for each example, so attendees can quickly construct 3D displays for their own educational, amusement, and research purposes.
At SIGGRAPH 2010, the Build Your Own 3D Display course demonstrated how to construct both LCD shutter glasses and glasses-free lenticular screens, providing Matlab-based code for batch encoding of 3D imagery. This follow-up course focuses more narrowly on glasses-free displays, describing in greater detail the practical aspects of real-time, OpenGL-based encoding for such multi-view, spatially multiplexed displays.
The course reviews historical and perceptual aspects, emphasizing the goal of achieving disparity, motion parallax, accommodation, and convergence cues without glasses. It summarizes state-of-the-art methods and areas of active research. And it provides a step-by-step tutorial on how to construct a lenticular display. The course concludes with an extended question-and-answer session, during which prototype hardware is available for inspection.
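One concrete step such a tutorial walks through is interleaving the rendered views into the panel's columns behind the lenticular sheet. The sketch below is a minimal illustration, assuming an idealized vertical lenticular sheet spanning exactly N display columns per lens and ignoring slanted lenticules and RGB subpixel structure; the function name and layout convention are hypothetical.

```python
import numpy as np

def interlace_views(views):
    """Column-interleave N rendered views for a vertical lenticular sheet.

    views: array of shape (N, H, W, 3). Assumes each lenticule covers exactly
    N display columns; the lens reverses left-right order within a lenticule,
    so view N-1 is placed under the leftmost column of each lens.
    """
    n, h, w, c = views.shape
    out = np.empty((h, w * n, c), dtype=views.dtype)
    for k in range(n):
        # Column k under each lenticule shows view n-1-k.
        out[:, k::n, :] = views[n - 1 - k]
    return out

# Four toy 2x3 solid-gray "views" at different brightness levels:
views = np.stack([np.full((2, 3, 3), v, dtype=np.uint8) for v in (0, 60, 120, 180)])
panel = interlace_views(views)
print(panel.shape)  # prints (2, 12, 3)
```

A real-time implementation does the same mapping in an OpenGL fragment shader, but the indexing logic is exactly this strided assignment.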
This course serves as an introduction to the emerging field of computational displays. The pedagogical goal of this course is to give the audience the tools necessary to expand their own research by providing step-by-step instructions on all aspects of computational displays: display optics, mathematical analysis, efficient computational processing, computational perception, and, most importantly, the effective combination of all of these. Specifically, we discuss a wide variety of applications and hardware setups of computational displays, including high dynamic range displays, advanced projection systems, and glasses-free 3D displays. The last example, computational light field displays, is discussed in detail. In the course presentation, supplementary notes, and an accompanying website, we provide source code that drives various display incarnations at real-time framerates, detailed instructions on how to fabricate novel displays from off-the-shelf components, and intuitive mathematical analyses that make it easy for researchers from various backgrounds to get started in this emerging field. We believe that computational display technology is one of the "hottest" topics in the graphics community today; with this course we aim to make it accessible to a diverse audience. While the popular, introductory-level courses "Build Your Own 3D Display" and "Build Your Own Glasses-free 3D Display", previously taught at SIGGRAPH and SIGGRAPH Asia, covered conventional 3D displays invented in the past, this course introduces what we believe to be the future of display technology. We only briefly review conventional technology, focusing instead on practical and intuitive demonstrations of how an interdisciplinary approach to display design, encompassing optics, perception, computation, and mathematical analysis, can overcome its limitations for a variety of applications.
This is a list of a few class projects I've done with interesting results.