Introduces the latest computational methods in digital imaging that overcome the traditional limitations of a camera and enable novel imaging applications. The course provides a practical guide to topics in image capture and manipulation methods for generating compelling pictures for computer graphics and for extracting scene properties for computer vision, with several examples.
Photographers, digital artists, image processing programmers and vision researchers using or building applications for digital cameras or images will learn about camera fundamentals and powerful computational tools, along with many real-world examples.
A. Introduction
Digital photography compared to film photography
Image formation, Image sensors and Optics
B. Understanding the Camera
Parameters: Pixel Resolution, Exposure, Aperture, Focus, Color depth, Dynamic range
Nonlinearities: Color response, Bayer pattern, White balance, Frequency response
Noise: Electronic sources
Time factor: Lag, Motion blur, Iris
Flash settings and operation
Filters: Polarization, Density, Decamired
In-camera techniques: Auto gain and white balance, Autofocus techniques, Bracketing
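To make the auto white balance item above concrete, here is a minimal sketch of the gray-world heuristic, one common approach (not necessarily what any particular camera implements); it assumes a floating-point RGB image with values in [0, 1], and the function name is illustrative only.

```python
# Minimal gray-world auto white balance sketch (illustrative only).
# Assumes img is a float RGB array of shape (h, w, 3) with values in [0, 1].
import numpy as np

def gray_world_white_balance(img):
    # Gray-world assumption: the average scene color is neutral gray,
    # so each channel is scaled to bring its mean to the overall mean.
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / np.maximum(channel_means, 1e-8)
    return np.clip(img * gains, 0.0, 1.0)
```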
C. Image Processing and Reconstruction Tools
Convolution: Overview
Gradient domain operations: Applications in fusion, tone mapping and matting
Graph cuts: Applications in segmentation and mosaicing
Bilateral and trilateral filters: Applications in image enhancement
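To give a flavor of the bilateral filter listed above, the following is a minimal, unoptimized sketch for a grayscale float image; real implementations use separable or vectorized approximations, and the parameter values here are only placeholders.

```python
# Minimal bilateral filter sketch: edge-preserving smoothing in which each
# pixel becomes a weighted average of its neighborhood, with weights that
# fall off with both spatial distance and intensity difference.
# Assumes a grayscale float image in [0, 1]; explicit loops kept for clarity.
import numpy as np

def bilateral_filter(img, radius=3, sigma_s=2.0, sigma_r=0.1):
    h, w = img.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))  # spatial kernel
    padded = np.pad(img, radius, mode='reflect')
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            range_w = np.exp(-(patch - img[y, x]) ** 2 / (2 * sigma_r ** 2))  # range kernel
            weights = spatial * range_w
            out[y, x] = np.sum(weights * patch) / np.sum(weights)
    return out
```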
D. Improving Camera Performance
Dynamic range: Variable exposure imaging and tone mapping (see the exposure-merge sketch after this list)
Frame rate: High-speed imaging using multiple cameras
Pixel resolution: Super-resolution using jitter
Focus: Synthetic aperture from a camera array for controlled depth of field
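As a pointer to the dynamic range topic above, here is a minimal sketch of merging a bracketed exposure stack into a relative radiance map with a simple logarithmic compression for display. It assumes registered, already-linearized float images with known exposure times, skipping camera response recovery and more principled tone mapping; the function name is illustrative.

```python
# Minimal exposure-merge sketch for high dynamic range imaging (illustrative).
# Assumes registered, linearized float images in [0, 1] and exposure times
# in seconds.
import numpy as np

def merge_exposures(images, exposure_times):
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, t in zip(images, exposure_times):
        weight = 1.0 - np.abs(2.0 * img - 1.0)   # trust mid-tones, not clipped pixels
        num += weight * (img / t)                # per-image estimate of scene radiance
        den += weight
    radiance = num / np.maximum(den, 1e-8)
    # Crude global tone mapping: logarithmic compression for display.
    display = np.log1p(radiance) / np.log1p(radiance.max())
    return radiance, display
```

The hat-shaped weight simply discounts pixels near the clipping limits of each exposure; recovering the camera response curve and better tone mapping operators are covered in the course itself.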
E. Image Processing and Reconstruction Techniques
Brief overview of Computer Vision techniques: Photometric stereo, Depth from defocus, Defogging
Scene understanding: Depth edges using multiple flashes, Reflectance using retinex
Denoising using flash and no-flash image pairs
Multi-image fusion techniques: Fusing images taken by varying focus, exposure, view, wavelength, polarization or illumination (see the focus-stacking sketch after this list)
Photomontage of time-lapse images
Matting
Omnidirectional and panoramic imaging
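To illustrate the multi-image fusion item above, here is a minimal focus-stacking sketch that fuses registered images taken at different focus settings by picking, per pixel, the frame with the highest local contrast. The sharpness measure and window size are simple placeholder choices, and NumPy/SciPy are assumed to be available.

```python
# Minimal focus-stacking sketch (illustrative): per pixel, keep the value
# from the frame with the largest local Laplacian energy, i.e. the frame
# that is locally sharpest. Assumes registered grayscale float images.
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def fuse_focus_stack(images, window=9):
    stack = np.stack(images)                                   # (n, h, w)
    sharpness = np.stack([uniform_filter(laplace(img) ** 2, size=window)
                          for img in images])                  # local contrast per frame
    best = np.argmax(sharpness, axis=0)                        # sharpest frame per pixel
    h, w = stack.shape[1:]
    rows = np.arange(h)[:, None]
    cols = np.arange(w)[None, :]
    return stack[best, rows, cols]
```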
F. Computational Imaging beyond Photography
Optical tomography, Imaging beyond the visible spectrum
Coded aperture imaging, Multiplex imaging, Wavefront coded microscopy
Scientific imaging in astronomy, medicine and geophysics
G. Future of Smart and Unconventional Cameras
Overview of HDR cameras: Spatially adaptive prototypes, Log, Pixim, Smal
Foveon X3 color imaging
Programmable SIMD camera, Jenoptik, IVP Ranger
Gradient sensing camera
Demodulating cameras (Sony IDcam, Phoci)
Future directions
Motivation for the Field
(Also see the symposium page: http://photo.csail.mit.edu/)
Digital photography is evolving rapidly with advances in electronic sensing, processing and storage. The emerging field of computational photography attempts to exploit cheaper and faster computing to overcome the physical limitations of a camera, such as dynamic range, resolution or depth of field, and to extend the possible range of applications. The computational techniques range from modifying imaging parameters during capture to modern methods for reconstructing images from the captured samples.
Many ideas in computational photography are still relatively new to digital artists and programmers, even though they are familiar with photography and image manipulation techniques. A larger problem is that a multi-disciplinary field combining ideas from computational methods and modern digital photography involves a steep learning curve. For example, photographers are not always familiar with the advanced algorithms now emerging to capture high dynamic range images, while image processing researchers face difficulty in understanding the capture and noise issues in digital cameras. These topics, however, can be learned without extensive background. The goal of this course is to present both aspects in a compact form.
The new capture methods include sophisticated sensors, electromechanical actuators and on-board processing. Examples include adapting to the sensed scene depth and illumination, taking multiple pictures while varying camera parameters, and actively modifying the flash illumination parameters. A class of modern reconstruction methods is also emerging: these methods can achieve a ‘photomontage’ by optimally fusing information from multiple images, improve the signal-to-noise ratio and extract scene features such as depth edges. The course briefly reviews fundamental topics in digital imaging and then provides a practical guide to underlying techniques beyond image processing, such as gradient domain operations, graph cuts, bilateral filters and optimization methods.
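As a small taste of the gradient domain operations mentioned above, below is a minimal sketch that reconstructs an image from a (possibly edited) gradient field by iteratively solving the discrete Poisson equation. A Jacobi iteration is used purely for clarity, whereas practical systems use multigrid or FFT-based solvers; the function and argument names are illustrative, not a description of any particular published method.

```python
# Minimal gradient-domain reconstruction sketch (illustrative): recover an
# image whose gradients approximate a target gradient field (gx, gy) by
# iterating a Jacobi solver on the discrete Poisson equation.
import numpy as np

def poisson_reconstruct(gx, gy, init, iterations=2000):
    # Divergence of the target gradient field (backward differences,
    # consistent with forward-difference gradients gx, gy).
    div = np.zeros_like(init)
    div[:, 1:] += gx[:, 1:] - gx[:, :-1]
    div[1:, :] += gy[1:, :] - gy[:-1, :]
    u = init.astype(float)          # initial guess also supplies boundary values
    for _ in range(iterations):
        # Jacobi update of interior pixels: average of the four neighbors
        # minus the divergence term.
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:] - div[1:-1, 1:-1])
    return u
```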
Participants will learn about methods in image capture and manipulation for generating compelling pictures for computer graphics and for extracting scene properties for computer vision, with several examples. We hope to provide enough fundamentals to satisfy the technical specialist without intimidating the curious graphics researcher interested in photography.
Thanks to the growing prevalence of digital cameras, there has recently been a renewed interest in research and products based on digital photography. Recent SIGGRAPH papers cover high dynamic range imaging, matting, image fusion, synthetic aperture using camera arrays, flash photography and cartooning; a more detailed list is included in the sample bibliography. I plan to give an overview of these publications and of related papers at computer vision conferences, as well as topics in scientific imaging beyond photography.
Ramesh Raskar
Senior Research Scientist
MERL - Mitsubishi Electric Research Labs
201 Broadway,
Email: raskar@merl.com, http://www.merl.com/people/raskar/
Ramesh Raskar is a Senior Research Scientist at MERL. His research interests include projector-based graphics, computational photography and non-photorealistic rendering. He has published several articles on imaging and photography, including multi-flash photography for depth edge detection, image fusion, gradient-domain imaging and projector-camera systems. His papers have appeared at SIGGRAPH, Eurographics, IEEE Visualization, CVPR and many other graphics and vision conferences. He was a course organizer at SIGGRAPH 2002, 2003 and 2004, and is a panel organizer at the Symposium on Computational Photography and Video.