429 Ryder, Northeastern University, Boston
Created: July 7, 2005
Last modified:
This course on Computational Imaging, Photography and Video introduces the latest computational methods in digital imaging that overcome the traditional limitations of a camera and enable novel imaging applications. The course provides a practical guide to topics in image capture and manipulation methods for generating compelling pictures for computer graphics and for extracting scene properties for computer vision, with several examples.
Digital photography is evolving rapidly with advances in electronic sensing, processing and storage. The emerging field of computational photography attempts to exploit increasingly cheap and fast computing to overcome the physical limitations of a camera, such as dynamic range, resolution or depth of field, and to extend the possible range of applications. These computational techniques range from modifying imaging parameters during capture to modern methods for reconstructing images from the captured samples; a small sketch of one such reconstruction follows below.
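As a flavor of how computation can overcome a fixed sensor's dynamic range, here is a minimal, illustrative sketch of merging a bracketed exposure stack into a single radiance estimate. The function name, weighting scheme and the assumption of linear (gamma-free) pixel values are all simplifications for exposition, not a specific method covered in the course.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge a bracketed exposure stack into one radiance estimate.

    images: list of float arrays in [0, 1], assumed linear (no gamma).
    exposure_times: matching list of exposure times in seconds.
    Each pixel's radiance is a weighted average of (pixel / exposure_time),
    with weights that down-play under- and over-exposed samples.
    """
    numerator = np.zeros_like(images[0], dtype=np.float64)
    denominator = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        # Hat weighting: trust mid-tones, distrust clipped or noisy pixels.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        numerator += w * (img / t)
        denominator += w
    return numerator / np.maximum(denominator, 1e-8)
```

The same idea, with a calibrated camera response curve and more careful weighting, underlies the high dynamic range capture techniques discussed in the course.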
Many ideas in computational photography are still relatively new to digital artists and programmers, even though they are familiar with conventional photography and image manipulation techniques. A larger problem is that this multi-disciplinary field, which combines computational methods with modern digital photography, involves a steep learning curve. For example, photographers are not always familiar with the advanced algorithms now emerging to capture high dynamic range images, while image processing researchers often have difficulty understanding the capture and noise issues in digital cameras. These topics, however, can be learned without extensive background. The goal of this course is to present both aspects in a compact form.
The new capture methods include sophisticated sensors, electromechanical actuators and on-board processing. Examples include adapting to the sensed scene depth and illumination, taking multiple pictures while varying camera parameters, or actively modifying the flash illumination parameters. At the same time, a class of modern reconstruction methods is emerging. These methods can produce a photomontage by optimally fusing information from multiple images, improve the signal-to-noise ratio, and extract scene features such as depth edges. The course briefly reviews fundamental topics in digital imaging and then provides a practical guide to underlying techniques beyond basic image processing, such as gradient-domain operations, graph cuts, bilateral filters and optimization methods (a small example of one of these follows).
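To make one of the named techniques concrete, here is a minimal, brute-force sketch of a bilateral filter for a single-channel image. The function and parameter names are illustrative rather than drawn from any particular library, and a practical implementation would use a faster approximation than this double loop.

```python
import numpy as np

def bilateral_filter(img, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Brute-force bilateral filter for a single-channel float image in [0, 1].

    Each output pixel is a weighted average of its neighborhood, where the
    weight falls off with both spatial distance (sigma_s) and intensity
    difference (sigma_r), so edges are preserved while flat regions smooth.
    """
    h, w = img.shape
    padded = np.pad(img, radius, mode='reflect')
    out = np.zeros_like(img)
    # Precompute the spatial Gaussian over the (2r+1) x (2r+1) window.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range weight: penalize neighbors with a different intensity.
            rng = np.exp(-((window - img[i, j])**2) / (2.0 * sigma_r**2))
            weights = spatial * rng
            out[i, j] = np.sum(weights * window) / np.sum(weights)
    return out
```

Edge-preserving filters of this kind appear repeatedly in the course, for example in tone mapping of high dynamic range images and in flash/no-flash image fusion.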
Participants will learn image capture and manipulation methods for generating compelling pictures for computer graphics and for extracting scene properties for computer vision, illustrated with several examples. We hope to provide enough fundamentals to satisfy the technical specialist without intimidating the curious graphics researcher interested in photography.
Thanks to the growing prevalence of digital cameras, there has recently been renewed interest in digital photography-based research and products. Recent SIGGRAPH papers include work on high dynamic range imaging, matting, image fusion, synthetic aperture using camera arrays, flash photography and cartooning; a more detailed list is included in the sample bibliography. I plan to give an overview of these publications and of papers at computer vision conferences, as well as topics in scientific imaging beyond photography.
This document, and all documents on this website, may be modified from time to time; be sure to reload documents on occasion and check the "last modified" date against any printed version you may have.
Office Hours (to be announced).
Syllabus (subject to change)
Reading List (Technical papers for reference)
Projects (Coming soon)
Graduate students' additional work (Coming soon). Can also count as extra credit for any student.
Final Projects