A light projector is a 3D perspective projection device. Traditionally, however,
a projector is treated like any other two-dimensional display device, such as a CRT
or LCD, and used to create flat, usually rectangular images. My work exploits the
notion of a projector as the dual of a camera. The analytical projection model, together
with a geometric representation of the display surface, provides a conceptual framework
for all projector-based applications. This is a fundamentally different approach to the
classic applications, but one with greater freedom and flexibility, and it enables an
interesting class of rendering and visualization methods. The conceptual framework has
resulted in new research directions in Computer Vision and Image-Based Rendering.
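To make the duality concrete, here is a minimal sketch (the intrinsics K, rotation R and
translation t below are illustrative, assumed values): the same 3x4 pinhole projection
matrix used to model a camera maps world points to projector pixel coordinates.

    import numpy as np

    # A projector modeled as the dual of a pinhole camera: the same 3x4
    # matrix P = K [R | t] that maps world points to camera pixels also
    # maps world points to projector pixels (all values are illustrative).
    K = np.array([[1400.0,    0.0, 512.0],
                  [   0.0, 1400.0, 384.0],
                  [   0.0,    0.0,   1.0]])   # assumed intrinsics
    R = np.eye(3)                              # assumed projector orientation
    t = np.array([[0.0], [0.0], [2.0]])        # assumed projector translation
    P = K @ np.hstack([R, t])                  # 3x4 projection matrix

    def project(point_world):
        """Map a 3D world point to projector pixel coordinates (u, v)."""
        u, v, w = P @ np.append(point_world, 1.0)
        return u / w, v / w

    # The projector pixel that illuminates a given point on the display surface.
    print(project(np.array([0.1, -0.2, 3.0])))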
Features such as silhouette edges, ridges, valleys and intersections
of geometric primitives are useful in technical illustrations and artistic effects. My
approach renders these features in real time without requiring
connectivity information in polygonal scenes.
Silhouette Edges : An image-precision technique that is surprisingly simple to
implement (a sketch of the idea appears below)
Cross-hatching : Effects that simulate strokes to convey surface orientation
or shading on objects
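As a rough illustration of the image-precision idea for silhouettes, the sketch below
marks pixels where a rendered depth buffer has a large discontinuity, so mesh
connectivity is never consulted. This is an assumed, common formulation chosen for
illustration, not necessarily the exact technique in my papers.

    import numpy as np

    def silhouette_mask(depth, threshold=0.1):
        """Mark pixels where the depth buffer jumps sharply.

        `depth` is a 2D rendered depth buffer; `threshold` is an assumed,
        scene-dependent tuning value.  Silhouette edges show up as depth
        discontinuities between neighboring pixels.
        """
        dx = np.abs(np.diff(depth, axis=1, prepend=depth[:, :1]))
        dy = np.abs(np.diff(depth, axis=0, prepend=depth[:1, :]))
        return (dx > threshold) | (dy > threshold)

    # Toy depth buffer: a near object (depth 1) against a far background (depth 5).
    depth = np.full((8, 8), 5.0)
    depth[2:6, 2:6] = 1.0
    print(silhouette_mask(depth).astype(int))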
Image-Based Rendering
Using images as input, novel views can be generated by interpolating
(or extrapolating) spatially or temporally.
Prediction in Image Space : Traditionally, the user's location is predicted in
world space to reduce delay in interactive rendering. In this project, I predict the
location of image features by tracking them in image space, then warp the current image
until a new frame is rendered.
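A minimal sketch of the idea, under simplifying assumptions (a single tracked feature, a
constant-velocity model and a pure-translation warp; the function names are mine):

    import numpy as np

    def predict_feature(history, dt):
        """Constant-velocity prediction of a feature's image-space position.

        `history` holds (x, y) positions at uniform time steps; the linear
        model is an assumption chosen for illustration.
        """
        (x0, y0), (x1, y1) = history[-2], history[-1]
        return (x1 + (x1 - x0) * dt, y1 + (y1 - y0) * dt)

    def warp_translate(image, dx, dy):
        """Shift the last rendered frame by an integer pixel offset to
        approximate the new view until a fresh frame arrives."""
        return np.roll(np.roll(image, int(round(dy)), axis=0), int(round(dx)), axis=1)

    frame = np.zeros((6, 8)); frame[2, 3] = 1.0      # last rendered frame
    track = [(3.0, 2.0), (4.0, 2.0)]                 # feature observed moving right
    px, py = predict_feature(track, dt=1.0)          # predicted position next frame
    print(warp_translate(frame, px - track[-1][0], py - track[-1][1]))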
Image-Based Visual Hulls : Novel views are generated at interactive rates by intersecting
the cones created by back-projecting image-space silhouettes into the world, then
texture-mapping the resulting CSG object.
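The actual image-based visual hulls algorithm performs this intersection directly in
image space along epipolar lines; the voxel-style sketch below only illustrates the
underlying geometry of intersecting back-projected silhouette cones (the data layout and
sampling grid are assumptions).

    import numpy as np

    def visual_hull(silhouettes, projections, grid):
        """Keep the 3D sample points that lie inside every silhouette cone.

        `silhouettes` are 2D boolean masks, `projections` the matching 3x4
        camera matrices, `grid` an iterable of candidate 3D points.  A point
        survives only if it projects inside the silhouette in all views.
        """
        kept = []
        for X in grid:
            inside = True
            for sil, P in zip(silhouettes, projections):
                u, v, w = P @ np.append(X, 1.0)
                col, row = int(round(u / w)), int(round(v / w))
                if not (0 <= row < sil.shape[0] and 0 <= col < sil.shape[1]
                        and sil[row, col]):
                    inside = False
                    break
            if inside:
                kept.append(X)
        return np.array(kept)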
Shader Lamps : Real objects with given surface reflectance
properties are replaced by neutral-colored objects plus image-based illumination from
projectors. The computation effectively creates lamps that add shading information, and
static objects can even be animated to some extent.
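The flavor of the per-pixel computation can be sketched as follows; the cos(theta)/d^2
style correction is a simplification I am assuming for illustration, not the exact
formulation used in the published system.

    import numpy as np

    def projector_intensity(desired_radiance, surface_point, surface_normal, projector_pos):
        """Per-pixel intensity correction for projecting shading onto a real surface.

        Light from one projector pixel spreads over more physical area when
        the surface is oblique to the projector or farther away, so the
        framebuffer value is scaled up to compensate (simplified sketch).
        """
        to_proj = projector_pos - surface_point
        d2 = float(np.dot(to_proj, to_proj))
        cos_theta = float(np.dot(surface_normal, to_proj)) / np.sqrt(d2)
        if cos_theta <= 0.0:              # surface faces away from the projector
            return 0.0
        return desired_radiance * d2 / cos_theta

    print(projector_intensity(0.8,
                              np.array([0.0, 0.0, 0.0]),
                              np.array([0.0, 0.0, 1.0]),
                              np.array([0.5, 0.0, 2.0])))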
Visibility
I have been interested in exploiting visibility coherence to
improve rendering speed. The fact that the view volume, as well as the set of triangles
visible in that volume, changes slowly from frame to frame can be used to reduce the set
of potentially visible triangles for successive frames by a large factor.
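A hedged sketch of one way to exploit this coherence (the cone-shaped view volume, the
margins and the decision to ignore camera translation are all simplifying assumptions):
compute the potentially visible set against a slightly enlarged view volume and reuse it
until the camera drifts past the safety margin.

    import numpy as np

    class CoherentPVS:
        """Reuse a potentially visible set (PVS) across frames.

        The PVS is computed against a view cone widened by `margin_deg`;
        while the view direction stays within that margin, the previous
        PVS remains conservative and is reused.  Camera translation is
        ignored here for brevity.
        """
        def __init__(self, centers, half_angle_deg=30.0, margin_deg=10.0):
            self.centers = centers          # one representative point per triangle
            self.cos_wide = np.cos(np.radians(half_angle_deg + margin_deg))
            self.cos_margin = np.cos(np.radians(margin_deg))
            self.ref_dir = None
            self.pvs = None

        def visible(self, eye, view_dir):
            view_dir = view_dir / np.linalg.norm(view_dir)
            # Recompute only when the view direction has drifted past the margin.
            if self.pvs is None or np.dot(view_dir, self.ref_dir) < self.cos_margin:
                d = self.centers - eye
                d /= np.linalg.norm(d, axis=1, keepdims=True)
                self.pvs = np.nonzero(d @ view_dir > self.cos_wide)[0]
                self.ref_dir = view_dir
            return self.pvs

    scene = np.random.rand(1000, 3) * 10.0 - 5.0     # toy triangle centers
    pvs = CoherentPVS(scene)
    print(len(pvs.visible(np.zeros(3), np.array([0.0, 0.0, 1.0]))))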
Visibility Server : The idea is to build a distributed
visibility server that runs on idle machines and improves the performance of the
graphics machines in use, i.e. the rendering clients.
Virtual Reality
I have contributed some new ideas in augmented reality. When the
augmentation is performed in world space (e.g. virtual objects registered with static real
objects), it is more efficient to render in the world coordinate system than in the
traditional HMD-based, user-centric coordinate system.
Using active structured light : I developed a real-time depth
extraction system (3 fps) using active structured light. It was also used to visualize
the insides of a dummy patient through a see-through head-mounted display.
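A minimal sketch of the triangulation step, assuming a rectified camera/projector pair
and made-up numbers (decoding of the stripe patterns is omitted):

    def depth_from_stripe(x_cam, x_proj, focal_px, baseline_m):
        """Triangulate depth for a rectified camera / projector pair.

        `x_proj` is the decoded projector stripe (column) that illuminated
        the camera pixel at column `x_cam`; with both devices rectified to a
        common image plane, the usual relation Z = f * b / disparity applies.
        The rectified setup and the numbers below are assumptions.
        """
        disparity = x_cam - x_proj
        if disparity <= 0:
            raise ValueError("invalid correspondence")
        return focal_px * baseline_m / disparity

    # Camera pixel at column 420 lit by projector stripe 380, f = 800 px, b = 0.15 m.
    print(depth_from_stripe(420, 380, 800.0, 0.15))   # -> 3.0 metres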
Trinocular Stereo : Master's thesis; edge-based correspondence using a three-camera model
and dynamic programming
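A sketch of the dynamic-programming matching step, reduced to two views and a simplified
cost model (absolute position difference plus a fixed occlusion penalty) rather than the
full three-camera formulation of the thesis:

    def match_edges(left, right, occlusion_cost=2.0):
        """DP correspondence between edge positions on one scanline.

        `left` and `right` are sorted lists of edge x-coordinates from two
        views; the cost model is a simplification for illustration.
        """
        n, m = len(left), len(right)
        INF = float("inf")
        cost = [[INF] * (m + 1) for _ in range(n + 1)]
        cost[0][0] = 0.0
        for i in range(n + 1):
            for j in range(m + 1):
                if cost[i][j] == INF:
                    continue
                if i < n and j < m:   # match left[i] with right[j]
                    c = cost[i][j] + abs(left[i] - right[j])
                    cost[i + 1][j + 1] = min(cost[i + 1][j + 1], c)
                if i < n:             # left edge occluded in the right view
                    cost[i + 1][j] = min(cost[i + 1][j], cost[i][j] + occlusion_cost)
                if j < m:             # right edge occluded in the left view
                    cost[i][j + 1] = min(cost[i][j + 1], cost[i][j] + occlusion_cost)
        return cost[n][m]

    print(match_edges([10.0, 40.0, 75.0], [12.0, 43.0]))   # minimal matching cost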
Web Visualization
I have been collaborating with several researchers to develop
techniques to improve the web surfing experience by visually representing the
activity of other visitors.
Liveweb Visualization : A dynamic view of, and interaction with, other visitors that
reinforces the feeling of 'you are not alone'