FocalSpace Scenario Sketch

By Lining Yao, Anthony DeVincenzi, and Hiroshi Ishii, MIT Media Lab

FocalSpace is a video conferencing system that dynamically recognizes relevant activities and objects through depth sensing and hybrid tracking of multimodal cues, such as voice, gesture, and proximity to surfaces. FocalSpace uses this information to enhance users' focus by diminishing the background through synthetic blur effects. We present scenarios that support the suppression of visual distraction, provide contextual augmentation, and enable privacy in dynamic mobile environments.
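The core effect described above, keeping depth-selected regions sharp while blurring the rest, can be sketched in a few lines. This is a minimal illustration, not the FocalSpace implementation: it assumes a grayscale frame and an aligned per-pixel depth map, and it stands in a separable box blur for the synthetic blur a real system would apply. The function name and parameters are hypothetical.

```python
import numpy as np

def synthetic_blur(frame, depth, focal_depth, tolerance=0.5, ksize=15):
    """Blur every pixel whose depth falls outside focal_depth +/- tolerance.

    frame: (H, W) float grayscale image.
    depth: (H, W) depth map in meters, aligned with the frame.
    A box blur stands in for the Gaussian blur a real system would use.
    """
    # Separable box blur: moving average over rows, then over columns.
    kernel = np.ones(ksize) / ksize
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, frame)
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, blurred)
    # Keep in-focus pixels sharp; replace the rest with the blurred version.
    in_focus = np.abs(depth - focal_depth) <= tolerance
    return np.where(in_focus, frame, blurred)
```

In practice the focal depth would come from the tracked cues (an active speaker's distance, a surface being pointed at) rather than a constant, and the blur would be applied per color channel.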

News:
2013.07.25 - Talk at Cisco Research @ San Jose, CA.
2013.07.20 - Presentation at SUI 2013 (ACM Symposium on Spatial User Interaction) @ Los Angeles, CA.
Media Coverage: Engadget, Fast Company

"Live Mode":
Filter visual detritus; Adaptive presentation

"Record Mode":
Semantic tag with gesture and voice
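A recording tagged by gesture and voice can later be searched by those tags. The sketch below is a hypothetical data model for this idea, not the actual FocalSpace API; the class and field names are assumptions made for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class SemanticTag:
    time_s: float  # position in the recording, in seconds
    label: str     # e.g. a spoken keyword or a recognized gesture name
    source: str    # which cue produced the tag: "voice" or "gesture"

@dataclass
class Recording:
    tags: list = field(default_factory=list)

    def add_tag(self, time_s, label, source):
        # Called when the tracker detects a tagging cue during recording.
        self.tags.append(SemanticTag(time_s, label, source))

    def find(self, label):
        # Return the timestamps of all matching tags, for seeking playback.
        return [t.time_s for t in self.tags if t.label == label]
```

A viewer could then jump to every moment a topic was marked, e.g. `rec.find("budget")`, instead of scrubbing through the whole recording.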

"Publications:"
Interactive paper at CSCW 2010: "Kinected Conference: Augmenting Video Imaging with Calibrated Depth and Audio".
Full paper at SUI 2013: "FocalSpace: Multimodal Activity Tracking, Synthetic Blur and Adaptive Presentation for Video Conferencing".
For more information, please contact: liningy@media.mit.edu