Research

For our latest work, please see the ROC HCI page.

What inspires my work?

"Is it possible to interact with computers and robots the way we interact with each other?" My research is driven by this question. In particular, I develop techniques to understand and recognize human nonverbal behavior, and I invent new applications that improve people's quality of life.

Why is it difficult and how do I solve it?

Understanding and recognizing nonverbal behavior involves modeling complex, multidimensional data with subtle, uncertain, and overlapping labels. I tackle these problems in the context of Human-Computer Interaction, combining techniques from machine learning, computer vision, and algorithms with insights from psychology and a great deal of computation. I aspire to deploy my research in the real world, generating new data and new findings that address scientific challenges we could not solve before.

What have I achieved so far?

My PhD work addresses the following questions through real-world deployments.

Can we build technology that helps people with social-communicative difficulties or speech impairments?

Is it possible for machines to interpret the underlying meaning behind expressions?

Is it possible to deploy a real-time computer vision system to understand the behaviors of a large community?

Can a robot be intelligent enough to see, hear, and make its own decisions?

In the past (more details under projects): 



During the summer of 2009, while interning at Walt Disney Imagineering Research & Development, I developed the entire vision component of the world's first autonomous Audio-Animatronics figure. My work saved Disney the $50k it had planned to spend on an off-the-shelf expression-recognition product. The show went live at Disney's D23 Expo during my internship, a major milestone in Disney's effort to build an autonomous robotic show without puppeteers. Details..

As part of a class project at MIT in Fall 2008, my team and I developed a mobile SMS system that let rural farmers in Zacatecas, Mexico aggregate their produce information and query current and historical market prices in real time. This not only brought price transparency to the market but also gave farmers greater bargaining power against middlemen. At the end of the course, as part of evaluation and testing, I traveled to Zacatecas and deployed the system. Details..

In 2009, I was awarded the IEEE Gold Humanitarian Fellowship for my efforts to improve people's quality of life through technology.

During my undergraduate studies at Penn State University, I built, as part of a team, a human-interactive robot for my senior design project; it could locate and recognize faces, aided by voice identification. The system performed well, achieving 85% accuracy on a half-dozen pre-trained people, and won the "Best Design Award" at Penn State Erie, judged by local industry representatives. Details..