Ehsan Hoque

Assistant Professor of Computer Science, University of Rochester
Curriculum Vitae, Brief bio

Research Interests: Developing computational approaches to decipher and model the unwritten rules of human communication. I situate my research in human-computer interaction, drawing on techniques from machine learning and artificial intelligence to design and deploy new interactive systems.

Funding: $6.5 million ($3.5 million as PI) from the National Science Foundation, DARPA, the Army Research Lab, Google, and Microsoft.


Contact Information

710 Computer Studies Building,
Rochester, NY 14627
Phone: 1-408-634-6783
mehoque AT {cs dot rochester dot edu}
Twitter: @ehsan_hoque

Projects (active)

ROC Speak
My Automated Conversation coach (MACH)

Projects (archived)

MIT Mood Meter
Temporal Modeling of Smiles

Awards


  • DARPA's Communicating with Computers (CwC) Challenge Award, 2015
  • NSF CRII (pre-career) Award, 2015
  • One of the most influential articles in IEEE Transactions on Affective Computing, 2015
  • Provost's Multidisciplinary Award, 2015
  • Google Faculty Research Award, 2014
  • Excellence in Reviewing Award, ICMI 2014
  • Best Paper Award, ACM Pervasive and Ubiquitous Computing (UbiComp), 2013
  • Best Paper Nomination, IEEE Automatic Face and Gesture Recognition (FG), 2011
  • Best Paper Nomination, Intelligent Virtual Agents (IVA), 2006

Selected Publications (see my CV for the full list)

M. Tanveer, J. Liu, M. E. Hoque, Unsupervised Extraction of Human-Interpretable Nonverbal Behavioral Cues in a Public Speaking Scenario, ACM Multimedia (ACMMM), 2015.

M. Fung, Y. Jin, R. Zhao, M. E. Hoque, ROC Speak: Semi-Automated Personalized Feedback on Nonverbal Behavior from Recorded Videos, Proceedings of the 17th International Conference on Ubiquitous Computing (UbiComp), September 2015. Project Site [Video]

I. Naim, I. Tanveer, D. Gildea, M. E. Hoque, Automated Prediction of Job Interview Performance: The Role of What You Say and How You Say It, IEEE Automatic Face and Gesture Recognition (FG), May 2015. [Appendix]

M. Tanveer, E. Lin, M. E. Hoque, Rhema: A Real-Time In-Situ Intelligent Interface to Help People with Public Speaking, Intelligent User Interfaces (IUI), April 2015. [Video]

M. E. Hoque, R. W. Picard, Rich Nonverbal Sensing to Enable New Possibilities in Social Skills Training, special issue on “Aware Computing,” IEEE Computer, Vol. 47, No. 4, April 2014.
Invited Position Paper

M. E. Hoque, M. Courgeon, B. Mutlu, J.-C. Martin, R. W. Picard, MACH: My Automated Conversation coacH, Proceedings of the 15th International Conference on Ubiquitous Computing (UbiComp), September 2013.
Best Paper Award (one of top 5 papers among 392 submissions)
MIT HomePage Spotlight [Video by MIT News] [Video by MIT Media Lab]

M. E. Hoque, D. J. McDuff, R. W. Picard, Exploring Temporal Patterns towards Classifying Frustrated and Delighted Smiles, IEEE Transactions on Affective Computing, Vol. 3, No. 3, 2012. [Video]
One of the most influential articles of IEEE Transactions on Affective Computing
MIT Homepage Spotlight [Video by MIT News]

J. Hernandez*, M. E. Hoque*, W. Drevo, R. W. Picard, Mood Meter: Counting Smiles in the Wild, 14th International Conference on Ubiquitous Computing (UbiComp), September 2012. (*equal contribution) [Video 1 - demo] [Video 2 - presentation]

M. E. Hoque, R. W. Picard, Acted vs. Natural Frustration and Delight: Many People Smile in Natural Frustration, 9th IEEE International Conference on Automatic Face and Gesture Recognition (FG), Santa Barbara, CA, USA, March 2011. [Video]
Best Paper Nomination (one of top 5 papers among 245 submissions)


Teaching

Fall 2015: CSC 212/412: Human-Computer Interaction; CSC 530: Methods in Data Enabled Research


Students

Rafayet Ali
Yina Jin
Taylan Sen
Iftekhar Tanveer