I am Asma Ghandeharioun, a research scientist on the People + AI Research team at Google Research. I received my Ph.D. from the Affective Computing Group at the MIT Media Lab. I am fortunate to have had Roz (Rosalind Picard) as my advisor and a great source of inspiration. In addition, I have had research experience at Google Research, Microsoft Research, and EPFL, much of which turned into exciting long-term collaborations.
Asma Ghandeharioun, M.Sc., Ph.D., is a research scientist on the People + AI Research team at Google Research. She works on systems that better interpret humans and are better interpreted by humans. Her previous work spans machine learning interpretability, conversational AI, affective computing, digital health, and, more broadly, human-centered AI. She holds a doctorate and a master's degree from MIT and a bachelor's degree from Sharif University of Technology. Trained as a computer scientist and engineer, she has research experience at MIT, Google Research, Microsoft Research, and EPFL, and has collaborated with medical professionals from Harvard, renowned Boston-area hospitals, and abroad.
Some of her favorite past projects include: generating disentangled interpretations via concept traversals, approximating interactive human evaluation of open-domain dialog models using self-play, characterizing sources of uncertainty for their interpretability benefits, estimating depressive symptom severity from sensor data, and building an emotion-aware well-being chatbot.
Her work has been published in premier peer-reviewed machine learning and digital health venues such as ICLR, NeurIPS, EMNLP, AAAI, ACII, AISTATS, Frontiers in Psychiatry, and Psychology of Well-Being, and has been featured in Wired, the Wall Street Journal, and New Scientist.
Towards Human-Centered Optimality Criteria
Building an optimal system from a human-centered point of view is challenging: many variables influence the problem definition itself, let alone its solution. My research approaches this from three perspectives, building systems that 1) better interpret humans; 2) are better interpreted by humans; and 3) augment human capabilities.
DISSECT: Disentangled Simultaneous Explanations via Concept Traversals
Interpretability; Machine Learning; Computer Vision; Deep Learning;
One of the principal benefits of counterfactual explanations is that they let users explore "what-if" scenarios through examples that do not, and often cannot, exist in the data, something other explanation media such as heatmaps and influence functions are inherently incapable of. However, most previous work on generative explainability cannot disentangle important concepts effectively, produces poor-quality or unrealistic examples, or fails to retain relevant information. We propose a novel approach, DISSECT, that trains a generator, a discriminator, and a concept disentangler simultaneously to overcome these challenges using little supervision. DISSECT offers a way to automatically discover a classifier's inherent notion of distinct concepts rather than relying on user-predefined concepts. We validate our approach on several challenging synthetic and realistic datasets where previous methods fall short of satisfying desirable interpretability criteria, and show that our method performs consistently well across all of them. Using simulated experiments, we demonstrate applications of DISSECT to detecting potential biases of a classifier, investigating its alignment with expert domain knowledge, and identifying spurious artifacts that impact predictions. An illustrative sketch of the joint training step appears below.
Publications: ICLR'22 (to appear)
More: Code, SynthDerm dataset
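To make "trains a generator, a discriminator, and a concept disentangler simultaneously" concrete, here is a minimal, hypothetical PyTorch sketch. The module shapes, loss terms, and their unweighted sum are my illustrative assumptions, not the paper's implementation: a generator traverses one concept by a step alpha, a discriminator enforces realism, the frozen classifier's output must track the traversal, and the disentangler must recover which concept was traversed.

```python
# Hypothetical sketch of DISSECT-style joint training (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 4  # number of distinct concepts to discover (illustrative)

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(16 + K, 64), nn.ReLU(), nn.Linear(64, 16))
    def forward(self, x, c_onehot, alpha):
        # Perturb x along concept c with step size alpha.
        return x + alpha * self.net(torch.cat([x, c_onehot], dim=-1))

G = Generator()
discriminator = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
disentangler = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, K))
classifier = nn.Sequential(nn.Linear(16, 1), nn.Sigmoid())  # pretrained, frozen
for p in classifier.parameters():
    p.requires_grad_(False)

opt_g = torch.optim.Adam(list(G.parameters()) + list(disentangler.parameters()), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

def train_step(x):
    b = x.size(0)
    c = torch.randint(0, K, (b,))             # which concept to traverse
    c_onehot = F.one_hot(c, K).float()
    alpha = torch.rand(b, 1)                  # traversal step size in [0, 1)
    x_cf = G(x, c_onehot, alpha)              # counterfactual example

    # 1) Discriminator: distinguish real from generated (realism pressure).
    d_loss = (F.binary_cross_entropy_with_logits(discriminator(x), torch.ones(b, 1))
              + F.binary_cross_entropy_with_logits(discriminator(x_cf.detach()), torch.zeros(b, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator and disentangler, updated simultaneously.
    realism = F.binary_cross_entropy_with_logits(discriminator(x_cf), torch.ones(b, 1))
    # The frozen classifier's output should move by roughly alpha along the traversal.
    influence = F.mse_loss(classifier(x_cf) - classifier(x), alpha)
    # The disentangler must recover which concept was traversed (distinctness).
    distinctness = F.cross_entropy(disentangler(x_cf - x), c)
    g_loss = realism + influence + distinctness
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

train_step(torch.randn(8, 16))
```

The distinctness loss is what separates the discovered concepts: if the disentangler cannot tell traversals apart, the generator is penalized until each concept produces a recognizably different change.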
Towards Empathy-Learning, Socially-Aware Agents
Machine Learning; Natural Language Processing; Reinforcement Learning; Deep Learning; Interpretability;
We introduce a novel, model-agnostic, and dataset-agnostic method that approximates interactive human evaluation of open-domain dialog models. We develop an off-policy reinforcement learning (RL) setting and show that relying solely on explicit human preferences is less effective than training with implicit human rewards. We also build a novel hierarchical RL model and demonstrate its effectiveness in reducing repetitiveness and toxicity. A schematic sketch of learning from implicit rewards follows this entry.
Publications: NeurIPS'19, EMNLP'20, AAAI'20, NeurIPS'19 Conv. AI workshop
Talks: NeurIPS'19 WiML workshop, NeurIPS'19 Conv. AI workshop
Awards: MIT Quest for Intelligence, MIT Stephen A. Schwarzman College of Computing, Machine Learning Across Disciplines Challenge
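As a hedged illustration of the implicit-reward idea, the sketch below scores a logged bot turn using signals implicit in the user's next message (sentiment, reply length, repetition) and applies an importance-weighted off-policy policy-gradient update. The reward terms, the bandit-style candidate-selection policy, and all constants are stand-ins for exposition; the actual work trains a hierarchical RL model over full dialogs.

```python
# Hypothetical sketch of off-policy learning from implicit human rewards
# in open-domain dialog (illustrative stand-in, not the paper's model).
import torch
import torch.nn as nn
import torch.nn.functional as F

def implicit_reward(user_reply_sentiment, user_reply_len, bot_repeated):
    # Implicit signals from the user's *next* turn: sentiment, engagement
    # (reply length), and a penalty if the bot repeated itself.
    return (user_reply_sentiment
            + 0.1 * min(user_reply_len, 30) / 30.0
            - 0.5 * float(bot_repeated))

N_CANDIDATES, CTX = 8, 64
policy = nn.Linear(CTX, N_CANDIDATES)  # scores candidate responses given context
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def off_policy_update(ctx, action, reward, behavior_logprob):
    """One importance-weighted policy-gradient step on a logged interaction."""
    logprob = F.log_softmax(policy(ctx), dim=-1)[action]
    # Importance weight corrects for logs collected under an older policy;
    # clamping it is a standard variance-control choice.
    iw = torch.clamp((logprob - behavior_logprob).exp().detach(), max=5.0)
    loss = -iw * reward * logprob
    opt.zero_grad(); loss.backward(); opt.step()

# Example: one logged turn where the user replied positively and at length.
ctx = torch.randn(CTX)
r = implicit_reward(user_reply_sentiment=0.8, user_reply_len=22, bot_repeated=False)
off_policy_update(ctx, action=3, reward=r, behavior_logprob=torch.tensor(-2.1))
```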
Assessing Depressive Symptoms through Physiological and Behavioral Data
Machine Learning; Affective Computing; Computational Psychiatry; Deep Learning;
In collaboration with Massachusetts General Hospital, we are conducting a longitudinal clinical trial exploring data-driven methods for assessing depression and its severity.
Publications: Frontiers in Psychiatry'20, ICMLW'21, ACII'17
Abstracts/Poster Presentations: ICMLW'21, ABCT'20, ABCT'18, ABCT'17, APS'17, ADAA'17, ADAA'17, CHC'17
Talks: ICML'21 Computational Approaches to Mental Health Workshop
Patents: US 2019/0117143 A1
Awards: NIH 1R01MH118274, J-Clinic, MGH-MIT Grand Challenge
More: Project Website, Video
Interpretability Benefits of Uncertainty Quantification
Interpretability; Machine Learning; Computer Vision; Deep Learning; Affective Computing;
We estimate uncertainty with a simple modification of classical network inference: Monte Carlo dropout. We then characterize sources of uncertainty to proxy calibration and to disambiguate annotator bias from data bias. A minimal sketch of the estimator appears below.
Publications: ICCVW'19
Awards: MIT Stephen A. Schwarzman College of Computing, Machine Learning Across Disciplines Challenge
More: Code
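For reference, a minimal sketch of Monte Carlo dropout (Gal & Ghahramani, 2016), which the project builds on: keep dropout stochastic at test time and aggregate multiple forward passes. The toy model and the choice of T = 30 samples are illustrative assumptions.

```python
# Minimal Monte Carlo dropout sketch: dropout stays active at inference,
# and the spread across sampled predictions estimates model uncertainty.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(64, 2))

def mc_dropout_predict(x, T=30):
    model.train()  # keep dropout stochastic at inference time
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(dim=-1) for _ in range(T)])
    mean = probs.mean(dim=0)  # predictive distribution
    var = probs.var(dim=0)    # disagreement across samples ~ uncertainty
    return mean, var

x = torch.randn(4, 10)
mean, var = mc_dropout_predict(x)
print(mean.shape, var.shape)  # torch.Size([4, 2]) torch.Size([4, 2])
```

The variance across the T sampled predictions is the uncertainty estimate that the project then decomposes into its sources.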
Empathic Breath
Affective Computing; Human-Computer Interaction;
Motivated by the effectiveness of controlled breathing, this work studies the potential use of breathing interventions while driving to help manage stress.
Publications: UMAP'20
More: Project Website
Breath-Based Music Therapy
Affective Computing; Human-Computer Interaction;
We engineered an interactive music system that influences a user’s breathing rate to induce a relaxation response.
Publications: ACII'19
Analysis of Online Suicide Risk
Machine Learning; Natural Language Processing; Affective Computing;
We developed and compared multiple methods to analyze suicide risk from online Reddit posts.
Publications: ACIIW'19
Personalization of Photo Edits with Deep Generative Models
Machine Learning; Deep Learning;
We develop a hierarchical model using deep generative models to propose several diverse, high-quality photo edits while also learning from and adapting to a user's aesthetic preferences.
Publications: AISTATS'18
BrightBeat
Affective Computing; Positive Computing; Human-Computer Interaction;
BrightBeat is a set of seamless visual, auditory, and tactile interventions that mimic a calming breathing oscillation to encourage physiological synchronization and, consequently, a sense of focus and calm.
Publications: CHI-EA'17, Master's thesis
Press: Wired Jan. 2019, New Scientist Jul. 2019
More: Project Website
Kind and Grateful
Positive Computing; Human-Computer Interaction;
We leverage pervasive technologies to infer opportune moments for delivering contextually relevant prompts of thankfulness and appreciation, promoting kindness and gratitude.
Publications: Psychology of Well-Being'16
Abstracts/Poster Presentations: Annals of Behavioral Medicine'16
Hierarchical Infinite HMM
Machine Learning;
We develop a simple hierarchical infinite HMM, an extension of the infinite HMM (iHMM) with an efficient inference scheme. The model captures the dynamics of a sequence at two timescales and avoids the implementation and time-complexity problems of related models. A schematic generative sketch follows this entry.
Publications: NIPS Workshop'15
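As a rough illustration only, and explicitly my assumption rather than the paper's exact construction, a two-timescale model in this spirit can be sketched by letting a slow super-state chain select which fast transition dynamics govern the hidden states:

```latex
% Illustrative two-timescale sketch (an assumption, not the paper's exact model):
% a slow super-state chain s_t picks which fast dynamics govern z_t.
\begin{align*}
s_t \mid s_{t-1} &\sim \pi_{s_{t-1}} && \text{(slow timescale: regime switches)} \\
z_t \mid z_{t-1},\, s_t &\sim \pi^{(s_t)}_{z_{t-1}} && \text{(fast timescale, per regime)} \\
x_t \mid z_t &\sim F(\theta_{z_t}) && \text{(emission)}
\end{align*}
```

Here the transition rows would carry stick-breaking priors in the usual iHMM fashion, e.g. $\beta \sim \mathrm{GEM}(\gamma)$ and $\pi_k \sim \mathrm{DP}(\alpha, \beta)$, so the number of states at each level is unbounded; the efficient inference scheme mentioned above is not reproduced here.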
SNAPSHOT
Affective Computing; Machine Learning;
SNAPSHOT is a large-scale longitudinal study to measure: Sleep, Networks, Affect, Performance, Stress, and Health using Objective Techniques.
Publications: ACII'15
Abstracts/Poster Presentations: SLEEP'16
Awards: NIH 1R01GM105018
More: Project Website, Video