Tarhan L, Konkle T.
Modeling the Neural Structure Underlying Human Action Perception. Poster presented at the annual conference on Cognitive Computational Neuroscience, New York, NY. 2017.
Tarhan L, Konkle T.
Low- and High-Level Features Explain Neural Response Tuning During Action Observation. Poster presented at the annual meeting of the Vision Sciences Society, St. Pete Beach, FL. 2017.
Abstract
Among humans’ cognitive faculties, the ability to process others’ actions is essential. We can recognize the meaning behind running, eating, and finer movements like tool use. How does the visual system process and transform information about actions?
To explore this question, we collected 120 action videos spanning a range of everyday activities sampled from the American Time Use Survey. Next, we used behavioral ratings and computational approaches to measure how these videos vary within three distinct feature spaces: visual shape features (“gist”), kinematic features (e.g., body parts involved), and intentional features (e.g., used to communicate). Finally, using fMRI, we obtained neural responses to each of these 2.5-second action clips in 9 participants.
To analyze the structure in these neural responses, we used an encoding-model approach (Mitchell et al., 2008) to fit tuning models for each voxel along each feature space, and assess how well each model predicts responses to individual actions.
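The abstract does not give implementation details; as a rough illustration, a voxelwise encoding analysis in the spirit of Mitchell et al. (2008) might look like the minimal Python sketch below. The ridge penalty, cross-validation scheme, and data sizes are hypothetical stand-ins, not the authors' pipeline.

```python
# Minimal sketch of a voxelwise encoding model. Assumptions: ridge regression,
# 10-fold cross-validation, and random stand-in data (not the authors' pipeline).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_videos, n_features, n_voxels = 120, 20, 500    # 120 clips; feature/voxel counts are hypothetical
X = rng.standard_normal((n_videos, n_features))  # one feature space (e.g., gist descriptors)
Y = rng.standard_normal((n_videos, n_voxels))    # voxel responses to each clip

# Fit per-voxel tuning weights on training folds, then predict held-out clips.
pred = np.zeros_like(Y)
for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    model = Ridge(alpha=1.0).fit(X[train], Y[train])
    pred[test] = model.predict(X[test])

# Model fit per voxel: correlation between predicted and observed responses.
r = np.array([np.corrcoef(pred[:, v], Y[:, v])[0, 1] for v in range(n_voxels)])
print(f"median voxel fit r = {np.median(r):.2f}")
```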
We found that a large proportion of cortex along the intraparietal sulcus and the occipitotemporal surface was moderately well fit by all three models (median r = 0.23-0.31). In a leave-two-out validation procedure, all three models accurately discriminated between pairs of action videos in ventral and dorsal stream sectors (65-80%, SEM = 1.1%-2.6%). In addition, we observed a significant shift in classification accuracy between early visual cortex (EVC) and higher-level visual cortex: the gist model performed best in EVC, whereas the high-level models outperformed gist in occipitotemporal and parietal regions.
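For concreteness, the leave-two-out classification step can be sketched as follows: hold out two videos, fit the tuning model on the rest, and count the pair correct if matching each predicted voxel pattern to its own observed pattern (by correlation) beats the swapped assignment. The correlation-matching rule follows Mitchell et al. (2008); the function name, the ridge model, and the choice to sample 200 pairs rather than exhaust all pairs are illustrative assumptions.

```python
# Hedged sketch of leave-two-out pattern classification. Reuses X, Y from the
# encoding-model sketch above; sampling 200 pairs is an illustrative shortcut.
import numpy as np
from itertools import combinations
from sklearn.linear_model import Ridge

def _corr(a, b):
    return np.corrcoef(a, b)[0, 1]

def leave_two_out_accuracy(X, Y, n_pairs=200, seed=0):
    """Fraction of held-out pairs whose predicted voxel patterns match
    their own observed patterns better than the swapped pairing."""
    rng = np.random.default_rng(seed)
    all_pairs = np.array(list(combinations(range(len(X)), 2)))
    pairs = all_pairs[rng.permutation(len(all_pairs))[:n_pairs]]
    hits = 0
    for i, j in pairs:
        train = np.setdiff1d(np.arange(len(X)), [i, j])  # drop the held-out pair
        pi, pj = Ridge(alpha=1.0).fit(X[train], Y[train]).predict(X[[i, j]])
        # Correct if the unswapped assignment has the higher total similarity.
        hits += (_corr(pi, Y[i]) + _corr(pj, Y[j])) > (_corr(pi, Y[j]) + _corr(pj, Y[i]))
    return hits / len(pairs)
```

On random stand-in data this hovers near chance (0.5); the 65-80% accuracies reported above reflect real structure in the neural responses.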
These results demonstrate that action representations can be successfully predicted using an encoding-model approach. More broadly, the pattern of fits for the different feature models reveals that visual information is transformed from low- to high-level representational spaces over the course of action processing. These findings begin to formalize the progression of the kinds of action information processed along the visual stream.
Tarhan LY, Buxbaum LJ, Watson CE.
Action Understanding and Production: Common and Distinct Neural Substrates. Poster presented at the biannual meeting of the International Neuropsychological Society, Denver, CO. 2015.
Tarhan LY, Burke DM.
Emotional Faces and Cognition: The Effects of Ekman’s Emotional Expressions on Memory. Paper presented at the annual Berkeley Interdisciplinary Research Conference, Berkeley, CA. 2013.
Abstract
Ekman (1992) developed the Directed Facial Action Task (DFAT), which demonstrated that facial expressions can elicit emotional physiology. The present study investigated whether these responses also have mood-congruent memory effects, as found when emotions are elicited in other ways. Thirty-eight participants performed the DFAT for happy and sad expressions before recalling neutral, positive, and negative images. The mood-congruent memory hypothesis predicts that, if the DFAT produces sustained affect, participants should recall more mood-congruent than mood-incongruent images. Some participants performed the DFAT while Galvanic Skin Response (GSR), an index of autonomic emotional response, was recorded. GSR correlated with reported mood change in the happy condition, while the difference between mood-congruent and mood-incongruent memory correlated with reported mood change in the sad condition. However, there was no correlation between GSR and memory. These results show that self-reported emotion, but not physiological response, was linked to congruency effects in memory for emotional images.