My academic research and professional scientific work are all connected in some way to understanding the complexity of human decision-making, both at the level of electrical activity in the cortex and at the level of psychological processes. I am fascinated by the internal biological mechanisms that produce behaviour, as well as by the experiences we have when we make choices and interact with our environments.
I enjoy using scientific methods to explore biological processes and their associated experiences, because our common-sense intuitions about these things can be very compelling and very wrong. Of course, our intuitions are just the product of internal systems for rapid decision-making and estimating uncertainty, but knowing the limits of these judgments is often hard without rigorous formal methods.
As I have worked in various places, on various problems, I have always wanted to relate my work to valuable real-world outcomes. No matter what I do, I am passionate about connecting my research to understanding the problems of making optimal decisions and to methods for improving decisions. I am also passionate about simply making things and seeing my work come to fruition, which is probably why I still like to dabble in code.
Snippets from various papers are provided below, with full PDFs provided at the bottom.
Attention includes processes that evaluate stimulus relevance, select the most relevant stimulus over less relevant stimuli, and bias choice behavior toward the selected information. It is not clear how these processes interact. Here, we captured these processes in a reinforcement learning framework applied to a feature-based attention task that required macaques to learn and update the value of stimulus features while ignoring irrelevant sensory features, locations, and action plans.
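The core of this kind of feature-based reinforcement learning model can be sketched in a few lines: feature values are updated trial by trial from prediction errors, and choices are made by a softmax over the current values. This is a minimal illustration only; the learning rate, inverse temperature, and reward probabilities below are placeholder values, not the parameters fitted in the paper, and the paper's full model additionally includes a value-independent stickiness term.

```python
import math
import random

def softmax(values, beta=3.0):
    """Map feature values to choice probabilities (beta = inverse temperature)."""
    exps = [math.exp(beta * v) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

def rescorla_wagner(value, reward, alpha=0.2):
    """Move the chosen feature's value toward the observed reward."""
    return value + alpha * (reward - value)

# Two stimulus features (e.g. two colours); feature 0 is rewarded 80% of
# the time, feature 1 only 20% of the time (illustrative probabilities).
random.seed(1)
values = [0.5, 0.5]
for trial in range(200):
    probs = softmax(values)
    choice = 0 if random.random() < probs[0] else 1
    p_reward = 0.8 if choice == 0 else 0.2
    reward = 1.0 if random.random() < p_reward else 0.0
    values[choice] = rescorla_wagner(values[choice], reward)

print(values)  # the more frequently rewarded feature ends up with the higher value
```

Over a session, the learned values track the true reward probabilities, and the softmax converts that value difference into a bias toward the more valuable feature, which is the sense in which attentional selection can be "predicted by" learned value.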
Balcarras, M., Ardid, S., Kaping, D., Everling, S., & Womelsdorf, T. (2016). Attentional selection can be predicted by reinforcement learning of task-relevant stimulus features weighted by value-independent stickiness. Journal of Cognitive Neuroscience, 1-17.
Learning in a new environment is influenced by prior learning and experience. Correctly applying a rule that maps a context to stimuli, actions, and outcomes enables faster learning and better outcomes than relying on learning strategies that are ignorant of task structure. However, it is often difficult to know when and how to apply learned rules in new contexts. In our study we explored how subjects employ different strategies for learning the relationship between stimulus features and positive outcomes in a probabilistic task context. We tested the hypothesis that task-naive subjects show enhanced learning of feature-specific reward associations by switching to an abstract rule that groups stimuli by feature type and restricts selections to that dimension.
Balcarras, M., & Womelsdorf, T. (2016). A Flexible Mechanism of Rule Selection Enables Rapid Feature-Based Reinforcement Learning. Frontiers in Neuroscience, 10(189), 125.
Attention and learning are cognitive control processes that are closely related. This thesis investigates this inter-relatedness by using computational models to describe the mechanisms that are shared between these processes. Computational models describe the transformation of stimuli to observable variables (behaviour) and contain the latent mechanisms that affect this transformation. Here, I captured these mechanisms with the reinforcement learning (RL) framework applied in two different task contexts and three different projects to show 1) how attentional selection of stimuli involves the learning of values for stimuli, 2) how the learning of stimulus values is influenced by previously learned rules, and 3) how explorations of value-related mechanisms in the brain benefit from using intracranial EEG to investigate the strength of oscillatory activity in ventromedial prefrontal cortex.
The approach taken in this study is motivated by two significant gaps in the exploration of learning and attention. First, RL models have been used to great effect in cognitive neuroscience to elucidate the neuronal basis of learning in the brain; but although attentional selection has been shown to be influenced by learned stimulus values, RL models have not yet been used to quantify the trial-by-trial relationship between covert attentional selection of stimuli and their expected value (Anderson, 2013; Anderson et al., 2013; Gottlieb, 2012). Second, it is not clear how value-related information is represented and processed in the brain. While RL models have been used to show that the activity of single neurons and the hemodynamic response of neuronal populations are related to the processing of stimulus value information, they have not yet been used to explore how this information is linked to the rhythmic fluctuations of neuronal activity recorded directly from the cortex (Buzsaki & Watson, 2012; Dayan & Niv, 2008). This study aims to fill these gaps.
This talk was given during convocation week at St. Stephen's University in 2016. It covers and connects the work I did in my Master's and PhD following my B.A. at St. Stephen's in 2006.