I am a computational neuroscientist studying the neural and behavioral bases of reward prediction and learning in animals and humans. I use tools from reinforcement learning, Bayesian statistics, machine learning, and dynamical systems theory to develop biologically plausible algorithms that offer novel insight into how animals and humans learn to predict events and outcomes directly from observing and interacting with their environment.
As of summer 2022, I am an Investigator at the National Institute of Mental Health, where I lead the Unit on Neural Computations in Learning. Research in the lab focuses on the interplay between timing processes and reward prediction in dopamine circuits, the dynamic neural representation of expected outcomes in cortical and basal ganglia circuits, and the formation of task state representations from continuous experience by animals and humans. In each case, we seek to understand how these processes change at the neural and behavioral levels in addiction and other disorders of mental health.
Prior to this, I was an Associate Research Scholar at the Princeton Neuroscience Institute at Princeton University, where I worked with Yael Niv on reinforcement learning theories of prediction error signaling in the brain. I received my PhD in computational neuroscience from the School of Psychiatry at the University of New South Wales in Sydney, Australia, where I worked with Michael Breakspear on models of neural population dynamics during perceptual processing and decision making.