Michael Bowling
Michael Bowling is a professor at the University of Alberta, a Fellow of the Alberta Machine Intelligence Institute, and a senior scientist at DeepMind. His research is driven by his fascination with the problem of how computers can learn to play games through experience. He led the Computer Poker Research Group, which has built some of the best poker-playing programs in the world, including the first to beat professional players at both the limit and no-limit variants of the game. He was also behind the use of Atari 2600 games to evaluate the general competency of RL algorithms, which is now a ubiquitous benchmark suite of domains for RL.
Luke Chang
Luke Chang, PhD, is an Assistant Professor of Psychological & Brain Sciences at Dartmouth College and director of the Computational Social & Affective Neuroscience Laboratory. His research is focused on understanding the psychological and neurobiological mechanisms underlying emotion and social interactions. For example, how do we learn about others’ mental states, beliefs, and feelings? And how do we integrate this information with our own feelings and goals when making decisions?
Anne Collins
Anne Collins is an Assistant Professor of Psychology at UC Berkeley and the director of the Cognitive Computational Neuroscience lab. Her research uses quantitative modeling at different levels of description (for example, reinforcement learning, Bayesian inference, and neural network modeling) to study the neurocognitive computations that contribute to learning, decision making and executive functions.
Fiery Cushman
Fiery Cushman is an experimental psychologist at Harvard University who studies the cognitive basis of social thought and behavior. His research draws on computational cognitive models and uses these to understand how humans understand, learn from and interact with each other. A major focus of his research is moral decision making.
Will Dabney
Will Dabney is a research scientist at DeepMind. His research focuses on reinforcement learning, with collaborations reaching into other areas of machine learning and neuroscience. Recent work has focused on distributional reinforcement learning and representation learning, but core problems such as exploration and temporal abstraction continue to beckon.
Ido Erev
Ido Erev (PhD in quantitative psychology from UNC in 1990) is a professor of Behavioral Science at the Technion and the President-Elect of the European Association for Decision Making. His research focuses on the effect of experience on choice behavior. Among the contributions of this research is the discovery of a robust experience-description gap: people exhibit oversensitivity to rare events when they decide based on a description of the incentive structure, but experience reverses this bias and leads to underweighting of rare events.
Tom Griffiths
Tom Griffiths is a professor of psychology and computer science at Princeton University. He works on computational models of human cognition, drawing on ideas from machine learning and artificial intelligence to better understand how human minds work. He is currently particularly interested in how agents with finite computational resources should make best use of those resources when making decisions.
Chelsea Finn
Chelsea Finn, PhD, is a research scientist at Google Brain and a postdoctoral researcher at UC Berkeley, and is joining the faculty at Stanford University in September. She is interested in building algorithms that enable robots to learn to perform many different tasks under the constraints of the real world. This includes learning deep representations that capture both sensory percepts and motor control, enabling machines to learn through unsupervised interaction with the world, and building algorithms that leverage previous experience to more quickly acquire new capabilities.
David Foster
David Foster is an Associate Professor in the Psychology Department and Helen Wills Neuroscience Institute at the University of California, Berkeley. He initially trained as a computational neuroscientist interested in reinforcement learning. He now runs a research laboratory conducting behavioral neurophysiology studies of learning, memory and decision making. A major focus of his lab is offline replay of neural activity in the hippocampus that appears to simulate temporally extended behavioral episodes. These replays may contribute to learning about previous choices and planning future ones.
Katja Hofmann
Katja Hofmann is a researcher in the Machine Intelligence and Perception group at Microsoft Research Cambridge. Her research focuses on reinforcement learning with applications in video games, which she believes will drive a transformation of how we interact with AI technology. She is the research lead of Project Malmo, which uses the popular game Minecraft as an experimentation platform for developing intelligent technology. Her long-term goal is to develop AI systems that learn to collaborate with people, to empower their users and help solve complex real-world problems.
Anna Konova
Anna Konova, PhD, is an Assistant Professor of Psychiatry at Rutgers University. Her research is focused on understanding the cognitive neuroscience of drug addiction. In this research, she draws on models of learning and decision making inspired by work in psychology, economics, and neuroscience. An area of particular interest is how motivational states and environmental factors are integrated into decision processes to contribute to real-world outcomes.
Sheila McIlraith
Sheila McIlraith is a Professor in the Department of Computer Science at the University of Toronto. She studies sequential decision making in its many guises, informed by research in knowledge representation, automated planning, and machine learning, with applications ranging from diagnostic problem solving to program synthesis. Recent work focuses on how humans can use language, both formal and natural, to advise, task, and share commonsense knowledge in reinforcement learning.
Susan Murphy
Susan Murphy is Professor of Statistics at Harvard University, Radcliffe Alumnae Professor at the Radcliffe Institute, Harvard University, and Professor of Computer Science at the Harvard John A. Paulson School of Engineering and Applied Sciences. Her lab works on clinical trial designs and learning algorithms for developing mobile health policies. She is a 2013 MacArthur Fellow and a member of both the National Academy of Sciences and the National Academy of Medicine of the US National Academies.
Pierre-Yves Oudeyer
Pierre-Yves Oudeyer is a research director at Inria and, since 2008, head of the FLOWERS lab at Inria and Ensta-ParisTech. He was previously a permanent researcher at Sony Computer Science Laboratory for eight years (1999-2007). He studies developmental autonomous learning and the self-organization of behavioural and cognitive structures, at the frontiers of AI, machine learning, neuroscience, developmental psychology, and educational technologies. In particular, he studies exploration in large open-ended spaces, with a focus on autonomous goal setting, intrinsically motivated learning, and how they can automate curriculum learning.
Rich Sutton
Richard S. Sutton is a distinguished research scientist at DeepMind and a professor in the Department of Computing Science at the University of Alberta. His research interests center on the learning problems facing a decision-maker interacting with its environment in real time, which he sees as central to artificial intelligence. He is also interested in animal learning psychology, in connectionist networks, and generally in systems that continually improve their representations and models of the world.
Catharine Winstanley
Dr. Catharine Winstanley is a behavioural neuroscientist at the University of British Columbia. She is a Professor in the Department of Psychology, and an Associate Member of the Division of Neurology. Her research is focused on understanding the neurobiological regulation of cognitive traits such as impulsivity and decision making, with the goal of using this knowledge to improve treatments for psychiatric disorders such as problem gambling and drug addiction.