RLDM 2015 will begin with four tutorials in Lister Centre, held in two parallel tracks, on Sunday, June 7.
***NEW*** With the speakers’ permission, tutorials will be recorded and made available online after the conference.
Schedule
12:00-1:00 Light lunch
1:00-4:00 Basic tutorials:
- Michael Littman (Brown University) – Basics of Computational Reinforcement Learning (Maple Leaf Room)
- Nathaniel Daw (NYU) – Natural RLDM: Optimal and Suboptimal Control in Brain and Behavior (Wild Rose Room)
4:00-4:30 Tea break
4:30-6:00 Focused tutorials:
- Ifat Levy (Yale University) – A neuroeconomics approach to pathological behavior (Wild Rose Room)
- David Silver (Google DeepMind and UCL) – Deep Reinforcement Learning (Maple Leaf Room)
Dinner on your own
Tutorial Abstracts and Presenter Information
Basics of Computational Reinforcement Learning
Michael Littman, Brown University
In machine learning, the problem of reinforcement learning is concerned with using experience gained by interacting with the world, together with evaluative feedback, to improve a system’s ability to make behavioral decisions. This tutorial will introduce the fundamental concepts and vocabulary that underlie this field of study. It will also review recent advances in the theory and practice of reinforcement learning, including developments in fundamental technical areas such as generalization, planning, exploration and empirical methodology.
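To make these fundamentals concrete, here is a minimal tabular Q-learning sketch of the kind a basics tutorial typically builds up to. It is illustrative only, not drawn from the tutorial materials; the environment interface (reset(), step(), actions) and the hyperparameter values are assumptions.

```python
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning: learn action values from evaluative feedback.

    `env` is a hypothetical environment assumed to expose reset() -> state,
    step(action) -> (next_state, reward, done), and a list `actions`.
    """
    Q = defaultdict(float)  # maps (state, action) -> estimated return

    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            # Epsilon-greedy exploration: mostly exploit, sometimes explore.
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: Q[(state, a)])

            next_state, reward, done = env.step(action)

            # One-step temporal-difference update toward the bootstrapped target.
            best_next = max(Q[(next_state, a)] for a in env.actions)
            target = reward + gamma * best_next * (not done)
            Q[(state, action)] += alpha * (target - Q[(state, action)])

            state = next_state
    return Q
```

A call such as Q = q_learning(my_env) returns a table of learned action values; acting greedily with respect to Q then yields the improved decision making the abstract describes.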
Michael L. Littman’s research in machine learning examines algorithms for decision making under uncertainty. He has earned multiple awards for teaching, and his research has been recognized with three best-paper awards on the topics of meta-learning for computer crossword solving, complexity analysis of planning under uncertainty, and algorithms for efficient reinforcement learning. Littman has served on the editorial boards of the Journal of Machine Learning Research and the Journal of Artificial Intelligence Research. He was general chair of the International Conference on Machine Learning (ICML) 2013 and program chair of the Association for the Advancement of Artificial Intelligence (AAAI) Conference 2013. He is a three-time winner of the coveted “Shakey” for best AI video.
Natural RLDM: Optimal and Suboptimal Control in Brain and Behavior
Nathaniel Daw, New York University
Approaches to reinforcement learning and statistical decision theory from artificial intelligence offer appealing frameworks for understanding how biological brains solve decision problems in the natural world. In particular, these engineering approaches typically begin with a clear, normative analysis of the optimal solution to the problem. However, rather than stopping there, they focus on realizing it, often approximately, with a step-by-step algorithmic solution, which lends itself naturally to process-level accounts of the psychological and neural mechanisms underlying behavior and its suboptimalities. In this tutorial I will review research into biological decision making and reinforcement learning from psychology, ethology, behavioral economics, and neuroscience. I will focus on how the brain may implement different approximations to the ideal observer, how this may help to explain notions of modularity or multiplicity of decision systems across several domains, how these approximations might be understood as boundedly rational when taking into account the costs and benefits of computation, and how these mechanisms might be implicated in self-control and psychiatric disorders.
Nathaniel Daw is Associate Professor of Neural Science and Psychology and Affiliated Associate Professor of Computer Science at New York University. He received his Ph.D. in computer science from Carnegie Mellon University and the Center for the Neural Basis of Cognition, before conducting postdoctoral research at the Gatsby Computational Neuroscience Unit at UCL. His research concerns computational approaches to reinforcement learning and decision making, and particularly the application of computational models in the laboratory, to the design of experiments and the analysis of behavioral and neural data. He is the recipient of a McKnight Scholar Award, a NARSAD Young Investigator Award, a Scholar Award in Understanding Human Cognition from the McDonnell Foundation, and the Young Investigator Award from the Society for Neuroeconomics.
A Neuroeconomics Approach to Pathological Behavior
Ifat Levy, Yale University
Psychopathology is complicated and difficult to study. In the last decade, a growing number of neuroscientists have incorporated insights from behavioral economics and techniques from experimental economics into their neurobiological investigations. Recent evidence suggests that this may also be a promising approach for studying the behavioral and neural bases of psychopathology. I will review ideas from neuroeconomics that either have been applied or can be applied to psychopathology, including anxiety-based disorders and substance abuse.
Ifat Levy uses experimental economics techniques and neuroimaging to study the neural mechanisms of decision-making under uncertainty. She examines decision processes in development and aging, as well as possible impairments in these processes in obesity and mental illness. She holds a PhD in computational neuroscience from the Hebrew University of Jerusalem and was trained in the labs of Prof. Rafael Malach at the Weizmann Institute of Science and Prof. Paul Glimcher at New York University.
Deep Reinforcement Learning
David Silver, Google DeepMind and UCL
In this tutorial I will discuss how reinforcement learning (RL) can be combined with deep learning (DL). There are several ways to combine DL and RL, including value-based, policy-based, and model-based approaches with planning. Several of these approaches have well-known divergence issues, and I will present simple methods for addressing these instabilities. The talk will include a case study of recent successes in the Atari 2600 domain, where a single agent can learn to play many different games directly from raw pixel input.
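Two widely used stabilizers in this line of work, and in the Atari agent the abstract mentions, are experience replay and a periodically frozen target network. The sketch below illustrates both ideas; it is not the tutorial’s code. The class, its parameters, and the linear value function (standing in for a deep network over pixels) are assumptions made for brevity.

```python
import random
from collections import deque

import numpy as np

class ReplayDQN:
    """Skeleton of two stabilizers for value-based deep RL:
    experience replay and a periodically frozen target network.

    The "network" here is a single linear layer (weights W) purely for
    illustration; a real agent would use a deep network over raw pixels.
    """

    def __init__(self, state_dim, n_actions, gamma=0.99, lr=1e-3,
                 buffer_size=10_000, batch_size=32, sync_every=500):
        self.W = np.zeros((n_actions, state_dim))   # online weights
        self.W_target = self.W.copy()               # frozen target copy
        self.buffer = deque(maxlen=buffer_size)     # replay memory
        self.gamma, self.lr = gamma, lr
        self.batch_size, self.sync_every = batch_size, sync_every
        self.steps = 0

    def q_values(self, state, target=False):
        W = self.W_target if target else self.W
        return W @ state  # vector of action values for this state

    def store(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def train_step(self):
        if len(self.buffer) < self.batch_size:
            return
        # Uniform sampling from replay breaks the temporal correlations
        # that destabilize learning on sequential experience.
        batch = random.sample(self.buffer, self.batch_size)
        for state, action, reward, next_state, done in batch:
            # Bootstrapped target computed from the *frozen* network.
            target = reward
            if not done:
                target += self.gamma * self.q_values(next_state, target=True).max()
            td_error = target - self.q_values(state)[action]
            # Semi-gradient update on the online weights only.
            self.W[action] += self.lr * td_error * state
        self.steps += 1
        # Periodically re-sync the target network with the online one.
        if self.steps % self.sync_every == 0:
            self.W_target = self.W.copy()
```

Sampling from replay decorrelates consecutive updates, and holding the target weights fixed between syncs keeps the bootstrapped regression target from chasing the very weights being updated; together these address the kind of instabilities the abstract alludes to.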
David Silver’s research focuses on reinforcement learning, planning and control. He leads the reinforcement learning research effort at Google DeepMind. His recent work has focused on combining reinforcement learning with deep learning, including a program that learns to play Atari games directly from pixels (Nature 2015). He holds a Royal Society University Research Fellowship and teaches reinforcement learning at University College London. His PhD (supervised by Rich Sutton at the University of Alberta) was on reinforcement learning in Computer Go, which co-introduced the algorithms used in the first master-level Go programs. In his previous life, he was CTO at Elixir Studios and Lead Programmer on Republic: The Revolution.