RL as a Model of Agency
Organizers: David Abel (DeepMind), Anna Harutyunyan (DeepMind), and Mark Ho
Location: Macmillan Hall Room 117
In this workshop we bring together researchers at the intersection of artificial intelligence, cognitive science, and philosophy to carefully examine the reinforcement learning paradigm, the assumptions that underpin it, and its relationship with the broader concept of agency. We believe now is a critical time to improve clarity on our foundations, and we believe this workshop can help expose & explore new approaches, limitations, and perspectives across learning & agency. Our primary goal is that all attendees leave with at least one new question they had not thought about before (and perhaps new collaborations to help think through it!). We frame these discussions around the question: “Is the RL problem a suitable model of agency?”
Repetitive Negative Thinking and Simulation in Natural and Artificial Cognition
Organizers: Rachel Bedder (Princeton University), Peter Hitchcock (Brown University), and Paul Sharp (Hebrew University)
Location: Salomon 001
Replaying the past and imagining the future can accelerate learning and facilitate flexible behavior. However, mental simulation can also be dysregulated, which in humans may lead to chronic worry and rumination, and in machines can result in biased value functions and suboptimal policies. This workshop will bring together computer scientists, computational neuroscientists, and clinical scientists to address key challenges concerning what distinguishes helpful from harmful mental simulation. Topics will include how sampling facilitates learning and abstraction, how it is initiated and terminated, and how it is implemented biologically. We will consider how these topics offer insights into the mechanisms of pathological repetitive thinking processes in humans and analogous forms of detrimental simulation in machines. Our aim is to establish an interdisciplinary network of researchers capable of advancing a computational framework that accounts for both the normative role of simulation in learning and its dysregulation in various psychiatric disorders.
Social alignment in humans and machines
Organizers: Irina Rish, Guillaume Dumas, and Maximilian Puelma Touzel (University of Montreal)
Location: 85 Waterman Street, Room 130
Social norms are learned by individuals, stabilized through alignment by populations, and change with the times. They can shape what we believe as individuals and as a society. Social psychology probes the cognitive processes underlying these phenomena. Separately, social interactions are now being designed into multi-agent reinforcement learning systems to align coordinated behaviour towards specified goals. This workshop aims to strengthen the connection between the subfield of the artificial intelligence community that is incorporating social interactions into multi-agent systems and the social psychology/neuroscience community that is situating decision-making in social contexts. Our speakers will present work on how social norms and influence align individual decision-making with coordinated behaviour by human and machine collectives. The workshop addresses two kinds of applications: how social interactions can improve the performance of MARL systems, and how human-in-the-loop AI could help align social norms with societal values.
Reinforcement Learning with Humans in (and around) the Loop
Organizers: Kory Mathewson (DeepMind) and Brad Knox (Google Research)
Location: Friedman 108
Several recent successes in reinforcement learning have relied upon input from human users and domain experts. Humans can provide information that helps RL algorithms learn effective policies for well-specified tasks. They can also help specify the task to be solved by RL, increasing the alignment of learned policies with the interests of human stakeholders. This workshop focuses on two challenging questions:
• How can RL agents be designed to leverage feedback, guidance, and other information from humans that they will learn from, interact with, and assist?
• How can interactions and experiments be designed to be reproducible and compassionate to the humans involved?
We bring together interdisciplinary experts in interactive machine learning, RL, human-computer interaction, cognitive psychology, robotics, and related social sciences to explore and discuss these challenges and what we can bring from our various fields of expertise to address them.
Building Accountable and Transparent RL
Organizers: Sarah Dean (Cornell), Thomas Krendl Gilbert (Cornell Tech), Nathan Lambert (UC Berkeley), and Tom Zick (Harvard)
Location: Friedman 208
When RL is used in societally relevant domains, practitioners must balance contested values, while anticipating and responding to resulting downstream effects. This will require documenting the behavior of RL systems over time, both in relation to design choices and dynamic effects. In this workshop, we will survey existing approaches to AI documentation, discuss the unique challenges posed by RL deployments, and introduce a proposal for “Reward Reports”: living documents that demarcate design choices and track system behavior. The majority of the workshop will be interactive, with participants trying out and critiquing methods of documentation for real or imagined RL applications.
Maps in reinforcement learning: efficient representations of structure and time in decision-making
Organizers: Charline Tessereau, Mihály Bányai, and Philipp Schwartenbeck (Max Planck Institute for Biological Cybernetics)
Location: Salomon 101
Decades of research, starting with studies in spatial navigation, show that experience can be represented efficiently within cognitive maps. A key challenge is to understand how experience is organised within cognitive maps, which includes pinning down theories about the static representation of experience as well as their extension to dynamic cognition. The successor representation has been proposed as a candidate mechanism, linking space and time within a predictive framework for spatial navigation. In parallel, research on episodic memory has shed light on how brains organise dynamic experience efficiently. Factorised, distributional, and graph-based approaches have narrowed down important mechanisms that representations should incorporate to capture statistical structure within feature spaces.
This workshop brings together these different approaches and aims to elucidate how the principles of static representations can be extended to study dynamic cognition. We will discuss how they relate to and inform each other, in order to identify synergies and open challenges for future theories of cognitive maps.
Temporal representation in Reinforcement Learning (TRiRL)
Organizers: Zafeirios Fountas (Huawei, UCL), Noor Sajid (UCL), Alex Zakharov (UCL), and Warrick Roseboom (University of Sussex)
Location: Friedman 102
The ability to perceive and estimate temporal dynamics can be considered one of the central elements of intelligent biological agents equipped with a model of their environment. Similarly, if one takes the view that an agent’s (internal) model is its primary guide to behaviour, the ability to learn appropriate temporal representations and employ them for action selection is a crucial consideration in RL. Although current unsupervised approaches in RL achieve competitive results, models tend to operate over the physical timescale of the environment. Consequently, further considerations are required to mimic the types of spatiotemporal representations observed in neuronal responses, which operate at both subjective and physical timescales. Indeed, a large number of neuroimaging and modelling studies in cognitive science have focused on explaining temporal representations and their influence on human behaviour. This workshop aims to bring together experts in model-based RL and neuroscientists working on the brain’s ability to represent time, in order to exchange insights, brainstorm, and encourage a multi-angle discussion on this important topic.