Intrinsically Motivated Open-ended Learning


NeurIPS 2023 in-person Workshop, December 16, New Orleans


Mentorship session: Sign up here to chat with one of our research mentors! The list includes Anima Anandkumar, Dani Bassett, Doina Precup, Georg Martius, Jim Fan, Laura Schulz and Natalia Vélez.


Description

How do humans develop broad and flexible repertoires of knowledge and skills? How can we design autonomous lifelong learning machines with the same abilities?

A promising computational and scientific approach to these questions comes from the study of intrinsically motivated learning, sometimes called curiosity-driven learning (Oudeyer et al., 2007; Barto, 2013; Mirolli and Baldassarre, 2013; Schmidhuber, 2021), a framework inspired by the drive of humans and other animals to seek "interesting" situations for their own sake (White, 1959; Berlyne, 1960; Deci and Ryan, 1985). These intrinsic motivations (IMs) evolved in animals to drive exploratory behaviors, an essential component of efficient learning (Singh et al., 2010). When implemented in machines, they support the autonomous exploration of complex environments, a key ingredient of many recent breakthroughs in reinforcement learning (Bellemare et al., 2016; Pathak et al., 2017; Burda et al., 2019; Eysenbach et al., 2019; Warde-Farley et al., 2019; Pong et al., 2020; Raileanu and Rocktäschel, 2020; Sekar et al., 2020; Ecoffet et al., 2021; Stooke et al., 2021; Colas et al., 2022; Du et al., 2023; Adaptive Agent Team et al., 2023). In short, intrinsic motivations free artificial agents from relying on predefined learning signals and thereby offer a path towards autonomy and open-ended learning, a longstanding objective in the field of artificial intelligence.
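
To make the idea concrete, here is a minimal, hypothetical sketch of one common form of intrinsic reward: a count-based novelty bonus (in the spirit of count-based exploration, e.g., Bellemare et al., 2016) added to the environment's extrinsic reward. The class and variable names are illustrative only and do not correspond to any specific published implementation.

```python
import math
from collections import defaultdict

class CountBasedIntrinsicReward:
    """Novelty bonus that shrinks as a state is revisited (illustrative sketch only)."""

    def __init__(self, beta=0.1):
        self.beta = beta                      # scale of the intrinsic bonus
        self.visit_counts = defaultdict(int)  # N(s): visit count per (hashable) state

    def __call__(self, state):
        self.visit_counts[state] += 1
        # Rarely visited states yield larger bonuses: beta / sqrt(N(s)).
        return self.beta / math.sqrt(self.visit_counts[state])

# Usage: the learning signal is the extrinsic reward plus the novelty bonus.
bonus = CountBasedIntrinsicReward(beta=0.1)
state, extrinsic_reward = (2, 3), 0.0        # e.g., a grid-world cell with no task reward
total_reward = extrinsic_reward + bonus(state)
```

Even when the extrinsic reward is zero everywhere, the bonus keeps steering the agent towards states it has visited least, which is why such signals help in sparse-reward settings.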

Despite recent successes, today’s agents still lack the autonomy and flexibility required to learn and thrive in realistic open-ended environments. Such versatility requires the capacity to generalize to domains beyond those encountered at design time, to adaptively create goals and switch between them, and to integrate incremental learning of skills and knowledge over long periods of time. These issues are especially relevant for efforts to deploy artificial intelligence in the real world without human intervention, a topic of key concern in the NeurIPS community.

Better understanding and engineering of such flexible learning systems will require fresh approaches and cross-disciplinary conversations. We propose to bring these conversations to NeurIPS by introducing the growing field of Intrinsically Motivated Open-ended Learning (IMOL). Taking root in developmental robotics (Lungarella et al., 2003; Cangelosi and Schlesinger, 2015), IMOL aims at a unified study of the motivational forces, learning architectures, and developmental and environmental constraints that support the development of open-ended repertoires of skills and knowledge over learners' lifetimes (e.g., Barto et al., 2004; Baldassarre, 2011; Baranes and Oudeyer, 2013; Kulkarni et al., 2016; Santucci et al., 2016; Eysenbach et al., 2019; Colas et al., 2022).

More than a scientific approach, IMOL is also a research community that formed at the first IMOL workshop in 2009 and has grown through years of scientific events and activities. With this full-day workshop, we propose to reflect on recent advances, showcase ongoing research, and discuss open challenges for the future of IMOL research. To this end, we will bring together speakers, presenters, and attendees from a diversity of IMOL-related fields, including robotics, reinforcement learning, developmental psychology, evolutionary psychology, computational cognitive science, and philosophy.

References

  • Adaptive Agent Team, Jakob Bauer, Kate Baumli, Satinder Baveja, Feryal Behbahani, Avishkar Bhoopchand, Nathalie Bradley-Schmieg, Michael Chang, Natalie Clay, Adrian Collister, et al. Human-timescale adaptation in an open-ended task space. 2023.
  • Gianluca Baldassarre. What are intrinsic motivations? a biological perspective. 2011.
  • Adrien Baranes and Pierre-Yves Oudeyer. Active learning of inverse models with intrinsically motivated goal exploration in robots. 2013.
  • Andrew G. Barto. Intrinsic motivation and reinforcement learning. 2013.
  • Marc Bellemare, Sriram Srinivasan, Georg Ostrovski, Tom Schaul, David Saxton, and Remi Munos. Unifying count-based exploration and intrinsic motivation. 2016.
  • Daniel E Berlyne. Conflict, arousal, and curiosity. 1960.
  • Yuri Burda, Harrison Edwards, Amos Storkey, and Oleg Klimov. Exploration by random network distillation. 2019.
  • Angelo Cangelosi and Matthew Schlesinger. Developmental robotics. 2015.
  • Cédric Colas, Tristan Karch, Olivier Sigaud, and Pierre-Yves Oudeyer. Autotelic agents with intrinsically motivated goal-conditioned reinforcement learning: A short survey. 2022.
  • Edward L. Deci and Richard M. Ryan. Intrinsic Motivation and Self-Determination in Human Behavior. 1985.
  • Yuqing Du, Olivia Watkins, Zihan Wang, Cédric Colas, Trevor Darrell, Pieter Abbeel, Abhishek Gupta, and Jacob Andreas. Guiding pretraining in reinforcement learning with large language models. 2023.
  • Adrien Ecoffet, Joost Huizinga, Joel Lehman, Kenneth O. Stanley, and Jeff Clune. First return, then explore. 2021.
  • Benjamin Eysenbach, Abhishek Gupta, Julian Ibarz, and Sergey Levine. Diversity is all you need: Learning skills without a reward function. 2019.
  • Tejas D Kulkarni, Karthik Narasimhan, Ardavan Saeedi, and Josh Tenenbaum. Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation. 2016.
  • Max Lungarella, Giorgio Metta, Rolf Pfeifer, and Giulio Sandini. Developmental robotics: a survey. 2003.
  • Marco Mirolli and Gianluca Baldassarre. Functions and mechanisms of intrinsic motivations: The knowledge versus competence distinction. 2013.
  • Pierre-Yves Oudeyer, Frédéric Kaplan, and Verena V. Hafner. Intrinsic motivation systems for autonomous mental development. 2007.
  • Deepak Pathak, Pulkit Agrawal, Alexei A. Efros, and Trevor Darrell. Curiosity-driven exploration by self-supervised prediction. 2017.
  • Vitchyr Pong, Murtaza Dalal, Steven Lin, Ashvin Nair, Shikhar Bahl, and Sergey Levine. Skew-fit: State-covering self-supervised reinforcement learning. 2020.
  • Roberta Raileanu and Tim Rocktäschel. RIDE: Rewarding impact-driven exploration for procedurally-generated environments. 2020.
  • Vieri Giuliano Santucci, Gianluca Baldassarre, and Marco Mirolli. GRAIL: A goal-discovering robotic architecture for intrinsically-motivated learning. 2016.
  • Jürgen Schmidhuber. Artificial Curiosity & Creativity Since 1990-91. 2021. https://people.idsia.ch/~juergen/artificial-curiosity-since-1990.html.
  • Ramanan Sekar, Oleh Rybkin, Kostas Daniilidis, Pieter Abbeel, Danijar Hafner, and Deepak Pathak. Planning to explore via self-supervised world models. 2020.
  • Satinder Singh, Richard L Lewis, Andrew G Barto, and Jonathan Sorg. Intrinsically motivated reinforcement learning: An evolutionary perspective. 2010.
  • Adam Stooke, Anuj Mahajan, Catarina Barros, Charlie Deck, Jakob Bauer, Jakub Sygnowski, Maja Trebacz, Max Jaderberg, Michael Mathieu, Nat McAleese, Nathalie Bradley-Schmieg, Nathaniel Wong, Nicolas Porcel, Roberta Raileanu, Steph Hughes-Fitt, Valentin Dalibard, and Wojciech Marian Czarnecki. Open-ended learning leads to generally capable agents. 2021.
  • David Warde-Farley, Tom Van de Wiele, Tejas Kulkarni, Catalin Ionescu, Steven Hansen, and Volodymyr Mnih. Unsupervised control through non-parametric discriminative rewards. 2019.
  • Robert W. White. Motivation reconsidered: The concept of competence. 1959.