In the last few years we have witnessed a renewed and steadily growing interest in the ability to learn continuously from high-dimensional data. On this page, we keep track of recent Continual/Lifelong Learning developments in a pure research context.
Researchers
In this section we keep track of the people working on the subject:
- Raia Hadsell, Razvan Pascanu - DeepMind
- Eric Eaton - University of Pennsylvania
- Bing Liu - University of Illinois at Chicago
- Vincenzo Lomonaco, Davide Maltoni - University of Bologna
- Christoph Lampert - IST Austria
- Tom Mitchell - Carnegie Mellon University, USA
- Daniel L. Silver - Acadia University, Canada
- Rich Sutton - University of Alberta, Canada
- Partha Talukdar - Indian Institute of Science (IISc)
- Qiang Yang - Hong Kong University of Science and Technology
Community Selected Papers
In this section we highlight some papers the ContinualAI community values as must-reads:
- Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. “Continual Learning with Deep Generative Replay”. Advances in Neural Information Processing Systems, 2017.
- Xu He and Herbert Jaeger. “Overcoming Catastrophic Interference using Conceptor-Aided Backpropagation”. International Conference on Learning Representations, 2018.
- Jaehong Yoon, Eunho Yang, Jeongtae Lee, and Sung Ju Hwang. “Lifelong Learning with Dynamically Expandable Networks”. International Conference on Learning Representations, 2018.
- Cuong V. Nguyen, Yingzhen Li, Thang D. Bui, and Richard E. Turner. “Variational Continual Learning”. International Conference on Learning Representations, 2018.
- Vincenzo Lomonaco and Davide Maltoni. “CORe50: a new Dataset and Benchmark for Continuous Object Recognition”. Proceedings of the 1st Annual Conference on Robot Learning, PMLR 78:17-26, 2017.
- James Kirkpatrick et al. “Overcoming catastrophic forgetting in neural networks”. Proceedings of the National Academy of Sciences, 2017.
- Zhizhong Li and Derek Hoiem. “Learning without Forgetting”. European Conference on Computer Vision. Springer International Publishing, 2016.
- David Lopez-Paz and Marc’Aurelio Ranzato. “Gradient Episodic Memory for Continual Learning”. Advances in Neural Information Processing Systems, 2017.
- Sylvestre-Alvise Rebuffi, Alexander Kolesnikov and Christoph H. Lampert. “iCaRL: Incremental Classifier and Representation Learning”. arXiv preprint arXiv:1611.07725, 2016.
- Friedemann Zenke, Ben Poole, and Surya Ganguli. “Continual Learning through Synaptic Intelligence”. International Conference on Machine Learning, 2017.
- Andrei Rusu et al. “Progressive Neural Networks”. arXiv preprint arXiv:1606.04671, 2016.
- Davide Maltoni and Vincenzo Lomonaco. “Continuous Learning in Single-Incremental-Task Scenarios”. arXiv preprint arXiv:1806.08568, 2018.
Dissertations and Theses
- Explanation-Based Neural Network Learning: A Lifelong Learning Approach by Sebastian Thrun. Kluwer Academic Publishers, Boston, MA, 1996.
- Continual Learning in Reinforcement Environments by Mark Ring. The University of Texas at Austin, 1994.
- Lifelong Machine Learning by Zhiyuan Chen and Bing Liu, Morgan & Claypool Publishers, November 2016.
In this section we keep track of all the current and past projects on Lifelong/Continual Learning:
Continual Learning Papers Database
While waiting for better AI tools for paper recommendation, the ContinualAI community is maintaining a database of CL papers which we plan to release soon. It will be rich in metadata so that we can better navigate the incredible number of papers published each year (example query: give me the papers employing rehearsal and evaluated on CORe50).
Please add your own paper below so that we can advertise it and insert it into our CL database!
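To make the kind of query mentioned above concrete, here is a minimal sketch of filtering a papers database by metadata. The schema (the `title`, `strategy`, and `benchmarks` fields) and the sample entries are illustrative assumptions, not the actual format of the ContinualAI database:

```python
# Hypothetical paper records; the field names are assumptions for illustration.
papers = [
    {"title": "iCaRL", "strategy": "rehearsal", "benchmarks": ["CIFAR-100", "ImageNet"]},
    {"title": "GEM", "strategy": "rehearsal", "benchmarks": ["MNIST", "CIFAR-100"]},
    {"title": "CORe50 baselines", "strategy": "rehearsal", "benchmarks": ["CORe50"]},
    {"title": "EWC", "strategy": "regularization", "benchmarks": ["MNIST", "Atari"]},
]

def query(papers, strategy=None, benchmark=None):
    """Return the papers matching the given strategy and/or benchmark."""
    results = []
    for p in papers:
        if strategy is not None and p["strategy"] != strategy:
            continue
        if benchmark is not None and benchmark not in p["benchmarks"]:
            continue
        results.append(p)
    return results

# "Give me the papers employing rehearsal and evaluated on CORe50":
hits = query(papers, strategy="rehearsal", benchmark="CORe50")
print([p["title"] for p in hits])  # ['CORe50 baselines']
```

Once the metadata schema is fixed, the same filtering logic can sit behind a web form or a spreadsheet filter; the value is in the consistent tagging of strategies and benchmarks, not in the query code itself.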