ContinualAI Research (CLAIR) is a collaborative laboratory, endorsed by ContinualAI, whose goal is to advance the state of the art in Continual Learning (CL) for AI. We investigate learning in humans and animals and develop neuroscience-inspired approaches for continually learning systems, embodied agents, and robots.
Our mission is to advance continual learning research by addressing the following core research questions:
- What are the key principles and mechanisms governing learning and memory in the brain?
- How do we model artificial CL systems that better capture the flexibility, robustness, and scalability exhibited by the brain?
- How do we leverage current CL systems to work synergistically on embodied agents and robots that interact with the environment?
- How can we use CL systems and robots to advance AI responsibly and ethically?
To answer these questions, we are building a growing set of computational principles and algorithms inspired by key findings in computer science, neuroscience, cognitive science, psychology, and robotics. To foster the interdisciplinary nature of our research, we collaborate with extraordinary researchers and experts across these fields.
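One family of such principles is rehearsal (replay): while learning a new task, the model also revisits a small buffer of stored examples from earlier tasks to mitigate catastrophic forgetting. The sketch below is a toy, hypothetical illustration of this idea in pure Python (a perceptron on one-dimensional data); it is not a CLAIR algorithm, and all names and parameters are our own assumptions.

```python
import random

random.seed(0)

def make_task(shift):
    """Toy two-class data on a line: class 1 near +shift, class 0 near -shift."""
    data = []
    for _ in range(50):
        data.append((random.uniform(shift - 0.4, shift + 0.4), 1))
        data.append((random.uniform(-shift - 0.4, -shift + 0.4), 0))
    return data

def train(model, data, lr=0.1, epochs=5):
    """Online perceptron updates on (x, label) pairs; model = [weight, bias]."""
    w, b = model
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if w * x + b > 0 else 0
            w += lr * (y - pred) * x
            b += lr * (y - pred)
    return [w, b]

def accuracy(model, data):
    w, b = model
    return sum((1 if w * x + b > 0 else 0) == y for x, y in data) / len(data)

# A stream of two tasks with different input distributions.
tasks = [make_task(1.0), make_task(3.0)]

model = [0.0, 0.0]
buffer = []  # replay buffer of examples kept from past tasks
for data in tasks:
    # Rehearsal: mix the current task's data with the stored past examples.
    train_set = data + buffer
    random.shuffle(train_set)
    model = train(model, train_set)
    # Keep a few examples from this task for future replay.
    buffer.extend(random.sample(data, 10))

# After the whole stream, the model should still handle the first task.
acc_first = accuracy(model, tasks[0])
```

In a realistic deep-learning setting the same loop would wrap mini-batch gradient updates, and the buffer could store latent activations instead of raw inputs, as explored in replay-based CL strategies.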
CLAIR is contributing to a better understanding of CL with neural networks and to the application of engineered approaches to embodied agents. We disseminate our findings by publishing in international conferences and journals on AI, machine learning, and robotics.
She, Q., Feng, F., Hao, X., Yang, Q., Lan, C., Lomonaco, V., Shi, X., Wang, Z., Guo, Y., Zhang, Y., & Qiao, F. (2020). OpenLORIS-Object: A Dataset and Benchmark towards Lifelong Object Recognition. International Conference on Robotics and Automation (ICRA'20). To appear.
Lesort, T., Lomonaco, V., Stoian, A., Maltoni, D., Filliat, D., & Díaz-Rodríguez, N. (2020). Continual learning for robotics: Definition, framework, learning strategies, opportunities and challenges. Information Fusion, 58:52-68.
Pomponi, J., Scardapane, S., Lomonaco, V., & Uncini, A. (2020). Efficient Continual Learning in Neural Networks with Embedding Regularization. Neurocomputing. To appear.
Pellegrini, L., Graffieti, G., Lomonaco, V., & Maltoni, D. (2019). Latent Replay for Real-Time Continual Learning. arXiv preprint arXiv:1912.01100.
Lomonaco, V., Maltoni, D., & Pellegrini, L. (2019). Fine-Grained Continual Learning. arXiv preprint arXiv:1907.03799.
Parisi, G.I., & Kanan, C. (2019). Rethinking Continual Learning for Autonomous Agents and Robots. arXiv preprint arXiv:1907.01929.
Maltoni, D., & Lomonaco, V. (2019). Continuous Learning in Single-Incremental-Task Scenarios. Neural Networks, 116:56-73.
Parisi, G.I., Kemker, R., Part, J.L., Kanan, C., & Wermter, S. (2019). Continual Lifelong Learning with Neural Networks: A Review. Neural Networks, 113:54-71 [arXiv:1802.07569].
Parisi, G.I., Tani, J., Weber, C., & Wermter, S. (2018). Lifelong Learning of Spatiotemporal Representations with Dual-Memory Recurrent Self-Organization. Frontiers in Neurorobotics, 12:78 [arXiv:1805.10966].
Díaz-Rodríguez, N., Lomonaco, V., Filliat, D., & Maltoni, D. (2018). Don't forget, there is more than forgetting: new metrics for Continual Learning. Continual Learning Workshop at NeurIPS, Montreal, Canada.
Parisi, G.I., Ji, X., & Wermter, S. (2018). On the role of neurogenesis in overcoming catastrophic forgetting. Continual Learning Workshop at NeurIPS, Montreal, Canada [arXiv:1811.02113].
Lomonaco, V., & Maltoni, D. (2017). CORe50: a new Dataset and Benchmark for Continuous Object Recognition. Conference on Robot Learning (CoRL), pp. 17-26.
Parisi, G.I., Tani, J., Weber, C., & Wermter, S. (2017). Lifelong Learning of Human Actions with Deep Neural Network Self-Organization. Neural Networks, 96:137-149.
Maltoni, D., & Lomonaco, V. (2016). Semi-supervised Tuning from Temporal Coherence. 23rd International Conference on Pattern Recognition (ICPR).
Lomonaco, V., & Maltoni, D. (2016). Comparing incremental learning strategies for convolutional neural networks. IAPR Workshop on Artificial Neural Networks in Pattern Recognition, pp. 175-184. Springer.
Here we list the datasets released by the lab and its members: