Humans have the extraordinary ability to learn continually from experience. Not only can we apply previously learned knowledge and skills to new situations; we can also use them as the foundation for later learning. One of the grand goals of Artificial Intelligence (AI) is to build an artificial continual learning agent that constructs a sophisticated understanding of the world from its own experience, through the autonomous, incremental development of ever more complex knowledge and skills. However, current AI systems suffer greatly when exposed to new data or environments that differ even slightly from those they were trained on. Moreover, the learning process is usually constrained to fixed datasets within narrow, isolated tasks, which can hardly lead to the emergence of more complex and autonomous intelligent behaviors. In essence, continual learning and adaptation capabilities, while often regarded as fundamental pillars of every intelligent agent, have been mostly left out of the main AI research focus. In this talk, we explore the application of these ideas in the context of Vision, with a focus on (deep) continual learning strategies for object recognition running at the edge on highly constrained hardware devices.
Continual Learning: Another Step Towards Truly Intelligent Machines
1. Continual Learning: Another Step Towards Truly Intelligent Machines
Introduction Meetup @ Numenta
16-09-2019
Vincenzo Lomonaco
vincenzo.lomonaco@unibo.it
Postdoctoral Researcher @ University of Bologna
Supervisor: Davide Maltoni
2. About me
• Post-Doc @ University of Bologna
• Research Affiliate @ AI Labs
• Teaching Assistant of the courses
Machine Learning and Computer
Architectures @ UniBo
• Author and Technical Reviewer of the
online course Deep Learning with R and
the book R Deep Learning Essentials.
• Co-Founder and President of
ContinualAI.org
• Co-Founder and Board Member of Data
Science Bologna and AIforPeople.org
3. What’s ContinualAI?
• ContinualAI is a non-profit research organization and
the largest research community on Continual Learning
for AI.
• It counts more than 550 members across 17 different
time zones, from top-notch research institutions.
• Learn more about ContinualAI at www.continualai.org
6. Outline
1. Personal Research Trajectory and Vision
2. Continual Learning: State-of-the-art
3. Rehearsal-free and Task-agnostic Online Continual Learning
4. Current Work and Research Direction
8. Research Trajectory and Vision
2014: I meet Davide Maltoni, who had been working on HTMs since 2011. I read “On Intelligence” and join his quest to understand intelligence and build it in silicon. Master’s thesis published: “Comparing HTMs and CNNs on Object Recognition Tasks”.
2016: We abandon HTM (1st gen.) to work directly on top of deep learning, with a focus on Continual Learning, in particular Continual Learning from video sequences.
2017: Visiting Scholar at Purdue University, working on continual reinforcement / unsupervised learning.
2018: Visiting Scholar at ENSTA ParisTech, working on Continual Learning for robotics and a more comprehensive CL framework definition.
2019: I defend my PhD dissertation, “Continual Learning with Deep Architectures”, putting everything together. Post-doc @ UniBo on the same topic.
Long-term vision: “Understand the key computational principles of intelligence and build truly intelligent machines.”
Main research goal: “Closing the gap between the HTM theory and current AI systems.”
9. Our Works with HTMs (1st Gen.)
1. D. Maltoni. Pattern Recognition by Hierarchical Temporal Memory. Technical Report, DEIS - University of Bologna, April 2011.
2. D. Maltoni and E.M. Rehn. Incremental Learning by Message Passing in Hierarchical Temporal Memory. In 5th Workshop on Artificial Neural Networks in Pattern Recognition (ANNPR12), Trento (Italy), pp. 24-35, September 2012.
3. E.M. Rehn and D. Maltoni. Incremental Learning by Message Passing in Hierarchical Temporal Memory. Neural Computation, vol. 26, no. 8, pp. 1763-1809, August 2014.
4. D. Maltoni and V. Lomonaco. Semi-supervised Tuning from Temporal Coherence. In International Conference on Pattern Recognition (ICPR16), Cancun, Mexico, December 2016.
17. CL Framework
[Figure: CL framework diagram with the CL algorithm at its core; photo of the Mini-spot robot from Boston Dynamics, 2018]
Lesort T., Lomonaco V. et al. Continual Learning for Robotics. arXiv:1907.00182.
19. 3 Short-term Research Objectives for CL
1. Rehearsal-Free: Raw data cannot be stored and re-used
for rehearsal.
2. Task Agnostic: No use of supplementary task supervised
signal “t”.
3. Online: Bounded computational and memory
overheads, efficient, real-time updates (possibly one
data instance at a time).
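A minimal sketch of what these three constraints imply for a training loop (names and the toy data stream are hypothetical, not from the talk): the model is updated one example at a time, stores no raw data, and never sees a task label.

```python
import numpy as np

class OnlineLinearClassifier:
    """Linear classifier trained online: one (x, y) pair per update,
    no stored raw data (rehearsal-free), no task id (task-agnostic)."""

    def __init__(self, n_features, n_classes, lr=0.1):
        self.W = np.zeros((n_classes, n_features))
        self.b = np.zeros(n_classes)
        self.lr = lr

    def predict(self, x):
        return int(np.argmax(self.W @ x + self.b))

    def update(self, x, y):
        # Softmax cross-entropy gradient for a single example.
        logits = self.W @ x + self.b
        p = np.exp(logits - logits.max())
        p /= p.sum()
        p[y] -= 1.0  # dL/dlogits
        self.W -= self.lr * np.outer(p, x)
        self.b -= self.lr * p

# A toy stream of examples arriving one at a time, with no task boundaries.
rng = np.random.default_rng(0)
clf = OnlineLinearClassifier(n_features=4, n_classes=2)
for _ in range(500):
    y = int(rng.integers(0, 2))
    x = rng.normal(loc=2.0 * y - 1.0, scale=0.5, size=4)  # class-dependent mean
    clf.update(x, y)
```

Each update touches a fixed amount of memory and compute, so the overhead stays bounded no matter how long the stream runs.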
20. Task Agnostic Continual Learning
1. New Instances (NI)
2. New Classes (NC)
3. New Instances and Classes (NIC)
[Diagram: an initial batch followed by a sequence of incremental batches over time]
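The distinction between the three scenarios can be sketched by how an incremental batch relates to the classes seen so far (a toy helper, assuming the usual NI/NC/NIC semantics):

```python
def batch_kind(seen_classes, batch_classes):
    """Classify an incremental batch relative to classes seen so far:
    NI  -> only already-known classes (new instances of them),
    NC  -> only previously unseen classes,
    NIC -> a mixture of both."""
    new = set(batch_classes) - set(seen_classes)
    old = set(batch_classes) & set(seen_classes)
    if new and old:
        return "NIC"
    if new:
        return "NC"
    return "NI"

seen = {0, 1, 2}
print(batch_kind(seen, [0, 2]))  # NI: only known classes
print(batch_kind(seen, [3, 4]))  # NC: only new classes
print(batch_kind(seen, [1, 3]))  # NIC: mixture
```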
21. CORe50 Website
Dataset, Benchmark, code and additional
information freely available at:
vlomonaco.github.io/core50
Lomonaco V. and Maltoni D. CORe50: a New Dataset and Benchmark for Continuous Object Recognition. CoRL 2017.
22. CORe50: a Video Benchmark for CL and Object Recognition/Detection
23. CORe50: a Video Benchmark for CL and Object Recognition/Detection
# Images: 164,866
Format: RGB-D
Image size: 350x350 (128x128 cropped)
# Categories: 10
# Obj. x Cat.: 5
# Sessions: 11
# img. x Sess.: ~300
# Outdoor Sess.: 3
Acquisition Sett.: hand held
29. AR-1*: Regularization Phase
● Computationally efficient (independent of the number of training batches)
● Just one Fisher matrix (running sum + max clip)
● Importance of Batch Renormalization
Lomonaco V., Maltoni D., Pellegrini L. Fine-Grained Continual Learning. arXiv:1907.03799, 2019.
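The "one Fisher matrix, running sum + max clip" idea can be sketched as an EWC-style penalty with a single importance estimate that is accumulated over batches and clipped, so memory and compute stay constant over time (a simplified sketch under those assumptions, not the authors' exact update):

```python
import numpy as np

def update_fisher(F_running, grads, clip_max=0.001):
    """Accumulate squared gradients into a single running Fisher estimate,
    clipping each entry at clip_max so no weight becomes permanently frozen.
    One matrix regardless of the number of training batches."""
    F_running = F_running + grads ** 2
    return np.clip(F_running, 0.0, clip_max)

def ewc_penalty(theta, theta_star, F, lam=1.0):
    """Quadratic penalty anchoring important weights to their old values."""
    return lam * np.sum(F * (theta - theta_star) ** 2)

rng = np.random.default_rng(1)
F = np.zeros(5)
for _ in range(3):                 # three "batches" of gradients
    F = update_fisher(F, rng.normal(size=5))
```

The clip bound keeps every entry below `clip_max`, so the penalty can never grow without bound as batches accumulate.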
30. AR-1*: Architectural Phase
● CWR*: a generalization of CWR+ that handles the NI, NC and NIC settings agnostically
● Dual-memory system for memory consolidation
● Based on zero-initialization for new classes, weight consolidation and fine-tuning for already encountered classes
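The dual-memory idea can be sketched for the output layer alone: a consolidated copy of the class weights and a temporary copy trained on each batch, with zero-init for never-seen classes and a weighted merge back into the consolidated memory (a simplified sketch, not the exact CWR* update rule):

```python
import numpy as np

class CWRHead:
    """Simplified CWR*-style output head: consolidated weights (cw) plus
    a temporary copy (tw) used during each training batch (dual-memory)."""

    def __init__(self, n_features):
        self.cw = {}     # class id -> consolidated weight vector
        self.past = {}   # class id -> how many batches it was trained in
        self.n_features = n_features

    def begin_batch(self, classes):
        # tw starts from cw for known classes, zero for new ones (zero-init).
        self.tw = {c: self.cw.get(c, np.zeros(self.n_features)).copy()
                   for c in classes}

    def end_batch(self):
        # Weighted consolidation: classes seen often change less.
        for c, w in self.tw.items():
            k = self.past.get(c, 0)
            old = self.cw.get(c, np.zeros(self.n_features))
            self.cw[c] = (old * k + w) / (k + 1)
            self.past[c] = k + 1

head = CWRHead(n_features=3)
head.begin_batch([0, 1])
head.tw[0] += 1.0            # stand-in for training on this batch
head.end_batch()
head.begin_batch([0, 2])     # class 0 returns, class 2 is brand new
```

Because only the output-layer weights of the classes in the current batch are touched, the scheme works the same whether a batch brings new instances, new classes, or both.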
31. CORe50 - NICv2 Results
● -45% avg. memory (0%-92%)
● -49% avg. compute (0%-94%)
● -20% accuracy at the end of the last batch
Lomonaco V., Maltoni D., Pellegrini L. Fine-Grained Continual Learning. arXiv:1907.03799, 2019.
33. Real-World Continual Learning on Embedded Systems
Pellegrini L., Graffieti G., Lomonaco V. and Maltoni D. Towards Continual Learning on the Edge. To be published.
34. AR-1*: Closing the Accuracy Gap with Latent Rehearsal
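Latent rehearsal stores intermediate activations instead of raw images: the frozen lower layers are not recomputed for replayed patterns, and each stored pattern is much smaller than a raw image. A sketch under those assumptions (the extractor and dimensions are toy stand-ins):

```python
import numpy as np

rng = np.random.default_rng(42)
W_frozen = rng.normal(size=(128, 16))   # 128-dim input -> 16-dim latent

def frozen_feature_extractor(x):
    """Stands in for the frozen lower layers of the network."""
    return np.tanh(x @ W_frozen)

latent_buffer = []                       # stores latent batches, never raw images

def train_step(raw_batch, max_replay_batches=2):
    latents = frozen_feature_extractor(raw_batch)
    # Mix current latents with replayed ones; only the upper layers would
    # then be updated on the combined batch (upper-layer update omitted).
    replay = latent_buffer[-max_replay_batches:]
    combined = np.concatenate([latents] + replay, axis=0)
    latent_buffer.append(latents)        # 16 floats/pattern instead of 128
    return combined

first = train_step(rng.normal(size=(4, 128)))
second = train_step(rng.normal(size=(4, 128)))
```

Replaying at the latent level trades a little plasticity in the frozen layers for a large cut in both rehearsal memory and per-step compute.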
36. AR-1*: Sparse Representations
● Imposing sparsity on the activations (from ~55% to ~35%) does not affect accuracy.
● It has been shown that sparsity may help the CL process.
● Less memory overhead for latent rehearsal.
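One simple way to impose activation sparsity (a sketch, not necessarily the method used in the talk) is to keep only the k largest-magnitude units; sparse latents can then be stored as (index, value) pairs, which is where the memory saving for latent rehearsal comes from:

```python
import numpy as np

def topk_sparsify(a, k):
    """Keep only the k largest-magnitude activations, zeroing the rest."""
    idx = np.argsort(np.abs(a))[-k:]
    out = np.zeros_like(a)
    out[idx] = a[idx]
    return out

a = np.array([0.1, -2.0, 0.5, 3.0, -0.2, 0.05])
s = topk_sparsify(a, k=2)
# Storing only the k nonzero (index, value) pairs cuts the per-pattern
# memory of latent rehearsal roughly by a factor of n/k.
```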
37. Future Works and Research Direction
1. Latent Generative Replay
2. Lowering the amount of supervision (Unsupervised Reinforcement Learning, Active Learning)
3. Infer or make use of the sparse “task signal” (context modulation)
4. Sequence Learning / Temporal Coherence Integration
5. Improve robustness in real-world embedded applications (smartphone devices, Nao robot, …)
Maltoni D. and Lomonaco V. Semi-Supervised Tuning from Temporal Coherence. ICPR 2016.
Lomonaco V., Desai K., Maltoni D. and Culurciello E. Continual Reinforcement Learning in 3D Non-stationary Environments. arXiv:1905.10112, 2019.
38. AR-1*: Closing the Accuracy Gap with Latent Generative Replay
39. Questions?