Title | Continual Neural Mapping: Learning An Implicit Scene Representation from Sequential Observations |
Authors | Yan, Zike; Tian, Yuxin; Shi, Xuesong; Guo, Ping; Wang, Peng; Zha, Hongbin
Affiliation | Peking Univ, Sch EECS, Key Lab Machine Percept MOE, PKU Sense Time Machine Vis Joint Lab, Beijing, Peoples R China; Beihang Univ, Sch Automat Sci & Elect Engn, Beijing, Peoples R China; Intel Labs China, Beijing, Peoples R China
Issue Date | 2021 |
Conference | 2021 IEEE/CVF International Conference on Computer Vision (ICCV 2021)
Abstract | Recent advances have enabled a single neural network to serve as an implicit scene representation, establishing the mapping function between spatial coordinates and scene properties. In this paper, we take a further step towards continual learning of the implicit scene representation directly from sequential observations, namely Continual Neural Mapping. The proposed problem setting bridges the gap between batch-trained implicit neural representations and the streaming data commonly used in the robotics and vision communities. We introduce an experience replay approach to tackle an exemplary task of continual neural mapping: approximating a continuous signed distance function (SDF) from sequential depth images as a scene geometry representation. We show for the first time that a single network can represent scene geometry over time continually without catastrophic forgetting, while achieving promising trade-offs between accuracy and efficiency.
URI | http://hdl.handle.net/20.500.11897/646771 |
ISBN | 978-1-6654-2812-5 |
DOI | 10.1109/ICCV48922.2021.01549 |
Indexed | EI CPCI-S(ISTP) |
Appears in Collections: | School of Electronics Engineering and Computer Science; Key Laboratory of Machine Perception (Ministry of Education)
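The replay-based SDF fitting described in the abstract can be sketched in miniature. The following is a toy 1-D numpy illustration, not the paper's actual method: `true_sdf`, `TinySDFNet`, the frame split, and all hyperparameters are assumptions for demonstration. A small network is trained on a stream of "frames" that each observe a different slice of the domain, while a replay buffer of past samples is mixed into every batch so earlier regions are not forgotten.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_sdf(x):
    # Hypothetical 1-D "scene": signed distance to the interval [-0.5, 0.5].
    return np.abs(x) - 0.5

class TinySDFNet:
    """One-hidden-layer tanh MLP trained by plain gradient descent."""
    def __init__(self, hidden=32):
        self.W1 = rng.normal(0.0, 1.0, (1, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, 1))
        self.b2 = np.zeros(1)

    def forward(self, x):
        self.h = np.tanh(x @ self.W1 + self.b1)
        return self.h @ self.W2 + self.b2

    def step(self, x, y, lr=2e-2):
        pred = self.forward(x)
        err = (pred - y) / len(x)            # gradient of 0.5 * mean squared error
        gW2, gb2 = self.h.T @ err, err.sum(0)
        dh = (err @ self.W2.T) * (1.0 - self.h ** 2)
        gW1, gb1 = x.T @ dh, dh.sum(0)
        self.W1 -= lr * gW1; self.b1 -= lr * gb1
        self.W2 -= lr * gW2; self.b2 -= lr * gb2

net = TinySDFNet()
replay_x = np.empty((0, 1)); replay_y = np.empty((0, 1))

# Sequential "frames", each observing only a slice of the domain.
frames = [(-2.0, -0.5), (-0.5, 0.5), (0.5, 2.0)]
for lo, hi in frames:
    x_new = rng.uniform(lo, hi, (256, 1))
    y_new = true_sdf(x_new)
    for _ in range(2000):
        if len(replay_x):                    # mix current data with replayed samples
            idx = rng.integers(0, len(replay_x), 128)
            xb = np.vstack([x_new, replay_x[idx]])
            yb = np.vstack([y_new, replay_y[idx]])
        else:
            xb, yb = x_new, y_new
        net.step(xb, yb)
    replay_x = np.vstack([replay_x, x_new])  # store this frame for future replay
    replay_y = np.vstack([replay_y, y_new])

# After the whole stream, check accuracy on every region: with replay, the
# earliest region should remain accurate (no catastrophic forgetting).
region_mse = []
for lo, hi in frames:
    xt = np.linspace(lo, hi, 200).reshape(-1, 1)
    region_mse.append(float(((net.forward(xt) - true_sdf(xt)) ** 2).mean()))
final_mse = float(np.mean(region_mse))
print(region_mse, final_mse)
```

The design point being illustrated: without the replayed samples, training on the last frame alone would pull the network away from regions seen earlier; mixing a fixed-size replay batch into each update keeps a single network accurate over the full domain, which is the trade-off the abstract refers to.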