Title: Reconstruction of natural visual scenes from neural spikes with deep neural networks
Authors: Zhang, Yichen
Jia, Shanshan
Zheng, Yajing
Yu, Zhaofei
Tian, Yonghong
Ma, Siwei
Huang, Tiejun
Liu, Jian K.
Affiliation: Peking Univ, Sch Elect Engn & Comp Sci, Natl Engn Lab Video Technol, Beijing, Peoples R China
Peng Cheng Lab, Shenzhen, Peoples R China
Univ Leicester, Dept Neurosci Psychol & Behav, Ctr Syst Neurosci, Leicester, Leics, England
Keywords: GANGLION-CELLS
PROCESSOR
ALGORITHM
RESPONSES
MODELS
IMAGES
Issue Date: May-2020
Publisher: NEURAL NETWORKS
Abstract: Neural coding is one of the central questions in systems neuroscience for understanding how the brain processes stimuli from the environment. It is also a cornerstone for designing brain-machine interface algorithms, where decoding incoming stimuli is essential for better performance of physical devices. Traditionally, researchers have focused on functional magnetic resonance imaging (fMRI) data as the neural signals of interest for decoding visual scenes. However, our visual perception operates on a fast, millisecond time scale in terms of events termed neural spikes, and there are few studies of decoding with spikes. Here we address this gap by developing a novel decoding framework based on deep neural networks, named the spike-image decoder (SID), for reconstructing natural visual scenes, including static images and dynamic videos, from experimentally recorded spikes of a population of retinal ganglion cells. The SID is an end-to-end decoder with neural spikes at one end and images at the other, which can be trained directly such that visual scenes are reconstructed from spikes in a highly accurate fashion. Our SID also outperforms existing fMRI decoding models on the reconstruction of visual stimuli. In addition, with the aid of a spike encoder, we show that the SID can be generalized to arbitrary visual scenes by using the MNIST, CIFAR10, and CIFAR100 image datasets. Furthermore, with a pre-trained SID, one can decode any dynamic video to achieve real-time encoding and decoding of visual scenes by spikes. Altogether, our results shed new light on neuromorphic computing for artificial visual systems, such as event-based visual cameras and visual neuroprostheses. (c) 2020 Elsevier Ltd. All rights reserved.
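The end-to-end idea in the abstract (spikes in, pixels out, trained directly) can be sketched as a toy. The code below is an illustrative assumption, not the paper's SID: a single-hidden-layer network maps a population spike-count vector to image pixels and is fit with mean-squared error on synthetic data; all sizes, names, and the data-generating process are invented for the sketch.

```python
import numpy as np

# Toy sketch of an end-to-end spike-to-image mapping (NOT the paper's SID):
# one hidden layer, full-batch gradient descent on MSE, synthetic data.
rng = np.random.default_rng(0)
n_neurons, n_pixels, n_hidden, n_samples = 20, 64, 32, 200

# Synthetic stand-in data: "images" generated as a linear mixture of spikes.
true_map = rng.normal(size=(n_neurons, n_pixels))
spikes = rng.poisson(2.0, size=(n_samples, n_neurons)).astype(float)
images = spikes @ true_map / n_neurons

# Network parameters.
W1 = rng.normal(scale=0.1, size=(n_neurons, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_pixels))
b2 = np.zeros(n_pixels)

def forward(x):
    """Spike counts in, reconstructed pixels out (end-to-end mapping)."""
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

initial_mse = mse(forward(spikes)[1], images)

lr = 0.05
for _ in range(500):                     # plain full-batch gradient descent
    h, pred = forward(spikes)
    err = (pred - images) / n_samples    # d(MSE)/d(pred), up to a factor of 2
    gW2, gb2 = h.T @ err, err.sum(axis=0)
    dh = (err @ W2.T) * (1.0 - h**2)     # backprop through tanh
    gW1, gb1 = spikes.T @ dh, dh.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

final_mse = mse(forward(spikes)[1], images)
print(f"MSE before training: {initial_mse:.4f}, after: {final_mse:.4f}")
```

The abstract's point is that decoding is trained directly between the two ends (spikes and pixels) rather than through an intermediate representation; here that shows up as a single loss on the reconstructed image.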
URI: http://hdl.handle.net/20.500.11897/587577
ISSN: 0893-6080
DOI: 10.1016/j.neunet.2020.01.033
Indexed: SCI(E)
Scopus
EI
Appears in Collections: School of Electronics Engineering and Computer Science (信息科学技术学院)

Files in This Work
There are no files associated with this item.

License: See PKU IR operational policies.