Title: An empirical study on TensorFlow program bugs
Authors: Zhang, Yuhao; Chen, Yifan; Cheung, Shing-Chi; Xiong, Yingfei; Zhang, Lu
Affiliations: Key Laboratory of High Confidence Software Technologies (MoE), EECS, Peking University, Beijing, China
Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Hong Kong
Issue Date: 2018
Publisher: 27th ACM SIGSOFT International Symposium on Software Testing and Analysis, ISSTA 2018
Citation: 27th ACM SIGSOFT International Symposium on Software Testing and Analysis, ISSTA 2018. 2018, 129-140.
Abstract: Deep learning applications are becoming increasingly popular in important domains such as self-driving systems and facial identity systems. Defective deep learning applications may lead to catastrophic consequences. Although recent research efforts have been made on testing and debugging deep learning applications, the characteristics of deep learning defects have never been studied. To fill this gap, we studied deep learning applications built on top of TensorFlow and collected program bugs related to TensorFlow from Stack Overflow QA pages and GitHub projects. We extracted information from QA pages, commit messages, pull request messages, and issue discussions to examine the root causes and symptoms of these bugs. We also studied the strategies deployed by TensorFlow users for bug detection and localization. These findings help researchers and TensorFlow users gain a better understanding of coding defects in TensorFlow programs and point out a new direction for future research. © 2018 Association for Computing Machinery.
URI: http://hdl.handle.net/20.500.11897/530839
ISBN: 9781450356992
DOI: 10.1145/3213846.3213866
Indexed: EI
Appears in Collections: School of Electronics Engineering and Computer Science (信息科学技术学院)
Key Laboratory of High Confidence Software Technologies, Ministry of Education (高可信软件技术教育部重点实验室)

Files in This Work
There are no files associated with this item.

License: See PKU IR operational policies.