Title | An empirical study on TensorFlow program bugs |
Authors | Zhang, Yuhao; Chen, Yifan; Cheung, Shing-Chi; Xiong, Yingfei; Zhang, Lu |
Affiliation | Key Laboratory of High Confidence Software Technologies, MoE, EECS, Peking University, Beijing, China; Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Hong Kong |
Issue Date | 2018 |
Publisher | Proceedings of the 27th ACM SIGSOFT International Symposium on Software Testing and Analysis, ISSTA 2018 |
Citation | 27th ACM SIGSOFT International Symposium on Software Testing and Analysis, ISSTA 2018. 2018, 129-140. |
Abstract | Deep learning applications have become increasingly popular in important domains such as self-driving systems and facial identity systems. Defective deep learning applications may lead to catastrophic consequences. Although recent research efforts have been made on testing and debugging deep learning applications, the characteristics of deep learning defects have never been studied. To fill this gap, we studied deep learning applications built on top of TensorFlow and collected program bugs related to TensorFlow from Stack Overflow QA pages and GitHub projects. We extracted information from QA pages, commit messages, pull request messages, and issue discussions to examine the root causes and symptoms of these bugs. We also studied the strategies deployed by TensorFlow users for bug detection and localization. These findings help researchers and TensorFlow users gain a better understanding of coding defects in TensorFlow programs and point out a new direction for future research. © 2018 Association for Computing Machinery. |
URI | http://hdl.handle.net/20.500.11897/530839 |
ISBN | 9781450356992 |
DOI | 10.1145/3213846.3213866 |
Indexed | EI |
Appears in Collections: | School of Electronics Engineering and Computer Science; Key Laboratory of High Confidence Software Technologies (MoE) |