TY - GEN
T1 - An Abstraction-Based Framework for Neural Network Verification
AU - Elboher, Yizhak Yisrael
AU - Gottschlich, Justin
AU - Katz, Guy
N1 - Publisher Copyright: © 2020, The Author(s).
PY - 2020
Y1 - 2020
AB - Deep neural networks are increasingly being used as controllers for safety-critical systems. Because neural networks are opaque, certifying their correctness is a significant challenge. To address this issue, several neural network verification approaches have recently been proposed. However, these approaches afford limited scalability, and applying them to large networks can be challenging. In this paper, we propose a framework that can enhance neural network verification techniques by using over-approximation to reduce the size of the network—thus making it more amenable to verification. We perform the approximation such that if the property holds for the smaller (abstract) network, it holds for the original as well. The over-approximation may be too coarse, in which case the underlying verification tool might return a spurious counterexample. Under such conditions, we perform counterexample-guided refinement to adjust the approximation, and then repeat the process. Our approach is orthogonal to, and can be integrated with, many existing verification techniques. For evaluation purposes, we integrate it with the recently proposed Marabou framework, and observe a significant improvement in Marabou’s performance. Our experiments demonstrate the great potential of our approach for verifying larger neural networks.
UR - http://www.scopus.com/inward/record.url?scp=85089245974&partnerID=8YFLogxK
U2 - https://doi.org/10.1007/978-3-030-53288-8_3
DO - 10.1007/978-3-030-53288-8_3
M3 - Conference contribution
SN - 9783030532871
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 43
EP - 65
BT - Computer Aided Verification - 32nd International Conference, CAV 2020, Proceedings
A2 - Lahiri, Shuvendu K.
A2 - Wang, Chao
T2 - 32nd International Conference on Computer Aided Verification, CAV 2020
Y2 - 21 July 2020 through 24 July 2020
ER -