TY - JOUR
T1 - Global optimization of objective functions represented by ReLU networks
AU - Strong, Christopher A.
AU - Wu, Haoze
AU - Zeljić, Aleksandar
AU - Julian, Kyle D.
AU - Katz, Guy
AU - Barrett, Clark
AU - Kochenderfer, Mykel J.
N1 - Publisher Copyright: © 2021, The Author(s), under exclusive licence to Springer Science+Business Media LLC, part of Springer Nature.
PY - 2023/10
Y1 - 2023/10
AB - Neural networks can learn complex, non-convex functions, and it is challenging to guarantee their correct behavior in safety-critical contexts. Many approaches exist to find failures in networks (e.g., adversarial examples), but these cannot guarantee the absence of failures. Verification algorithms address this need and provide formal guarantees about a neural network by answering “yes or no” questions. For example, they can answer whether a violation exists within certain bounds. However, individual “yes or no” questions cannot answer quantitative questions such as “what is the largest error within these bounds”; the answers to these lie in the domain of optimization. Therefore, we propose strategies to extend existing verifiers to perform optimization and find: (i) the most extreme failure in a given input region and (ii) the minimum input perturbation required to cause a failure. A naive approach using a bisection search with an off-the-shelf verifier results in many expensive and overlapping calls to the verifier. Instead, we propose an approach that tightly integrates the optimization process into the verification procedure, achieving better runtime performance than the naive approach. We evaluate our approach, implemented as an extension of Marabou, a state-of-the-art neural network verifier, and compare its performance with the bisection approach and MIPVerify, an optimization-based verifier. We observe complementary performance between our extension of Marabou and MIPVerify.
KW - Adversarial examples
KW - Marabou
KW - Neural network verification
KW - Optimization
UR - http://www.scopus.com/inward/record.url?scp=85117415874&partnerID=8YFLogxK
DO - 10.1007/s10994-021-06050-2
M3 - Article
SN - 0885-6125
VL - 112
SP - 3685
EP - 3712
JO - Machine Learning
JF - Machine Learning
IS - 10
ER -