AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation

Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, Martin Vechev

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

We present AI2, the first sound and scalable analyzer for deep neural networks. Based on overapproximation, AI2 can automatically prove safety properties (e.g., robustness) of realistic neural networks (e.g., convolutional neural networks). The key insight behind AI2 is to phrase reasoning about safety and robustness of neural networks in terms of classic abstract interpretation, enabling us to leverage decades of advances in that area. Concretely, we introduce abstract transformers that capture the behavior of fully connected and convolutional neural network layers with rectified linear unit activations (ReLU), as well as max pooling layers. This allows us to handle real-world neural networks, which are often built out of those types of layers. We present a complete implementation of AI2 together with an extensive evaluation on 20 neural networks. Our results demonstrate that: (i) AI2 is precise enough to prove useful specifications (e.g., robustness), (ii) AI2 can be used to certify the effectiveness of state-of-the-art defenses for neural networks, (iii) AI2 is significantly faster than existing analyzers based on symbolic analysis, which often take hours to verify simple fully connected networks, and (iv) AI2 can handle deep convolutional networks, which are beyond the reach of existing methods.
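
To make the idea of abstract transformers concrete, here is a minimal sketch of abstract interpretation for a fully connected ReLU network over the simple interval (box) domain. AI2 itself uses more precise domains (e.g., zonotopes), so this is an illustration of the general technique, not the paper's implementation; the function names, toy weights, and robustness check below are all illustrative assumptions.

```python
import numpy as np

def affine_transformer(lower, upper, W, b):
    # Sound bounds for x -> W @ x + b on a box: positive weights pair each
    # output extreme with the matching input bound, negative weights with
    # the opposite one.
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lower + W_neg @ upper + b, W_pos @ upper + W_neg @ lower + b

def relu_transformer(lower, upper):
    # ReLU is monotone, so clamping both bounds at zero is exact on boxes.
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

def maxpool_transformer(lower, upper, pool=2):
    # Max of intervals per window: [max of lower bounds, max of upper bounds].
    return (lower.reshape(-1, pool).max(axis=1),
            upper.reshape(-1, pool).max(axis=1))

# Toy 2-layer network with made-up weights, analyzed on an L-infinity ball
# of radius eps around the input x.
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.1, -0.2])
W2, b2 = np.array([[2.0, -1.0], [-1.0, 1.0]]), np.array([0.0, 0.0])
x, eps = np.array([1.0, 0.5]), 0.05
lo, hi = x - eps, x + eps          # abstract input: a box around x

lo, hi = relu_transformer(*affine_transformer(lo, hi, W1, b1))
lo, hi = affine_transformer(lo, hi, W2, b2)

# Class 0 is certified robust on the whole ball if the lower bound of its
# logit beats the upper bound of class 1's logit. The check is sound but
# may fail to certify a truly robust network, since boxes overapproximate.
print("certified:", bool(lo[0] > hi[1]))
```

Because every transformer overapproximates the layer's true behavior, a positive answer is a proof of robustness for the entire input region, while a negative answer is inconclusive; richer domains like the zonotopes used in AI2 tighten the bounds and certify more properties.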

Original language: English
Title of host publication: Proceedings - 2018 IEEE Symposium on Security and Privacy, SP 2018
Pages: 3-18
Number of pages: 16
ISBN (Electronic): 9781538643525
DOIs
State: Published - 23 Jul 2018
Externally published: Yes
Event: 39th IEEE Symposium on Security and Privacy, SP 2018 - San Francisco, United States
Duration: 21 May 2018 - 23 May 2018

Publication series

Name: Proceedings - IEEE Symposium on Security and Privacy
Volume: 2018-May

Conference

Conference: 39th IEEE Symposium on Security and Privacy, SP 2018
Country/Territory: United States
City: San Francisco
Period: 21/05/18 - 23/05/18

Keywords

  • Abstract Interpretation
  • Neural Networks
  • Reliable Machine Learning
  • Robustness

All Science Journal Classification (ASJC) codes

  • Safety, Risk, Reliability and Quality
  • Software
  • Computer Networks and Communications
