Adversarial robustness for face recognition: how to introduce ensemble diversity among feature extractors?

Takuma Amada, Kazuya Kakizaki, Toshinori Araki, Seng Pei Liew, Joseph Keshet, Jun Furukawa

Research output: Contribution to journal › Conference article › peer-review

Abstract

An adversarial example (AX) is a maliciously crafted input that humans recognize correctly but machine learning models do not. This paper considers how to make deep learning-based face recognition systems robust against AXs. A large number of studies have proposed methods for protecting machine learning classifiers from AXs. One of the most successful among them is to prepare an ensemble of classifiers and promote diversity among them. Face recognition, however, typically relies on feature extractors rather than classifiers. We found that directly applying this successful method to feature extractors fails. We show that this failure is due to a lack of true diversity among the feature extractors and fix it by synchronizing the direction of features among models. Our method significantly enhances robustness against AXs under both white-box and black-box settings while slightly increasing accuracy. We also compare our method with adversarial training.
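The core idea above — promoting angular diversity among an ensemble of feature extractors after synchronizing their feature directions — can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the sign-flip synchronization rule, the `synchronize` and `diversity_penalty` helpers, and the random features standing in for extractor outputs are all hypothetical simplifications.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def synchronize(features, reference):
    # Hypothetical synchronization rule: flip each extractor's feature so it
    # points in roughly the same direction as a reference feature, so that a
    # diversity penalty compares meaningfully oriented vectors.
    return [f if cosine(f, reference) >= 0 else -f for f in features]

def diversity_penalty(features):
    # Mean pairwise cosine similarity across the ensemble's features;
    # minimizing this during training would push the extractors toward
    # genuinely diverse (less aligned) representations.
    k = len(features)
    sims = [cosine(features[i], features[j])
            for i in range(k) for j in range(i + 1, k)]
    return sum(sims) / len(sims)

# Stand-in for features of one input produced by 3 different extractors.
rng = np.random.default_rng(0)
feats = [rng.standard_normal(128) for _ in range(3)]

synced = synchronize(feats, feats[0])
penalty = diversity_penalty(synced)
print(f"ensemble diversity penalty: {penalty:.4f}")
```

In a real training loop, a term like `diversity_penalty` would be added to the recognition loss so that gradient descent trades off accuracy against feature diversity; the synchronization step is what makes the penalty reflect true diversity rather than arbitrary sign differences between models.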

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 2808
State: Published - 2021
Event: 2021 Workshop on Artificial Intelligence Safety, SafeAI 2021 - Virtual, Online
Duration: 8 Feb 2021 → …

All Science Journal Classification (ASJC) codes

  • General Computer Science
