Advanced confidence methods in deep learning

Yuval Meir, Ofek Tevet, Ella Koresh, Yarden Tzach, Ido Kanter

Research output: Contribution to journal › Article › peer-review

Abstract

The typical aim of classification tasks is to maximize the accuracy of the predicted label for a given input. This accuracy increases with the confidence, which is the maximal value of the output units, and when the accuracy equals confidence, calibration is achieved. Herein, several methods are proposed to enhance the accuracy of inputs with similar confidence, extending significantly beyond calibration. Using the first gap between the maximal and second maximal output values, the accuracy of the inputs with similar confidence is enhanced. The extension of the confidence or confidence gap to their minimal value among a set of augmented inputs further enhances the accuracy of inputs with similar confidence. Enhanced accuracies are demonstrated on EfficientNet-B0 trained on ImageNet and CIFAR-100, and VGG-16 trained on CIFAR-100. The results suggest improved applications for high-accuracy classification tasks that require manual operation for a given fraction of low-accuracy inputs.
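The confidence and confidence-gap measures described in the abstract can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' code: the function names, the softmax normalization, and the array layout (augmentations stacked along the first axis) are assumptions for the sake of the example.

```python
import numpy as np

def softmax(logits):
    # Convert raw logits to output probabilities (numerically stable).
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def confidence(probs):
    # Confidence: the maximal value among the output units.
    return probs.max(axis=-1)

def confidence_gap(probs):
    # Gap between the maximal and second-maximal output values.
    top2 = np.sort(probs, axis=-1)[..., -2:]
    return top2[..., 1] - top2[..., 0]

def min_over_augmentations(aug_probs, measure):
    # Extend a measure (confidence or gap) to its minimal value
    # across a set of augmented versions of the same input,
    # stacked along axis 0: shape (n_augmentations, n_inputs, n_classes).
    return measure(aug_probs).min(axis=0)
```

Inputs would then be ranked by these measures, so that a fixed fraction of low-scoring (low-accuracy) inputs can be routed to manual handling.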

Original language: English
Article number: 129758
Journal: Physica A: Statistical Mechanics and its Applications
Volume: 641
DOIs
State: Published - 1 May 2024

Keywords

  • Deep learning
  • Machine learning

All Science Journal Classification (ASJC) codes

  • Statistical and Nonlinear Physics
  • Statistics and Probability
