TY - GEN
T1 - DiL
T2 - 2025 IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2025
AU - Giloni, Amit
AU - Hofman, Omer
AU - Morikawa, Ikuya
AU - Shimizu, Toshiya
AU - Elovici, Yuval
AU - Shabtai, Asaf
N1 - Publisher Copyright: © 2025 IEEE.
PY - 2025/1/1
Y1 - 2025/1/1
AB - Although object detection models are widely used, their predictive performance has been shown to deteriorate when faced with abnormal scenes. Such abnormalities can occur naturally (caused by partially occluded or out-of-distribution objects) or deliberately (in the case of an adversarial attack). Existing uncertainty quantification methods, such as object detection evaluation metrics and label-uncertainty quantification techniques, do not consider an abnormality's effect on the model's internal decision-making process. Furthermore, practical methods that consider the effects of abnormalities (such as abnormality detection and mitigation) are designed to deal with only one type of abnormality. We present distinctive localization (DiL), an unsupervised, practical, and explainable metric that quantitatively interprets any type of abnormality and can be leveraged for preventive purposes. By utilizing XAI techniques (saliency maps), DiL maps the objectness of a given scene and captures the model's inner uncertainty regarding the identified (and missed) objects. DiL was evaluated across nine use cases, including partially occluded and out-of-distribution objects, as well as adversarial patches, in both physical and digital spaces, on benchmark datasets and on our newly created E-PO dataset (generated with DALL-E 2). Our results show that DiL: i) successfully interprets and quantifies an abnormality's effect on the model's decision-making process, regardless of the abnormality type; and ii) can be leveraged to detect and mitigate this effect.
KW - abnormality detection
KW - abnormality mitigation
KW - adversarial ML
KW - object detection
KW - out-of-distribution
KW - partial occlusion
KW - uncertainty metric
UR - http://www.scopus.com/inward/record.url?scp=105003641860&partnerID=8YFLogxK
U2 - 10.1109/WACV61041.2025.00249
DO - 10.1109/WACV61041.2025.00249
M3 - Conference contribution
T3 - Proceedings - 2025 IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2025
SP - 2507
EP - 2516
BT - Proceedings - 2025 IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2025
Y2 - 28 February 2025 through 4 March 2025
ER -