Abstract
Research on adversarial evasion attacks primarily focuses on neural network models due to their popularity in fields such as computer vision and natural language processing, as well as their properties that facilitate the search for adversarial examples with minimal input changes. However, decision trees and tree ensembles, widely used for their high performance and interpretability in domains dominated by tabular data, are also vulnerable to adversarial attacks. A significant challenge is that existing defense strategies for tree ensembles often involve modifying the model architecture or retraining the model, which can degrade performance and may not be feasible in practice. In this research, we address the challenging problem of detecting adversarial evasion attacks on decision tree ensembles without altering the original model or compromising its performance. We introduce a novel method that leverages representation learning based on the tree structure to detect adversarial samples effectively. Our approach generates new data representations that enhance the ability to distinguish between normal and adversarial samples. Our analysis shows that this method significantly improves detection rates compared to state-of-the-art techniques and detectors trained on the original dataset representation, providing a practical solution to enhance the security of tree-based models against adversarial attacks.
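The core idea in the abstract, deriving a new data representation from the structure of a fitted, unmodified tree ensemble and training a separate detector on it, can be sketched as follows. The paper's exact representation is not specified here; the leaf-index embedding, the `OneHotEncoder` step, and the toy normal/adversarial labels below are illustrative assumptions, not the authors' method.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

# Original tree ensemble, trained on tabular data and left unmodified.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Tree-structure representation: the index of the leaf each sample reaches
# in every tree of the ensemble.
leaves = forest.apply(X)  # shape: (n_samples, n_estimators)
leaf_features = OneHotEncoder(handle_unknown="ignore").fit_transform(leaves)

# Detector trained on the new representation. The labels here are toy
# placeholders (assumption); in practice the detector would be fit on
# representations of known normal and adversarial samples.
is_adv = np.zeros(len(X), dtype=int)
is_adv[:50] = 1
detector = LogisticRegression(max_iter=1000).fit(leaf_features, is_adv)

print(leaves.shape)  # (500, 50)
```

Because the detector operates on a derived representation rather than on the model itself, the original ensemble needs no retraining, which matches the deployment constraint the abstract highlights.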
| Original language | American English |
|---|---|
| Article number | 102964 |
| Journal | Information Fusion |
| Volume | 118 |
| DOIs | |
| State | Published - 1 Jun 2025 |
Keywords
- Adversarial detection
- Adversarial learning
- Decision trees
- Evasion attacks
- Machine learning
- Representation learning
- Tree ensembles
All Science Journal Classification (ASJC) codes
- Software
- Signal Processing
- Information Systems
- Hardware and Architecture