TY - GEN
T1 - Taking Over the Stock Market: Adversarial Perturbations Against Algorithmic Traders
T2 - European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML PKDD 2021
AU - Nehemya, Elior
AU - Mathov, Yael
AU - Shabtai, Asaf
AU - Elovici, Yuval
N1 - Publisher Copyright: © 2021, Springer Nature Switzerland AG.
PY - 2021/1/1
Y1 - 2021/1/1
AB - In recent years, machine learning has become prevalent in numerous tasks, including algorithmic trading. Stock market traders utilize machine learning models to predict the market’s behavior and execute an investment strategy accordingly. However, machine learning models have been shown to be susceptible to input manipulations called adversarial examples. Despite this risk, the trading domain remains largely unexplored in the context of adversarial learning. In this study, we present a realistic scenario in which an attacker influences algorithmic trading systems by using adversarial learning techniques to manipulate the input data stream in real time. The attacker creates a universal adversarial perturbation that is agnostic to the target model and time of use, which remains imperceptible when added to the input stream. We evaluate our attack on a real-world market data stream and target three different trading algorithms. We show that when added to the input stream, our perturbation can fool the trading algorithms at future unseen data points, in both white-box and black-box settings. Finally, we present various mitigation methods and discuss their limitations, which stem from the algorithmic trading domain. We believe that these findings should serve as a warning to the finance community regarding the threats in this area and promote further research on the risks associated with using automated learning models in the trading domain.
KW - Adversarial examples
KW - Algorithmic trading
UR - http://www.scopus.com/inward/record.url?scp=85115722168&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-86514-6_14
DO - 10.1007/978-3-030-86514-6_14
M3 - Conference contribution
SN - 9783030865139
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 221
EP - 236
BT - Machine Learning and Knowledge Discovery in Databases
A2 - Dong, Yuxiao
A2 - Kourtellis, Nicolas
A2 - Hammer, Barbara
A2 - Lozano, Jose A.
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 13 September 2021 through 17 September 2021
ER -