Trustworthy Autonomous System Development

Joseph Sifakis, David Harel

Research output: Contribution to journal › Article › peer-review

Abstract

Autonomous systems emerge from the need to progressively replace human operators by autonomous agents in a wide variety of application areas. We offer an analysis of the state of the art in developing autonomous systems, focusing on design and validation and showing that the multi-faceted challenges involved go well beyond the limits of weak AI. We argue that traditional model-based techniques are defeated by the complexity of the problem, while solutions based on end-to-end machine learning fail to provide the necessary trustworthiness. We advocate a hybrid design approach, which combines the two, adopting the best of each, and seeks tradeoffs between trustworthiness and performance. We claim that traditional risk analysis and mitigation techniques fail to scale and discuss the trend of moving away from correctness at design time and toward reliance on runtime assurance techniques. We argue that simulation and testing remain the only realistic approach for global validation and show how current methods can be adapted to autonomous systems. We conclude by discussing the factors that will play a decisive role in the acceptance of autonomous systems and by highlighting the urgent need for new theoretical foundations.
Original language: English
Article number: 40
Number of pages: 24
Journal: Transactions on Embedded Computing Systems
Volume: 22
Issue number: 3
Early online date: 18 Apr 2023
DOIs:
State: Published - May 2023
