Discussion Paper: The Integrity of Medical AI

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Deep learning has proven itself to be an incredible asset to the medical community. However, with offensive AI, the technology can be turned against the medical community: adversarial samples can be used to cause misdiagnosis, and medical deepfakes can be used to fool both radiologists and machines alike. In this short discussion paper, we examine the issue of offensive AI from the perspective of healthcare. We discuss how defense researchers in this domain have responded to the threat and their current challenges. We conclude by arguing that conventional security mechanisms are a better approach to mitigating these threats than algorithm-based solutions.

Original language: American English
Title of host publication: WDC 2022 - Proceedings of the 1st Workshop on Security Implications of Deepfakes and Cheapfakes
Pages: 31-33
Number of pages: 3
ISBN (Electronic): 9781450391788
DOIs
State: Published - 30 May 2022
Event: 1st ACM Workshop on Security Implications of Deepfakes and Cheapfakes, WDC 2022, co-located with ACM AsiaCCS 2022 - Virtual, Online, Japan
Duration: 30 May 2022 → …

Publication series

Name: WDC 2022 - Proceedings of the 1st Workshop on Security Implications of Deepfakes and Cheapfakes

Conference

Conference: 1st ACM Workshop on Security Implications of Deepfakes and Cheapfakes, WDC 2022, co-located with ACM AsiaCCS 2022
Country/Territory: Japan
City: Virtual, Online
Period: 30/05/22 → …

Keywords

  • adversarial examples
  • adversarial machine learning
  • deep fake
  • deepfake
  • medical deepfake
  • medicine
  • offensive ai
  • radiology
  • security

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Computer Science Applications
  • Information Systems
  • Software

