Improved segmentation by adversarial U-Net

David Sriker, Dana Cohen, Noa Cahan, Hayit Greenspan

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Medical image segmentation plays a fundamental role in many computer-aided diagnosis (CAD) applications. Accurate segmentation of medical images is a key step in tracking changes over time, contouring during radiotherapy planning, and more. One of the state-of-the-art models for medical image segmentation is the U-Net, which consists of an encoder-decoder architecture; many variations of the U-Net architecture exist. In this work, we present a new training procedure that combines the U-Net with adversarial training, which we refer to as Adversarial U-Net. We show that the Adversarial U-Net outperforms the conventional U-Net in three diverse domains that differ in acquisition method as well as physical characteristics, and yields smooth, improved segmentation maps.
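
The procedure described above pairs a U-Net segmenter with a discriminator trained to tell ground-truth masks from predicted ones, so the segmenter is driven by both a pixel-wise loss and an adversarial loss. The sketch below illustrates this general idea in PyTorch; it is not the authors' published implementation: the tiny encoder-decoder stands in for a full U-Net, and the discriminator design, the loss weight lambda_adv, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch of adversarial training for a segmentation network (PyTorch).
# NOT the paper's implementation: TinySegNet stands in for a full U-Net, and
# the discriminator, lambda_adv, and optimizer settings are assumptions.
import torch
import torch.nn as nn


class TinySegNet(nn.Module):
    """Stand-in for a U-Net: encoder-decoder producing 1-channel mask logits."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1)

    def forward(self, x):
        return self.dec(self.enc(x))  # logits, same spatial size as the input


class Discriminator(nn.Module):
    """Scores (image, mask) pairs: ground-truth pairs vs. predicted pairs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 1, 3, stride=2, padding=1))

    def forward(self, image, mask):
        return self.net(torch.cat([image, mask], dim=1))  # patch-wise logits


seg_net, disc = TinySegNet(), Discriminator()
opt_s = torch.optim.Adam(seg_net.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
lambda_adv = 0.1  # weight of the adversarial term; an assumption, not from the paper


def train_step(image, gt_mask):
    # 1) Discriminator update: ground-truth pairs -> 1, predicted pairs -> 0.
    with torch.no_grad():
        fake_mask = torch.sigmoid(seg_net(image))
    d_real = disc(image, gt_mask)
    d_fake = disc(image, fake_mask)
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Segmenter update: pixel-wise segmentation loss plus an adversarial term
    #    that rewards masks the discriminator mistakes for ground truth.
    pred_logits = seg_net(image)
    d_pred = disc(image, torch.sigmoid(pred_logits))
    loss_s = bce(pred_logits, gt_mask) + lambda_adv * bce(d_pred, torch.ones_like(d_pred))
    opt_s.zero_grad(); loss_s.backward(); opt_s.step()
    return loss_d.item(), loss_s.item()


# Dummy usage: a batch of two 64x64 grayscale slices with binary masks.
image = torch.randn(2, 1, 64, 64)
gt_mask = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(train_step(image, gt_mask))
```

In this kind of setup the adversarial term tends to penalize fragmented or implausible masks, which is consistent with the smoother segmentation maps reported in the abstract.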

Original language: English
Title of host publication: Medical Imaging 2021
Subtitle of host publication: Computer-Aided Diagnosis
Editors: Maciej A. Mazurowski, Karen Drukker
Publisher: SPIE
ISBN (Electronic): 9781510640238
DOIs
State: Published - 2021
Event: Medical Imaging 2021: Computer-Aided Diagnosis - Virtual, Online, United States
Duration: 15 Feb 2021 – 19 Feb 2021

Publication series

Name: Progress in Biomedical Optics and Imaging - Proceedings of SPIE
Volume: 11597

Conference

Conference: Medical Imaging 2021: Computer-Aided Diagnosis
Country/Territory: United States
City: Virtual, Online
Period: 15/02/21 – 19/02/21

Keywords

  • Computer Assisted Diagnosis
  • Convolutional Neural Network
  • Deep Learning
  • Image Segmentation
  • U-Net

All Science Journal Classification (ASJC) codes

  • Electronic, Optical and Magnetic Materials
  • Atomic and Molecular Physics, and Optics
  • Biomaterials
  • Radiology, Nuclear Medicine and Imaging
