Abstract
Segmentation of medical images plays a critical role in various clinical applications, facilitating precise diagnosis, treatment planning, and disease monitoring. However, the scarcity of annotated data poses a significant challenge for training deep learning models in the medical imaging domain. In this paper, we propose a novel approach for minimally-guided zero-shot segmentation of medical images using the Segment Anything Model (SAM), originally trained on natural images. The method leverages SAM’s ability to segment arbitrary objects in natural scenes and adapts it to the medical domain without any labeled medical data, except for a few foreground and background points on the test image itself. To this end, we introduce a two-stage process: extracting an initial mask from self-similarity maps, followed by test-time fine-tuning of SAM. We conduct experiments on diverse medical imaging datasets, including AMOS22, MoNuSeg, and the Gland Segmentation (GlaS) challenge, and demonstrate the effectiveness of our approach. Our code is publicly available at https://github.com/talshaharabany/ZeroShotSAM.
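The abstract only outlines the first stage at a high level. Below is a minimal, illustrative sketch (not the code from the linked repository) of how an initial mask could be extracted from a self-similarity map, assuming a dense feature map of the test image (e.g. from a frozen image encoder) and a few user-provided foreground and background points; the function and parameter names are hypothetical, and the resulting mask would then feed the second stage of prompting and test-time fine-tuning SAM.

```python
# Illustrative sketch only; assumes a (C, H, W) feature map is already available.
import torch
import torch.nn.functional as F

def initial_mask_from_self_similarity(features, fg_points, bg_points, margin=0.0):
    """features: (C, H, W) dense embedding of the test image.
    fg_points / bg_points: lists of (row, col) coordinates in feature-map space.
    Returns a binary (H, W) initial mask."""
    C, H, W = features.shape
    feats = F.normalize(features.reshape(C, -1), dim=0)      # unit-norm feature per pixel

    def mean_similarity(points):
        # Average cosine similarity between every pixel and the given prompt points.
        idx = torch.tensor([r * W + c for r, c in points])
        seeds = feats[:, idx]                                 # (C, num_points)
        return (seeds.T @ feats).mean(dim=0).reshape(H, W)

    fg_map = mean_similarity(fg_points)
    bg_map = mean_similarity(bg_points)
    # Keep pixels more similar to the foreground clicks than to the background
    # clicks; in the full pipeline this coarse mask is subsequently refined
    # by prompting and test-time fine-tuning SAM.
    return (fg_map > bg_map + margin).float()

# Toy usage with random features standing in for real encoder output.
feats = torch.randn(256, 64, 64)
mask = initial_mask_from_self_similarity(feats, fg_points=[(32, 32)], bg_points=[(4, 4)])
print(mask.shape, int(mask.sum()))
```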
| Original language | English |
|---|---|
| Pages (from-to) | 1387-1400 |
| Number of pages | 14 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 250 |
| State | Published - 2024 |
| Event | 7th International Conference on Medical Imaging with Deep Learning, MIDL 2024, Paris, France, 3–5 Jul 2024 |
All Science Journal Classification (ASJC) codes
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability