Keywords: Analysis/Processing, AI/ML Software, Foundation model, Segmentation, Domain Adaptation
Motivation: Vision foundation models have demonstrated impressive capabilities in natural image segmentation. However, their application to MRI remains challenging due to the lack of domain-specific adaptation.
Goal(s): To introduce a method that adapts vision foundation models to MRI image segmentation without any model training or fine-tuning.
Approach: We propose a training-free adaptation pipeline that emulates human analogical thinking, leveraging few-shot examples to rapidly adapt the Segment Anything Model (SAM) to medical imaging (see the sketch below).
Results: The proposed approach outperforms both the original SAM and its successor, SAM 2, in quantitative metrics and qualitative comparisons.
Impact: Our training-free adaptation method circumvents the need for laborious data collection and labeling, providing a generalizable solution for applying vision foundation models to medical image segmentation.
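To make the idea of training-free, few-shot prompting concrete, the following is a minimal sketch, not the authors' pipeline: features from SAM's image encoder for a labeled reference slice are pooled into a foreground prototype, matched against a query slice to derive a point prompt, and SAM then segments around that point. It assumes the official segment_anything package, a downloaded ViT-B checkpoint (sam_vit_b_01ec64.pth), and MRI slices converted to RGB uint8 arrays; the function name and inputs are illustrative.

# Illustrative training-free, few-shot prompting of SAM (not the authors' method).
import numpy as np
import torch
import torch.nn.functional as F
from segment_anything import sam_model_registry, SamPredictor

def segment_query_with_reference(ref_img, ref_mask, query_img,
                                 checkpoint="sam_vit_b_01ec64.pth"):
    # ref_img, query_img: (H, W, 3) RGB uint8 arrays; ref_mask: (H, W) binary array.
    sam = sam_model_registry["vit_b"](checkpoint=checkpoint)
    predictor = SamPredictor(sam)

    # 1) Encode the labeled reference slice and pool its foreground features
    #    into a single prototype vector.
    predictor.set_image(ref_img)
    ref_feat = predictor.get_image_embedding()[0]              # (256, 64, 64)
    mask_small = F.interpolate(
        torch.from_numpy(ref_mask[None, None].astype(np.float32)),
        size=ref_feat.shape[-2:], mode="nearest")[0, 0]        # (64, 64)
    fg = ref_feat[:, mask_small > 0.5]                         # (256, N_fg)
    prototype = F.normalize(fg.mean(dim=1), dim=0)             # (256,)

    # 2) Encode the query slice and find the embedding cell most similar
    #    to the reference prototype (cosine similarity).
    predictor.set_image(query_img)
    q_feat = F.normalize(predictor.get_image_embedding()[0], dim=0)
    sim = torch.einsum("c,chw->hw", prototype, q_feat)         # (64, 64)
    y, x = np.unravel_index(int(sim.argmax()), sim.shape)

    # 3) Map the grid cell back to pixel coordinates (approximate; ignores
    #    SAM's internal resize padding) and use it as a positive point prompt.
    h, w = query_img.shape[:2]
    point = np.array([[x * w / sim.shape[1], y * h / sim.shape[0]]])
    masks, scores, _ = predictor.predict(
        point_coords=point, point_labels=np.array([1]), multimask_output=True)
    return masks[scores.argmax()]                               # best candidate mask

This sketch uses a single reference slice and a single point prompt; few-shot variants could pool prototypes over several labeled examples or pass multiple matched points to the predictor.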