Keywords: Segmentation, Deep Learning, Digestive
Motivation: In abdominal MRI segmentation tasks, the need for high-quality support information for Segment Anything Model (SAM)-driven segmentation in limited data scenarios has motivated the search for an architecture with high performance and minimal support information requirements.
Goal(s): Our objective is to design a user-friendly segmentation architecture that relies only on support information from within the region of interest, and to verify that it achieves high performance.
Approach: We developed Point-Guided 3D U-SAM, combining SAM and 3D U-Net with point-based support input. We compared its segmentation performance with existing methods.
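The abstract does not specify how the point-based support input is fed to the network; as an illustrative sketch only (all function and variable names here are hypothetical, not from the paper), one common way to supply a clicked point to a 3D U-Net-style encoder is to encode it as a Gaussian heatmap channel stacked with the image volume:

```python
import numpy as np

def point_prompt_heatmap(shape, point, sigma=2.0):
    """Encode a user-clicked point as a 3D Gaussian heatmap prompt.

    shape: volume dimensions (D, H, W); point: (z, y, x) voxel coordinate.
    Hypothetical helper -- the actual Point-Guided 3D U-SAM encoding may differ.
    """
    zz, yy, xx = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    d2 = (zz - point[0]) ** 2 + (yy - point[1]) ** 2 + (xx - point[2]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Stack the heatmap with the MRI volume as an extra input channel, so the
# segmentation network sees the point guidance alongside image intensities.
volume = np.random.rand(16, 32, 32).astype(np.float32)   # toy MRI volume
prompt = point_prompt_heatmap(volume.shape, (8, 16, 16))
net_input = np.stack([volume, prompt], axis=0)           # shape (2, 16, 32, 32)
```

The heatmap peaks at 1.0 on the clicked voxel and decays smoothly, giving the network a spatially localized, differentiable-friendly cue rather than a single hard voxel label.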
Results: The model achieved high segmentation performance on abdominal MRI across varying contrast levels.
Impact: Point-Guided 3D U-SAM, combining SAM and 3D U-Net with point-based inputs, could advance semi-automated organ segmentation in abdominal imaging, particularly where contrast is poor, such as MRCP, significantly reducing manual effort in clinical segmentation.