Abstract #0094

Multi-contrast MR-driven deep learning for abdominal multi-organ segmentation (McDAMOS)

Pengcheng Wang1,2, Dan Ruan3,4, Junzhou Chen1,4, Jiayu Xiao1, Diane Ling5, Lijun Ma5, Wensha Yang6, and Zhaoyang Fan1,2,5
1Department of Radiology, University of Southern California, Los Angeles, CA, United States, 2Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, United States, 3Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, CA, United States, 4Department of Bioengineering, University of California, Los Angeles, Los Angeles, CA, United States, 5Department of Radiation Oncology, University of Southern California, Los Angeles, CA, United States, 6Department of Radiation Oncology, University of California, San Francisco, San Francisco, CA, United States

Synopsis

Keywords: Analysis/Processing, Segmentation

Motivation: Efficient and accurate contouring of abdominal organs-at-risk (OARs) is crucial for MR-guided radiotherapy planning and online adaptation but remains challenging due to complex anatomy. Multi-contrast MR may be leveraged to achieve automated multi-organ segmentation.

Goal(s): To develop a multi-contrast MR-driven DL technique for abdominal multi-organ segmentation.

Approach: Our model builds on a 3D Swin Transformer architecture with T1w and T2w dual inputs. Pre-training on a larger T1w dataset and synthesized T2w images addressed limited data. A VAE-based loss for organ shape learning was incorporated.
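The sketch below illustrates, under stated assumptions, how such a dual-contrast setup could be assembled: MONAI's SwinUNETR stands in for the 3D Swin Transformer backbone, T1w and T2w volumes are stacked as two input channels, and a small encoder with VAE-style latent heads supplies an illustrative shape-consistency term. The organ count, patch size, loss weighting, and the exact form of the shape loss are hypothetical and not taken from the abstract.

```python
# Minimal sketch (assumptions: MONAI SwinUNETR as the Swin backbone,
# 9 abdominal OAR labels, 96^3 patches, and a latent-distance shape term
# as a stand-in for the authors' VAE-based loss).
import torch
import torch.nn as nn
from monai.networks.nets import SwinUNETR
from monai.losses import DiceCELoss

NUM_ORGANS = 9          # assumed number of abdominal OAR labels (illustrative)
PATCH = (96, 96, 96)    # assumed training patch size

# T1w and T2w volumes enter as two channels of one input tensor.
net = SwinUNETR(
    img_size=PATCH,
    in_channels=2,
    out_channels=NUM_ORGANS,
    feature_size=48,
)

class ShapeEncoder(nn.Module):
    """Small convolutional encoder with VAE-style mean/log-variance heads.

    Only the latent means are compared here; a full VAE formulation would
    also include a decoder, reconstruction, and KL terms.
    """
    def __init__(self, channels: int, latent_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.to_mu = nn.Linear(32, latent_dim)
        self.to_logvar = nn.Linear(32, latent_dim)

    def forward(self, x):
        h = self.encoder(x)
        return self.to_mu(h), self.to_logvar(h)

seg_loss = DiceCELoss(softmax=True, to_onehot_y=True)
shape_enc = ShapeEncoder(channels=NUM_ORGANS)

def total_loss(logits, label, lam=0.1):
    """Dice+CE segmentation loss plus a latent shape-consistency term."""
    base = seg_loss(logits, label)  # label shape: (B, 1, H, W, D)
    probs = torch.softmax(logits, dim=1)
    onehot = torch.nn.functional.one_hot(
        label.squeeze(1).long(), NUM_ORGANS
    ).permute(0, 4, 1, 2, 3).float()
    mu_pred, _ = shape_enc(probs)
    mu_ref, _ = shape_enc(onehot)
    return base + lam * torch.mean((mu_pred - mu_ref) ** 2)
```

In practice, pre-training as described in the abstract would first fit the backbone on the larger T1w-only dataset (with the second channel filled by synthesized T2w images) before fine-tuning on the paired multi-contrast data; the snippet above only shows the dual-input model and combined loss.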

Results: Multi-contrast inputs, pre-training, and VAE loss all contributed to improved segmentation performance, especially for challenging organs like the duodenum.

Impact: Our work demonstrates the utility of multi-contrast MR in achieving abdominal auto-segmentation and presents a methodology to address the limited data available from a novel research MR sequence. The approach benefits clinicians and propels automated segmentation techniques forward.

